Some HTML applications have an overlay div at the top and/or bottom of the browser window. This causes a problem for elements that have ScrollToVisible selected: Test Studio unconditionally scrolls the element to the top of the browser window, which leaves the element hidden behind the overlay div. The current way to handle this scenario is rather complex: 1) add an extra step that scrolls the element above the target element to the top, and 2) convert the real click step to code and comment out the ScrollToVisible line. This is neither intuitive nor obvious. Can't we come up with a better solution for this scenario? See the video attached to the internal feature request for a demonstration of a real customer application that exhibits this problem.
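One possible built-in fix would be an overlay-aware scroll that offsets the target position by the height of the fixed header instead of pinning the element to the very top. A minimal sketch of the idea in plain JavaScript (the helper name and the fixed 60px overlay height in the usage comment are illustrative assumptions, not Test Studio API):

```javascript
// Compute the scroll position that places an element just below a fixed
// top overlay instead of at the very top of the viewport, where the
// overlay would cover it. (Illustrative helper, not Test Studio API.)
function overlayAwareScrollTop(elementTop, topOverlayHeight) {
  // Never scroll past the top of the document.
  return Math.max(0, elementTop - topOverlayHeight);
}

// In a browser this could be applied roughly as:
//   const top = el.getBoundingClientRect().top + window.scrollY;
//   window.scrollTo(0, overlayAwareScrollTop(top, 60));
```

If ScrollToVisible accepted such an offset (or an "overlay element" to measure), the extra scroll step and the coded-step workaround would both become unnecessary.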
It would be nice if we could set up parent test lists that can have child test lists added to them. I am requesting this because we are working on test cases that switch between applications. According to Telerik support, we need to structure these test cases as test lists: the steps to execute against each app get their own automated tests, which are placed in execution order in a test list (test 1 runs against the first app, test 2 against the second app, test 3 against the first app again, and so on). Because this is the currently provided solution, our test lists are difficult to set up and we end up with a large number of them. If we could set up these "test lists" (which are really test cases composed of individual tests executed in a specific order) as child test lists that can be called by a parent test list, the process would be much easier to manage. For example, we could have a release-based parent test list that includes all of these test cases as child test lists.
When running a performance test with iterations in a test list, performance data is gathered and compared for the test as a whole, which makes it hard to compare the results of a specific step across different iterations. It would be helpful to have an iteration-information column when showing performance results in a test list, along with column filtering so that only a specific step can be shown.
When the project is checked out from outside Test Studio, you have to close and reopen the project for the TFS status to update. Please add that functionality to the Refresh button.
When a TFS-connected project is up to date, editing a test list shows the project as checked out.
We have 3 test lists that differ only by BaseUrl. After executing them, when we open the 'Performance: History view grid' for a specific test we see a lot of test run results, but it is not clear which run was made on which environment, because there is no such info in the grid. We would like to see the BaseUrl in that grid.
There should be a way to set the "RunsAgainstVersion" property for an entire test or project, as we can for a specific step.
Currently we are able to filter the data only by a specific range of rows. The grid filters are not applied during execution, and there is no other way to filter the data except by row range. We want to be able to filter the data by specific content (e.g. text content).
This feature would be very useful to determine which data set and binding properties to use for tests at the list level.
The title of this feature request is sufficient to describe the desired functionality.
The reports provide extensive information concerning failed test steps. I would like to request a more elaborate presentation of the reports (e.g. the option to include screenshots of all execution steps), as well as the option to adjust the layout of the report instead of just showing the percentage of tests that passed during a scheduled run.
Once a test step fails during execution, several artifacts are presented to give a clear understanding of why the step failed and the circumstances in which the failure occurred. To this end two images are included; however, these images are often small and too unclear to identify subtle differences. I would like to request some kind of zoom, enlarge, or export function for these images in order to clearly determine what went wrong during execution.
There needs to be an "in your face" dialog that says you are about to check out the Project file (AIIS) and that, as such, you may be affecting other users' work. I often notice that I have this file checked out but don't know what caused it or how long I've had it checked out. A modal dialog presented when it happens would help me understand what action I took that caused the file to be checked out. Furthermore, it would prompt me to make such changes quickly so that the file can be checked back in as soon as possible.
I have used coded steps in a large number of tests, and the common theme I noticed is that the coded step uses a dynamic value such as DateTime.Today.ToShortDateString(). When I initially record a step, the date is captured as a fixed string, and the step plays back with that same string; dynamic values like DateTime.Today give me the ability to test the system further. One option relies on SetExtractedVariable(), but that still requires writing code. The options I have in mind are: today, days from today, months from today, and years from today; other users may like similar time options as well. While it is easy enough to write a coded step, it would be extremely helpful to have a quick setup for using dates instead of converting a step to code, perhaps similar to how test steps can be bound against a specific data source.
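The relative-date options described above boil down to simple date arithmetic that a quick-setup UI could expose. A rough sketch of the idea in JavaScript (the real coded step would use .NET's DateTime API; the helper name and signature here are illustrative assumptions):

```javascript
// Illustrative helper: format "today plus N days/months/years" as a
// locale short date string, the kind of value a quick date-binding
// option could feed into a step instead of a recorded fixed string.
function relativeDate(days = 0, months = 0, years = 0) {
  const d = new Date();
  d.setFullYear(d.getFullYear() + years);
  d.setMonth(d.getMonth() + months);
  d.setDate(d.getDate() + days);
  // Plays the role of DateTime.ToShortDateString() in the .NET version.
  return d.toLocaleDateString();
}
```

For example, `relativeDate()` yields today's date and `relativeDate(7)` yields the date one week from today, so a re-recorded test would no longer break as the calendar moves on.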
Please add the ability to use an extracted variable in find logic without having a data source attached. Currently, an extracted variable in the find logic does not work until you attach a data source to the test.