It would be useful to be able to schedule a test list to run between two times (for instance, when a system is only available during that window). For example, I would like to run test list A hourly between 9am and 6pm.
Sometimes it would be nice to call a custom function before or after each individual test step. Adding "on before test step" and "on after test step" hooks to the execution extension would be useful here. The main use would be in the "on after test step" hook: waiting for a global value or variable to be set, or for a condition to become true, before moving on to the next test step. Source ticket 993986
If you delete files or folders from your test suite using the Test Studio standalone IDE, the Visual Studio plug-in will not detect the deleted files/folders when you compile your project, so you must delete them there as well.
The HtmlOrderedList and HtmlUnorderedList classes have extended properties that would be useful in quick steps, specifically Items and AllItems. There is value in being able to add a quick verification for Items.Count. Currently there is no translator in the recorder for these classes. They can be used in code, but it's not obvious or clear how to find an element and cast it to the correct type. It gets even more confusing if you use Add Element in the recorder, because the element is added as an "HtmlControl": you have to cast it first to a BaseElement and then to an HtmlOrderedList before you can take advantage of the extended properties.
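For context, the code-only workaround looks roughly like the sketch below. This is a hedged example written from memory against the ArtOfTest.WebAii API, not verified against a specific Test Studio version; the element ID and the expected count are made up for illustration:

```csharp
using ArtOfTest.WebAii.Controls.HtmlControls;

// Inside a coded step: use a typed Find call so the extended
// properties (Items, AllItems) are available without manual casting.
// "resultsList" is a hypothetical element ID.
HtmlOrderedList orderedList = ActiveBrowser.Find.ById<HtmlOrderedList>("resultsList");

// Quick verification of the item count. Items covers top-level items;
// AllItems would also include items from nested lists.
Assert.AreEqual(5, orderedList.Items.Count);
```

Having the recorder offer this as a quick verification would avoid the coded step (and the BaseElement cast dance) entirely.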
I would like to be able to give each test list its own description, just as tests have today in the Test Properties.
On the Results tab, for both the report and calendar views, it would be helpful to see what the BaseURL was for each test. We often run the same test list against different applications simply by changing the BaseURL value for that list, so it would be helpful to see that data at a glance in both views.
It would be nice to have a "fix all" button when the "Used by" context menu displays broken test-as-step references. We have many tests that are used as steps in hundreds of other tests, so if a reference gets messed up, fixing it means clicking OK hundreds of times.
Currently, if a data source is renamed, you have to manually update each test that is bound to that source. Since there is no easy way to see all the tests that are bound to a source, it would be nice for those bindings to update automatically.
When exporting a load test result to Excel, it would be nice if there were individual worksheets in the workbook corresponding to the individual server counters that are configured in the load test.
It would be VERY nice to be able to somehow export the “Analyze your results” area of a load test so that the graphical counter data can be viewed outside of Test Studio (maybe by the lightweight Telerik.TestStudio.ResultsViewer.exe file). This would make it much easier to share the high-level load test results with executives and other interested parties who are not users of Test Studio.
It would be nice to be able to import/export load test counter templates as used in the “Monitor Performance” section of a load test. This way we could easily update the templates used across multiple load tests and multiple Test Studio projects.
It would be VERY nice to be able to use a distinct data source per load test user profile rather than one query for the entire load test. We have a scenario where we would like to be able to test variations of load based on percentage of user types that are using the application. Each user type would be a separate user profile in the load test and we could adjust the workload accordingly. In order for the load test to use multiple logins and search terms, we would need each user profile to be data bound to a different query or data source. This is because each user type accesses different dashboards in the application which may affect the performance differently, and we’d like to be able to see how different user type workload percentage configurations affect the load on the servers.
It would be nice to be able to include an existing test list as a selection when creating/editing a different test list. This would allow for simple lists to serve as the basis for more complex lists, making list changes more manageable and simplifying the creation of larger test lists.
It would be nice to be able to import multiple existing tests into a new project by folder/subfolder rather than one at a time. This way Test Studio could recreate a complex folder structure easily and import tests into the proper folder. Import should properly handle all "tests as step" scenarios so that the UniqueID values get updated accordingly.
It would be nice to be able to see all of the tests that use a data source. Currently there is no way, other than brute force, to determine which tests might be affected by a change to a data source.
When drilling down the results of a test list to a failing test, it would be nice to be able to go directly to the test editor for a test rather than having to go back to the Tests tab and traverse the folder structure in order to locate and edit the desired test.
When viewing the top level results of a test list, it would be nice to be able to view the log of all of the tests in the list. As it is now, I have to view the log of each individual test and compile them manually into one large log file in order to parse that test log for data.
Currently, when scheduled tests don’t run properly (which none of them do for us at the moment, but that is a different story), I have to get the job ID and delete the job using Fiddler. I’ve tried deleting the schedule in the Results section of Test Studio and also tried deleting the files from the C:\Windows\SysWOW64\config\systemprofile\AppData\... folder, to no avail; reloading the test results on the Results tab would continually repopulate the timeline with the scheduled list. Once I deleted the jobs using Fiddler it was all good, but there needs to be a way to do this in the application. It would also be nice to be able to edit previously scheduled jobs if something changes, rather than having to delete and recreate them.
Some HTML applications have an overlay div at the top and/or bottom of the browser window. This causes a problem for elements that require ScrollToVisible: Test Studio unconditionally scrolls the element to the top of the browser window, which leaves the element hidden behind the overlay div. Currently the way to handle this scenario is a bit complex: (1) add an extra step to scroll the element above the target element to the top, then (2) convert the real click step to code and comment out the ScrollToVisible line of code. This is not at all intuitive or obvious. Can't we come up with a better solution to handle this scenario? See the video added to the internal feature request for a demonstration of a real customer application that exhibits this problem.
It would be nice if we could set up parent test lists that can have child test lists added to them. I am requesting this because we are working on test cases that switch between applications. According to Telerik support, we need to set these test cases up like test lists: the steps to execute against each app get their own automated tests, which are put in execution order in a test list (test 1 executed against the first app, test 2 against the second app, test 3 against the first app again, etc.). Since this is the solution currently provided, it is difficult to set up our test lists and we end up with a large number of them. If we could set up these "test lists" (which are really test cases, i.e. individual tests executed in a specific order) as child test lists that can be called by a parent test list, this process would be easier to manage. It would allow us to have something like a release-based parent test list that includes all of these test cases as child test lists.