Steps to reproduce: 1. Using the attached project, execute the existing test list. Test 0 contains a test-as-step with StopTestListOnFailure enabled. While the feature works fine when the test runs directly in the list, it does not work when the test is included as a test-as-step.
Please add the ability to create and schedule Test Studio API and Test Studio Mobile test lists.
When adding additional steps to an existing test, they appear at the beginning or the end of the test. In web testing, new steps are inserted below the selected step (often the "run to here" step), and I would love this behavior in mobile testing as well. It would save a huge amount of the time I currently spend finding and moving test steps.
When Daylight Saving Time ended on November 6, our scheduled tests started running an hour earlier than scheduled. I had to "edit" each schedule (without actually changing any settings) and save it in order for the time in the "TimeToRun" setting in the job details file to adjust for the time change. It would be nice if this happened automatically, or perhaps a right-click option could be added for updating the time.
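A minimal sketch of the fix being asked for, assuming the scheduler stores a wall-clock time plus a timezone rather than a fixed UTC offset (the function, zone, and dates below are illustrative, not Test Studio internals):

```python
# Hypothetical scheduler helper: resolving the wall-clock "TimeToRun" to
# UTC at dispatch time means the offset is re-evaluated after every DST
# transition, so no manual re-save of the schedule is needed.
from datetime import datetime, date, time, timezone
from zoneinfo import ZoneInfo

LOCAL_TZ = ZoneInfo("America/New_York")  # assumption: schedule owner's zone

def next_run_utc(run_date: date, time_to_run: time) -> datetime:
    """Combine the stored local time with its zone, then convert to UTC."""
    local = datetime.combine(run_date, time_to_run, tzinfo=LOCAL_TZ)
    return local.astimezone(timezone.utc)

# 10:00 local maps to different UTC instants before and after the Nov 6
# DST transition, which is exactly the adjustment requested above.
print(next_run_utc(date(2016, 11, 4), time(10, 0)))  # 14:00 UTC (EDT)
print(next_run_utc(date(2016, 11, 7), time(10, 0)))  # 15:00 UTC (EST)
```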
When drilling down into the results of a test list to a failing test, it would be nice to be able to go directly to the test editor for that test, rather than having to go back to the Tests tab and traverse the folder structure to locate and edit it.
There should be a way to run only selected test cases from a test list. If a few test cases fail in a list, we should be able to select those test cases and rerun only them.
It would be nice to be able to include an existing test list as a selection when creating/editing a different test list. This would allow for simple lists to serve as the basis for more complex lists, making list changes more manageable and simplifying the creation of larger test lists.
Currently, when you run a test through "Run From Here", "Run to Here", or "Run Selected", it does not stop at breakpoints.
When we generate our reports, Test Studio only indicates the number of test cases that were executed. The problem is that when we use data-driven testing, the number of scenarios executed can be much higher than the number of test cases. For example, if we have 10 test cases in our list, but each test case runs through 10 iterations, Test Studio will only report that 10 test cases have been run, not the 100 iterations. This makes getting an accurate report of our testing very difficult. If one iteration fails inside a test case, the entire test case is considered a failure.
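A hedged sketch of the iteration-level rollup being requested; the result structures here are hypothetical, not Test Studio's reporting API:

```python
# Each data-bound iteration is recorded individually, so the summary can
# report both test-case and iteration counts.
from dataclasses import dataclass

@dataclass
class IterationResult:
    test_case: str
    iteration: int
    passed: bool

def summarize(results: list[IterationResult]) -> str:
    cases = {r.test_case for r in results}
    passed = sum(r.passed for r in results)
    return (f"{len(cases)} test cases, {len(results)} iterations executed, "
            f"{passed} iterations passed, {len(results) - passed} failed")

# 10 test cases x 10 iterations = 100 iterations, not 10 "tests";
# here one iteration fails in every case, as in the scenario above.
results = [IterationResult(f"TC{c}", i, passed=(i != 3))
           for c in range(10) for i in range(10)]
print(summarize(results))
# -> 10 test cases, 100 iterations executed, 90 iterations passed, 10 failed
```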
As part of the RunResults email that is sent after a scheduled test list runs, I would like to be able to include the test owner in the failure information. We have multiple people developing scripts, and it would be nice for everyone to be able to glance at the email and see whether any of their scripts are failing.
Many customers have custom templates and required custom fields that they wish to populate with the data provided by Test Studio. We have achieved the same for Jira custom fields (http://docs.telerik.com/teststudio/features/integration/bug-tracking/jira-custom-fields) and for TFS custom fields (http://docs.telerik.com/teststudio/features/integration/bug-tracking/tfs-custom-fields), so hopefully this is possible. Thank you!
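For reference, Jira's REST API shows the shape such a mapping could take; in this sketch the host, project key, and customfield id are placeholders, and it is not how Test Studio's integration is actually implemented:

```python
# Creating a bug with a populated custom field via Jira's REST API.
import json
import urllib.request

payload = {
    "fields": {
        "project": {"key": "QA"},                # placeholder project
        "summary": "Test Studio: login test failed",
        "issuetype": {"name": "Bug"},
        # Jira exposes custom fields as customfield_<id>; mapping a
        # Test Studio result value onto such an id is what is requested.
        "customfield_10010": "Executed by Test Studio run #42",
    }
}
req = urllib.request.Request(
    "https://jira.example.com/rest/api/2/issue",   # placeholder host
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # a real call would also need auth headers
```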
Currently, the selected browser opens automatically before the first step is executed. If there were a way to prevent this from occurring until after a pre-test script has executed, that would be a HUGE plus. QTP, for example, allows you to just execute a script without requiring the browser to be open, and you later call InvokeApplication to start the web-based scripting. For instance, if I want to change my hosts file to point to a specific DEV box, I need to execute a batch file to update my hosts; but if the browser is open first, it goes wherever it wants. The only workaround I've found is to execute the first cycle with a blank data row to skip to the next iteration of the script, then at the end clear the hosts file and move on to the next host.
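A sketch of the desired flow under the assumption that the pre-test script runs before any browser launch; the paths, batch script, and URL are made up for illustration:

```python
# Environment setup first, browser second.
import subprocess

# 1. Run the setup script while no browser is open yet, e.g. repoint the
#    hosts file at a specific DEV box (hypothetical script and argument).
subprocess.run([r"C:\scripts\update_hosts.bat", "dev-box-01"], check=True)

# 2. Only then launch the browser against the application under test,
#    analogous to QTP's InvokeApplication step.
subprocess.Popen([r"C:\Program Files\Mozilla Firefox\firefox.exe",
                  "https://myapp.example.com/login"])
```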
TagIndex is not effective in apps with changing semantics. The ability to affix more specific traversal cascades with pseudo-selectors would improve performance and specificity when targeting items in the DOM tree. Examples:
TagSemanticPath: 'is exactly' 'my-custom-element table tr td span'
TagSemanticPath: 'is exactly' 'my-custom-element div div span:last'
TagSemanticPath: 'is exactly' 'my-custom-element div table tr:nthChild(2n+1)'
TagSemanticPath: 'is exactly' 'div div div div table tr td td td td'
Reverse traversals would also be very nice to have.
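One way to read the proposal is as a thin translation onto standard CSS selectors; the mapping below is an assumption based on the examples above, not an existing Test Studio feature:

```python
# Translate the proposed TagSemanticPath pseudo-selectors into standard
# CSS selector syntax that existing engines can already evaluate.
import re

def to_css(semantic_path: str) -> str:
    css = semantic_path
    css = re.sub(r":last\b", ":last-child", css)
    css = re.sub(r":nthChild\(([^)]+)\)", r":nth-child(\1)", css)
    return css

for path in ["my-custom-element table tr td span",
             "my-custom-element div div span:last",
             "my-custom-element div table tr:nthChild(2n+1)"]:
    print(to_css(path))
# A standard CSS engine (querySelectorAll, lxml.cssselect, ...) can then
# resolve the translated selector against the DOM tree.
```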
After searching for a test, all of the folders are expanded. It would be nice to have the ability to expand and collapse the folders.
Add a search dialog to the datasource window. There are situations where a user might have up to 50 different datasources in a project and needs to find out whether a particular datasource has already been created for something; right now there is no folder structure within this view and no ability to search.
Include an option to stop a data-driven test on the failed iteration.
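A minimal sketch of the requested option with a generic data-driven loop; run_iteration, the rows, and the flag name are illustrative, not Test Studio's API:

```python
# Stop iterating over the data source as soon as one iteration fails.
rows = [{"user": "alice"}, {"user": "bob"}, {"user": "carol"}]

def run_iteration(row: dict) -> bool:
    """Placeholder for executing one data-bound iteration of the test."""
    return row["user"] != "bob"  # pretend the second row fails

stop_on_failed_iteration = True  # the option being requested

for i, row in enumerate(rows):
    if not run_iteration(row):
        print(f"iteration {i} failed on data {row}")
        if stop_on_failed_iteration:
            break  # remaining iterations are skipped
```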
Please consider providing an option to lease time to execute tests on cloud-hosted runtimes. I would love to be able to schedule tests to run against all of the browser/OS permutations I need to cover for an application. This should be provided much like a mobile device cloud, with a SaaS model that allows for both public cloud and private cloud options. This would eliminate a huge hurdle for organizations that are simply not allowed the level of access required to execute tests needing admin-type permissions. Additionally, customers would gain a better understanding of the value of the runtime licenses if they could easily use them without the headache of setting them up and maintaining them. Coupled with an option to lease load VUs, this would be very valuable as well.
The ability to load test applications based on the OpenID Connect 1.0 and OAuth 2.0 protocols. The request is to detect the dynamic targets used by such authentication.
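A sketch of the dynamic-target detection this implies, assuming per-session values such as state and nonce must be extracted and replayed rather than reused from the recording (the URLs and the <fresh-code> placeholder are illustrative):

```python
# Extract OAuth/OIDC values from a captured redirect and substitute them
# into the follow-up request, instead of replaying recorded literals.
from urllib.parse import urlparse, parse_qs

# Authorization redirect captured during recording (placeholder values):
redirect = ("https://idp.example.com/authorize?response_type=code"
            "&client_id=web-app&state=af0ifjsldkj&nonce=n-0S6_WzA2Mj")

params = parse_qs(urlparse(redirect).query)
# 'state' and 'nonce' change on every session, so a load test must treat
# them as dynamic targets to correlate, not as constants.
dynamic_targets = {k: params[k][0] for k in ("state", "nonce") if k in params}

# The extracted values are then carried into the next request:
callback = (f"https://app.example.com/callback?code=<fresh-code>"
            f"&state={dynamic_targets['state']}")
print(callback)
```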
Telerik Test Studio direct integration with Sourcegear and other major version control systems.