Add the ability to specify which execution servers run the test list when it is launched through the CommandLineClient. Currently you can only specify the scheduling server, and when the test list is executed it runs on all execution machines connected to that scheduling server.
Make tests that hang on agents fail after a configurable amount of time. This may be an existing setting that is not functioning correctly.
Currently, the selected browser opens automatically before the first step is executed. It would be a HUGE plus if there were a way to prevent this until after a pretest script has run. QTP works much the same way: you can execute a script without the browser open and later call InvokeApplication to start the web-based scripting. For instance, if I want to point my hosts file at a specific DEV box, I need to execute a batch file to update it. But if the browser opens first, it resolves to wherever the current hosts file says. The only workaround I've found is to run the first cycle with a blank data row so it skips to the next iteration of the script, and at the end clear the hosts file and move on to the next one.
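A rough sketch of the kind of pretest step intended here: swap in an environment-specific hosts file before the browser is launched. The paths, file names, and the `apply_hosts` helper are illustrative assumptions, not Test Studio APIs.

```python
# Hypothetical pretest step: copy an environment-specific hosts file over
# the active one *before* any navigation step opens the browser.
import shutil
from pathlib import Path

# Assumed Windows hosts location; adjust for the target machine.
HOSTS_PATH = Path(r"C:\Windows\System32\drivers\etc\hosts")

def apply_hosts(env_hosts: Path, hosts_path: Path = HOSTS_PATH) -> str:
    """Copy the given environment hosts file into place and return the
    text that was applied, so the pretest step can log it."""
    shutil.copyfile(env_hosts, hosts_path)
    return hosts_path.read_text()

# Usage (run before the first browser step):
#   apply_hosts(Path("hosts.dev"))      # point the site at the DEV box
#   ... execute the test ...
#   apply_hosts(Path("hosts.default"))  # restore afterwards
```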
There should be a way to run only selected test cases from a test list. If a few test cases fail in a list, we should be able to select just those and rerun them.
Make the test status box dockable so the user can still use the rest of Test Studio's functionality. Currently, while it is open you cannot interact with any other part of Test Studio. Screenshot attached.
We occasionally need to stop a test list in order to fix a specific test, and the list could be halfway through and 45 minutes in. It would be beneficial to be able to pick a starting point in a test list (with the default being the beginning) and run the list starting from the test you choose. That way, if you've run half your list and had to stop it for some reason, you can pick it back up from the next test in the list.
In the Test Studio license, the Reports menu provides nothing beyond what is pictured on the Documentation site under "Test Results > Reports". I want to be able to use the data in the database and analyze the results in different ways. For example: What is the total time taken for a specific test over time? What is the total data downloaded? How many non-fatal errors are generated while loading a page? How many images are loaded, and what is their size?
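As a sketch of the kind of custom analysis meant here, the snippet below totals execution time per test from raw result rows. The row shape `(test_name, run_date, duration_seconds)` is an assumption for illustration, not the actual Test Studio scheduling-database schema.

```python
# Hedged sketch: aggregate raw result rows (however they were pulled from
# the results database) into total execution time per test.
from collections import defaultdict

def total_time_by_test(rows):
    """rows: iterable of (test_name, run_date, duration_seconds) tuples."""
    totals = defaultdict(float)
    for test_name, _run_date, duration in rows:
        totals[test_name] += duration
    return dict(totals)

# Example rows (fabricated for illustration):
rows = [
    ("Login", "2024-01-01", 12.5),
    ("Login", "2024-01-02", 14.0),
    ("Checkout", "2024-01-01", 30.0),
]
# total_time_by_test(rows) -> {"Login": 26.5, "Checkout": 30.0}
```

The same pattern extends to bytes downloaded, image counts, or non-fatal error counts, given columns for those measures.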
Many verifications (e.g. verify visible) can be made data-driven, but verify exists cannot; it has to be fixed at true or false. Could you introduce a data-driven version of the exists verification? This would be useful for a test case that, repeated for a number of different user accounts, verifies that screen elements either exist or do not exist according to the user's permissions.
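A minimal sketch of the requested behavior: the expected existence comes from the bound data row instead of being hard-coded. `element_found` stands in for whatever the element search returns, and the column name `ShouldExist` is a hypothetical data-binding column, not a Test Studio identifier.

```python
# Data-driven "exists" verification: the pass/fail expectation is read
# from the data row, so one test case covers both permission scenarios.
def verify_exists(element_found: bool, data_row: dict) -> bool:
    expected = str(data_row["ShouldExist"]).strip().lower() in ("true", "1", "yes")
    return element_found == expected

# Per-account rows: the same step passes whether the element should be
# present (e.g. admin) or absent (e.g. guest).
#   verify_exists(True,  {"ShouldExist": "true"})   -> True
#   verify_exists(False, {"ShouldExist": "false"})  -> True
#   verify_exists(False, {"ShouldExist": "true"})   -> False
```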
Hey guys, I'm looking for a feature in Test Studio that would allow users to have multiple ramp up/down stages in the load/performance testing functionality. For example: start with X users, then gradually ramp up to Y users over the next N minutes; after those N minutes, maintain a constant load of Y users for the rest of the load test. To me, this is the one feature Test Studio is lacking compared to the other players in the load/performance testing industry. Please like if you agree that this would be a great feature for the Test Studio team to implement. Thanks all! Steve Charlton, T4G Limited
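The ramp-then-hold profile described above can be sketched as a simple function of elapsed time; the function name and parameters are illustrative assumptions, not part of any load-testing API.

```python
# Sketch of the requested profile: start at `start_users`, ramp linearly
# to `peak_users` over `ramp_minutes`, then hold the peak for the rest
# of the run.
def users_at(t_min: float, start_users: int, peak_users: int,
             ramp_minutes: float) -> int:
    """Concurrent users at minute t for a ramp-then-hold load profile."""
    if t_min <= 0:
        return start_users
    if t_min >= ramp_minutes:
        return peak_users
    frac = t_min / ramp_minutes  # fraction of the ramp completed
    return round(start_users + (peak_users - start_users) * frac)

# Example: start with 10 users, ramp to 100 over 20 minutes.
#   users_at(0, 10, 100, 20)  -> 10
#   users_at(10, 10, 100, 20) -> 55
#   users_at(30, 10, 100, 20) -> 100
```

Chaining several such stages back to back would give the multiple ramp up/down periods requested.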
The send results dialog that allows you to send an email notification upon test list completion only appears when scheduling tests for remote execution. I would like this feature to be available when test lists are run immediately using the "Run List" button/command.
Add the ability to run the browser as a different user than the Windows-authenticated one. Pressing Shift and right-clicking an application offers the option to run it as a different user; it would be useful if this could be automated with Telerik Test Studio.
Implement a project setting to define the Storyboard image quality (for example, detailed or basic), where detailed includes high-resolution images and basic keeps the current resolution.
This would be an "object oriented" use of test steps: the goal is to reuse test steps with different data inputs and website global settings for our website testing. Another benefit of setting the bind data outside the test step is that source control would be largely unaffected by modifications to the data in the bound table. We are considering using MS Access tables bound to specific test steps, deleting records and appending new ones prior to test step execution within a system test (some are simple functional tests; others are more complex system tests and setups). In our proposed implementation, a coded step would run before the test step to clear and append the table bound to it. Note that many of the test steps are bound to a table with a single record. Perhaps allow instances of a test to be inserted as steps in a test case with instance-specific data binding; this may not be ideal, but it should work.
Please provide the free "Testing Framework" assemblies via your NuGet feed. As of now, I require "ArtOfTest.WebAii.dll" for automation and testing of our WPF application, which includes "Telerik UI for WPF". I am already installing those UI components from the Telerik NuGet package.
Schedule test list: make email settings persist with the project. Currently you must fill out the attachment type, recipients, title, etc., and if you close the project these settings are lost. If you stay in the project without closing it, the settings are retained; they should persist with the project.
Please add an easy way to change the scheduling database through the UI. Currently, in order to change the database, you need to either reinstall Test Studio or edit the config files (not recommended).
Telerik does not support Test Studio assemblies on .NET Standard and does not have a test runner for .NET Core. Since all of our Azure DevOps and container agents run Linux, we will not be able to use the tool. Please add this feature to your product.
Schedule lists: cross-project scheduling of lists in bulk for regression. Currently we must go into each project and schedule each list one at a time. I would like a global view of the scheduled test lists.