On the Executive Dashboard, all columns are fixed-width. This leaves very little space for test and test list names and makes them difficult to read.
Please provide a way to resize the columns or otherwise view the full names easily.
When a test list is scheduled and the tests execute with the rerun failed tests option, there are only a limited number of possible outcomes:
1) a test may pass.
2) a test may fail once but pass the second time.
3) a test may fail twice.
4) a test may not run.
My suggestion: since these are the four possible results, report exactly these four counts.
When you say this:
Run Summary: 25 of 25 test(s) run; 22 passed, 3 failed, 0 not run.
there is ambiguity: it is not clear whether the 22 passes include tests that failed on the first attempt and passed on rerun. It could instead say:
Run Summary: (#) test(s) run, (#) passed, (#) failed but then passed, (#) failed twice, (#) not run.
If the summary is in the second form, there is no ambiguity.
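To make the difference concrete, here is a small sketch (in Python, with made-up data) of how the four outcome buckets could be tallied from per-test attempt histories and formatted into the unambiguous second form:

```python
from collections import Counter

# Hypothetical per-test outcomes after a run with "rerun failed tests" enabled.
# Each entry is the ordered list of attempt results for one test.
attempts = [
    ["pass"],          # passed on the first try
    ["fail", "pass"],  # failed once, passed on rerun
    ["fail", "fail"],  # failed twice
    [],                # never ran
]

def classify(results):
    """Map a test's attempt history to one of the four outcome buckets."""
    if not results:
        return "not run"
    if results[0] == "pass":
        return "passed"
    if results[-1] == "pass":
        return "failed then passed"
    return "failed twice"

def summarize(all_attempts):
    counts = Counter(classify(r) for r in all_attempts)
    ran = sum(1 for r in all_attempts if r)
    return (f"Run Summary: {ran} test(s) run; "
            f"{counts['passed']} passed, "
            f"{counts['failed then passed']} failed but then passed, "
            f"{counts['failed twice']} failed twice, "
            f"{counts['not run']} not run.")

print(summarize(attempts))
# Run Summary: 3 test(s) run; 1 passed, 1 failed but then passed, 1 failed twice, 1 not run.
```

Every test falls into exactly one bucket, so the report cannot hide a rerun behind a plain "passed" count.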
It would be helpful to be able to extract the currently selected value of a RadDropDownList so that it can be compared to the expected value. This is commonly needed to verify that a selection was successful.
Please consider adding such extract steps for other similar controls.
Once you connect to a pop-up window, you cannot switch back to the previous window until the pop-up is closed. Some test scenarios would benefit from the ability to switch back and forth.
Please consider this feature for future releases.
Currently, when a step in an API test fails, the whole test stops.
Please add a continue-on-failure option for steps in API projects.
It would be nice if there were a way to avoid simulating real typing for a search box. There is no option to enable or disable it, but the search box is clearly using this behavior. I've tried a workaround of entering text directly into the input element, but the text doesn't seem to be registered when this technique is used. I don't see why a textbox can be made to work without this behavior but a search box cannot.
Simulating real typing tends to be the most fragile part of our tests, and all we really need is to enter text and then search. It also slows down the tests considerably compared to just setting the text directly.
Test runs that contain Not Completed tests shouldn't list the overall result as Pass or Fail; rather, the overall run should be marked "Not Complete".
In addition, when loading not completed tests within test runs, they are shown as failed, with no background and white text. The run is listed as Pass if there are no failures but there are "Not Complete" tests.
Recommendation: add Not Complete as an overall run result, in addition to flagging individual tests that are not in a passed or failed state.
Currently Test Studio uploads only the necessary tests and resources to the storage server, but when test execution starts on the execution server, Test Studio downloads everything available on the storage server.
Please check how this can be optimized so that only the resources necessary for the test execution are downloaded.
Conditional statements (if, while, and the like) should be able to use an extracted value as a condition.
Support for multiple conditions would also be very useful in some cases.
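A minimal sketch of the requested behavior, with made-up names (the `extracted` dictionary stands in for values that extraction steps would store; none of this is Test Studio API):

```python
# Hypothetical extracted values, as an extraction step might store them.
extracted = {"item_count": "12", "status": "Ready"}

def condition_met(values):
    # Multiple conditions on extracted values, combined with "and":
    # proceed only when the grid has rows AND the status text matches.
    return int(values["item_count"]) > 0 and values["status"] == "Ready"

if condition_met(extracted):
    next_step = "click-export-button"  # branch taken when both conditions hold
else:
    next_step = "log-and-skip"

print(next_step)  # click-export-button
```

Today this kind of branching requires a coded step; the request is to express it directly in the recorded conditional steps.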
I think it would be neat to be able to view and edit markdown files within a project in the Test Studio application: seeing a markdown file in the project list and having it open in an editor tab so you can review it and make edits.
For example, I create markdown files that contain high-level summaries of what each test does, or potentially helpful debugging information for a particular test.
I was going through the iOS and Android api-reference documents (https://docs.telerik.com/teststudio/test-studio-mobile/api-reference/ios), and I was wondering whether there is a method to get the row count of a UITableView or of any other collection.
This would make it much easier to loop through the TableView dynamically and interact with the elements inside it.
I noticed the Hybrid api-reference document has a method called "getRowCount", but it works with Web:
uint rowCount = this.ActiveDevice.Web.GetRowCount(Elements.MyScreen.WebView.TableView);
Is there anything similar for iOS and Android?
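To illustrate why a row-count API matters, here is a sketch of the traversal pattern it would enable. The `TableView` class below is a stand-in written for this example, not the real Test Studio mobile API:

```python
class TableView:
    """Stand-in for a native table/collection element (NOT Test Studio API)."""
    def __init__(self, rows):
        self._rows = rows

    def row_count(self):
        # The requested GetRowCount equivalent for native collections.
        return len(self._rows)

    def cell_text(self, index):
        return self._rows[index]

table = TableView(["Alice", "Bob", "Carol"])

# With a row count available, a test can traverse every row dynamically
# instead of hard-coding indices that break when the data changes.
found = [table.cell_text(i) for i in range(table.row_count())]
print(found)  # ['Alice', 'Bob', 'Carol']
```

Without such a method, the loop bound has to be guessed or hard-coded, which is exactly what dynamic traversal is meant to avoid.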
There are user scenarios in which load-testing http requests can be data driven with predefined, known data. In such a case there is no need for a dynamic target that takes data from a previous http call to pass the necessary values.
It would be useful to add the ability to directly data drive the http request parameters.
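A sketch of what direct data driving could look like: each row of a predefined data set produces one parameterized request, with no dynamic extraction step in between. The endpoint and column names here are invented for illustration:

```python
import csv
import io
from urllib.parse import urlencode

# Predefined, known data — the kind a user would attach to the load test.
data = io.StringIO("user_id,region\n101,eu\n102,us\n")

# Bind each data row directly to the http request's query parameters.
requests_to_send = []
for row in csv.DictReader(data):
    url = "https://example.test/api/orders?" + urlencode(row)
    requests_to_send.append(url)

print(requests_to_send)
# ['https://example.test/api/orders?user_id=101&region=eu',
#  'https://example.test/api/orders?user_id=102&region=us']
```

The point of the request is that this binding should be configurable in the load test itself, rather than requiring a dynamic target fed by a previous call.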
Please add the ability to edit multiple test lists' settings at the same time, and to create a new test list with predefined settings.
This will make it easier to maintain my project.
Currently the results view in Test Studio only shows results from the Results folder in the project. Please add an option to change this path to a different folder, or to navigate through the results in subfolders.