I'm on the TRIAL with 8 days left trying to understand how this program will help me! I'm trying to run the DemoTests: so far they are failing!
I've run the tests:
--for Test #3 - Filling Out and Submitting a Form
--CalculatorDesktopExample
NEITHER WORKS! I also need to see them run in very slow, step-by-step mode (like F8 in VBA) so that I can understand what they are doing. I don't see this option! This isn't very helpful for a new user without any training!
The Kendo Angular MultiSelect control is not covered by the built-in translators.
It would be very useful and consistent if such a translator were added to Test Studio.
A picture is worth a thousand words...
When the Scheduling server is not added during installation, the Configure button in Test Studio is greyed out (disabled). It would be helpful to add a tooltip or note explaining why.
The solution is to modify the existing installation of Test Studio and add the Scheduling Server feature to the installation.
We need to be able to verify the text in dialogs across different browsers.
The current solution is code-based, but it is not stable because changes in browser structure break it: https://docs.telerik.com/teststudio/advanced-topics/coded-samples/html/verify-dialog-text-chrome
HTTP responses with content-encoding 'br' (Brotli) cannot be decoded in Test Studio load testing. As a result, these responses cannot be used to generate the dynamic targets needed for a proper load test run.
Workaround: modify the traffic for a load user profile by removing the 'br' encoding type. A third-party Chrome extension can be used for this modification.
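To illustrate the workaround, the idea is to strip 'br' from the request's Accept-Encoding header so the server falls back to gzip or deflate, which Test Studio can decode. A minimal Python sketch (the header-dictionary shape is an assumption for illustration, not a Test Studio API):

```python
def strip_br_encoding(headers):
    """Remove 'br' from the Accept-Encoding header so the server
    never responds with Brotli-compressed content."""
    value = headers.get("Accept-Encoding", "")
    # Drop any token whose encoding name is 'br' (ignoring ;q= weights).
    kept = [enc.strip() for enc in value.split(",")
            if enc.strip().split(";")[0] != "br"]
    result = dict(headers)
    result["Accept-Encoding"] = ", ".join(kept)
    return result

# Example: a captured request advertising Brotli support
original = {"Accept-Encoding": "gzip, deflate, br"}
print(strip_br_encoding(original)["Accept-Encoding"])  # gzip, deflate
```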
When one needs to regularly reconfigure a very high volume of Custom Dynamic Targets across different user profiles, the GUI approach is very slow and inconvenient.
Would it be possible to implement a feature whereby a file can be uploaded to a User Profile in order to automatically configure frequently used Custom Dynamic Targets? Or to somehow transfer the desired set of targets from one profile to another test or profile?
Please add a custom goal to help analyze load test result data by User Profile. The requirement is to find the "Average time for completion" of each executed user profile.
P.S. Running performance tests while the application server is loaded is not a sufficient metric for this scenario.
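For concreteness, the requested metric amounts to grouping completed iterations by user profile and averaging their durations. A minimal Python sketch (the data shape and profile names are invented for illustration, not taken from Test Studio's result format):

```python
from collections import defaultdict

def average_completion_by_profile(iterations):
    """iterations: list of (profile_name, completion_seconds) pairs
    from a finished load test run (hypothetical structure)."""
    totals = defaultdict(lambda: [0.0, 0])  # profile -> [sum, count]
    for profile, seconds in iterations:
        totals[profile][0] += seconds
        totals[profile][1] += 1
    return {p: s / n for p, (s, n) in totals.items()}

runs = [("Buyer", 12.0), ("Buyer", 8.0), ("Admin", 30.0)]
print(average_completion_by_profile(runs))  # {'Buyer': 10.0, 'Admin': 30.0}
```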
Implement the option of passing multiple custom dynamic targets to the same Post Data array.
Currently, this scenario is not supported in Test Studio: the array is a JSON array, and the workaround of copying the entire array into the Destination Field Name and using the prefix{value}suffix method allows altering one of the values, but not both.
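To show what the requested behavior amounts to: each dynamic target should be able to replace its own element of the JSON Post Data array, instead of one target having to rewrite the whole array. A minimal Python sketch (the index-based mapping is illustrative only, not the Test Studio mechanism):

```python
import json

def apply_targets(post_data, replacements):
    """Replace individual elements of a JSON array request body.
    replacements maps array index -> new dynamic value (illustrative)."""
    array = json.loads(post_data)
    for index, value in replacements.items():
        array[index] = value  # each target touches only its own slot
    return json.dumps(array)

body = '["old-token", "old-session"]'
# Elements 0 and 1 are replaced independently by two targets.
print(apply_targets(body, {0: "tok-123", 1: "sess-456"}))
```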
Telerik Test Studio has options to Add a data source, Bind to an existing one, or Unbind from a data source.
However, the VS Test Studio plug-in only has Add and Unbind.
It would be nice to be able to do everything within Visual Studio.
When it comes to automated API testing, it would be useful to additionally support the generated output file in a human-readable format (Markdown/HTML).
We (the dev/QA team) would like to attach the file to the story as documentation of the test case, so that the product manager or other colleagues (without license access) can easily review the covered cases.
The current XML output (sample attached) already provides a good overview and could be extended with background information in some places (for example <action>).
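As a rough illustration of what such a conversion could look like, here is a Python sketch that turns a hypothetical result XML into a Markdown summary. The element and attribute names below are invented for the example; only the <action> element is mentioned in the actual output:

```python
import xml.etree.ElementTree as ET

def xml_report_to_markdown(xml_text):
    """Convert an API-test result XML (hypothetical layout) into a
    Markdown summary that colleagues without a license can read."""
    root = ET.fromstring(xml_text)
    lines = [f"# {root.get('name', 'API test run')}"]
    for action in root.iter("action"):
        status = action.get("status", "unknown")
        lines.append(f"- **{action.get('name')}**: {status}")
    return "\n".join(lines)

sample = """<testRun name="Orders API">
  <action name="GET /orders" status="passed"/>
  <action name="POST /orders" status="failed"/>
</testRun>"""
print(xml_report_to_markdown(sample))
```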
When a test list is scheduled and the tests execute with the rerun failed tests option, there are only four possible outcomes:
1) a test may pass.
2) a test may fail once but pass the second time.
3) a test may fail twice.
4) a test may not run.
My suggestion is that, if these are the four types of result, the summary should report exactly these four counts.
When you say this:
Run Summary: 25 of 25 test(s) run; 22 passed, 3 failed, 0 not run.
there is ambiguity: it is not clear whether the 22 passing tests include any that failed first and only passed on rerun. It could instead say:
Run Summary: (#) test(s) run, (#) passed, (#) failed but then passed, (#) failed twice, (#) not run.
With the second form, there is no ambiguity.
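The proposed four-bucket summary can be sketched as follows (Python; the per-test attempt lists are an invented structure for illustration, not Test Studio's result format):

```python
def rerun_summary(results):
    """results: dict mapping test name -> list of attempt outcomes,
    e.g. ["fail", "pass"]; an empty list means the test did not run."""
    counts = {"passed": 0, "failed_then_passed": 0,
              "failed_twice": 0, "not_run": 0}
    for attempts in results.values():
        if not attempts:
            counts["not_run"] += 1
        elif attempts == ["pass"]:
            counts["passed"] += 1
        elif attempts == ["fail", "pass"]:
            counts["failed_then_passed"] += 1
        else:  # ["fail", "fail"]
            counts["failed_twice"] += 1
    return counts

results = {"login": ["pass"], "checkout": ["fail", "pass"],
           "search": ["fail", "fail"], "report": []}
print(rerun_summary(results))
# {'passed': 1, 'failed_then_passed': 1, 'failed_twice': 1, 'not_run': 1}
```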
It would be helpful to extract the currently selected value of a RadDropDownList so it can be compared to the expected value. This is commonly used to verify that the selection was successful.
Please consider adding such extract steps for other similar controls.
Once you connect to a pop-up window, you cannot switch back to the previous one until the pop-up is closed. Some test scenarios would benefit from the ability to switch back and forth.
Please consider this feature for future releases.
Currently, when a step in an API test fails, the whole test is stopped.
Please add a continue-on-failure option for steps in API projects.
Conditional statements (if, while, and the like) should be able to use an extracted value as a condition.
Support for multiple conditions would also be very useful in some cases.
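For illustration, evaluating multiple conditions against extracted values could look like this (Python sketch; the extracted-value store and the condition triples are assumptions, not a Test Studio API):

```python
def should_run_branch(extracted, conditions):
    """Evaluate (key, operator, expected) conditions against extracted
    values; all must hold (AND semantics, illustrative only)."""
    ops = {"==": lambda a, b: a == b,
           "!=": lambda a, b: a != b,
           ">": lambda a, b: a > b}
    return all(ops[op](extracted.get(key), expected)
               for key, op, expected in conditions)

# Values previously extracted by earlier test steps (hypothetical)
extracted = {"status": "OK", "itemCount": 3}
print(should_run_branch(extracted,
                        [("status", "==", "OK"),
                         ("itemCount", ">", 0)]))  # True
```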