We've observed odd behaviour where data-driven tests set up with local data execute only the first iteration when using the "Run to here" and "Run selected steps" options. According to the log, Test Studio correctly identifies the test as data-driven and reports that it is executing the first iteration, but it never executes any further iterations (no errors or failures for the subsequent iterations appear in the log). This occurs regardless of whether the test is run standalone or as a test-as-step in a larger test; in the latter case, after the first iteration of the sub-test, the parent test continues with the step following the test-as-step, which then fails on a data validation error because the remaining iterations never ran.
On further analysis we determined that when the entire test is fully executed, all iterations of the sub-test do run, so only the partial run options appear to be affected.
We're unsure if this is by design, or whether there is a configuration setting that controls running only one iteration under these run options.
Thanks,
Jeff Harrison
Currently, Test Studio does not prompt with a Save dialog when closing an unsaved test case file, so users can lose track of changed files.
Request: when closing any unsaved file in Test Studio, prompt the user with a Save dialog, letting the user notice the pending changes and decide whether to save.
Currently, Test Studio automatically removes an element if it is not used by any test case, without notifying the user.
In real-world scenarios those elements may be needed in later test cases, and it is a nightmare when an element cannot be found when it is needed. Those elements may also be used in custom code files.
Request: enhance Test Studio to let the user decide which elements to remove. For example, unused elements could be highlighted or shown in a different font, so that the user is notified and can manage them afterwards.
When initiating a 'Record' on a web test in Test Studio, the relevant browser (Edge Chromium, though Firefox and Chrome have also been tried) is launched and the URL is navigated to, but only a single step is recorded:
Navigate to : '/'
We are attempting to record regression tests against a custom web app. Steps are recorded fine when navigating to (for example) google.com and typing in a search query.
Please let me know what further information I can provide to narrow this down.
1. Edit a Test
2. Test shows * for pending changes
3. Can't find a ribbon button for saving the changes -> I can only use [CTRL]+[S]
Maybe you could add the button in the "Edit" section, or as a separate big button before the "Edit" section?
A picture is worth a thousand words...
The Kendo Angular MultiSelect control is not covered by the built-in translators.
It would be very useful and consistent if such a translator were added to Test Studio.
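Until a translator exists, a coded step can drive the control through its rendered HTML. The following is only a sketch resting on assumptions: the kendo-multiselect tag name, the inner input, and the item text "Option 1" are placeholders that depend on the concrete page.

    // Sketch of a coded step for a Kendo Angular MultiSelect (all selectors are assumptions).
    HtmlControl multiselect = ActiveBrowser.Find.ByExpression<HtmlControl>("tagname=kendo-multiselect");
    HtmlInputText input = multiselect.Find.ByExpression<HtmlInputText>("tagname=input");
    input.Click();
    Manager.Desktop.KeyBoard.TypeText("Option 1", 50); // real keystrokes so the widget filters its popup
    ActiveBrowser.WaitUntilReady();
    // Pick the filtered item from the popup list by its text.
    HtmlListItem item = ActiveBrowser.Find.ByExpression<HtmlListItem>("TextContent=Option 1");
    item.Click();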
For automated API testing, it would be useful to additionally support the generated output file in a human-readable format (Markdown/HTML).
We (the dev/QA team) would like to attach the file to the story as documentation of the test case, so that the product manager or other colleagues (without license access) can easily take a look at the covered cases.
The current XML output (sample attached) already provides a good overview and could be extended with background information in some places (for example, <action>).
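As an interim approach, the existing XML can be down-converted on our side. A minimal sketch, assuming the report file name and that each <action> element's text describes one step (the real schema in the attached sample may differ):

    using System.IO;
    using System.Text;
    using System.Xml.Linq;

    class ReportToMarkdown
    {
        static void Main()
        {
            // Load the generated report (file name is an assumption).
            XDocument report = XDocument.Load("ApiTestResults.xml");
            StringBuilder md = new StringBuilder();
            md.AppendLine("# API Test Report");
            // Emit one bullet per <action> element; <action> is the only
            // element name known from the sample, adjust to the real schema.
            foreach (XElement action in report.Descendants("action"))
            {
                md.AppendLine("- " + action.Value.Trim());
            }
            File.WriteAllText("ApiTestResults.md", md.ToString());
        }
    }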
Telerik Test Studio has options to Add a data source, Bind (to an existing one), or Unbind from a data source.
However, the VS Test Studio plug-in only has Add and Unbind.
It would be nice to be able to do everything within Visual Studio.
Implement the option of passing multiple custom dynamic targets to the same Post Data array.
Currently, this is not a supported scenario in Test Studio: the Post Data is a JSON array, and the suggested workaround (copying the entire array into the Destination Field Name and using the prefix{value}suffix method) allows altering one of the values, but not both.
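For illustration (the values are made up): if the recorded Post Data is

    {"ids":["A1B2C3","D4E5F6"]}

then a Destination Field Name of the form prefix{value}suffix can mark only one slot, e.g.

    {"ids":["{value}","D4E5F6"]}

so the second value cannot be replaced by a different dynamic target at the same time.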
Please add a custom goal to help analyze load test result data by User Profile. The requirement is to find the "Average time for completion" of each executed user profile.
P.S. Running performance tests while the application server is loaded is not a sufficient metric for the scenario.
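To illustrate the desired metric: assuming the results could be exported to a CSV with, say, UserProfile and CompletionTimeMs columns (a hypothetical export format, not an existing Test Studio feature), the goal amounts to:

    using System;
    using System.IO;
    using System.Linq;

    class AverageCompletionByProfile
    {
        static void Main()
        {
            // Hypothetical export: one row per executed profile run,
            // e.g. "Checkout,1845" (profile name, completion time in ms).
            var rows = File.ReadAllLines("LoadTestResults.csv")
                           .Skip(1) // skip header row
                           .Select(line => line.Split(','));

            // Group by profile and compute the average time for completion.
            var averages = rows
                .GroupBy(r => r[0])
                .Select(g => new { Profile = g.Key, AvgMs = g.Average(r => double.Parse(r[1])) });

            foreach (var a in averages)
                Console.WriteLine($"{a.Profile}: {a.AvgMs:F0} ms average time for completion");
        }
    }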
When one needs to regularly reconfigure a very high volume of Custom Dynamic Targets across different user profiles, the GUI approach is very slow and inconvenient.
Would it be possible to implement a feature where a file can be uploaded to a User Profile in order to automatically configure frequently used Custom Dynamic Targets? Or to somehow transfer the desired set of targets from one profile to another test or profile?
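For illustration only, the uploaded file could be as simple as one target per line; every column name below is hypothetical, not an existing Test Studio format:

    TargetName,SourceRequest,ExtractionRule,DestinationFieldName
    SessionToken,/api/login,json:$.token,token
    OrderId,/api/orders,regex:"id":"(\w+)",orderId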
HTTP responses with content-encoding type 'br' (Brotli) cannot be decoded in Test Studio load testing. As a result, these responses cannot be used to generate dynamic targets needed for a proper load test run.
Workaround: Modify the traffic for a load user profile by removing the 'br' encoding type. A third-party Chrome extension can be used for this modification.
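Concretely, the modification amounts to rewriting the request header so the server never responds with Brotli, e.g.:

    Accept-Encoding: gzip, deflate, br     (as recorded)
    Accept-Encoding: gzip, deflate         (modified; the server falls back to an encoding Test Studio can decode)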
We need to be able to verify the text in dialogs in different browsers.
The current solution is in code, but is not stable due to browser structure changes -> https://docs.telerik.com/teststudio/advanced-topics/coded-samples/html/verify-dialog-text-chrome
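One browser-structure-independent alternative (a sketch only, not the documented approach) is to bypass the native dialog entirely from a coded step: override window.alert, trigger the action, and assert on the captured message. The element id below is hypothetical. Note that with this approach the native dialog never appears, so it verifies the message rather than the dialog itself.

    // Replace window.alert so the message is captured instead of shown.
    ActiveBrowser.Actions.InvokeScript(
        "window.__lastAlert = null; window.alert = function (msg) { window.__lastAlert = msg; };");

    // Trigger whatever raises the dialog (hypothetical button id).
    ActiveBrowser.Find.ByExpression<HtmlButton>("id=showDialog").Click();

    // Read the captured text back and verify it.
    string dialogText = ActiveBrowser.Actions.InvokeScript("window.__lastAlert");
    Assert.AreEqual("Expected dialog text", dialogText);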
We have a large PDF (60+ pages) and are trying to verify certain content on specific pages. During recording, some elements are captured with TagName and TextContent and others with TagIndex. In this scenario TagIndex is not reliable, because the document is large and the elements change dynamically.
One solution is to manually update the find logic of all elements that use TagIndex, but this is very time-consuming.
Another option is to use a coded step to find the target element by TextContent and verify it.
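A minimal sketch of that coded-step option (the text literal is a placeholder for the real content on the target page):

    // Locate the element by its text instead of a brittle TagIndex;
    // Find.ByContent matches on TextContent.
    HtmlControl target = ActiveBrowser.Find.ByContent<HtmlControl>("Expected text on page 42");
    Assert.IsNotNull(target);
    Log.WriteLine("Verified content: " + target.TextContent);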