Test Studio's recording capabilities offer no option to add a step that sets the value of the WPF RadSlider control. It would be useful to have one, similar to what is available for the standard WPF Slider control.
The workaround is to set the value in a coded step like this:
// The Value property accepts values from 0 to 1; 0.25 positions the slider at a quarter of its range.
Applications.SliderTestexe.MainWindow.Item0Radslider.Value = 0.25;
Hello Progress Support,
I wonder whether "Bind data" could be applied to the "String Length" property of the "Generate random string" step?
Thank you for your help.
Regards,
Lisa
Hello,
Currently, when I create a Git repo and connect it to Test Studio, the default branch it creates is called "master".
Would it be possible to make this consistent with current good practice and the Git standard, and rename it to "main" in the next version of Test Studio?
Can this be added as a feature request, please?
Thank you,
Max
When generating videos for test runs from a test list, the output video files use the name of the test only. There is no indication of the test list from which the test was executed, and with multiple runs and videos it is difficult to relate them to the generated results.
It would be useful to name the videos generated from a test list so that they correspond to the test list name and the particular run.
Currently the exported result contains extended details only for failed steps. If there is a warning for a step - such as the warning that the element was found only by image - it can be seen only in the Test Studio result file.
Extending the exported HTML result to show the step warning details would be very useful for anyone who reviews this type of result (attached to an email after a scheduled run, for example).
Currently the Test Studio CLI runner AOT.Runner.exe allows outputting the results in JUnit format (in addition to XML and HTML).
Add the option to output the results in NUnit format as well.
Enhance Test Studio recording and execution options with the ability to prompt the user to allow or deny permission to access their geolocation.
The app I'm testing may pop up a dialog in Edge asking for permission to know the user's location.
However, when I try to record the test using Edge, I don't see this popup; I just see an error thrown by the app saying that the user denied geolocation.
When I access the app with Edge outside of Test Studio, I do get prompted.
Currently Test Studio's built-in connection to a Git repository covers only the straightforward authentication scenario.
Enhance the options for connecting to Git with authentication via a proxy.
Currently, if the value used for a filter starts with #, the comparison operator is automatically converted from 'is exactly' to 'matches this regular expression'.
So if one needs to use a URL fragment (which starts with #) for element identification, for example, that # needs to be escaped.
The workaround is to change the comparison operator to 'contains' and use the fragment portion without the # sign.
We have a parent test that calls a data-bound test (driven by an Excel spreadsheet). All works great, but if we get a failure in a sub-test of the data-bound test, there is no way to terminate the test and stop the next iterations.
It would be great to have a way to stop data-driven tests if there is a failure in one of the iterations.
Hello,
In a live demo session we already discussed a certain need that we have (APG Group N.V.). We want to run regression tests on desktop applications. Your tool provides that, since we made a proof of concept during a trial period we used to build some tests. But we also want insight into the duration of tests. We are a DevOps-based team whose workplaces are almost fully automated, also when it comes to updates/patches. We want to see the impact of the updates/patches that are installed on the workplaces. We have structure in this process, so software changes go through certain rings before ending up in the production environment. Testing those software changes in the form of regression testing is our goal.
When we get into Test Studio and testing, we can clearly see all results and the overview is amazing! But what is missing is a way to see the duration of tests in a visual format (a graph or line chart). We also discovered that the data is there in the log of every test that has run - for every individual step as well! So all the data is available to put into a graph/line chart (in the Reports tab).
At this point this is not yet in your product, and for our organisation this is make or break. I want to say that everything else about the tool is amazing, and we would love to adopt your product if this function gets implemented.
I hope this feature gets implemented and I would like to hear from you.