Hi Team,
I have been using Telerik for UI automation and have never faced this type of issue before. The problem is that while recording the test the screen size does not change, but while running the same test case the screen size changes unexpectedly. This results in failed steps, because once the screen changes, the locations of the elements change as well.
PFA - Actual Image & Expected Image.
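In the meantime, is forcing a consistent window size at the start of the test a reasonable workaround? I am considering a coded step along these lines. This is only a rough sketch based on my reading of the ArtOfTest.WebAii API (the exact namespaces and Window methods may differ in your build), so please correct me if there is a better way:

using System.Threading;
using ArtOfTest.WebAii.Design.Execution;

public class WindowSetupSteps : BaseWebAiiTest
{
    [CodedStep(@"Maximize the browser so the layout matches the recording session")]
    public void EnsureConsistentWindowSize()
    {
        // Maximize the active browser so element locations match what was recorded.
        ActiveBrowser.Window.Maximize();

        // Let the page reflow, then refresh the DOM tree so subsequent
        // element searches run against the post-resize layout.
        Thread.Sleep(500);
        ActiveBrowser.RefreshDomTree();
    }
}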
Hello,
We have a suite of 23 test scenarios, which are made up of hundreds of test steps. They have been running fine and passing as expected in the previous version of Test Studio (version 2021.3.1103.0). In order to support the most recent browser versions (Google Chrome and MS Edge), we upgraded Test Studio to the most recent version (2022.1.215.0). During the upgrade, Test Studio automatically upgraded all the test scripts. However, since the upgrade, only 8 of the 23 test scenarios have been passing as expected, even though the target web app has not changed, i.e. the web app is not the problem.
Can you please look into this and help us resolve whatever root cause prevents our suite of tests from executing and passing as expected with the most recent version of Telerik Test Studio? Thanks in advance.
Stéphane C. (Senior Software Tester at Assura Software)
I have a project with two tests. One test is the child of the other and extracts values from a web page to use in the parent test. The first step of the parent test is to run the child test.
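For context, the parent/child hand-off is set up roughly like the following coded steps (just a sketch to illustrate the structure, using the WebAii API as I understand it; the element id and variable name are made up):

using ArtOfTest.WebAii.Controls.HtmlControls;
using ArtOfTest.WebAii.Design.Execution;

public class ChildTestSteps : BaseWebAiiTest
{
    [CodedStep(@"Extract a value from the page for the parent test")]
    public void ExtractOrderNumber()
    {
        // "orderNo" and "orderNumber" are illustrative names only.
        string value = Find.ById<HtmlSpan>("orderNo").InnerText;
        SetExtractedValue("orderNumber", value);
    }
}

public class ParentTestSteps : BaseWebAiiTest
{
    [CodedStep(@"Use the value extracted by the child test")]
    public void UseExtractedOrderNumber()
    {
        string orderNumber = GetExtractedValue("orderNumber").ToString();
        Log.WriteLine("Order number from child test: " + orderNumber);
    }
}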
If I choose a later step in the parent test and use "Run to here" on that step, then once it finishes I can view the log and see all of the steps and outcomes from both the parent and child tests.
If instead I launch the same test with the Execute button and set a breakpoint on that later parent step, then once it hits that breakpoint and I open the log from the debug options, the only results I see are from the child test steps and nothing from the parent test. There is nothing that even indicates that the first step of the parent launched the child test.
I find it hard to believe that this behavior is intentional, or at least not configurable, but I didn't find anything in my search of the documentation or the forums.
Is this possibly a bug or "unintended feature"?
Thanks in advance,
Jason
P.S. I am running Test Studio Ultimate v 2021.3.1103.0
Nested data-driven tests generate quite large results, even when there is no data in the bound data table (built-in or external). The results are almost twice the size of those produced by the same tests when they do not use data tables. The tests may be using InheritParentDataSource, but the behavior is similar when they do not use it.
For real user scenarios, the large results may exceed the file size limit (16 MB) that can be transferred through the storage service.
Investigate whether it is possible to optimize the data-driven test results, at least for the case when the data source is empty.
Not a big deal, or anything requiring further support, but an observation worth noting nonetheless:
If a dynamic target is set between two steps and one of those steps is removed from the HTTP traffic, rather than allowing the target to be edited to point at a new destination step (or simply passing the target information to the step that takes the place of the deleted step in the HTTP order), the target destination defaults to zero, and any further attempt to edit that traffic or the dynamic target results in the application crashing.
Also of note: the reason this has been an issue is that deleted traffic seems to reappear when the application is closed and reopened, even when the traffic is saved (with or without the crashing behavior).
Hope this helps! Thanks.
Hello,
ResultsViewer crashes when closing the Step Failure Details window with the OK button.
To reproduce:
Best regards,
Alexandre.
Dear Support,
The version I am using is not listed below: 2020.1.403.0
I am noticing an inconsistency when using the Replace Element feature, i.e. the attributes selected do not respect the priority set in the Settings - Find Logic (Html) screen.
Please refer to screenshots attached.
See the screenshot.
There are two annotation markers. For some reason, a "MouseClick:LeftClick" will appear, as it should, but then it sticks there, even as 10 or 12 other steps execute and their annotations come and go. Sometimes another MouseClick:LeftClick will appear, the old one will go away, and then the new one will stick. But this does not happen every time. Sometimes a new MouseClick:LeftClick annotation comes in and goes away while the old one is still hanging around.
This happens with almost all of my scripts.
Is it because some of the mouse clicks are from recorded steps and some are from code steps? This is just a guess.
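For reference, the code-step clicks look roughly like this (a sketch only; "saveBtn" is a made-up id, and I may be misremembering the exact WebAii method names):

using ArtOfTest.WebAii.Controls.HtmlControls;
using ArtOfTest.WebAii.Design.Execution;

public class ClickSteps : BaseWebAiiTest
{
    [CodedStep(@"Click the Save button from code")]
    public void ClickSaveButton()
    {
        HtmlButton save = Find.ById<HtmlButton>("saveBtn");

        // MouseClick() drives the real mouse pointer, which is the kind of
        // action the MouseClick:LeftClick annotation describes ...
        save.MouseClick();

        // ... whereas Click() fires the DOM click event without moving the mouse.
        // save.Click();
    }
}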
On a data-driven web test, setting the DataRange property to the example value "SingleRow" causes Visual Studio 2017 to crash. I have three rows of data in a data table that are bound to specific steps of my test. I was attempting to limit execution to only one row temporarily. Using SingleRow or 'SingleRow' causes Visual Studio 2017 to crash with the following stack trace, as found in the Windows Event Log. I was able to successfully use '1:1' or ':1' as a workaround.
Application: devenv.exe