It is very difficult to explain (and understand) when the global wait for elements applies and when it doesn't. There seem to be lots of questions on the forums and in the suggestions around this, for example this post: http://feedback.telerik.com/Project/117/Feedback/Details/44270-when-waiting-for-element-to-exist-in-code-test-studio-doesnt-respect-the-timeou I can't find anything in the online documentation that covers it. We have opened support cases and asked questions on the forums, and new users here get very confused. This is pretty fundamental to "getting" the product, so it would be great to have documentation on it. I'd say it is important enough to document and call out in a blog post. Thanks!
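As an illustration of the distinction the documentation would need to draw, here is a minimal sketch in plain Python (not Test Studio's actual API; the names and the 0.5 s poll interval are assumptions): a global wait applies implicitly to every element lookup, while an explicit per-step timeout overrides it for that call.

```python
import time

GLOBAL_WAIT_SECONDS = 10  # illustrative stand-in for the global element wait setting

def find_element(locate, timeout=None):
    """Poll for an element until it appears or the effective timeout elapses.

    If no explicit timeout is given, the global wait applies; an explicit
    timeout (e.g. from a WaitForExists-style step) takes precedence over it.
    """
    deadline = time.time() + (timeout if timeout is not None else GLOBAL_WAIT_SECONDS)
    while time.time() < deadline:
        element = locate()      # caller-supplied lookup; returns the element or None
        if element is not None:
            return element
        time.sleep(0.5)         # poll interval
    raise TimeoutError("element not found within the effective wait")
```

Documenting which of these two timeouts wins in each situation (recorded steps, coded steps, verifications) is exactly what this request is asking for.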
When you record in IE 10, the application becomes partially unresponsive and then acts very slowly. Access to the application and steps to reproduce are in the external description.
When executing multiple tests on distributed machines, you cannot determine upon failure on which machine the failed test(s) were executed. Refer to the attached screenshot.
1) Double-click a .aiiresult file of a test list containing a failed test.
2) Drill down into Step Failure Details.
3) Export the details to a .zip file.
Expected: The exported file includes the image captured on failure.
Actual: The image on failure is missing.
Here is my concern. Whenever my test lists execute and there are failed tests, it takes a large amount of time to determine what caused the tests within the test lists to fail. When I see failures in a test list, I generally encounter four kinds of errors:
1. Element identification
2. Step execution
3. Validation
4. Other problems
From the current view of a failed test list result, I always use the Result filter to deselect the passed tests. That gives me a partial view of the amount of work required to resolve the issues. Of the four errors listed above, the one that takes the most time to get a test passing again is element identification. It concerns me because the test was only able to run up to that point, so there is a chance that steps after the failed element identification may fail as well once I repair the failure. Depending on the system under test, a failed element identification may mean the system threw an exception and is actually sitting on an error page. Since I spend my time digging through the failed tests to assess whether the system under test has a bug or the automated test itself is the cause of the failure, the reasons for this feedback item are:
1. To solicit other users' feedback and see whether there are commonalities in our experience with Test Studio despite the differences in systems under test.
2. To propose a way to prioritize failed tests instead of going through each individual failed test to see what actually failed. In other words, from this screenshot, show me which failed tests require more of my time. A sketch of such a triage follows this item.
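To make the prioritization idea concrete, here is a minimal sketch in plain Python (not Test Studio code; the categories come from the list above, but the cost ordering is my assumption):

```python
from collections import namedtuple

TestResult = namedtuple("TestResult", "name status failure_kind")

# Assumed investigation cost per failure kind, most expensive first.
TRIAGE_PRIORITY = {
    "element identification": 0,  # later steps never ran, so they are unverified
    "step execution": 1,
    "validation": 2,
    "other": 3,
}

def prioritize_failures(results):
    """Return failed tests ordered by how much attention they likely need."""
    failed = [r for r in results if r.status == "failed"]
    return sorted(failed, key=lambda r: TRIAGE_PRIORITY.get(r.failure_kind, 99))

results = [
    TestResult("login", "passed", None),
    TestResult("checkout", "failed", "validation"),
    TestResult("search", "failed", "element identification"),
]
for r in prioritize_failures(results):
    print(r.name, "->", r.failure_kind)  # "search" surfaces before "checkout"
```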
Currently the translators don't offer any text verifications when selecting an HtmlTableRow or HtmlTable. Adding TextContent or InnerText to these would be extremely useful; it's a very common verification!
Steps to reproduce:
1. Execute a data-driven test in a test list.
2. Drill down the results to the iteration section.
Actual behavior: There is an empty Browser column in the grid.
Expected behavior: This column should not be present in this view.
Steps to reproduce and video of the issue are in the internal description.
So here is my idea. In Test Studio, there are if/else conditions. These conditions only allow Verify or Wait steps to be used in their check, but Wait for URL is categorized as an Action step rather than a Verify or Wait step. In the spirit of checking the step, I suggest allowing the Wait for URL step to be converted to a Verify URL step.
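For context on the two step kinds, here is a minimal sketch in plain Python (not Test Studio's API; names and defaults are assumptions): a verify is an instant boolean check an if/else can branch on, while a wait polls until the condition holds or a timeout expires.

```python
import time

def verify_url(current_url, expected):
    """Instant check: suitable as the condition of an if/else step."""
    return current_url() == expected

def wait_for_url(current_url, expected, timeout=10.0, poll=0.5):
    """Poll until the URL matches or the timeout elapses; returns True/False."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if current_url() == expected:
            return True
        time.sleep(poll)
    return False
```

Converting Wait for URL into a Verify URL step would essentially mean exposing the instant check on its own.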
There should be an ability to select multiple test lists and run them remotely. We should be able to select 15 test lists, for example, and have them all run on the execution server one after another. When multiple test lists are selected, the "Run List remotely" button becomes disabled. The workaround is to schedule the lists one after another (e.g. the first at 1 PM, the second at 2 PM, etc.), but in that case you have to know how long each test list takes to execute.
VSTest.Console.exe does not run Test Studio tests successfully if they contain tests-as-steps and data-bound tests.
When executing a test as step from code, if there is an exception in the executed test, not all of the exception information is propagated back to the parent test.
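As a point of comparison, this plain Python sketch (not Test Studio code; the test names and messages are invented for illustration) shows the expected behavior: when the inner test throws, re-raising with the original exception chained keeps the inner message and traceback visible to the parent.

```python
def inner_test():
    raise AssertionError("element 'SubmitButton' not found on step 4")

def outer_test():
    try:
        inner_test()
    except Exception as inner:
        # Chaining ("from inner") preserves the inner test's message and
        # traceback, so the parent result shows the real root cause.
        raise RuntimeError("test-as-step 'inner_test' failed") from inner

try:
    outer_test()
except RuntimeError as outer:
    print(outer)            # outer summary
    print(outer.__cause__)  # inner root cause is still available
```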
Executing a test list with ArtOfTest.Runner using a Settings.xml file does not take the BaseURL into account. Note: if the elementWaitTimeout and enableScriptLogging attributes are removed from the file, then the BaseURL is applied to the test list execution regardless of the test list settings. It would be useful to sort out the priorities between test and test list settings when executing from the command line with a Settings.xml file.
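To illustrate the precedence question, here is a minimal sketch in Python; the merge order shown (command-line settings file overrides the test list's own settings) is an assumption, and pinning this order down is precisely what needs to be documented:

```python
def effective_settings(test_list_settings, settings_file):
    """Assumed precedence: values from the command-line settings file
    override the test list's own settings; unset values fall through."""
    merged = dict(test_list_settings)
    merged.update({k: v for k, v in settings_file.items() if v is not None})
    return merged

test_list = {"BaseURL": "http://staging.example.com", "elementWaitTimeout": 5000}
from_file = {"BaseURL": "http://test.example.com", "enableScriptLogging": True}

print(effective_settings(test_list, from_file))
# {'BaseURL': 'http://test.example.com', 'elementWaitTimeout': 5000,
#  'enableScriptLogging': True}
```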
When a job is scheduled, the test scripts are uploaded to the storage server database. To update the version of the test files, one has to update them manually or re-schedule the job. This request is for such an update to happen automatically, or for an option to trigger it.
If there are nested tests on multiple levels and InheritParentDataSource is enabled for the bottom-level test, it takes the data of the top-level test rather than that of the nearest parent with its own data and the inheritance property unchecked. Example: Test A with a data source -> Test B with a data source -> Test C with InheritParentDataSource enabled. Expected: Test C takes its data from Test B. Actual: Test C takes the data source from Test A, although Test B does not inherit its data. Extending the setting into separate 'InheritTopLevelDataSource' and 'InheritParentDataSource' options would be a good solution.
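A minimal sketch of the expected resolution rule, in plain Python under an assumed parent-linked test model (not Test Studio's implementation): walk up the chain and stop at the nearest ancestor that has its own data source.

```python
class Test:
    def __init__(self, name, data_source=None, parent=None, inherit=False):
        self.name = name
        self.data_source = data_source
        self.parent = parent
        self.inherit = inherit

def resolve_data_source(test):
    """Expected behavior: the nearest ancestor with its own data source wins."""
    if not test.inherit:
        return test.data_source
    node = test.parent
    while node is not None:
        if node.data_source is not None:
            return node.data_source  # stop at Test B; do not climb on to Test A
        node = node.parent
    return None

a = Test("A", data_source="data_a.xlsx")
b = Test("B", data_source="data_b.xlsx", parent=a)
c = Test("C", parent=b, inherit=True)
print(resolve_data_source(c))  # data_b.xlsx
```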
Scenario 1: Use an access token from a response across all subsequent requests.
1. In a load test, send a request for an access token.
2. The response contains an access token used for the current user session.
Expected: To be able to extract the token and use it in all subsequent requests in the load test.
Actual: This is currently not possible.
Scenario 2: Data-bind a value in the JSON body sent with a POST API call.
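Both scenarios boil down to the pattern below, sketched with Python's requests library against a hypothetical API (the URL, field names, and Bearer header scheme are assumptions):

```python
import requests

BASE = "https://api.example.com"  # hypothetical service

# Scenario 1: fetch a token once, reuse it on every later request.
login = requests.post(f"{BASE}/login", json={"user": "demo", "password": "demo"})
token = login.json()["access_token"]           # extracted from the response body
headers = {"Authorization": f"Bearer {token}"}

# Scenario 2: data-bind a value into the JSON body of a POST call.
for quantity in [1, 5, 10]:                    # stand-in for a data source
    r = requests.post(f"{BASE}/orders",
                      headers=headers,         # token reused across requests
                      json={"item": "widget", "quantity": quantity})
    print(r.status_code)
```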
Steps to reproduce:
1. Open the attached project and the included test.
2. Try to load the page from the navigate step: for IE the 'Load' button is enabled, but when switched to Chrome or Firefox the 'Load' button is greyed out.
3. Run -> To Here works and loads the page, however, so load it in Chrome and try to enable highlighting.
Expected: To be able to highlight elements and add steps from the quick steps menu.
Actual: The whole page is highlighted and the context menu does not appear at all. After a while Test Studio becomes unresponsive and closes without a crash trace in the log.
Details shared internally!
1. Open the attached .aiiresult file.
Expected: The results load in the Results viewer.
Actual: The Results viewer crashes.
Resources shared internally.
It would be very nice to be able to capture traffic for load test scenarios from a list of tests, rather than just one test at a time. The desired outcome is that each test in the list is captured as a separate load scenario.