If a test list run freezes, it effectively blocks the service from running any other test lists on the following days. We'd like a watchdog feature that simply kills off a scheduled test-list run completely if it has been inactive for a long period of time (also killing the associated IE processes, of course). This would allow the service to continue running the next day and beyond.
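To illustrate the kind of watchdog logic we have in mind, here is a minimal Python sketch. The timeout value, the notion of "last activity" (e.g. a log write or test-step completion), and the process-killing call are all assumptions for illustration, not Test Studio APIs:

```python
import time

def is_hung(last_activity, timeout_seconds, now=None):
    """Return True when a run has been idle longer than timeout_seconds.

    last_activity and now are timestamps (seconds since the epoch);
    now defaults to the current time.
    """
    if now is None:
        now = time.time()
    return (now - last_activity) > timeout_seconds

# The service loop would then periodically check each running test list and,
# when is_hung(...) returns True, terminate the run and its IE processes
# (hypothetical, e.g. via psutil or taskkill) so the schedule can continue.
```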
It would be really powerful to be able to watch the currently running test list's live progress: which tests have completed so far, which have failed, and how long remains before the run finishes. Currently it's practically impossible to see real-time progress when running a remote test list, whether scheduled or manually executed. In the same vein, it would be useful to be able to cancel a test list that is running on the scheduling server.
Executing a test list produces a list of all executed tests. There should be an option to re-execute only the failed tests, in the context of the same test list.
I would like the ability, after running a test list, to select all failed tests (e.g. with a checkbox) and re-run them. This would be extremely helpful and save a lot of time.
An Excel export of test list results includes the test name and the failure information, but it would be nice if it showed the step that failed as well. Some of our users do not access the test list section of Test Studio (due to all of the problems we have been having in that area lately), so it would be nice if the results showed the failing step, rather than me having to drill down into the results in the Test Lists tab to tell them which steps need to be addressed.
We will ultimately have several employees using Test Studio at the same time, creating tests and test lists. Since test lists cannot be run in parallel, it would be nice for an employee to be able to see on the Test Lists tab whether any lists are currently running on any remote execution machines, rather than having to jump over to the Tests tab and check the scheduling status.
When an automated test fails, it can sometimes be down to environmental issues or network instability, and a re-run would cause it to pass. It would be good to have an option to re-run a test x number of times on failure. This is particularly useful for automated deployments, which can be hampered by tests that fail the first time but would pass on a second or third run.
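The retry behavior requested above can be sketched in a few lines of Python; `run_test` is a hypothetical zero-argument callable returning True on pass, standing in for whatever the runner would actually invoke:

```python
def run_with_retries(run_test, max_attempts=3):
    """Re-run a flaky test up to max_attempts times.

    Returns (passed, attempts_used): the test counts as passed as soon as
    any single attempt succeeds, which absorbs transient environment or
    network failures without masking consistently failing tests.
    """
    for attempt in range(1, max_attempts + 1):
        if run_test():
            return True, attempt
    return False, max_attempts
```

A consistently failing test still fails after `max_attempts` runs, so genuine regressions are not hidden; only intermittent failures are absorbed.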
I find it frustrating that the scheduling server has no state or status for a scheduled test that did not run. In my experience, if a test is scheduled and does not run, it simply disappears from the results view. This happens, for example, when a test executor is turned off and unavailable to run the scheduled test. I would expect proactive notification from the scheduling service that a test could not be run, or for the scheduled test to remain in the results, flagged in red, with useful failure details in the log.
Can you please enhance the scheduling server/test executor to prevent compile-time collisions? I have a test executor that runs tests frequently, so it compiles the pages.g.cs file each time. Sometimes I request a test to be run remotely against that same executor, expecting it to know when it's busy and get to my request when it can... and boom: the requests crash into each other and the executor gets a compile error in the pages.g.cs file. Frustrating. Please see about enhancing this so that commands arriving at the executor from multiple points (the scheduler, or a developer's box) don't collide.
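One way to serialize the two command sources would be an exclusive lock held around the compile-and-run step, so the second request waits or is queued instead of colliding. A minimal cross-process sketch using an atomically created lock file (the path and workflow are assumptions for illustration, not how the executor actually works):

```python
import os

class FileLock:
    """Simple cross-process mutual exclusion via an exclusively created file."""

    def __init__(self, path):
        self.path = path
        self.fd = None

    def try_acquire(self):
        try:
            # O_EXCL makes creation atomic: exactly one process can succeed,
            # so only one compile-and-run can be in flight at a time.
            self.fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            return True
        except FileExistsError:
            return False

    def release(self):
        if self.fd is not None:
            os.close(self.fd)
            os.remove(self.path)
            self.fd = None
```

A caller that fails to acquire the lock would queue its request and retry, rather than starting a second compile of pages.g.cs concurrently.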
Currently, when you move or delete the last coded step from a test, the code-behind file still remains in the project. It would be nice to delete the code-behind when no coded steps are left in the test. A confirmation dialog could ask for permission to proceed; otherwise, some code-behind files could be deleted by mistake.
Invoke OnBlur fails with the following exception in IE 11:

ExecuteCommand failed! InError set by the client.
Client Error: System.ArgumentOutOfRangeException: Index and length must refer to a location within the string.
Parameter name: length
   at System.String.Substring(Int32 startIndex, Int32 length)
   at ArtOfTest.InternetExplorer.IECommandProcessor.InvokeEvent(IHTMLElement target, BrowserCommand command)
   at ArtOfTest.InternetExplorer.IECommandProcessor.ProcessActionCommands(BrowserCommand request)
   at ArtOfTest.InternetExplorer.IECommandProcessor.ProcessCommandInternal(WebBrowserClass ieInstance, BrowserCommand request, IHTMLDocument2 document)

BrowserCommand (Type: 'Action', Info: 'NotSet', Action: 'InvokeEvent', Target: 'ElementId (tagName: 'input', occurrenceIndex: '74')', Data: 'onblur--@@--null', ClientId: 'Client_007162c7-bb26-4f8e-a68b-32c177fccba3', HasFrames: 'False', FramesInfo: '', TargetFrameIndex: '0', InError: 'True', Response: the same ArgumentOutOfRangeException and stack trace as above)

InnerException: none.
Schedule test list: email settings should persist with the project. Currently you must fill out the attachment type, the recipients, the title, etc. If you close the project, these settings are lost; if you stay in the project without closing it, they are retained. These settings should persist with the project.
If the scheduler is overloaded or stuck, it stops working. It should either attempt to auto-recover or send an alert that it is not running.
Steps to reproduce: 1. Open a Test Studio project in Windows File Explorer. 2. Copy and paste a test into a new folder within the project. 3. Press the Refresh button in Test Studio so the test appears. Actual: the new test has the same ID as the original test. Expected: Test Studio should assign a new ID. This works correctly if you use Add Existing Test from the Project Explorer.
There is no option to save only one test in the project. When you press the Save button, Test Studio saves all tests that are marked as dirty. The workaround is to go to Recent Projects and reopen the project you are working on; in this case a dialog listing all dirty tests is displayed, and one can uncheck some of them.
It would be very beneficial to be able to select multiple test list results and mass-export them all to a single location as Excel spreadsheets. Currently it is very tedious to go through and manually export each test list result to Excel, especially when you have a lot of test lists that run as part of your suite.
Once tests are run using ArtOfTest.Runner from the command line, I would like to extract the results (number of tests, passed tests, etc.) so that I can send an email with the pass rate and the number of tests run. I can do this by opening the results file and converting it into Word/Excel, where it gives all the details, but these details are not saved in this file: SanityChrome 131250182933109322.aiiresult. Where can I see how many of the tests passed from the command line? Thanks, Sri
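As a stopgap until this is exposed directly, the pass rate could be computed by post-processing the result file. The sketch below assumes the file is XML with one element per executed test carrying a pass/fail attribute; the element name `TestResult` and attribute `Result` are hypothetical and would need to be checked against an actual .aiiresult file:

```python
import xml.etree.ElementTree as ET

def summarize(path):
    """Return (total, passed, pass_rate_percent) from a result file.

    Assumes XML entries like <TestResult Result="Pass"/>; adjust the
    element and attribute names to match the real result-file schema.
    """
    root = ET.parse(path).getroot()
    results = [el.get("Result") for el in root.iter("TestResult")]
    total = len(results)
    passed = sum(1 for r in results if r == "Pass")
    rate = 100.0 * passed / total if total else 0.0
    return total, passed, rate
```

The returned tuple could then be formatted into the body of the status email.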
Include Expand All & Collapse All in the Project Tab