Here is my concern. Whenever my test lists execute and there are failed tests, it takes a large amount of time to determine what caused the tests within the list to fail. When I see failures in a test list, they generally fall into one of four categories:

1. Element identification
2. Step execution
3. Validation
4. Other problems

In the current view of a failed test list result, I always use the Result filter to deselect the Passed tests. That gives me a partial view of how much work will be required to resolve the issues. As I work through them, each failure turns out to be one of the four errors listed above.

The error that takes the most time to get a test passing again is element identification. It concerns me because the test was only able to run up to that point, so there is a chance that the steps after the failed element identification will also fail once I have repaired the first failure. Depending on the system under test, a failed element identification may also mean the application threw an exception and is actually sitting on an error page.

Since I spend my time digging through the failed tests to assess whether the system under test has a bug or the automated test itself is the cause of the failure, the reasons for this feedback item are:

1. To solicit other users' feedback and see whether there are commonalities in our experience with Test Studio despite the differences in our systems under test.
2. To propose a way to prioritize failed tests instead of going through each individual failed test to see what actually failed (a rough sketch of what I mean follows below). So from this screenshot, show me which failed tests require more of my time.
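To make the prioritization idea concrete, here is a minimal sketch in Python. It assumes a hypothetical exported result set where each failed test has a name and an error message; the category keywords, the `FailedTest` record, and the priority weights are my own assumptions for illustration, not anything Test Studio exposes today.

```python
from dataclasses import dataclass

# Assumed failure categories, ordered by how much triage time they
# typically cost me (element identification first).
CATEGORY_PRIORITY = {
    "Element identification": 0,
    "Step execution": 1,
    "Validation": 2,
    "Other": 3,
}

# Assumed keyword mapping from an error message to a category.
KEYWORDS = {
    "Element identification": ["element not found", "find element", "locate element"],
    "Step execution": ["step failed", "execution error", "timeout"],
    "Validation": ["verification failed", "expected", "assert"],
}

@dataclass
class FailedTest:
    name: str
    error_message: str

def categorize(test: FailedTest) -> str:
    """Map a failed test to one of the four categories by keyword match."""
    message = test.error_message.lower()
    for category, words in KEYWORDS.items():
        if any(word in message for word in words):
            return category
    return "Other"

def prioritize(failures: list[FailedTest]) -> list[tuple[str, FailedTest]]:
    """Return (category, test) pairs sorted so the costliest failures come first."""
    tagged = [(categorize(t), t) for t in failures]
    return sorted(tagged, key=lambda pair: CATEGORY_PRIORITY[pair[0]])

if __name__ == "__main__":
    # Example data standing in for a filtered test list result.
    failures = [
        FailedTest("Login_Smoke", "Wait for element 'SubmitButton' failed: element not found"),
        FailedTest("Checkout_Total", "Verification failed: expected '19.99' but found '0.00'"),
        FailedTest("Search_Results", "Step failed: execution error while clicking row 3"),
    ]
    for category, test in prioritize(failures):
        print(f"[{category}] {test.name}: {test.error_message}")
```

Even something this crude, applied to the filtered list of failures, would let me open the element identification failures first instead of scanning every result one by one.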