Unplanned
Last Updated: 31 Aug 2015 13:41 by Ewin
Ewin
Created on: 11 Jun 2015 16:41
Type: Feature Request
Failed Test List and show me priority
Here is my concern. 

Whenever my test lists execute and there are failed tests, it takes a large amount of time to determine what caused the tests within the test lists to fail.  

When I see failures in a test list, they generally fall into 4 types of error:

	1. Element identification
	2. Step execution
	3. Validation 
	4. Other problems

In the current view of a failed test list result, I always use the Result filter to deselect the Passed tests.  That gives me a partial view of how much work it will take to resolve these issues. 

When I resolve these issues, each failure falls into one of the 4 error types I listed above.  The type that appears to take the most time to get a test passing again is element identification.  Element identification failures concern me because the test was only able to run up to that point, so there is a chance that the steps after the failed element identification may fail as well once I have repaired the original failure.  

Depending on the system under test, if a test has a failed element identification, the system may have thrown an exception and actually be sitting on an error page. 

Considering that I spend my time digging through the failed tests to assess whether the system under test has a bug or whether the automated test is the cause of the failure, the reasons for this feedback item are: 

	1. Solicit other users' feedback and see if there is commonality between our experiences within Test Studio despite the differences in systems under test.
	2. Propose a way to prioritize failed tests instead of going through each individual failed test to see what actually failed. So, from this screenshot, show me which failed tests require more of my time (a rough sketch of the idea follows below).  
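
To make the second point concrete, here is a minimal sketch (not existing Test Studio functionality) of the kind of triage summary I have in mind, written against the execution extension API shown in the comments below. The GetFailureText helper and the keyword checks are placeholders for however the failure text of a result would actually be obtained and matched - they are assumptions, not real API calls.

//Hypothetical triage sketch: group failed results into the 4 categories above
//(requires System.Linq; lives in an execution extension class).
public void OnAfterTestListCompleted(RunResult result)
{
    var failedByCategory = result.TestResults
        .Where(r => r.Result == ArtOfTest.Common.Design.ResultType.Fail)
        .GroupBy(r => Categorize(GetFailureText(r)));

    foreach (var group in failedByCategory)
    {
        //print how many failed tests fall into each category
        System.Console.WriteLine("{0}: {1} failed test(s)", group.Key, group.Count());
    }
}

//Placeholder: stands in for however the failure message of a result would be read
//(e.g. from the result details or the execution log) - not a real API call.
private static string GetFailureText(TestResult tr)
{
    return string.Empty;
}

//Very rough keyword-based classification; the keywords are examples only and would
//need to match the actual error messages produced by the failures.
private static string Categorize(string failureText)
{
    if (string.IsNullOrEmpty(failureText)) return "Other problems";
    if (failureText.Contains("locate") || failureText.Contains("find")) return "Element identification";
    if (failureText.Contains("verif")) return "Validation";
    if (failureText.Contains("execut")) return "Step execution";
    return "Other problems";
}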
4 comments
Ewin
Posted on: 22 Jun 2015 13:46
Hi Daniel, 
This sounds good to me.  

The multiple failed iterations case sounds like it will be complicated, but I guess that is something that needs to be thought through for a feature like this. 

Thanks for understanding. 
ADMIN
Daniel Djambov
Posted on: 22 Jun 2015 06:17
Telerik: Hi Ewin - I understand your point and it makes sense. If we show the failure information when you hover over a failed test (as in Quick Execution, whether it is an element find error or any other error), that will probably help you identify the error quickly, along with having Test priority as an option in the Test Results grid - how does this sound? We still have to figure out a way to show multiple errors in the case of multiple failing iterations and other specific cases, though. If you confirm this or add something more to it, I will file a feature request to have the PMs iterate on it and process it further.
Ewin
Posted on: 18 Jun 2015 19:23
Hi Daniel, 
I understand what you mean.  Currently, my company is in the process of getting into continuous integration.  Across our QA team, there are 10 automation projects that I have a hand in.  Each project varies in the number of tests, but they range between 100 and 500 tests.  

With that in mind, it gets a little difficult to manage the test repair after a test execution.  

So what I am really asking for is this: if I view the attached screenshot in test list view, I would like to be able to tell that the first failed test in the list is a Verify Error and that the third failed test is an Element Find Error. 

When using Quick Execution, there is the debugger screen, which gives me the ability to debug the test for the reported errors. The debugger options provide a good set of categories for failures in a test. 

With this visualization of errors, I could then prioritize my effort instead of guessing which tests will take a while and which will not.  

I always intend to work through and repair all the failed tests, but I would like to spend less time digging through failed tests only to find a verify error when I could be concentrating on an element find error. 

Does this make sense?
ADMIN
Daniel Djambov
Posted on: 18 Jun 2015 05:07
Hi Ewin, 
I agree, you have a good point. We actually have a Test Case priority property; however, it is not included in Result view filtering, so I will create a Feature Request for our PMs to consider the best way to implement it. 

About your second point, that you need to go to each individual failed test, I will give you my personal opinion as a QA: you do have to go to each individual failed test, as each of them can reveal some sort of problem - either in the automation test itself or in the application under test. Having as many failures as you show in the attachment, however, is not reasonable for debugging. When we run our own automation for Test Studio, we run about 500 tests nightly and there are usually 10-20 failed tests that need to be analyzed; most often these are random failures, and the automation needs to be fixed and stabilized. Over time, if an automation test has been working for a long period, it will not break unless the application/environment changes. 
The other failures usually require fixing the steps around the failing step - before or after it - so that the automation is more stable. Writing an automation test and running it tens of times until it is in a stable state is what finalizes it. 

Of course there are random failures that require too much time to investigate and fix and are not cost effective to chase. 
For cases like this, what we have implemented is a strategy we call "Failed tests rerun": after a test list is executed, we create a new test list at runtime containing only the failed tests from that list and execute those failed tests one more time. That way we reduce the number of random failures and automatically get fewer results to analyze for application failures. Of course, from time to time you still need to analyze the tests that initially failed but passed on the rerun, in order to fix and stabilize the automation test, but this is not urgent and allows you to focus your time on the real, double failures only. 

We use our execution extension methods to prepare and execute the failed tests again - first we create a new Test List object:
public void OnBeforeTestListStarted(TestList list)
{
    //execute this section only for the Automation project
    if (projVersion == list.ProjectId.ToString())
    {
        isCurrentProject = true;

        //prepare execution to re-run failed tests
        if (!rerun || list.TestListName.Contains("FailedReRun")) return;

        //create new TestList object
        failedTL = new TestList(list.TestListName.ToString() + "(FailedReRun)", "Automatic", ArtOfTest.WebAii.Design.TestListType.Automated);
        failedTL.ProjectId = list.ProjectId;
        failedTL.Id = list.Id;
        failedTL.Settings = list.Settings;
    }
}
Then, if there are failed tests, we execute them again - either individually or as part of a Test List:
public void OnAfterTestListCompleted(RunResult result)
{
    //only execute if re-run of failed tests is selected
    if (!rerun || !isFailed) return;

    //get the list of failed tests for the re-run
    var failedtestnames = result.TestResults.Where(x => x.Result == ArtOfTest.Common.Design.ResultType.Fail).ToList();

    if (rerunAsList)
    {
        //add every failed test to the new test list, save it to disk and run it as a list
        foreach (TestResult tr in failedtestnames)
        {
            failedTL.Tests.Add(new TestInfo(tr.TestId.ToString(), tr.TestPath.ToString()));
        }

        failedTL.SaveToListFile(projRoot);

        StartAOTProcess(failedTL.TestListName.ToString() + ".aiilist", projRoot);
    }
    else
    {
        //run each failed test individually
        foreach (TestResult tr in failedtestnames)
        {
            StartAOTProcess(tr.TestPath.ToString(), projRoot);
        }
    }
}
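
For completeness, here is a rough sketch of what the StartAOTProcess helper used above could look like. This is an assumption, not the actual implementation: it simply launches the Test Studio command-line runner (ArtOfTest.Runner.exe) with a list= or test= argument plus the project root, so the runner path and argument names may need to be adjusted for your installation.

//Hypothetical sketch of the StartAOTProcess helper - not the real implementation.
//Assumes the Test Studio command-line runner and its list=/test=/root= arguments.
private void StartAOTProcess(string testOrListPath, string projectRoot)
{
    //choose the argument name based on whether a test list or a single test is passed
    string argName = testOrListPath.EndsWith(".aiilist") ? "list" : "test";

    var startInfo = new System.Diagnostics.ProcessStartInfo
    {
        //adjust this path to where ArtOfTest.Runner.exe is installed on your machine
        FileName = @"C:\Program Files (x86)\Telerik\Test Studio\Bin\ArtOfTest.Runner.exe",
        Arguments = string.Format("{0}=\"{1}\" root=\"{2}\"", argName, testOrListPath, projectRoot),
        UseShellExecute = false
    };

    using (var process = System.Diagnostics.Process.Start(startInfo))
    {
        //wait for the re-run to finish before continuing
        process.WaitForExit();
    }
}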

Hope this makes sense to you too.