Automating Graphical User Interface (GUI) tests is a challenging task. In theory, any test can be automated, but automating every test is rarely worthwhile, often because of limited resources (i.e., the time spent writing the automated test). So how do you decide which test cases are worth automating? The specific requirements will depend on the product, team skill set, time constraints and tool limitations. In this article, we’ll list the factors by which you can evaluate a test as a potential automation candidate.
- Repetitive Test Runs (R)
Evaluates how often a given manual test is executed. If the test is run every time you release your product as part of regression testing, it should be considered for automation. Once automated, the test can run not just at every release, but at every build of the application under test (AUT).
- Test Severity (S)
This factor represents how critical the tested feature is to the AUT. It should be evaluated by all stakeholders, giving tests for the critical parts of your application priority over tests for lower-severity areas.
- Configurations (C)
The number of configurations a given test needs to be run on. If a test must be run on varying software configurations, across multiple platforms, or with different test datasets, that indicates it is a good candidate for automation.
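To see why this factor matters, consider how quickly configurations multiply. The sketch below is hypothetical: the platforms, datasets and the `run_search` stand-in are invented for illustration, with the stand-in simply reporting success rather than driving a real AUT.

```python
from itertools import product

# Hypothetical sketch: the same test logic exercised across every
# combination of platform and test dataset.
PLATFORMS = ["windows", "linux", "macos"]
DATASETS = ["empty_db", "small_db", "large_db"]

def run_search(platform, dataset):
    # Stand-in for launching the AUT on `platform` with `dataset`
    # and running the search test; returns True on success.
    return True

# 3 platforms x 3 datasets = 9 runs a manual tester would repeat by hand;
# an automated script iterates over them for free.
results = {(p, d): run_search(p, d) for p, d in product(PLATFORMS, DATASETS)}
assert all(results.values())
print(f"{len(results)} configuration runs passed")  # → 9 configuration runs passed
```

Adding one more platform or dataset grows the manual effort multiplicatively, while the automated loop absorbs it with a one-line change.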
- Time to Execute the Test Manually (T)
This factor represents the time needed to execute the test manually on a given configuration.
- Test Performance (TP)
This factor represents the historical failure-detection rate of a given test, i.e., how often executing the test has led to finding a defect. Modern test management systems, like Squish Test Center, can provide this information. Conversely, some tests are executed for many years without ever uncovering a new defect.
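Computing such a rate from execution records is straightforward. The following is a minimal sketch; the test names and history below are invented, and in practice the data would come from a test management system rather than a hard-coded dictionary.

```python
# Hypothetical execution history: runs and defects found per manual test.
history = {
    "checkout_flow": {"runs": 120, "defects_found": 6},
    "about_dialog":  {"runs": 300, "defects_found": 0},
}

# A test's failure-detection rate is the fraction of runs that found a defect.
for name, record in history.items():
    rate = record["defects_found"] / record["runs"]
    print(f"{name}: {rate:.1%} of runs found a defect")
```

Here `checkout_flow` detects a defect in 5% of its runs, while `about_dialog` has never found one, which would weigh against automating the latter.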
- Ease of Automation (A)
Describes how difficult a test is to automate. Here you need to consider test complexity, test automation tool limitations, the programming skills of your test developers and, finally, the nature of the given test. A good automation candidate has results that are precise and deterministic, and that can be evaluated by a computer program.
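"Precise and deterministic" means a script can compare the actual result against an exact expected value, with no human judgment involved. As a hypothetical sketch (the `format_invoice_total` function is invented for illustration):

```python
def format_invoice_total(amounts):
    # Produces an exact, reproducible string for a list of line amounts.
    return f"Total: {sum(amounts):.2f} EUR"

# Machine-evaluable check: the same input always yields the same exact
# output, so the comparison needs no human judgment.
assert format_invoice_total([10.00, 2.50]) == "Total: 12.50 EUR"
```

Contrast this with a subjective expected result such as "the invoice looks correct", which a human can judge in seconds but a program cannot verify, making the test a poor automation candidate.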
Automation Candidate Factor
For each manual test, an Automation Candidate Factor (ACF) can be calculated. Evaluate each factor, giving it a weight from 1 to 3. An example formula: ACF = (R + S + C + T + TP)/5 + A. With this factor, you can allocate your limited resources to the tests that are truly worth automating.
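The ACF formula above can be applied programmatically to a backlog of manual tests. This is a minimal sketch; the test names and factor scores are made up for illustration, and only the formula itself comes from the article.

```python
def acf(r, s, c, t, tp, a):
    """ACF = (R + S + C + T + TP) / 5 + A, each factor weighted 1 to 3."""
    return (r + s + c + t + tp) / 5 + a

# Hypothetical manual tests with their factor weightings.
tests = {
    "login_smoke":   dict(r=3, s=3, c=2, t=2, tp=2, a=3),
    "report_export": dict(r=2, s=2, c=3, t=3, tp=1, a=2),
    "about_dialog":  dict(r=1, s=1, c=1, t=1, tp=1, a=3),
}

# Rank the backlog: automate the highest-scoring tests first.
ranked = sorted(tests.items(), key=lambda kv: acf(**kv[1]), reverse=True)
for name, factors in ranked:
    print(f"{name}: ACF = {acf(**factors):.1f}")
```

With these sample weights, `login_smoke` scores 5.4 and tops the list, so it would be automated first.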
Regardless of the priority you assign to a given manual test, the quality of the manual test that precedes automation is also an important factor. Poor-quality manual tests lead to poor-quality automated tests. Are all the steps in the test case well described? Are the expected results precise? Investing time in improving the quality of manual tests before automating them is, in general, a good idea.