To keep pace with the ever-increasing demand for higher software quality, full end-to-end testing of software products has become common practice. By exercising the application via the graphical user interface (GUI), testers assume the position of the user, which yields many benefits. For example, GUI tests exercise a large portion of the application’s source code with relatively little effort. GUI testing gives testers a lot of ‘leverage’.
However, as applications become more powerful and complex, so do their GUIs. Manually testing every piece of functionality is tedious at best. Hence, many software projects consider automating at least some of their GUI testing efforts – and rightfully so.
There are plenty of good reasons to automate GUI tests: human testers are freed up to do things only humans can do, tests are executed more quickly, they yield reproducible results, and more. Alas, there is no free lunch. Getting a clear understanding of the desired behavior and identifying test cases which are suitable for automation are only two of the challenges you will face. There are many factors to consider when deciding whether a GUI test should be automated, and which ones. This article discusses the most important factors and aims to help you with the decision.
Benefits of GUI Test Automation
First and foremost, test automation does not replace human testers – it supports them. By automating tests, human testers are relieved of executing mundane tasks. Instead, they can concentrate on verifying behavior which a machine cannot verify easily.
Instead of grinding through simple test cases, human testers can work on quality assurance tasks such as:
- Usability testing to find complicated or hard-to-use user interfaces.
- Exploratory testing which leverages tester experience to look for defects in specific areas of the application.
- Analyzing test reports to identify patterns which hint at the root cause of failures.
Speed truly matters when it comes to testing. Slow test execution means that you cannot execute the tests as often – in fact, it often means that you don’t want to. Hence, GUI testing of non-trivial software projects is often conducted just once a week or, even worse, only shortly before critical milestones such as a public release.
Executing tests more quickly may seem like a mere ‘nice-to-have’. However, test execution speed has significant consequences for everyone on a software team.
Defects are found more quickly when tests are executed more often. Ideally, only a little time has passed since the last successful test run, so the changes to the application between test runs are small. Often enough, the developers still have those changes fresh in mind, which makes fixing defects much cheaper.
Fixing defects shortly after they were introduced avoids regressions. By avoiding surprise regressions, the progress at which a project nears completion is much more linear:
Note how the green line has multiple smaller bumps whereas the blue one has fewer, bigger bumps.
A more linear progression makes accurate time estimates a lot easier – good news for project managers, the marketing department, and customers!
People are smart. Sometimes, too smart for their own good. Human brains are hard-wired to find new, shorter, more interesting ways to perform a repetitive task. This ability is excellent for creative work, such as exploratory testing.
Execution of regression and GUI tests requires other qualities, however. It’s important to be able to reproduce the test reports. This enables testing bug fixes: does the report still contain a failure in the new build? It also permits assessing the quality over time. In order to get comparable results, you need to perform the same test steps. The exact same test steps. Every. Single. Time.
This can be mind-numbingly boring. Hence, after 10, 50, or 100 executions of the same test case, human cleverness kicks in. Testers use a keyboard shortcut instead of multiple mouse clicks. Maybe they start to skip a little step. After a while, they might omit entire test cases, reasoning:
There is no possible way this test case will fail. There was no change to the application in this area. And besides, it never failed before!
Don’t buy it.
Quality assurance requires a high degree of diligence and endurance – both qualities at which computers excel. An automated test case guarantees comparable results: the computer will perform the exact same steps, every single time. It will never take any shortcuts, and it will never silently skip tests. Assuming a stable test setup, the only thing changing between test runs should be your application. So if a test case fails in one test report but passed in the previous one, you can tell that the failure must have been caused by a change in the application.
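The reasoning above can be sketched in a few lines: given two test reports, any test that passed in the previous run but fails now points at a recent application change. The report format (test name mapped to a result string) is invented purely for illustration.

```python
# Toy sketch: diff two test reports to spot regressions.
# The report format and test names are hypothetical.

def find_regressions(previous: dict, current: dict) -> list:
    """Return tests that passed in the previous run but fail now."""
    return sorted(
        name
        for name, result in current.items()
        if result == "fail" and previous.get(name) == "pass"
    )

previous_run = {"login": "pass", "checkout": "pass", "search": "fail"}
current_run = {"login": "pass", "checkout": "fail", "search": "fail"}

print(find_regressions(previous_run, current_run))  # -> ['checkout']
```

Note that ‘search’ is not reported: it already failed before, so it is a known issue rather than a new regression.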
Precise Expectations And Requirements
Manual test execution builds on the intelligence of human testers. Test specifications and test plans can be left vague. Of course, any diligent QA team will try to express any test case in clear and unambiguous terms. As it happens though, when churning out hundreds or thousands of test cases, not all of them will be equally precise.
In many cases, being a little lenient causes no harm. When a test fails, however, it can become a major source of frustration. Nobody likes delaying a release and sitting through a subsequent meeting in which unclear test expectations are discussed!
Automating GUI tests does away with this lack of precision. A computer, unless explicitly told otherwise, permits no ambiguity. Expected outcomes must be expressed clearly. This may seem cumbersome at first. However, it greatly improves the overall consistency of the test suite. Even after writing a thousand test cases, the 1001st test case still needs to describe precisely what is done and what is expected.
Note that precision is not the same thing as accuracy, though. Precision is about the (lack of) ambiguity in statements. Saying “It’s a 2 kilometer walk to the next gas station” is less precise than “It’s a 2408 meter walk to the next gas station”. Accuracy is about correctness – about whether you’re walking in the right direction! Hence, even though test automation enforces precision, you still need to make sure that the expected outcome is accurate.
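In automated test code, this distinction is tangible: the assertion pins down a precise expected value, but it is up to the test author to make sure that value is also the accurate one. A minimal sketch, with an invented label-reading helper standing in for a real GUI query:

```python
def get_label_text() -> str:
    # Stand-in for querying the GUI; a real automation tool
    # would read the text from the actual widget.
    return "2408 m"

# Precise: the expectation is unambiguous. Whether it is also
# *accurate* (the correct distance) must still be verified once
# by a human when the test case is written.
expected = "2408 m"
actual = get_label_text()
assert actual == expected, f"expected {expected!r}, got {actual!r}"
print("label check passed")
```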
Challenges with GUI Test Automation
Any software project seeking to automate GUI tests will eventually face challenges. Some are specific to the individual organization; others are common to almost all projects.
Clear Understanding of Desired Behavior
Computers need to be told exactly what the expected behavior and state is. This is a highly positive aspect of test automation. However, existing test cases may not be as precise in their current form. The expected behavior may be implicit and rely on a tester’s judgement. In such cases, reviewing test cases may be required. Typically, the need for review becomes apparent when implementing automated test cases.
Furthermore, for some tests there is no clear and definite description of the desired outcome at all:
- In exploratory testing, there are no clearly pre-defined test cases at all. There may be rough ideas on what to exercise. However, for the most part, the system is tested on the fly.
- During usability testing, participants who are (or at least represent) real users interact with the system. Do they get lost while doing so? Can they complete the tasks they are trying to accomplish? Is using the system a positive experience? Such highly subjective assessments of a system cannot easily be encoded into an automated test case.
These examples show that there are plausible tests via the GUI for which GUI test automation is not the best fit.
Test methodologies like behavior-driven development (BDD) can be of great help here: domain experts describe the desired behavior in free-form language, which a tester then augments with the logic that drives the UI.
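The core of that idea can be shown in a small, self-contained sketch: plain-language Given/When/Then steps are matched against patterns that a tester has bound to automation code. The step texts, patterns, and context structure are all invented for illustration and do not reflect any particular BDD tool.

```python
import re

STEPS = {}  # regex pattern -> step implementation

def step(pattern):
    """Register a step implementation for a plain-language pattern."""
    def register(func):
        STEPS[pattern] = func
        return func
    return register

@step(r"the user is on the login page")
def open_login(ctx):
    ctx["page"] = "login"  # a real binding would drive the UI here

@step(r'the user logs in as "(\w+)"')
def log_in(ctx, name):
    ctx["user"] = name

@step(r"the dashboard is shown")
def check_dashboard(ctx):
    assert ctx.get("user"), "no user logged in"
    ctx["page"] = "dashboard"

def run_scenario(scenario: str) -> dict:
    """Match each scenario line to a registered step and execute it."""
    ctx = {}
    for line in scenario.strip().splitlines():
        text = re.sub(r"^\s*(Given|When|Then|And)\s+", "", line).strip()
        for pattern, func in STEPS.items():
            match = re.fullmatch(pattern, text)
            if match:
                func(ctx, *match.groups())
                break
        else:
            raise ValueError(f"no step matches: {text!r}")
    return ctx

result = run_scenario("""
    Given the user is on the login page
    When the user logs in as "alice"
    Then the dashboard is shown
""")
print(result["page"])  # -> dashboard
```

The division of labor is the point: the scenario text stays readable for domain experts, while the step implementations hide the UI-driving details.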
Any form of testing requires some up-front investment, no matter if it’s manual testing or automated testing. Test cases need to be written, and test plans need to be laid out. Manual testing permits a rather free-form description of test cases. This makes writing test cases more convenient, at the risk of losing precision.
GUI test automation incurs an additional one-time overhead. QA teams need to decide on a suitable test automation tool. Testers need to familiarize themselves with the tool. It may be necessary to port test cases to a format suitable for consumption by the tool.
Tests are executed for the entire lifetime of the software product. In particular, tests are typically executed a lot more often than they are modified. Thus, the long-term benefits of faster (and more reliable) test execution soon outweigh the up-front investment. The effort associated with introducing test automation into a project amortizes over the entire lifetime of the product.
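The amortization argument can be made concrete with a back-of-the-envelope calculation. All numbers below are invented for illustration; real figures vary widely between projects.

```python
# Break-even estimate: one-time automation cost vs. time saved per run.
# Every number here is a made-up example, not a benchmark.

automation_cost_hours = 40.0   # writing tests + tool setup, one time
manual_run_hours = 3.0         # one manual pass of the suite
automated_run_hours = 0.25     # attended time per automated run

saved_per_run = manual_run_hours - automated_run_hours  # 2.75 h
runs_to_break_even = automation_cost_hours / saved_per_run

print(round(runs_to_break_even, 1))  # -> 14.5
```

With a nightly run, these example numbers would break even in about three weeks; every run after that is pure gain.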
Identify Test Cases Suitable for Automation
Not all test cases verify behavior which is easily accessible to a computer. Such test cases likely do not lend themselves to test automation.
Imagine the ‘Print’ functionality of an application. Technically, a full end-to-end test would need to observe the printer and check that the paper shows the intended output. While it’s possible to do so with sufficient (hardware) effort, it’s most likely impractical in many cases. Instead, it’s perfectly sufficient to have a human tester verify this functionality.
Successful test automation efforts concentrate on the low-hanging fruit. Many projects define a large number of relatively simple test cases. Testers often find them boring and will greatly appreciate it if a computer can take over the work. Aiming for 100% GUI test automation is almost certainly not sensible from an economic point of view.
To address this challenge, make sure to select an appropriate test automation tool. It’s useful – but not sufficient – to perform image-based testing. The test automation tool needs intimate knowledge of how the application under test is constructed. That way, it can access individual objects and perform object-based testing.
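To illustrate what object-based access buys you, here is a toy widget tree and a lookup by object name. The tree structure, property names, and widget names are invented and do not correspond to any particular tool’s API; the point is that the test addresses a widget by identity rather than by pixels.

```python
# Invented widget tree, as a stand-in for what a GUI automation tool
# with knowledge of the application's internals could expose.
WIDGET_TREE = {
    "name": "MainWindow",
    "children": [
        {"name": "okButton", "text": "OK", "enabled": True, "children": []},
        {"name": "statusLabel", "text": "Ready", "children": []},
    ],
}

def find_widget(node: dict, name: str):
    """Depth-first search for a widget by its object name."""
    if node.get("name") == name:
        return node
    for child in node.get("children", []):
        found = find_widget(child, name)
        if found:
            return found
    return None

button = find_widget(WIDGET_TREE, "okButton")
assert button is not None and button["enabled"]
print(button["text"])  # -> OK
```

Unlike a screenshot comparison, such a lookup keeps working when the button moves, is restyled, or is rendered on a different platform.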
It is well possible to automate too much. Automation can be such a time saver that testers end up automating everything. This can lead to ‘automation blindness’ – testers no longer question whether a test case is necessary at all.
As a result, test cases tend to accumulate over time. This results in ever increasing test suites with more and more test code to maintain. Executing test suites takes longer, too – jeopardizing the benefits gained by faster test case execution! A classic example is test case overlap: over time, as the application under test changes, test cases may exercise very similar (or even the same) functionality. This makes test suites run slower at very little gain.
To fight automation blindness, it’s imperative to review the coverage of GUI tests constantly. A good code coverage tool is highly recommended for this. In the best case, it integrates tightly with the GUI test automation tool. That way, you can easily review the code coverage per GUI test case as well as identify test cases which provide no or only very little extra benefit.
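As a sketch of such a review, the snippet below takes invented per-test coverage data (sets of covered source lines) and flags tests whose coverage is entirely provided by the rest of the suite:

```python
# Hypothetical per-test coverage data: test case -> covered source lines.
COVERAGE = {
    "test_login": {1, 2, 3, 4, 7},
    "test_checkout": {3, 4, 5, 6},
    "test_login_again": {1, 2, 3},  # adds nothing beyond the others
}

def redundant_tests(coverage: dict) -> list:
    """Tests whose covered lines are all covered by the other tests."""
    redundant = []
    for test, lines in coverage.items():
        covered_by_others = set().union(
            *(c for t, c in coverage.items() if t != test)
        )
        if lines <= covered_by_others:
            redundant.append(test)
    return sorted(redundant)

print(redundant_tests(COVERAGE))  # -> ['test_login_again']
```

Flagged tests are candidates for review rather than automatic deletion: several tests can be pairwise redundant yet jointly provide coverage, so remove them one at a time and re-measure.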
Conclusion
GUI test automation is a major improvement for all but the simplest software development projects. Introducing test automation early in the process maximizes the time over which the initial one-time investment amortizes.
GUI test automation enables frequent, fast and repeatable test runs, which help with
- spotting regressions faster
- making defects easier and cheaper to fix
- simplifying project management by reducing the risk of regressions discovered very late in the process
Designing test cases with automation in mind increases the leverage of the tool, liberating the QA staff to concentrate on higher-level tasks which only humans can perform.
It is extremely important to select a proven, industrial-strength test automation tool from a dedicated vendor. Test automation is a very long-term process – good technical support, continuous and timely updates, and comprehensive documentation are critical. The IT landscape changes constantly, and so do users’ expectations. Hence, not only do the applications need to adapt – the testing tool has to follow suit.