The Squish GUI Tester excels at verifying an application’s user interface. But comprehensive verifications can come at a cost: the resulting test reports become huge and daunting to analyze. Take advantage of additional screenshots in Squish reports to get a better understanding of what happened.
A picture is worth a thousand words
Squish is a professional tool for creating, running, and maintaining GUI tests. The resulting test reports can be stored in a wide range of formats or post-processed by other tools. However, it can sometimes be hard to tell why a verification failed. A report makes it very clear that, say, a button was disabled even though it should have been enabled. Why that happened is often not clear at all.
Watching Squish as it replays tests to determine what’s happening is often not viable:
- Tests execute steps at high speed. It can be hard to follow the sequence of steps visually.
- Tests are often executed outside of working hours; nightly test runs are very common.
- Last but not least, tests typically take at least a couple of minutes. Watching them can be boring, and there may be better things to do!
Screenshots in Squish reports can help with this. By storing screenshots along with the test report data you get to see the state of the screen as it was at various moments during test execution. For example, the screen as it was when a verification failed.
Adding Screenshots to Test Reports
The simplest way to log a screenshot is by invoking the test.attachDesktopScreenshot function:
def main():
    startApplication("SquishAddressBook")
    test.attachDesktopScreenshot("Desktop after launching AUT")
The documentation explains:
This function will create a screenshot of the desktop from the system where the currently active AUT is running and store that in the test report directory. In addition, an entry is being logged in the test report indicating the path of the screenshot file as well as including the specified message.
This means that you can take screenshots at arbitrary points during the execution of a test case. This is especially useful if the application under test has some visible side effect on the desktop, such as
- An external PDF viewer is opened to display an invoice
- A new icon appears in the system tray
- A message box (possibly an error) is shown by the operating system
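As a sketch of this pattern, the snippet below attaches a screenshot right after a step whose side effect is only visible on the desktop. The object name and the export helper are hypothetical, and since `test` and `clickButton` are normally provided by the Squish runtime, minimal stand-ins are defined here so the snippet stands alone:

```python
# Stand-in for Squish's global 'test' object, which is provided
# implicitly in a real Squish test script.
class _TestLog:
    def __init__(self):
        self.attachments = []

    def attachDesktopScreenshot(self, message):
        # Squish stores the PNG in the report directory and logs 'message'
        # alongside it; here we only record the message.
        self.attachments.append(message)

test = _TestLog()

def clickButton(object_name):  # stand-in for Squish's clickButton()
    pass

def export_invoice():
    clickButton(":Export_QPushButton")  # hypothetical object name
    # The external PDF viewer should now be on screen - capture it.
    test.attachDesktopScreenshot("Desktop after exporting the invoice")

export_invoice()
```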
Automatically Logging Screenshots
In addition to the test.attachDesktopScreenshot function, Squish also features three APIs to create screenshots for you automatically in different situations:
- testSettings.logScreenshotOnFail for logging a screenshot every time a verification fails. Imagine a sporadic test failure which cannot be explained: maybe the overall state of the application under test is broken due to external factors? A visual inspection of the desktop gives developers additional information, and in many cases a quick look at a screenshot provides a useful hint as to what caused a verification to fail.
- testSettings.logScreenshotOnError for logging a screenshot every time a script error occurs. This is invaluable for diagnosing inexplicable test errors. For example, the test execution may abort because the application under test vanished. A screenshot may show that the application restarted itself due to an update. Or clicking a button may fail – the screenshot might show that the button is obscured by a Windows message box asking to reboot the machine.
- testSettings.logScreenshotOnPass for logging a screenshot every time a verification passes. This can be useful for creating a ‘photo story’ of the test execution. Verifications are typically spread all over a test script – and most of them are passes. By logging screenshots on passes, the resulting test reports become larger but much more expressive.
The first two settings, testSettings.logScreenshotOnFail and testSettings.logScreenshotOnError, are especially useful for diagnosing test failures. Enabling them typically improves the generated test reports considerably.
Unlike test.attachDesktopScreenshot(), these three are not functions but boolean properties. They all default to false and can be enabled in a test script using a statement such as
testSettings.logScreenshotOnFail = True
Make sure to execute such statements early on in your test script. That way, screenshots are created automatically for all subsequent verifications.
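A minimal sketch of enabling these flags at the start of a test case follows. The `testSettings` object is supplied by the Squish runtime; a stand-in is defined here so the snippet is self-contained:

```python
# Stand-in for Squish's global testSettings object, which is
# provided implicitly in a real Squish test script.
class _TestSettings:
    logScreenshotOnFail = False
    logScreenshotOnError = False
    logScreenshotOnPass = False

testSettings = _TestSettings()

def main():
    # Enable automatic screenshots first, before any verification
    # runs, so that every subsequent failure or error is covered.
    testSettings.logScreenshotOnFail = True
    testSettings.logScreenshotOnError = True
    # Leaving logScreenshotOnPass disabled keeps the report compact.

main()
```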
All screenshots are stored on disk in the compact PNG format, along with the other test report data. There are two main ways to view the screenshots: from the Squish IDE, or by inspecting (or post-processing) the test report files.
Accessing Screenshots in the Squish IDE
Accessing screenshots in the Squish IDE is useful if you just finished executing a test. It’s also handy when loading a previously generated test report into the Squish IDE.
In the Squish IDE, screenshots generated by test.attachDesktopScreenshot() show up like this:
Double-click the line saying ‘Attachment’ (the last line in the above image) to open the generated screenshot.
When using any of the testSettings flags described above, the results look slightly different:
In this case, double-click the line starting with ‘Desktop Screenshot’ to open the screenshot.
Accessing Screenshots In Squish Reports
Squish supports generating test reports in various formats. See the documentation of the squishrunner for a full list of supported formats. The appearance of screenshots in Squish reports depends on the format used by the report.
The most powerful solution for inspecting test results is Test Center; Squish reports can be automatically sent to Test Center for analysis by all stakeholders. This allows viewing the screenshots conveniently in a web browser. To avoid the report view becoming too large, screenshots are only shown on request. To view one, click the small icon next to the ‘Comparison’ message:
Other report formats, which are meant for post-processing by other tools, reference screenshots via their path on the file system. For example, Squish XML3 reports use entries like this:
<verification>
    <location>
        <uri><![CDATA[x-testcase:/test.py]]></uri>
        <lineNo><![CDATA[...]]></lineNo>
    </location>
    <scriptedVerificationResult type="FAIL" time="2019-09-02T14:23:29+02:00">
        <scriptedLocation>
            <uri><![CDATA[x-testcase:/test.py]]></uri>
            <lineNo><![CDATA[...]]></lineNo>
        </scriptedLocation>
        <text><![CDATA[Comparison (Screenshot in "/tmp/reportsml3/suite_screenshots/tst_case1/failedImages/failed_1.png")]]></text>
        <detail><![CDATA['Address Book - Untitled' and 'Apple' are not equal]]></detail>
        <screenshot>
            <uri><![CDATA[x-results:/suite_screenshots/tst_case1/failedImages/failed_1.png]]></uri>
        </screenshot>
    </scriptedVerificationResult>
</verification>
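Because the screenshot path is plain data in the report, such files are easy to post-process. As an illustration (not an official Squish tool), the Python standard library suffices to collect the screenshot URIs of failed verifications; the XML fragment below is trimmed from the sample above:

```python
import xml.etree.ElementTree as ET

# Trimmed fragment of a Squish XML3 report entry.
report_fragment = """\
<verification>
  <scriptedVerificationResult type="FAIL" time="2019-09-02T14:23:29+02:00">
    <detail>'Address Book - Untitled' and 'Apple' are not equal</detail>
    <screenshot>
      <uri>x-results:/suite_screenshots/tst_case1/failedImages/failed_1.png</uri>
    </screenshot>
  </scriptedVerificationResult>
</verification>
"""

root = ET.fromstring(report_fragment)
# Collect every screenshot URI referenced by a failed verification.
failed_screenshots = [
    uri.text
    for result in root.iter("scriptedVerificationResult")
    if result.get("type") == "FAIL"
    for uri in result.iter("uri")
    if uri.text and uri.text.startswith("x-results:")
]
print(failed_screenshots)
# ['x-results:/suite_screenshots/tst_case1/failedImages/failed_1.png']
```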
That way, screenshots generated as part of Squish test executions are always easily available.