Analyse test reports from automated executions in the Squish IDE

Executing Squish tests regularly in an automated fashion, for example in a CI system, is key to a successful test effort. These tests will occasionally fail, and a tester then needs to analyse what caused the failure and possibly adapt the expected values of verifications. The Squish IDE provides several tools for these tasks, such as navigating to the relevant script positions, editing screenshot verification points with vpdiff, and inspecting the stack trace.

These tools are readily available whenever a test has been executed from within the Squish IDE. It would be convenient to use the same toolset for failures that occur during automated execution but are not easily reproducible when running through the IDE on a tester's computer. To support this use case, Squish 6.4 introduced the ability to load test reports that were generated outside the IDE, and on other systems, into the IDE, making its powerful analysis tools available for these failures as well.

Obtaining a report that can be imported

The import functionality for test reports requires that the report was generated with the xml3 report generator. This format includes all the data necessary to analyse screenshot and visual verification failures, as well as stack trace details for any failures or errors in the report. The older xml2 format can be used as well, but it only allows jumping from a failure to the script location that logged it.
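
As a sketch, an xml2 report could be generated like this; note that the xml2.2 version suffix is an assumption and the available suffixes may vary between Squish releases:

squishrunner --testsuite suite_sample --reportgen xml2.2,report.xml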

An xml3 report, which supports the full analysis workflow, is obtained by invoking squishrunner with the corresponding report generator option:

squishrunner --testsuite suite_sample --reportgen xml3.3,xml3report

Once the test execution has finished, the xml3report directory contains all the needed information and can be compressed into a zip file for transport to the computer running the IDE.
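
As a minimal sketch, on Linux or macOS the directory can be compressed with the standard zip tool (the archive name is arbitrary):

zip -r xml3report.zip xml3report

On Windows, the built-in Compress-Archive PowerShell cmdlet serves the same purpose.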

Importing a report into the IDE

In order to import the report and analyse the failures, or to update the expected data of verifications, the test suite and all corresponding script directories – including global ones – have to be open in the IDE.

Once the test suite has been opened, copy the compressed report to your own computer and then use the Import toolbar button in the Results View to select it from your disk.

Matching paths from the execution system to the local system

In some cases the location of the Squish test suite on the system where the automated execution happened will differ from its location on the system where the IDE is being used – for example, when the tests were executed on a Windows system but the analysis is done on macOS.

Whenever the IDE encounters a filesystem path in the imported report but cannot find a file at that path on the current system, it asks you to look up the file locally. The same applies to paths pointing to globally shared script files.

Doing this mapping by hand can become tedious if you need to open several different files from the report. The IDE therefore stores the paths you select, and when another unknown path is encountered it tries to match part of that path against one of the stored ones. That way it is usually unnecessary to manually locate more than one file per test suite or global script directory.
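
As a purely hypothetical illustration, after mapping one file from the report to its local counterpart when prompted, e.g.

C:\tests\suite_sample\tst_case1\test.py  →  /Users/me/tests/suite_sample/tst_case1/test.py

the IDE can resolve further report paths such as C:\tests\suite_sample\tst_case2\test.py on its own, because they share the already-mapped suite_sample prefix.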

Utilizing the IDE for analysis

Once the IDE can map the paths from the report to the open test suite and global scripts, double-clicking an entry jumps to the corresponding script location. In combination with the stack trace information this can already provide some insight into what happened during test execution.

Failed screenshot, table and visual verifications can be analysed as well, since the imported report contains the actual data that failed to match the expected data of the verification point. Opening the View Differences option via the context menu or toolbar shows, for example, the screenshot difference viewer, which offers a range of comparison modes and ways to adjust the verification.

Finally, the Use as Expected functionality in the Results View can be used to update the expected data of a verification in those cases where the failure documents an intended behaviour change in the application.

Conclusion

The import feature can be quite a time saver when analysing failures, or when updating the expected data of verifications, based on reports from automated test executions. It also combines nicely with the export of test reports from the IDE, making it easy to share a report with a colleague. Reports exported from the IDE are automatically in the right format and compressed, ready to be imported on a different system.
