In test-driven development, a common challenge is to decide which tests to write and how many are necessary. Ideally, one would have as many tests as there are possible deviations in a program’s behavior. This is often very hard to achieve though, so it is necessary to determine how much of an application’s logic is covered by tests. This is often done through code coverage tools, which exist for a variety of platforms and languages.
Using the Java code coverage tool JaCoCo, we’re going to demonstrate how to combine a code coverage tool with Squish so that the generated coverage reports show which parts of the application logic are covered by which Squish test case. This makes it possible to identify test cases that duplicate other tests – in terms of coverage of the application logic – as well as areas that no test case covers yet.
Creating a Short Test
We start by creating a short and simple test case in Squish and then extend it to include JaCoCo. The test uses our addressbook example for Java/Swing; it creates a new entry in the addressbook and then quits the application.
Using the Squish IDE to record the steps yields a scripted test case like this one:
import names

def main():
    startApplication("AddressbookSwing.jar")
    activateItem(waitForObjectItem(names.address_Book_JMenuBar, "File"))
    activateItem(waitForObjectItem(names.file_JMenu, "New..."))
    activateItem(waitForObjectItem(names.address_Book_Unnamed_JMenuBar, "Edit"))
    activateItem(waitForObjectItem(names.edit_JMenu, "Add..."))
    type(waitForObject(names.address_Book_Add_Forename_JTextField), "Andreas")
    mouseClick(waitForObject(names.address_Book_Add_Surname_JTextField), 68, 22, 0, Button.Button1)
    type(waitForObject(names.address_Book_Add_Surname_JTextField), "Pakulat")
    mouseClick(waitForObject(names.address_Book_Add_Email_JTextField), 40, 25, 0, Button.Button1)
    type(waitForObject(names.address_Book_Add_Email_JTextField), "email@example.com")
    mouseClick(waitForObject(names.address_Book_Add_Phone_JTextField), 29, 14, 0, Button.Button1)
    type(waitForObject(names.address_Book_Add_Phone_JTextField), "123456")
    clickButton(waitForObject(names.address_Book_Add_OK_JButton))
    activateItem(waitForObjectItem(names.address_Book_Unnamed_JMenuBar, "File"))
    activateItem(waitForObjectItem(names.file_JMenu_2, "Quit"))
    clickButton(waitForObject(names.address_Book_No_JButton))
Running the Test With JaCoCo Instrumentation
JaCoCo has to run as part of the application to be able to generate coverage information, as explained in its Command Line Interface documentation. An easy way to apply the necessary command line arguments in a Squish test is to register the java executable as the AUT instead of the AUT’s jar file. The startApplication invocation is then modified to look exactly like a manual start of the AUT with JaCoCo from a command window.
Integrating the Squish test case name into the report file name enables the mapping of the coverage data back to a particular test case. The following script snippet demonstrates the startup procedure – including the cleanup of leftover report files from the last execution:
import os

# Some reused paths, adjust to your system
jacocoInstallDir = "/Users/andreas/Downloads/jacoco-0.8.3/"
addressbookDir = "/Users/andreas/squish/packages/squish-6.4.3-java-mac/examples/java/addressbook"
javaApp = "%s/AddressBookSwing.jar" % addressbookDir

testcaseName = os.path.basename(squishinfo.testCase)
jacocoReport = "%s/%s_jacoco.exec" % (addressbookDir, testcaseName)

# Clean up existing reports
if os.path.exists(jacocoReport):
    os.remove(jacocoReport)

# Include jacoco when starting the AUT and tell it where to store the report
startApplication("java -javaagent:%s/lib/jacocoagent.jar=destfile=%s -jar %s"
                 % (jacocoInstallDir, jacocoReport, javaApp))
Executing this test case will now write a JaCoCo report file when the AUT terminates. To ensure that the report file is created before starting with the next part – the generation of an HTML report – a short synchronization block is necessary. In this short example it is sufficient to wait for the report file to appear on disk. In larger applications, where writing a report may take a while, the synchronization may need to check that the file size has stopped growing, or simply wait a fixed amount of time.
# Ensure the report file is there before continuing
while not os.path.exists(jacocoReport):
    snooze(1)
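For the larger-application case mentioned above, a sketch of a size-based synchronization could look as follows. This is plain Python using time.sleep for the delay; inside a Squish script, snooze would serve the same purpose. The helper name and its parameters are illustrative, not part of Squish or JaCoCo:

```python
import os
import time

def wait_for_stable_file(path, timeout=30.0, interval=1.0):
    """Wait until `path` exists and its size stops changing.

    Returns True once the size is non-zero and unchanged between
    two consecutive checks, False if the timeout expires first.
    """
    deadline = time.time() + timeout
    last_size = -1
    while time.time() < deadline:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size and size > 0:
                return True
            last_size = size
        time.sleep(interval)
    return False
```

Waiting for a stable size instead of mere existence avoids reading a report that JaCoCo is still in the middle of writing.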
Visualizing the Code Coverage Data
The execution report file that JaCoCo generates is not human-readable, but it can be used with analyzers/visualizers in CI systems, such as the Jenkins code coverage view. It is also possible to generate a human-readable HTML report as part of the Squish test with JaCoCo’s command line interface. The following snippet shows how this can be achieved: the command line interface is invoked through Python’s standard subprocess module, which allows running arbitrary commands. The test case name is again made part of the HTML report’s name so reports from different test cases can be differentiated and analyzed separately. This enables the identification of missing tests as well as duplicated ones using the HTML report.
import subprocess

# Generate an HTML report in a coveragereport subdirectory, using the
# application jar for the class files argument to avoid extracting those
subprocess.check_call(["/usr/bin/java", "-jar",
                       "%s/lib/jacococli.jar" % jacocoInstallDir,
                       "report", jacocoReport,
                       "--classfiles", javaApp,
                       "--sourcefiles", os.path.dirname(javaApp),
                       "--name", "AddressBook Test %s" % testcaseName,
                       "--html", htmlReportDir])
This snippet uses a new variable, htmlReportDir, which has been added at the beginning of the main function, along with a corresponding cleanup step that removes the directory if it exists:
import shutil

htmlReportDir = "%s/%s_coveragereport" % (addressbookDir, testcaseName)

# Clean up existing reports
if os.path.exists(jacocoReport):
    os.remove(jacocoReport)
if os.path.exists(htmlReportDir):
    shutil.rmtree(htmlReportDir)
The resulting HTML report for our example shows that the test already covers quite a bit of the example application as seen in this screenshot:
Using JaCoCo’s command line interface tools, it is possible to generate code coverage execution reports for each test case – including an HTML report for consumption by humans. Having the relation between code coverage data and individual test cases makes it easier to understand which tests still have to be written and which ones add little value.
While not demonstrated here, it is perfectly possible to apply the same idea to a BDD test case. In such a test, the application startup and the setup of the variables would likely be done in an OnScenarioStart BDD hook script, and the variables for the report files and the JaCoCo directory can be passed on to the report generation by adding them to the context object available to such hooks. The HTML report generation and the synchronization would move to a corresponding OnScenarioEnd BDD hook script, which accesses the JaCoCo report through the variables stored in the context. The file name of the JaCoCo report could include the scenario title to break the coverage information down further, to the scenario level of a BDD test.
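As a rough sketch of that hook layout, the following shows the per-scenario setup and teardown as plain Python functions. In a real Squish suite these would live in the test suite’s hook script and be registered with the @OnScenarioStart and @OnScenarioEnd decorators, and context.title / context.userData are assumed to behave as in Squish’s BDD support; the paths are placeholders to adjust for your system:

```python
import os
import shutil
import subprocess

JACOCO_DIR = "/path/to/jacoco"           # placeholder, adjust to your system
AUT_JAR = "/path/to/AddressBookSwing.jar"  # placeholder

def on_scenario_start(context):
    # Derive per-scenario file names from the scenario title so every
    # scenario gets its own coverage report, then clean up leftovers.
    title = context.title.replace(" ", "_")
    context.userData = {
        "jacocoReport": "%s_jacoco.exec" % title,
        "htmlReportDir": "%s_coveragereport" % title,
    }
    for path in context.userData.values():
        if os.path.isdir(path):
            shutil.rmtree(path)
        elif os.path.exists(path):
            os.remove(path)
    # startApplication("java -javaagent:..." ) with the destfile argument
    # pointing at context.userData["jacocoReport"] would go here.

def on_scenario_end(context):
    report = context.userData["jacocoReport"]
    # Synchronization on the report file would go here, followed by the
    # HTML report generation via JaCoCo's command line interface.
    subprocess.check_call(["java", "-jar",
                           os.path.join(JACOCO_DIR, "lib", "jacococli.jar"),
                           "report", report,
                           "--classfiles", AUT_JAR,
                           "--html", context.userData["htmlReportDir"]])
```

Storing the derived paths in context.userData is what carries them from the start hook to the end hook without global variables.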
You can download the complete Squish testsuite demonstrating the use of JaCoCo.