We had a great time presenting our talk, Using Code Coverage to Enhance Product Quality, at this year’s Qt Virtual Tech Con hosted by our friends at The Qt Company. We introduced our code coverage tool, Squish Coco, and discussed ways to improve testing efficiency for both your development and QA teams. We received lots of great questions during the Q&A portion, some of which we could not answer within the time limit. The complete Q&A is given here:
Integrations & Support
Are GitLab CI and GitHub Actions supported for Continuous Integration?
Yes. While we have documented instructions for CI tool integrations with Jenkins and Bamboo, integrations with other CI systems, including GitLab CI and GitHub Actions, are generally supported through Coco's set of command-line tools.
Our support team is available to assist your team with integrating your chosen CI system with Squish Coco.
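As a rough illustration, a GitLab CI job that wires Coco into a build might look like the sketch below. The tool names (csg++, cmcsexeimport, cmreport) are Coco's compiler wrapper and command-line utilities; the build commands, file names, and exact flags are assumptions to be adapted to your project and verified against your Coco version.

```yaml
# Hypothetical GitLab CI job: build instrumented, run the tests, publish a report.
coverage:
  stage: test
  script:
    - make CXX=csg++ all                    # csg++: Coco's g++ wrapper (assumed in PATH)
    - ./run_tests                           # writes run_tests.csexe next to run_tests.csmes
    - cmcsexeimport -m run_tests.csmes -e run_tests.csexe -t "CI run"
    - cmreport -m run_tests.csmes --html=coverage_report
  artifacts:
    paths:
      - coverage_report
```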
Will Coco integrate with my unit test framework “X?”
Coco was built with an open, flexible approach in mind for integrating with unit test frameworks. While we have documented setups for popular frameworks like CppUnit, Qt Test and Google Test, virtually any generic or atypical framework can be supported.
We’ve written a blog explaining how to integrate your generic-type unit test framework with Squish Coco. Read it here.
Does Coco work with gmock?
gmock has been absorbed into the Google Test framework, and Coco includes integration support for that framework.
How does Coco work with CMake + Google Test?
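One common pattern (a sketch, assuming Coco's compiler wrappers csgcc/csg++ are installed; the project and file names below are placeholders) is to leave the CMake project itself unchanged and select the wrappers at configure time:

```cmake
# Hypothetical CMakeLists.txt for a Google Test target. Nothing here is
# Coco-specific -- instrumentation is chosen when configuring the build.
cmake_minimum_required(VERSION 3.16)
project(coco_gtest_demo CXX)

find_package(GTest REQUIRED)

add_executable(unit_tests test_math.cpp)
target_link_libraries(unit_tests PRIVATE GTest::gtest_main)
```

An instrumented build would then be configured with something like `cmake -DCMAKE_CXX_COMPILER=csg++ -DCMAKE_C_COMPILER=csgcc -B build-coverage`, keeping the regular build directory untouched.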
What are the project size limits which Coco can handle?
While there is no definitive answer on the size limits of projects which Coco can manage, we have seen Coco used successfully with applications built with millions of lines of source code.
In terms of runtime performance, are instrumented builds still debug or non-optimized?
It is up to you to compile your build in debug or release mode. With Coco, you can directly use an optimized build, which will run faster than a debug build. Coco provides accurate, reliable coverage information for either case.
Our benchmarks show a 10% to 30% impact on runtime performance with instrumented builds.
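For example (a sketch assuming Coco's csg++ compiler wrapper; adjust the flags to your toolchain), an optimized build can be instrumented directly:

```sh
# Optimized *and* instrumented -- no need to fall back to a debug build.
csg++ -O2 -o myapp main.cpp   # produces myapp plus the myapp.csmes database
./myapp                       # each run appends coverage data to myapp.csexe
```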
Can we measure test coverage for tests written as standalone Python scripts which call application functions using some API? How should we instrument the code, then?
Should we aim for a high coverage from unit tests alone? Is it common practice to include more “heavy-weight” integration tests in our coverage reporting?
Aiming for high coverage from unit tests is a good goal, but attention should be paid to the quality of the tests themselves. Unit testing, while fundamental, does not provide a complete picture of product quality, even if high coverage is achieved with unit tests. Unit tests can tell you that the code is working as development intended, but may miss customer requirements discovered during more “heavy-weight” integration testing, like exercising the GUI. A multi-pronged approach to testing, including a review of the quality of tests to prevent “automation blindness”, is recommended.
Does Coco support blackbox testing?
Yes. In teams where source code security prevents sharing the code among all developers and QA team members, Coco can create a ‘blackbox’ instrumentation database. The database can be shared safely with any member of the team (or even outsourced testers), because the application’s source cannot be viewed through it. QA engineers are still able to view the coverage of their tests and manage their executions, then pass their reports to development to be merged into one global coverage report.
Check out our blog for a how-to on blackbox testing with Coco.
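In outline, the round trip looks like the sketch below. The file names are placeholders, and only `cmcsexeimport` is shown as an explicit command (its flag names come from Coco's command-line suite but should be checked against your version); the step that derives the source-free database is covered in the linked blog.

```sh
# 1. Development builds the instrumented app, producing myapp and
#    myapp.csmes, and derives a source-free "blackbox" copy of the
#    .csmes with Coco's tooling.
# 2. QA runs the tests; each execution appends results to myapp.csexe.
# 3. QA imports the results against the blackbox database and inspects
#    coverage without ever seeing the source:
cmcsexeimport -m myapp_blackbox.csmes -e myapp.csexe -t "QA smoke tests"
# 4. Development merges QA's .csexe files into the full database and
#    produces the global report with the CoverageBrowser or cmreport.
```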
Tool Qualification & Safety-Critical Applications
Does froglogic support Tool Qualification for IEC 61508 and IEC 62304?
Yes. We offer Tool Qualification Kits for the following standards:
- ISO 26262: Road Vehicles – Functional Safety
- EN 50128: Railway Applications
- DO-178C: Airborne Systems
- IEC 61508: Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems
- IEC 62304: Medical Device Software – Software Life Cycle Processes
- ISO 13485: Medical Devices – Quality Management Systems
What is the difference between Squish Coco and gcov/LCOV? Are there metrics that Coco can retrieve that gcov can’t?
- gcov’s coverage level support is limited to statement and branch coverage, whereas Coco supports condition, MC/DC and MCC coverage, in addition to statement and branch coverage.
- gcov does not produce reliable coverage results for optimized builds.
- LCOV, gcov’s graphical frontend, creates HTML pages displaying the source code annotated with coverage information. Coco, on the other hand, can not only produce detailed HTML reports to aid in analysis; its frontend user interface program, the CoverageBrowser, also offers a fully-functional GUI for interactive code coverage analysis, annotated source code views, test execution status and timing, and much more.
- gcov works only on code compiled with GCC, whereas Coco has more extensive compiler support.
- Coco records a code coverage report from each unit test or application test separately. This allows the selection and comparison of individual tests.
- Coco supports Test Impact Analysis (also referred to as Patch Analysis), an optimization method that determines which tests exercise a specific code change (e.g., a last-minute patch). Using this analysis, you can run only those tests which exercise the change, improving testing efficiency when time is short and a risk assessment of the patch must be conducted.