
Version 1 (modified by JoelSherrill, on Aug 5, 2009 at 9:10:41 PM)

First content cut and pasted from Open Projects

RTEMS Coverage Analysis

This task consists of performing automated coverage testing using an open source simulator. The SkyEye project is currently adding coverage analysis capabilities per our specifications. When those are available, the person(s) undertaking this project could analyze the binary object-level coverage achieved by the RTEMS Test Suites on any target BSP supported by the SkyEye/RTEMS combination.

The analysis will identify a subset of RTEMS, such as the SuperCore and a single API implementation, and use that as the basis for analysis. RTEMS includes a large amount of source code, and the coverage analysis should focus only on improving the test coverage of that subset.

The resulting analysis is expected to provide a report on individual assembly instructions within RTEMS subsystems that are not currently exercised by existing tests. Each case has to be individually analyzed and addressed. Historically, we have identified multiple categories of uncovered code:

  • Needs a new test case
  • Unreachable in the current RTEMS configuration. For example, the SuperCore could have a feature exercised only by a POSIX API object; that feature could be disabled when POSIX is not configured.
  • Debug or sanity-checking code which can be placed inside an RTEMS_DEBUG conditional.
  • Unreachable paths generated by gcc for switch statements. Sometimes a switch must be restructured to avoid unreachable object code.
  • Critical sections which synchronize actions with ISRs. Most of these are very hard to hit and may require very specific support from the simulator environment. OAR has used tsim to exercise these paths, but this is not reproducible in a BSP-independent manner. Worse, there is often no external way to know that the case in question has been hit, and no way to do it in a one-shot test.
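As a minimal sketch of the switch-restructuring case above (the enum and function names here are hypothetical, not taken from the RTEMS source): when every enumeration value has its own case, gcc may still emit a default branch or jump-table range check that no valid input can reach, while an if/else chain over the same values leaves no such unreachable object code.

```c
typedef enum { STATE_A, STATE_B, STATE_C } state_t;  /* hypothetical enum */

/* Original form: with all enumeration values covered, the default
 * branch (or the compiler's range check for the jump table) can end
 * up as object code that no test can ever exercise. */
const char *state_name_switch(state_t s)
{
  switch (s) {
    case STATE_A: return "A";
    case STATE_B: return "B";
    case STATE_C: return "C";
    default:      return "?";  /* unreachable for valid inputs */
  }
}

/* Restructured form: an if/else chain with the final value handled
 * unconditionally leaves no unreachable branch in the object code. */
const char *state_name_chain(state_t s)
{
  if (s == STATE_A) return "A";
  if (s == STATE_B) return "B";
  return "C";
}
```

Both forms behave identically for valid inputs; the difference only shows up in the generated object code and therefore in the coverage report.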

There are multiple ways to measure progress on this task. In the past, we have used two metrics. The first is the reduction in the number of uncovered binary code ranges from the number identified initially. The second is the amount of untested binary object code as a percentage of the total code size under analysis. Together the metrics provide useful information: some uncovered ranges may be a single instruction, so eliminating such a case improves the first metric more than the second.
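The two metrics can be sketched as follows. The `range_t` representation of an uncovered range is an assumption for illustration only, not the format of any actual coverage report:

```c
/* Hypothetical representation of one uncovered range of object code,
 * covering bytes [start, end). */
typedef struct {
  unsigned start;
  unsigned end;
} range_t;

/* Metric 1 is simply the number of ranges in the report; this helper
 * supports metric 2 by summing the bytes those ranges span. */
static unsigned uncovered_bytes(const range_t *ranges, int count)
{
  unsigned total = 0;
  for (int i = 0; i < count; i++)
    total += ranges[i].end - ranges[i].start;
  return total;
}

/* Metric 2: untested object code as a percentage of the total code
 * size under analysis. */
static double uncovered_percent(
  const range_t *ranges, int count, unsigned code_size)
{
  return 100.0 * (double)uncovered_bytes(ranges, count)
               / (double)code_size;
}
```

A report with one four-byte range and one 32-byte range against 4 KiB of analyzed code would score 2 on the first metric but under one percent on the second, which is why the two numbers are most useful together.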
