Changes between Version 2 and Version 3 of GCI/Documentation/CoverageAnalysis/Coverage


Timestamp: Dec 8, 2018, 3:02:05 PM
Author: shashvat jain

[[TOC(GCI/Documentation/CoverageAnalysis/Coverage, depth=2)]]

= Coverage Analysis Theory =

[[TOC(Developer/Coverage/Theory, depth=2)]]

The subject of Code Coverage Analysis is broad and has been written about many times over.  This background material is not intended to summarise or rehash what can be read elsewhere.  Instead, the focus here will be on the aspects of Code Coverage Analysis as they pertain to the [wiki:TBR/UserManual/RTEMS_Coverage_Analysis RTEMS Coverage Analysis] effort.

The ultimate goal of Code Coverage Analysis is to ensure that a test suite adequately tests a particular body of code.  In order to achieve this goal, several different coverage criteria may have to be examined.  Let's consider the following criteria:

 *  '''Statement Coverage''' - Has each line of the source code been executed?
 *  '''Decision Coverage''' (also known as Branch Coverage) - Has each control structure (such as an if statement) evaluated to both true and false?
 *  '''Condition Coverage''' - Has each boolean sub-expression evaluated to both true and false (this does not necessarily imply Decision Coverage)?
 *  '''Object Coverage''' - Has each line of generated assembly been executed?

== Statement Coverage ==

Statement Coverage requires that each line of source code be executed.  This is often considered the simplest criterion.  The problem is that it only identifies the lines that were executed and does not consider the logic flow of the code.  It can be useful for identifying "chunks" of code (i.e. new functionality) that are not covered by the test suite, but not much else.

== Decision Coverage ==

Decision Coverage requires that each control structure evaluate to both TRUE and FALSE.  This is a good criterion because it generally ensures that both the TRUE and FALSE paths of an expression are covered.  However, short-circuit operators can prevent some portions of a complex expression from being evaluated.

== Condition Coverage ==

Condition Coverage requires that each boolean sub-expression evaluate to both TRUE and FALSE.  This criterion goes a little further than Decision Coverage by ensuring that the component parts of a compound expression each evaluate to TRUE and FALSE.  Note, however, that Condition Coverage by itself does not necessarily imply Decision Coverage.  For this reason, it is best to apply Decision Coverage and Condition Coverage together.

== Object Coverage ==

Object Coverage requires that each line of generated assembly be executed.  This can be a good general criterion because it covers most of what the other criteria check.

= Criteria Relationships =

[[Image(CoverageCategories.png)]]

Each of these criteria can be used independently to analyze the code in question.  Applying any one criterion will likely improve the test suite to some degree, albeit at the cost of increasing its complexity.  Examining the criteria collectively shows that there are clear relationships between them, as shown in the picture.  The completeness and complexity of the test suite increase as it satisfies first Statement Coverage, then Decision Coverage, and finally !Condition/Decision Coverage.  If the test suite satisfies Statement Coverage, it partially satisfies Decision Coverage and !Condition/Decision Coverage.  If the test suite satisfies Decision Coverage, it completely satisfies Statement Coverage and partially satisfies !Condition/Decision Coverage.  Note that Object Coverage satisfies part of each of the other criteria.  There is also a complexity relationship: Statement Coverage is the least complex to satisfy and !Condition/Decision Coverage is the most complex.

= An Example =

To illustrate what is covered by each of the different criteria, consider the following example showing the source code for a simple if statement along with its generated pseudo-code instructions.

|| '''Block''' || '''Source Code''' || '''Block''' || '''Object Pseudo-code''' ||
|| A || if (x OR y) || A1 || cmp x, 0 ; branch if TRUE to do something ||
|| || || A2 || cmp y, 0 ; branch if FALSE around do something ||
|| B || do something || B || do something instructions ||

== Statement Coverage ==

A single test case that allows the if statement to evaluate to TRUE will execute blocks A and B.  This will achieve 100% Statement Coverage.

== Decision Coverage ==

A minimum of two test cases is required to achieve 100% Decision Coverage.  One case must force the if statement to evaluate to TRUE and the other case must force the if statement to evaluate to FALSE.  A test case that forces a TRUE outcome will execute either blocks A1 and B or blocks A1, A2 and B.  A test case that forces a FALSE outcome will execute blocks A1 and A2.

== !Condition/Decision Coverage ==

A minimum of two test cases is required to achieve 100% !Condition/Decision Coverage.  In the first case, x and y must be TRUE.  In the second case, x and y must be FALSE.  The test case that forces a TRUE outcome will execute blocks A1 and B.  The test case that forces a FALSE outcome will execute blocks A1 and A2.

== Object Coverage ==

One carefully chosen test case where x is FALSE and y is TRUE will achieve 100% Object Coverage.  The test case will execute blocks A1, A2 and B.

{{{#!comment
= Improve Coverage Analysis Toolset =

'''Mentors:''' Chris Johns, Joel Sherrill, C.P. O'Donnell

'''Students:''' Vijay Kumar Banerjee

'''Progress:''' The Coverage Analysis is running and generating coverage reports in html and txt format

'''Blockers:''' Generating .gcno notes files by changing gcc flags

'''Development Blog:''' https://thelunatic.github.io/rtems_gsoc18
}}}

= Introduction =

This project improves the tools used to perform coverage analysis and generate reports.  It converts all the configuration files to the INI format and gets the coverage analysis running properly.  The coverage tools are then integrated with the RTEMS Tester.  In addition, gcov reports will be generated and the report generation of the covoar tool will be reworked to produce output in a data-centric format.

= Project =

== Prerequisites ==
* Covoar and rtemstoolkit (C++)
* RTEMS Tester framework (Python)
* Understanding of the INI format for configuration files
* Reports (HTML and text currently, add XML)

== Running covoar to Generate Coverage Reports ==

covoar must be run from the RTEMS kernel build directory.

[[span(style=color: #FF0000, NOTE : )]] The .cov trace files are needed to generate coverage reports.

{{{
covoar -S /home/lunatic/development/rtems/test/rtems-tools/tester/rtems/testing/coverage/leon3-qemu-symbols.ini \
-O coverage/score -E/home/lunatic/development/rtems/test/rtems-tools/tester/rtems/testing/coverage/Explanations.txt \
-p RTEMS-5 sparc-rtems5/c/leon3/testsuites/samples/hello.exe
}}}

=== `covoar` Usage ===
{{{
Usage: covoar [-v] -T TARGET -f FORMAT [-E EXPLANATIONS] -e EXE_EXTENSION -c COVERAGEFILE_EXTENSION EXECUTABLE1 ... EXECUTABLE2

  -v                        - verbose at initialization
  -T TARGET                 - target name
  -f FORMAT                 - coverage file format (RTEMS, QEMU, TSIM or Skyeye)
  -E EXPLANATIONS           - name of file with explanations
  -s SYMBOL_SET_FILE        - path to the INI format symbol sets
  -1 EXECUTABLE             - name of executable to get symbols from
  -e EXE_EXTENSION          - extension of the executables to analyze
  -c COVERAGEFILE_EXTENSION - extension of the coverage files to analyze
  -g GCNOS_LIST             - name of file with list of *.gcno files
  -p PROJECT_NAME           - name of the project
  -C ConfigurationFileName  - name of configuration file
  -O Output_Directory       - name of output directory (default=.)
  -d debug                  - disable cleaning of tempfile
}}}

== Running the RTEMS Tester for Coverage Analysis ==

When run with the `--coverage` option, the RTEMS Tester generates an HTML coverage analysis report (report.html).

{{{
$HOME/development/rtems/test/rtems-tools/tester/rtems-test \
--rtems-tools=$HOME/development/rtems/5 --log=coverage_analysis.log \
--no-clean --coverage=score --rtems-bsp=leon3-qemu-cov \
/home/lunatic/development/rtems/kernel/leon3/sparc-rtems5/c/leon3/testsuites/samples/hello.exe
}}}

[[span(style=color: #FF0000, NOTE : )]] The `--no-clean` option tells the script not to delete the .cov trace files generated while running the coverage analysis.  These trace files can be used to run covoar directly.

When the `--coverage` option is given a specific symbol set name, coverage analysis runs only for the named sets.  In the example above, coverage analysis will run for `score` only.

To run coverage for all the sets, pass no argument to the `--coverage` option; by default it runs coverage analysis for all the sets.

Please visit [https://thelunatic.github.io/rtems_gsoc18/blog/coverage-report/ my development blog] to see examples of coverage reports.

== Resources ==

TBA

= References =
 *  TBD