wiki:Developer/Projects/Open/TestScreenValidation

Test Screen Validation

Mentors: Past, Present, and Potential Mentors

Students: Past, Present, and Potential Students

Status: Current status of project. For starting, it should be: Uninitiated.

Introduction: Develop a tool to validate the correctness of test outputs. This requires expertise in a unit testing framework, probably DejaGnu, and will probably need to be tied into rtems-testing and rtems-tools.

This is not an exhaustive list of requirements for such a tool.

  1. Does the test output a screen?
  2. Can the output automatically be written to the terminal or (better yet) a text file?
  3. A standard way to write a verification script that specifies the form of acceptable results, including:
    1. a parser that can identify the correct form of output

(for example: The time taken to complete the task is 40.)

The form of the output might be specified by something like

grep "The time taken to complete the task is [0-9][0-9]*" test.txt
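The same form check could be sketched in Python (hypothetical names; the pattern mirrors the grep expression above and assumes the screen output has been captured to text):

```python
import re

# Hypothetical sketch of the "form" check; the pattern mirrors the
# grep expression above.
FORM = re.compile(r"The time taken to complete the task is [0-9]+\.")

def output_has_expected_form(line):
    """Return True if the line matches the expected output form."""
    return FORM.fullmatch(line.strip()) is not None
```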

    2. a way to parse out any run-specific result

using the above example: The time taken to complete the task is 40.

The way to parse out the specific result might be specified by something like

sed 's/.*is //' test.txt | sed 's/\.$//'
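The same extraction could be sketched in Python (a hypothetical helper, not part of any existing RTEMS tool):

```python
import re

# Hypothetical sketch: pull the run-specific value out of a captured
# line, mirroring the sed pipeline above.
def parse_result(line):
    """Return the numeric result as an int, or None if the line does not match."""
    m = re.search(r"is ([0-9]+)\.$", line.strip())
    return int(m.group(1)) if m else None

print(parse_result("The time taken to complete the task is 40."))  # prints 40
```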

    3. a way to specify acceptable results (an acceptable range or a list of acceptable outputs)

using the above example: The time taken to complete the task is 40.

The acceptable range would be the non-negative numbers (the task might be completed before the clock is read, but there is no such thing as negative time).
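Such an acceptance criterion could be sketched as either a range check or a list of allowed outputs (both helpers and the example list are hypothetical):

```python
# Hypothetical sketch: acceptable results as a range check or as an
# explicit list of allowed outputs.
def in_acceptable_range(value):
    """Accept any non-negative number, per the example above."""
    return value >= 0

ACCEPTABLE_OUTPUTS = ["done", "ok"]  # hypothetical list form

def is_acceptable_output(line):
    """Accept only outputs that appear in the allowed list."""
    return line.strip() in ACCEPTABLE_OUTPUTS
```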

  4. Some tests will not have run-specific output (think "hello world"). The test verification should be able to handle that case as well.
  5. The person writing the verification script should only have to enter: (1) whether the test outputs to the screen (if not, the test should not be verified with the script); (2) the parsing logic; and (3) the acceptable results, if applicable.
  6. The tool should be written in an OS-independent language (such as Python).
  7. For this tool to stay up-to-date, a hook should be added to check-submission to verify that, before a test is accepted into the git repository, it has an entry in the RTEMS Test Screen Validation (even if the entry indicates the test does not have a screen).
  8. Alternatively, it might be preferable to change each test to return an exit code instead. In that case, a hook should be added to check-submission to verify that, before a test is accepted into the git repository, its exit code has the correct form.
  9. The case for either method (test screen validation or exit codes) would need to be made and accepted by the RTEMS development team.
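Putting the pieces together, a per-test verification script might reduce to a pattern plus an acceptance check. A minimal sketch (all names hypothetical, assuming the screen output has been captured to text):

```python
import re

def verify(text,
           pattern=r"The time taken to complete the task is ([0-9]+)\.",
           accept=lambda v: int(v) >= 0):
    """Verify captured output: match the form, parse the run-specific
    value (if any), and apply the acceptance check.

    Hypothetical sketch; pattern and accept would come from the
    per-test verification script.
    """
    for line in text.splitlines():
        m = re.fullmatch(pattern, line.strip())
        if m:
            # Tests with no run-specific output have no capture group.
            return accept(m.group(1)) if m.groups() else True
    return False

print(verify("The time taken to complete the task is 40."))  # prints True
```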

Goal: Concise statement of the overall goal of the project. Refine this initial statement to include: project deliverables (code, docs, testing), required/suggested methodology, standards of quality, possible goal extensions beyond the main objective.

Requirements: List the requirements and level of expertise you estimate the developer tackling this project will need: required level of programming language(s), specific areas of RTEMS or tools, level of familiarity with RTEMS, cross-development, GNU/Linux, etc., development/documentation/testing tools, mathematical/algorithmic background, and other desirable skills.

Resources: Current RTEMS developers, papers, etc that may help you in this project.

Acknowledgements

  • who helped and did work

Miscellaneous Sections

As the project progresses, you will need to add build instructions, etc and this page will evolve from a project description into a HOWTO.

References

  • TBD

Other sections: If you have more to say about the project that doesn't fit in the proposed sections of this template, feel free to add other sections at will.

Last modified on Feb 14, 2015 at 7:45:50 PM