wiki:Developer/Projects/Open/GNUToolsTesting

Version 3 (modified by C Rempel, on 02/10/13 at 07:16:52)

GNUToolsTesting


Mentors: JoelSherrill

Students: Help Wanted

Status: JoelSherrill is slowly building test scripts. He has reported GCC results on multiple targets.

  • The CVS module gcc-testing has the support infrastructure.
  • Some targets are not tested on simulators yet.
  • Skyeye has some bugs preventing testing on Coldfire and Blackfin.
  • Regression-checking scripts appear to exist; we need to use them to reduce the burden of tracking results.
  • There are tests which spuriously fail due to test infrastructure issues:
    • the test harness not knowing that we add specific CPU options, which prevents "scan assembly" tests from generating the expected instructions
    • missing instructions in simulators
  • We eventually want to run tests on multiple BSPs within a single target architecture to cover more code generation possibilities.

Introduction: This project broadly consists of doing whatever is required to improve the current state of automated testing of GNU tools on RTEMS targets. We currently have automated the building of binutils, gcc/newlib, and gdb from source using either released versions or the development version checked out and updated from the source code control repositories of those projects. This step is largely complete with the test scripts in the CVS module gcc-testing.

Goal: Since there are approximately a dozen active RTEMS targets, the ultimate goal of this effort is to be able to test at least one BSP on every target. Some of the targets have simulators. For targets with executable tests, the project will have to be able to run those tests on the simulators, capturing their output and verifying that tests do not run too long. Scripts to aid this are largely complete and in the CVS module gcc-testing under sim-scripts.
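The run-capture-timeout loop described above can be sketched as follows. This is a minimal illustration, not the actual sim-scripts code: the simulator command name and the end-of-test banner text are assumptions made for the example.

```python
# Sketch: run one test executable under a target simulator, capture its
# output, and enforce a wall-clock limit so a hung test cannot stall the
# whole run. The marker string checked below is an assumption; the real
# wrappers live in the gcc-testing CVS module under sim-scripts.
import subprocess

def run_sim_test(simulator, executable, limit_seconds=60):
    """Run `executable` under `simulator`; return (status, output)."""
    try:
        proc = subprocess.run(
            [simulator, executable],
            capture_output=True, text=True, timeout=limit_seconds,
        )
    except subprocess.TimeoutExpired as exc:
        out = exc.stdout or ""
        if isinstance(out, bytes):          # timeout output may be raw bytes
            out = out.decode(errors="replace")
        return "timeout", out
    # Treat absence of an end-of-test banner as a failure (the exact
    # marker text varies per test suite; this one is illustrative).
    status = "pass" if "*** END OF TEST" in proc.stdout else "fail"
    return status, proc.stdout
```

A driver script would call this once per test executable and tally the three outcomes.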

Requirements:

  1. We need to be able to generate deviation reports which highlight changes between subsequent test runs. The volume of test data generated is very high, with GCC alone having over 60,000 tests.
  2. We believe a script to do this already exists, although it still needs to be integrated into the overall test system.
  3. The deviation report should be emailed to the RTEMS Tool Test Results mailing list (http://www.rtems.org/pipermail/rtems-tooltestresults).
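The deviation report described above amounts to diffing two DejaGnu .sum files and keeping only the tests whose result changed. A minimal sketch, assuming the standard .sum line format ("PASS: test-name"):

```python
# Sketch: compare two DejaGnu .sum result sets and report deviations.
# With over 60,000 GCC tests, only the changes are worth mailing out.
RESULTS = ("PASS", "FAIL", "XPASS", "XFAIL",
           "UNSUPPORTED", "UNRESOLVED", "UNTESTED")

def parse_sum(lines):
    """Map test name -> result for the result lines of a .sum file."""
    results = {}
    for line in lines:
        head, sep, name = line.partition(": ")
        if sep and head in RESULTS:
            results[name.strip()] = head
    return results

def deviations(old_lines, new_lines):
    """Yield (test, old_result, new_result) for every changed test."""
    old, new = parse_sum(old_lines), parse_sum(new_lines)
    for test in sorted(old.keys() | new.keys()):
        before = old.get(test, "ABSENT")
        after = new.get(test, "ABSENT")
        if before != after:
            yield test, before, after
```

The yielded tuples could be formatted into the body of the mail sent to the rtems-tooltestresults list.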

Resources: The RTEMS Project has access to the GCC Compile Farm for the purpose of testing GNU tools and providing automated reports. This is a collection of high-powered servers, and our intent is to do as much of the automated tools testing as possible on those machines. However, the scripting needed to drive this will be portable to other environments.

The RTEMS Project has a lab and test hardware hosted at OAR Corporation, which includes multiple target boards and the infrastructure to remotely access as well as power on/off each board. Once the simulator targets have been completely exercised, we will want to support running executable tests on real embedded hardware targets, with the highest priority going to those with no simulator.

There are a number of GCC tests on various architectures which report failures because we have to test on a specific target CPU/BSP that does not necessarily match the CPU model the test is written for. These are reported as failures when they should be "NA". Fixing this will improve overall test results without requiring knowledge of the inner workings of the compiler.
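The proper fix is to teach the affected tests to report themselves as unsupported on the tested configuration, but the bookkeeping can be illustrated as a post-processing pass over a .sum file. The pattern list below is purely hypothetical, chosen only to show the shape of the filter:

```python
# Sketch: reclassify failures of tests known to assume a CPU model the
# tested BSP does not provide. The pattern list is illustrative only;
# in practice each such test must be identified and fixed upstream.
import fnmatch

# Hypothetical patterns for tests needing instructions our target
# CPU/BSP configuration cannot generate.
CPU_MISMATCH_PATTERNS = ["gcc.target/arm/neon*"]

def reclassify(lines, patterns=CPU_MISMATCH_PATTERNS):
    """Rewrite FAIL lines matching `patterns` as UNSUPPORTED."""
    out = []
    for line in lines:
        head, sep, name = line.partition(": ")
        if (sep and head == "FAIL" and
                any(fnmatch.fnmatch(name.strip(), p) for p in patterns)):
            line = "UNSUPPORTED: " + name.strip()
        out.append(line)
    return out
```

Run over a result file, this turns the spurious failures into "NA"-style entries so deviation tracking is not polluted by them.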