Changeset e89faf3e in rtems


Timestamp:
Sep 25, 2009, 5:49:32 PM
Author:
Joel Sherrill <joel.sherrill@…>
Branches:
4.10, 4.11, master
Children:
c42d1a4
Parents:
0feb8085
Message:

2009-09-25 Sebastian Huber <Sebastian.Huber@…>

  • score/src/heap.c, score/include/rtems/score/heap.h: Reduced alignment requirement for CPU_ALIGNMENT from four to two.
Location:
cpukit
Files:
3 edited

  • cpukit/ChangeLog

    (r0feb8085 → re89faf3e)

    +2009-09-25	Sebastian Huber <Sebastian.Huber@embedded-brains.de>
    +
    +	* score/src/heap.c, score/include/rtems/score/heap.h: Reduced alignment
    +	requirement for CPU_ALIGNMENT from four to two.
    +
     2009-09-25	Joel Sherrill <joel.sherrill@OARcorp.com>
  • cpukit/score/include/rtems/score/heap.h

    (r0feb8085 → re89faf3e)

      *
      * The alignment routines could be made faster should we require only powers of
     -* two to be supported both for page size, alignment and boundary arguments.
     -* However, both workspace and malloc heaps are initialized with
     -* CPU_HEAP_ALIGNMENT as page size, and while all the BSPs seem to use
     -* CPU_ALIGNMENT (that is power of two) as CPU_HEAP_ALIGNMENT, for whatever
     -* reason CPU_HEAP_ALIGNMENT is only required to be multiple of CPU_ALIGNMENT
     -* and explicitly not required to be a power of two.
     +* two to be supported for page size, alignment and boundary arguments.  The
     +* minimum alignment requirement for pages is currently CPU_ALIGNMENT and this
     +* value is only required to be multiple of two and explicitly not required to
     +* be a power of two.
      *
      * There are two kinds of blocks.  One sort describes a free block from which
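The hunk above keeps the remark that the alignment routines could be made faster if only powers of two had to be supported. As an illustrative aside (not RTEMS code; the function names below are hypothetical), the difference is a modulo-based round-up versus a single add-and-mask:

```c
#include <assert.h>
#include <stdint.h>

/* General round-up to a multiple of `alignment`; works for any non-zero
 * alignment, including values that are not powers of two (e.g. 6 or 24). */
static uintptr_t align_up(uintptr_t value, uintptr_t alignment)
{
  uintptr_t remainder = value % alignment;
  return remainder == 0 ? value : value + (alignment - remainder);
}

/* Faster variant that is only correct when `alignment` is a power of two:
 * an add and a mask instead of a division. */
static uintptr_t align_up_pow2(uintptr_t value, uintptr_t alignment)
{
  return (value + alignment - 1) & ~(alignment - 1);
}
```

Because the heap only requires page sizes to be multiples of CPU_ALIGNMENT, it must use the general form.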
     
      * used, otherwise the previous block is free.  A used previous block may
      * claim the @a prev_size field for allocation.  This trick allows to
     -* decrease the overhead in the used blocks by the size of the
     -* @a prev_size field.  As sizes are always multiples of four, the two least
     -* significant bits are always zero. We use one of them to store the flag.
     +* decrease the overhead in the used blocks by the size of the @a prev_size
     +* field.  As sizes are required to be multiples of two, the least
     +* significant bits would be always zero. We use this bit to store the flag.
      *
      * This field is always valid.
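The comment changed above relies on a classic trick: since block sizes are always even, bit 0 of the size field is spare and can carry the "previous block used" flag. A minimal sketch of that packing (illustrative only; these are not the actual RTEMS identifiers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit 0 of the size field carries the flag; sizes being multiples of two
 * guarantees it is otherwise always zero. */
#define PREV_BLOCK_USED_FLAG ((uintptr_t) 1)

static uintptr_t pack_size(uintptr_t size, bool prev_used)
{
  assert(size % 2 == 0);  /* even size: bit 0 is free for the flag */
  return size | (prev_used ? PREV_BLOCK_USED_FLAG : 0);
}

static uintptr_t block_size(uintptr_t field)
{
  return field & ~PREV_BLOCK_USED_FLAG;  /* mask the flag away */
}

static bool prev_block_used(uintptr_t field)
{
  return (field & PREV_BLOCK_USED_FLAG) != 0;
}
```

Note that with sizes only guaranteed to be multiples of two (rather than four), exactly one spare bit remains, which is why the comment now speaks of a single flag bit.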
  • cpukit/score/src/heap.c

    (r0feb8085 → re89faf3e)

     #include <rtems/score/heap.h>

    -#if CPU_ALIGNMENT == 0 || CPU_ALIGNMENT % 4 != 0
    +#if CPU_ALIGNMENT == 0 || CPU_ALIGNMENT % 2 != 0
       #error "invalid CPU_ALIGNMENT value"
     #endif
     
       stats->instance = instance++;

    -  _HAssert( _Heap_Is_aligned( CPU_ALIGNMENT, 4 ) );
       _HAssert( _Heap_Is_aligned( heap->page_size, CPU_ALIGNMENT ) );
       _HAssert( _Heap_Is_aligned( heap->min_block_size, page_size ) );
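The deleted runtime assertion checked that CPU_ALIGNMENT was itself a multiple of four; under the relaxed requirement only the compile-time evenness check above remains. The surviving `_Heap_Is_aligned` assertions amount to a modulo test, since alignments here are only required to be multiples of CPU_ALIGNMENT, not powers of two. A sketch of that predicate (an assumption based on the comment in heap.h, not a copy of the RTEMS source):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* _Heap_Is_aligned()-style predicate: true when `value` is a multiple of
 * `alignment`.  A modulo is used rather than a bit mask because heap
 * alignments need not be powers of two. */
static bool heap_is_aligned(uintptr_t value, uintptr_t alignment)
{
  return value % alignment == 0;
}
```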