Changeset e89faf3e in rtems
- Timestamp: 09/25/09 17:49:32
- Branches: 4.10, 4.11, 5, master
- Children: c42d1a4
- Parents: 0feb8085
- Location: cpukit
- Files: 3 edited
cpukit/ChangeLog (r0feb8085 → re89faf3e)

+2009-09-25	Sebastian Huber <Sebastian.Huber@embedded-brains.de>
+
+	* score/src/heap.c, score/include/rtems/score/heap.h: Reduced alignment
+	requirement for CPU_ALIGNMENT from four to two.
+
 2009-09-25	Joel Sherrill <joel.sherrill@OARcorp.com>
cpukit/score/include/rtems/score/heap.h (r0feb8085 → re89faf3e)

  *
  * The alignment routines could be made faster should we require only powers of
- * two to be supported both for page size, alignment and boundary arguments.
- * However, both workspace and malloc heaps are initialized with
- * CPU_HEAP_ALIGNMENT as page size, and while all the BSPs seem to use
- * CPU_ALIGNMENT (that is a power of two) as CPU_HEAP_ALIGNMENT, for whatever
- * reason CPU_HEAP_ALIGNMENT is only required to be a multiple of CPU_ALIGNMENT
- * and explicitly not required to be a power of two.
+ * two to be supported for page size, alignment and boundary arguments. The
+ * minimum alignment requirement for pages is currently CPU_ALIGNMENT and this
+ * value is only required to be a multiple of two and explicitly not required
+ * to be a power of two.
  *
  * There are two kinds of blocks. One sort describes a free block from which
…
  * used, otherwise the previous block is free. A used previous block may
  * claim the @a prev_size field for allocation. This trick allows to
- * decrease the overhead in the used blocks by the size of the
- * @a prev_size field. As sizes are always multiples of four, the two least
- * significant bits are always zero. We use one of them to store the flag.
+ * decrease the overhead in the used blocks by the size of the @a prev_size
+ * field. As sizes are required to be multiples of two, the least
+ * significant bit is always zero. We use this bit to store the flag.
  *
  * This field is always valid.
cpukit/score/src/heap.c (r0feb8085 → re89faf3e)

 #include <rtems/score/heap.h>

-#if CPU_ALIGNMENT == 0 || CPU_ALIGNMENT % 4 != 0
+#if CPU_ALIGNMENT == 0 || CPU_ALIGNMENT % 2 != 0
 #error "invalid CPU_ALIGNMENT value"
 #endif
…
   stats->instance = instance++;

-  _HAssert( _Heap_Is_aligned( CPU_ALIGNMENT, 4 ) );
   _HAssert( _Heap_Is_aligned( heap->page_size, CPU_ALIGNMENT ) );
   _HAssert( _Heap_Is_aligned( heap->min_block_size, page_size ) );