Changeset e89faf3e in rtems for cpukit/score/include/rtems/score/heap.h
- Timestamp: 09/25/09 17:49:32
- Branches: 4.10, 4.11, 5, master
- Children: c42d1a4
- Parents: 0feb8085
- Files: 1 edited
cpukit/score/include/rtems/score/heap.h
--- r0feb8085
+++ re89faf3e
@@ -39,10 +39,8 @@
  *
  * The alignment routines could be made faster should we require only powers of
- * two to be supported both for page size, alignment and boundary arguments.
- * However, both workspace and malloc heaps are initialized with
- * CPU_HEAP_ALIGNMENT as page size, and while all the BSPs seem to use
- * CPU_ALIGNMENT (that is power of two) as CPU_HEAP_ALIGNMENT, for whatever
- * reason CPU_HEAP_ALIGNMENT is only required to be multiple of CPU_ALIGNMENT
- * and explicitly not required to be a power of two.
+ * two to be supported for page size, alignment and boundary arguments. The
+ * minimum alignment requirement for pages is currently CPU_ALIGNMENT and this
+ * value is only required to be multiple of two and explicitly not required to
+ * be a power of two.
  *
  * There are two kinds of blocks. One sort describes a free block from which
@@ -168,7 +166,7 @@
  * used, otherwise the previous block is free. A used previous block may
  * claim the @a prev_size field for allocation. This trick allows to
- * decrease the overhead in the used blocks by the size of the
- * @a prev_size field. As sizes are always multiples of four, the two least
- * significant bits are always zero. We use one of them to store the flag.
+ * decrease the overhead in the used blocks by the size of the @a prev_size
+ * field. As sizes are required to be multiples of two, the least
+ * significant bits would be always zero. We use this bit to store the flag.
  *
  * This field is always valid.