Changeset 0d15414e in rtems for cpukit/libblock
- Timestamp: 08/05/09 00:00:54
- Branches: 4.10, 4.11, 5, master
- Children: 6605d4d
- Parents: f14a21df
- Location: cpukit/libblock
- Files: 5 edited
cpukit/libblock/include/rtems/bdbuf.h
rf14a21df r0d15414e 11 11 * Author: Victor V. Vengerov <vvv@oktet.ru> 12 12 * 13 * Copyright (C) 2008 Chris Johns <chrisj@rtems.org>13 * Copyright (C) 2008,2009 Chris Johns <chrisj@rtems.org> 14 14 * Rewritten to remove score mutex access. Fixes many performance 15 15 * issues. 16 Change to support demand driven variable buffer sizes. 16 17 * 17 18 * @(#) bdbuf.h,v 1.9 2005/02/02 00:06:18 joel Exp … … 45 46 * the drivers and fast cache look up using an AVL tree. 46 47 * 47 * The buffers are held in pools based on size. Each pool has buffers and the 48 * buffers follow this state machine: 48 * The block size used by a file system can be set at runtime and must be a 49 * multiple of the disk device block size. The disk device's physical block 50 * size is called the media block size. The file system can set the block size 51 * it uses to a larger multiple of the media block size. The driver must be 52 * able to handle buffers sizes larger than one media block. 53 * 54 * The user configures the amount of memory to be used as buffers in the cache, 55 * and the minimum and maximum buffer size. The cache will allocate additional 56 * memory for the buffer descriptors and groups. There are enough buffer 57 * descriptors allocated so all the buffer memory can be used as minimum sized 58 * buffers. 59 * 60 * The cache is a single pool of buffers. The buffer memory is divided into 61 * groups where the size of buffer memory allocated to a group is the maximum 62 * buffer size. A group's memory can be divided down into small buffer sizes 63 * that are a multiple of 2 of the minimum buffer size. A group is the minumum 64 * allocation unit for buffers of a specific size. If a buffer of maximum size 65 * is request the group will have a single buffer. If a buffer of minium size 66 * is requested the group is divided into minimum sized buffers and the 67 * remaining buffers are held ready for use. A group keeps track of which 68 * buffers are with a file system or driver and groups who have buffer in use 69 * cannot be realloced. Groups with no buffers in use can be taken and 70 * realloced to a new size. This is how buffers of different sizes move around 71 * the cache. 72 73 * The buffers are held in various lists in the cache. All buffers follow this 74 * state machine: 49 75 * 50 76 * @dot … … 83 109 * buffer in the transfer state. The transfer state means being read or 84 110 * written. If the file system has modifed the block and releases it as 85 * modified it placed on the pool's modified list and a hold timer111 * modified it placed on the cache's modified list and a hold timer 86 112 * initialised. The buffer is held for the hold time before being written to 87 113 * disk. Buffers are held for a configurable period of time on the modified 88 114 * list as a write sets the state to transfer and this locks the buffer out 89 * from the file system until the write complete . Buffers are often repeatable90 * a ccessed and modified in a series of small updates so if sent to the disk91 * when released as modified the user would have to block waiting until it had92 * beenwritten. This would be a performance problem.115 * from the file system until the write completes. Buffers are often accessed 116 * and modified in a series of small updates so if sent to the disk when 117 * released as modified the user would have to block waiting until it had been 118 * written. This would be a performance problem. 93 119 * 94 120 * The code performs mulitple block reads and writes. 
Multiple block reads or … … 104 130 * the file system. 105 131 * 106 * The poolhas the following lists of buffers:132 * The cache has the following lists of buffers: 107 133 * - @c ready: Empty buffers created when the pool is initialised. 108 134 * - @c modified: Buffers waiting to be written to disk. 109 135 * - @c sync: Buffers to be synced to disk. 110 136 * - @c lru: Accessed buffers released in least recently used order. 137 * 138 * The cache scans the ready list then the LRU list for a suitable buffer in 139 * this order. A suitable buffer is one that matches the same allocation size 140 * as the device the buffer is for. The a buffer's group has no buffers in use 141 * with the file system or driver the group is reallocated. This means the 142 * buffers in the group are invalidated, resized and placed on the ready queue. 143 * There is a performance issue with this design. The reallocation of a group 144 * may forced recently accessed buffers out of the cache when they should 145 * not. The design should be change to have groups on a LRU list if they have 146 * no buffers in use. 111 147 * 112 148 * @{ … … 129 165 130 166 /** 167 * Forward reference to the block. 168 */ 169 struct rtems_bdbuf_group; 170 typedef struct rtems_bdbuf_group rtems_bdbuf_group; 171 172 /** 131 173 * To manage buffers we using buffer descriptors (BD). A BD holds a buffer plus 132 174 * a range of other information related to managing the buffer in the cache. To 133 * speed-up buffer lookup descriptors are organized in AVL-Tree. 175 * speed-up buffer lookup descriptors are organized in AVL-Tree. The fields 134 176 * 'dev' and 'block' are search keys. 135 177 */ 136 178 typedef struct rtems_bdbuf_buffer 137 179 { 138 rtems_chain_node link; /* Link inthe BD onto a number of lists. */180 rtems_chain_node link; /**< Link the BD onto a number of lists. */ 139 181 140 182 struct rtems_bdbuf_avl_node … … 155 197 volatile rtems_bdbuf_buf_state state; /**< State of the buffer. */ 156 198 157 volatile uint32_t waiters; /**< The number of threads waiting on this 158 * buffer. */ 159 rtems_bdpool_id pool; /**< Identifier of buffer pool to which this buffer 160 belongs */ 161 162 volatile uint32_t hold_timer; /**< Timer to indicate how long a buffer 163 * has been held in the cache modified. */ 199 volatile uint32_t waiters; /**< The number of threads waiting on this 200 * buffer. */ 201 rtems_bdbuf_group* group; /**< Pointer to the group of BDs this BD is 202 * part of. */ 203 volatile uint32_t hold_timer; /**< Timer to indicate how long a buffer 204 * has been held in the cache modified. */ 164 205 } rtems_bdbuf_buffer; 165 206 166 207 /** 167 * The groups of the blocks with the same size are collected in a pool. Note 168 * that a several of the buffer's groups with the same size can exists. 169 */ 170 typedef struct rtems_bdbuf_pool 208 * A group is a continuous block of buffer descriptors. A group covers the 209 * maximum configured buffer size and is the allocation size for the buffers to 210 * a specific buffer size. If you allocate a buffer to be a specific size, all 211 * buffers in the group, if there are more than 1 will also be that size. The 212 * number of buffers in a group is a multiple of 2, ie 1, 2, 4, 8, etc. 213 */ 214 struct rtems_bdbuf_group 171 215 { 172 uint32_t blksize; /**< The size of the blocks (in bytes) */ 173 uint32_t nblks; /**< Number of blocks in this pool */ 174 175 uint32_t flags; /**< Configuration flags */ 176 177 rtems_id lock; /**< The pool lock. Lock this data and 178 * all BDs. 
*/ 179 rtems_id sync_lock; /**< Sync calls lock writes. */ 180 bool sync_active; /**< True if a sync is active. */ 181 rtems_id sync_requester; /**< The sync requester. */ 182 dev_t sync_device; /**< The device to sync */ 183 184 rtems_bdbuf_buffer* tree; /**< Buffer descriptor lookup AVL tree 185 * root */ 186 rtems_chain_control ready; /**< Free buffers list (or read-ahead) */ 187 rtems_chain_control lru; /**< Last recently used list */ 188 rtems_chain_control modified; /**< Modified buffers list */ 189 rtems_chain_control sync; /**< Buffers to sync list */ 190 191 rtems_id access; /**< Obtain if waiting for a buffer in the 192 * ACCESS state. */ 193 volatile uint32_t access_waiters; /**< Count of access blockers. */ 194 rtems_id transfer; /**< Obtain if waiting for a buffer in the 195 * TRANSFER state. */ 196 volatile uint32_t transfer_waiters; /**< Count of transfer blockers. */ 197 rtems_id waiting; /**< Obtain if waiting for a buffer and the 198 * none are available. */ 199 volatile uint32_t wait_waiters; /**< Count of waiting blockers. */ 200 201 rtems_bdbuf_buffer* bds; /**< Pointer to table of buffer descriptors 202 * allocated for this buffer pool. */ 203 void* buffers; /**< The buffer's memory. */ 204 } rtems_bdbuf_pool; 205 206 /** 207 * Configuration structure describes block configuration (size, amount, memory 208 * location) for buffering layer pool. 209 */ 210 typedef struct rtems_bdbuf_pool_config { 211 int size; /**< Size of block */ 212 int num; /**< Number of blocks of appropriate size */ 213 unsigned char* mem_area; /**< Pointer to the blocks location or NULL, in this 214 * case memory for blocks will be allocated by 215 * Buffering Layer with the help of RTEMS partition 216 * manager */ 217 } rtems_bdbuf_pool_config; 218 219 /** 220 * External reference to the pool configuration table describing each pool in 221 * the system. 222 * 223 * The configuration table is provided by the application. 224 */ 225 extern rtems_bdbuf_pool_config rtems_bdbuf_pool_configuration[]; 226 227 /** 228 * External reference the size of the pool configuration table 229 * @ref rtems_bdbuf_pool_configuration. 230 * 231 * The configuration table size is provided by the application. 232 */ 233 extern size_t rtems_bdbuf_pool_configuration_size; 216 rtems_chain_node link; /**< Link the groups on a LRU list if they 217 * have no buffers in use. */ 218 size_t bds_per_group; /**< The number of BD allocated to this 219 * group. This value must be a multiple of 220 * 2. */ 221 uint32_t users; /**< How many users the block has. */ 222 rtems_bdbuf_buffer* bdbuf; /**< First BD this block covers. */ 223 }; 234 224 235 225 /** … … 238 228 */ 239 229 typedef struct rtems_bdbuf_config { 240 uint32_t max_read_ahead_blocks; /**< Number of blocks to read ahead. */ 241 uint32_t max_write_blocks; /**< Number of blocks to write at once. */ 242 rtems_task_priority swapout_priority; /**< Priority of the swap out task. */ 243 uint32_t swapout_period; /**< Period swapout checks buf timers. */ 244 uint32_t swap_block_hold; /**< Period a buffer is held. */ 230 uint32_t max_read_ahead_blocks; /**< Number of blocks to read 231 * ahead. */ 232 uint32_t max_write_blocks; /**< Number of blocks to write 233 * at once. */ 234 rtems_task_priority swapout_priority; /**< Priority of the swap out 235 * task. */ 236 uint32_t swapout_period; /**< Period swapout checks buf 237 * timers. */ 238 uint32_t swap_block_hold; /**< Period a buffer is held. 
*/ 239 uint32_t swapout_workers; /**< The number of worker 240 * threads for the swapout 241 * task. */ 242 rtems_task_priority swapout_worker_priority; /**< Priority of the swap out 243 * task. */ 244 size_t size; /**< Size of memory in the 245 * cache */ 246 uint32_t buffer_min; /**< Minimum buffer size. */ 247 uint32_t buffer_max; /**< Maximum buffer size 248 * supported. It is also the 249 * allocation size. */ 245 250 } rtems_bdbuf_config; 246 251 … … 250 255 * The configuration is provided by the application. 251 256 */ 252 extern rtems_bdbuf_config rtems_bdbuf_configuration;257 extern const rtems_bdbuf_config rtems_bdbuf_configuration; 253 258 254 259 /** … … 279 284 280 285 /** 286 * Default swap-out worker tasks. Currently disabled. 287 */ 288 #define RTEMS_BDBUF_SWAPOUT_WORKER_TASKS_DEFAULT 0 289 290 /** 291 * Default swap-out worker task priority. The same as the swapout task. 292 */ 293 #define RTEMS_BDBUF_SWAPOUT_WORKER_TASK_PRIORITY_DEFAULT \ 294 RTEMS_BDBUF_SWAPOUT_TASK_PRIORITY_DEFAULT 295 296 /** 297 * Default size of memory allocated to the cache. 298 */ 299 #define RTEMS_BDBUF_CACHE_MEMORY_SIZE_DEFAULT (64 * 512) 300 301 /** 302 * Default minimum size of buffers. 303 */ 304 #define RTEMS_BDBUF_BUFFER_MIN_SIZE_DEFAULT (512) 305 306 /** 307 * Default maximum size of buffers. 308 */ 309 #define RTEMS_BDBUF_BUFFER_MAX_SIZE_DEFAULT (4096) 310 311 /** 281 312 * Prepare buffering layer to work - initialize buffer descritors and (if it is 282 * neccessary) buffers. Buffers will be allocated accoriding to the 283 * configuration table, each entry describes the size of block and the size of 284 * the pool. After initialization all blocks is placed into the ready state. 285 * lists. 313 * neccessary) buffers. After initialization all blocks is placed into the 314 * ready state. 286 315 * 287 316 * @return RTEMS status code (RTEMS_SUCCESSFUL if operation completed … … 396 425 /** 397 426 * Synchronize all modified buffers for this device with the disk and wait 398 * until the transfers have completed. The sync mutex for the poolis locked427 * until the transfers have completed. The sync mutex for the cache is locked 399 428 * stopping the addition of any further modifed buffers. It is only the 400 429 * currently modified buffers that are written. 401 430 * 402 * @note Nesting calls to sync multiple devices attached to a single pool will 403 * be handled sequentially. A nested call will be blocked until the first sync 404 * request has complete. This is only true for device using the same pool. 431 * @note Nesting calls to sync multiple devices will be handled sequentially. A 432 * nested call will be blocked until the first sync request has complete. 405 433 * 406 434 * @param dev Block device number … … 411 439 rtems_status_code 412 440 rtems_bdbuf_syncdev (dev_t dev); 413 414 /**415 * Find first appropriate buffer pool. 
This primitive returns the index of416 * first buffer pool which block size is greater than or equal to specified417 * size.418 *419 * @param block_size Requested block size420 * @param pool The pool to use for the requested pool size.421 *422 * @return RTEMS status code (RTEMS_SUCCESSFUL if operation completed423 * successfully or error code if error is occured)424 * @retval RTEMS_INVALID_SIZE The specified block size is invalid (not a power425 * of 2)426 * @retval RTEMS_NOT_DEFINED The buffer pool for this or greater block size427 * is not configured.428 */429 rtems_status_code430 rtems_bdbuf_find_pool (uint32_t block_size, rtems_bdpool_id *pool);431 432 /**433 * Obtain characteristics of buffer pool with specified number.434 *435 * @param pool Buffer pool number436 * @param block_size Block size for which buffer pool is configured returned437 * there438 * @param blocks Number of buffers in buffer pool.439 *440 * RETURNS:441 * @return RTEMS status code (RTEMS_SUCCESSFUL if operation completed442 * successfully or error code if error is occured)443 * @retval RTEMS_INVALID_SIZE The appropriate buffer pool is not configured.444 *445 * @note Buffer pools enumerated continuously starting from 0.446 */447 rtems_status_code rtems_bdbuf_get_pool_info(448 rtems_bdpool_id pool,449 uint32_t *block_size,450 uint32_t *blocks451 );452 441 453 442 /** @} */ -
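The per-size pool table (rtems_bdbuf_pool_configuration) is gone: the application now supplies a single rtems_bdbuf_config describing the whole cache. A minimal sketch of such a definition, assuming the defaults declared above for the cache geometry; the plain numbers are illustrative only, not taken from this changeset:

#include <rtems/bdbuf.h>

/* Application-supplied cache configuration (sketch). The geometry values
 * use the defaults from bdbuf.h; the read-ahead, write and timing numbers
 * are illustrative only. */
const rtems_bdbuf_config rtems_bdbuf_configuration = {
  .max_read_ahead_blocks   = 32,   /* illustrative */
  .max_write_blocks        = 16,   /* illustrative */
  .swapout_priority        = RTEMS_BDBUF_SWAPOUT_TASK_PRIORITY_DEFAULT,
  .swapout_period          = 250,  /* illustrative */
  .swap_block_hold         = 1000, /* illustrative */
  .swapout_workers         = RTEMS_BDBUF_SWAPOUT_WORKER_TASKS_DEFAULT,
  .swapout_worker_priority = RTEMS_BDBUF_SWAPOUT_WORKER_TASK_PRIORITY_DEFAULT,
  .size                    = RTEMS_BDBUF_CACHE_MEMORY_SIZE_DEFAULT,
  .buffer_min              = RTEMS_BDBUF_BUFFER_MIN_SIZE_DEFAULT,
  .buffer_max              = RTEMS_BDBUF_BUFFER_MAX_SIZE_DEFAULT
};

Note that rtems_bdbuf_init() in bdbuf.c below rejects a configuration whose buffer_max is not a multiple of buffer_min.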
cpukit/libblock/include/rtems/diskdevs.h
rf14a21df r0d15414e

 #include <rtems/libio.h>
 #include <stdlib.h>
-
-/**
- * @ingroup rtems_bdbuf
- *
- * Buffer pool identifier.
- */
-typedef int rtems_bdpool_id;

 #include <rtems/blkdev.h>

…

   /**
    * Device block size in bytes.
    *
-   * This is the minimum transfer unit and must be power of two.
+   * This is the minimum transfer unit. It can be any size.
    */
   uint32_t block_size;

   /**
-   * Binary logarithm of the block size.
-   */
-  uint32_t block_size_log2;
-
-  /**
-   * Buffer pool assigned to this disk.
-   */
-  rtems_bdpool_id pool;
+   * Device media block size in bytes.
+   *
+   * This is the media transfer unit the hardware defaults to.
+   */
+  uint32_t media_block_size;

   /**
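With the power-of-two restriction on block_size lifted, the constraint moves to the relationship between the two sizes: per the bdbuf.h comments above, a file system block size must be a multiple of the device's media_block_size. A hypothetical check along these lines (the helper name is mine, not part of the changeset):

#include <stdbool.h>
#include <stdint.h>
#include <rtems/diskdevs.h>

/* Hypothetical helper: a file system block size is usable on this disk
 * only if it is a non-zero multiple of the media block size. */
static bool
fs_block_size_is_valid (const rtems_disk_device* dd, uint32_t block_size)
{
  return (block_size != 0) && ((block_size % dd->media_block_size) == 0);
}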
cpukit/libblock/src/bdbuf.c
rf14a21df r0d15414e 16 16 * Alexander Kukuta <kam@oktet.ru> 17 17 * 18 * Copyright (C) 2008 Chris Johns <chrisj@rtems.org>18 * Copyright (C) 2008,2009 Chris Johns <chrisj@rtems.org> 19 19 * Rewritten to remove score mutex access. Fixes many performance 20 20 * issues. … … 45 45 #include "rtems/bdbuf.h" 46 46 47 /** 48 * The BD buffer context. 49 */ 50 typedef struct rtems_bdbuf_context { 51 rtems_bdbuf_pool* pool; /*< Table of buffer pools */ 52 int npools; /*< Number of entries in pool table */ 53 rtems_id swapout; /*< Swapout task ID */ 54 bool swapout_enabled; 55 } rtems_bdbuf_context; 47 /* 48 * Simpler label for this file. 49 */ 50 #define bdbuf_config rtems_bdbuf_configuration 51 52 /** 53 * A swapout transfer transaction data. This data is passed to a worked thread 54 * to handle the write phase of the transfer. 55 */ 56 typedef struct rtems_bdbuf_swapout_transfer 57 { 58 rtems_chain_control bds; /**< The transfer list of BDs. */ 59 dev_t dev; /**< The device the transfer is for. */ 60 rtems_blkdev_request* write_req; /**< The write request array. */ 61 } rtems_bdbuf_swapout_transfer; 62 63 /** 64 * Swapout worker thread. These are available to take processing from the 65 * main swapout thread and handle the I/O operation. 66 */ 67 typedef struct rtems_bdbuf_swapout_worker 68 { 69 rtems_chain_node link; /**< The threads sit on a chain when 70 * idle. */ 71 rtems_id id; /**< The id of the task so we can wake 72 * it. */ 73 volatile bool enabled; /**< The worked is enabled. */ 74 rtems_bdbuf_swapout_transfer transfer; /**< The transfer data for this 75 * thread. */ 76 } rtems_bdbuf_swapout_worker; 77 78 /** 79 * The BD buffer cache. 80 */ 81 typedef struct rtems_bdbuf_cache 82 { 83 rtems_id swapout; /**< Swapout task ID */ 84 volatile bool swapout_enabled; /**< Swapout is only running if 85 * enabled. Set to false to kill the 86 * swap out task. It deletes itself. */ 87 rtems_chain_control swapout_workers; /**< The work threads for the swapout 88 * task. */ 89 90 rtems_bdbuf_buffer* bds; /**< Pointer to table of buffer 91 * descriptors. */ 92 void* buffers; /**< The buffer's memory. */ 93 size_t buffer_min_count; /**< Number of minimum size buffers 94 * that fit the buffer memory. */ 95 size_t max_bds_per_group; /**< The number of BDs of minimum 96 * buffer size that fit in a group. */ 97 uint32_t flags; /**< Configuration flags. */ 98 99 rtems_id lock; /**< The cache lock. It locks all 100 * cache data, BD and lists. */ 101 rtems_id sync_lock; /**< Sync calls block writes. */ 102 volatile bool sync_active; /**< True if a sync is active. */ 103 volatile rtems_id sync_requester; /**< The sync requester. */ 104 volatile dev_t sync_device; /**< The device to sync and -1 not a 105 * device sync. */ 106 107 rtems_bdbuf_buffer* tree; /**< Buffer descriptor lookup AVL tree 108 * root. There is only one. */ 109 rtems_chain_control ready; /**< Free buffers list, read-ahead, or 110 * resized group buffers. */ 111 rtems_chain_control lru; /**< Least recently used list */ 112 rtems_chain_control modified; /**< Modified buffers list */ 113 rtems_chain_control sync; /**< Buffers to sync list */ 114 115 rtems_id access; /**< Obtain if waiting for a buffer in 116 * the ACCESS state. */ 117 volatile uint32_t access_waiters; /**< Count of access blockers. */ 118 rtems_id transfer; /**< Obtain if waiting for a buffer in 119 * the TRANSFER state. */ 120 volatile uint32_t transfer_waiters; /**< Count of transfer blockers. 
*/ 121 rtems_id waiting; /**< Obtain if waiting for a buffer 122 * and the none are available. */ 123 volatile uint32_t wait_waiters; /**< Count of waiting blockers. */ 124 125 size_t group_count; /**< The number of groups. */ 126 rtems_bdbuf_group* groups; /**< The groups. */ 127 128 bool initialised; /**< Initialised state. */ 129 } rtems_bdbuf_cache; 56 130 57 131 /** … … 61 135 (((uint32_t)'B' << 24) | ((uint32_t)(n) & (uint32_t)0x00FFFFFF)) 62 136 63 #define RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY RTEMS_BLKDEV_FATAL_ERROR(1) 64 #define RTEMS_BLKDEV_FATAL_BDBUF_SWAPOUT RTEMS_BLKDEV_FATAL_ERROR(2) 65 #define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK RTEMS_BLKDEV_FATAL_ERROR(3) 66 #define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK RTEMS_BLKDEV_FATAL_ERROR(4) 67 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_LOCK RTEMS_BLKDEV_FATAL_ERROR(5) 68 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_UNLOCK RTEMS_BLKDEV_FATAL_ERROR(6) 69 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAIT RTEMS_BLKDEV_FATAL_ERROR(7) 70 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAKE RTEMS_BLKDEV_FATAL_ERROR(8) 71 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_WAKE RTEMS_BLKDEV_FATAL_ERROR(9) 72 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_NOMEM RTEMS_BLKDEV_FATAL_ERROR(10) 73 #define BLKDEV_FATAL_BDBUF_SWAPOUT_RE RTEMS_BLKDEV_FATAL_ERROR(11) 74 #define BLKDEV_FATAL_BDBUF_SWAPOUT_TS RTEMS_BLKDEV_FATAL_ERROR(12) 137 #define RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY RTEMS_BLKDEV_FATAL_ERROR(1) 138 #define RTEMS_BLKDEV_FATAL_BDBUF_SWAPOUT RTEMS_BLKDEV_FATAL_ERROR(2) 139 #define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK RTEMS_BLKDEV_FATAL_ERROR(3) 140 #define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK RTEMS_BLKDEV_FATAL_ERROR(4) 141 #define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_LOCK RTEMS_BLKDEV_FATAL_ERROR(5) 142 #define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_UNLOCK RTEMS_BLKDEV_FATAL_ERROR(6) 143 #define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_1 RTEMS_BLKDEV_FATAL_ERROR(7) 144 #define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_2 RTEMS_BLKDEV_FATAL_ERROR(8) 145 #define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_3 RTEMS_BLKDEV_FATAL_ERROR(9) 146 #define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAKE RTEMS_BLKDEV_FATAL_ERROR(10) 147 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_WAKE RTEMS_BLKDEV_FATAL_ERROR(11) 148 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_NOMEM RTEMS_BLKDEV_FATAL_ERROR(12) 149 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_CREATE RTEMS_BLKDEV_FATAL_ERROR(13) 150 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_START RTEMS_BLKDEV_FATAL_ERROR(14) 151 #define BLKDEV_FATAL_BDBUF_SWAPOUT_RE RTEMS_BLKDEV_FATAL_ERROR(15) 152 #define BLKDEV_FATAL_BDBUF_SWAPOUT_TS RTEMS_BLKDEV_FATAL_ERROR(16) 75 153 76 154 /** … … 92 170 * @warning Priority inheritance is on. 93 171 */ 94 #define RTEMS_BDBUF_ POOL_LOCK_ATTRIBS \172 #define RTEMS_BDBUF_CACHE_LOCK_ATTRIBS \ 95 173 (RTEMS_PRIORITY | RTEMS_BINARY_SEMAPHORE | \ 96 174 RTEMS_INHERIT_PRIORITY | RTEMS_NO_PRIORITY_CEILING | RTEMS_LOCAL) … … 104 182 * IDLE task which can cause unsual side effects. 105 183 */ 106 #define RTEMS_BDBUF_ POOL_WAITER_ATTRIBS \184 #define RTEMS_BDBUF_CACHE_WAITER_ATTRIBS \ 107 185 (RTEMS_PRIORITY | RTEMS_SIMPLE_BINARY_SEMAPHORE | \ 108 186 RTEMS_NO_INHERIT_PRIORITY | RTEMS_NO_PRIORITY_CEILING | RTEMS_LOCAL) … … 114 192 115 193 /** 116 * The context of buffering layer.117 */ 118 static rtems_bdbuf_c ontext rtems_bdbuf_ctx;194 * The Buffer Descriptor cache. 195 */ 196 static rtems_bdbuf_cache bdbuf_cache; 119 197 120 198 /** … … 637 715 638 716 /** 639 * Get the pool for the device. 640 * 641 * @param pid Physical disk device. 
642 */ 643 static rtems_bdbuf_pool* 644 rtems_bdbuf_get_pool (const rtems_bdpool_id pid) 645 { 646 return &rtems_bdbuf_ctx.pool[pid]; 647 } 648 649 /** 650 * Lock the pool. A single task can nest calls. 651 * 652 * @param pool The pool to lock. 717 * Lock the mutex. A single task can nest calls. 718 * 719 * @param lock The mutex to lock. 720 * @param fatal_error_code The error code if the call fails. 653 721 */ 654 722 static void 655 rtems_bdbuf_lock _pool (rtems_bdbuf_pool* pool)656 { 657 rtems_status_code sc = rtems_semaphore_obtain ( pool->lock,723 rtems_bdbuf_lock (rtems_id lock, uint32_t fatal_error_code) 724 { 725 rtems_status_code sc = rtems_semaphore_obtain (lock, 658 726 RTEMS_WAIT, 659 727 RTEMS_NO_TIMEOUT); 660 728 if (sc != RTEMS_SUCCESSFUL) 661 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_LOCK); 662 } 663 664 /** 665 * Unlock the pool. 666 * 667 * @param pool The pool to unlock. 729 rtems_fatal_error_occurred (fatal_error_code); 730 } 731 732 /** 733 * Unlock the mutex. 734 * 735 * @param lock The mutex to unlock. 736 * @param fatal_error_code The error code if the call fails. 668 737 */ 669 738 static void 670 rtems_bdbuf_unlock _pool (rtems_bdbuf_pool* pool)671 { 672 rtems_status_code sc = rtems_semaphore_release ( pool->lock);739 rtems_bdbuf_unlock (rtems_id lock, uint32_t fatal_error_code) 740 { 741 rtems_status_code sc = rtems_semaphore_release (lock); 673 742 if (sc != RTEMS_SUCCESSFUL) 674 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_UNLOCK); 675 } 676 677 /** 678 * Lock the pool's sync. A single task can nest calls. 679 * 680 * @param pool The pool's sync to lock. 743 rtems_fatal_error_occurred (fatal_error_code); 744 } 745 746 /** 747 * Lock the cache. A single task can nest calls. 681 748 */ 682 749 static void 683 rtems_bdbuf_lock_sync (rtems_bdbuf_pool* pool) 684 { 685 rtems_status_code sc = rtems_semaphore_obtain (pool->sync_lock, 686 RTEMS_WAIT, 687 RTEMS_NO_TIMEOUT); 688 if (sc != RTEMS_SUCCESSFUL) 689 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK); 690 } 691 692 /** 693 * Unlock the pool's sync. 694 * 695 * @param pool The pool's sync to unlock. 750 rtems_bdbuf_lock_cache (void) 751 { 752 rtems_bdbuf_lock (bdbuf_cache.lock, RTEMS_BLKDEV_FATAL_BDBUF_CACHE_LOCK); 753 } 754 755 /** 756 * Unlock the cache. 696 757 */ 697 758 static void 698 rtems_bdbuf_unlock_sync (rtems_bdbuf_pool* pool) 699 { 700 rtems_status_code sc = rtems_semaphore_release (pool->sync_lock); 701 if (sc != RTEMS_SUCCESSFUL) 702 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK); 759 rtems_bdbuf_unlock_cache (void) 760 { 761 rtems_bdbuf_unlock (bdbuf_cache.lock, RTEMS_BLKDEV_FATAL_BDBUF_CACHE_UNLOCK); 762 } 763 764 /** 765 * Lock the cache's sync. A single task can nest calls. 766 */ 767 static void 768 rtems_bdbuf_lock_sync (void) 769 { 770 rtems_bdbuf_lock (bdbuf_cache.sync_lock, RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK); 771 } 772 773 /** 774 * Unlock the cache's sync lock. Any blocked writers are woken. 775 */ 776 static void 777 rtems_bdbuf_unlock_sync (void) 778 { 779 rtems_bdbuf_unlock (bdbuf_cache.sync_lock, 780 RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK); 703 781 } 704 782 … … 709 787 * tasks that could be waiting. 
710 788 * 711 * While we have the poollocked we can try and claim the semaphore and712 * therefore know when we release the lock to the poolwe will block until the789 * While we have the cache locked we can try and claim the semaphore and 790 * therefore know when we release the lock to the cache we will block until the 713 791 * semaphore is released. This may even happen before we get to block. 714 792 * 715 793 * A counter is used to save the release call when no one is waiting. 716 794 * 717 * The function assumes the poolis locked on entry and it will be locked on795 * The function assumes the cache is locked on entry and it will be locked on 718 796 * exit. 719 797 * 720 * @param pool The pool to wait for a buffer to return.721 798 * @param sema The semaphore to block on and wait. 722 799 * @param waiters The wait counter for this semaphore. 723 800 */ 724 801 static void 725 rtems_bdbuf_wait (rtems_bdbuf_pool* pool, rtems_id* sema, 726 volatile uint32_t* waiters) 802 rtems_bdbuf_wait (rtems_id* sema, volatile uint32_t* waiters) 727 803 { 728 804 rtems_status_code sc; … … 735 811 736 812 /* 737 * Disable preemption then unlock the pool and block. 738 * There is no POSIX condition variable in the core API so 739 * this is a work around. 813 * Disable preemption then unlock the cache and block. There is no POSIX 814 * condition variable in the core API so this is a work around. 740 815 * 741 * The issue is a task could preempt after the pool is unlocked 742 * because it is blocking or just hits that window, and before 743 * this task has blocked on the semaphore. If the preempting task 744 * flushes the queue this task will not see the flush and may 745 * block for ever or until another transaction flushes this 816 * The issue is a task could preempt after the cache is unlocked because it is 817 * blocking or just hits that window, and before this task has blocked on the 818 * semaphore. If the preempting task flushes the queue this task will not see 819 * the flush and may block for ever or until another transaction flushes this 746 820 * semaphore. 747 821 */ … … 749 823 750 824 if (sc != RTEMS_SUCCESSFUL) 751 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_ POOL_WAIT);752 753 /* 754 * Unlock the pool, wait, and lock the poolwhen we return.755 */ 756 rtems_bdbuf_unlock_ pool (pool);825 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_1); 826 827 /* 828 * Unlock the cache, wait, and lock the cache when we return. 829 */ 830 rtems_bdbuf_unlock_cache (); 757 831 758 832 sc = rtems_semaphore_obtain (*sema, RTEMS_WAIT, RTEMS_NO_TIMEOUT); 759 833 760 834 if (sc != RTEMS_UNSATISFIED) 761 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_ POOL_WAIT);762 763 rtems_bdbuf_lock_ pool (pool);835 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_2); 836 837 rtems_bdbuf_lock_cache (); 764 838 765 839 sc = rtems_task_mode (prev_mode, RTEMS_ALL_MODE_MASKS, &prev_mode); 766 840 767 841 if (sc != RTEMS_SUCCESSFUL) 768 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_ POOL_WAIT);842 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_3); 769 843 770 844 *waiters -= 1; … … 788 862 789 863 if (sc != RTEMS_SUCCESSFUL) 790 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_ POOL_WAKE);864 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAKE); 791 865 } 792 866 } … … 794 868 /** 795 869 * Add a buffer descriptor to the modified list. This modified list is treated 796 * a litte differently to the other lists. 
To access it you must have the pool870 * a litte differently to the other lists. To access it you must have the cache 797 871 * locked and this is assumed to be the case on entry to this call. 798 872 * 799 * If the poolhas a device being sync'ed and the bd is for that device the873 * If the cache has a device being sync'ed and the bd is for that device the 800 874 * call must block and wait until the sync is over before adding the bd to the 801 875 * modified list. Once a sync happens for a device no bd's can be added the … … 807 881 * active. 808 882 * 809 * @param pool The pool the bd belongs to. 810 * @param bd The bd to queue to the pool's modified list. 883 * @param bd The bd to queue to the cache's modified list. 811 884 */ 812 885 static void 813 rtems_bdbuf_append_modified (rtems_bdbuf_pool* pool, rtems_bdbuf_buffer* bd) 814 { 815 /* 816 * If the pool has a device being sync'ed check if this bd is for that 817 * device. If it is unlock the pool and block on the sync lock. once we have 818 * the sync lock reelase it. 819 * 820 * If the 821 */ 822 if (pool->sync_active && (pool->sync_device == bd->dev)) 823 { 824 rtems_bdbuf_unlock_pool (pool); 825 rtems_bdbuf_lock_sync (pool); 826 rtems_bdbuf_unlock_sync (pool); 827 rtems_bdbuf_lock_pool (pool); 886 rtems_bdbuf_append_modified (rtems_bdbuf_buffer* bd) 887 { 888 /* 889 * If the cache has a device being sync'ed check if this bd is for that 890 * device. If it is unlock the cache and block on the sync lock. Once we have 891 * the sync lock release it. 892 */ 893 if (bdbuf_cache.sync_active && (bdbuf_cache.sync_device == bd->dev)) 894 { 895 rtems_bdbuf_unlock_cache (); 896 /* Wait for the sync lock */ 897 rtems_bdbuf_lock_sync (); 898 rtems_bdbuf_unlock_sync (); 899 rtems_bdbuf_lock_cache (); 828 900 } 829 901 830 902 bd->state = RTEMS_BDBUF_STATE_MODIFIED; 831 903 832 rtems_chain_append (& pool->modified, &bd->link);904 rtems_chain_append (&bdbuf_cache.modified, &bd->link); 833 905 } 834 906 … … 839 911 rtems_bdbuf_wake_swapper (void) 840 912 { 841 rtems_status_code sc = rtems_event_send ( rtems_bdbuf_ctx.swapout,913 rtems_status_code sc = rtems_event_send (bdbuf_cache.swapout, 842 914 RTEMS_BDBUF_SWAPOUT_SYNC); 843 915 if (sc != RTEMS_SUCCESSFUL) … … 846 918 847 919 /** 848 * Initialize single buffer pool. 849 * 850 * @param config Buffer pool configuration 851 * @param pid Pool number 852 * 853 * @return RTEMS_SUCCESSFUL, if buffer pool initialized successfully, or error 854 * code if error occured. 855 */ 856 static rtems_status_code 857 rtems_bdbuf_initialize_pool (rtems_bdbuf_pool_config* config, 858 rtems_bdpool_id pid) 859 { 860 int rv = 0; 861 unsigned char* buffer = config->mem_area; 862 rtems_bdbuf_pool* pool; 920 * Compute the number of BDs per group for a given buffer size. 921 * 922 * @param size The buffer size. It can be any size and we scale up. 923 */ 924 static size_t 925 rtems_bdbuf_bds_per_group (size_t size) 926 { 927 size_t bufs_per_size; 928 size_t bds_per_size; 929 930 if (size > rtems_bdbuf_configuration.buffer_max) 931 return 0; 932 933 bufs_per_size = ((size - 1) / bdbuf_config.buffer_min) + 1; 934 935 for (bds_per_size = 1; 936 bds_per_size < bufs_per_size; 937 bds_per_size <<= 1) 938 ; 939 940 return bdbuf_cache.max_bds_per_group / bds_per_size; 941 } 942 943 /** 944 * Reallocate a group. The BDs currently allocated in the group are removed 945 * from the ALV tree and any lists then the new BD's are prepended to the ready 946 * list of the cache. 947 * 948 * @param group The group to reallocate. 
949 * @param new_bds_per_group The new count of BDs per group. 950 */ 951 static void 952 rtems_bdbuf_group_realloc (rtems_bdbuf_group* group, size_t new_bds_per_group) 953 { 863 954 rtems_bdbuf_buffer* bd; 955 int b; 956 size_t bufs_per_bd; 957 958 bufs_per_bd = bdbuf_cache.max_bds_per_group / group->bds_per_group; 959 960 for (b = 0, bd = group->bdbuf; 961 b < group->bds_per_group; 962 b++, bd += bufs_per_bd) 963 { 964 if ((bd->state == RTEMS_BDBUF_STATE_CACHED) || 965 (bd->state == RTEMS_BDBUF_STATE_MODIFIED) || 966 (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD)) 967 { 968 if (rtems_bdbuf_avl_remove (&bdbuf_cache.tree, bd) != 0) 969 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY); 970 rtems_chain_extract (&bd->link); 971 } 972 } 973 974 group->bds_per_group = new_bds_per_group; 975 bufs_per_bd = bdbuf_cache.max_bds_per_group / new_bds_per_group; 976 977 for (b = 0, bd = group->bdbuf; 978 b < group->bds_per_group; 979 b++, bd += bufs_per_bd) 980 rtems_chain_prepend (&bdbuf_cache.ready, &bd->link); 981 } 982 983 /** 984 * Get the next BD from the list. This call assumes the cache is locked. 985 * 986 * @param bds_per_group The number of BDs per block we are need. 987 * @param list The list to find the BD on. 988 * @return The next BD if found or NULL is none are available. 989 */ 990 static rtems_bdbuf_buffer* 991 rtems_bdbuf_get_next_bd (size_t bds_per_group, 992 rtems_chain_control* list) 993 { 994 rtems_chain_node* node = rtems_chain_first (list); 995 while (!rtems_chain_is_tail (list, node)) 996 { 997 rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node; 998 999 /* 1000 * If this bd is already part of a group that supports the same number of 1001 * BDs per group return it. If the bd is part of another group check the 1002 * number of users and if 0 we can take this group and resize it. 1003 */ 1004 if (bd->group->bds_per_group == bds_per_group) 1005 { 1006 rtems_chain_extract (node); 1007 bd->group->users++; 1008 return bd; 1009 } 1010 1011 if (bd->group->users == 0) 1012 { 1013 /* 1014 * We use the group to locate the start of the BDs for this group. 1015 */ 1016 rtems_bdbuf_group_realloc (bd->group, bds_per_group); 1017 bd = (rtems_bdbuf_buffer*) rtems_chain_get (&bdbuf_cache.ready); 1018 return bd; 1019 } 1020 1021 node = rtems_chain_next (node); 1022 } 1023 1024 return NULL; 1025 } 1026 1027 /** 1028 * Initialise the cache. 1029 * 1030 * @return rtems_status_code The initialisation status. 1031 */ 1032 rtems_status_code 1033 rtems_bdbuf_init (void) 1034 { 1035 rtems_bdbuf_group* group; 1036 rtems_bdbuf_buffer* bd; 1037 uint8_t* buffer; 1038 int b; 1039 int cache_aligment; 864 1040 rtems_status_code sc; 865 uint32_t b; 866 int cache_aligment = 32 /* FIXME rtems_cache_get_data_line_size() */; 867 868 /* For unspecified cache alignments we use the CPU alignment */ 1041 1042 #if RTEMS_BDBUF_TRACE 1043 rtems_bdbuf_printf ("init\n"); 1044 #endif 1045 1046 /* 1047 * Check the configuration table values. 1048 */ 1049 if ((bdbuf_config.buffer_max % bdbuf_config.buffer_min) != 0) 1050 return RTEMS_INVALID_NUMBER; 1051 1052 /* 1053 * We use a special variable to manage the initialisation incase we have 1054 * completing threads doing this. You may get errors if the another thread 1055 * makes a call and we have not finished initialisation. 1056 */ 1057 if (bdbuf_cache.initialised) 1058 return RTEMS_RESOURCE_IN_USE; 1059 1060 bdbuf_cache.initialised = true; 1061 1062 /* 1063 * For unspecified cache alignments we use the CPU alignment. 
1064 */ 1065 cache_aligment = 32; /* FIXME rtems_cache_get_data_line_size() */ 869 1066 if (cache_aligment <= 0) 870 {871 1067 cache_aligment = CPU_ALIGNMENT; 872 } 873 874 pool = rtems_bdbuf_get_pool (pid); 875 876 pool->blksize = config->size; 877 pool->nblks = config->num; 878 pool->flags = 0; 879 pool->sync_active = false; 880 pool->sync_device = -1; 881 pool->sync_requester = 0; 882 pool->tree = NULL; 883 pool->buffers = NULL; 884 885 rtems_chain_initialize_empty (&pool->ready); 886 rtems_chain_initialize_empty (&pool->lru); 887 rtems_chain_initialize_empty (&pool->modified); 888 rtems_chain_initialize_empty (&pool->sync); 889 890 pool->access = 0; 891 pool->access_waiters = 0; 892 pool->transfer = 0; 893 pool->transfer_waiters = 0; 894 pool->waiting = 0; 895 pool->wait_waiters = 0; 896 897 /* 898 * Allocate memory for buffer descriptors 899 */ 900 pool->bds = calloc (config->num, sizeof (rtems_bdbuf_buffer)); 901 902 if (!pool->bds) 1068 1069 bdbuf_cache.sync_active = false; 1070 bdbuf_cache.sync_device = -1; 1071 bdbuf_cache.sync_requester = 0; 1072 bdbuf_cache.tree = NULL; 1073 1074 rtems_chain_initialize_empty (&bdbuf_cache.swapout_workers); 1075 rtems_chain_initialize_empty (&bdbuf_cache.ready); 1076 rtems_chain_initialize_empty (&bdbuf_cache.lru); 1077 rtems_chain_initialize_empty (&bdbuf_cache.modified); 1078 rtems_chain_initialize_empty (&bdbuf_cache.sync); 1079 1080 bdbuf_cache.access = 0; 1081 bdbuf_cache.access_waiters = 0; 1082 bdbuf_cache.transfer = 0; 1083 bdbuf_cache.transfer_waiters = 0; 1084 bdbuf_cache.waiting = 0; 1085 bdbuf_cache.wait_waiters = 0; 1086 1087 /* 1088 * Create the locks for the cache. 1089 */ 1090 sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 'l'), 1091 1, RTEMS_BDBUF_CACHE_LOCK_ATTRIBS, 0, 1092 &bdbuf_cache.lock); 1093 if (sc != RTEMS_SUCCESSFUL) 1094 { 1095 bdbuf_cache.initialised = false; 1096 return sc; 1097 } 1098 1099 rtems_bdbuf_lock_cache (); 1100 1101 sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 's'), 1102 1, RTEMS_BDBUF_CACHE_LOCK_ATTRIBS, 0, 1103 &bdbuf_cache.sync_lock); 1104 if (sc != RTEMS_SUCCESSFUL) 1105 { 1106 rtems_bdbuf_unlock_cache (); 1107 rtems_semaphore_delete (bdbuf_cache.lock); 1108 bdbuf_cache.initialised = false; 1109 return sc; 1110 } 1111 1112 sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 'a'), 1113 0, RTEMS_BDBUF_CACHE_WAITER_ATTRIBS, 0, 1114 &bdbuf_cache.access); 1115 if (sc != RTEMS_SUCCESSFUL) 1116 { 1117 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1118 rtems_bdbuf_unlock_cache (); 1119 rtems_semaphore_delete (bdbuf_cache.lock); 1120 bdbuf_cache.initialised = false; 1121 return sc; 1122 } 1123 1124 sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 't'), 1125 0, RTEMS_BDBUF_CACHE_WAITER_ATTRIBS, 0, 1126 &bdbuf_cache.transfer); 1127 if (sc != RTEMS_SUCCESSFUL) 1128 { 1129 rtems_semaphore_delete (bdbuf_cache.access); 1130 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1131 rtems_bdbuf_unlock_cache (); 1132 rtems_semaphore_delete (bdbuf_cache.lock); 1133 bdbuf_cache.initialised = false; 1134 return sc; 1135 } 1136 1137 sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 'w'), 1138 0, RTEMS_BDBUF_CACHE_WAITER_ATTRIBS, 0, 1139 &bdbuf_cache.waiting); 1140 if (sc != RTEMS_SUCCESSFUL) 1141 { 1142 rtems_semaphore_delete (bdbuf_cache.transfer); 1143 rtems_semaphore_delete (bdbuf_cache.access); 1144 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1145 rtems_bdbuf_unlock_cache (); 1146 rtems_semaphore_delete (bdbuf_cache.lock); 1147 bdbuf_cache.initialised = 
false; 1148 return sc; 1149 } 1150 1151 /* 1152 * Allocate the memory for the buffer descriptors. 1153 */ 1154 bdbuf_cache.bds = calloc (sizeof (rtems_bdbuf_buffer), 1155 bdbuf_config.size / bdbuf_config.buffer_min); 1156 if (!bdbuf_cache.bds) 1157 { 1158 rtems_semaphore_delete (bdbuf_cache.transfer); 1159 rtems_semaphore_delete (bdbuf_cache.access); 1160 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1161 rtems_bdbuf_unlock_cache (); 1162 rtems_semaphore_delete (bdbuf_cache.lock); 1163 bdbuf_cache.initialised = false; 903 1164 return RTEMS_NO_MEMORY; 904 905 /* 906 * Allocate memory for buffers if required. The pool memory will be cache 907 * aligned. It is possible to free the memory allocated by rtems_memalign() 908 * with free(). 909 */ 910 if (buffer == NULL) 911 { 912 rv = rtems_memalign ((void **) &buffer, 913 cache_aligment, 914 config->num * config->size); 915 if (rv != 0) 916 { 917 free (pool->bds); 918 return RTEMS_NO_MEMORY; 919 } 920 pool->buffers = buffer; 921 } 922 923 for (b = 0, bd = pool->bds; 924 b < pool->nblks; 925 b++, bd++, buffer += pool->blksize) 1165 } 1166 1167 /* 1168 * Compute the various number of elements in the cache. 1169 */ 1170 bdbuf_cache.buffer_min_count = 1171 bdbuf_config.size / bdbuf_config.buffer_min; 1172 bdbuf_cache.max_bds_per_group = 1173 bdbuf_config.buffer_max / bdbuf_config.buffer_min; 1174 bdbuf_cache.group_count = 1175 bdbuf_cache.buffer_min_count / bdbuf_cache.max_bds_per_group; 1176 1177 /* 1178 * Allocate the memory for the buffer descriptors. 1179 */ 1180 bdbuf_cache.groups = calloc (sizeof (rtems_bdbuf_group), 1181 bdbuf_cache.group_count); 1182 if (!bdbuf_cache.groups) 1183 { 1184 free (bdbuf_cache.bds); 1185 rtems_semaphore_delete (bdbuf_cache.transfer); 1186 rtems_semaphore_delete (bdbuf_cache.access); 1187 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1188 rtems_bdbuf_unlock_cache (); 1189 rtems_semaphore_delete (bdbuf_cache.lock); 1190 bdbuf_cache.initialised = false; 1191 return RTEMS_NO_MEMORY; 1192 } 1193 1194 /* 1195 * Allocate memory for buffer memory. The buffer memory will be cache 1196 * aligned. It is possible to free the memory allocated by rtems_memalign() 1197 * with free(). Return 0 if allocated. 1198 */ 1199 if (rtems_memalign ((void **) &bdbuf_cache.buffers, 1200 cache_aligment, 1201 bdbuf_cache.buffer_min_count * bdbuf_config.buffer_min) != 0) 1202 { 1203 free (bdbuf_cache.groups); 1204 free (bdbuf_cache.bds); 1205 rtems_semaphore_delete (bdbuf_cache.transfer); 1206 rtems_semaphore_delete (bdbuf_cache.access); 1207 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1208 rtems_bdbuf_unlock_cache (); 1209 rtems_semaphore_delete (bdbuf_cache.lock); 1210 bdbuf_cache.initialised = false; 1211 return RTEMS_NO_MEMORY; 1212 } 1213 1214 /* 1215 * The cache is empty after opening so we need to add all the buffers to it 1216 * and initialise the groups. 
1217 */ 1218 for (b = 0, group = bdbuf_cache.groups, 1219 bd = bdbuf_cache.bds, buffer = bdbuf_cache.buffers; 1220 b < bdbuf_cache.buffer_min_count; 1221 b++, bd++, buffer += bdbuf_config.buffer_min) 926 1222 { 927 1223 bd->dev = -1; 928 bd-> block = 0;1224 bd->group = group; 929 1225 bd->buffer = buffer; 930 1226 bd->avl.left = NULL; 931 1227 bd->avl.right = NULL; 932 1228 bd->state = RTEMS_BDBUF_STATE_EMPTY; 933 bd->pool = pid;934 1229 bd->error = 0; 935 1230 bd->waiters = 0; 936 1231 bd->hold_timer = 0; 937 1232 938 rtems_chain_append (&pool->ready, &bd->link); 939 } 940 941 sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'L'), 942 1, RTEMS_BDBUF_POOL_LOCK_ATTRIBS, 0, 943 &pool->lock); 944 if (sc != RTEMS_SUCCESSFUL) 945 { 946 free (pool->buffers); 947 free (pool->bds); 948 return sc; 949 } 950 951 sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'S'), 952 1, RTEMS_BDBUF_POOL_LOCK_ATTRIBS, 0, 953 &pool->sync_lock); 954 if (sc != RTEMS_SUCCESSFUL) 955 { 956 rtems_semaphore_delete (pool->lock); 957 free (pool->buffers); 958 free (pool->bds); 959 return sc; 960 } 961 962 sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'a'), 963 0, RTEMS_BDBUF_POOL_WAITER_ATTRIBS, 0, 964 &pool->access); 965 if (sc != RTEMS_SUCCESSFUL) 966 { 967 rtems_semaphore_delete (pool->sync_lock); 968 rtems_semaphore_delete (pool->lock); 969 free (pool->buffers); 970 free (pool->bds); 971 return sc; 972 } 973 974 sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 't'), 975 0, RTEMS_BDBUF_POOL_WAITER_ATTRIBS, 0, 976 &pool->transfer); 977 if (sc != RTEMS_SUCCESSFUL) 978 { 979 rtems_semaphore_delete (pool->access); 980 rtems_semaphore_delete (pool->sync_lock); 981 rtems_semaphore_delete (pool->lock); 982 free (pool->buffers); 983 free (pool->bds); 984 return sc; 985 } 986 987 sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'w'), 988 0, RTEMS_BDBUF_POOL_WAITER_ATTRIBS, 0, 989 &pool->waiting); 990 if (sc != RTEMS_SUCCESSFUL) 991 { 992 rtems_semaphore_delete (pool->transfer); 993 rtems_semaphore_delete (pool->access); 994 rtems_semaphore_delete (pool->sync_lock); 995 rtems_semaphore_delete (pool->lock); 996 free (pool->buffers); 997 free (pool->bds); 998 return sc; 999 } 1000 1001 return RTEMS_SUCCESSFUL; 1002 } 1003 1004 /** 1005 * Free resources allocated for buffer pool with specified number. 
1006 * 1007 * @param pid Buffer pool number 1008 * 1009 * @retval RTEMS_SUCCESSFUL 1010 */ 1011 static rtems_status_code 1012 rtems_bdbuf_release_pool (rtems_bdpool_id pid) 1013 { 1014 rtems_bdbuf_pool* pool = rtems_bdbuf_get_pool (pid); 1015 1016 rtems_bdbuf_lock_pool (pool); 1017 1018 rtems_semaphore_delete (pool->waiting); 1019 rtems_semaphore_delete (pool->transfer); 1020 rtems_semaphore_delete (pool->access); 1021 rtems_semaphore_delete (pool->lock); 1022 1023 free (pool->buffers); 1024 free (pool->bds); 1025 1026 return RTEMS_SUCCESSFUL; 1027 } 1028 1029 rtems_status_code 1030 rtems_bdbuf_init (void) 1031 { 1032 rtems_bdpool_id p; 1033 rtems_status_code sc; 1034 1035 #if RTEMS_BDBUF_TRACE 1036 rtems_bdbuf_printf ("init\n"); 1037 #endif 1038 1039 if (rtems_bdbuf_pool_configuration_size <= 0) 1040 return RTEMS_INVALID_SIZE; 1041 1042 if (rtems_bdbuf_ctx.npools) 1043 return RTEMS_RESOURCE_IN_USE; 1044 1045 rtems_bdbuf_ctx.npools = rtems_bdbuf_pool_configuration_size; 1046 1047 /* 1048 * Allocate memory for buffer pool descriptors 1049 */ 1050 rtems_bdbuf_ctx.pool = calloc (rtems_bdbuf_pool_configuration_size, 1051 sizeof (rtems_bdbuf_pool)); 1052 1053 if (rtems_bdbuf_ctx.pool == NULL) 1054 return RTEMS_NO_MEMORY; 1055 1056 /* 1057 * Initialize buffer pools and roll out if something failed, 1058 */ 1059 for (p = 0; p < rtems_bdbuf_ctx.npools; p++) 1060 { 1061 sc = rtems_bdbuf_initialize_pool (&rtems_bdbuf_pool_configuration[p], p); 1062 if (sc != RTEMS_SUCCESSFUL) 1063 { 1064 rtems_bdpool_id j; 1065 for (j = 0; j < p - 1; j++) 1066 rtems_bdbuf_release_pool (j); 1067 return sc; 1068 } 1069 } 1070 1071 /* 1072 * Create and start swapout task 1073 */ 1074 1075 rtems_bdbuf_ctx.swapout_enabled = true; 1233 rtems_chain_append (&bdbuf_cache.ready, &bd->link); 1234 1235 if ((b % bdbuf_cache.max_bds_per_group) == 1236 (bdbuf_cache.max_bds_per_group - 1)) 1237 group++; 1238 } 1239 1240 for (b = 0, 1241 group = bdbuf_cache.groups, 1242 bd = bdbuf_cache.bds; 1243 b < bdbuf_cache.group_count; 1244 b++, 1245 group++, 1246 bd += bdbuf_cache.max_bds_per_group) 1247 { 1248 group->bds_per_group = bdbuf_cache.max_bds_per_group; 1249 group->users = 0; 1250 group->bdbuf = bd; 1251 } 1252 1253 /* 1254 * Create and start swapout task. This task will create and manage the worker 1255 * threads. 1256 */ 1257 bdbuf_cache.swapout_enabled = true; 1076 1258 1077 1259 sc = rtems_task_create (rtems_build_name('B', 'S', 'W', 'P'), 1078 ( rtems_bdbuf_configuration.swapout_priority ?1079 rtems_bdbuf_configuration.swapout_priority :1260 (bdbuf_config.swapout_priority ? 
1261 bdbuf_config.swapout_priority : 1080 1262 RTEMS_BDBUF_SWAPOUT_TASK_PRIORITY_DEFAULT), 1081 1263 SWAPOUT_TASK_STACK_SIZE, 1082 1264 RTEMS_PREEMPT | RTEMS_NO_TIMESLICE | RTEMS_NO_ASR, 1083 1265 RTEMS_LOCAL | RTEMS_NO_FLOATING_POINT, 1084 & rtems_bdbuf_ctx.swapout);1266 &bdbuf_cache.swapout); 1085 1267 if (sc != RTEMS_SUCCESSFUL) 1086 1268 { 1087 for (p = 0; p < rtems_bdbuf_ctx.npools; p++) 1088 rtems_bdbuf_release_pool (p); 1089 free (rtems_bdbuf_ctx.pool); 1269 free (bdbuf_cache.buffers); 1270 free (bdbuf_cache.groups); 1271 free (bdbuf_cache.bds); 1272 rtems_semaphore_delete (bdbuf_cache.transfer); 1273 rtems_semaphore_delete (bdbuf_cache.access); 1274 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1275 rtems_bdbuf_unlock_cache (); 1276 rtems_semaphore_delete (bdbuf_cache.lock); 1277 bdbuf_cache.initialised = false; 1090 1278 return sc; 1091 1279 } 1092 1280 1093 sc = rtems_task_start ( rtems_bdbuf_ctx.swapout,1281 sc = rtems_task_start (bdbuf_cache.swapout, 1094 1282 rtems_bdbuf_swapout_task, 1095 (rtems_task_argument) & rtems_bdbuf_ctx);1283 (rtems_task_argument) &bdbuf_cache); 1096 1284 if (sc != RTEMS_SUCCESSFUL) 1097 1285 { 1098 rtems_task_delete (rtems_bdbuf_ctx.swapout); 1099 for (p = 0; p < rtems_bdbuf_ctx.npools; p++) 1100 rtems_bdbuf_release_pool (p); 1101 free (rtems_bdbuf_ctx.pool); 1286 rtems_task_delete (bdbuf_cache.swapout); 1287 free (bdbuf_cache.buffers); 1288 free (bdbuf_cache.groups); 1289 free (bdbuf_cache.bds); 1290 rtems_semaphore_delete (bdbuf_cache.transfer); 1291 rtems_semaphore_delete (bdbuf_cache.access); 1292 rtems_semaphore_delete (bdbuf_cache.sync_lock); 1293 rtems_bdbuf_unlock_cache (); 1294 rtems_semaphore_delete (bdbuf_cache.lock); 1295 bdbuf_cache.initialised = false; 1102 1296 return sc; 1103 1297 } 1104 1298 1299 rtems_bdbuf_unlock_cache (); 1300 1105 1301 return RTEMS_SUCCESSFUL; 1106 1302 } … … 1109 1305 * Get a buffer for this device and block. This function returns a buffer once 1110 1306 * placed into the AVL tree. If no buffer is available and it is not a read 1111 * ahead request and no buffers are waiting to the written to disk wait until 1112 * one is available. If buffers are waiting to be written to disk and non are1113 * a vailable expire the hold timer and wake the swap out task. If the buffer is1114 * for a read ahead transfer return NULL if there is not buffer or it is in the1115 * cache.1116 * 1117 * The AVL tree of buffers for the pool is searched and if not located check1118 * obtain a buffer and insert it into the AVL tree. Buffers are first obtained1119 * from the ready list until all empty/ready buffers are used. Once all buffers1120 * are in use buffers are taken from the LRU list with the least recently used1121 * buffer taken first. A buffer taken from the LRU list is removed from the AVL1122 * tree. The ready list or LRU list buffer is initialised to this device and1123 * block. If no buffers are available due to the ready and LRU lists being1124 * empty a check is made of the modified list. Buffers may be queued waiting1125 * for the hold timer to expire. These buffers should be written to disk and1126 * returned to the LRU list where they can be used rather than this call1127 * blocking. If buffers are on the modified list the max. write block size of1128 * b uffers have their hold timer expired and the swap out task woken. The1129 * caller then blocks on the waiting semaphore and counter. 
When buffers return1130 * from the upper layers (access) or lower driver (transfer) the blocked caller1131 * task is woken and this procedure is repeated. The repeat handles a case of a1132 * another thread pre-empting getting a buffer first and adding it to the AVL1133 * tree.1307 * ahead request and no buffers are waiting to the written to disk wait until a 1308 * buffer is available. If buffers are waiting to be written to disk and none 1309 * are available expire the hold timer's of the queued buffers and wake the 1310 * swap out task. If the buffer is for a read ahead transfer return NULL if 1311 * there are no buffers available or the buffer is already in the cache. 1312 * 1313 * The AVL tree of buffers for the cache is searched and if not found obtain a 1314 * buffer and insert it into the AVL tree. Buffers are first obtained from the 1315 * ready list until all empty/ready buffers are used. Once all buffers are in 1316 * use the LRU list is searched for a buffer of the same group size or a group 1317 * that has no active buffers in use. A buffer taken from the LRU list is 1318 * removed from the AVL tree and assigned the new block number. The ready or 1319 * LRU list buffer is initialised to this device and block. If no buffers are 1320 * available due to the ready and LRU lists being empty a check is made of the 1321 * modified list. Buffers may be queued waiting for the hold timer to 1322 * expire. These buffers should be written to disk and returned to the LRU list 1323 * where they can be used. If buffers are on the modified list the max. write 1324 * block size of buffers have their hold timer's expired and the swap out task 1325 * woken. The caller then blocks on the waiting semaphore and counter. When 1326 * buffers return from the upper layers (access) or lower driver (transfer) the 1327 * blocked caller task is woken and this procedure is repeated. The repeat 1328 * handles a case of a another thread pre-empting getting a buffer first and 1329 * adding it to the AVL tree. 1134 1330 * 1135 1331 * A buffer located in the AVL tree means it is already in the cache and maybe … … 1143 1339 * and return to the user. 1144 1340 * 1145 * This function assumes the poolthe buffer is being taken from is locked and1146 * it will make sure the pool is locked when it returns. The poolwill be1341 * This function assumes the cache the buffer is being taken from is locked and 1342 * it will make sure the cache is locked when it returns. The cache will be 1147 1343 * unlocked if the call could block. 1148 1344 * 1149 * @param pdd The physical disk device 1150 * @param pool The pool reference 1151 * @param block Absolute media block number 1152 * @param read_ahead The get is for a read ahead buffer 1153 * 1154 * @return RTEMS status code ( if operation completed successfully or error 1345 * Variable sized buffer is handled by groups. A group is the size of the 1346 * maximum buffer that can be allocated. The group can size in multiples of the 1347 * minimum buffer size where the mulitples are 1,2,4,8, etc. If the buffer is 1348 * found in the AVL tree the number of BDs in the group is check and if 1349 * different the buffer size for the block has changed. The buffer needs to be 1350 * invalidated. 1351 * 1352 * @param dd The disk device. Has the configured block size. 1353 * @param bds_per_group The number of BDs in a group for this block. 
1354 * @param block Absolute media block number for the device 1355 * @param read_ahead The get is for a read ahead buffer if true 1356 * @return RTEMS status code (if operation completed successfully or error 1155 1357 * code if error is occured) 1156 1358 */ 1157 1359 static rtems_bdbuf_buffer* 1158 rtems_bdbuf_get_buffer (rtems_disk_device* pdd,1159 rtems_bdbuf_pool* pool,1360 rtems_bdbuf_get_buffer (rtems_disk_device* dd, 1361 size_t bds_per_group, 1160 1362 rtems_blkdev_bnum block, 1161 1363 bool read_ahead) 1162 1364 { 1163 dev_t device = pdd->dev;1365 dev_t device = dd->dev; 1164 1366 rtems_bdbuf_buffer* bd; 1165 1367 bool available; 1166 1368 1167 1369 /* 1168 1370 * Loop until we get a buffer. Under load we could find no buffers are 1169 1371 * available requiring this task to wait until some become available before 1170 * proceeding. There is no timeout. If the call is to block and the buffer is 1171 * for a read ahead buffer return NULL. 1372 * proceeding. There is no timeout. If this call is to block and the buffer 1373 * is for a read ahead buffer return NULL. The read ahead is nice but not 1374 * that important. 1172 1375 * 1173 1376 * The search procedure is repeated as another thread could have pre-empted 1174 1377 * us while we waited for a buffer, obtained an empty buffer and loaded the 1175 * AVL tree with the one we are after. 1378 * AVL tree with the one we are after. In this case we move down and wait for 1379 * the buffer to return to the cache. 1176 1380 */ 1177 1381 do … … 1180 1384 * Search for buffer descriptor for this dev/block key. 1181 1385 */ 1182 bd = rtems_bdbuf_avl_search (& pool->tree, device, block);1386 bd = rtems_bdbuf_avl_search (&bdbuf_cache.tree, device, block); 1183 1387 1184 1388 /* … … 1195 1399 { 1196 1400 /* 1197 * Assign new buffer descriptor from the empty list if one is present. If1198 * the empty queue is empty get the oldest buffer from LRU list. If the1401 * Assign new buffer descriptor from the ready list if one is present. If 1402 * the ready queue is empty get the oldest buffer from LRU list. If the 1199 1403 * LRU list is empty there are no available buffers check the modified 1200 1404 * list. 1201 1405 */ 1202 if (rtems_chain_is_empty (&pool->ready)) 1406 bd = rtems_bdbuf_get_next_bd (bds_per_group, &bdbuf_cache.ready); 1407 1408 if (!bd) 1203 1409 { 1204 1410 /* 1205 * No un sed or read-ahead buffers.1411 * No unused or read-ahead buffers. 1206 1412 * 1207 1413 * If this is a read ahead buffer just return. No need to place further 1208 1414 * pressure on the cache by reading something that may be needed when 1209 * we have data in the cache that was needed and may still be. 1415 * we have data in the cache that was needed and may still be in the 1416 * future. 1210 1417 */ 1211 1418 if (read_ahead) … … 1215 1422 * Check the LRU list. 1216 1423 */ 1217 bd = (rtems_bdbuf_buffer *) rtems_chain_get (&pool->lru);1424 bd = rtems_bdbuf_get_next_bd (bds_per_group, &bdbuf_cache.lru); 1218 1425 1219 1426 if (bd) … … 1222 1429 * Remove the buffer from the AVL tree. 1223 1430 */ 1224 if (rtems_bdbuf_avl_remove (& pool->tree, bd) != 0)1431 if (rtems_bdbuf_avl_remove (&bdbuf_cache.tree, bd) != 0) 1225 1432 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY); 1226 1433 } … … 1230 1437 * If there are buffers on the modified list expire the hold timer 1231 1438 * and wake the swap out task then wait else just go and wait. 1439 * 1440 * The check for an empty list is made so the swapper is only woken 1441 * when if timers are changed. 
1232 1442 */ 1233 if (!rtems_chain_is_empty (& pool->modified))1443 if (!rtems_chain_is_empty (&bdbuf_cache.modified)) 1234 1444 { 1235 rtems_chain_node* node = rtems_chain_ head (&pool->modified);1445 rtems_chain_node* node = rtems_chain_first (&bdbuf_cache.modified); 1236 1446 uint32_t write_blocks = 0; 1237 1447 1238 node = node->next; 1239 while ((write_blocks < rtems_bdbuf_configuration.max_write_blocks) && 1240 !rtems_chain_is_tail (&pool->modified, node)) 1448 while ((write_blocks < bdbuf_config.max_write_blocks) && 1449 !rtems_chain_is_tail (&bdbuf_cache.modified, node)) 1241 1450 { 1242 1451 rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node; 1243 1452 bd->hold_timer = 0; 1244 1453 write_blocks++; 1245 node = node->next;1454 node = rtems_chain_next (node); 1246 1455 } 1247 1456 … … 1250 1459 1251 1460 /* 1252 * Wait for a buffer to be returned to the pool. The buffer will be1461 * Wait for a buffer to be returned to the cache. The buffer will be 1253 1462 * placed on the LRU list. 1254 1463 */ 1255 rtems_bdbuf_wait ( pool, &pool->waiting, &pool->wait_waiters);1464 rtems_bdbuf_wait (&bdbuf_cache.waiting, &bdbuf_cache.wait_waiters); 1256 1465 } 1257 1466 } 1258 1467 else 1259 1468 { 1260 bd = (rtems_bdbuf_buffer *) rtems_chain_get (&(pool->ready)); 1261 1469 /* 1470 * We have a new buffer for this block. 1471 */ 1262 1472 if ((bd->state != RTEMS_BDBUF_STATE_EMPTY) && 1263 1473 (bd->state != RTEMS_BDBUF_STATE_READ_AHEAD)) … … 1266 1476 if (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD) 1267 1477 { 1268 if (rtems_bdbuf_avl_remove (& pool->tree, bd) != 0)1478 if (rtems_bdbuf_avl_remove (&bdbuf_cache.tree, bd) != 0) 1269 1479 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY); 1270 1480 } … … 1281 1491 bd->waiters = 0; 1282 1492 1283 if (rtems_bdbuf_avl_insert (& pool->tree, bd) != 0)1493 if (rtems_bdbuf_avl_insert (&bdbuf_cache.tree, bd) != 0) 1284 1494 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY); 1285 1495 1286 1496 return bd; 1497 } 1498 } 1499 else 1500 { 1501 /* 1502 * We have the buffer for the block from the cache. Check if the buffer 1503 * in the cache is the same size and the requested size we are after. 1504 */ 1505 if (bd->group->bds_per_group != bds_per_group) 1506 { 1507 bd->state = RTEMS_BDBUF_STATE_EMPTY; 1508 rtems_chain_extract (&bd->link); 1509 rtems_chain_prepend (&bdbuf_cache.ready, &bd->link); 1510 bd = NULL; 1287 1511 } 1288 1512 } … … 1315 1539 case RTEMS_BDBUF_STATE_ACCESS_MODIFIED: 1316 1540 bd->waiters++; 1317 rtems_bdbuf_wait (pool, &pool->access, &pool->access_waiters); 1541 rtems_bdbuf_wait (&bdbuf_cache.access, 1542 &bdbuf_cache.access_waiters); 1318 1543 bd->waiters--; 1319 1544 break; … … 1322 1547 case RTEMS_BDBUF_STATE_TRANSFER: 1323 1548 bd->waiters++; 1324 rtems_bdbuf_wait (pool, &pool->transfer, &pool->transfer_waiters); 1549 rtems_bdbuf_wait (&bdbuf_cache.transfer, 1550 &bdbuf_cache.transfer_waiters); 1325 1551 bd->waiters--; 1326 1552 break; … … 1345 1571 { 1346 1572 rtems_disk_device* dd; 1347 rtems_bdbuf_pool* pool;1348 1573 rtems_bdbuf_buffer* bd; 1349 1350 /* 1351 * Do not hold the pool lock when obtaining the disk table. 1574 size_t bds_per_group; 1575 1576 if (!bdbuf_cache.initialised) 1577 return RTEMS_NOT_CONFIGURED; 1578 1579 /* 1580 * Do not hold the cache lock when obtaining the disk table. 
1352 1581 */ 1353 1582 dd = rtems_disk_obtain (device); 1354 if ( dd == NULL)1583 if (!dd) 1355 1584 return RTEMS_INVALID_ID; 1356 1585 1357 1586 if (block >= dd->size) 1587 { 1588 rtems_disk_release (dd); 1589 return RTEMS_INVALID_ADDRESS; 1590 } 1591 1592 bds_per_group = rtems_bdbuf_bds_per_group (dd->block_size); 1593 if (!bds_per_group) 1358 1594 { 1359 1595 rtems_disk_release (dd); 1360 1596 return RTEMS_INVALID_NUMBER; 1361 1597 } 1362 1363 pool = rtems_bdbuf_get_pool (dd->phys_dev->pool); 1364 1365 rtems_bdbuf_lock_pool (pool); 1598 1599 rtems_bdbuf_lock_cache (); 1366 1600 1367 1601 #if RTEMS_BDBUF_TRACE … … 1370 1604 #endif 1371 1605 1372 bd = rtems_bdbuf_get_buffer (dd ->phys_dev, pool, block + dd->start, false);1606 bd = rtems_bdbuf_get_buffer (dd, bds_per_group, block + dd->start, false); 1373 1607 1374 1608 if (bd->state == RTEMS_BDBUF_STATE_MODIFIED) … … 1376 1610 else 1377 1611 bd->state = RTEMS_BDBUF_STATE_ACCESS; 1378 1379 rtems_bdbuf_unlock_ pool (pool);1612 1613 rtems_bdbuf_unlock_cache (); 1380 1614 1381 1615 rtems_disk_release(dd); 1382 1616 1383 1617 *bdp = bd; 1384 1618 1385 1619 return RTEMS_SUCCESSFUL; 1386 1620 } … … 1413 1647 { 1414 1648 rtems_disk_device* dd; 1415 rtems_bdbuf_pool* pool;1416 1649 rtems_bdbuf_buffer* bd = NULL; 1417 1650 uint32_t read_ahead_count; 1418 1651 rtems_blkdev_request* req; 1419 1652 size_t bds_per_group; 1653 1654 if (!bdbuf_cache.initialised) 1655 return RTEMS_NOT_CONFIGURED; 1656 1420 1657 /* 1421 1658 * @todo This type of request structure is wrong and should be removed. … … 1428 1665 1429 1666 /* 1430 * Do not hold the poollock when obtaining the disk table.1667 * Do not hold the cache lock when obtaining the disk table. 1431 1668 */ 1432 1669 dd = rtems_disk_obtain (device); 1433 if ( dd == NULL)1670 if (!dd) 1434 1671 return RTEMS_INVALID_ID; 1435 1672 1436 1673 if (block >= dd->size) { 1437 1674 rtems_disk_release(dd); 1675 return RTEMS_INVALID_NUMBER; 1676 } 1677 1678 bds_per_group = rtems_bdbuf_bds_per_group (dd->block_size); 1679 if (!bds_per_group) 1680 { 1681 rtems_disk_release (dd); 1438 1682 return RTEMS_INVALID_NUMBER; 1439 1683 } … … 1458 1702 read_ahead_count = dd->size - block; 1459 1703 1460 pool = rtems_bdbuf_get_pool (dd->phys_dev->pool); 1461 1462 rtems_bdbuf_lock_pool (pool); 1704 rtems_bdbuf_lock_cache (); 1463 1705 1464 1706 while (req->bufnum < read_ahead_count) … … 1472 1714 * caller. 1473 1715 */ 1474 bd = rtems_bdbuf_get_buffer (dd ->phys_dev, pool,1716 bd = rtems_bdbuf_get_buffer (dd, bds_per_group, 1475 1717 block + dd->start + req->bufnum, 1476 1718 req->bufnum == 0 ? false : true); … … 1515 1757 { 1516 1758 /* 1517 * Unlock the pool. We have the buffer for the block and it will be in the1759 * Unlock the cache. We have the buffer for the block and it will be in the 1518 1760 * access or transfer state. We may also have a number of read ahead blocks 1519 1761 * if we need to transfer data. At this point any other threads can gain 1520 * access to the pool and if they are after any of the buffers we have they1521 * will block and be woken when the buffer is returned to the pool.1762 * access to the cache and if they are after any of the buffers we have 1763 * they will block and be woken when the buffer is returned to the cache. 1522 1764 * 1523 1765 * If a transfer is needed the I/O operation will occur with pre-emption 1524 * enabled and the poolunlocked. This is a change to the previous version1766 * enabled and the cache unlocked. This is a change to the previous version 1525 1767 * of the bdbuf code. 
1526 1768 */ … … 1536 1778 0, &out); 1537 1779 1538 rtems_bdbuf_unlock_ pool (pool);1780 rtems_bdbuf_unlock_cache (); 1539 1781 1540 1782 req->req = RTEMS_BLKDEV_REQ_READ; … … 1568 1810 } 1569 1811 1570 rtems_bdbuf_lock_ pool (pool);1812 rtems_bdbuf_lock_cache (); 1571 1813 1572 1814 for (b = 1; b < req->bufnum; b++) … … 1589 1831 bd->state = RTEMS_BDBUF_STATE_ACCESS; 1590 1832 1591 rtems_bdbuf_unlock_ pool (pool);1833 rtems_bdbuf_unlock_cache (); 1592 1834 rtems_disk_release (dd); 1593 1835 … … 1600 1842 rtems_bdbuf_release (rtems_bdbuf_buffer* bd) 1601 1843 { 1602 rtems_bdbuf_pool* pool; 1844 if (!bdbuf_cache.initialised) 1845 return RTEMS_NOT_CONFIGURED; 1603 1846 1604 1847 if (bd == NULL) 1605 1848 return RTEMS_INVALID_ADDRESS; 1606 1849 1607 pool = rtems_bdbuf_get_pool (bd->pool); 1608 1609 rtems_bdbuf_lock_pool (pool); 1850 rtems_bdbuf_lock_cache (); 1610 1851 1611 1852 #if RTEMS_BDBUF_TRACE … … 1615 1856 if (bd->state == RTEMS_BDBUF_STATE_ACCESS_MODIFIED) 1616 1857 { 1617 rtems_bdbuf_append_modified ( pool,bd);1858 rtems_bdbuf_append_modified (bd); 1618 1859 } 1619 1860 else … … 1625 1866 */ 1626 1867 if (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD) 1627 rtems_chain_prepend (& pool->ready, &bd->link);1868 rtems_chain_prepend (&bdbuf_cache.ready, &bd->link); 1628 1869 else 1629 1870 { 1630 1871 bd->state = RTEMS_BDBUF_STATE_CACHED; 1631 rtems_chain_append (&pool->lru, &bd->link); 1632 } 1872 rtems_chain_append (&bdbuf_cache.lru, &bd->link); 1873 } 1874 1875 /* 1876 * One less user for the group of bds. 1877 */ 1878 bd->group->users--; 1633 1879 } 1634 1880 … … 1638 1884 */ 1639 1885 if (bd->waiters) 1640 rtems_bdbuf_wake ( pool->access, &pool->access_waiters);1886 rtems_bdbuf_wake (bdbuf_cache.access, &bdbuf_cache.access_waiters); 1641 1887 else 1642 1888 { 1643 1889 if (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD) 1644 1890 { 1645 if (rtems_chain_has_only_one_node (& pool->ready))1646 rtems_bdbuf_wake ( pool->waiting, &pool->wait_waiters);1891 if (rtems_chain_has_only_one_node (&bdbuf_cache.ready)) 1892 rtems_bdbuf_wake (bdbuf_cache.waiting, &bdbuf_cache.wait_waiters); 1647 1893 } 1648 1894 else 1649 1895 { 1650 if (rtems_chain_has_only_one_node (& pool->lru))1651 rtems_bdbuf_wake ( pool->waiting, &pool->wait_waiters);1652 } 1653 } 1654 1655 rtems_bdbuf_unlock_ pool (pool);1896 if (rtems_chain_has_only_one_node (&bdbuf_cache.lru)) 1897 rtems_bdbuf_wake (bdbuf_cache.waiting, &bdbuf_cache.wait_waiters); 1898 } 1899 } 1900 1901 rtems_bdbuf_unlock_cache (); 1656 1902 1657 1903 return RTEMS_SUCCESSFUL; … … 1661 1907 rtems_bdbuf_release_modified (rtems_bdbuf_buffer* bd) 1662 1908 { 1663 rtems_bdbuf_pool* pool; 1664 1665 if (bd == NULL) 1909 if (!bdbuf_cache.initialised) 1910 return RTEMS_NOT_CONFIGURED; 1911 1912 if (!bd) 1666 1913 return RTEMS_INVALID_ADDRESS; 1667 1914 1668 pool = rtems_bdbuf_get_pool (bd->pool); 1669 1670 rtems_bdbuf_lock_pool (pool); 1915 rtems_bdbuf_lock_cache (); 1671 1916 1672 1917 #if RTEMS_BDBUF_TRACE … … 1676 1921 bd->hold_timer = rtems_bdbuf_configuration.swap_block_hold; 1677 1922 1678 rtems_bdbuf_append_modified ( pool,bd);1923 rtems_bdbuf_append_modified (bd); 1679 1924 1680 1925 if (bd->waiters) 1681 rtems_bdbuf_wake ( pool->access, &pool->access_waiters);1682 1683 rtems_bdbuf_unlock_ pool (pool);1926 rtems_bdbuf_wake (bdbuf_cache.access, &bdbuf_cache.access_waiters); 1927 1928 rtems_bdbuf_unlock_cache (); 1684 1929 1685 1930 return RTEMS_SUCCESSFUL; … … 1689 1934 rtems_bdbuf_sync (rtems_bdbuf_buffer* bd) 1690 1935 { 1691 rtems_bdbuf_pool* pool; 1692 bool 
available; 1936 bool available; 1693 1937 1694 1938 #if RTEMS_BDBUF_TRACE … … 1696 1940 #endif 1697 1941 1698 if (bd == NULL) 1942 if (!bdbuf_cache.initialised) 1943 return RTEMS_NOT_CONFIGURED; 1944 1945 if (!bd) 1699 1946 return RTEMS_INVALID_ADDRESS; 1700 1947 1701 pool = rtems_bdbuf_get_pool (bd->pool); 1702 1703 rtems_bdbuf_lock_pool (pool); 1948 rtems_bdbuf_lock_cache (); 1704 1949 1705 1950 bd->state = RTEMS_BDBUF_STATE_SYNC; 1706 1951 1707 rtems_chain_append (& pool->sync, &bd->link);1952 rtems_chain_append (&bdbuf_cache.sync, &bd->link); 1708 1953 1709 1954 rtems_bdbuf_wake_swapper (); … … 1725 1970 case RTEMS_BDBUF_STATE_TRANSFER: 1726 1971 bd->waiters++; 1727 rtems_bdbuf_wait ( pool, &pool->transfer, &pool->transfer_waiters);1972 rtems_bdbuf_wait (&bdbuf_cache.transfer, &bdbuf_cache.transfer_waiters); 1728 1973 bd->waiters--; 1729 1974 break; … … 1734 1979 } 1735 1980 1736 rtems_bdbuf_unlock_ pool (pool);1981 rtems_bdbuf_unlock_cache (); 1737 1982 1738 1983 return RTEMS_SUCCESSFUL; … … 1743 1988 { 1744 1989 rtems_disk_device* dd; 1745 rtems_bdbuf_pool* pool;1746 1990 rtems_status_code sc; 1747 1991 rtems_event_set out; … … 1751 1995 #endif 1752 1996 1753 /* 1754 * Do not hold the pool lock when obtaining the disk table. 1997 if (!bdbuf_cache.initialised) 1998 return RTEMS_NOT_CONFIGURED; 1999 2000 /* 2001 * Do not hold the cache lock when obtaining the disk table. 1755 2002 */ 1756 2003 dd = rtems_disk_obtain (dev); 1757 if ( dd == NULL)2004 if (!dd) 1758 2005 return RTEMS_INVALID_ID; 1759 2006 1760 pool = rtems_bdbuf_get_pool (dd->pool); 1761 1762 /* 1763 * Take the sync lock before locking the pool. Once we have the sync lock 1764 * we can lock the pool. If another thread has the sync lock it will cause 1765 * this thread to block until it owns the sync lock then it can own the 1766 * pool. The sync lock can only be obtained with the pool unlocked. 1767 */ 1768 1769 rtems_bdbuf_lock_sync (pool); 1770 rtems_bdbuf_lock_pool (pool); 1771 1772 /* 1773 * Set the pool to have a sync active for a specific device and let the swap 2007 /* 2008 * Take the sync lock before locking the cache. Once we have the sync lock we 2009 * can lock the cache. If another thread has the sync lock it will cause this 2010 * thread to block until it owns the sync lock then it can own the cache. The 2011 * sync lock can only be obtained with the cache unlocked. 2012 */ 2013 2014 rtems_bdbuf_lock_sync (); 2015 rtems_bdbuf_lock_cache (); 2016 2017 /* 2018 * Set the cache to have a sync active for a specific device and let the swap 1774 2019 * out task know the id of the requester to wake when done. 1775 2020 * 1776 2021 * The swap out task will negate the sync active flag when no more buffers 1777 * for the device are held on the modified for syncqueues.1778 */ 1779 pool->sync_active = true;1780 pool->sync_requester = rtems_task_self ();1781 pool->sync_device = dev;2022 * for the device are held on the "modified for sync" queues. 
2023 */ 2024 bdbuf_cache.sync_active = true; 2025 bdbuf_cache.sync_requester = rtems_task_self (); 2026 bdbuf_cache.sync_device = dev; 1782 2027 1783 2028 rtems_bdbuf_wake_swapper (); 1784 rtems_bdbuf_unlock_ pool (pool);2029 rtems_bdbuf_unlock_cache (); 1785 2030 1786 2031 sc = rtems_event_receive (RTEMS_BDBUF_TRANSFER_SYNC, … … 1791 2036 rtems_fatal_error_occurred (BLKDEV_FATAL_BDBUF_SWAPOUT_RE); 1792 2037 1793 rtems_bdbuf_unlock_sync ( pool);1794 1795 return rtems_disk_release (dd);2038 rtems_bdbuf_unlock_sync (); 2039 2040 return rtems_disk_release (dd); 1796 2041 } 1797 2042 … … 1818 2063 1819 2064 /** 1820 * Process the modified list of buffers. There us a sync or modified list that 1821 * needs to be handled. 1822 * 1823 * @param pid The pool id to process modified buffers on. 1824 * @param dev The device to handle. If -1 no device is selected so select the 1825 * device of the first buffer to be written to disk. 1826 * @param chain The modified chain to process. 1827 * @param transfer The chain to append buffers to be written too. 1828 * @param sync_active If true this is a sync operation so expire all timers. 1829 * @param update_timers If true update the timers. 1830 * @param timer_delta It update_timers is true update the timers by this 1831 * amount. 2065 * Swapout transfer to the driver. The driver will break this I/O into groups 2066 * of consecutive write requests is multiple consecutive buffers are required 2067 * by the driver. 2068 * 2069 * @param transfer The transfer transaction. 1832 2070 */ 1833 2071 static void 1834 rtems_bdbuf_swapout_modified_processing (rtems_bdpool_id pid, 1835 dev_t* dev, 1836 rtems_chain_control* chain, 1837 rtems_chain_control* transfer, 1838 bool sync_active, 1839 bool update_timers, 1840 uint32_t timer_delta) 1841 { 1842 if (!rtems_chain_is_empty (chain)) 1843 { 1844 rtems_chain_node* node = rtems_chain_head (chain); 1845 node = node->next; 1846 1847 while (!rtems_chain_is_tail (chain, node)) 1848 { 1849 rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node; 1850 1851 if (bd->pool == pid) 1852 { 1853 /* 1854 * Check if the buffer's hold timer has reached 0. If a sync is active 1855 * force all the timers to 0. 1856 * 1857 * @note Lots of sync requests will skew this timer. It should be based 1858 * on TOD to be accurate. Does it matter ? 1859 */ 1860 if (sync_active) 1861 bd->hold_timer = 0; 1862 1863 if (bd->hold_timer) 1864 { 1865 if (update_timers) 1866 { 1867 if (bd->hold_timer > timer_delta) 1868 bd->hold_timer -= timer_delta; 1869 else 1870 bd->hold_timer = 0; 1871 } 1872 1873 if (bd->hold_timer) 1874 { 1875 node = node->next; 1876 continue; 1877 } 1878 } 1879 1880 /* 1881 * This assumes we can set dev_t to -1 which is just an 1882 * assumption. Cannot use the transfer list being empty the sync dev 1883 * calls sets the dev to use. 1884 */ 1885 if (*dev == (dev_t)-1) 1886 *dev = bd->dev; 1887 1888 if (bd->dev == *dev) 1889 { 1890 rtems_chain_node* next_node = node->next; 1891 rtems_chain_node* tnode = rtems_chain_tail (transfer); 1892 1893 /* 1894 * The blocks on the transfer list are sorted in block order. This 1895 * means multi-block transfers for drivers that require consecutive 1896 * blocks perform better with sorted blocks and for real disks it may 1897 * help lower head movement. 
1898 */ 1899 1900 bd->state = RTEMS_BDBUF_STATE_TRANSFER; 1901 1902 rtems_chain_extract (node); 1903 1904 tnode = tnode->previous; 1905 1906 while (node && !rtems_chain_is_head (transfer, tnode)) 1907 { 1908 rtems_bdbuf_buffer* tbd = (rtems_bdbuf_buffer*) tnode; 1909 1910 if (bd->block > tbd->block) 1911 { 1912 rtems_chain_insert (tnode, node); 1913 node = NULL; 1914 } 1915 else 1916 tnode = tnode->previous; 1917 } 1918 1919 if (node) 1920 rtems_chain_prepend (transfer, node); 1921 1922 node = next_node; 1923 } 1924 else 1925 { 1926 node = node->next; 1927 } 1928 } 1929 } 1930 } 1931 } 1932 1933 /** 1934 * Process a pool's modified buffers. Check the sync list first then the 1935 * modified list extracting the buffers suitable to be written to disk. We have 1936 * a device at a time. The task level loop will repeat this operation while 1937 * there are buffers to be written. If the transfer fails place the buffers 1938 * back on the modified list and try again later. The pool is unlocked while 1939 * the buffers are being written to disk. 1940 * 1941 * @param pid The pool id to process modified buffers on. 1942 * @param timer_delta It update_timers is true update the timers by this 1943 * amount. 1944 * @param update_timers If true update the timers. 1945 * @param write_req The write request structure. There is only one. 1946 * 1947 * @retval true Buffers where written to disk so scan again. 1948 * @retval false No buffers where written to disk. 1949 */ 1950 static bool 1951 rtems_bdbuf_swapout_pool_processing (rtems_bdpool_id pid, 1952 unsigned long timer_delta, 1953 bool update_timers, 1954 rtems_blkdev_request* write_req) 1955 { 1956 rtems_bdbuf_pool* pool = rtems_bdbuf_get_pool (pid); 1957 rtems_chain_control transfer; 1958 dev_t dev = -1; 2072 rtems_bdbuf_swapout_write (rtems_bdbuf_swapout_transfer* transfer) 2073 { 1959 2074 rtems_disk_device* dd; 1960 bool transfered_buffers = true; 1961 1962 rtems_chain_initialize_empty (&transfer); 1963 1964 rtems_bdbuf_lock_pool (pool); 1965 1966 /* 1967 * When the sync is for a device limit the sync to that device. If the sync 1968 * is for a buffer handle process the devices in the order on the sync 1969 * list. This means the dev is -1. 1970 */ 1971 if (pool->sync_active) 1972 dev = pool->sync_device; 1973 1974 /* 1975 * If we have any buffers in the sync queue move them to the modified 1976 * list. The first sync buffer will select the device we use. 1977 */ 1978 rtems_bdbuf_swapout_modified_processing (pid, &dev, 1979 &pool->sync, &transfer, 1980 true, false, 1981 timer_delta); 1982 1983 /* 1984 * Process the pool's modified list. 1985 */ 1986 rtems_bdbuf_swapout_modified_processing (pid, &dev, 1987 &pool->modified, &transfer, 1988 pool->sync_active, 1989 update_timers, 1990 timer_delta); 1991 1992 /* 1993 * We have all the buffers that have been modified for this device so the 1994 * pool can be unlocked because the state of each buffer has been set to 1995 * TRANSFER. 1996 */ 1997 rtems_bdbuf_unlock_pool (pool); 2075 2076 #if RTEMS_BDBUF_TRACE 2077 rtems_bdbuf_printf ("swapout transfer: %08x\n", transfer->dev); 2078 #endif 1998 2079 1999 2080 /* 2000 2081 * If there are buffers to transfer to the media transfer them. 2001 2082 */ 2002 if (rtems_chain_is_empty (&transfer)) 2003 transfered_buffers = false; 2004 else 2083 if (!rtems_chain_is_empty (&transfer->bds)) 2005 2084 { 2006 2085 /* 2007 * Obtain the disk device. The pool's mutex has been released to avoid a2086 * Obtain the disk device. 
The cache's mutex has been released to avoid a 2008 2087 * dead lock. 2009 2088 */ 2010 dd = rtems_disk_obtain (dev); 2011 if (dd == NULL) 2012 transfered_buffers = false; 2013 else 2089 dd = rtems_disk_obtain (transfer->dev); 2090 if (dd) 2014 2091 { 2015 2092 /* … … 2028 2105 * trouble waiting to happen. 2029 2106 */ 2030 write_req->status = RTEMS_RESOURCE_IN_USE;2031 write_req->error = 0;2032 write_req->bufnum = 0;2033 2034 while (!rtems_chain_is_empty (&transfer ))2107 transfer->write_req->status = RTEMS_RESOURCE_IN_USE; 2108 transfer->write_req->error = 0; 2109 transfer->write_req->bufnum = 0; 2110 2111 while (!rtems_chain_is_empty (&transfer->bds)) 2035 2112 { 2036 2113 rtems_bdbuf_buffer* bd = 2037 (rtems_bdbuf_buffer*) rtems_chain_get (&transfer );2114 (rtems_bdbuf_buffer*) rtems_chain_get (&transfer->bds); 2038 2115 2039 2116 bool write = false; … … 2046 2123 */ 2047 2124 2048 if ((dd->capabilities & RTEMS_BLKDEV_CAP_MULTISECTOR_CONT) && 2049 write_req->bufnum && 2125 #if RTEMS_BDBUF_TRACE 2126 rtems_bdbuf_printf ("swapout write: bd:%d, bufnum:%d mode:%s\n", 2127 bd->block, transfer->write_req->bufnum, 2128 dd->phys_dev->capabilities & 2129 RTEMS_BLKDEV_CAP_MULTISECTOR_CONT ? "MULIT" : "SCAT"); 2130 #endif 2131 2132 if ((dd->phys_dev->capabilities & RTEMS_BLKDEV_CAP_MULTISECTOR_CONT) && 2133 transfer->write_req->bufnum && 2050 2134 (bd->block != (last_block + 1))) 2051 2135 { 2052 rtems_chain_prepend (&transfer , &bd->link);2136 rtems_chain_prepend (&transfer->bds, &bd->link); 2053 2137 write = true; 2054 2138 } 2055 2139 else 2056 2140 { 2057 write_req->bufs[write_req->bufnum].user = bd; 2058 write_req->bufs[write_req->bufnum].block = bd->block; 2059 write_req->bufs[write_req->bufnum].length = dd->block_size; 2060 write_req->bufs[write_req->bufnum].buffer = bd->buffer; 2061 write_req->bufnum++; 2062 last_block = bd->block; 2141 rtems_blkdev_sg_buffer* buf; 2142 buf = &transfer->write_req->bufs[transfer->write_req->bufnum]; 2143 transfer->write_req->bufnum++; 2144 buf->user = bd; 2145 buf->block = bd->block; 2146 buf->length = dd->block_size; 2147 buf->buffer = bd->buffer; 2148 last_block = bd->block; 2063 2149 } 2064 2150 … … 2068 2154 */ 2069 2155 2070 if (rtems_chain_is_empty (&transfer ) ||2071 ( write_req->bufnum >= rtems_bdbuf_configuration.max_write_blocks))2156 if (rtems_chain_is_empty (&transfer->bds) || 2157 (transfer->write_req->bufnum >= rtems_bdbuf_configuration.max_write_blocks)) 2072 2158 write = true; 2073 2159 … … 2077 2163 uint32_t b; 2078 2164 2165 #if RTEMS_BDBUF_TRACE 2166 rtems_bdbuf_printf ("swapout write: writing bufnum:%d\n", 2167 transfer->write_req->bufnum); 2168 #endif 2079 2169 /* 2080 * Perform the transfer. No poollocks, no preemption, only the disk2170 * Perform the transfer. No cache locks, no preemption, only the disk 2081 2171 * device is being held. 
2082 2172 */ 2083 result = dd-> ioctl (dd->phys_dev->dev,2084 RTEMS_BLKIO_REQUEST,write_req);2173 result = dd->phys_dev->ioctl (dd->phys_dev->dev, 2174 RTEMS_BLKIO_REQUEST, transfer->write_req); 2085 2175 2086 2176 if (result < 0) 2087 2177 { 2088 rtems_bdbuf_lock_ pool (pool);2178 rtems_bdbuf_lock_cache (); 2089 2179 2090 for (b = 0; b < write_req->bufnum; b++)2180 for (b = 0; b < transfer->write_req->bufnum; b++) 2091 2181 { 2092 bd = write_req->bufs[b].user;2182 bd = transfer->write_req->bufs[b].user; 2093 2183 bd->state = RTEMS_BDBUF_STATE_MODIFIED; 2094 2184 bd->error = errno; 2095 2185 2096 2186 /* 2097 * Place back on the pools modified queue and try again.2187 * Place back on the cache's modified queue and try again. 2098 2188 * 2099 2189 * @warning Not sure this is the best option but I do not know 2100 2190 * what else can be done. 2101 2191 */ 2102 rtems_chain_append (& pool->modified, &bd->link);2192 rtems_chain_append (&bdbuf_cache.modified, &bd->link); 2103 2193 } 2104 2194 } … … 2115 2205 rtems_fatal_error_occurred (BLKDEV_FATAL_BDBUF_SWAPOUT_RE); 2116 2206 2117 rtems_bdbuf_lock_ pool (pool);2118 2119 for (b = 0; b < write_req->bufnum; b++)2207 rtems_bdbuf_lock_cache (); 2208 2209 for (b = 0; b < transfer->write_req->bufnum; b++) 2120 2210 { 2121 bd = write_req->bufs[b].user;2211 bd = transfer->write_req->bufs[b].user; 2122 2212 bd->state = RTEMS_BDBUF_STATE_CACHED; 2123 2213 bd->error = 0; 2124 2125 rtems_chain_append (&pool->lru, &bd->link); 2214 bd->group->users--; 2215 2216 rtems_chain_append (&bdbuf_cache.lru, &bd->link); 2126 2217 2127 2218 if (bd->waiters) 2128 rtems_bdbuf_wake ( pool->transfer, &pool->transfer_waiters);2219 rtems_bdbuf_wake (bdbuf_cache.transfer, &bdbuf_cache.transfer_waiters); 2129 2220 else 2130 2221 { 2131 if (rtems_chain_has_only_one_node (& pool->lru))2132 rtems_bdbuf_wake ( pool->waiting, &pool->wait_waiters);2222 if (rtems_chain_has_only_one_node (&bdbuf_cache.lru)) 2223 rtems_bdbuf_wake (bdbuf_cache.waiting, &bdbuf_cache.wait_waiters); 2133 2224 } 2134 2225 } 2135 2226 } 2136 2137 rtems_bdbuf_unlock_ pool (pool);2138 2139 write_req->status = RTEMS_RESOURCE_IN_USE;2140 write_req->error = 0;2141 write_req->bufnum = 0;2227 2228 rtems_bdbuf_unlock_cache (); 2229 2230 transfer->write_req->status = RTEMS_RESOURCE_IN_USE; 2231 transfer->write_req->error = 0; 2232 transfer->write_req->bufnum = 0; 2142 2233 } 2143 2234 } … … 2145 2236 rtems_disk_release (dd); 2146 2237 } 2147 } 2148 2149 if (pool->sync_active && ! transfered_buffers) 2150 { 2151 rtems_id sync_requester = pool->sync_requester; 2152 pool->sync_active = false; 2153 pool->sync_requester = 0; 2238 else 2239 { 2240 /* 2241 * We have buffers but no device. Put the BDs back onto the 2242 * ready queue and exit. 2243 */ 2244 /* @todo fixme */ 2245 } 2246 } 2247 } 2248 2249 /** 2250 * Process the modified list of buffers. There is a sync or modified list that 2251 * needs to be handled so we have a common function to do the work. 2252 * 2253 * @param dev The device to handle. If -1 no device is selected so select the 2254 * device of the first buffer to be written to disk. 2255 * @param chain The modified chain to process. 2256 * @param transfer The chain to append buffers to be written too. 2257 * @param sync_active If true this is a sync operation so expire all timers. 2258 * @param update_timers If true update the timers. 2259 * @param timer_delta It update_timers is true update the timers by this 2260 * amount. 
2261 */ 2262 static void 2263 rtems_bdbuf_swapout_modified_processing (dev_t* dev, 2264 rtems_chain_control* chain, 2265 rtems_chain_control* transfer, 2266 bool sync_active, 2267 bool update_timers, 2268 uint32_t timer_delta) 2269 { 2270 if (!rtems_chain_is_empty (chain)) 2271 { 2272 rtems_chain_node* node = rtems_chain_head (chain); 2273 node = node->next; 2274 2275 while (!rtems_chain_is_tail (chain, node)) 2276 { 2277 rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node; 2278 2279 /* 2280 * Check if the buffer's hold timer has reached 0. If a sync is active 2281 * force all the timers to 0. 2282 * 2283 * @note Lots of sync requests will skew this timer. It should be based 2284 * on TOD to be accurate. Does it matter ? 2285 */ 2286 if (sync_active) 2287 bd->hold_timer = 0; 2288 2289 if (bd->hold_timer) 2290 { 2291 if (update_timers) 2292 { 2293 if (bd->hold_timer > timer_delta) 2294 bd->hold_timer -= timer_delta; 2295 else 2296 bd->hold_timer = 0; 2297 } 2298 2299 if (bd->hold_timer) 2300 { 2301 node = node->next; 2302 continue; 2303 } 2304 } 2305 2306 /* 2307 * This assumes we can set dev_t to -1 which is just an 2308 * assumption. Cannot use the transfer list being empty the sync dev 2309 * calls sets the dev to use. 2310 */ 2311 if (*dev == (dev_t)-1) 2312 *dev = bd->dev; 2313 2314 if (bd->dev == *dev) 2315 { 2316 rtems_chain_node* next_node = node->next; 2317 rtems_chain_node* tnode = rtems_chain_tail (transfer); 2318 2319 /* 2320 * The blocks on the transfer list are sorted in block order. This 2321 * means multi-block transfers for drivers that require consecutive 2322 * blocks perform better with sorted blocks and for real disks it may 2323 * help lower head movement. 2324 */ 2325 2326 bd->state = RTEMS_BDBUF_STATE_TRANSFER; 2327 2328 rtems_chain_extract (node); 2329 2330 tnode = tnode->previous; 2331 2332 while (node && !rtems_chain_is_head (transfer, tnode)) 2333 { 2334 rtems_bdbuf_buffer* tbd = (rtems_bdbuf_buffer*) tnode; 2335 2336 if (bd->block > tbd->block) 2337 { 2338 rtems_chain_insert (tnode, node); 2339 node = NULL; 2340 } 2341 else 2342 tnode = tnode->previous; 2343 } 2344 2345 if (node) 2346 rtems_chain_prepend (transfer, node); 2347 2348 node = next_node; 2349 } 2350 else 2351 { 2352 node = node->next; 2353 } 2354 } 2355 } 2356 } 2357 2358 /** 2359 * Process the cache's modified buffers. Check the sync list first then the 2360 * modified list extracting the buffers suitable to be written to disk. We have 2361 * a device at a time. The task level loop will repeat this operation while 2362 * there are buffers to be written. If the transfer fails place the buffers 2363 * back on the modified list and try again later. The cache is unlocked while 2364 * the buffers are being written to disk. 2365 * 2366 * @param timer_delta It update_timers is true update the timers by this 2367 * amount. 2368 * @param update_timers If true update the timers. 2369 * @param transfer The transfer transaction data. 2370 * 2371 * @retval true Buffers where written to disk so scan again. 2372 * @retval false No buffers where written to disk. 2373 */ 2374 static bool 2375 rtems_bdbuf_swapout_processing (unsigned long timer_delta, 2376 bool update_timers, 2377 rtems_bdbuf_swapout_transfer* transfer) 2378 { 2379 rtems_bdbuf_swapout_worker* worker; 2380 bool transfered_buffers = false; 2381 2382 rtems_bdbuf_lock_cache (); 2383 2384 /* 2385 * If a sync is active do not use a worker because the current code does not 2386 * cleaning up after. 
We need to know the buffers have been written when 2387 * syncing to release sync lock and currently worker threads do not return to 2388 * here. We do not know the worker is the last in a sequence of sync writes 2389 * until after we have it running so we do not know to tell it to release the 2390 * lock. The simplest solution is to get the main swap out task perform all 2391 * sync operations. 2392 */ 2393 if (bdbuf_cache.sync_active) 2394 worker = NULL; 2395 else 2396 { 2397 worker = (rtems_bdbuf_swapout_worker*) 2398 rtems_chain_get (&bdbuf_cache.swapout_workers); 2399 if (worker) 2400 transfer = &worker->transfer; 2401 } 2402 2403 rtems_chain_initialize_empty (&transfer->bds); 2404 transfer->dev = -1; 2405 2406 /* 2407 * When the sync is for a device limit the sync to that device. If the sync 2408 * is for a buffer handle process the devices in the order on the sync 2409 * list. This means the dev is -1. 2410 */ 2411 if (bdbuf_cache.sync_active) 2412 transfer->dev = bdbuf_cache.sync_device; 2413 2414 /* 2415 * If we have any buffers in the sync queue move them to the modified 2416 * list. The first sync buffer will select the device we use. 2417 */ 2418 rtems_bdbuf_swapout_modified_processing (&transfer->dev, 2419 &bdbuf_cache.sync, 2420 &transfer->bds, 2421 true, false, 2422 timer_delta); 2423 2424 /* 2425 * Process the cache's modified list. 2426 */ 2427 rtems_bdbuf_swapout_modified_processing (&transfer->dev, 2428 &bdbuf_cache.modified, 2429 &transfer->bds, 2430 bdbuf_cache.sync_active, 2431 update_timers, 2432 timer_delta); 2433 2434 /* 2435 * We have all the buffers that have been modified for this device so the 2436 * cache can be unlocked because the state of each buffer has been set to 2437 * TRANSFER. 2438 */ 2439 rtems_bdbuf_unlock_cache (); 2440 2441 /* 2442 * If there are buffers to transfer to the media transfer them. 2443 */ 2444 if (!rtems_chain_is_empty (&transfer->bds)) 2445 { 2446 if (worker) 2447 { 2448 rtems_status_code sc = rtems_event_send (worker->id, 2449 RTEMS_BDBUF_SWAPOUT_SYNC); 2450 if (sc != RTEMS_SUCCESSFUL) 2451 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_WAKE); 2452 } 2453 else 2454 { 2455 rtems_bdbuf_swapout_write (transfer); 2456 } 2457 2458 transfered_buffers = true; 2459 } 2460 2461 if (bdbuf_cache.sync_active && !transfered_buffers) 2462 { 2463 rtems_id sync_requester; 2464 rtems_bdbuf_lock_cache (); 2465 sync_requester = bdbuf_cache.sync_requester; 2466 bdbuf_cache.sync_active = false; 2467 bdbuf_cache.sync_requester = 0; 2468 rtems_bdbuf_unlock_cache (); 2154 2469 if (sync_requester) 2155 2470 rtems_event_send (sync_requester, RTEMS_BDBUF_TRANSFER_SYNC); 2156 2471 } 2157 2472 2158 return transfered_buffers; 2159 } 2160 2161 /** 2162 * Body of task which takes care on flushing modified buffers to the disk. 2163 * 2164 * @param arg The task argument which is the context. 2165 */ 2166 static rtems_task 2167 rtems_bdbuf_swapout_task (rtems_task_argument arg) 2168 { 2169 rtems_bdbuf_context* context = (rtems_bdbuf_context*) arg; 2170 rtems_blkdev_request* write_req; 2171 uint32_t period_in_ticks; 2172 const uint32_t period_in_msecs = rtems_bdbuf_configuration.swapout_period; 2173 uint32_t timer_delta; 2174 rtems_status_code sc; 2175 2473 return transfered_buffers; 2474 } 2475 2476 /** 2477 * Allocate the write request and initialise it for good measure. 2478 * 2479 * @return rtems_blkdev_request* The write reference memory. 
2480 */ 2481 static rtems_blkdev_request* 2482 rtems_bdbuf_swapout_writereq_alloc (void) 2483 { 2176 2484 /* 2177 2485 * @note chrisj The rtems_blkdev_request and the array at the end is a hack. … … 2180 2488 * is already part of the buffer structure. 2181 2489 */ 2182 write_req =2490 rtems_blkdev_request* write_req = 2183 2491 malloc (sizeof (rtems_blkdev_request) + 2184 2492 (rtems_bdbuf_configuration.max_write_blocks * … … 2193 2501 write_req->io_task = rtems_task_self (); 2194 2502 2503 return write_req; 2504 } 2505 2506 /** 2507 * The swapout worker thread body. 2508 * 2509 * @param arg A pointer to the worker thread's private data. 2510 * @return rtems_task Not used. 2511 */ 2512 static rtems_task 2513 rtems_bdbuf_swapout_worker_task (rtems_task_argument arg) 2514 { 2515 rtems_bdbuf_swapout_worker* worker = (rtems_bdbuf_swapout_worker*) arg; 2516 2517 while (worker->enabled) 2518 { 2519 rtems_event_set out; 2520 rtems_status_code sc; 2521 2522 sc = rtems_event_receive (RTEMS_BDBUF_SWAPOUT_SYNC, 2523 RTEMS_EVENT_ALL | RTEMS_WAIT, 2524 RTEMS_NO_TIMEOUT, 2525 &out); 2526 2527 if (sc != RTEMS_SUCCESSFUL) 2528 rtems_fatal_error_occurred (BLKDEV_FATAL_BDBUF_SWAPOUT_RE); 2529 2530 rtems_bdbuf_swapout_write (&worker->transfer); 2531 2532 rtems_bdbuf_lock_cache (); 2533 2534 rtems_chain_initialize_empty (&worker->transfer.bds); 2535 worker->transfer.dev = -1; 2536 2537 rtems_chain_append (&bdbuf_cache.swapout_workers, &worker->link); 2538 2539 rtems_bdbuf_unlock_cache (); 2540 } 2541 2542 free (worker->transfer.write_req); 2543 free (worker); 2544 2545 rtems_task_delete (RTEMS_SELF); 2546 } 2547 2548 /** 2549 * Open the swapout worker threads. 2550 */ 2551 static void 2552 rtems_bdbuf_swapout_workers_open (void) 2553 { 2554 rtems_status_code sc; 2555 int w; 2556 2557 rtems_bdbuf_lock_cache (); 2558 2559 for (w = 0; w < rtems_bdbuf_configuration.swapout_workers; w++) 2560 { 2561 rtems_bdbuf_swapout_worker* worker; 2562 2563 worker = malloc (sizeof (rtems_bdbuf_swapout_worker)); 2564 if (!worker) 2565 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_NOMEM); 2566 2567 rtems_chain_append (&bdbuf_cache.swapout_workers, &worker->link); 2568 worker->enabled = true; 2569 worker->transfer.write_req = rtems_bdbuf_swapout_writereq_alloc (); 2570 2571 rtems_chain_initialize_empty (&worker->transfer.bds); 2572 worker->transfer.dev = -1; 2573 2574 sc = rtems_task_create (rtems_build_name('B', 'D', 'o', 'a' + w), 2575 (rtems_bdbuf_configuration.swapout_priority ? 2576 rtems_bdbuf_configuration.swapout_priority : 2577 RTEMS_BDBUF_SWAPOUT_TASK_PRIORITY_DEFAULT), 2578 SWAPOUT_TASK_STACK_SIZE, 2579 RTEMS_PREEMPT | RTEMS_NO_TIMESLICE | RTEMS_NO_ASR, 2580 RTEMS_LOCAL | RTEMS_NO_FLOATING_POINT, 2581 &worker->id); 2582 if (sc != RTEMS_SUCCESSFUL) 2583 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_CREATE); 2584 2585 sc = rtems_task_start (worker->id, 2586 rtems_bdbuf_swapout_worker_task, 2587 (rtems_task_argument) worker); 2588 if (sc != RTEMS_SUCCESSFUL) 2589 rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_START); 2590 } 2591 2592 rtems_bdbuf_unlock_cache (); 2593 } 2594 2595 /** 2596 * Close the swapout worker threads. 
2597 */ 2598 static void 2599 rtems_bdbuf_swapout_workers_close (void) 2600 { 2601 rtems_chain_node* node; 2602 2603 rtems_bdbuf_lock_cache (); 2604 2605 node = rtems_chain_first (&bdbuf_cache.swapout_workers); 2606 while (!rtems_chain_is_tail (&bdbuf_cache.swapout_workers, node)) 2607 { 2608 rtems_bdbuf_swapout_worker* worker = (rtems_bdbuf_swapout_worker*) node; 2609 worker->enabled = false; 2610 rtems_event_send (worker->id, RTEMS_BDBUF_SWAPOUT_SYNC); 2611 node = rtems_chain_next (node); 2612 } 2613 2614 rtems_bdbuf_unlock_cache (); 2615 } 2616 2617 /** 2618 * Body of task which takes care of flushing modified buffers to the disk. 2619 * 2620 * @param arg A pointer to the global cache data. Use the global variable and 2621 * not this. 2622 * @return rtems_task Not used. 2623 */ 2624 static rtems_task 2625 rtems_bdbuf_swapout_task (rtems_task_argument arg) 2626 { 2627 rtems_bdbuf_swapout_transfer transfer; 2628 uint32_t period_in_ticks; 2629 const uint32_t period_in_msecs = bdbuf_config.swapout_period; 2630 uint32_t timer_delta; 2631 2632 transfer.write_req = rtems_bdbuf_swapout_writereq_alloc (); 2633 rtems_chain_initialize_empty (&transfer.bds); 2634 transfer.dev = -1; 2635 2636 /* 2637 * Localise the period. 2638 */ 2195 2639 period_in_ticks = RTEMS_MICROSECONDS_TO_TICKS (period_in_msecs * 1000); 2196 2640 … … 2200 2644 timer_delta = period_in_msecs; 2201 2645 2202 while (context->swapout_enabled) 2203 { 2204 rtems_event_set out; 2646 /* 2647 * Create the worker threads. 2648 */ 2649 rtems_bdbuf_swapout_workers_open (); 2650 2651 while (bdbuf_cache.swapout_enabled) 2652 { 2653 rtems_event_set out; 2654 rtems_status_code sc; 2205 2656 2206 2657 /* … … 2211 2661 /* 2212 2662 * If we write buffers to any disk perform a check again. We only write a 2213 * single device at a time and a pool may have more than one devices 2663 * single device at a time and the cache may have more than one device's 2214 2664 * buffers modified waiting to be written. 2215 2665 */ … … 2218 2668 do 2219 2669 { 2220 rtems_bdpool_id pid; 2221 2222 2670 transfered_buffers = false; 2223 2671 2224 2672 /* 2225 * Loop over each pool extacting all the buffers we find for a specific 2226 * device. The device is the first one we find on a modified list of a 2227 * pool. Process the sync queue of buffers first. 2673 * Extract all the buffers we find for a specific device. The device is 2674 * the first one we find on a modified list. Process the sync queue of 2675 * buffers first. 2228 2676 */ 2229 for (pid = 0; pid < context->npools; pid++) 2677 if (rtems_bdbuf_swapout_processing (timer_delta, 2678 update_timers, 2679 &transfer)) 2230 2680 { 2231 if (rtems_bdbuf_swapout_pool_processing (pid, 2232 timer_delta, 2233 update_timers, 2234 write_req)) 2235 { 2236 transfered_buffers = true; 2237 } 2681 transfered_buffers = true; 2238 2682 } 2239 2683 2240 2684 /* 2241 2685 * Only update the timers once.
… … 2254 2698 } 2255 2699 2256 free (write_req); 2700 rtems_bdbuf_swapout_workers_close (); 2701 2702 free (transfer.write_req); 2257 2703 2258 2704 rtems_task_delete (RTEMS_SELF); 2259 2705 } 2260 2706 2261 rtems_status_code2262 rtems_bdbuf_find_pool (uint32_t block_size, rtems_bdpool_id *pool)2263 {2264 rtems_bdbuf_pool* p;2265 rtems_bdpool_id i;2266 rtems_bdpool_id curid = -1;2267 bool found = false;2268 uint32_t cursize = UINT_MAX;2269 int j;2270 2271 for (j = block_size; (j != 0) && ((j & 1) == 0); j >>= 1);2272 if (j != 1)2273 return RTEMS_INVALID_SIZE;2274 2275 for (i = 0; i < rtems_bdbuf_ctx.npools; i++)2276 {2277 p = rtems_bdbuf_get_pool (i);2278 if ((p->blksize >= block_size) &&2279 (p->blksize < cursize))2280 {2281 curid = i;2282 cursize = p->blksize;2283 found = true;2284 }2285 }2286 2287 if (found)2288 {2289 if (pool != NULL)2290 *pool = curid;2291 return RTEMS_SUCCESSFUL;2292 }2293 else2294 {2295 return RTEMS_NOT_DEFINED;2296 }2297 }2298 2299 rtems_status_code rtems_bdbuf_get_pool_info(2300 rtems_bdpool_id pool,2301 uint32_t *block_size,2302 uint32_t *blocks2303 )2304 {2305 if (pool >= rtems_bdbuf_ctx.npools)2306 return RTEMS_INVALID_NUMBER;2307 2308 if (block_size != NULL)2309 {2310 *block_size = rtems_bdbuf_ctx.pool[pool].blksize;2311 }2312 2313 if (blocks != NULL)2314 {2315 *blocks = rtems_bdbuf_ctx.pool[pool].nblks;2316 }2317 2318 return RTEMS_SUCCESSFUL;2319 } -
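A note on the new sizing scheme that runs through the bdbuf.c hunks above: every cache lookup is qualified by bds_per_group, the number of buffer descriptors a group is split into for the requested block size, and a cached buffer whose group has a different split is recycled through the ready list. The helper rtems_bdbuf_bds_per_group that computes the value is not shown in this changeset, so the stand-alone sketch below only illustrates the idea; the minimum and maximum buffer sizes are hypothetical stand-ins for the user's cache configuration, not RTEMS names.

#include <stddef.h>

#define MIN_BUFFER_SIZE 512   /* assumed configuration values for the sketch */
#define MAX_BUFFER_SIZE 4096

/*
 * Return how many descriptors a group splits into for this block size, or 0
 * when the size cannot be satisfied. Buffer sizes grow in powers of two from
 * the minimum and a group always spans MAX_BUFFER_SIZE bytes of memory.
 */
static size_t
bds_per_group_sketch (size_t block_size)
{
  size_t size = MIN_BUFFER_SIZE;
  size_t bds  = MAX_BUFFER_SIZE / MIN_BUFFER_SIZE;

  if (block_size == 0 || block_size > MAX_BUFFER_SIZE)
    return 0;

  while (size < block_size)  /* double until one buffer holds a whole block */
  {
    size <<= 1;
    bds  >>= 1;
  }

  return bds;                /* 1 for the maximum size, 8 for 512 bytes here */
}

With these constants a 1024 byte file system block gives 4 descriptors per group, so a buffer found in the AVL tree with bd->group->bds_per_group != 4 is pushed back onto the ready list and the search falls through to allocate a fresh buffer of the right size.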
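The sorted insert into the transfer chain in rtems_bdbuf_swapout_modified_processing scans backwards from the tail, which is cheap because modified buffers usually arrive in roughly ascending block order. The same logic is easier to follow outside the RTEMS chain API; this stand-alone sketch uses a plain doubly linked list with sentinel head and tail nodes and hypothetical type names, not the rtems_chain types.

#include <stdint.h>

typedef struct node {
  struct node* next;
  struct node* prev;
  uint32_t     block;      /* media block number used as the sort key */
} node;

typedef struct {
  node head;               /* head.next is the first real element */
  node tail;               /* tail.prev is the last real element */
} chain;

/* Link the sentinels of an empty chain. */
static void
chain_init (chain* c)
{
  c->head.next = &c->tail;
  c->head.prev = NULL;
  c->tail.next = NULL;
  c->tail.prev = &c->head;
}

/* Insert bd keeping the chain sorted by ascending block number. */
static void
transfer_insert_sorted (chain* transfer, node* bd)
{
  node* tnode = transfer->tail.prev;

  while (tnode != &transfer->head)
  {
    if (bd->block > tnode->block)
    {
      bd->next          = tnode->next;   /* splice in after tnode */
      bd->prev          = tnode;
      tnode->next->prev = bd;
      tnode->next       = bd;
      return;
    }
    tnode = tnode->prev;
  }

  bd->next                  = transfer->head.next;  /* smallest block so */
  bd->prev                  = &transfer->head;      /* far: prepend      */
  transfer->head.next->prev = bd;
  transfer->head.next       = bd;
}

Sorted transfers let drivers that advertise RTEMS_BLKDEV_CAP_MULTISECTOR_CONT receive consecutive blocks in a single request, and on rotating media they help reduce head movement.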
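Taken together, the hunks above rework the internals while the public entry points keep their shape. As a reader aid, here is a minimal sketch of the client-side flow through the buffer states discussed in the comments: read, modify, release as modified, then sync. It is illustrative only, not code from the changeset; it assumes a registered disk dev and the signatures visible in this diff (rtems_bdbuf_read, rtems_bdbuf_release_modified, rtems_bdbuf_syncdev), with error handling cut down to early returns.

#include <rtems.h>
#include <rtems/bdbuf.h>
#include <string.h>

/* Read a block, update its contents and force it out to the media. */
static rtems_status_code
update_block (dev_t dev, rtems_blkdev_bnum block, const void* data, size_t size)
{
  rtems_bdbuf_buffer* bd;
  rtems_status_code   sc;

  sc = rtems_bdbuf_read (dev, block, &bd);   /* bd is now in an access state */
  if (sc != RTEMS_SUCCESSFUL)
    return sc;

  memcpy (bd->buffer, data, size);           /* modify the cached contents */

  sc = rtems_bdbuf_release_modified (bd);    /* queue on the modified list */
  if (sc != RTEMS_SUCCESSFUL)
    return sc;

  return rtems_bdbuf_syncdev (dev);          /* flush this device's buffers */
}

Releasing as modified starts the hold timer, so without the final syncdev the block would reach the disk only once the timer expires and the swapout task writes it.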
cpukit/libblock/src/blkdev.c
rf14a21df r0d15414e 38 38 { 39 39 rtems_libio_rw_args_t *args = arg; 40 int block_size_log2;41 40 int block_size; 42 41 char *buf; … … 52 51 return RTEMS_INVALID_NUMBER; 53 52 54 block_size_log2 = dd->block_size_log2;55 53 block_size = dd->block_size; 56 54 … … 59 57 args->bytes_moved = 0; 60 58 61 block = args->offset >> block_size_log2;62 blkofs = args->offset & (block_size - 1);59 block = args->offset / block_size; 60 blkofs = args->offset % block_size; 63 61 64 62 while (count > 0) … … 98 96 { 99 97 rtems_libio_rw_args_t *args = arg; 100 int block_size_log2;101 98 uint32_t block_size; 102 99 char *buf; … … 113 110 return RTEMS_INVALID_NUMBER; 114 111 115 block_size_log2 = dd->block_size_log2;116 112 block_size = dd->block_size; 117 113 … … 120 116 args->bytes_moved = 0; 121 117 122 block = args->offset >> block_size_log2;123 blkofs = args->offset & (block_size - 1);118 block = args->offset / block_size; 119 blkofs = args->offset % block_size; 124 120 125 121 while (count > 0) -
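The generic blkdev read and write handlers above drop the cached block_size_log2 shift and mask pair in favour of plain division and modulo, which is what permits block sizes that are not powers of two. A trivial demonstration of the arithmetic (not code from the changeset):

#include <stdint.h>
#include <stdio.h>

/* Split a byte offset into a block number and the offset inside that block. */
static void
split_offset (uint32_t offset, uint32_t block_size,
              uint32_t* block, uint32_t* blkofs)
{
  *block  = offset / block_size;  /* works for any block size */
  *blkofs = offset % block_size;  /* the old mask required a power of two */
}

int
main (void)
{
  uint32_t block;
  uint32_t blkofs;

  split_offset (5000, 1536, &block, &blkofs);   /* 1536 is not a power of two */
  printf ("block %u, offset %u\n", block, blkofs);  /* block 3, offset 392 */
  return 0;
}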
cpukit/libblock/src/diskdevs.c
rf14a21df r0d15414e 227 227 ) 228 228 { 229 int bs_log2;230 int i;231 229 rtems_disk_device *dd; 232 230 rtems_status_code rc; 233 rtems_bdpool_id pool;234 231 rtems_device_major_number major; 235 232 rtems_device_minor_number minor; 236 233 237 234 rtems_filesystem_split_dev_t (dev, major, minor); 238 239 240 for (bs_log2 = 0, i = block_size; (i & 1) == 0; i >>= 1, bs_log2++);241 if ((bs_log2 < 9) || (i != 1)) /* block size < 512 or not power of 2 */242 return RTEMS_INVALID_NUMBER;243 235 244 236 rc = rtems_semaphore_obtain(diskdevs_mutex, RTEMS_WAIT, RTEMS_NO_TIMEOUT); … … 247 239 diskdevs_protected = true; 248 240 249 rc = rtems_bdbuf_find_pool(block_size, &pool);250 if (rc != RTEMS_SUCCESSFUL)251 {252 diskdevs_protected = false;253 rtems_semaphore_release(diskdevs_mutex);254 return rc;255 }256 257 241 rc = create_disk(dev, name, &dd); 258 242 if (rc != RTEMS_SUCCESSFUL) … … 267 251 dd->start = 0; 268 252 dd->size = disk_size; 269 dd->block_size = block_size; 270 dd->block_size_log2 = bs_log2; 253 dd->block_size = dd->media_block_size = block_size; 271 254 dd->ioctl = handler; 272 dd->pool = pool;273 255 274 256 rc = rtems_io_register_name(name, major, minor); … … 334 316 dd->size = size; 335 317 dd->block_size = pdd->block_size; 336 dd->block_size_log2 = pdd->block_size_log2;337 318 dd->ioctl = pdd->ioctl; 338 319 … … 556 537 rc = rtems_semaphore_delete(diskdevs_mutex); 557 538 558 /* XXX bdbuf should be released too! */559 539 disk_io_initialized = 0; 560 540 return rc;
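The lines deleted from the disk creation path above were the old gatekeeper: block sizes below 512 bytes or not a power of two were rejected before the pool lookup. Reconstructed as a predicate for illustration (with a zero guard added, since the original loop would spin forever on a zero size), the removed test was equivalent to:

#include <stdbool.h>
#include <stdint.h>

/* True only for power-of-two block sizes of at least 512 bytes (2^9). */
static bool
old_block_size_ok (uint32_t block_size)
{
  int      bs_log2;
  uint32_t i;

  if (block_size == 0)
    return false;  /* guard: the removed loop did not terminate on zero */

  /* Strip factors of two, counting them in bs_log2. */
  for (bs_log2 = 0, i = block_size; (i & 1) == 0; i >>= 1, bs_log2++)
    ;

  return (bs_log2 >= 9) && (i == 1);
}

After this change the requested size is stored directly in both block_size and media_block_size, and validation moves to the cache: as the rtems_bdbuf_get hunk earlier shows, a block size for which rtems_bdbuf_bds_per_group finds no usable group fails with RTEMS_INVALID_NUMBER.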