Changeset 0d15414e in rtems


Timestamp:
Aug 5, 2009, 12:00:54 AM
Author:
Chris Johns <chrisj@…>
Branches:
4.10, 4.11, master
Children:
6605d4d
Parents:
f14a21df
Message:

2009-08-05 Chris Johns <chrisj@…>

  • libmisc/dummy/dummy-networking.c: New.
  • libmisc/dummy/dummy.c, libmisc/Makefile.am: Move the networking configuration into a separate file so configuration variations do not cause conflicts.
  • score/inline/rtems/score/object.inl, score/include/rtems/score/object.h: Remove warnings.
  • score/inline/rtems/score/object.inl: Add _Chain_First, _Chain_Last, _Chain_Next, and _Chain_Previous.
  • sapi/inline/rtems/chain.inl: Add rtems_chain_first, rtems_chain_last, rtems_chain_next, and rtems_chain_previous (see the usage sketch after the file summary below).
  • libblock/include/rtems/diskdevs.h: Remove the bdbuf pool id and block_size_log2. Add media_block_size.
  • libblock/src/diskdevs.c: Remove size restrictions on block size. Add media block size initialisation. Remove comment to clean up the bdbuf cache.
  • libblock/src/blkdev.c: Remove references to block_size_log2. Allow any block size.
  • libblock/include/rtems/bdbuf.h, libblock/src/bdbuf.c: Remove all references to pools and make the cache handle demand-driven variable buffer size allocation. Add worker threads to support the swapout task.
  • sapi/include/confdefs.h: Update the bdbuf configuration.
Location:
cpukit
Files:
1 added
13 edited
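
The new chain accessors added in sapi/inline/rtems/chain.inl can be used to walk a chain without reaching into the score internals. A minimal sketch, assuming the usual pattern of embedding rtems_chain_node as the first member of a user structure; my_node_t and sum_chain are hypothetical names, not part of this changeset:

    #include <rtems.h>
    #include <rtems/chain.h>

    typedef struct
    {
      rtems_chain_node node;  /* first member so the cast below is valid */
      int              value;
    } my_node_t;

    /* Sum the values of all nodes on a chain using the new accessors. */
    static int
    sum_chain (rtems_chain_control* chain)
    {
      rtems_chain_node* node = rtems_chain_first (chain);
      int               sum  = 0;

      while (!rtems_chain_is_tail (chain, node))
      {
        sum += ((my_node_t*) node)->value;
        node = rtems_chain_next (node);
      }

      return sum;
    }

The swapout task in libblock/src/bdbuf.c uses the same first/next/is-tail idiom when scanning its BD lists.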

  • cpukit/ChangeLog

    rf14a21df r0d15414e  
     12009-08-05      Chris Johns <chrisj@rtems.org>
     2
     3        * libmisc/dummy/dummy-networking.c: New. 
     4        * libmisc/dummy/dummy.c, libmisc/Makefile.am: Move
     5        trhe networking configuration into a separate file so
     6        configuration varations do not cause conflicts.
     7        * score/inline/rtems/score/object.inl,
     8        score/include/rtems/score/object.h: Remove warnings.
     9        * score/inline/rtems/score/object.inl: Add _Chain_First,
     10        _Chain_Last, _Chain_Mext, and _Chain_Previous.
     11        * sapi/inline/rtems/chain.inl: Add rtems_chain_first,
     12        rtems_chain_last, rtems_chain_mext, and rtems_chain_previous.
     13        * libblock/include/rtems/diskdevs.h: Remove the bdbuf pool id and
     14        block_size_log2. Add media_block_size.
     15        * libblock/src/diskdevs.c: Remove size restrictions on block
     16        size. Add media block size initialisation. Remove comment to clean
     17        up the bdbuf cache.
     18        * libblock/src/blkdev.c: Remove references to
     19        block_size_log2. Allow any block size.
     20        * libblock/include/rtems/bdbuf.h, libblock/src/bdbuf.c: Remove all
     21        references to pools and make the cache handle demand driver
     22        variable buffer size allocation. Added worker threads support the
     23        swapout task.
     24        * sapi/include/confdefs.h: Updated the bdbuf configutation.
     25       
    1262009-08-04      Joel Sherrill <joel.sherrill@OARcorp.com>
    227
  • cpukit/libblock/include/rtems/bdbuf.h

    rf14a21df r0d15414e  
    1111 * Author: Victor V. Vengerov <vvv@oktet.ru>
    1212 *
    13  * Copyright (C) 2008 Chris Johns <chrisj@rtems.org>
     13 * Copyright (C) 2008,2009 Chris Johns <chrisj@rtems.org>
    1414 *    Rewritten to remove score mutex access. Fixes many performance
    1515 *    issues.
     16      Change to support demand driven variable buffer sizes.
    1617 *
    1718 * @(#) bdbuf.h,v 1.9 2005/02/02 00:06:18 joel Exp
     
    4546 * the drivers and fast cache look up using an AVL tree.
    4647 *
    47  * The buffers are held in pools based on size. Each pool has buffers and the
    48  * buffers follow this state machine:
     48 * The block size used by a file system can be set at runtime and must be a
     49 * multiple of the disk device block size. The disk device's physical block
     50 * size is called the media block size. The file system can set the block size
     51 * it uses to a larger multiple of the media block size. The driver must be
     52 * able to handle buffers sizes larger than one media block.
     53 *
     54 * The user configures the amount of memory to be used as buffers in the cache,
     55 * and the minimum and maximum buffer size. The cache will allocate additional
     56 * memory for the buffer descriptors and groups. There are enough buffer
     57 * descriptors allocated so all the buffer memory can be used as minimum sized
     58 * buffers.
     59 *
     60 * The cache is a single pool of buffers. The buffer memory is divided into
     61 * groups where the size of buffer memory allocated to a group is the maximum
     62 * buffer size. A group's memory can be divided down into small buffer sizes
     63 * that are a multiple of 2 of the minimum buffer size. A group is the minumum
     64 * allocation unit for buffers of a specific size. If a buffer of maximum size
     65 * is request the group will have a single buffer. If a buffer of minium size
     66 * is requested the group is divided into minimum sized buffers and the
     67 * remaining buffers are held ready for use. A group keeps track of which
     68 * buffers are with a file system or driver and groups who have buffer in use
     69 * cannot be realloced. Groups with no buffers in use can be taken and
     70 * realloced to a new size. This is how buffers of different sizes move around
     71 * the cache.
     72
     73 * The buffers are held in various lists in the cache. All buffers follow this
     74 * state machine:
    4975 *                                 
    5076 * @dot
     
    83109 * buffer in the transfer state. The transfer state means being read or
    84110 * written. If the file system has modifed the block and releases it as
    85  * modified it placed on the pool's modified list and a hold timer
     111 * modified it placed on the cache's modified list and a hold timer
    86112 * initialised. The buffer is held for the hold time before being written to
    87113 * disk. Buffers are held for a configurable period of time on the modified
    88114 * list as a write sets the state to transfer and this locks the buffer out
    89  * from the file system until the write complete. Buffers are often repeatable
    90  * accessed and modified in a series of small updates so if sent to the disk
    91  * when released as modified the user would have to block waiting until it had
    92  * been written. This would be a performance problem.
     115 * from the file system until the write completes. Buffers are often accessed
     116 * and modified in a series of small updates so if sent to the disk when
     117 * released as modified the user would have to block waiting until it had been
     118 * written. This would be a performance problem.
    93119 *
    94120 * The code performs mulitple block reads and writes. Multiple block reads or
     
    104130 * the file system.
    105131 *
    106  * The pool has the following lists of buffers:
     132 * The cache has the following lists of buffers:
    107133 *  - @c ready: Empty buffers created when the pool is initialised.
    108134 *  - @c modified: Buffers waiting to be written to disk.
    109135 *  - @c sync: Buffers to be synced to disk.
    110136 *  - @c lru: Accessed buffers released in least recently used order.
     137 *
     138 * The cache scans the ready list then the LRU list for a suitable buffer in
     139 * this order. A suitable buffer is one that matches the same allocation size
     140 * as the device the buffer is for. The a buffer's group has no buffers in use
     141 * with the file system or driver the group is reallocated. This means the
     142 * buffers in the group are invalidated, resized and placed on the ready queue.
     143 * There is a performance issue with this design. The reallocation of a group
     144 * may forced recently accessed buffers out of the cache when they should
     145 * not. The design should be change to have groups on a LRU list if they have
     146 * no buffers in use.
    111147 *
    112148 * @{
     
    129165
    130166/**
     167 * Forward reference to the block.
     168 */
     169struct rtems_bdbuf_group;
     170typedef struct rtems_bdbuf_group rtems_bdbuf_group;
     171
     172/**
    131173 * To manage buffers we using buffer descriptors (BD). A BD holds a buffer plus
    132174 * a range of other information related to managing the buffer in the cache. To
    133  * speed-up buffer lookup descriptors are organized in AVL-Tree.  The fields
     175 * speed-up buffer lookup descriptors are organized in AVL-Tree. The fields
    134176 * 'dev' and 'block' are search keys.
    135177 */
    136178typedef struct rtems_bdbuf_buffer
    137179{
    138   rtems_chain_node link;       /* Link in the BD onto a number of lists. */
     180  rtems_chain_node link;       /**< Link the BD onto a number of lists. */
    139181
    140182  struct rtems_bdbuf_avl_node
     
    155197  volatile rtems_bdbuf_buf_state state;  /**< State of the buffer. */
    156198
    157   volatile uint32_t waiters;    /**< The number of threads waiting on this
    158                                  * buffer. */
    159   rtems_bdpool_id pool;         /**< Identifier of buffer pool to which this buffer
    160                                     belongs */
    161 
    162   volatile uint32_t hold_timer; /**< Timer to indicate how long a buffer
    163                                  * has been held in the cache modified. */
     199  volatile uint32_t  waiters;    /**< The number of threads waiting on this
     200                                  * buffer. */
     201  rtems_bdbuf_group* group;      /**< Pointer to the group of BDs this BD is
     202                                  * part of. */
     203  volatile uint32_t  hold_timer; /**< Timer to indicate how long a buffer
     204                                  * has been held in the cache modified. */
    164205} rtems_bdbuf_buffer;
    165206
    166207/**
    167  * The groups of the blocks with the same size are collected in a pool. Note
    168  * that a several of the buffer's groups with the same size can exists.
    169  */
    170 typedef struct rtems_bdbuf_pool
     208 * A group is a continuous block of buffer descriptors. A group covers the
     209 * maximum configured buffer size and is the allocation size for the buffers to
     210 * a specific buffer size. If you allocate a buffer to be a specific size, all
     211 * buffers in the group, if there are more than 1 will also be that size. The
     212 * number of buffers in a group is a multiple of 2, ie 1, 2, 4, 8, etc.
     213 */
     214struct rtems_bdbuf_group
    171215{
    172   uint32_t            blksize;           /**< The size of the blocks (in bytes) */
    173   uint32_t            nblks;             /**< Number of blocks in this pool */
    174 
    175   uint32_t            flags;             /**< Configuration flags */
    176 
    177   rtems_id            lock;              /**< The pool lock. Lock this data and
    178                                           * all BDs. */
    179   rtems_id            sync_lock;         /**< Sync calls lock writes. */
    180   bool                sync_active;       /**< True if a sync is active. */
    181   rtems_id            sync_requester;    /**< The sync requester. */
    182   dev_t               sync_device;       /**< The device to sync */
    183 
    184   rtems_bdbuf_buffer* tree;             /**< Buffer descriptor lookup AVL tree
    185                                          * root */
    186   rtems_chain_control ready;            /**< Free buffers list (or read-ahead) */
    187   rtems_chain_control lru;              /**< Last recently used list */
    188   rtems_chain_control modified;         /**< Modified buffers list */
    189   rtems_chain_control sync;             /**< Buffers to sync list */
    190 
    191   rtems_id            access;           /**< Obtain if waiting for a buffer in the
    192                                          * ACCESS state. */
    193   volatile uint32_t   access_waiters;   /**< Count of access blockers. */
    194   rtems_id            transfer;         /**< Obtain if waiting for a buffer in the
    195                                          * TRANSFER state. */
    196   volatile uint32_t   transfer_waiters; /**< Count of transfer blockers. */
    197   rtems_id            waiting;          /**< Obtain if waiting for a buffer and the
    198                                          * none are available. */
    199   volatile uint32_t   wait_waiters;     /**< Count of waiting blockers. */
    200 
    201   rtems_bdbuf_buffer* bds;              /**< Pointer to table of buffer descriptors
    202                                          * allocated for this buffer pool. */
    203   void*               buffers;          /**< The buffer's memory. */
    204 } rtems_bdbuf_pool;
    205 
    206 /**
    207  * Configuration structure describes block configuration (size, amount, memory
    208  * location) for buffering layer pool.
    209  */
    210 typedef struct rtems_bdbuf_pool_config {
    211   int            size;      /**< Size of block */
    212   int            num;       /**< Number of blocks of appropriate size */
    213   unsigned char* mem_area;  /**< Pointer to the blocks location or NULL, in this
    214                              * case memory for blocks will be allocated by
    215                              * Buffering Layer with the help of RTEMS partition
    216                              * manager */
    217 } rtems_bdbuf_pool_config;
    218 
    219 /**
    220  * External reference to the pool configuration table describing each pool in
    221  * the system.
    222  *
    223  * The configuration table is provided by the application.
    224  */
    225 extern rtems_bdbuf_pool_config rtems_bdbuf_pool_configuration[];
    226 
    227 /**
    228  * External reference the size of the pool configuration table
    229  * @ref rtems_bdbuf_pool_configuration.
    230  *
    231  * The configuration table size is provided by the application.
    232  */
    233 extern size_t rtems_bdbuf_pool_configuration_size;
     216  rtems_chain_node    link;          /**< Link the groups on a LRU list if they
     217                                      * have no buffers in use. */
     218  size_t              bds_per_group; /**< The number of BD allocated to this
     219                                      * group. This value must be a multiple of
     220                                      * 2. */
     221  uint32_t            users;         /**< How many users the block has. */
     222  rtems_bdbuf_buffer* bdbuf;         /**< First BD this block covers. */
     223};
    234224
    235225/**
     
    238228 */
    239229typedef struct rtems_bdbuf_config {
    240   uint32_t            max_read_ahead_blocks; /**< Number of blocks to read ahead. */
    241   uint32_t            max_write_blocks;      /**< Number of blocks to write at once. */
    242   rtems_task_priority swapout_priority;      /**< Priority of the swap out task. */
    243   uint32_t            swapout_period;        /**< Period swapout checks buf timers. */
    244   uint32_t            swap_block_hold;       /**< Period a buffer is held. */
     230  uint32_t            max_read_ahead_blocks;   /**< Number of blocks to read
     231                                                * ahead. */
     232  uint32_t            max_write_blocks;        /**< Number of blocks to write
     233                                                * at once. */
     234  rtems_task_priority swapout_priority;        /**< Priority of the swap out
     235                                                * task. */
     236  uint32_t            swapout_period;          /**< Period swapout checks buf
     237                                                * timers. */
     238  uint32_t            swap_block_hold;         /**< Period a buffer is held. */
     239  uint32_t            swapout_workers;         /**< The number of worker
     240                                                * threads for the swapout
     241                                                * task. */
     242  rtems_task_priority swapout_worker_priority; /**< Priority of the swap out
     243                                                * task. */
     244  size_t              size;                    /**< Size of memory in the
     245                                                * cache */
     246  uint32_t            buffer_min;              /**< Minimum buffer size. */
     247  uint32_t            buffer_max;              /**< Maximum buffer size
     248                                                * supported. It is also the
     249                                                * allocation size. */
    245250} rtems_bdbuf_config;
    246251
     
    250255 * The configuration is provided by the application.
    251256 */
    252 extern rtems_bdbuf_config rtems_bdbuf_configuration;
     257extern const rtems_bdbuf_config rtems_bdbuf_configuration;
    253258
    254259/**
     
    279284
    280285/**
     286 * Default swap-out worker tasks. Currently disabled.
     287 */
     288#define RTEMS_BDBUF_SWAPOUT_WORKER_TASKS_DEFAULT     0
     289
     290/**
     291 * Default swap-out worker task priority. The same as the swapout task.
     292 */
     293#define RTEMS_BDBUF_SWAPOUT_WORKER_TASK_PRIORITY_DEFAULT \
     294                             RTEMS_BDBUF_SWAPOUT_TASK_PRIORITY_DEFAULT
     295
     296/**
     297 * Default size of memory allocated to the cache.
     298 */
     299#define RTEMS_BDBUF_CACHE_MEMORY_SIZE_DEFAULT (64 * 512)
     300
     301/**
     302 * Default minimum size of buffers.
     303 */
     304#define RTEMS_BDBUF_BUFFER_MIN_SIZE_DEFAULT (512)
     305
     306/**
     307 * Default maximum size of buffers.
     308 */
     309#define RTEMS_BDBUF_BUFFER_MAX_SIZE_DEFAULT (4096)
     310
     311/**
    281312 * Prepare buffering layer to work - initialize buffer descritors and (if it is
    282  * neccessary) buffers. Buffers will be allocated accoriding to the
    283  * configuration table, each entry describes the size of block and the size of
    284  * the pool. After initialization all blocks is placed into the ready state.
    285  * lists.
     313 * neccessary) buffers. After initialization all blocks is placed into the
     314 * ready state.
    286315 *
    287316 * @return RTEMS status code (RTEMS_SUCCESSFUL if operation completed
     
    396425/**
    397426 * Synchronize all modified buffers for this device with the disk and wait
    398  * until the transfers have completed. The sync mutex for the pool is locked
     427 * until the transfers have completed. The sync mutex for the cache is locked
    399428 * stopping the addition of any further modifed buffers. It is only the
    400429 * currently modified buffers that are written.
    401430 *
    402  * @note Nesting calls to sync multiple devices attached to a single pool will
    403  * be handled sequentially. A nested call will be blocked until the first sync
    404  * request has complete. This is only true for device using the same pool.
     431 * @note Nesting calls to sync multiple devices will be handled sequentially. A
     432 * nested call will be blocked until the first sync request has complete.
    405433 *
    406434 * @param dev Block device number
     
    411439rtems_status_code
    412440rtems_bdbuf_syncdev (dev_t dev);
    413 
    414 /**
    415  * Find first appropriate buffer pool. This primitive returns the index of
    416  * first buffer pool which block size is greater than or equal to specified
    417  * size.
    418  *
    419  * @param block_size Requested block size
    420  * @param pool The pool to use for the requested pool size.
    421  *
    422  * @return RTEMS status code (RTEMS_SUCCESSFUL if operation completed
    423  *         successfully or error code if error is occured)
    424  * @retval RTEMS_INVALID_SIZE The specified block size is invalid (not a power
    425  *         of 2)
    426  * @retval RTEMS_NOT_DEFINED The buffer pool for this or greater block size
    427  *         is not configured.
    428  */
    429 rtems_status_code
    430 rtems_bdbuf_find_pool (uint32_t block_size, rtems_bdpool_id *pool);
    431 
    432 /**
    433  * Obtain characteristics of buffer pool with specified number.
    434  *
    435  * @param pool Buffer pool number
    436  * @param block_size Block size for which buffer pool is configured returned
    437  *                   there
    438  * @param blocks Number of buffers in buffer pool.
    439  *
    440  * RETURNS:
    441  * @return RTEMS status code (RTEMS_SUCCESSFUL if operation completed
    442  *         successfully or error code if error is occured)
    443  * @retval RTEMS_INVALID_SIZE The appropriate buffer pool is not configured.
    444  *
    445  * @note Buffer pools enumerated continuously starting from 0.
    446  */
    447 rtems_status_code rtems_bdbuf_get_pool_info(
    448   rtems_bdpool_id pool,
    449   uint32_t *block_size,
    450   uint32_t *blocks
    451 );
    452441
    453442/** @} */
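
With the pool configuration table gone, the cache is configured through the single rtems_bdbuf_config table shown above, normally generated by confdefs.h but declared as provided by the application. A hedged sketch of an application supplying it directly, using the default macros visible in this header; the read-ahead, write and swapout numbers are illustrative values (their default macros are not part of this hunk), and the period and hold values are assumed to be in milliseconds. Note that rtems_bdbuf_init() rejects a configuration where buffer_max is not a multiple of buffer_min:

    #include <rtems/bdbuf.h>

    /* Sketch only: direct application-supplied cache configuration. */
    const rtems_bdbuf_config rtems_bdbuf_configuration =
    {
      .max_read_ahead_blocks   = 32,    /* illustrative */
      .max_write_blocks        = 16,    /* illustrative */
      .swapout_priority        = 15,    /* illustrative */
      .swapout_period          = 250,   /* assumed milliseconds */
      .swap_block_hold         = 1000,  /* assumed milliseconds */
      .swapout_workers         = RTEMS_BDBUF_SWAPOUT_WORKER_TASKS_DEFAULT,
      .swapout_worker_priority = RTEMS_BDBUF_SWAPOUT_WORKER_TASK_PRIORITY_DEFAULT,
      .size                    = RTEMS_BDBUF_CACHE_MEMORY_SIZE_DEFAULT,
      .buffer_min              = RTEMS_BDBUF_BUFFER_MIN_SIZE_DEFAULT,
      .buffer_max              = RTEMS_BDBUF_BUFFER_MAX_SIZE_DEFAULT
    };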
  • cpukit/libblock/include/rtems/diskdevs.h

    rf14a21df r0d15414e  
    2020#include <rtems/libio.h>
    2121#include <stdlib.h>
    22 
    23 /**
    24  * @ingroup rtems_bdbuf
    25  *
    26  * Buffer pool identifier.
    27  */
    28 typedef int rtems_bdpool_id;
    2922
    3023#include <rtems/blkdev.h>
     
    108101   * Device block size in bytes.
    109102   *
    110    * This is the minimum transfer unit and must be power of two.
     103   * This is the minimum transfer unit. It can be any size.
    111104   */
    112105  uint32_t block_size;
    113106
    114107  /**
    115    * Binary logarithm of the block size.
    116    */
    117   uint32_t block_size_log2;
    118 
    119   /**
    120    * Buffer pool assigned to this disk.
    121    */
    122   rtems_bdpool_id pool;
     108   * Device media block size in bytes.
     109   *
     110   * This is the media transfer unit the hardware defaults to.
     111   */
     112  uint32_t media_block_size;
    123113
    124114  /**
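
The new media_block_size field records the hardware transfer unit, while block_size is whatever the file system selects; bdbuf.h describes the file system block size as a multiple of the media block size. A small illustrative helper under that assumption, taking a pointer to a valid rtems_disk_device obtained elsewhere (for example via rtems_disk_obtain()):

    #include <rtems/diskdevs.h>

    /* How many media blocks one file system block (and so one cache buffer)
     * spans on this disk.  Illustrative only; assumes block_size is an exact
     * multiple of media_block_size. */
    static uint32_t
    media_blocks_per_block (const rtems_disk_device* dd)
    {
      return dd->block_size / dd->media_block_size;
    }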
  • cpukit/libblock/src/bdbuf.c

    rf14a21df r0d15414e  
    1616 *         Alexander Kukuta <kam@oktet.ru>
    1717 *
    18  * Copyright (C) 2008 Chris Johns <chrisj@rtems.org>
     18 * Copyright (C) 2008,2009 Chris Johns <chrisj@rtems.org>
    1919 *    Rewritten to remove score mutex access. Fixes many performance
    2020 *    issues.
     
    4545#include "rtems/bdbuf.h"
    4646
    47 /**
    48  * The BD buffer context.
    49  */
    50 typedef struct rtems_bdbuf_context {
    51   rtems_bdbuf_pool* pool;      /*< Table of buffer pools */
    52   int               npools;    /*< Number of entries in pool table */
    53   rtems_id          swapout;   /*< Swapout task ID */
    54   bool              swapout_enabled;
    55 } rtems_bdbuf_context;
     47/*
     48 * Simpler label for this file.
     49 */
     50#define bdbuf_config rtems_bdbuf_configuration
     51
     52/**
     53 * A swapout transfer transaction data. This data is passed to a worked thread
     54 * to handle the write phase of the transfer.
     55 */
     56typedef struct rtems_bdbuf_swapout_transfer
     57{
     58  rtems_chain_control   bds;       /**< The transfer list of BDs. */
     59  dev_t                 dev;       /**< The device the transfer is for. */
     60  rtems_blkdev_request* write_req; /**< The write request array. */
     61} rtems_bdbuf_swapout_transfer;
     62
     63/**
     64 * Swapout worker thread. These are available to take processing from the
     65 * main swapout thread and handle the I/O operation.
     66 */
     67typedef struct rtems_bdbuf_swapout_worker
     68{
     69  rtems_chain_node             link;     /**< The threads sit on a chain when
     70                                          * idle. */
     71  rtems_id                     id;       /**< The id of the task so we can wake
     72                                          * it. */
     73  volatile bool                enabled;  /**< The worked is enabled. */
     74  rtems_bdbuf_swapout_transfer transfer; /**< The transfer data for this
     75                                          * thread. */
     76} rtems_bdbuf_swapout_worker;
     77
     78/**
     79 * The BD buffer cache.
     80 */
     81typedef struct rtems_bdbuf_cache
     82{
     83  rtems_id            swapout;           /**< Swapout task ID */
     84  volatile bool       swapout_enabled;   /**< Swapout is only running if
     85                                          * enabled. Set to false to kill the
     86                                          * swap out task. It deletes itself. */
     87  rtems_chain_control swapout_workers;   /**< The work threads for the swapout
     88                                          * task. */
     89 
     90  rtems_bdbuf_buffer* bds;               /**< Pointer to table of buffer
     91                                          * descriptors. */
     92  void*               buffers;           /**< The buffer's memory. */
     93  size_t              buffer_min_count;  /**< Number of minimum size buffers
     94                                          * that fit the buffer memory. */
     95  size_t              max_bds_per_group; /**< The number of BDs of minimum
     96                                          * buffer size that fit in a group. */
     97  uint32_t            flags;             /**< Configuration flags. */
     98
     99  rtems_id            lock;              /**< The cache lock. It locks all
     100                                          * cache data, BD and lists. */
     101  rtems_id            sync_lock;         /**< Sync calls block writes. */
     102  volatile bool       sync_active;       /**< True if a sync is active. */
     103  volatile rtems_id   sync_requester;    /**< The sync requester. */
     104  volatile dev_t      sync_device;       /**< The device to sync and -1 not a
     105                                          * device sync. */
     106
     107  rtems_bdbuf_buffer* tree;              /**< Buffer descriptor lookup AVL tree
     108                                          * root. There is only one. */
     109  rtems_chain_control ready;             /**< Free buffers list, read-ahead, or
     110                                          * resized group buffers. */
     111  rtems_chain_control lru;               /**< Least recently used list */
     112  rtems_chain_control modified;          /**< Modified buffers list */
     113  rtems_chain_control sync;              /**< Buffers to sync list */
     114
     115  rtems_id            access;            /**< Obtain if waiting for a buffer in
     116                                          * the ACCESS state. */
     117  volatile uint32_t   access_waiters;    /**< Count of access blockers. */
     118  rtems_id            transfer;          /**< Obtain if waiting for a buffer in
     119                                          * the TRANSFER state. */
     120  volatile uint32_t   transfer_waiters;  /**< Count of transfer blockers. */
     121  rtems_id            waiting;           /**< Obtain if waiting for a buffer
     122                                          * and the none are available. */
     123  volatile uint32_t   wait_waiters;      /**< Count of waiting blockers. */
     124
     125  size_t              group_count;       /**< The number of groups. */
     126  rtems_bdbuf_group*  groups;            /**< The groups. */
     127 
     128  bool                initialised;       /**< Initialised state. */
     129} rtems_bdbuf_cache;
    56130
    57131/**
     
    61135  (((uint32_t)'B' << 24) | ((uint32_t)(n) & (uint32_t)0x00FFFFFF))
    62136
    63 #define RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY RTEMS_BLKDEV_FATAL_ERROR(1)
    64 #define RTEMS_BLKDEV_FATAL_BDBUF_SWAPOUT     RTEMS_BLKDEV_FATAL_ERROR(2)
    65 #define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK   RTEMS_BLKDEV_FATAL_ERROR(3)
    66 #define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK RTEMS_BLKDEV_FATAL_ERROR(4)
    67 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_LOCK   RTEMS_BLKDEV_FATAL_ERROR(5)
    68 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_UNLOCK RTEMS_BLKDEV_FATAL_ERROR(6)
    69 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAIT   RTEMS_BLKDEV_FATAL_ERROR(7)
    70 #define RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAKE   RTEMS_BLKDEV_FATAL_ERROR(8)
    71 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_WAKE     RTEMS_BLKDEV_FATAL_ERROR(9)
    72 #define RTEMS_BLKDEV_FATAL_BDBUF_SO_NOMEM    RTEMS_BLKDEV_FATAL_ERROR(10)
    73 #define BLKDEV_FATAL_BDBUF_SWAPOUT_RE        RTEMS_BLKDEV_FATAL_ERROR(11)
    74 #define BLKDEV_FATAL_BDBUF_SWAPOUT_TS        RTEMS_BLKDEV_FATAL_ERROR(12)
     137#define RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY  RTEMS_BLKDEV_FATAL_ERROR(1)
     138#define RTEMS_BLKDEV_FATAL_BDBUF_SWAPOUT      RTEMS_BLKDEV_FATAL_ERROR(2)
     139#define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK    RTEMS_BLKDEV_FATAL_ERROR(3)
     140#define RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK  RTEMS_BLKDEV_FATAL_ERROR(4)
     141#define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_LOCK   RTEMS_BLKDEV_FATAL_ERROR(5)
     142#define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_UNLOCK RTEMS_BLKDEV_FATAL_ERROR(6)
     143#define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_1 RTEMS_BLKDEV_FATAL_ERROR(7)
     144#define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_2 RTEMS_BLKDEV_FATAL_ERROR(8)
     145#define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_3 RTEMS_BLKDEV_FATAL_ERROR(9)
     146#define RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAKE   RTEMS_BLKDEV_FATAL_ERROR(10)
     147#define RTEMS_BLKDEV_FATAL_BDBUF_SO_WAKE      RTEMS_BLKDEV_FATAL_ERROR(11)
     148#define RTEMS_BLKDEV_FATAL_BDBUF_SO_NOMEM     RTEMS_BLKDEV_FATAL_ERROR(12)
     149#define RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_CREATE RTEMS_BLKDEV_FATAL_ERROR(13)
     150#define RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_START  RTEMS_BLKDEV_FATAL_ERROR(14)
     151#define BLKDEV_FATAL_BDBUF_SWAPOUT_RE         RTEMS_BLKDEV_FATAL_ERROR(15)
     152#define BLKDEV_FATAL_BDBUF_SWAPOUT_TS         RTEMS_BLKDEV_FATAL_ERROR(16)
    75153
    76154/**
     
    92170 * @warning Priority inheritance is on.
    93171 */
    94 #define RTEMS_BDBUF_POOL_LOCK_ATTRIBS \
     172#define RTEMS_BDBUF_CACHE_LOCK_ATTRIBS \
    95173  (RTEMS_PRIORITY | RTEMS_BINARY_SEMAPHORE | \
    96174   RTEMS_INHERIT_PRIORITY | RTEMS_NO_PRIORITY_CEILING | RTEMS_LOCAL)
     
    104182 *          IDLE task which can cause unsual side effects.
    105183 */
    106 #define RTEMS_BDBUF_POOL_WAITER_ATTRIBS \
     184#define RTEMS_BDBUF_CACHE_WAITER_ATTRIBS \
    107185  (RTEMS_PRIORITY | RTEMS_SIMPLE_BINARY_SEMAPHORE | \
    108186   RTEMS_NO_INHERIT_PRIORITY | RTEMS_NO_PRIORITY_CEILING | RTEMS_LOCAL)
     
    114192
    115193/**
    116  * The context of buffering layer.
    117  */
    118 static rtems_bdbuf_context rtems_bdbuf_ctx;
     194 * The Buffer Descriptor cache.
     195 */
     196static rtems_bdbuf_cache bdbuf_cache;
    119197
    120198/**
     
    637715
    638716/**
    639  * Get the pool for the device.
    640  *
    641  * @param pid Physical disk device.
    642  */
    643 static rtems_bdbuf_pool*
    644 rtems_bdbuf_get_pool (const rtems_bdpool_id pid)
    645 {
    646   return &rtems_bdbuf_ctx.pool[pid];
    647 }
    648 
    649 /**
    650  * Lock the pool. A single task can nest calls.
    651  *
    652  * @param pool The pool to lock.
     717 * Lock the mutex. A single task can nest calls.
     718 *
     719 * @param lock The mutex to lock.
     720 * @param fatal_error_code The error code if the call fails.
    653721 */
    654722static void
    655 rtems_bdbuf_lock_pool (rtems_bdbuf_pool* pool)
    656 {
    657   rtems_status_code sc = rtems_semaphore_obtain (pool->lock,
     723rtems_bdbuf_lock (rtems_id lock, uint32_t fatal_error_code)
     724{
     725  rtems_status_code sc = rtems_semaphore_obtain (lock,
    658726                                                 RTEMS_WAIT,
    659727                                                 RTEMS_NO_TIMEOUT);
    660728  if (sc != RTEMS_SUCCESSFUL)
    661     rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_LOCK);
    662 }
    663 
    664 /**
    665  * Unlock the pool.
    666  *
    667  * @param pool The pool to unlock.
     729    rtems_fatal_error_occurred (fatal_error_code);
     730}
     731
     732/**
     733 * Unlock the mutex.
     734 *
     735 * @param lock The mutex to unlock.
     736 * @param fatal_error_code The error code if the call fails.
    668737 */
    669738static void
    670 rtems_bdbuf_unlock_pool (rtems_bdbuf_pool* pool)
    671 {
    672   rtems_status_code sc = rtems_semaphore_release (pool->lock);
     739rtems_bdbuf_unlock (rtems_id lock, uint32_t fatal_error_code)
     740{
     741  rtems_status_code sc = rtems_semaphore_release (lock);
    673742  if (sc != RTEMS_SUCCESSFUL)
    674     rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_UNLOCK);
    675 }
    676 
    677 /**
    678  * Lock the pool's sync. A single task can nest calls.
    679  *
    680  * @param pool The pool's sync to lock.
     743    rtems_fatal_error_occurred (fatal_error_code);
     744}
     745
     746/**
     747 * Lock the cache. A single task can nest calls.
    681748 */
    682749static void
    683 rtems_bdbuf_lock_sync (rtems_bdbuf_pool* pool)
    684 {
    685   rtems_status_code sc = rtems_semaphore_obtain (pool->sync_lock,
    686                                                  RTEMS_WAIT,
    687                                                  RTEMS_NO_TIMEOUT);
    688   if (sc != RTEMS_SUCCESSFUL)
    689     rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK);
    690 }
    691 
    692 /**
    693  * Unlock the pool's sync.
    694  *
    695  * @param pool The pool's sync to unlock.
     750rtems_bdbuf_lock_cache (void)
     751{
     752  rtems_bdbuf_lock (bdbuf_cache.lock, RTEMS_BLKDEV_FATAL_BDBUF_CACHE_LOCK);
     753}
     754
     755/**
     756 * Unlock the cache.
    696757 */
    697758static void
    698 rtems_bdbuf_unlock_sync (rtems_bdbuf_pool* pool)
    699 {
    700   rtems_status_code sc = rtems_semaphore_release (pool->sync_lock);
    701   if (sc != RTEMS_SUCCESSFUL)
    702     rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK);
     759rtems_bdbuf_unlock_cache (void)
     760{
     761  rtems_bdbuf_unlock (bdbuf_cache.lock, RTEMS_BLKDEV_FATAL_BDBUF_CACHE_UNLOCK);
     762}
     763
     764/**
     765 * Lock the cache's sync. A single task can nest calls.
     766 */
     767static void
     768rtems_bdbuf_lock_sync (void)
     769{
     770  rtems_bdbuf_lock (bdbuf_cache.sync_lock, RTEMS_BLKDEV_FATAL_BDBUF_SYNC_LOCK);
     771}
     772
     773/**
     774 * Unlock the cache's sync lock. Any blocked writers are woken.
     775 */
     776static void
     777rtems_bdbuf_unlock_sync (void)
     778{
     779  rtems_bdbuf_unlock (bdbuf_cache.sync_lock,
     780                      RTEMS_BLKDEV_FATAL_BDBUF_SYNC_UNLOCK);
    703781}
    704782
     
    709787 * tasks that could be waiting.
    710788 *
    711  * While we have the pool locked we can try and claim the semaphore and
    712  * therefore know when we release the lock to the pool we will block until the
     789 * While we have the cache locked we can try and claim the semaphore and
     790 * therefore know when we release the lock to the cache we will block until the
    713791 * semaphore is released. This may even happen before we get to block.
    714792 *
    715793 * A counter is used to save the release call when no one is waiting.
    716794 *
    717  * The function assumes the pool is locked on entry and it will be locked on
     795 * The function assumes the cache is locked on entry and it will be locked on
    718796 * exit.
    719797 *
    720  * @param pool The pool to wait for a buffer to return.
    721798 * @param sema The semaphore to block on and wait.
    722799 * @param waiters The wait counter for this semaphore.
    723800 */
    724801static void
    725 rtems_bdbuf_wait (rtems_bdbuf_pool* pool, rtems_id* sema,
    726                   volatile uint32_t* waiters)
     802rtems_bdbuf_wait (rtems_id* sema, volatile uint32_t* waiters)
    727803{
    728804  rtems_status_code sc;
     
    735811
    736812  /*
    737    * Disable preemption then unlock the pool and block.
    738    * There is no POSIX condition variable in the core API so
    739    * this is a work around.
     813   * Disable preemption then unlock the cache and block.  There is no POSIX
     814   * condition variable in the core API so this is a work around.
    740815   *
    741    * The issue is a task could preempt after the pool is unlocked
    742    * because it is blocking or just hits that window, and before
    743    * this task has blocked on the semaphore. If the preempting task
    744    * flushes the queue this task will not see the flush and may
    745    * block for ever or until another transaction flushes this
     816   * The issue is a task could preempt after the cache is unlocked because it is
     817   * blocking or just hits that window, and before this task has blocked on the
     818   * semaphore. If the preempting task flushes the queue this task will not see
     819   * the flush and may block for ever or until another transaction flushes this
    746820   * semaphore.
    747821   */
     
    749823
    750824  if (sc != RTEMS_SUCCESSFUL)
    751     rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAIT);
    752  
    753   /*
    754    * Unlock the pool, wait, and lock the pool when we return.
    755    */
    756   rtems_bdbuf_unlock_pool (pool);
     825    rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_1);
     826 
     827  /*
     828   * Unlock the cache, wait, and lock the cache when we return.
     829   */
     830  rtems_bdbuf_unlock_cache ();
    757831
    758832  sc = rtems_semaphore_obtain (*sema, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
    759833 
    760834  if (sc != RTEMS_UNSATISFIED)
    761     rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAIT);
    762  
    763   rtems_bdbuf_lock_pool (pool);
     835    rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_2);
     836 
     837  rtems_bdbuf_lock_cache ();
    764838
    765839  sc = rtems_task_mode (prev_mode, RTEMS_ALL_MODE_MASKS, &prev_mode);
    766840
    767841  if (sc != RTEMS_SUCCESSFUL)
    768     rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAIT);
     842    rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAIT_3);
    769843 
    770844  *waiters -= 1;
     
    788862 
    789863    if (sc != RTEMS_SUCCESSFUL)
    790       rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_POOL_WAKE);
     864      rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CACHE_WAKE);
    791865  }
    792866}
     
    794868/**
    795869 * Add a buffer descriptor to the modified list. This modified list is treated
    796  * a litte differently to the other lists. To access it you must have the pool
     870 * a litte differently to the other lists. To access it you must have the cache
    797871 * locked and this is assumed to be the case on entry to this call.
    798872 *
    799  * If the pool has a device being sync'ed and the bd is for that device the
     873 * If the cache has a device being sync'ed and the bd is for that device the
    800874 * call must block and wait until the sync is over before adding the bd to the
    801875 * modified list. Once a sync happens for a device no bd's can be added the
     
    807881 * active.
    808882 *
    809  * @param pool The pool the bd belongs to.
    810  * @param bd The bd to queue to the pool's modified list.
     883 * @param bd The bd to queue to the cache's modified list.
    811884 */
    812885static void
    813 rtems_bdbuf_append_modified (rtems_bdbuf_pool* pool, rtems_bdbuf_buffer* bd)
    814 {
    815   /*
    816    * If the pool has a device being sync'ed check if this bd is for that
    817    * device. If it is unlock the pool and block on the sync lock. once we have
    818    * the sync lock reelase it.
    819    *
    820    * If the
    821    */
    822   if (pool->sync_active && (pool->sync_device == bd->dev))
    823   {
    824     rtems_bdbuf_unlock_pool (pool);
    825     rtems_bdbuf_lock_sync (pool);
    826     rtems_bdbuf_unlock_sync (pool);
    827     rtems_bdbuf_lock_pool (pool);
     886rtems_bdbuf_append_modified (rtems_bdbuf_buffer* bd)
     887{
     888  /*
     889   * If the cache has a device being sync'ed check if this bd is for that
     890   * device. If it is unlock the cache and block on the sync lock. Once we have
     891   * the sync lock release it.
     892   */
     893  if (bdbuf_cache.sync_active && (bdbuf_cache.sync_device == bd->dev))
     894  {
     895    rtems_bdbuf_unlock_cache ();
     896    /* Wait for the sync lock */
     897    rtems_bdbuf_lock_sync ();
     898    rtems_bdbuf_unlock_sync ();
     899    rtems_bdbuf_lock_cache ();
    828900  }
    829901     
    830902  bd->state = RTEMS_BDBUF_STATE_MODIFIED;
    831903
    832   rtems_chain_append (&pool->modified, &bd->link);
     904  rtems_chain_append (&bdbuf_cache.modified, &bd->link);
    833905}
    834906
     
    839911rtems_bdbuf_wake_swapper (void)
    840912{
    841   rtems_status_code sc = rtems_event_send (rtems_bdbuf_ctx.swapout,
     913  rtems_status_code sc = rtems_event_send (bdbuf_cache.swapout,
    842914                                           RTEMS_BDBUF_SWAPOUT_SYNC);
    843915  if (sc != RTEMS_SUCCESSFUL)
     
    846918
    847919/**
    848  * Initialize single buffer pool.
    849  *
    850  * @param config Buffer pool configuration
    851  * @param pid Pool number
    852  *
    853  * @return RTEMS_SUCCESSFUL, if buffer pool initialized successfully, or error
    854  *         code if error occured.
    855  */
    856 static rtems_status_code
    857 rtems_bdbuf_initialize_pool (rtems_bdbuf_pool_config* config,
    858                              rtems_bdpool_id          pid)
    859 {
    860   int                 rv = 0;
    861   unsigned char*      buffer = config->mem_area;
    862   rtems_bdbuf_pool*   pool;
     920 * Compute the number of BDs per group for a given buffer size.
     921 *
     922 * @param size The buffer size. It can be any size and we scale up.
     923 */
     924static size_t
     925rtems_bdbuf_bds_per_group (size_t size)
     926{
     927  size_t bufs_per_size;
     928  size_t bds_per_size;
     929 
     930  if (size > rtems_bdbuf_configuration.buffer_max)
     931    return 0;
     932 
     933  bufs_per_size = ((size - 1) / bdbuf_config.buffer_min) + 1;
     934 
     935  for (bds_per_size = 1;
     936       bds_per_size < bufs_per_size;
     937       bds_per_size <<= 1)
     938    ;
     939
     940  return bdbuf_cache.max_bds_per_group / bds_per_size;
     941}
     942
     943/**
     944 * Reallocate a group. The BDs currently allocated in the group are removed
     945 * from the ALV tree and any lists then the new BD's are prepended to the ready
     946 * list of the cache.
     947 *
     948 * @param group The group to reallocate.
     949 * @param new_bds_per_group The new count of BDs per group.
     950 */
     951static void
     952rtems_bdbuf_group_realloc (rtems_bdbuf_group* group, size_t new_bds_per_group)
     953{
    863954  rtems_bdbuf_buffer* bd;
     955  int                 b;
     956  size_t              bufs_per_bd;
     957
     958  bufs_per_bd = bdbuf_cache.max_bds_per_group / group->bds_per_group;
     959 
     960  for (b = 0, bd = group->bdbuf;
     961       b < group->bds_per_group;
     962       b++, bd += bufs_per_bd)
     963  {
     964    if ((bd->state == RTEMS_BDBUF_STATE_CACHED) ||
     965        (bd->state == RTEMS_BDBUF_STATE_MODIFIED) ||
     966        (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD))
     967    {
     968      if (rtems_bdbuf_avl_remove (&bdbuf_cache.tree, bd) != 0)
     969        rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY);
     970      rtems_chain_extract (&bd->link);
     971    }
     972  }
     973 
     974  group->bds_per_group = new_bds_per_group;
     975  bufs_per_bd = bdbuf_cache.max_bds_per_group / new_bds_per_group;
     976 
     977  for (b = 0, bd = group->bdbuf;
     978       b < group->bds_per_group;
     979       b++, bd += bufs_per_bd)
     980    rtems_chain_prepend (&bdbuf_cache.ready, &bd->link);
     981}
     982
     983/**
     984 * Get the next BD from the list. This call assumes the cache is locked.
     985 *
     986 * @param bds_per_group The number of BDs per block we are need.
     987 * @param list The list to find the BD on.
     988 * @return The next BD if found or NULL is none are available.
     989 */
     990static rtems_bdbuf_buffer*
     991rtems_bdbuf_get_next_bd (size_t               bds_per_group,
     992                         rtems_chain_control* list)
     993{
     994  rtems_chain_node* node = rtems_chain_first (list);
     995  while (!rtems_chain_is_tail (list, node))
     996  {
     997    rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node;
     998
     999    /*
     1000     * If this bd is already part of a group that supports the same number of
     1001     * BDs per group return it. If the bd is part of another group check the
     1002     * number of users and if 0 we can take this group and resize it.
     1003     */
     1004    if (bd->group->bds_per_group == bds_per_group)
     1005    {
     1006      rtems_chain_extract (node);
     1007      bd->group->users++;
     1008      return bd;
     1009    }
     1010
     1011    if (bd->group->users == 0)
     1012    {
     1013      /*
     1014       * We use the group to locate the start of the BDs for this group.
     1015       */
     1016      rtems_bdbuf_group_realloc (bd->group, bds_per_group);
     1017      bd = (rtems_bdbuf_buffer*) rtems_chain_get (&bdbuf_cache.ready);
     1018      return bd;
     1019    }
     1020
     1021    node = rtems_chain_next (node);
     1022  }
     1023 
     1024  return NULL;
     1025}
     1026
     1027/**
     1028 * Initialise the cache.
     1029 *
     1030 * @return rtems_status_code The initialisation status.
     1031 */
     1032rtems_status_code
     1033rtems_bdbuf_init (void)
     1034{
     1035  rtems_bdbuf_group*  group;
     1036  rtems_bdbuf_buffer* bd;
     1037  uint8_t*            buffer;
     1038  int                 b;
     1039  int                 cache_aligment;
    8641040  rtems_status_code   sc;
    865   uint32_t            b;
    866   int                 cache_aligment = 32 /* FIXME rtems_cache_get_data_line_size() */;
    867 
    868   /* For unspecified cache alignments we use the CPU alignment */
     1041
     1042#if RTEMS_BDBUF_TRACE
     1043  rtems_bdbuf_printf ("init\n");
     1044#endif
     1045
     1046  /*
     1047   * Check the configuration table values.
     1048   */
     1049  if ((bdbuf_config.buffer_max % bdbuf_config.buffer_min) != 0)
     1050    return RTEMS_INVALID_NUMBER;
     1051 
     1052  /*
     1053   * We use a special variable to manage the initialisation incase we have
     1054   * completing threads doing this. You may get errors if the another thread
     1055   * makes a call and we have not finished initialisation.
     1056   */
     1057  if (bdbuf_cache.initialised)
     1058    return RTEMS_RESOURCE_IN_USE;
     1059
     1060  bdbuf_cache.initialised = true;
     1061 
     1062  /*
     1063   * For unspecified cache alignments we use the CPU alignment.
     1064   */
     1065  cache_aligment = 32; /* FIXME rtems_cache_get_data_line_size() */
    8691066  if (cache_aligment <= 0)
    870   {
    8711067    cache_aligment = CPU_ALIGNMENT;
    872   }
    873 
    874   pool = rtems_bdbuf_get_pool (pid);
    875  
    876   pool->blksize        = config->size;
    877   pool->nblks          = config->num;
    878   pool->flags          = 0;
    879   pool->sync_active    = false;
    880   pool->sync_device    = -1;
    881   pool->sync_requester = 0;
    882   pool->tree           = NULL;
    883   pool->buffers        = NULL;
    884 
    885   rtems_chain_initialize_empty (&pool->ready);
    886   rtems_chain_initialize_empty (&pool->lru);
    887   rtems_chain_initialize_empty (&pool->modified);
    888   rtems_chain_initialize_empty (&pool->sync);
    889 
    890   pool->access           = 0;
    891   pool->access_waiters   = 0;
    892   pool->transfer         = 0;
    893   pool->transfer_waiters = 0;
    894   pool->waiting          = 0;
    895   pool->wait_waiters     = 0;
    896  
    897   /*
    898    * Allocate memory for buffer descriptors
    899    */
    900   pool->bds = calloc (config->num, sizeof (rtems_bdbuf_buffer));
    901  
    902   if (!pool->bds)
     1068
     1069  bdbuf_cache.sync_active    = false;
     1070  bdbuf_cache.sync_device    = -1;
     1071  bdbuf_cache.sync_requester = 0;
     1072  bdbuf_cache.tree           = NULL;
     1073
     1074  rtems_chain_initialize_empty (&bdbuf_cache.swapout_workers);
     1075  rtems_chain_initialize_empty (&bdbuf_cache.ready);
     1076  rtems_chain_initialize_empty (&bdbuf_cache.lru);
     1077  rtems_chain_initialize_empty (&bdbuf_cache.modified);
     1078  rtems_chain_initialize_empty (&bdbuf_cache.sync);
     1079
     1080  bdbuf_cache.access           = 0;
     1081  bdbuf_cache.access_waiters   = 0;
     1082  bdbuf_cache.transfer         = 0;
     1083  bdbuf_cache.transfer_waiters = 0;
     1084  bdbuf_cache.waiting          = 0;
     1085  bdbuf_cache.wait_waiters     = 0;
     1086
     1087  /*
     1088   * Create the locks for the cache.
     1089   */
     1090  sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 'l'),
     1091                               1, RTEMS_BDBUF_CACHE_LOCK_ATTRIBS, 0,
     1092                               &bdbuf_cache.lock);
     1093  if (sc != RTEMS_SUCCESSFUL)
     1094  {
     1095    bdbuf_cache.initialised = false;
     1096    return sc;
     1097  }
     1098
     1099  rtems_bdbuf_lock_cache ();
     1100 
     1101  sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 's'),
     1102                               1, RTEMS_BDBUF_CACHE_LOCK_ATTRIBS, 0,
     1103                               &bdbuf_cache.sync_lock);
     1104  if (sc != RTEMS_SUCCESSFUL)
     1105  {
     1106    rtems_bdbuf_unlock_cache ();
     1107    rtems_semaphore_delete (bdbuf_cache.lock);
     1108    bdbuf_cache.initialised = false;
     1109    return sc;
     1110  }
     1111 
     1112  sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 'a'),
     1113                               0, RTEMS_BDBUF_CACHE_WAITER_ATTRIBS, 0,
     1114                               &bdbuf_cache.access);
     1115  if (sc != RTEMS_SUCCESSFUL)
     1116  {
     1117    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1118    rtems_bdbuf_unlock_cache ();
     1119    rtems_semaphore_delete (bdbuf_cache.lock);
     1120    bdbuf_cache.initialised = false;
     1121    return sc;
     1122  }
     1123
     1124  sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 't'),
     1125                               0, RTEMS_BDBUF_CACHE_WAITER_ATTRIBS, 0,
     1126                               &bdbuf_cache.transfer);
     1127  if (sc != RTEMS_SUCCESSFUL)
     1128  {
     1129    rtems_semaphore_delete (bdbuf_cache.access);
     1130    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1131    rtems_bdbuf_unlock_cache ();
     1132    rtems_semaphore_delete (bdbuf_cache.lock);
     1133    bdbuf_cache.initialised = false;
     1134    return sc;
     1135  }
     1136
     1137  sc = rtems_semaphore_create (rtems_build_name ('B', 'D', 'C', 'w'),
     1138                               0, RTEMS_BDBUF_CACHE_WAITER_ATTRIBS, 0,
     1139                               &bdbuf_cache.waiting);
     1140  if (sc != RTEMS_SUCCESSFUL)
     1141  {
     1142    rtems_semaphore_delete (bdbuf_cache.transfer);
     1143    rtems_semaphore_delete (bdbuf_cache.access);
     1144    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1145    rtems_bdbuf_unlock_cache ();
     1146    rtems_semaphore_delete (bdbuf_cache.lock);
     1147    bdbuf_cache.initialised = false;
     1148    return sc;
     1149  }
     1150 
     1151  /*
     1152   * Allocate the memory for the buffer descriptors.
     1153   */
     1154  bdbuf_cache.bds = calloc (sizeof (rtems_bdbuf_buffer),
     1155                            bdbuf_config.size / bdbuf_config.buffer_min);
     1156  if (!bdbuf_cache.bds)
     1157  {
     1158    rtems_semaphore_delete (bdbuf_cache.transfer);
     1159    rtems_semaphore_delete (bdbuf_cache.access);
     1160    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1161    rtems_bdbuf_unlock_cache ();
     1162    rtems_semaphore_delete (bdbuf_cache.lock);
     1163    bdbuf_cache.initialised = false;
    9031164    return RTEMS_NO_MEMORY;
    904 
    905   /*
    906    * Allocate memory for buffers if required.  The pool memory will be cache
    907    * aligned.  It is possible to free the memory allocated by rtems_memalign()
    908    * with free().
    909    */
    910   if (buffer == NULL)
    911   {
    912     rv = rtems_memalign ((void **) &buffer,
    913                          cache_aligment,
    914                          config->num * config->size);
    915     if (rv != 0)
    916     {
    917       free (pool->bds);
    918       return RTEMS_NO_MEMORY;
    919     }
    920     pool->buffers = buffer;
    921   }
    922 
    923   for (b = 0, bd = pool->bds;
    924        b < pool->nblks;
    925        b++, bd++, buffer += pool->blksize)
     1165  }
     1166
     1167  /*
     1168   * Compute the various number of elements in the cache.
     1169   */
     1170  bdbuf_cache.buffer_min_count =
     1171    bdbuf_config.size / bdbuf_config.buffer_min;
     1172  bdbuf_cache.max_bds_per_group =
     1173    bdbuf_config.buffer_max / bdbuf_config.buffer_min;
     1174  bdbuf_cache.group_count =
     1175    bdbuf_cache.buffer_min_count / bdbuf_cache.max_bds_per_group;
     1176
     1177  /*
     1178   * Allocate the memory for the buffer descriptors.
     1179   */
     1180  bdbuf_cache.groups = calloc (sizeof (rtems_bdbuf_group),
     1181                               bdbuf_cache.group_count);
     1182  if (!bdbuf_cache.groups)
     1183  {
     1184    free (bdbuf_cache.bds);
     1185    rtems_semaphore_delete (bdbuf_cache.transfer);
     1186    rtems_semaphore_delete (bdbuf_cache.access);
     1187    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1188    rtems_bdbuf_unlock_cache ();
     1189    rtems_semaphore_delete (bdbuf_cache.lock);
     1190    bdbuf_cache.initialised = false;
     1191    return RTEMS_NO_MEMORY;
     1192  }
     1193 
     1194  /*
     1195   * Allocate memory for buffer memory. The buffer memory will be cache
     1196   * aligned. It is possible to free the memory allocated by rtems_memalign()
     1197   * with free(). Return 0 if allocated.
     1198   */
     1199  if (rtems_memalign ((void **) &bdbuf_cache.buffers,
     1200                      cache_aligment,
     1201                      bdbuf_cache.buffer_min_count * bdbuf_config.buffer_min) != 0)
     1202  {
     1203    free (bdbuf_cache.groups);
     1204    free (bdbuf_cache.bds);
     1205    rtems_semaphore_delete (bdbuf_cache.transfer);
     1206    rtems_semaphore_delete (bdbuf_cache.access);
     1207    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1208    rtems_bdbuf_unlock_cache ();
     1209    rtems_semaphore_delete (bdbuf_cache.lock);
     1210    bdbuf_cache.initialised = false;
     1211    return RTEMS_NO_MEMORY;
     1212  }
     1213
     1214  /*
     1215   * The cache is empty after opening so we need to add all the buffers to it
     1216   * and initialise the groups.
     1217   */
     1218  for (b = 0, group = bdbuf_cache.groups,
     1219         bd = bdbuf_cache.bds, buffer = bdbuf_cache.buffers;
     1220       b < bdbuf_cache.buffer_min_count;
     1221       b++, bd++, buffer += bdbuf_config.buffer_min)
    9261222  {
    9271223    bd->dev        = -1;
    928     bd->block      = 0;
     1224    bd->group      = group;
    9291225    bd->buffer     = buffer;
    9301226    bd->avl.left   = NULL;
    9311227    bd->avl.right  = NULL;
    9321228    bd->state      = RTEMS_BDBUF_STATE_EMPTY;
    933     bd->pool       = pid;
    9341229    bd->error      = 0;
    9351230    bd->waiters    = 0;
    9361231    bd->hold_timer = 0;
    9371232   
    938     rtems_chain_append (&pool->ready, &bd->link);
    939   }
    940 
    941   sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'L'),
    942                                1, RTEMS_BDBUF_POOL_LOCK_ATTRIBS, 0,
    943                                &pool->lock);
    944   if (sc != RTEMS_SUCCESSFUL)
    945   {
    946     free (pool->buffers);
    947     free (pool->bds);
    948     return sc;
    949   }
    950 
    951   sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'S'),
    952                                1, RTEMS_BDBUF_POOL_LOCK_ATTRIBS, 0,
    953                                &pool->sync_lock);
    954   if (sc != RTEMS_SUCCESSFUL)
    955   {
    956     rtems_semaphore_delete (pool->lock);
    957     free (pool->buffers);
    958     free (pool->bds);
    959     return sc;
    960   }
    961  
    962   sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'a'),
    963                                0, RTEMS_BDBUF_POOL_WAITER_ATTRIBS, 0,
    964                                &pool->access);
    965   if (sc != RTEMS_SUCCESSFUL)
    966   {
    967     rtems_semaphore_delete (pool->sync_lock);
    968     rtems_semaphore_delete (pool->lock);
    969     free (pool->buffers);
    970     free (pool->bds);
    971     return sc;
    972   }
    973 
    974   sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 't'),
    975                                0, RTEMS_BDBUF_POOL_WAITER_ATTRIBS, 0,
    976                                &pool->transfer);
    977   if (sc != RTEMS_SUCCESSFUL)
    978   {
    979     rtems_semaphore_delete (pool->access);
    980     rtems_semaphore_delete (pool->sync_lock);
    981     rtems_semaphore_delete (pool->lock);
    982     free (pool->buffers);
    983     free (pool->bds);
    984     return sc;
    985   }
    986 
    987   sc = rtems_semaphore_create (rtems_build_name ('B', 'P', '0' + pid, 'w'),
    988                                0, RTEMS_BDBUF_POOL_WAITER_ATTRIBS, 0,
    989                                &pool->waiting);
    990   if (sc != RTEMS_SUCCESSFUL)
    991   {
    992     rtems_semaphore_delete (pool->transfer);
    993     rtems_semaphore_delete (pool->access);
    994     rtems_semaphore_delete (pool->sync_lock);
    995     rtems_semaphore_delete (pool->lock);
    996     free (pool->buffers);
    997     free (pool->bds);
    998     return sc;
    999   }
    1000 
    1001   return RTEMS_SUCCESSFUL;
    1002 }
    1003 
    1004 /**
    1005  * Free resources allocated for buffer pool with specified number.
    1006  *
    1007  * @param pid Buffer pool number
    1008  *
    1009  * @retval RTEMS_SUCCESSFUL
    1010  */
    1011 static rtems_status_code
    1012 rtems_bdbuf_release_pool (rtems_bdpool_id pid)
    1013 {
    1014   rtems_bdbuf_pool* pool = rtems_bdbuf_get_pool (pid);
    1015  
    1016   rtems_bdbuf_lock_pool (pool);
    1017 
    1018   rtems_semaphore_delete (pool->waiting);
    1019   rtems_semaphore_delete (pool->transfer);
    1020   rtems_semaphore_delete (pool->access);
    1021   rtems_semaphore_delete (pool->lock);
    1022  
    1023   free (pool->buffers);
    1024   free (pool->bds);
    1025  
    1026   return RTEMS_SUCCESSFUL;
    1027 }
    1028 
    1029 rtems_status_code
    1030 rtems_bdbuf_init (void)
    1031 {
    1032   rtems_bdpool_id   p;
    1033   rtems_status_code sc;
    1034 
    1035 #if RTEMS_BDBUF_TRACE
    1036   rtems_bdbuf_printf ("init\n");
    1037 #endif
    1038 
    1039   if (rtems_bdbuf_pool_configuration_size <= 0)
    1040     return RTEMS_INVALID_SIZE;
    1041 
    1042   if (rtems_bdbuf_ctx.npools)
    1043     return RTEMS_RESOURCE_IN_USE;
    1044 
    1045   rtems_bdbuf_ctx.npools = rtems_bdbuf_pool_configuration_size;
    1046 
    1047   /*
    1048    * Allocate memory for buffer pool descriptors
    1049    */
    1050   rtems_bdbuf_ctx.pool = calloc (rtems_bdbuf_pool_configuration_size,
    1051                                  sizeof (rtems_bdbuf_pool));
    1052  
    1053   if (rtems_bdbuf_ctx.pool == NULL)
    1054     return RTEMS_NO_MEMORY;
    1055 
    1056   /*
    1057    * Initialize buffer pools and roll out if something failed,
    1058    */
    1059   for (p = 0; p < rtems_bdbuf_ctx.npools; p++)
    1060   {
    1061     sc = rtems_bdbuf_initialize_pool (&rtems_bdbuf_pool_configuration[p], p);
    1062     if (sc != RTEMS_SUCCESSFUL)
    1063     {
    1064       rtems_bdpool_id j;
    1065       for (j = 0; j < p - 1; j++)
    1066         rtems_bdbuf_release_pool (j);
    1067       return sc;
    1068     }
    1069   }
    1070 
    1071   /*
    1072    * Create and start swapout task
    1073    */
    1074 
    1075   rtems_bdbuf_ctx.swapout_enabled = true;
     1233    rtems_chain_append (&bdbuf_cache.ready, &bd->link);
     1234
     1235    if ((b % bdbuf_cache.max_bds_per_group) ==
     1236        (bdbuf_cache.max_bds_per_group - 1))
     1237      group++;
     1238  }
     1239
     1240  for (b = 0,
     1241         group = bdbuf_cache.groups,
     1242         bd = bdbuf_cache.bds;
     1243       b < bdbuf_cache.group_count;
     1244       b++,
     1245         group++,
     1246         bd += bdbuf_cache.max_bds_per_group)
     1247  {
     1248    group->bds_per_group = bdbuf_cache.max_bds_per_group;
     1249    group->users = 0;
     1250    group->bdbuf = bd;
     1251  }
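  /*
   * Editor's note: with the assumed figures used in the sizing example above
   * (128 BDs, 8 BDs per group) the two loops leave bdbuf_cache.groups[0]
   * owning bds[0..7], groups[1] owning bds[8..15], and so on, while each
   * bd->group points back at the group that owns the BD.
   */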
     1252         
     1253  /*
     1254   * Create and start swapout task. This task will create and manage the worker
     1255   * threads.
     1256   */
     1257  bdbuf_cache.swapout_enabled = true;
    10761258 
    10771259  sc = rtems_task_create (rtems_build_name('B', 'S', 'W', 'P'),
    1078                           (rtems_bdbuf_configuration.swapout_priority ?
    1079                            rtems_bdbuf_configuration.swapout_priority :
     1260                          (bdbuf_config.swapout_priority ?
     1261                           bdbuf_config.swapout_priority :
    10801262                           RTEMS_BDBUF_SWAPOUT_TASK_PRIORITY_DEFAULT),
    10811263                          SWAPOUT_TASK_STACK_SIZE,
    10821264                          RTEMS_PREEMPT | RTEMS_NO_TIMESLICE | RTEMS_NO_ASR,
    10831265                          RTEMS_LOCAL | RTEMS_NO_FLOATING_POINT,
    1084                           &rtems_bdbuf_ctx.swapout);
     1266                          &bdbuf_cache.swapout);
    10851267  if (sc != RTEMS_SUCCESSFUL)
    10861268  {
    1087     for (p = 0; p < rtems_bdbuf_ctx.npools; p++)
    1088       rtems_bdbuf_release_pool (p);
    1089     free (rtems_bdbuf_ctx.pool);
     1269    free (bdbuf_cache.buffers);
     1270    free (bdbuf_cache.groups);
     1271    free (bdbuf_cache.bds);
     1272    rtems_semaphore_delete (bdbuf_cache.transfer);
     1273    rtems_semaphore_delete (bdbuf_cache.access);
     1274    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1275    rtems_bdbuf_unlock_cache ();
     1276    rtems_semaphore_delete (bdbuf_cache.lock);
     1277    bdbuf_cache.initialised = false;
    10901278    return sc;
    10911279  }
    10921280
    1093   sc = rtems_task_start (rtems_bdbuf_ctx.swapout,
     1281  sc = rtems_task_start (bdbuf_cache.swapout,
    10941282                         rtems_bdbuf_swapout_task,
    1095                          (rtems_task_argument) &rtems_bdbuf_ctx);
     1283                         (rtems_task_argument) &bdbuf_cache);
    10961284  if (sc != RTEMS_SUCCESSFUL)
    10971285  {
    1098     rtems_task_delete (rtems_bdbuf_ctx.swapout);
    1099     for (p = 0; p < rtems_bdbuf_ctx.npools; p++)
    1100       rtems_bdbuf_release_pool (p);
    1101     free (rtems_bdbuf_ctx.pool);
     1286    rtems_task_delete (bdbuf_cache.swapout);
     1287    free (bdbuf_cache.buffers);
     1288    free (bdbuf_cache.groups);
     1289    free (bdbuf_cache.bds);
     1290    rtems_semaphore_delete (bdbuf_cache.transfer);
     1291    rtems_semaphore_delete (bdbuf_cache.access);
     1292    rtems_semaphore_delete (bdbuf_cache.sync_lock);
     1293    rtems_bdbuf_unlock_cache ();
     1294    rtems_semaphore_delete (bdbuf_cache.lock);
     1295    bdbuf_cache.initialised = false;
    11021296    return sc;
    11031297  }
    11041298
     1299  rtems_bdbuf_unlock_cache ();
     1300 
    11051301  return RTEMS_SUCCESSFUL;
    11061302}
     
    11091305 * Get a buffer for this device and block. This function returns a buffer once
    11101306 * placed into the AVL tree. If no buffer is available and it is not a read
    1111  * ahead request and no buffers are waiting to the written to disk wait until
    1112  * one is available. If buffers are waiting to be written to disk and non are
    1113  * available expire the hold timer and wake the swap out task. If the buffer is
    1114  * for a read ahead transfer return NULL if there is not buffer or it is in the
    1115  * cache.
    1116  *
    1117  * The AVL tree of buffers for the pool is searched and if not located check
    1118  * obtain a buffer and insert it into the AVL tree. Buffers are first obtained
    1119  * from the ready list until all empty/ready buffers are used. Once all buffers
    1120  * are in use buffers are taken from the LRU list with the least recently used
    1121  * buffer taken first. A buffer taken from the LRU list is removed from the AVL
    1122  * tree. The ready list or LRU list buffer is initialised to this device and
    1123  * block. If no buffers are available due to the ready and LRU lists being
    1124  * empty a check is made of the modified list. Buffers may be queued waiting
    1125  * for the hold timer to expire. These buffers should be written to disk and
    1126  * returned to the LRU list where they can be used rather than this call
    1127  * blocking. If buffers are on the modified list the max. write block size of
    1128  * buffers have their hold timer expired and the swap out task woken. The
    1129  * caller then blocks on the waiting semaphore and counter. When buffers return
    1130  * from the upper layers (access) or lower driver (transfer) the blocked caller
    1131  * task is woken and this procedure is repeated. The repeat handles a case of a
    1132  * another thread pre-empting getting a buffer first and adding it to the AVL
    1133  * tree.
      1307 * ahead request and no buffers are waiting to be written to disk wait until a
      1308 * buffer is available. If buffers are waiting to be written to disk and none
      1309 * are available expire the hold timers of the queued buffers and wake the
     1310 * swap out task. If the buffer is for a read ahead transfer return NULL if
     1311 * there are no buffers available or the buffer is already in the cache.
     1312 *
     1313 * The AVL tree of buffers for the cache is searched and if not found obtain a
     1314 * buffer and insert it into the AVL tree. Buffers are first obtained from the
     1315 * ready list until all empty/ready buffers are used. Once all buffers are in
     1316 * use the LRU list is searched for a buffer of the same group size or a group
     1317 * that has no active buffers in use. A buffer taken from the LRU list is
     1318 * removed from the AVL tree and assigned the new block number. The ready or
     1319 * LRU list buffer is initialised to this device and block. If no buffers are
     1320 * available due to the ready and LRU lists being empty a check is made of the
     1321 * modified list. Buffers may be queued waiting for the hold timer to
     1322 * expire. These buffers should be written to disk and returned to the LRU list
     1323 * where they can be used. If buffers are on the modified list the max. write
      1324 * block size of buffers have their hold timers expired and the swap out task
     1325 * woken. The caller then blocks on the waiting semaphore and counter. When
     1326 * buffers return from the upper layers (access) or lower driver (transfer) the
     1327 * blocked caller task is woken and this procedure is repeated. The repeat
      1328 * handles the case of another thread pre-empting us, getting a buffer first and
     1329 * adding it to the AVL tree.
    11341330 *
    11351331 * A buffer located in the AVL tree means it is already in the cache and maybe
     
    11431339 * and return to the user.
    11441340 *
    1145  * This function assumes the pool the buffer is being taken from is locked and
    1146  * it will make sure the pool is locked when it returns. The pool will be
     1341 * This function assumes the cache the buffer is being taken from is locked and
     1342 * it will make sure the cache is locked when it returns. The cache will be
    11471343 * unlocked if the call could block.
    11481344 *
    1149  * @param pdd The physical disk device
    1150  * @param pool The pool reference
    1151  * @param block Absolute media block number
    1152  * @param read_ahead The get is for a read ahead buffer
    1153  *
    1154  * @return RTEMS status code ( if operation completed successfully or error
      1345 * Variable sized buffers are handled by groups. A group is the size of the
      1346 * maximum buffer that can be allocated. A group can be sized in multiples of
      1347 * the minimum buffer size where the multiples are 1, 2, 4, 8, etc. If the
      1348 * buffer is found in the AVL tree the number of BDs in the group is checked and
      1349 * if different the buffer size for the block has changed. The buffer needs to be
     1350 * invalidated.
     1351 *
     1352 * @param dd The disk device. Has the configured block size.
     1353 * @param bds_per_group The number of BDs in a group for this block.
     1354 * @param block Absolute media block number for the device
     1355 * @param read_ahead The get is for a read ahead buffer if true
     1356 * @return RTEMS status code (if operation completed successfully or error
     11551357 *         code if an error occurred)
    11561358 */
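/*
 * Editor's note: an illustrative sketch only, not part of this changeset. It
 * shows one way the group sizing described above could map a block size to a
 * number of BDs per group; the actual rtems_bdbuf_bds_per_group() used later
 * in this file may differ in name, checks and rounding behaviour.
 */
#if 0
static size_t
example_bds_per_group (size_t block_size)
{
  size_t bds_per_block = 1;
  size_t size = bdbuf_config.buffer_min;

  if (block_size > bdbuf_config.buffer_max)
    return 0;                        /* unsupported block size */

  while (size < block_size)          /* grow in powers of two: 1, 2, 4, 8, ... */
  {
    size *= 2;
    bds_per_block *= 2;
  }

  return bds_per_block;              /* minimum sized buffers per block */
}
#endif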
    11571359static rtems_bdbuf_buffer*
    1158 rtems_bdbuf_get_buffer (rtems_disk_device* pdd,
    1159                         rtems_bdbuf_pool*  pool,
     1360rtems_bdbuf_get_buffer (rtems_disk_device* dd,
     1361                        size_t             bds_per_group,
    11601362                        rtems_blkdev_bnum  block,
    11611363                        bool               read_ahead)
    11621364{
    1163   dev_t               device = pdd->dev;
     1365  dev_t               device = dd->dev;
    11641366  rtems_bdbuf_buffer* bd;
    11651367  bool                available;
    1166 
     1368 
    11671369  /*
    11681370   * Loop until we get a buffer. Under load we could find no buffers are
    11691371   * available requiring this task to wait until some become available before
    1170    * proceeding. There is no timeout. If the call is to block and the buffer is
    1171    * for a read ahead buffer return NULL.
     1372   * proceeding. There is no timeout. If this call is to block and the buffer
     1373   * is for a read ahead buffer return NULL. The read ahead is nice but not
     1374   * that important.
    11721375   *
    11731376   * The search procedure is repeated as another thread could have pre-empted
    11741377   * us while we waited for a buffer, obtained an empty buffer and loaded the
    1175    * AVL tree with the one we are after.
     1378   * AVL tree with the one we are after. In this case we move down and wait for
     1379   * the buffer to return to the cache.
    11761380   */
    11771381  do
     
    11801384     * Search for buffer descriptor for this dev/block key.
    11811385     */
    1182     bd = rtems_bdbuf_avl_search (&pool->tree, device, block);
     1386    bd = rtems_bdbuf_avl_search (&bdbuf_cache.tree, device, block);
    11831387
    11841388    /*
     
    11951399    {
    11961400      /*
    1197        * Assign new buffer descriptor from the empty list if one is present. If
    1198        * the empty queue is empty get the oldest buffer from LRU list. If the
     1401       * Assign new buffer descriptor from the ready list if one is present. If
     1402       * the ready queue is empty get the oldest buffer from LRU list. If the
    11991403       * LRU list is empty there are no available buffers check the modified
    12001404       * list.
    12011405       */
    1202       if (rtems_chain_is_empty (&pool->ready))
     1406      bd = rtems_bdbuf_get_next_bd (bds_per_group, &bdbuf_cache.ready);
     1407
     1408      if (!bd)
    12031409      {
    12041410        /*
    1205          * No unsed or read-ahead buffers.
     1411         * No unused or read-ahead buffers.
    12061412         *
    12071413         * If this is a read ahead buffer just return. No need to place further
    12081414         * pressure on the cache by reading something that may be needed when
    1209          * we have data in the cache that was needed and may still be.
     1415         * we have data in the cache that was needed and may still be in the
     1416         * future.
    12101417         */
    12111418        if (read_ahead)
     
    12151422         * Check the LRU list.
    12161423         */
    1217         bd = (rtems_bdbuf_buffer *) rtems_chain_get (&pool->lru);
     1424        bd = rtems_bdbuf_get_next_bd (bds_per_group, &bdbuf_cache.lru);
    12181425       
    12191426        if (bd)
     
    12221429           * Remove the buffer from the AVL tree.
    12231430           */
    1224           if (rtems_bdbuf_avl_remove (&pool->tree, bd) != 0)
     1431          if (rtems_bdbuf_avl_remove (&bdbuf_cache.tree, bd) != 0)
    12251432            rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY);
    12261433        }
     
    12301437           * If there are buffers on the modified list expire the hold timer
    12311438           * and wake the swap out task then wait else just go and wait.
     1439           *
     1440           * The check for an empty list is made so the swapper is only woken
      1441           * when timers are changed.
    12321442           */
    1233           if (!rtems_chain_is_empty (&pool->modified))
     1443          if (!rtems_chain_is_empty (&bdbuf_cache.modified))
    12341444          {
    1235             rtems_chain_node* node = rtems_chain_head (&pool->modified);
     1445            rtems_chain_node* node = rtems_chain_first (&bdbuf_cache.modified);
    12361446            uint32_t          write_blocks = 0;
    12371447           
    1238             node = node->next;
    1239             while ((write_blocks < rtems_bdbuf_configuration.max_write_blocks) &&
    1240                    !rtems_chain_is_tail (&pool->modified, node))
     1448            while ((write_blocks < bdbuf_config.max_write_blocks) &&
     1449                   !rtems_chain_is_tail (&bdbuf_cache.modified, node))
    12411450            {
    12421451              rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node;
    12431452              bd->hold_timer = 0;
    12441453              write_blocks++;
    1245               node = node->next;
     1454              node = rtems_chain_next (node);
    12461455            }
    12471456
     
    12501459         
    12511460          /*
    1252            * Wait for a buffer to be returned to the pool. The buffer will be
     1461           * Wait for a buffer to be returned to the cache. The buffer will be
    12531462           * placed on the LRU list.
    12541463           */
    1255           rtems_bdbuf_wait (pool, &pool->waiting, &pool->wait_waiters);
     1464          rtems_bdbuf_wait (&bdbuf_cache.waiting, &bdbuf_cache.wait_waiters);
    12561465        }
    12571466      }
    12581467      else
    12591468      {
    1260         bd = (rtems_bdbuf_buffer *) rtems_chain_get (&(pool->ready));
    1261 
     1469        /*
     1470         * We have a new buffer for this block.
     1471         */
    12621472        if ((bd->state != RTEMS_BDBUF_STATE_EMPTY) &&
    12631473            (bd->state != RTEMS_BDBUF_STATE_READ_AHEAD))
     
    12661476        if (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD)
    12671477        {
    1268           if (rtems_bdbuf_avl_remove (&pool->tree, bd) != 0)
     1478          if (rtems_bdbuf_avl_remove (&bdbuf_cache.tree, bd) != 0)
    12691479            rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY);
    12701480        }
     
    12811491        bd->waiters   = 0;
    12821492
    1283         if (rtems_bdbuf_avl_insert (&pool->tree, bd) != 0)
     1493        if (rtems_bdbuf_avl_insert (&bdbuf_cache.tree, bd) != 0)
    12841494          rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_CONSISTENCY);
    12851495
    12861496        return bd;
     1497      }
     1498    }
     1499    else
     1500    {
     1501      /*
     1502       * We have the buffer for the block from the cache. Check if the buffer
      1503       * in the cache is the same size as the requested size we are after.
     1504       */
     1505      if (bd->group->bds_per_group != bds_per_group)
     1506      {
     1507        bd->state = RTEMS_BDBUF_STATE_EMPTY;
     1508        rtems_chain_extract (&bd->link);
     1509        rtems_chain_prepend (&bdbuf_cache.ready, &bd->link);
     1510        bd = NULL;
    12871511      }
    12881512    }
     
    13151539      case RTEMS_BDBUF_STATE_ACCESS_MODIFIED:
    13161540        bd->waiters++;
    1317         rtems_bdbuf_wait (pool, &pool->access, &pool->access_waiters);
     1541        rtems_bdbuf_wait (&bdbuf_cache.access,
     1542                          &bdbuf_cache.access_waiters);
    13181543        bd->waiters--;
    13191544        break;
     
    13221547      case RTEMS_BDBUF_STATE_TRANSFER:
    13231548        bd->waiters++;
    1324         rtems_bdbuf_wait (pool, &pool->transfer, &pool->transfer_waiters);
     1549        rtems_bdbuf_wait (&bdbuf_cache.transfer,
     1550                          &bdbuf_cache.transfer_waiters);
    13251551        bd->waiters--;
    13261552        break;
     
    13451571{
    13461572  rtems_disk_device*  dd;
    1347   rtems_bdbuf_pool*   pool;
    13481573  rtems_bdbuf_buffer* bd;
    1349 
    1350   /*
    1351    * Do not hold the pool lock when obtaining the disk table.
     1574  size_t              bds_per_group;
     1575
     1576  if (!bdbuf_cache.initialised)
     1577    return RTEMS_NOT_CONFIGURED;
     1578
     1579  /*
     1580   * Do not hold the cache lock when obtaining the disk table.
    13521581   */
    13531582  dd = rtems_disk_obtain (device);
    1354   if (dd == NULL)
     1583  if (!dd)
    13551584    return RTEMS_INVALID_ID;
    13561585
    13571586  if (block >= dd->size)
     1587  {
     1588    rtems_disk_release (dd);
     1589    return RTEMS_INVALID_ADDRESS;
     1590  }
     1591
     1592  bds_per_group = rtems_bdbuf_bds_per_group (dd->block_size);
     1593  if (!bds_per_group)
    13581594  {
    13591595    rtems_disk_release (dd);
    13601596    return RTEMS_INVALID_NUMBER;
    13611597  }
    1362 
    1363   pool = rtems_bdbuf_get_pool (dd->phys_dev->pool);
    1364  
    1365   rtems_bdbuf_lock_pool (pool);
     1598 
     1599  rtems_bdbuf_lock_cache ();
    13661600
    13671601#if RTEMS_BDBUF_TRACE
     
    13701604#endif
    13711605
    1372   bd = rtems_bdbuf_get_buffer (dd->phys_dev, pool, block + dd->start, false);
     1606  bd = rtems_bdbuf_get_buffer (dd, bds_per_group, block + dd->start, false);
    13731607
    13741608  if (bd->state == RTEMS_BDBUF_STATE_MODIFIED)
     
    13761610  else
    13771611    bd->state = RTEMS_BDBUF_STATE_ACCESS;
    1378  
    1379   rtems_bdbuf_unlock_pool (pool);
     1612
     1613  rtems_bdbuf_unlock_cache ();
    13801614
    13811615  rtems_disk_release(dd);
    13821616
    13831617  *bdp = bd;
    1384  
     1618
    13851619  return RTEMS_SUCCESSFUL;
    13861620}
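/*
 * Editor's note: a minimal sketch of typical caller usage of the get/read,
 * release and sync calls in this file. It is illustrative only; the device,
 * block number, payload and error handling are assumptions and not part of
 * this changeset.
 */
#if 0
static rtems_status_code
example_update_block (dev_t dev, rtems_blkdev_bnum block,
                      const void* data, size_t size)
{
  rtems_bdbuf_buffer* bd;
  rtems_status_code   sc;

  /* Read the block so any bytes we do not overwrite are preserved. */
  sc = rtems_bdbuf_read (dev, block, &bd);
  if (sc != RTEMS_SUCCESSFUL)
    return sc;

  memcpy (bd->buffer, data, size);

  /* Hand the buffer back as modified; the swapout task writes it later. */
  return rtems_bdbuf_release_modified (bd);
}
#endif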
     
    14131647{
    14141648  rtems_disk_device*    dd;
    1415   rtems_bdbuf_pool*     pool;
    14161649  rtems_bdbuf_buffer*   bd = NULL;
    14171650  uint32_t              read_ahead_count;
    14181651  rtems_blkdev_request* req;
    1419  
     1652  size_t                bds_per_group;
     1653 
     1654  if (!bdbuf_cache.initialised)
     1655    return RTEMS_NOT_CONFIGURED;
     1656
    14201657  /*
    14211658   * @todo This type of request structure is wrong and should be removed.
     
    14281665
    14291666  /*
    1430    * Do not hold the pool lock when obtaining the disk table.
     1667   * Do not hold the cache lock when obtaining the disk table.
    14311668   */
    14321669  dd = rtems_disk_obtain (device);
    1433   if (dd == NULL)
     1670  if (!dd)
    14341671    return RTEMS_INVALID_ID;
    14351672 
    14361673  if (block >= dd->size) {
    14371674    rtems_disk_release(dd);
     1675    return RTEMS_INVALID_NUMBER;
     1676  }
     1677 
     1678  bds_per_group = rtems_bdbuf_bds_per_group (dd->block_size);
     1679  if (!bds_per_group)
     1680  {
     1681    rtems_disk_release (dd);
    14381682    return RTEMS_INVALID_NUMBER;
    14391683  }
     
    14581702    read_ahead_count = dd->size - block;
    14591703
    1460   pool = rtems_bdbuf_get_pool (dd->phys_dev->pool);
    1461 
    1462   rtems_bdbuf_lock_pool (pool);
     1704  rtems_bdbuf_lock_cache ();
    14631705
    14641706  while (req->bufnum < read_ahead_count)
     
    14721714     * caller.
    14731715     */
    1474     bd = rtems_bdbuf_get_buffer (dd->phys_dev, pool,
     1716    bd = rtems_bdbuf_get_buffer (dd, bds_per_group,
    14751717                                 block + dd->start + req->bufnum,
    14761718                                 req->bufnum == 0 ? false : true);
     
    15151757  {
    15161758    /*
    1517      * Unlock the pool. We have the buffer for the block and it will be in the
     1759     * Unlock the cache. We have the buffer for the block and it will be in the
    15181760     * access or transfer state. We may also have a number of read ahead blocks
    15191761     * if we need to transfer data. At this point any other threads can gain
    1520      * access to the pool and if they are after any of the buffers we have they
    1521      * will block and be woken when the buffer is returned to the pool.
     1762     * access to the cache and if they are after any of the buffers we have
     1763     * they will block and be woken when the buffer is returned to the cache.
    15221764     *
    15231765     * If a transfer is needed the I/O operation will occur with pre-emption
    1524      * enabled and the pool unlocked. This is a change to the previous version
     1766     * enabled and the cache unlocked. This is a change to the previous version
    15251767     * of the bdbuf code.
    15261768     */
     
    15361778                         0, &out);
    15371779                         
    1538     rtems_bdbuf_unlock_pool (pool);
     1780    rtems_bdbuf_unlock_cache ();
    15391781
    15401782    req->req = RTEMS_BLKDEV_REQ_READ;
     
    15681810    }
    15691811
    1570     rtems_bdbuf_lock_pool (pool);
     1812    rtems_bdbuf_lock_cache ();
    15711813
    15721814    for (b = 1; b < req->bufnum; b++)
     
    15891831    bd->state = RTEMS_BDBUF_STATE_ACCESS;
    15901832
    1591   rtems_bdbuf_unlock_pool (pool);
     1833  rtems_bdbuf_unlock_cache ();
    15921834  rtems_disk_release (dd);
    15931835
     
    16001842rtems_bdbuf_release (rtems_bdbuf_buffer* bd)
    16011843{
    1602   rtems_bdbuf_pool* pool;
     1844  if (!bdbuf_cache.initialised)
     1845    return RTEMS_NOT_CONFIGURED;
    16031846
    16041847  if (bd == NULL)
    16051848    return RTEMS_INVALID_ADDRESS;
    16061849
    1607   pool = rtems_bdbuf_get_pool (bd->pool);
    1608 
    1609   rtems_bdbuf_lock_pool (pool);
     1850  rtems_bdbuf_lock_cache ();
    16101851
    16111852#if RTEMS_BDBUF_TRACE
     
    16151856  if (bd->state == RTEMS_BDBUF_STATE_ACCESS_MODIFIED)
    16161857  {
    1617     rtems_bdbuf_append_modified (pool, bd);
     1858    rtems_bdbuf_append_modified (bd);
    16181859  }
    16191860  else
     
    16251866     */
    16261867    if (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD)
    1627       rtems_chain_prepend (&pool->ready, &bd->link);
     1868      rtems_chain_prepend (&bdbuf_cache.ready, &bd->link);
    16281869    else
    16291870    {
    16301871      bd->state = RTEMS_BDBUF_STATE_CACHED;
    1631       rtems_chain_append (&pool->lru, &bd->link);
    1632     }
     1872      rtems_chain_append (&bdbuf_cache.lru, &bd->link);
     1873    }
     1874
     1875    /*
     1876     * One less user for the group of bds.
     1877     */
     1878    bd->group->users--;
    16331879  }
    16341880 
     
    16381884   */
    16391885  if (bd->waiters)
    1640     rtems_bdbuf_wake (pool->access, &pool->access_waiters);
     1886    rtems_bdbuf_wake (bdbuf_cache.access, &bdbuf_cache.access_waiters);
    16411887  else
    16421888  {
    16431889    if (bd->state == RTEMS_BDBUF_STATE_READ_AHEAD)
    16441890    {
    1645       if (rtems_chain_has_only_one_node (&pool->ready))
    1646         rtems_bdbuf_wake (pool->waiting, &pool->wait_waiters);
     1891      if (rtems_chain_has_only_one_node (&bdbuf_cache.ready))
     1892        rtems_bdbuf_wake (bdbuf_cache.waiting, &bdbuf_cache.wait_waiters);
    16471893    }
    16481894    else
    16491895    {
    1650       if (rtems_chain_has_only_one_node (&pool->lru))
    1651         rtems_bdbuf_wake (pool->waiting, &pool->wait_waiters);
    1652     }
    1653   }
    1654  
    1655   rtems_bdbuf_unlock_pool (pool);
     1896      if (rtems_chain_has_only_one_node (&bdbuf_cache.lru))
     1897        rtems_bdbuf_wake (bdbuf_cache.waiting, &bdbuf_cache.wait_waiters);
     1898    }
     1899  }
     1900 
     1901  rtems_bdbuf_unlock_cache ();
    16561902
    16571903  return RTEMS_SUCCESSFUL;
     
    16611907rtems_bdbuf_release_modified (rtems_bdbuf_buffer* bd)
    16621908{
    1663   rtems_bdbuf_pool* pool;
    1664 
    1665   if (bd == NULL)
     1909  if (!bdbuf_cache.initialised)
     1910    return RTEMS_NOT_CONFIGURED;
     1911
     1912  if (!bd)
    16661913    return RTEMS_INVALID_ADDRESS;
    16671914
    1668   pool = rtems_bdbuf_get_pool (bd->pool);
    1669 
    1670   rtems_bdbuf_lock_pool (pool);
     1915  rtems_bdbuf_lock_cache ();
    16711916
    16721917#if RTEMS_BDBUF_TRACE
     
    16761921  bd->hold_timer = rtems_bdbuf_configuration.swap_block_hold;
    16771922 
    1678   rtems_bdbuf_append_modified (pool, bd);
     1923  rtems_bdbuf_append_modified (bd);
    16791924
    16801925  if (bd->waiters)
    1681     rtems_bdbuf_wake (pool->access, &pool->access_waiters);
    1682  
    1683   rtems_bdbuf_unlock_pool (pool);
     1926    rtems_bdbuf_wake (bdbuf_cache.access, &bdbuf_cache.access_waiters);
     1927 
     1928  rtems_bdbuf_unlock_cache ();
    16841929
    16851930  return RTEMS_SUCCESSFUL;
     
    16891934rtems_bdbuf_sync (rtems_bdbuf_buffer* bd)
    16901935{
    1691   rtems_bdbuf_pool* pool;
    1692   bool              available;
     1936  bool available;
    16931937
    16941938#if RTEMS_BDBUF_TRACE
     
    16961940#endif
    16971941 
    1698   if (bd == NULL)
     1942  if (!bdbuf_cache.initialised)
     1943    return RTEMS_NOT_CONFIGURED;
     1944
     1945  if (!bd)
    16991946    return RTEMS_INVALID_ADDRESS;
    17001947
    1701   pool = rtems_bdbuf_get_pool (bd->pool);
    1702 
    1703   rtems_bdbuf_lock_pool (pool);
     1948  rtems_bdbuf_lock_cache ();
    17041949
    17051950  bd->state = RTEMS_BDBUF_STATE_SYNC;
    17061951
    1707   rtems_chain_append (&pool->sync, &bd->link);
     1952  rtems_chain_append (&bdbuf_cache.sync, &bd->link);
    17081953
    17091954  rtems_bdbuf_wake_swapper ();
     
    17251970      case RTEMS_BDBUF_STATE_TRANSFER:
    17261971        bd->waiters++;
    1727         rtems_bdbuf_wait (pool, &pool->transfer, &pool->transfer_waiters);
     1972        rtems_bdbuf_wait (&bdbuf_cache.transfer, &bdbuf_cache.transfer_waiters);
    17281973        bd->waiters--;
    17291974        break;
     
    17341979  }
    17351980
    1736   rtems_bdbuf_unlock_pool (pool);
     1981  rtems_bdbuf_unlock_cache ();
    17371982 
    17381983  return RTEMS_SUCCESSFUL;
     
    17431988{
    17441989  rtems_disk_device*  dd;
    1745   rtems_bdbuf_pool*   pool;
    17461990  rtems_status_code   sc;
    17471991  rtems_event_set     out;
     
    17511995#endif
    17521996
    1753   /*
    1754    * Do not hold the pool lock when obtaining the disk table.
     1997  if (!bdbuf_cache.initialised)
     1998    return RTEMS_NOT_CONFIGURED;
     1999
     2000  /*
     2001   * Do not hold the cache lock when obtaining the disk table.
    17552002   */
    17562003  dd = rtems_disk_obtain (dev);
    1757   if (dd == NULL)
     2004  if (!dd)
    17582005    return RTEMS_INVALID_ID;
    17592006
    1760   pool = rtems_bdbuf_get_pool (dd->pool);
    1761 
    1762   /*
    1763    * Take the sync lock before locking the pool. Once we have the sync lock
    1764    * we can lock the pool. If another thread has the sync lock it will cause
    1765    * this thread to block until it owns the sync lock then it can own the
    1766    * pool. The sync lock can only be obtained with the pool unlocked.
    1767    */
    1768  
    1769   rtems_bdbuf_lock_sync (pool);
    1770   rtems_bdbuf_lock_pool (pool); 
    1771 
    1772   /*
    1773    * Set the pool to have a sync active for a specific device and let the swap
     2007  /*
     2008   * Take the sync lock before locking the cache. Once we have the sync lock we
     2009   * can lock the cache. If another thread has the sync lock it will cause this
     2010   * thread to block until it owns the sync lock then it can own the cache. The
     2011   * sync lock can only be obtained with the cache unlocked.
     2012   */
     2013 
     2014  rtems_bdbuf_lock_sync ();
     2015  rtems_bdbuf_lock_cache (); 
     2016
     2017  /*
     2018   * Set the cache to have a sync active for a specific device and let the swap
    17742019   * out task know the id of the requester to wake when done.
    17752020   *
    17762021   * The swap out task will negate the sync active flag when no more buffers
    1777    * for the device are held on the modified for sync queues.
    1778    */
    1779   pool->sync_active    = true;
    1780   pool->sync_requester = rtems_task_self ();
    1781   pool->sync_device    = dev;
     2022   * for the device are held on the "modified for sync" queues.
     2023   */
     2024  bdbuf_cache.sync_active    = true;
     2025  bdbuf_cache.sync_requester = rtems_task_self ();
     2026  bdbuf_cache.sync_device    = dev;
    17822027 
    17832028  rtems_bdbuf_wake_swapper ();
    1784   rtems_bdbuf_unlock_pool (pool);
     2029  rtems_bdbuf_unlock_cache ();
    17852030 
    17862031  sc = rtems_event_receive (RTEMS_BDBUF_TRANSFER_SYNC,
     
    17912036    rtems_fatal_error_occurred (BLKDEV_FATAL_BDBUF_SWAPOUT_RE);
    17922037     
    1793   rtems_bdbuf_unlock_sync (pool);
    1794  
    1795   return rtems_disk_release(dd);
     2038  rtems_bdbuf_unlock_sync ();
     2039 
     2040  return rtems_disk_release (dd);
    17962041}
    17972042
     
    18182063
    18192064/**
    1820  * Process the modified list of buffers. There us a sync or modified list that
    1821  * needs to be handled.
    1822  *
    1823  * @param pid The pool id to process modified buffers on.
    1824  * @param dev The device to handle. If -1 no device is selected so select the
    1825  *            device of the first buffer to be written to disk.
    1826  * @param chain The modified chain to process.
    1827  * @param transfer The chain to append buffers to be written too.
    1828  * @param sync_active If true this is a sync operation so expire all timers.
    1829  * @param update_timers If true update the timers.
    1830  * @param timer_delta It update_timers is true update the timers by this
    1831  *                    amount.
     2065 * Swapout transfer to the driver. The driver will break this I/O into groups
      2066 * of consecutive write requests if multiple consecutive buffers are required
     2067 * by the driver.
     2068 *
     2069 * @param transfer The transfer transaction.
    18322070 */
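/*
 * Editor's note: as an assumed example of the grouping described above, if
 * the transfer holds modified blocks 10, 11, 12 and 20 and the driver
 * advertises RTEMS_BLKDEV_CAP_MULTISECTOR_CONT, the loop below issues two
 * write requests: one for blocks 10-12 and one for block 20. Without that
 * capability all four blocks can go out in a single scattered request,
 * subject to the max_write_blocks limit.
 */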
    18332071static void
    1834 rtems_bdbuf_swapout_modified_processing (rtems_bdpool_id      pid,
    1835                                          dev_t*               dev,
    1836                                          rtems_chain_control* chain,
    1837                                          rtems_chain_control* transfer,
    1838                                          bool                 sync_active,
    1839                                          bool                 update_timers,
    1840                                          uint32_t             timer_delta)
    1841 {
    1842   if (!rtems_chain_is_empty (chain))
    1843   {
    1844     rtems_chain_node* node = rtems_chain_head (chain);
    1845     node = node->next;
    1846 
    1847     while (!rtems_chain_is_tail (chain, node))
    1848     {
    1849       rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node;
    1850    
    1851       if (bd->pool == pid)
    1852       {
    1853         /*
    1854          * Check if the buffer's hold timer has reached 0. If a sync is active
    1855          * force all the timers to 0.
    1856          *
    1857          * @note Lots of sync requests will skew this timer. It should be based
    1858          *       on TOD to be accurate. Does it matter ?
    1859          */
    1860         if (sync_active)
    1861           bd->hold_timer = 0;
    1862  
    1863         if (bd->hold_timer)
    1864         {
    1865           if (update_timers)
    1866           {
    1867             if (bd->hold_timer > timer_delta)
    1868               bd->hold_timer -= timer_delta;
    1869             else
    1870               bd->hold_timer = 0;
    1871           }
    1872 
    1873           if (bd->hold_timer)
    1874           {
    1875             node = node->next;
    1876             continue;
    1877           }
    1878         }
    1879 
    1880         /*
    1881          * This assumes we can set dev_t to -1 which is just an
    1882          * assumption. Cannot use the transfer list being empty the sync dev
    1883          * calls sets the dev to use.
    1884          */
    1885         if (*dev == (dev_t)-1)
    1886           *dev = bd->dev;
    1887 
    1888         if (bd->dev == *dev)
    1889         {
    1890           rtems_chain_node* next_node = node->next;
    1891           rtems_chain_node* tnode = rtems_chain_tail (transfer);
    1892    
    1893           /*
    1894            * The blocks on the transfer list are sorted in block order. This
    1895            * means multi-block transfers for drivers that require consecutive
    1896            * blocks perform better with sorted blocks and for real disks it may
    1897            * help lower head movement.
    1898            */
    1899 
    1900           bd->state = RTEMS_BDBUF_STATE_TRANSFER;
    1901 
    1902           rtems_chain_extract (node);
    1903 
    1904           tnode = tnode->previous;
    1905          
    1906           while (node && !rtems_chain_is_head (transfer, tnode))
    1907           {
    1908             rtems_bdbuf_buffer* tbd = (rtems_bdbuf_buffer*) tnode;
    1909 
    1910             if (bd->block > tbd->block)
    1911             {
    1912               rtems_chain_insert (tnode, node);
    1913               node = NULL;
    1914             }
    1915             else
    1916               tnode = tnode->previous;
    1917           }
    1918 
    1919           if (node)
    1920             rtems_chain_prepend (transfer, node);
    1921          
    1922           node = next_node;
    1923         }
    1924         else
    1925         {
    1926           node = node->next;
    1927         }
    1928       }
    1929     }
    1930   }
    1931 }
    1932 
    1933 /**
    1934  * Process a pool's modified buffers. Check the sync list first then the
    1935  * modified list extracting the buffers suitable to be written to disk. We have
    1936  * a device at a time. The task level loop will repeat this operation while
    1937  * there are buffers to be written. If the transfer fails place the buffers
    1938  * back on the modified list and try again later. The pool is unlocked while
    1939  * the buffers are being written to disk.
    1940  *
    1941  * @param pid The pool id to process modified buffers on.
    1942  * @param timer_delta It update_timers is true update the timers by this
    1943  *                    amount.
    1944  * @param update_timers If true update the timers.
    1945  * @param write_req The write request structure. There is only one.
    1946  *
    1947  * @retval true Buffers where written to disk so scan again.
    1948  * @retval false No buffers where written to disk.
    1949  */
    1950 static bool
    1951 rtems_bdbuf_swapout_pool_processing (rtems_bdpool_id       pid,
    1952                                      unsigned long         timer_delta,
    1953                                      bool                  update_timers,
    1954                                      rtems_blkdev_request* write_req)
    1955 {
    1956   rtems_bdbuf_pool*   pool = rtems_bdbuf_get_pool (pid);
    1957   rtems_chain_control transfer;
    1958   dev_t               dev = -1;
     2072rtems_bdbuf_swapout_write (rtems_bdbuf_swapout_transfer* transfer)
     2073{
    19592074  rtems_disk_device*  dd;
    1960   bool                transfered_buffers = true;
    1961 
    1962   rtems_chain_initialize_empty (&transfer);
    1963    
    1964   rtems_bdbuf_lock_pool (pool);
    1965 
    1966   /*
    1967    * When the sync is for a device limit the sync to that device. If the sync
    1968    * is for a buffer handle process the devices in the order on the sync
    1969    * list. This means the dev is -1.
    1970    */
    1971   if (pool->sync_active)
    1972     dev = pool->sync_device;
    1973 
    1974   /*
    1975    * If we have any buffers in the sync queue move them to the modified
    1976    * list. The first sync buffer will select the device we use.
    1977    */
    1978   rtems_bdbuf_swapout_modified_processing (pid, &dev,
    1979                                            &pool->sync, &transfer,
    1980                                            true, false,
    1981                                            timer_delta);
    1982 
    1983   /*
    1984    * Process the pool's modified list.
    1985    */
    1986   rtems_bdbuf_swapout_modified_processing (pid, &dev,
    1987                                            &pool->modified, &transfer,
    1988                                            pool->sync_active,
    1989                                            update_timers,
    1990                                            timer_delta);
    1991 
    1992   /*
    1993    * We have all the buffers that have been modified for this device so the
    1994    * pool can be unlocked because the state of each buffer has been set to
    1995    * TRANSFER.
    1996    */
    1997   rtems_bdbuf_unlock_pool (pool);
     2075 
     2076#if RTEMS_BDBUF_TRACE
     2077  rtems_bdbuf_printf ("swapout transfer: %08x\n", transfer->dev);
     2078#endif
    19982079
    19992080  /*
    20002081   * If there are buffers to transfer to the media transfer them.
    20012082   */
    2002   if (rtems_chain_is_empty (&transfer))
    2003     transfered_buffers = false;
    2004   else
     2083  if (!rtems_chain_is_empty (&transfer->bds))
    20052084  {
    20062085    /*
    2007      * Obtain the disk device. The pool's mutex has been released to avoid a
     2086     * Obtain the disk device. The cache's mutex has been released to avoid a
    20082087     * dead lock.
    20092088     */
    2010     dd = rtems_disk_obtain (dev);
    2011     if (dd == NULL)
    2012        transfered_buffers = false;
    2013     else
     2089    dd = rtems_disk_obtain (transfer->dev);
     2090    if (dd)
    20142091    {
    20152092      /*
     
    20282105       * trouble waiting to happen.
    20292106       */
    2030       write_req->status = RTEMS_RESOURCE_IN_USE;
    2031       write_req->error = 0;
    2032       write_req->bufnum = 0;
    2033 
    2034       while (!rtems_chain_is_empty (&transfer))
     2107      transfer->write_req->status = RTEMS_RESOURCE_IN_USE;
     2108      transfer->write_req->error = 0;
     2109      transfer->write_req->bufnum = 0;
     2110
     2111      while (!rtems_chain_is_empty (&transfer->bds))
    20352112      {
    20362113        rtems_bdbuf_buffer* bd =
    2037           (rtems_bdbuf_buffer*) rtems_chain_get (&transfer);
     2114          (rtems_bdbuf_buffer*) rtems_chain_get (&transfer->bds);
    20382115
    20392116        bool write = false;
     
    20462123         */
    20472124       
    2048         if ((dd->capabilities & RTEMS_BLKDEV_CAP_MULTISECTOR_CONT) &&
    2049             write_req->bufnum &&
     2125#if RTEMS_BDBUF_TRACE
     2126        rtems_bdbuf_printf ("swapout write: bd:%d, bufnum:%d mode:%s\n",
     2127                            bd->block, transfer->write_req->bufnum,
     2128                            dd->phys_dev->capabilities &
      2129                            RTEMS_BLKDEV_CAP_MULTISECTOR_CONT ? "MULTI" : "SCAT");
     2130#endif
     2131
     2132        if ((dd->phys_dev->capabilities & RTEMS_BLKDEV_CAP_MULTISECTOR_CONT) &&
     2133            transfer->write_req->bufnum &&
    20502134            (bd->block != (last_block + 1)))
    20512135        {
    2052           rtems_chain_prepend (&transfer, &bd->link);
     2136          rtems_chain_prepend (&transfer->bds, &bd->link);
    20532137          write = true;
    20542138        }
    20552139        else
    20562140        {
    2057           write_req->bufs[write_req->bufnum].user   = bd;
    2058           write_req->bufs[write_req->bufnum].block  = bd->block;
    2059           write_req->bufs[write_req->bufnum].length = dd->block_size;
    2060           write_req->bufs[write_req->bufnum].buffer = bd->buffer;
    2061           write_req->bufnum++;
    2062           last_block = bd->block;
     2141          rtems_blkdev_sg_buffer* buf;
     2142          buf = &transfer->write_req->bufs[transfer->write_req->bufnum];
     2143          transfer->write_req->bufnum++;
     2144          buf->user   = bd;
     2145          buf->block  = bd->block;
     2146          buf->length = dd->block_size;
     2147          buf->buffer = bd->buffer;
     2148          last_block  = bd->block;
    20632149        }
    20642150
     
    20682154         */
    20692155
    2070         if (rtems_chain_is_empty (&transfer) ||
    2071             (write_req->bufnum >= rtems_bdbuf_configuration.max_write_blocks))
     2156        if (rtems_chain_is_empty (&transfer->bds) ||
     2157            (transfer->write_req->bufnum >= rtems_bdbuf_configuration.max_write_blocks))
    20722158          write = true;
    20732159
     
    20772163          uint32_t b;
    20782164
     2165#if RTEMS_BDBUF_TRACE
     2166          rtems_bdbuf_printf ("swapout write: writing bufnum:%d\n",
     2167                              transfer->write_req->bufnum);
     2168#endif
    20792169          /*
    2080            * Perform the transfer. No pool locks, no preemption, only the disk
     2170           * Perform the transfer. No cache locks, no preemption, only the disk
    20812171           * device is being held.
    20822172           */
    2083           result = dd->ioctl (dd->phys_dev->dev,
    2084                               RTEMS_BLKIO_REQUEST, write_req);
     2173          result = dd->phys_dev->ioctl (dd->phys_dev->dev,
     2174                                        RTEMS_BLKIO_REQUEST, transfer->write_req);
    20852175
    20862176          if (result < 0)
    20872177          {
    2088             rtems_bdbuf_lock_pool (pool);
     2178            rtems_bdbuf_lock_cache ();
    20892179             
    2090             for (b = 0; b < write_req->bufnum; b++)
     2180            for (b = 0; b < transfer->write_req->bufnum; b++)
    20912181            {
    2092               bd = write_req->bufs[b].user;
     2182              bd = transfer->write_req->bufs[b].user;
    20932183              bd->state  = RTEMS_BDBUF_STATE_MODIFIED;
    20942184              bd->error = errno;
    20952185
    20962186              /*
    2097                * Place back on the pools modified queue and try again.
     2187               * Place back on the cache's modified queue and try again.
    20982188               *
    20992189               * @warning Not sure this is the best option but I do not know
    21002190               *          what else can be done.
    21012191               */
    2102               rtems_chain_append (&pool->modified, &bd->link);
     2192              rtems_chain_append (&bdbuf_cache.modified, &bd->link);
    21032193            }
    21042194          }
     
    21152205              rtems_fatal_error_occurred (BLKDEV_FATAL_BDBUF_SWAPOUT_RE);
    21162206
    2117             rtems_bdbuf_lock_pool (pool);
    2118 
    2119             for (b = 0; b < write_req->bufnum; b++)
     2207            rtems_bdbuf_lock_cache ();
     2208
     2209            for (b = 0; b < transfer->write_req->bufnum; b++)
    21202210            {
    2121               bd = write_req->bufs[b].user;
     2211              bd = transfer->write_req->bufs[b].user;
    21222212              bd->state = RTEMS_BDBUF_STATE_CACHED;
    21232213              bd->error = 0;
    2124 
    2125               rtems_chain_append (&pool->lru, &bd->link);
     2214              bd->group->users--;
     2215             
     2216              rtems_chain_append (&bdbuf_cache.lru, &bd->link);
    21262217             
    21272218              if (bd->waiters)
    2128                 rtems_bdbuf_wake (pool->transfer, &pool->transfer_waiters);
     2219                rtems_bdbuf_wake (bdbuf_cache.transfer, &bdbuf_cache.transfer_waiters);
    21292220              else
    21302221              {
    2131                 if (rtems_chain_has_only_one_node (&pool->lru))
    2132                   rtems_bdbuf_wake (pool->waiting, &pool->wait_waiters);
     2222                if (rtems_chain_has_only_one_node (&bdbuf_cache.lru))
     2223                  rtems_bdbuf_wake (bdbuf_cache.waiting, &bdbuf_cache.wait_waiters);
    21332224              }
    21342225            }
    21352226          }
    2136 
    2137           rtems_bdbuf_unlock_pool (pool);
    2138 
    2139           write_req->status = RTEMS_RESOURCE_IN_USE;
    2140           write_req->error = 0;
    2141           write_req->bufnum = 0;
     2227             
     2228          rtems_bdbuf_unlock_cache ();
     2229
     2230          transfer->write_req->status = RTEMS_RESOURCE_IN_USE;
     2231          transfer->write_req->error = 0;
     2232          transfer->write_req->bufnum = 0;
    21422233        }
    21432234      }
     
    21452236      rtems_disk_release (dd);
    21462237    }
    2147   }
    2148 
    2149   if (pool->sync_active && !  transfered_buffers)
    2150   {
    2151     rtems_id sync_requester = pool->sync_requester;
    2152     pool->sync_active = false;
    2153     pool->sync_requester = 0;
     2238    else
     2239    {
     2240      /*
     2241       * We have buffers but no device. Put the BDs back onto the
     2242       * ready queue and exit.
     2243       */
     2244      /* @todo fixme */
     2245    }
     2246  }
     2247}
     2248
     2249/**
     2250 * Process the modified list of buffers. There is a sync or modified list that
     2251 * needs to be handled so we have a common function to do the work.
     2252 *
     2253 * @param dev The device to handle. If -1 no device is selected so select the
     2254 *            device of the first buffer to be written to disk.
     2255 * @param chain The modified chain to process.
      2256 * @param transfer The chain to append buffers to be written to.
     2257 * @param sync_active If true this is a sync operation so expire all timers.
     2258 * @param update_timers If true update the timers.
      2259 * @param timer_delta If update_timers is true update the timers by this
     2260 *                    amount.
     2261 */
     2262static void
     2263rtems_bdbuf_swapout_modified_processing (dev_t*               dev,
     2264                                         rtems_chain_control* chain,
     2265                                         rtems_chain_control* transfer,
     2266                                         bool                 sync_active,
     2267                                         bool                 update_timers,
     2268                                         uint32_t             timer_delta)
     2269{
     2270  if (!rtems_chain_is_empty (chain))
     2271  {
     2272    rtems_chain_node* node = rtems_chain_head (chain);
     2273    node = node->next;
     2274
     2275    while (!rtems_chain_is_tail (chain, node))
     2276    {
     2277      rtems_bdbuf_buffer* bd = (rtems_bdbuf_buffer*) node;
     2278   
     2279      /*
     2280       * Check if the buffer's hold timer has reached 0. If a sync is active
     2281       * force all the timers to 0.
     2282       *
     2283       * @note Lots of sync requests will skew this timer. It should be based
     2284       *       on TOD to be accurate. Does it matter ?
     2285       */
     2286      if (sync_active)
     2287        bd->hold_timer = 0;
     2288 
     2289      if (bd->hold_timer)
     2290      {
     2291        if (update_timers)
     2292        {
     2293          if (bd->hold_timer > timer_delta)
     2294            bd->hold_timer -= timer_delta;
     2295          else
     2296            bd->hold_timer = 0;
     2297        }
     2298
     2299        if (bd->hold_timer)
     2300        {
     2301          node = node->next;
     2302          continue;
     2303        }
     2304      }
     2305
     2306      /*
      2307       * This assumes we can set dev_t to -1 which is just an
      2308       * assumption. We cannot use the transfer list being empty as the test
      2309       * because the sync dev call sets the dev to use.
     2310       */
     2311      if (*dev == (dev_t)-1)
     2312        *dev = bd->dev;
     2313
     2314      if (bd->dev == *dev)
     2315      {
     2316        rtems_chain_node* next_node = node->next;
     2317        rtems_chain_node* tnode = rtems_chain_tail (transfer);
     2318   
     2319        /*
     2320         * The blocks on the transfer list are sorted in block order. This
     2321         * means multi-block transfers for drivers that require consecutive
     2322         * blocks perform better with sorted blocks and for real disks it may
     2323         * help lower head movement.
     2324         */
     2325
     2326        bd->state = RTEMS_BDBUF_STATE_TRANSFER;
     2327
     2328        rtems_chain_extract (node);
     2329
     2330        tnode = tnode->previous;
     2331         
     2332        while (node && !rtems_chain_is_head (transfer, tnode))
     2333        {
     2334          rtems_bdbuf_buffer* tbd = (rtems_bdbuf_buffer*) tnode;
     2335
     2336          if (bd->block > tbd->block)
     2337          {
     2338            rtems_chain_insert (tnode, node);
     2339            node = NULL;
     2340          }
     2341          else
     2342            tnode = tnode->previous;
     2343        }
     2344       
     2345        if (node)
     2346          rtems_chain_prepend (transfer, node);
     2347         
     2348        node = next_node;
     2349      }
     2350      else
     2351      {
     2352        node = node->next;
     2353      }
     2354    }
     2355  }
     2356}
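/*
 * Editor's note: the transfer chain built above is kept sorted by block
 * number. As an assumed example, if modified buffers for blocks 7, 3 and 5
 * are selected in that order, the insertion walk from the tail leaves the
 * chain ordered 3, 5, 7, which lets drivers that need consecutive blocks
 * coalesce writes and may reduce head movement on real disks.
 */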
     2357
     2358/**
     2359 * Process the cache's modified buffers. Check the sync list first then the
      2360 * modified list extracting the buffers suitable to be written to disk. We handle
      2361 * one device at a time. The task level loop will repeat this operation while
     2362 * there are buffers to be written. If the transfer fails place the buffers
     2363 * back on the modified list and try again later. The cache is unlocked while
     2364 * the buffers are being written to disk.
     2365 *
      2366 * @param timer_delta If update_timers is true update the timers by this
     2367 *                    amount.
     2368 * @param update_timers If true update the timers.
     2369 * @param transfer The transfer transaction data.
     2370 *
      2371 * @retval true Buffers were written to disk so scan again.
      2372 * @retval false No buffers were written to disk.
     2373 */
     2374static bool
     2375rtems_bdbuf_swapout_processing (unsigned long                 timer_delta,
     2376                                bool                          update_timers,
     2377                                rtems_bdbuf_swapout_transfer* transfer)
     2378{
     2379  rtems_bdbuf_swapout_worker* worker;
     2380  bool                        transfered_buffers = false;
     2381
     2382  rtems_bdbuf_lock_cache ();
     2383
     2384  /*
     2385   * If a sync is active do not use a worker because the current code does not
      2386   * clean up after it. We need to know the buffers have been written when
      2387   * syncing to release the sync lock and currently worker threads do not
      2388   * return here. We do not know the worker is the last in a sequence of sync
      2389   * writes until after we have it running so we do not know to tell it to
      2390   * release the lock. The simplest solution is to have the main swap out
      2391   * task perform all sync operations.
     2392   */
     2393  if (bdbuf_cache.sync_active)
     2394    worker = NULL;
     2395  else
     2396  {
     2397    worker = (rtems_bdbuf_swapout_worker*)
     2398      rtems_chain_get (&bdbuf_cache.swapout_workers);
     2399    if (worker)
     2400      transfer = &worker->transfer;
     2401  }
     2402 
     2403  rtems_chain_initialize_empty (&transfer->bds);
     2404  transfer->dev = -1;
     2405 
     2406  /*
     2407   * When the sync is for a device limit the sync to that device. If the sync
      2408   * is for a buffer handle, process the devices in the order they appear on
      2409   * the sync list. This means the dev is -1.
     2410   */
     2411  if (bdbuf_cache.sync_active)
     2412    transfer->dev = bdbuf_cache.sync_device;
     2413 
     2414  /*
     2415   * If we have any buffers in the sync queue move them to the modified
     2416   * list. The first sync buffer will select the device we use.
     2417   */
     2418  rtems_bdbuf_swapout_modified_processing (&transfer->dev,
     2419                                           &bdbuf_cache.sync,
     2420                                           &transfer->bds,
     2421                                           true, false,
     2422                                           timer_delta);
     2423
     2424  /*
     2425   * Process the cache's modified list.
     2426   */
     2427  rtems_bdbuf_swapout_modified_processing (&transfer->dev,
     2428                                           &bdbuf_cache.modified,
     2429                                           &transfer->bds,
     2430                                           bdbuf_cache.sync_active,
     2431                                           update_timers,
     2432                                           timer_delta);
     2433
     2434  /*
     2435   * We have all the buffers that have been modified for this device so the
     2436   * cache can be unlocked because the state of each buffer has been set to
     2437   * TRANSFER.
     2438   */
     2439  rtems_bdbuf_unlock_cache ();
     2440
     2441  /*
     2442   * If there are buffers to transfer to the media transfer them.
     2443   */
     2444  if (!rtems_chain_is_empty (&transfer->bds))
     2445  {
     2446    if (worker)
     2447    {
     2448      rtems_status_code sc = rtems_event_send (worker->id,
     2449                                               RTEMS_BDBUF_SWAPOUT_SYNC);
     2450      if (sc != RTEMS_SUCCESSFUL)
     2451        rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_WAKE);
     2452    }
     2453    else
     2454    {
     2455      rtems_bdbuf_swapout_write (transfer);
     2456    }
     2457   
     2458    transfered_buffers = true;
     2459  }
     2460   
     2461  if (bdbuf_cache.sync_active && !transfered_buffers)
     2462  {
     2463    rtems_id sync_requester;
     2464    rtems_bdbuf_lock_cache ();
     2465    sync_requester = bdbuf_cache.sync_requester;
     2466    bdbuf_cache.sync_active = false;
     2467    bdbuf_cache.sync_requester = 0;
     2468    rtems_bdbuf_unlock_cache ();
    21542469    if (sync_requester)
    21552470      rtems_event_send (sync_requester, RTEMS_BDBUF_TRANSFER_SYNC);
    21562471  }
    21572472 
    2158   return  transfered_buffers;
    2159 }
    2160 
    2161 /**
    2162  * Body of task which takes care on flushing modified buffers to the disk.
    2163  *
    2164  * @param arg The task argument which is the context.
    2165  */
    2166 static rtems_task
    2167 rtems_bdbuf_swapout_task (rtems_task_argument arg)
    2168 {
    2169   rtems_bdbuf_context*  context = (rtems_bdbuf_context*) arg;
    2170   rtems_blkdev_request* write_req;
    2171   uint32_t              period_in_ticks;
    2172   const uint32_t        period_in_msecs = rtems_bdbuf_configuration.swapout_period;
    2173   uint32_t              timer_delta;
    2174   rtems_status_code     sc;
    2175 
     2473  return transfered_buffers;
     2474}
     2475
     2476/**
     2477 * Allocate the write request and initialise it for good measure.
     2478 *
      2479 * @return rtems_blkdev_request* The allocated write request.
     2480 */
     2481static rtems_blkdev_request*
     2482rtems_bdbuf_swapout_writereq_alloc (void)
     2483{
    21762484  /*
    21772485   * @note chrisj The rtems_blkdev_request and the array at the end is a hack.
     
    21802488   * is already part of the buffer structure.
    21812489   */
    2182   write_req =
     2490  rtems_blkdev_request* write_req =
    21832491    malloc (sizeof (rtems_blkdev_request) +
    21842492            (rtems_bdbuf_configuration.max_write_blocks *
     
    21932501  write_req->io_task = rtems_task_self ();
    21942502
     2503  return write_req;
     2504}
     2505
     2506/**
     2507 * The swapout worker thread body.
     2508 *
     2509 * @param arg A pointer to the worker thread's private data.
     2510 * @return rtems_task Not used.
     2511 */
     2512static rtems_task
     2513rtems_bdbuf_swapout_worker_task (rtems_task_argument arg)
     2514{
     2515  rtems_bdbuf_swapout_worker* worker = (rtems_bdbuf_swapout_worker*) arg;
     2516
     2517  while (worker->enabled)
     2518  {
     2519    rtems_event_set   out;
     2520    rtems_status_code sc;
     2521   
     2522    sc = rtems_event_receive (RTEMS_BDBUF_SWAPOUT_SYNC,
     2523                              RTEMS_EVENT_ALL | RTEMS_WAIT,
     2524                              RTEMS_NO_TIMEOUT,
     2525                              &out);
     2526
     2527    if (sc != RTEMS_SUCCESSFUL)
     2528      rtems_fatal_error_occurred (BLKDEV_FATAL_BDBUF_SWAPOUT_RE);
     2529
     2530    rtems_bdbuf_swapout_write (&worker->transfer);
     2531
     2532    rtems_bdbuf_lock_cache ();
     2533
     2534    rtems_chain_initialize_empty (&worker->transfer.bds);
     2535    worker->transfer.dev = -1;
     2536
     2537    rtems_chain_append (&bdbuf_cache.swapout_workers, &worker->link);
     2538   
     2539    rtems_bdbuf_unlock_cache ();
     2540  }
     2541
     2542  free (worker->transfer.write_req);
     2543  free (worker);
     2544
     2545  rtems_task_delete (RTEMS_SELF);
     2546}
     2547
     2548/**
     2549 * Open the swapout worker threads.
     2550 */
     2551static void
     2552rtems_bdbuf_swapout_workers_open (void)
     2553{
     2554  rtems_status_code sc;
     2555  int               w;
     2556 
     2557  rtems_bdbuf_lock_cache ();
     2558 
     2559  for (w = 0; w < rtems_bdbuf_configuration.swapout_workers; w++)
     2560  {
     2561    rtems_bdbuf_swapout_worker* worker;
     2562
     2563    worker = malloc (sizeof (rtems_bdbuf_swapout_worker));
     2564    if (!worker)
     2565      rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_NOMEM);
     2566
     2567    rtems_chain_append (&bdbuf_cache.swapout_workers, &worker->link);
     2568    worker->enabled = true;
     2569    worker->transfer.write_req = rtems_bdbuf_swapout_writereq_alloc ();
     2570   
     2571    rtems_chain_initialize_empty (&worker->transfer.bds);
     2572    worker->transfer.dev = -1;
     2573
     2574    sc = rtems_task_create (rtems_build_name('B', 'D', 'o', 'a' + w),
     2575                            (rtems_bdbuf_configuration.swapout_priority ?
     2576                             rtems_bdbuf_configuration.swapout_priority :
     2577                             RTEMS_BDBUF_SWAPOUT_TASK_PRIORITY_DEFAULT),
     2578                            SWAPOUT_TASK_STACK_SIZE,
     2579                            RTEMS_PREEMPT | RTEMS_NO_TIMESLICE | RTEMS_NO_ASR,
     2580                            RTEMS_LOCAL | RTEMS_NO_FLOATING_POINT,
     2581                            &worker->id);
     2582    if (sc != RTEMS_SUCCESSFUL)
     2583      rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_CREATE);
     2584
     2585    sc = rtems_task_start (worker->id,
     2586                           rtems_bdbuf_swapout_worker_task,
     2587                           (rtems_task_argument) worker);
     2588    if (sc != RTEMS_SUCCESSFUL)
     2589      rtems_fatal_error_occurred (RTEMS_BLKDEV_FATAL_BDBUF_SO_WK_START);
     2590  }
     2591 
     2592  rtems_bdbuf_unlock_cache ();
     2593}
     2594
     2595/**
     2596 * Close the swapout worker threads.
     2597 */
     2598static void
     2599rtems_bdbuf_swapout_workers_close (void)
     2600{
     2601  rtems_chain_node* node;
     2602 
     2603  rtems_bdbuf_lock_cache ();
     2604 
     2605  node = rtems_chain_first (&bdbuf_cache.swapout_workers);
     2606  while (!rtems_chain_is_tail (&bdbuf_cache.swapout_workers, node))
     2607  {
     2608    rtems_bdbuf_swapout_worker* worker = (rtems_bdbuf_swapout_worker*) node;
     2609    worker->enabled = false;
     2610    rtems_event_send (worker->id, RTEMS_BDBUF_SWAPOUT_SYNC);
     2611    node = rtems_chain_next (node);
     2612  }
     2613 
     2614  rtems_bdbuf_unlock_cache ();
     2615}
     2616
     2617/**
      2618 * Body of task which takes care of flushing modified buffers to the disk.
     2619 *
     2620 * @param arg A pointer to the global cache data. Use the global variable and
     2621 *            not this.
     2622 * @return rtems_task Not used.
     2623 */
     2624static rtems_task
     2625rtems_bdbuf_swapout_task (rtems_task_argument arg)
     2626{
     2627  rtems_bdbuf_swapout_transfer transfer;
     2628  uint32_t                     period_in_ticks;
      2629  const uint32_t               period_in_msecs = bdbuf_config.swapout_period;
     2630  uint32_t                     timer_delta;
     2631
     2632  transfer.write_req = rtems_bdbuf_swapout_writereq_alloc ();
     2633  rtems_chain_initialize_empty (&transfer.bds);
     2634  transfer.dev = -1;
     2635
     2636  /*
     2637   * Localise the period.
     2638   */
    21952639  period_in_ticks = RTEMS_MICROSECONDS_TO_TICKS (period_in_msecs * 1000);
    21962640
     
    22002644  timer_delta = period_in_msecs;
    22012645
    2202   while (context->swapout_enabled)
    2203   {
    2204     rtems_event_set out;
     2646  /*
     2647   * Create the worker threads.
     2648   */
     2649  rtems_bdbuf_swapout_workers_open ();
     2650 
     2651  while (bdbuf_cache.swapout_enabled)
     2652  {
     2653    rtems_event_set   out;
     2654    rtems_status_code sc;
    22052655
    22062656    /*
     
    22112661    /*
    22122662     * If we write buffers to any disk perform a check again. We only write a
    2213      * single device at a time and a pool may have more than one devices
     2663     * single device at a time and the cache may have more than one device's
    22142664     * buffers modified waiting to be written.
    22152665     */
     
    22182668    do
    22192669    {
    2220       rtems_bdpool_id pid;
    2221    
    22222670      transfered_buffers = false;
    22232671
    22242672      /*
    2225        * Loop over each pool extacting all the buffers we find for a specific
    2226        * device. The device is the first one we find on a modified list of a
    2227        * pool. Process the sync queue of buffers first.
      2673       * Extract all the buffers we find for a specific device. The device is
     2674       * the first one we find on a modified list. Process the sync queue of
     2675       * buffers first.
    22282676       */
    2229       for (pid = 0; pid < context->npools; pid++)
     2677      if (rtems_bdbuf_swapout_processing (timer_delta,
     2678                                          update_timers,
     2679                                          &transfer))
    22302680      {
    2231         if (rtems_bdbuf_swapout_pool_processing (pid,
    2232                                                  timer_delta,
    2233                                                  update_timers,
    2234                                                  write_req))
    2235         {
    2236           transfered_buffers = true;
    2237         }
     2681        transfered_buffers = true;
    22382682      }
    2239 
     2683     
    22402684      /*
    22412685       * Only update the timers once.
     
    22542698  }
    22552699
    2256   free (write_req);
     2700  rtems_bdbuf_swapout_workers_close ();
     2701 
     2702  free (transfer.write_req);
    22572703
    22582704  rtems_task_delete (RTEMS_SELF);
    22592705}
    22602706
    2261 rtems_status_code
    2262 rtems_bdbuf_find_pool (uint32_t block_size, rtems_bdpool_id *pool)
    2263 {
    2264   rtems_bdbuf_pool* p;
    2265   rtems_bdpool_id   i;
    2266   rtems_bdpool_id   curid = -1;
    2267   bool              found = false;
    2268   uint32_t          cursize = UINT_MAX;
    2269   int               j;
    2270 
    2271   for (j = block_size; (j != 0) && ((j & 1) == 0); j >>= 1);
    2272   if (j != 1)
    2273     return RTEMS_INVALID_SIZE;
    2274 
    2275   for (i = 0; i < rtems_bdbuf_ctx.npools; i++)
    2276   {
    2277     p = rtems_bdbuf_get_pool (i);
    2278     if ((p->blksize >= block_size) &&
    2279         (p->blksize < cursize))
    2280     {
    2281       curid = i;
    2282       cursize = p->blksize;
    2283       found = true;
    2284     }
    2285   }
    2286 
    2287   if (found)
    2288   {
    2289     if (pool != NULL)
    2290       *pool = curid;
    2291     return RTEMS_SUCCESSFUL;
    2292   }
    2293   else
    2294   {
    2295     return RTEMS_NOT_DEFINED;
    2296   }
    2297 }
    2298 
    2299 rtems_status_code rtems_bdbuf_get_pool_info(
    2300   rtems_bdpool_id pool,
    2301   uint32_t *block_size,
    2302   uint32_t *blocks
    2303 )
    2304 {
    2305   if (pool >= rtems_bdbuf_ctx.npools)
    2306     return RTEMS_INVALID_NUMBER;
    2307 
    2308   if (block_size != NULL)
    2309   {
    2310     *block_size = rtems_bdbuf_ctx.pool[pool].blksize;
    2311   }
    2312 
    2313   if (blocks != NULL)
    2314   {
    2315     *blocks = rtems_bdbuf_ctx.pool[pool].nblks;
    2316   }
    2317 
    2318   return RTEMS_SUCCESSFUL;
    2319 }
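
With the pool API above removed, buffers are obtained and written back per device through the public bdbuf calls, and the swapout task and workers shown earlier flush them. A minimal sketch, assuming the dev_t based rtems_bdbuf_read, rtems_bdbuf_release_modified and rtems_bdbuf_syncdev prototypes of this tree; the device and block number are illustrative:

  #include <rtems.h>
  #include <rtems/bdbuf.h>

  /* Hypothetical helper: dirty block 0 of a registered disk and force it out.
   * rtems_bdbuf_release_modified() places the buffer on the modified list the
   * swapout task services; rtems_bdbuf_syncdev() raises sync_active so the
   * hold timers are forced to zero and the write happens immediately. */
  static rtems_status_code
  example_dirty_and_sync (dev_t dev)
  {
    rtems_bdbuf_buffer* bd;
    rtems_status_code   sc;

    sc = rtems_bdbuf_read (dev, 0, &bd);      /* block 0, data read from media */
    if (sc != RTEMS_SUCCESSFUL)
      return sc;

    bd->buffer[0] ^= 0xff;                    /* modify the cached data */

    sc = rtems_bdbuf_release_modified (bd);   /* queue on the modified list */
    if (sc != RTEMS_SUCCESSFUL)
      return sc;

    return rtems_bdbuf_syncdev (dev);         /* wait for the swapout write */
  }
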
  • cpukit/libblock/src/blkdev.c

    rf14a21df r0d15414e  
    3838{
    3939    rtems_libio_rw_args_t *args = arg;
    40     int block_size_log2;
    4140    int block_size;
    4241    char         *buf;
     
    5251        return RTEMS_INVALID_NUMBER;
    5352
    54     block_size_log2 = dd->block_size_log2;
    5553    block_size = dd->block_size;
    5654
     
    5957    args->bytes_moved = 0;
    6058
    61     block = args->offset >> block_size_log2;
    62     blkofs = args->offset & (block_size - 1);
     59    block = args->offset / block_size;
     60    blkofs = args->offset % block_size;
    6361
    6462    while (count > 0)
     
    9896{
    9997    rtems_libio_rw_args_t *args = arg;
    100     int           block_size_log2;
    10198    uint32_t      block_size;
    10299    char         *buf;
     
    113110        return RTEMS_INVALID_NUMBER;
    114111
    115     block_size_log2 = dd->block_size_log2;
    116112    block_size = dd->block_size;
    117113
     
    120116    args->bytes_moved = 0;
    121117
    122     block = args->offset >> block_size_log2;
    123     blkofs = args->offset & (block_size - 1);
     118    block = args->offset / block_size;
     119    blkofs = args->offset % block_size;
    124120
    125121    while (count > 0)
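
The switch from shift and mask to division and modulo above is what allows any block size. A small standalone sketch of the arithmetic with illustrative values:

  #include <stdint.h>
  #include <inttypes.h>
  #include <stdio.h>

  /* 520 is not a power of two, so the old shift/mask arithmetic could not
   * address it, while / and % handle it directly. */
  int main (void)
  {
    uint32_t block_size = 520;               /* bytes per block */
    uint32_t offset     = 1500;              /* byte offset into the device */

    uint32_t block  = offset / block_size;   /* -> 2   */
    uint32_t blkofs = offset % block_size;   /* -> 460 */

    printf ("block %" PRIu32 ", offset in block %" PRIu32 "\n", block, blkofs);
    return 0;
  }
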
  • cpukit/libblock/src/diskdevs.c

    rf14a21df r0d15414e  
    227227)
    228228{
    229     int bs_log2;
    230     int i;
    231229    rtems_disk_device *dd;
    232230    rtems_status_code rc;
    233     rtems_bdpool_id pool;
    234231    rtems_device_major_number major;
    235232    rtems_device_minor_number minor;
    236233
    237234    rtems_filesystem_split_dev_t (dev, major, minor);
    238 
    239 
    240     for (bs_log2 = 0, i = block_size; (i & 1) == 0; i >>= 1, bs_log2++);
    241     if ((bs_log2 < 9) || (i != 1)) /* block size < 512 or not power of 2 */
    242         return RTEMS_INVALID_NUMBER;
    243235
    244236    rc = rtems_semaphore_obtain(diskdevs_mutex, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
     
    247239    diskdevs_protected = true;
    248240
    249     rc = rtems_bdbuf_find_pool(block_size, &pool);
    250     if (rc != RTEMS_SUCCESSFUL)
    251     {
    252         diskdevs_protected = false;
    253         rtems_semaphore_release(diskdevs_mutex);
    254         return rc;
    255     }
    256 
    257241    rc = create_disk(dev, name, &dd);
    258242    if (rc != RTEMS_SUCCESSFUL)
     
    267251    dd->start = 0;
    268252    dd->size = disk_size;
    269     dd->block_size = block_size;
    270     dd->block_size_log2 = bs_log2;
     253    dd->block_size = dd->media_block_size = block_size;
    271254    dd->ioctl = handler;
    272     dd->pool = pool;
    273255
    274256    rc = rtems_io_register_name(name, major, minor);
     
    334316  dd->size = size;
    335317  dd->block_size = pdd->block_size;
    336   dd->block_size_log2 = pdd->block_size_log2;
    337318  dd->ioctl = pdd->ioctl;
    338319
     
    556537    rc = rtems_semaphore_delete(diskdevs_mutex);
    557538
    558     /* XXX bdbuf should be released too! */
    559539    disk_io_initialized = 0;
    560540    return rc;
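
With the power-of-two and 512 byte minimum checks removed, a driver can register a disk with any block size, which also becomes the media_block_size initialised above. A hedged sketch, assuming the rtems_disk_create_phys(dev, block_size, disk_size, handler, name) argument order and the dev_t based ioctl prototype of this tree; my_media_ioctl and the major number are hypothetical:

  #include <rtems.h>
  #include <rtems/diskdevs.h>

  /* Hypothetical ioctl handler; the prototype is assumed from this tree's
   * diskdevs.h. A real driver services RTEMS_BLKIO_REQUEST here. */
  extern int my_media_ioctl (dev_t dev, uint32_t req, void* argp);

  static rtems_status_code
  my_media_register (rtems_device_major_number major)
  {
    dev_t dev = rtems_filesystem_make_dev_t (major, 0);

    /* A 520 byte block size is now accepted. */
    return rtems_disk_create_phys (dev, 520, 4096, my_media_ioctl,
                                   "/dev/mymedia");
  }
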
  • cpukit/libmisc/Makefile.am

    rf14a21df r0d15414e  
    3232
    3333noinst_LIBRARIES += libdummy.a
    34 libdummy_a_SOURCES = dummy/dummy.c
     34libdummy_a_SOURCES = dummy/dummy.c dummy/dummy-networking.c
    3535
    3636## dumpbuf
  • cpukit/libmisc/dummy/dummy.c

    rf14a21df r0d15414e  
    3737#include <rtems/confdefs.h>
    3838
    39 /* Loopback Network Configuration */
    40 #if defined(RTEMS_NETWORKING)
    41   #include <rtems/rtems_bsdnet.h>
    42   #include <sys/socket.h>
    43   #include <netinet/in.h>
    44 
    45   extern int rtems_bsdnet_loopattach(struct rtems_bsdnet_ifconfig *, int);
    46 
    47   static struct rtems_bsdnet_ifconfig loopback_config = {
    48       "lo0",                     /* name */
    49       rtems_bsdnet_loopattach,   /* attach function */
    50       NULL,                      /* link to next interface */
    51       "127.0.0.1",               /* IP address */
    52       "255.0.0.0",               /* IP net mask */
    53       0,                         /* hardware_address */
    54       0, 0, 0, 0,
    55       0, 0, 0,
    56       0
    57   };
    58 
    59   struct rtems_bsdnet_config rtems_bsdnet_config = {
    60       &loopback_config,       /* Network interface */
    61       NULL,                   /* Use fixed network configuration */
    62       0,                      /* Default network task priority */
    63       0,                      /* Default mbuf capacity */
    64       0,                      /* Default mbuf cluster capacity */
    65       "testSystem",           /* Host name */
    66       "nowhere.com",          /* Domain name */
    67       "127.0.0.1",            /* Gateway */
    68       "127.0.0.1",            /* Log host */
    69       {"127.0.0.1" },         /* Name server(s) */
    70       {"127.0.0.1" },         /* NTP server(s) */
    71       1,                      /* sb_efficiency */
    72       0,                      /* udp_tx_buf_size */
    73       0,                      /* udp_rx_buf_size */
    74       0,                      /* tcp_tx_buf_size */
    75       0                       /* tcp_rx_buf_size */
    76   };
    77 #endif
    78 
  • cpukit/sapi/include/confdefs.h

    rf14a21df r0d15414e  
    728728                              RTEMS_BDBUF_SWAPOUT_TASK_BLOCK_HOLD_DEFAULT
    729729  #endif
     730  #ifndef CONFIGURE_SWAPOUT_WORKER_TASKS
     731    #define CONFIGURE_SWAPOUT_WORKER_TASKS \
     732                              RTEMS_BDBUF_SWAPOUT_WORKER_TASKS_DEFAULT
     733  #endif
     734  #ifndef CONFIGURE_SWAPOUT_WORKER_TASK_PRIORITY
     735    #define CONFIGURE_SWAPOUT_WORKER_TASK_PRIORITY \
     736                              RTEMS_BDBUF_SWAPOUT_WORKER_TASK_PRIORITY_DEFAULT
     737  #endif
     738  #ifndef CONFIGURE_BDBUF_CACHE_MEMORY_SIZE
     739    #define CONFIGURE_BDBUF_CACHE_MEMORY_SIZE \
     740                              RTEMS_BDBUF_CACHE_MEMORY_SIZE_DEFAULT
     741  #endif
     742  #ifndef CONFIGURE_BDBUF_BUFFER_MIN_SIZE
     743    #define CONFIGURE_BDBUF_BUFFER_MIN_SIZE \
     744                              RTEMS_BDBUF_BUFFER_MIN_SIZE_DEFAULT
     745  #endif
     746  #ifndef CONFIGURE_BDBUF_BUFFER_MAX_SIZE
     747    #define CONFIGURE_BDBUF_BUFFER_MAX_SIZE \
     748                              RTEMS_BDBUF_BUFFER_MAX_SIZE_DEFAULT
     749  #endif
    730750  #ifdef CONFIGURE_INIT
    731     rtems_bdbuf_config rtems_bdbuf_configuration = {
     751    const rtems_bdbuf_config rtems_bdbuf_configuration = {
    732752      CONFIGURE_BDBUF_MAX_READ_AHEAD_BLOCKS,
    733753      CONFIGURE_BDBUF_MAX_WRITE_BLOCKS,
    734754      CONFIGURE_SWAPOUT_TASK_PRIORITY,
    735755      CONFIGURE_SWAPOUT_SWAP_PERIOD,
    736       CONFIGURE_SWAPOUT_BLOCK_HOLD
     756      CONFIGURE_SWAPOUT_BLOCK_HOLD,
     757      CONFIGURE_SWAPOUT_WORKER_TASKS,
     758      CONFIGURE_SWAPOUT_WORKER_TASK_PRIORITY,
     759      CONFIGURE_BDBUF_CACHE_MEMORY_SIZE,
     760      CONFIGURE_BDBUF_BUFFER_MIN_SIZE,
     761      CONFIGURE_BDBUF_BUFFER_MAX_SIZE
    737762    };
    738763  #endif
    739   #ifndef CONFIGURE_HAS_OWN_BDBUF_TABLE
    740     #ifndef CONFIGURE_BDBUF_BUFFER_COUNT
    741       #define CONFIGURE_BDBUF_BUFFER_COUNT 64
    742     #endif
    743 
    744     #ifndef CONFIGURE_BDBUF_BUFFER_SIZE
    745       #define CONFIGURE_BDBUF_BUFFER_SIZE 512
    746     #endif
    747     #ifdef CONFIGURE_INIT
    748       rtems_bdbuf_pool_config rtems_bdbuf_pool_configuration[] = {
    749         {CONFIGURE_BDBUF_BUFFER_SIZE, CONFIGURE_BDBUF_BUFFER_COUNT, NULL}
    750       };
    751       size_t rtems_bdbuf_pool_configuration_size =
    752         (sizeof(rtems_bdbuf_pool_configuration) /
    753          sizeof(rtems_bdbuf_pool_configuration[0]));
    754     #endif /* CONFIGURE_INIT */
    755   #endif /* CONFIGURE_HAS_OWN_BDBUF_TABLE        */
     764  #if defined(CONFIGURE_HAS_OWN_BDBUF_TABLE) || \
     765      defined(CONFIGURE_BDBUF_BUFFER_SIZE) || \
     766      defined(CONFIGURE_BDBUF_BUFFER_COUNT)
     767    #error BDBUF Cache does not use a buffer configuration table. Please remove.
     768  #endif
    756769#endif /* CONFIGURE_APPLICATION_NEEDS_LIBBLOCK */
    757770
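
An application picks up the new swapout and cache options by defining them before including confdefs.h. A minimal sketch with illustrative values; any option left undefined falls back to its RTEMS_BDBUF_*_DEFAULT, and the usual task and driver configuration a real application needs is omitted:

  #define CONFIGURE_APPLICATION_NEEDS_LIBBLOCK

  #define CONFIGURE_SWAPOUT_WORKER_TASKS          2
  #define CONFIGURE_SWAPOUT_WORKER_TASK_PRIORITY  15
  #define CONFIGURE_BDBUF_CACHE_MEMORY_SIZE       (512 * 1024)
  #define CONFIGURE_BDBUF_BUFFER_MIN_SIZE         512
  #define CONFIGURE_BDBUF_BUFFER_MAX_SIZE         4096

  #define CONFIGURE_INIT
  #include <rtems/confdefs.h>
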
  • cpukit/sapi/inline/rtems/chain.inl

    rf14a21df r0d15414e  
    9797 *  @param[in] the_chain is the chain to be operated upon.
    9898 *
    99  *  @return This method returns the permanent head node of the chain.
     99 *  @return This method returns the permanent node of the chain.
    100100 */
    101101RTEMS_INLINE_ROUTINE rtems_chain_node *rtems_chain_head(
     
    120120{
    121121  return _Chain_Tail( the_chain );
     122}
     123
     124/**
     125 *  @brief Return pointer to Chain's First node after the permanent head.
     126 *
     127 *  This function returns a pointer to the first node on the chain after the
     128 *  head.
     129 *
     130 *  @param[in] the_chain is the chain to be operated upon.
     131 *
     132 *  @return This method returns the first node of the chain.
     133 */
     134RTEMS_INLINE_ROUTINE rtems_chain_node *rtems_chain_first(
     135  rtems_chain_control *the_chain
     136)
     137{
     138  return _Chain_First( the_chain );
     139}
     140
     141/**
     142 *  @brief Return pointer to Chain's Last node before the permanent tail.
     143 *
     144 *  This function returns a pointer to the last node on the chain just before
     145 *  the tail.
     146 *
     147 *  @param[in] the_chain is the chain to be operated upon.
     148 *
     149 *  @return This method returns the last node of the chain.
     150 */
     151RTEMS_INLINE_ROUTINE rtems_chain_node *rtems_chain_last(
     152  rtems_chain_control *the_chain
     153)
     154{
     155  return _Chain_Last( the_chain );
     156}
     157
     158/**
      159 *  @brief Return pointer to the next node from this node
     160 *
     161 *  This function returns a pointer to the next node after this node.
     162 *
     163 *  @param[in] the_node is the node to be operated upon.
     164 *
     165 *  @return This method returns the next node on the chain.
     166 */
     167RTEMS_INLINE_ROUTINE rtems_chain_node *rtems_chain_next(
     168  rtems_chain_node *the_node
     169)
     170{
     171  return _Chain_Next( the_node );
    122172}
    123173
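
The new directive level accessors allow a chain to be walked without touching the Chain_Control internals. A minimal sketch, assuming a hypothetical user node type whose first member is the chain node:

  #include <rtems/chain.h>

  typedef struct {
    rtems_chain_node node;   /* must be the first member */
    int              value;
  } my_node_t;

  /* Sum the payload of every node between the head and the tail. */
  static int
  sum_chain (rtems_chain_control* chain)
  {
    rtems_chain_node* node = rtems_chain_first (chain);
    int               sum  = 0;

    while (!rtems_chain_is_tail (chain, node))
    {
      sum += ((my_node_t*) node)->value;
      node = rtems_chain_next (node);
    }
    return sum;
  }
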
  • cpukit/score/include/rtems/score/object.h

    rf14a21df r0d15414e  
    7474typedef uint8_t    Objects_Maximum;
    7575
    76 #define OBJECTS_INDEX_START_BIT  0
    77 #define OBJECTS_API_START_BIT    8
    78 #define OBJECTS_CLASS_START_BIT 11
    79 
    80 #define OBJECTS_INDEX_MASK      (Objects_Id)0x00ff
    81 #define OBJECTS_API_MASK        (Objects_Id)0x0700
    82 #define OBJECTS_CLASS_MASK      (Objects_Id)0xF800
    83 
    84 #define OBJECTS_INDEX_VALID_BITS  (Objects_Id)0x00ff
    85 #define OBJECTS_API_VALID_BITS    (Objects_Id)0x0007
     76#define OBJECTS_INDEX_START_BIT  0U
     77#define OBJECTS_API_START_BIT    8U
     78#define OBJECTS_CLASS_START_BIT 11U
     79
     80#define OBJECTS_INDEX_MASK      (Objects_Id)0x00ffU
     81#define OBJECTS_API_MASK        (Objects_Id)0x0700U
     82#define OBJECTS_CLASS_MASK      (Objects_Id)0xF800U
     83
     84#define OBJECTS_INDEX_VALID_BITS  (Objects_Id)0x00ffU
     85#define OBJECTS_API_VALID_BITS    (Objects_Id)0x0007U
    8686/* OBJECTS_NODE_VALID_BITS should not be used with 16 bit Ids */
    87 #define OBJECTS_CLASS_VALID_BITS  (Objects_Id)0x001f
    88 
    89 #define OBJECTS_UNLIMITED_OBJECTS 0x8000
     87#define OBJECTS_CLASS_VALID_BITS  (Objects_Id)0x001fU
     88
     89#define OBJECTS_UNLIMITED_OBJECTS 0x8000U
    9090
    9191#define OBJECTS_ID_INITIAL_INDEX  (0)
     
    114114 *  the object Id.
    115115 */
    116 #define OBJECTS_INDEX_START_BIT  0
     116#define OBJECTS_INDEX_START_BIT  0U
    117117
    118118
     
    121121 *  the object Id.
    122122 */
    123 #define OBJECTS_NODE_START_BIT  16
     123#define OBJECTS_NODE_START_BIT  16U
    124124
    125125/**
     
    127127 *  the object Id.
    128128 */
    129 #define OBJECTS_API_START_BIT   24
     129#define OBJECTS_API_START_BIT   24U
    130130
    131131/**
     
    133133 *  the object Id.
    134134 */
    135 #define OBJECTS_CLASS_START_BIT 27
     135#define OBJECTS_CLASS_START_BIT 27U
    136136
    137137/**
    138138 *  This mask is used to extract the index portion of an object Id.
    139139 */
    140 #define OBJECTS_INDEX_MASK      (Objects_Id)0x0000ffff
     140#define OBJECTS_INDEX_MASK      (Objects_Id)0x0000ffffU
    141141
    142142/**
    143143 *  This mask is used to extract the node portion of an object Id.
    144144 */
    145 #define OBJECTS_NODE_MASK       (Objects_Id)0x00ff0000
     145#define OBJECTS_NODE_MASK       (Objects_Id)0x00ff0000U
    146146
    147147/**
    148148 *  This mask is used to extract the API portion of an object Id.
    149149 */
    150 #define OBJECTS_API_MASK        (Objects_Id)0x07000000
     150#define OBJECTS_API_MASK        (Objects_Id)0x07000000U
    151151
    152152/**
    153153 *  This mask is used to extract the class portion of an object Id.
    154154 */
    155 #define OBJECTS_CLASS_MASK      (Objects_Id)0xf8000000
     155#define OBJECTS_CLASS_MASK      (Objects_Id)0xf8000000U
    156156
    157157/**
     
    159159 *  are set after shifting to extract the index portion of an object Id.
    160160 */
    161 #define OBJECTS_INDEX_VALID_BITS  (Objects_Id)0x0000ffff
     161#define OBJECTS_INDEX_VALID_BITS  (Objects_Id)0x0000ffffU
    162162
    163163/**
     
    165165 *  are set after shifting to extract the node portion of an object Id.
    166166 */
    167 #define OBJECTS_NODE_VALID_BITS   (Objects_Id)0x000000ff
     167#define OBJECTS_NODE_VALID_BITS   (Objects_Id)0x000000ffU
    168168
    169169/**
     
    171171 *  are set after shifting to extract the API portion of an object Id.
    172172 */
    173 #define OBJECTS_API_VALID_BITS    (Objects_Id)0x00000007
     173#define OBJECTS_API_VALID_BITS    (Objects_Id)0x00000007U
    174174
    175175/**
     
    177177 *  are set after shifting to extract the class portion of an object Id.
    178178 */
    179 #define OBJECTS_CLASS_VALID_BITS  (Objects_Id)0x0000001f
     179#define OBJECTS_CLASS_VALID_BITS  (Objects_Id)0x0000001fU
    180180
    181181/**
     
    183183 *  table when specifying the number of configured objects.
    184184 */
    185 #define OBJECTS_UNLIMITED_OBJECTS 0x80000000
     185#define OBJECTS_UNLIMITED_OBJECTS 0x80000000U
    186186
    187187/**
     
    193193 *  This is the highest value for the index portion of an object Id.
    194194 */
    195 #define OBJECTS_ID_FINAL_INDEX    (0xffff)
     195#define OBJECTS_ID_FINAL_INDEX    (0xffffU)
    196196#endif
    197197
     
    337337  bool              auto_extend;
    338338  /** This is the number of objects in a block. */
    339   uint32_t          allocation_size;
     339  Objects_Maximum   allocation_size;
    340340  /** This is the size in bytes of each object instance. */
    341341  size_t            size;
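
The start bit and valid bit macros above fully describe the 32-bit Id layout. An illustrative decode, equivalent to what _Objects_Get_index and the related helpers do, assuming a build that uses 32-bit object Ids:

  #include <rtems/score/object.h>
  #include <rtems/bspIo.h>

  /* Split a 32-bit Objects_Id into its fields with the shifts and masks
   * defined above; for illustration only. */
  static void
  decode_id (Objects_Id id)
  {
    unsigned index = (id >> OBJECTS_INDEX_START_BIT) & OBJECTS_INDEX_VALID_BITS;
    unsigned node  = (id >> OBJECTS_NODE_START_BIT)  & OBJECTS_NODE_VALID_BITS;
    unsigned api   = (id >> OBJECTS_API_START_BIT)   & OBJECTS_API_VALID_BITS;
    unsigned class = (id >> OBJECTS_CLASS_START_BIT) & OBJECTS_CLASS_VALID_BITS;

    printk ("id 0x%08x: class %u api %u node %u index %u\n",
            (unsigned) id, class, api, node, index);
  }
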
  • cpukit/score/inline/rtems/score/chain.inl

    rf14a21df r0d15414e  
    8484/** @brief Return pointer to Chain Head
    8585 *
    86  *  This function returns a pointer to the first node on the chain.
     86 *  This function returns a pointer to the head node on the chain.
    8787 *
    8888 *  @param[in] the_chain is the chain to be operated upon.
     
    110110{
    111111   return (Chain_Node *) &the_chain->permanent_null;
     112}
     113
     114/** @brief Return pointer to Chain's First node
     115 *
     116 *  This function returns a pointer to the first node on the chain after the
     117 *  head.
     118 *
     119 *  @param[in] the_chain is the chain to be operated upon.
     120 *
     121 *  @return This method returns the first node of the chain.
     122 */
     123RTEMS_INLINE_ROUTINE Chain_Node *_Chain_First(
     124  Chain_Control *the_chain
     125)
     126{
     127  return the_chain->first;
     128}
     129
     130/** @brief Return pointer to Chain's Last node
     131 *
     132 *  This function returns a pointer to the last node on the chain just before
     133 *  the tail.
     134 *
     135 *  @param[in] the_chain is the chain to be operated upon.
     136 *
     137 *  @return This method returns the last node of the chain.
     138 */
     139RTEMS_INLINE_ROUTINE Chain_Node *_Chain_Last(
     140  Chain_Control *the_chain
     141)
     142{
     143  return the_chain->last;
     144}
     145
      146/** @brief Return pointer to the next node from this node
     147 *
     148 *  This function returns a pointer to the next node after this node.
     149 *
     150 *  @param[in] the_node is the node to be operated upon.
     151 *
     152 *  @return This method returns the next node on the chain.
     153 */
     154RTEMS_INLINE_ROUTINE Chain_Node *_Chain_Next(
     155  Chain_Node *the_node
     156)
     157{
     158  return the_node->next;
     159}
     160
      161/** @brief Return pointer to the previous node from this node
     162 *
     163 *  This function returns a pointer to the previous node on this chain.
     164 *
     165 *  @param[in] the_node is the node to be operated upon.
     166 *
     167 *  @return This method returns the previous node on the chain.
     168 */
     169RTEMS_INLINE_ROUTINE Chain_Node *_Chain_Previous(
     170  Chain_Node *the_node
     171)
     172{
     173  return the_node->previous;
    112174}
    113175
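
The score level accessors support walking a chain in either direction; the swapout code earlier in this changeset uses the same backwards pattern to keep its transfer list sorted. A minimal sketch:

  #include <rtems/score/chain.h>

  /* Visit every node from the last one back to the first; for illustration
   * only, the processing step is left empty. */
  static void
  walk_backwards (Chain_Control* the_chain)
  {
    Chain_Node* node = _Chain_Last (the_chain);

    while (!_Chain_Is_head (the_chain, node))
    {
      /* process the node here */
      node = _Chain_Previous (node);
    }
  }
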
  • cpukit/score/inline/rtems/score/object.inl

    rf14a21df r0d15414e  
    112112)
    113113{
    114   return (id >> OBJECTS_INDEX_START_BIT) & OBJECTS_INDEX_VALID_BITS;
     114  return
     115    (Objects_Maximum)((id >> OBJECTS_INDEX_START_BIT) &
     116                                          OBJECTS_INDEX_VALID_BITS);
    115117}
    116118
     
    225227RTEMS_INLINE_ROUTINE void _Objects_Set_local_object(
    226228  Objects_Information *information,
    227   uint16_t             index,
     229  uint32_t             index,
    228230  Objects_Control     *the_object
    229231)