@c
@c COPYRIGHT (c) 2014.
@c On-Line Applications Research Corporation (OAR).
@c All rights reserved.
@c

@chapter Symmetric Multiprocessing Services

@section Introduction

The Symmetric Multiprocessing (SMP) support of RTEMS @value{VERSION} is
available on

@itemize @bullet
@item ARM,
@item PowerPC, and
@item SPARC.
@end itemize

It must be explicitly enabled via the @code{--enable-smp} configure command
line option.  To enable SMP in the application configuration see
@ref{Configuring a System Enable SMP Support for Applications}.  The default
scheduler for SMP applications supports up to 32 processors and is a global
fixed priority scheduler; see also @ref{Configuring a System Configuring
Clustered Schedulers}.  For example applications see
@file{testsuites/smptests}.

This chapter describes the services related to Symmetric Multiprocessing
provided by RTEMS.

The application level services currently provided are:

@itemize @bullet
@item @code{rtems_get_processor_count} - Get processor count
@item @code{rtems_get_current_processor} - Get current processor index
@item @code{rtems_scheduler_ident} - Get ID of a scheduler
@item @code{rtems_scheduler_get_processor_set} - Get processor set of a scheduler
@item @code{rtems_task_get_scheduler} - Get scheduler of a task
@item @code{rtems_task_set_scheduler} - Set scheduler of a task
@item @code{rtems_task_get_affinity} - Get task processor affinity
@item @code{rtems_task_set_affinity} - Set task processor affinity
@end itemize
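
Each directive is documented in detail later in this chapter.  As a minimal
sketch (target-only: it assumes an RTEMS SMP build and cannot run on a host
system), a task can query the processor topology like this:

```c
#include <rtems.h>
#include <inttypes.h>
#include <stdio.h>

/* Report how many processors are configured and which one executes us.
 * Without processor affinity the current processor index may already be
 * stale by the time printf() executes, since the task may migrate. */
void report_processor( void )
{
  uint32_t count = rtems_get_processor_count();
  uint32_t index = rtems_get_current_processor();

  printf( "executing on processor %" PRIu32 " of %" PRIu32 "\n",
          index, count );
}
```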

@c
@c
@c
@section Background

@subsection Uniprocessor versus SMP Parallelism

Uniprocessor systems have long been used in embedded systems.  In this hardware
model, there are some system execution characteristics which have long been
taken for granted:

@itemize @bullet
@item one task executes at a time
@item hardware events result in interrupts
@end itemize

There is no true parallelism.  Even when interrupts appear to occur
at the same time, they are processed in largely a serial fashion.
This is true even when the interrupt service routines are allowed to
nest.  From a tasking viewpoint, it is the responsibility of the real-time
operating system to simulate parallelism by switching between tasks.
These task switches occur in response to hardware interrupt events and explicit
application events such as blocking for a resource or delaying.

With symmetric multiprocessing, the presence of multiple processors
allows for true concurrency and provides for cost-effective performance
improvements.  Uniprocessors tend to increase performance by increasing
clock speed and complexity.  This tends to lead to hot, power hungry
microprocessors which are poorly suited for many embedded applications.

The true concurrency is in sharp contrast to the single task and
interrupt model of uniprocessor systems.  This results in a fundamental
change to the uniprocessor system characteristics listed above.  Developers
are faced with a different set of characteristics which, in turn, break
some existing assumptions and result in new challenges.  In an SMP system
with N processors, these are the new execution characteristics:

@itemize @bullet
@item N tasks execute in parallel
@item hardware events result in interrupts
@end itemize

There is true parallelism with a task executing on each processor and
the possibility of interrupts occurring on each processor.  Thus in contrast
to there being one task and one interrupt to consider on a uniprocessor,
there are N tasks and potentially N simultaneous interrupts to consider
on an SMP system.

This increase in hardware complexity and presence of true parallelism
results in the application developer needing to be even more cautious
about mutual exclusion and shared data access than in a uniprocessor
embedded system.  Race conditions that never or rarely happened when an
application executed on a uniprocessor system become much more likely
due to multiple threads executing in parallel.  On a uniprocessor system,
these race conditions would only happen when a task switch occurred at
just the wrong moment.  Now there are N-1 tasks executing in parallel
all the time and this results in many more opportunities for small
windows in critical sections to be hit.

@subsection Task Affinity

@cindex task affinity
@cindex thread affinity

RTEMS provides services to manipulate the affinity of a task.  Affinity
is used to specify the subset of processors in an SMP system on which
a particular task can execute.

By default, tasks have an affinity which allows them to execute on any
available processor.

Task affinity is a possible feature to be supported by SMP-aware
schedulers.  However, only a subset of the available schedulers support
affinity.  Although the behavior is scheduler specific, if the scheduler
does not support affinity, it is likely to ignore all attempts to set
affinity.

@subsection Task Migration

@cindex task migration
@cindex thread migration

With more than one processor in the system tasks can migrate from one processor
to another.  There are three reasons why tasks migrate in RTEMS.

@itemize @bullet
@item The scheduler changes explicitly via @code{rtems_task_set_scheduler()} or
similar directives.
@item The task resumes execution after a blocking operation.  On a priority
based scheduler it will evict the lowest priority task currently assigned to a
processor in the processor set managed by the scheduler instance.
@item The task moves temporarily to another scheduler instance due to locking
protocols like @cite{Migratory Priority Inheritance} or the
@cite{Multiprocessor Resource Sharing Protocol}.
@end itemize

Task migration should be avoided so that the working set of a task can stay on
the most local cache level.

The current implementation of task migration in RTEMS has some implications
with respect to the interrupt latency.  It is crucial to preserve the system
invariant that a task can execute on at most one processor in the system at a
time.  This is accomplished with a boolean indicator in the task context.  The
processor architecture specific low-level task context switch code will mark
that a task context is no longer executing and wait until the heir context has
stopped execution before it restores the heir context and resumes execution of
the heir task.  So there is one point in time in which a processor is without a
task.  This is essential to avoid cyclic dependencies in case multiple tasks
migrate at once.  Otherwise some supervising entity would be necessary to
prevent livelocks.  Such a global supervisor would lead to scalability
problems, so this approach is not used.  Currently the thread dispatch is
performed with interrupts disabled.  So in case the heir task is currently
executing on another processor, this prolongs the time of disabled interrupts,
since one processor has to wait for another processor to make progress.

It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted task on its stack.  In case a
task is marked as not executing, we must not use its task stack to store such
an interrupt context.  We cannot use the heir stack before it has stopped
execution on another processor.  So if we enable interrupts during this
transition, we have to provide an alternative task independent stack for this
time frame.  This issue needs further investigation.

@subsection Clustered Scheduling

We have clustered scheduling in case the set of processors of a system is
partitioned into non-empty pairwise-disjoint subsets.  These subsets are called
clusters.  Clusters with a cardinality of one are partitions.  Each cluster is
owned by exactly one scheduler instance.

Clustered scheduling helps to control the worst-case latencies in
multi-processor systems, see @cite{Brandenburg, Björn B.: Scheduling and
Locking in Multiprocessor Real-Time Operating Systems.  PhD thesis, 2011.
@uref{http://www.cs.unc.edu/~bbb/diss/brandenburg-diss.pdf}}.  The goal is to
reduce the amount of shared state in the system and thus prevent lock
contention.  Modern multi-processor systems tend to have several layers of data
and instruction caches.  With clustered scheduling it is possible to honour the
cache topology of a system and thus avoid expensive cache synchronization
traffic.  It is easy to implement.  The problem is to provide synchronization
primitives for inter-cluster synchronization (more than one cluster is involved
in the synchronization process).  In RTEMS there are currently four means
available

@itemize @bullet
@item events,
@item message queues,
@item semaphores using the @ref{Semaphore Manager Priority Inheritance}
protocol (priority boosting), and
@item semaphores using the @ref{Semaphore Manager Multiprocessor Resource
Sharing Protocol} (MrsP).
@end itemize

The clustered scheduling approach enables separation of functions with
real-time requirements and functions that profit from fairness and high
throughput, provided the scheduler instances are fully decoupled and adequate
inter-cluster synchronization primitives are used.  This is work in progress.

For the configuration of clustered schedulers see @ref{Configuring a System
Configuring Clustered Schedulers}.

To set the scheduler of a task see @ref{Symmetric Multiprocessing Services
SCHEDULER_IDENT - Get ID of a scheduler} and @ref{Symmetric Multiprocessing
Services TASK_SET_SCHEDULER - Set scheduler of a task}.
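
As a target-only sketch (the scheduler name @code{WRK0} is a hypothetical
example and must match a name given in the clustered scheduler configuration
of the application), a task could move itself to another scheduler instance
like this:

```c
#include <assert.h>
#include <rtems.h>

/* Move the calling task to the scheduler instance named "WRK0".  The
 * name is an example only; it must be defined by the application
 * configuration. */
void move_to_worker_scheduler( void )
{
  rtems_status_code sc;
  rtems_id          scheduler_id;

  sc = rtems_scheduler_ident(
    rtems_build_name( 'W', 'R', 'K', '0' ),
    &scheduler_id
  );
  assert( sc == RTEMS_SUCCESSFUL );

  sc = rtems_task_set_scheduler( RTEMS_SELF, scheduler_id );
  assert( sc == RTEMS_SUCCESSFUL );
}
```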

@subsection Task Priority Queues

Due to the support for clustered scheduling the task priority queues need
special attention.  It makes no sense to compare the priority values of two
different scheduler instances.  Thus, it is not possible to simply use one
plain priority queue for tasks of different scheduler instances.

One solution to this problem is to use two levels of queues.  The top level
queue provides FIFO ordering and contains priority queues.  Each priority queue
is associated with a scheduler instance and contains only tasks of this
scheduler instance.  Tasks are enqueued in the priority queue corresponding to
their scheduler instance.  In case this priority queue was empty, then it is
appended to the FIFO.  To dequeue a task the highest priority task of the first
priority queue in the FIFO is selected.  Then the first priority queue is
removed from the FIFO.  In case the previously first priority queue is not
empty, then it is appended to the FIFO.  So there is FIFO fairness with respect
to the highest priority task of each scheduler instance.  See also
@cite{Brandenburg, Björn B.: A fully preemptive multiprocessor semaphore
protocol for latency-sensitive real-time applications.  In Proceedings of the
25th Euromicro Conference on Real-Time Systems (ECRTS 2013), pages 292--302,
2013.  @uref{http://www.mpi-sws.org/~bbb/papers/pdf/ecrts13b.pdf}}.

Such a two level queue may need a considerable amount of memory if fast enqueue
and dequeue operations are desired (depends on the scheduler instance count).
To mitigate this problem an approach from the FreeBSD kernel was implemented in
RTEMS.  We have the invariant that a task can be enqueued on at most one task
queue.  Thus, we need only as many queues as we have tasks.  Each task is
equipped with a spare task queue which it can give to an object on demand.  The
task queue uses a dedicated memory space independent of the other memory used
for the task itself.  In case a task needs to block, then there are two options

@itemize @bullet
@item the object already has a task queue, then the task enqueues itself to
this already present queue and the spare task queue of the task is added to a
list of free queues for this object, or
@item otherwise, the queue of the task is given to the object and the task
enqueues itself to this queue.
@end itemize

In case the task is dequeued, then there are two options

@itemize @bullet
@item the task is the last task on the queue, then it removes this queue from
the object and reclaims it for its own purpose, or
@item otherwise, the task removes one queue from the free list of the
object and reclaims it for its own purpose.
@end itemize

Since there are usually more objects than tasks, this actually reduces the
memory demands.  In addition the objects contain only a pointer to the task
queue structure.  This helps to hide implementation details and makes it
possible to use self-contained synchronization objects in Newlib and GCC (C++
and OpenMP run-time support).

@subsection Scheduler Helping Protocol

The scheduler provides a helping protocol to support locking protocols like
@cite{Migratory Priority Inheritance} or the @cite{Multiprocessor Resource
Sharing Protocol}.  Each ready task can use at least one scheduler node at a
time to gain access to a processor.  Each scheduler node has an owner, a user
and an optional idle task.  The owner of a scheduler node is determined at task
creation and never changes during the life time of a scheduler node.  The user
of a scheduler node may change due to the scheduler helping protocol.  A
scheduler node is in one of the four scheduler help states:

@table @dfn

@item help yourself

This scheduler node is solely used by the owner task.  This task owns no
resources using a helping protocol and thus does not take part in the scheduler
helping protocol.  No help will be provided for other tasks.

@item help active owner

This scheduler node is owned by a task actively owning a resource and can be
used to help out tasks.

In case this scheduler node changes its state from ready to scheduled and the
task executes using another node, then an idle task will be provided as a user
of this node to temporarily execute on behalf of the owner task.  Thus lower
priority tasks are denied access to the processors of this scheduler instance.

In case a task actively owning a resource performs a blocking operation, then
an idle task will be used also in case this node is in the scheduled state.

@item help active rival

This scheduler node is owned by a task actively obtaining a resource currently
owned by another task and can be used to help out tasks.

The task owning this node is ready and will give away its processor in case the
task owning the resource asks for help.

@item help passive

This scheduler node is owned by a task obtaining a resource currently owned by
another task and can be used to help out tasks.

The task owning this node is blocked.

@end table

The following scheduler operations return a task in need of help

@itemize @bullet
@item unblock,
@item change priority,
@item yield, and
@item ask for help.
@end itemize

A task in need of help is a task that encounters a scheduler state change from
scheduled to ready (this is a pre-emption by a higher priority task) or a task
that cannot be scheduled in an unblock operation.  Such a task can ask tasks
which depend on resources owned by this task for help.

In case it is not possible to schedule a task in need of help, then the
scheduler nodes available for the task will be placed into the set of ready
scheduler nodes of the corresponding scheduler instances.  Once a state change
from ready to scheduled happens for one of these scheduler nodes, it will be
used to schedule the task in need of help.

The ask for help scheduler operation is used to help tasks in need of help
returned by the operations mentioned above.  This operation is also used in
case the root of a resource sub-tree owned by a task changes.

The run-time of the ask for help procedures depends on the size of the resource
tree of the task needing help and other resource trees in case tasks in need
of help are produced during this operation.  Thus the worst-case latency in
the system depends on the maximum resource tree size of the application.

@subsection Critical Section Techniques and SMP

As discussed earlier, SMP systems have opportunities for true parallelism
which was not possible on uniprocessor systems.  Consequently, multiple
techniques that provided adequate critical sections on uniprocessor
systems are unsafe on SMP systems.  In this section, some of these
unsafe techniques will be discussed.

In general, applications must use proper operating system provided mutual
exclusion mechanisms to ensure correct behavior.  This primarily means
the use of binary semaphores or mutexes to implement critical sections.

@subsubsection Disable Interrupts and Interrupt Locks

A low overhead means to ensure mutual exclusion in uni-processor configurations
is to disable interrupts around a critical section.  This is commonly used in
device driver code and throughout the operating system core.  On SMP
configurations, however, disabling the interrupts on one processor has no
effect on other processors.  So, this is insufficient to ensure system wide
mutual exclusion.  The macros
@itemize @bullet
@item @code{rtems_interrupt_disable()},
@item @code{rtems_interrupt_enable()}, and
@item @code{rtems_interrupt_flush()}
@end itemize
are disabled on SMP configurations and their use will lead to compiler warnings
and linker errors.  In the unlikely case that interrupts must be disabled on
the current processor, then the
@itemize @bullet
@item @code{rtems_interrupt_local_disable()}, and
@item @code{rtems_interrupt_local_enable()}
@end itemize
macros are now available in all configurations.

Since disabling of interrupts is not enough to ensure system wide mutual
exclusion on SMP, a new low-level synchronization primitive was added: the
interrupt locks.  They are a simple API layer on top of the SMP locks used for
low-level synchronization in the operating system core.  Currently they are
implemented as a ticket lock.  On uni-processor configurations they degenerate
to simple interrupt disable/enable sequences.  It is disallowed to acquire a
single interrupt lock in a nested way.  This will result in an infinite loop
with interrupts disabled.  While converting legacy code to interrupt locks,
care must be taken to avoid this situation.

@example
@group
void legacy_code_with_interrupt_disable_enable( void )
@{
  rtems_interrupt_level level;

  rtems_interrupt_disable( level );
  /* Some critical stuff */
  rtems_interrupt_enable( level );
@}

RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" )

void smp_ready_code_with_interrupt_lock( void )
@{
  rtems_interrupt_lock_context lock_context;

  rtems_interrupt_lock_acquire( &lock, &lock_context );
  /* Some critical stuff */
  rtems_interrupt_lock_release( &lock, &lock_context );
@}
@end group
@end example

The @code{rtems_interrupt_lock} structure is empty on uni-processor
configurations.  Empty structures have a different size in C
(implementation-defined, zero in case of GCC) and C++ (implementation-defined
non-zero value, one in case of GCC).  Thus the
@code{RTEMS_INTERRUPT_LOCK_DECLARE()}, @code{RTEMS_INTERRUPT_LOCK_DEFINE()},
@code{RTEMS_INTERRUPT_LOCK_MEMBER()}, and
@code{RTEMS_INTERRUPT_LOCK_REFERENCE()} macros are provided to ensure ABI
compatibility.

@subsubsection Highest Priority Task Assumption

On a uniprocessor system, it is safe to assume that when the highest
priority task in an application executes, it will execute without being
preempted until it voluntarily blocks.  Interrupts may occur while it is
executing, but there will be no context switch to another task unless
the highest priority task voluntarily initiates it.

Given the assumption that no other tasks will have their execution
interleaved with the highest priority task, it is possible for this
task to be constructed such that it does not need to acquire a binary
semaphore or mutex for protected access to shared data.

In an SMP system, it cannot be assumed that only a single task is
executing.  It should be assumed that every processor is executing another
application task.  Further, those tasks will be ones which would not have
been executed in a uniprocessor configuration and should be assumed to
have data synchronization conflicts with what was formerly the highest
priority task which executed without conflict.

@subsubsection Disable Preemption

On a uniprocessor system, disabling preemption in a task is very similar
to making the highest priority task assumption.  While preemption is
disabled, no task context switches will occur unless the task initiates
them voluntarily.  But on an SMP system, just as with the highest priority
task assumption, there are N-1 other processors also running tasks.  Thus
the assumption that no other tasks will run while the task has preemption
disabled is violated.

@subsection Task Unique Data and SMP

Per task variables are a service commonly provided by real-time operating
systems for application use.  They work by allowing the application
to specify a location in memory (typically a @code{void *}) which is
logically added to the context of a task.  On each task switch, the
location in memory is stored and each task can have a unique value in
the same memory location.  This memory location is directly accessed as a
variable in a program.

This works well in a uniprocessor environment because there is one task
executing and one memory location containing a task-specific value.  But
this paradigm for providing task unique data values is fundamentally
broken on an SMP system because there are always N tasks executing.  With
only one location in memory, N-1 tasks will not have the correct value.

@subsubsection Classic API Per Task Variables

The Classic API provides three directives to support per task variables.
These are:

@itemize @bullet
@item @code{@value{DIRPREFIX}task_variable_add} - Associate per task variable
@item @code{@value{DIRPREFIX}task_variable_get} - Obtain value of a per task variable
@item @code{@value{DIRPREFIX}task_variable_delete} - Remove per task variable
@end itemize

As task variables are unsafe for use on SMP systems, the use of these services
must be eliminated in all software that is to be used in an SMP environment.
The task variables API is disabled on SMP.  Its use will lead to compile-time
and link-time errors.  It is recommended that the application developer
consider the use of POSIX Keys or Thread Local Storage (TLS).  POSIX Keys are
available in all RTEMS configurations.  For the availability of TLS on a
particular architecture please consult the @cite{RTEMS CPU Architecture
Supplement}.
---|
| 482 | |
---|
| 483 | The only remaining user of task variables in the RTEMS code base is the Ada |
---|
| 484 | support. So basically Ada is not available on RTEMS SMP. |
---|
[89e72a80] | 485 | |
---|
[e1769f27] | 486 | @subsection OpenMP |
---|
| 487 | |
---|
| 488 | OpenMP support for RTEMS is available via the GCC provided libgomp. There is |
---|
| 489 | libgomp support for RTEMS in the POSIX configuration of libgomp since GCC 4.9 |
---|
| 490 | (requires a Newlib snapshot after 2015-03-12). In GCC 6.1 or later (requires a |
---|
| 491 | Newlib snapshot after 2015-07-30 for <sys/lock.h> provided self-contained |
---|
| 492 | synchronization objects) there is a specialized libgomp configuration for RTEMS |
---|
| 493 | which offers a significantly better performance compared to the POSIX |
---|
| 494 | configuration of libgomp. In addition application configurable thread pools |
---|
| 495 | for each scheduler instance are available in GCC 6.1 or later. |
---|
| 496 | |
---|
| 497 | The run-time configuration of libgomp is done via environment variables |
---|
| 498 | documented in the @uref{https://gcc.gnu.org/onlinedocs/libgomp/, libgomp |
---|
| 499 | manual}. The environment variables are evaluated in a constructor function |
---|
| 500 | which executes in the context of the first initialization task before the |
---|
| 501 | actual initialization task function is called (just like a global C++ |
---|
| 502 | constructor). To set application specific values, a higher priority |
---|
| 503 | constructor function must be used to set up the environment variables. |
---|
| 504 | |
---|
| 505 | @example |
---|
| 506 | @group |
---|
| 507 | #include <stdlib.h> |
---|
| 508 | |
---|
| 509 | void __attribute__((constructor(1000))) config_libgomp( void ) |
---|
| 510 | @{ |
---|
| 511 | setenv( "OMP_DISPLAY_ENV", "VERBOSE", 1 ); |
---|
| 512 | setenv( "GOMP_SPINCOUNT", "30000", 1 ); |
---|
| 513 | setenv( "GOMP_RTEMS_THREAD_POOLS", "1$2@@SCHD", 1 ); |
---|
| 514 | @} |
---|
| 515 | @end group |
---|
| 516 | @end example |
---|
| 517 | |
---|
The environment variable @env{GOMP_RTEMS_THREAD_POOLS} is RTEMS-specific. It
determines the thread pools for each scheduler instance. The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:

@itemize @bullet
@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}. In case the priority
value is omitted, a worker thread inherits the priority of the OpenMP
master thread that created it. The priority of the worker thread is not
changed by libgomp after creation, even if a new OpenMP master thread using the
worker has a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.
@end itemize

In case no thread pool configuration is specified for a scheduler instance,
each OpenMP master thread of this scheduler instance will use its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP master thread must call @code{omp_set_num_threads}.

Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
one thread pool available. Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP master thread
that created it. In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.

@subsection Thread Dispatch Details

This section gives background information to developers interested in the
interrupt latencies introduced by thread dispatching. A thread dispatch
consists of all work which must be done to stop the currently executing thread
on a processor and hand over this processor to an heir thread.

On SMP systems, scheduling decisions on one processor must be propagated to
other processors through inter-processor interrupts. Thus a thread dispatch
which must be carried out on another processor does not happen
instantaneously. Several thread dispatch requests might be in flight at the
same time, and some of them may be out of date before the corresponding
processor has time to deal with them. The thread dispatch mechanism uses
three per-processor variables,
@itemize @bullet
@item the executing thread,
@item the heir thread, and
@item a boolean flag indicating if a thread dispatch is necessary or not.
@end itemize
Updates of the heir thread and the thread dispatch necessary indicator are
synchronized via explicit memory barriers without the use of locks. A thread
can be an heir thread on at most one processor in the system. The thread
context is protected by a TTAS lock embedded in the context to ensure that it
is used on at most one processor at a time. The thread post-switch actions
use a per-processor lock. This implementation turned out to be quite
efficient and no lock contention was observed in the test suite.

The current implementation of thread dispatching has some implications with
respect to the interrupt latency. It is crucial to preserve the system
invariant that a thread can execute on at most one processor in the system at
a time. This is accomplished with a boolean indicator in the thread context.
The processor architecture specific context switch code will mark that a
thread context is no longer executing and wait until the heir context has
stopped execution before it restores the heir context and resumes execution
of the heir thread (the boolean indicator is basically a TTAS lock). So,
there is one point in time in which a processor is without a thread. This is
essential to avoid cyclic dependencies in case multiple threads migrate at
once. Otherwise some supervising entity would be necessary to prevent
deadlocks. Such a global supervisor would lead to scalability problems, so
this approach is not used. Currently the context switch is performed with
interrupts disabled. Thus, in case the heir thread is currently executing on
another processor, the time of disabled interrupts is prolonged, since one
processor has to wait for another processor to make progress.

It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted thread on its stack. In case a
thread is marked as not executing, we must not use its thread stack to store
such an interrupt context. We cannot use the heir stack before it stopped
execution on another processor. If we enable interrupts during this
transition, then we have to provide an alternative thread independent stack for
interrupts in this time frame. This issue needs further investigation.

The problematic situation occurs in case we have a thread which executes with
thread dispatching disabled and should execute on another processor (e.g. it
is an heir thread on another processor). In this case the interrupts on this
other processor are disabled until the thread enables thread dispatching and
starts the thread dispatch sequence. The scheduler (an exception is the
scheduler with thread processor affinity support) tries to avoid such a
situation and checks whether a newly scheduled thread already executes on a
processor. In case the assigned processor differs from the processor on which
the thread already executes and this processor is a member of the processor
set managed by this scheduler instance, it will reassign the processors to
keep the already executing thread in place. Therefore normal scheduler
requests will not lead to such a situation. Explicit thread migration
requests, however, can lead to this situation. Explicit thread migrations may
occur due to the scheduler helping protocol or explicit scheduler instance
changes. The situation can also be provoked by interrupts which suspend and
resume threads multiple times and produce stale asynchronous thread dispatch
requests in the system.

@c
@c
@c
@section Operations

@subsection Setting Affinity to a Single Processor

In some embedded applications targeting SMP systems, it may be beneficial to
lock individual tasks to specific processors. In this way, one can designate a
processor for I/O tasks, another for computation, and so on. The following
illustrates the code sequence necessary to assign a task an affinity for the
processor with index @code{processor_index}.

@example
@group
#include <rtems.h>
#include <assert.h>

void pin_to_processor(rtems_id task_id, int processor_index)
@{
  rtems_status_code sc;
  cpu_set_t cpuset;

  CPU_ZERO(&cpuset);
  CPU_SET(processor_index, &cpuset);

  sc = rtems_task_set_affinity(task_id, sizeof(cpuset), &cpuset);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example

It is important to note that the @code{cpuset} is not validated until the
@code{@value{DIRPREFIX}task_set_affinity} call is made. At that point,
it is validated against the current system configuration.

@c
@c
@c
@section Directives

This section details the symmetric multiprocessing services. A subsection
is dedicated to each of these services and describes the calling sequence,
related constants, usage, and status codes.

@c
@c rtems_get_processor_count
@c
@page
@subsection GET_PROCESSOR_COUNT - Get processor count

@subheading CALLING SEQUENCE:

@ifset is-C
@example
uint32_t rtems_get_processor_count(void);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

The count of processors in the system.

@subheading DESCRIPTION:

On uni-processor configurations a value of one will be returned.

On SMP configurations this returns the value of a global variable set during
system initialization to indicate the count of utilized processors. The
processor count depends on the physically or virtually available processors
and application configuration. The value will always be less than or equal to
the maximum count of application configured processors.

@subheading NOTES:

None.

@c
@c rtems_get_current_processor
@c
@page
@subsection GET_CURRENT_PROCESSOR - Get current processor index

@subheading CALLING SEQUENCE:

@ifset is-C
@example
uint32_t rtems_get_current_processor(void);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

The index of the current processor.

@subheading DESCRIPTION:

On uni-processor configurations a value of zero will be returned.

On SMP configurations an architecture specific method is used to obtain the
index of the current processor in the system. The set of processor indices is
the range of integers starting with zero up to the processor count minus one.

Outside of sections with disabled thread dispatching the current processor
index may change after every instruction since the thread may migrate from one
processor to another. Sections with disabled interrupts are sections with
thread dispatching disabled.

@subheading NOTES:

None.

@c
@c rtems_scheduler_ident
@c
@page
@subsection SCHEDULER_IDENT - Get ID of a scheduler

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_scheduler_ident(
  rtems_name  name,
  rtems_id   *id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{id} is NULL@*
@code{@value{RPREFIX}INVALID_NAME} - invalid scheduler name@*
@code{@value{RPREFIX}UNSATISFIED} - a scheduler with this name exists, but
the processor set of this scheduler is empty

@subheading DESCRIPTION:

Identifies a scheduler by its name. The scheduler name is determined by the
scheduler configuration. @xref{Configuring a System Configuring Clustered
Schedulers}.

@subheading NOTES:

None.

@c
@c rtems_scheduler_get_processor_set
@c
@page
@subsection SCHEDULER_GET_PROCESSOR_SET - Get processor set of a scheduler

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_scheduler_get_processor_set(
  rtems_id   scheduler_id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid scheduler id@*
@code{@value{RPREFIX}INVALID_NUMBER} - the affinity set buffer is too small
for the set of processors owned by the scheduler

@subheading DESCRIPTION:

Returns the processor set owned by the scheduler in @code{cpuset}. A set bit
in the processor set means that this processor is owned by the scheduler and a
cleared bit means the opposite.

@subheading NOTES:

None.

@c
@c rtems_task_get_scheduler
@c
@page
@subsection TASK_GET_SCHEDULER - Get scheduler of a task

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_get_scheduler(
  rtems_id  task_id,
  rtems_id *scheduler_id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{scheduler_id} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id

@subheading DESCRIPTION:

Returns the scheduler identifier of a task identified by @code{task_id} in
@code{scheduler_id}.

@subheading NOTES:

None.

@c
@c rtems_task_set_scheduler
@c
@page
@subsection TASK_SET_SCHEDULER - Set scheduler of a task

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_set_scheduler(
  rtems_id task_id,
  rtems_id scheduler_id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ID} - invalid task or scheduler id@*
@code{@value{RPREFIX}INCORRECT_STATE} - the task is in the wrong state to
perform a scheduler change

@subheading DESCRIPTION:

Sets the scheduler of a task identified by @code{task_id} to the scheduler
identified by @code{scheduler_id}. The scheduler of a task is initialized to
the scheduler of the task that created it.

@subheading NOTES:

None.

@subheading EXAMPLE:

@example
@group
#include <rtems.h>
#include <assert.h>

void task(rtems_task_argument arg);

void example(void)
@{
  rtems_status_code sc;
  rtems_id          task_id;
  rtems_id          scheduler_id;
  rtems_name        scheduler_name;

  scheduler_name = rtems_build_name('W', 'O', 'R', 'K');

  sc = rtems_scheduler_ident(scheduler_name, &scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_create(
    rtems_build_name('T', 'A', 'S', 'K'),
    1,
    RTEMS_MINIMUM_STACK_SIZE,
    RTEMS_DEFAULT_MODES,
    RTEMS_DEFAULT_ATTRIBUTES,
    &task_id
  );
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_set_scheduler(task_id, scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_start(task_id, task, 0);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example

@c
@c rtems_task_get_affinity
@c
@page
@subsection TASK_GET_AFFINITY - Get task processor affinity

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_get_affinity(
  rtems_id   id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
@code{@value{RPREFIX}INVALID_NUMBER} - the affinity set buffer is too small
for the current processor affinity set of the task

@subheading DESCRIPTION:

Returns the current processor affinity set of the task in @code{cpuset}. A
set bit in the affinity set means that the task can execute on this processor
and a cleared bit means the opposite.

@subheading NOTES:

None.

@c
@c rtems_task_set_affinity
@c
@page
@subsection TASK_SET_AFFINITY - Set task processor affinity

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_set_affinity(
  rtems_id         id,
  size_t           cpusetsize,
  const cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
@code{@value{RPREFIX}INVALID_NUMBER} - invalid processor affinity set

@subheading DESCRIPTION:

Sets the processor affinity set of the task to the set specified by
@code{cpuset}. A set bit in the affinity set means that the task can execute
on this processor and a cleared bit means the opposite.

@subheading NOTES:

This function will not change the scheduler of the task. The intersection of
the processor affinity set and the set of processors owned by the scheduler
of the task must be non-empty. It is not an error if the processor affinity
set contains processors that are not part of the set of processors owned by
the scheduler instance of the task. A task will simply not run under normal
circumstances on these processors since the scheduler ignores them. Some
locking protocols may temporarily use processors that are not included in the
processor affinity set of the task. It is also not an error if the processor
affinity set contains processors that are not part of the system.