source: rtems-docs/c-user/symmetric_multiprocessing_services.rst @ a238912

.. comment SPDX-License-Identifier: CC-BY-SA-4.0

.. COMMENT: COPYRIGHT (c) 2014.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Symmetric Multiprocessing Services
**********************************

Introduction
============

The Symmetric Multiprocessing (SMP) support of RTEMS 4.11.0 and later is
available on

- ARM,

- PowerPC, and

- SPARC.

It must be explicitly enabled via the ``--enable-smp`` configure command line
option.  To enable SMP in the application configuration see :ref:`Enable SMP
Support for Applications`.  The default scheduler for SMP applications supports
up to 32 processors and is a global fixed priority scheduler, see also
:ref:`Configuring Clustered Schedulers`.  For example applications see
:file:`testsuites/smptests`.

.. warning::

   The SMP support in this release of RTEMS is a work in progress.  Before you
   start using this RTEMS version for SMP, ask on the RTEMS mailing list.

This chapter describes the services related to Symmetric Multiprocessing
provided by RTEMS.

The application level services currently provided are:

- rtems_get_processor_count_ - Get processor count

- rtems_get_current_processor_ - Get current processor index

- rtems_scheduler_ident_ - Get ID of a scheduler

- rtems_scheduler_get_processor_set_ - Get processor set of a scheduler

- rtems_scheduler_add_processor_ - Add processor to a scheduler

- rtems_scheduler_remove_processor_ - Remove processor from a scheduler

Background
==========

Uniprocessor versus SMP Parallelism
-----------------------------------

Uniprocessor systems have long been used in embedded systems.  In this hardware
model, there are some system execution characteristics which have long been
taken for granted:

- one task executes at a time

- hardware events result in interrupts

There is no true parallelism.  Even when interrupts appear to occur at the same
time, they are processed largely in a serial fashion.  This is true even when
the interrupt service routines are allowed to nest.  From a tasking viewpoint,
it is the responsibility of the real-time operating system to simulate
parallelism by switching between tasks.  These task switches occur in response
to hardware interrupt events and explicit application events such as blocking
for a resource or delaying.

With symmetric multiprocessing, the presence of multiple processors allows for
true concurrency and provides for cost-effective performance
improvements.  Uniprocessors tend to increase performance by increasing clock
speed and complexity.  This tends to lead to hot, power hungry microprocessors
which are poorly suited for many embedded applications.

The true concurrency is in sharp contrast to the single task and interrupt
model of uniprocessor systems.  This results in a fundamental change to the
uniprocessor system characteristics listed above.  Developers are faced with a
different set of characteristics which, in turn, break some existing
assumptions and result in new challenges.  In an SMP system with N processors,
these are the new execution characteristics:

- N tasks execute in parallel

- hardware events result in interrupts

There is true parallelism with a task executing on each processor and the
possibility of interrupts occurring on each processor.  Thus in contrast to
there being one task and one interrupt to consider on a uniprocessor, there are
N tasks and potentially N simultaneous interrupts to consider on an SMP system.

This increase in hardware complexity and presence of true parallelism results
in the application developer needing to be even more cautious about mutual
exclusion and shared data access than in a uniprocessor embedded system.  Race
conditions that never or rarely happened when an application executed on a
uniprocessor system become much more likely due to multiple threads executing
in parallel.  On a uniprocessor system, these race conditions would only happen
when a task switch occurred at just the wrong moment.  Now there are N-1 tasks
executing in parallel all the time and this results in many more opportunities
for small windows in critical sections to be hit.

Task Affinity
-------------
.. index:: task affinity
.. index:: thread affinity

RTEMS provides services to manipulate the affinity of a task.  Affinity is used
to specify the subset of processors in an SMP system on which a particular task
can execute.

By default, tasks have an affinity which allows them to execute on any
available processor.

Task affinity is a possible feature to be supported by SMP-aware
schedulers.  However, only a subset of the available schedulers support
affinity.  Although the behavior is scheduler specific, if the scheduler does
not support affinity, it is likely to ignore all attempts to set affinity.

The scheduler with support for arbitrary processor affinities uses a proof of
concept implementation.  See https://devel.rtems.org/ticket/2510.
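
As a sketch, a task could restrict itself to processor one with the
``rtems_task_set_affinity()`` directive.  The helper name below is illustrative
and the error handling is reduced to a comment.

.. code-block:: c

    #include <rtems.h>

    void pin_self_to_processor_one( void )
    {
        cpu_set_t         cpuset;
        rtems_status_code sc;

        /* Build a processor set which contains only processor one */
        CPU_ZERO( &cpuset );
        CPU_SET( 1, &cpuset );

        /* RTEMS_SELF addresses the calling task */
        sc = rtems_task_set_affinity( RTEMS_SELF, sizeof( cpuset ), &cpuset );
        if ( sc != RTEMS_SUCCESSFUL ) {
            /* The scheduler may not support processor affinities */
        }
    }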

Task Migration
--------------
.. index:: task migration
.. index:: thread migration

With more than one processor in the system tasks can migrate from one processor
to another.  There are three reasons why tasks migrate in RTEMS.

- The scheduler changes explicitly via ``rtems_task_set_scheduler()`` or
  similar directives.

- The task resumes execution after a blocking operation.  On a priority based
  scheduler it will evict the lowest priority task currently assigned to a
  processor in the processor set managed by the scheduler instance.

- The task moves temporarily to another scheduler instance due to locking
  protocols like *Migratory Priority Inheritance* or the *Multiprocessor
  Resource Sharing Protocol*.

Task migration should be avoided so that the working set of a task can stay on
the most local cache level.

The current implementation of task migration in RTEMS has some implications
with respect to the interrupt latency.  It is crucial to preserve the system
invariant that a task can execute on at most one processor in the system at a
time.  This is accomplished with a boolean indicator in the task context.  The
processor architecture specific low-level task context switch code will mark
that a task context is no longer executing and waits until the heir context has
stopped execution before it restores the heir context and resumes execution of
the heir task.  So there is one point in time in which a processor is without a
task.  This is essential to avoid cyclic dependencies in case multiple tasks
migrate at once.  Otherwise some supervising entity is necessary to prevent
livelocks.  Such a global supervisor would lead to scalability problems, so
this approach is not used.  Currently the thread dispatch is performed with
interrupts disabled.  So in case the heir task is currently executing on
another processor, this prolongs the time of disabled interrupts, since one
processor has to wait for another processor to make progress.

It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted task on its stack.  In case a
task is marked as not executing, we must not use its task stack to store such
an interrupt context.  We cannot use the heir stack before it stopped execution
on another processor.  So if we enable interrupts during this transition, we
have to provide an alternative task independent stack for this time frame.
This issue needs further investigation.  See also `Thread Dispatch Details`_
below.

Clustered Scheduling
--------------------

We have clustered scheduling in case the set of processors of a system is
partitioned into non-empty pairwise-disjoint subsets.  These subsets are called
clusters.  Clusters with a cardinality of one are partitions.  Each cluster is
owned by exactly one scheduler instance.

Clustered scheduling helps to control the worst-case latencies in
multi-processor systems, see :cite:`Brandenburg:2011:SL`.  The goal is to
reduce the amount of shared state in the system and thus prevent lock
contention.  Modern multi-processor systems tend to have several layers of data
and instruction caches.  With clustered scheduling it is possible to honour the
cache topology of a system and thus avoid expensive cache synchronization
traffic.  It is easy to implement.  The problem is to provide synchronization
primitives for inter-cluster synchronization (more than one cluster is involved
in the synchronization process).  In RTEMS there are currently four means
available:

- events,

- message queues,

- semaphores using the :ref:`Priority Inheritance` protocol (priority
  boosting), and

- semaphores using the Multiprocessor Resource Sharing Protocol
  :cite:`Burns:2013:MrsP`.

The clustered scheduling approach enables separation of functions with
real-time requirements and functions that profit from fairness and high
throughput, provided the scheduler instances are fully decoupled and adequate
inter-cluster synchronization primitives are used.  This is work in progress.

For the configuration of clustered schedulers see :ref:`Configuring Clustered
Schedulers`.

To set the scheduler of a task see :ref:`SCHEDULER_IDENT - Get ID of a
scheduler` and :ref:`TASK_SET_SCHEDULER - Set scheduler of a task`.
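
For illustration, the following sketch identifies a scheduler instance by name
and moves a task to it.  The scheduler name ``WRK0`` is illustrative and must
match the application configuration; depending on the RTEMS version,
``rtems_task_set_scheduler()`` may take an additional task priority argument.

.. code-block:: c

    #include <rtems.h>

    rtems_status_code move_task_to_worker_cluster( rtems_id task_id )
    {
        rtems_status_code sc;
        rtems_id          scheduler_id;

        /* Look up the scheduler instance configured with the name WRK0 */
        sc = rtems_scheduler_ident(
            rtems_build_name( 'W', 'R', 'K', '0' ),
            &scheduler_id
        );
        if ( sc != RTEMS_SUCCESSFUL ) {
            return sc;
        }

        /* Make this scheduler instance the home scheduler of the task */
        return rtems_task_set_scheduler( task_id, scheduler_id );
    }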

Task Priority Queues
--------------------

Due to the support for clustered scheduling the task priority queues need
special attention.  It makes no sense to compare the priority values of two
different scheduler instances.  Thus, it is not possible to simply use one
plain priority queue for tasks of different scheduler instances.

One solution to this problem is to use two levels of queues.  The top level
queue provides FIFO ordering and contains priority queues.  Each priority queue
is associated with a scheduler instance and contains only tasks of this
scheduler instance.  Tasks are enqueued in the priority queue corresponding to
their scheduler instance.  In case this priority queue was empty, it is
appended to the FIFO.  To dequeue a task, the highest priority task of the
first priority queue in the FIFO is selected.  Then the first priority queue is
removed from the FIFO.  In case the previously first priority queue is not
empty, it is appended to the FIFO again.  So there is FIFO fairness with
respect to the highest priority task of each scheduler instance.  See also
:cite:`Brandenburg:2013:OMIP`.

Such a two level queue may need a considerable amount of memory if fast enqueue
and dequeue operations are desired (depends on the scheduler instance count).
To mitigate this problem an approach of the FreeBSD kernel was implemented in
RTEMS.  We have the invariant that a task can be enqueued on at most one task
queue.  Thus, we need only as many queues as we have tasks.  Each task is
equipped with a spare task queue which it can give to an object on demand.  The
task queue uses a dedicated memory space independent of the other memory used
for the task itself.  In case a task needs to block, there are two options

- the object already has a task queue, then the task enqueues itself to this
  already present queue and the spare task queue of the task is added to a list
  of free queues for this object, or

- otherwise the queue of the task is given to the object and the task enqueues
  itself to this queue.

In case the task is dequeued, there are two options

- the task is the last task on the queue, then it removes this queue from the
  object and reclaims it for its own purpose, or

- otherwise the task removes one queue from the free list of the object and
  reclaims it for its own purpose.

Since there are usually more objects than tasks, this actually reduces the
memory demands.  In addition the objects contain only a pointer to the task
queue structure.  This helps to hide implementation details and makes it
possible to use self-contained synchronization objects in Newlib and GCC (C++
and OpenMP run-time support).

Scheduler Helping Protocol
--------------------------

The scheduler provides a helping protocol to support locking protocols like
*Migratory Priority Inheritance* or the *Multiprocessor Resource Sharing
Protocol*.  Each ready task can use at least one scheduler node at a time to
gain access to a processor.  Each scheduler node has an owner, a user and an
optional idle task.  The owner of a scheduler node is determined at task
creation and never changes during the life time of a scheduler node.  The user
of a scheduler node may change due to the scheduler helping protocol.  A
scheduler node is in one of the four scheduler help states:

:dfn:`help yourself`
    This scheduler node is solely used by the owner task.  This task owns no
    resources using a helping protocol and thus does not take part in the
    scheduler helping protocol.  No help will be provided for other tasks.

:dfn:`help active owner`
    This scheduler node is owned by a task actively owning a resource and can
    be used to help out tasks.  In case this scheduler node changes its state
    from ready to scheduled and the task executes using another node, then an
    idle task will be provided as a user of this node to temporarily execute on
    behalf of the owner task.  Thus lower priority tasks are denied access to
    the processors of this scheduler instance.  In case a task actively owning
    a resource performs a blocking operation, then an idle task will be used
    also in case this node is in the scheduled state.

:dfn:`help active rival`
    This scheduler node is owned by a task actively obtaining a resource
    currently owned by another task and can be used to help out tasks.  The
    task owning this node is ready and will give away its processor in case the
    task owning the resource asks for help.

:dfn:`help passive`
    This scheduler node is owned by a task obtaining a resource currently owned
    by another task and can be used to help out tasks.  The task owning this
    node is blocked.

The following scheduler operations return a task in need of help:

- unblock,

- change priority,

- yield, and

- ask for help.

A task in need of help is a task that encounters a scheduler state change from
scheduled to ready (this is a pre-emption by a higher priority task) or a task
that cannot be scheduled in an unblock operation.  Such a task can ask tasks
which depend on resources owned by this task for help.

In case it is not possible to schedule a task in need of help, the scheduler
nodes available for the task will be placed into the set of ready scheduler
nodes of the corresponding scheduler instances.  Once a state change from ready
to scheduled happens for one of these scheduler nodes, it will be used to
schedule the task in need of help.

The ask for help scheduler operation is used to help tasks in need of help
returned by the operations mentioned above.  This operation is also used in
case the root of a resource sub-tree owned by a task changes.

The run-time of the ask for help procedures depends on the size of the resource
tree of the task needing help and other resource trees in case tasks in need of
help are produced during this operation.  Thus the worst-case latency in the
system depends on the maximum resource tree size of the application.

Critical Section Techniques and SMP
-----------------------------------

As discussed earlier, SMP systems have opportunities for true parallelism which
was not possible on uniprocessor systems.  Consequently, multiple techniques
that provided adequate critical sections on uniprocessor systems are unsafe on
SMP systems.  In this section, some of these unsafe techniques will be
discussed.

In general, applications must use proper operating system provided mutual
exclusion mechanisms to ensure correct behavior.  This primarily means the use
of binary semaphores or mutexes to implement critical sections.
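
For illustration, a minimal sketch of such a critical section using a binary
semaphore with priority inheritance follows; all names are illustrative and the
error handling is reduced to comments.

.. code-block:: c

    #include <rtems.h>

    static rtems_id shared_data_mutex;

    void create_shared_data_mutex( void )
    {
        rtems_status_code sc;

        sc = rtems_semaphore_create(
            rtems_build_name( 'M', 'T', 'X', ' ' ),
            1,
            RTEMS_BINARY_SEMAPHORE | RTEMS_PRIORITY | RTEMS_INHERIT_PRIORITY,
            0,
            &shared_data_mutex
        );
        if ( sc != RTEMS_SUCCESSFUL ) {
            /* Handle the creation failure */
        }
    }

    void update_shared_data( void )
    {
        rtems_semaphore_obtain( shared_data_mutex, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
        /* Access the shared data */
        rtems_semaphore_release( shared_data_mutex );
    }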

Disable Interrupts and Interrupt Locks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A low overhead means to ensure mutual exclusion in uni-processor configurations
is to disable interrupts around a critical section.  This is commonly used in
device driver code and throughout the operating system core.  On SMP
configurations, however, disabling the interrupts on one processor has no
effect on other processors.  So, this is insufficient to ensure system wide
mutual exclusion.  The macros

- ``rtems_interrupt_disable()``,

- ``rtems_interrupt_enable()``, and

- ``rtems_interrupt_flush()``

are disabled on SMP configurations and their use will lead to compiler warnings
and linker errors.  In the unlikely case that interrupts must be disabled on
the current processor, the

- ``rtems_interrupt_local_disable()``, and

- ``rtems_interrupt_local_enable()``

macros are now available in all configurations.

Since disabling of interrupts is not enough to ensure system wide mutual
exclusion on SMP, a new low-level synchronization primitive was added: the
interrupt locks.  They are a simple API layer on top of the SMP locks used for
low-level synchronization in the operating system core.  Currently they are
implemented as a ticket lock.  On uni-processor configurations they degenerate
to simple interrupt disable/enable sequences.  It is disallowed to acquire a
single interrupt lock in a nested way.  This will result in an infinite loop
with interrupts disabled.  While converting legacy code to interrupt locks,
care must be taken to avoid this situation.

.. code-block:: c
    :linenos:

    void legacy_code_with_interrupt_disable_enable( void )
    {
        rtems_interrupt_level level;
        rtems_interrupt_disable( level );
        /* Some critical stuff */
        rtems_interrupt_enable( level );
    }

    RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" );

    void smp_ready_code_with_interrupt_lock( void )
    {
        rtems_interrupt_lock_context lock_context;
        rtems_interrupt_lock_acquire( &lock, &lock_context );
        /* Some critical stuff */
        rtems_interrupt_lock_release( &lock, &lock_context );
    }

The ``rtems_interrupt_lock`` structure is empty on uni-processor
configurations.  Empty structures have a different size in C
(implementation-defined, zero in case of GCC) and C++ (implementation-defined
non-zero value, one in case of GCC).  Thus the
``RTEMS_INTERRUPT_LOCK_DECLARE()``, ``RTEMS_INTERRUPT_LOCK_DEFINE()``,
``RTEMS_INTERRUPT_LOCK_MEMBER()``, and ``RTEMS_INTERRUPT_LOCK_REFERENCE()``
macros are provided to ensure ABI compatibility.
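
A minimal sketch of embedding an interrupt lock into an object with these
macros might look as follows; the structure and function names are
illustrative.

.. code-block:: c

    #include <rtems.h>

    typedef struct {
        RTEMS_INTERRUPT_LOCK_MEMBER( lock )
        int counter;
    } counter_object;

    void counter_object_initialize( counter_object *obj )
    {
        rtems_interrupt_lock_initialize( &obj->lock, "Counter" );
        obj->counter = 0;
    }

    void counter_object_increment( counter_object *obj )
    {
        rtems_interrupt_lock_context lock_context;

        rtems_interrupt_lock_acquire( &obj->lock, &lock_context );
        ++obj->counter;
        rtems_interrupt_lock_release( &obj->lock, &lock_context );
    }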

Highest Priority Task Assumption
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On a uniprocessor system, it is safe to assume that when the highest priority
task in an application executes, it will execute without being preempted until
it voluntarily blocks.  Interrupts may occur while it is executing, but there
will be no context switch to another task unless the highest priority task
voluntarily initiates it.

Given the assumption that no other tasks will have their execution interleaved
with the highest priority task, it is possible for this task to be constructed
such that it does not need to acquire a binary semaphore or mutex for protected
access to shared data.

In an SMP system, it cannot be assumed there will never be a single task
executing.  It should be assumed that every processor is executing another
application task.  Further, those tasks will be ones which would not have been
executed in a uniprocessor configuration and should be assumed to have data
synchronization conflicts with what was formerly the highest priority task
which executed without conflict.

Disable Preemption
~~~~~~~~~~~~~~~~~~

On a uniprocessor system, disabling preemption in a task is very similar to
making the highest priority task assumption.  While preemption is disabled, no
task context switches will occur unless the task initiates them voluntarily.
On an SMP system, however, just as with the highest priority task assumption,
there are N-1 other processors also running tasks.  Thus the assumption that no
other tasks will run while the task has preemption disabled is violated.

Task Unique Data and SMP
------------------------

Per task variables are a service commonly provided by real-time operating
systems for application use.  They work by allowing the application to specify
a location in memory (typically a ``void *``) which is logically added to the
context of a task.  On each task switch, the location in memory is stored and
each task can have a unique value in the same memory location.  This memory
location is directly accessed as a variable in a program.

This works well in a uniprocessor environment because there is one task
executing and one memory location containing a task-specific value.  But it is
fundamentally broken on an SMP system because there are always N tasks
executing.  With only one location in memory, N-1 tasks will not have the
correct value.

This paradigm for providing task unique data values is fundamentally broken on
SMP systems.

Classic API Per Task Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Classic API provides three directives to support per task variables.  These
are:

- ``rtems_task_variable_add`` - Associate per task variable

- ``rtems_task_variable_get`` - Obtain value of a per task variable

- ``rtems_task_variable_delete`` - Remove per task variable

As task variables are unsafe for use on SMP systems, the use of these services
must be eliminated in all software that is to be used in an SMP environment.
The task variables API is disabled on SMP.  Its use will lead to compile-time
and link-time errors.  It is recommended that the application developer
consider the use of POSIX Keys or Thread Local Storage (TLS).  POSIX Keys are
available in all RTEMS configurations.  For the availability of TLS on a
particular architecture please consult the *RTEMS CPU Architecture Supplement*.
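
A minimal sketch of a POSIX Key based replacement for a per task variable might
look as follows; the names are illustrative and error handling is omitted.

.. code-block:: c

    #include <pthread.h>

    static pthread_key_t task_data_key;

    /* Call once during system initialization */
    void create_task_data_key( void )
    {
        pthread_key_create( &task_data_key, NULL );
    }

    /* Each task stores and retrieves its own value */
    void set_task_data( void *value )
    {
        pthread_setspecific( task_data_key, value );
    }

    void *get_task_data( void )
    {
        return pthread_getspecific( task_data_key );
    }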

The only remaining user of task variables in the RTEMS code base is the Ada
support.  So basically Ada is not available on RTEMS SMP.

OpenMP
------

OpenMP support for RTEMS is available via the GCC provided libgomp.  There is
libgomp support for RTEMS in the POSIX configuration of libgomp since GCC 4.9
(requires a Newlib snapshot after 2015-03-12).  In GCC 6.1 or later (requires a
Newlib snapshot after 2015-07-30 for ``<sys/lock.h>`` provided self-contained
synchronization objects) there is a specialized libgomp configuration for RTEMS
which offers significantly better performance compared to the POSIX
configuration of libgomp.  In addition, application configurable thread pools
for each scheduler instance are available in GCC 6.1 or later.

The run-time configuration of libgomp is done via environment variables
documented in the `libgomp manual <https://gcc.gnu.org/onlinedocs/libgomp/>`_.
The environment variables are evaluated in a constructor function which
executes in the context of the first initialization task before the actual
initialization task function is called (just like a global C++ constructor).
To set application specific values, a higher priority constructor function must
be used to set up the environment variables.

.. code-block:: c

    #include <stdlib.h>

    /* A constructor with priority 1000 executes before constructors with
     * default priority, so these variables are set before libgomp reads
     * its environment. */
    void __attribute__((constructor(1000))) config_libgomp( void )
    {
        setenv( "OMP_DISPLAY_ENV", "VERBOSE", 1 );
        setenv( "GOMP_SPINCOUNT", "30000", 1 );
        setenv( "GOMP_RTEMS_THREAD_POOLS", "1$2@SCHD", 1 );
    }

The environment variable ``GOMP_RTEMS_THREAD_POOLS`` is RTEMS-specific.  It
determines the thread pools for each scheduler instance.  The format for
``GOMP_RTEMS_THREAD_POOLS`` is a list of optional
``<thread-pool-count>[$<priority>]@<scheduler-name>`` configurations separated
by ``:`` where:

- ``<thread-pool-count>`` is the thread pool count for this scheduler instance.

- ``$<priority>`` is an optional priority for the worker threads of a thread
  pool according to ``pthread_setschedparam``.  In case a priority value is
  omitted, then a worker thread will inherit the priority of the OpenMP master
  thread that created it.  The priority of the worker thread is not changed by
  libgomp after creation, even if a new OpenMP master thread using the worker
  has a different priority.

- ``@<scheduler-name>`` is the scheduler instance name according to the RTEMS
  application configuration.

In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP master thread of this scheduler instance will use its own
dynamically allocated thread pool.  To limit the worker thread count of the
thread pools, each OpenMP master thread must call ``omp_set_num_threads``.
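
For illustration, an OpenMP master thread could limit its dynamically allocated
thread pool as follows; the thread count of four is an arbitrary example value.

.. code-block:: c

    #include <omp.h>

    void bounded_parallel_work( void )
    {
        /* Limit the worker thread count before the parallel region */
        omp_set_num_threads( 4 );

        #pragma omp parallel
        {
            /* Work distributed across at most four threads */
        }
    }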

Let us suppose we have three scheduler instances ``IO``, ``WRK0``, and ``WRK1``
with ``GOMP_RTEMS_THREAD_POOLS`` set to ``"1@WRK0:3$4@WRK1"``.  Then there are
no thread pool restrictions for scheduler instance ``IO``.  In the scheduler
instance ``WRK0`` there is one thread pool available.  Since no priority is
specified for this scheduler instance, the worker thread inherits the priority
of the OpenMP master thread that created it.  In the scheduler instance
``WRK1`` there are three thread pools available and their worker threads run at
priority four.

Thread Dispatch Details
-----------------------

This section gives background information to developers interested in the
interrupt latencies introduced by thread dispatching.  A thread dispatch
consists of all work which must be done to stop the currently executing thread
on a processor and hand over this processor to an heir thread.

In SMP systems, scheduling decisions on one processor must be propagated to
other processors through inter-processor interrupts.  A thread dispatch which
must be carried out on another processor does not happen instantaneously.
Thus, several thread dispatch requests might be in flight and it is possible
that some of them may be out of date before the corresponding processor has
time to deal with them.  The thread dispatch mechanism uses three per-processor
variables:

- the executing thread,

- the heir thread, and

- a boolean flag indicating if a thread dispatch is necessary or not.

Updates of the heir thread are done via a normal store operation.  The thread
dispatch necessary indicator of another processor is set as a side-effect of an
inter-processor interrupt.  So, this change notification works without the use
of locks.  The thread context is protected by a TTAS lock embedded in the
context to ensure that it is used on at most one processor at a time.
Normally, only thread-specific or per-processor locks are used during a thread
dispatch.  This implementation turned out to be quite efficient and no lock
contention was observed in the testsuite.  The heavy-weight thread dispatch
sequence is only entered in case the thread dispatch indicator is set.

The context-switch is performed with interrupts enabled.  During the transition
from the executing to the heir thread, neither the stack of the executing
thread nor that of the heir thread may be used during interrupt processing.
For this purpose a temporary per-processor stack is set up which may be used by
the interrupt prologue before the stack is switched to the interrupt stack.

Directives
==========

This section details the symmetric multiprocessing services.  A subsection is
dedicated to each of these services and describes the calling sequence, related
constants, usage, and status codes.

.. raw:: latex

   \clearpage

.. _rtems_get_processor_count:

GET_PROCESSOR_COUNT - Get processor count
-----------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        uint32_t rtems_get_processor_count(void);

DIRECTIVE STATUS CODES:
    The count of processors in the system.

DESCRIPTION:
    On uni-processor configurations a value of one will be returned.

    On SMP configurations this returns the value of a global variable set
    during system initialization to indicate the count of utilized processors.
    The processor count depends on the physically or virtually available
    processors and application configuration.  The value will always be less
    than or equal to the maximum count of application configured processors.

NOTES:
    None.

.. raw:: latex

   \clearpage

.. _rtems_get_current_processor:

GET_CURRENT_PROCESSOR - Get current processor index
---------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        uint32_t rtems_get_current_processor(void);

DIRECTIVE STATUS CODES:
    The index of the current processor.

DESCRIPTION:
    On uni-processor configurations a value of zero will be returned.

    On SMP configurations an architecture specific method is used to obtain the
    index of the current processor in the system.  The set of processor indices
    is the range of integers starting with zero up to the processor count minus
    one.

    Outside of sections with disabled thread dispatching the current processor
    index may change after every instruction since the thread may migrate from
    one processor to another.  Sections with disabled interrupts are sections
    with thread dispatching disabled.

NOTES:
    None.
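
For illustration, a diagnostic sketch using both directives might look as
follows; note that the printed current processor index is only a snapshot
unless thread dispatching is disabled.

.. code-block:: c

    #include <rtems.h>
    #include <inttypes.h>
    #include <stdio.h>

    void print_processor_information( void )
    {
        printf(
            "processor count: %" PRIu32 ", current processor: %" PRIu32 "\n",
            rtems_get_processor_count(),
            rtems_get_current_processor()
        );
    }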

.. raw:: latex

   \clearpage

.. _rtems_scheduler_ident:

SCHEDULER_IDENT - Get ID of a scheduler
---------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        rtems_status_code rtems_scheduler_ident(
            rtems_name  name,
            rtems_id   *id
        );

DIRECTIVE STATUS CODES:
    .. list-table::
     :class: rtems-table

     * - ``RTEMS_SUCCESSFUL``
       - Successful operation.
     * - ``RTEMS_INVALID_ADDRESS``
       - The ``id`` parameter is ``NULL``.
     * - ``RTEMS_INVALID_NAME``
       - Invalid scheduler name.

DESCRIPTION:
    Identifies a scheduler by its name.  The scheduler name is determined by
    the scheduler configuration.  See :ref:`Configuring Clustered Schedulers`
    and :ref:`Configuring a Scheduler Name`.

NOTES:
    None.

.. raw:: latex

   \clearpage

.. _rtems_scheduler_get_processor_set:

SCHEDULER_GET_PROCESSOR_SET - Get processor set of a scheduler
--------------------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        rtems_status_code rtems_scheduler_get_processor_set(
            rtems_id   scheduler_id,
            size_t     cpusetsize,
            cpu_set_t *cpuset
        );

DIRECTIVE STATUS CODES:
    .. list-table::
     :class: rtems-table

     * - ``RTEMS_SUCCESSFUL``
       - Successful operation.
     * - ``RTEMS_INVALID_ID``
       - Invalid scheduler instance identifier.
     * - ``RTEMS_INVALID_ADDRESS``
       - The ``cpuset`` parameter is ``NULL``.
     * - ``RTEMS_INVALID_NUMBER``
       - The processor set buffer is too small for the set of processors owned
         by the scheduler instance.

DESCRIPTION:
    Returns the processor set owned by the scheduler instance in ``cpuset``.  A
    set bit in the processor set means that this processor is owned by the
    scheduler instance and a cleared bit means the opposite.

NOTES:
    None.
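
A sketch that lists the processors owned by a scheduler instance might look as
follows; the output format is illustrative.

.. code-block:: c

    #include <rtems.h>
    #include <inttypes.h>
    #include <stdio.h>

    void print_scheduler_processors( rtems_id scheduler_id )
    {
        cpu_set_t         cpuset;
        rtems_status_code sc;
        uint32_t          cpu_index;

        sc = rtems_scheduler_get_processor_set(
            scheduler_id,
            sizeof( cpuset ),
            &cpuset
        );
        if ( sc != RTEMS_SUCCESSFUL ) {
            return;
        }

        for ( cpu_index = 0; cpu_index < rtems_get_processor_count(); ++cpu_index ) {
            if ( CPU_ISSET( (int) cpu_index, &cpuset ) ) {
                printf( "processor %" PRIu32 " is owned\n", cpu_index );
            }
        }
    }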

.. raw:: latex

   \clearpage

.. _rtems_scheduler_add_processor:

SCHEDULER_ADD_PROCESSOR - Add processor to a scheduler
------------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        rtems_status_code rtems_scheduler_add_processor(
            rtems_id scheduler_id,
            uint32_t cpu_index
        );

DIRECTIVE STATUS CODES:
    .. list-table::
     :class: rtems-table

     * - ``RTEMS_SUCCESSFUL``
       - Successful operation.
     * - ``RTEMS_INVALID_ID``
       - Invalid scheduler instance identifier.
     * - ``RTEMS_NOT_CONFIGURED``
       - The processor is not configured to be used by the application.
     * - ``RTEMS_INCORRECT_STATE``
       - The processor is configured to be used by the application, however, it
         is not online.
     * - ``RTEMS_RESOURCE_IN_USE``
       - The processor is already assigned to a scheduler instance.

DESCRIPTION:
    Adds a processor to the set of processors owned by the specified scheduler
    instance.

NOTES:
    Must be called from task context.  This operation obtains and releases the
    objects allocator lock.

.. raw:: latex

   \clearpage

.. _rtems_scheduler_remove_processor:

SCHEDULER_REMOVE_PROCESSOR - Remove processor from a scheduler
--------------------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        rtems_status_code rtems_scheduler_remove_processor(
            rtems_id scheduler_id,
            uint32_t cpu_index
        );

DIRECTIVE STATUS CODES:
    .. list-table::
     :class: rtems-table

     * - ``RTEMS_SUCCESSFUL``
       - Successful operation.
     * - ``RTEMS_INVALID_ID``
       - Invalid scheduler instance identifier.
     * - ``RTEMS_INVALID_NUMBER``
       - The processor is not owned by the specified scheduler instance.
     * - ``RTEMS_RESOURCE_IN_USE``
       - The set of processors owned by the specified scheduler instance would
         be empty after the processor removal and there exists a non-idle task
         that uses this scheduler instance as its home scheduler instance.

DESCRIPTION:
    Removes a processor from the set of processors owned by the specified
    scheduler instance.

NOTES:
    Must be called from task context.  This operation obtains and releases the
    objects allocator lock.  Removing a processor from a scheduler is a complex
    operation that involves all tasks of the system.
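
For illustration, a sketch that moves processor one from one scheduler instance
to another might look as follows; the processor index and the origin of the
scheduler identifiers are assumptions of this example.

.. code-block:: c

    #include <rtems.h>

    rtems_status_code move_processor_one(
        rtems_id from_scheduler_id,
        rtems_id to_scheduler_id
    )
    {
        rtems_status_code sc;

        /* The processor is unassigned between the remove and the add */
        sc = rtems_scheduler_remove_processor( from_scheduler_id, 1 );
        if ( sc != RTEMS_SUCCESSFUL ) {
            return sc;
        }

        return rtems_scheduler_add_processor( to_scheduler_id, 1 );
    }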