source: rtems/doc/user/smp.t @ dafa5d88

Last change on this file since dafa5d88 was dafa5d88, checked in by Sebastian Huber <sebastian.huber@…>, on Sep 3, 2015 at 8:27:16 AM

score: Implement priority boosting

@c
@c  COPYRIGHT (c) 2014.
@c  On-Line Applications Research Corporation (OAR).
@c  All rights reserved.
@c

@chapter Symmetric Multiprocessing Services

@section Introduction

This chapter describes the services related to Symmetric Multiprocessing
provided by RTEMS.

The application level services currently provided are:

@itemize @bullet
@item @code{rtems_get_processor_count} - Get processor count
@item @code{rtems_get_current_processor} - Get current processor index
@item @code{rtems_scheduler_ident} - Get ID of a scheduler
@item @code{rtems_scheduler_get_processor_set} - Get processor set of a scheduler
@item @code{rtems_task_get_scheduler} - Get scheduler of a task
@item @code{rtems_task_set_scheduler} - Set scheduler of a task
@item @code{rtems_task_get_affinity} - Get task processor affinity
@item @code{rtems_task_set_affinity} - Set task processor affinity
@end itemize

@c
@c
@c
@section Background

@subsection Uniprocessor versus SMP Parallelism

Uniprocessor systems have long been used in embedded systems.  In this
hardware model, there are some system execution characteristics which have
long been taken for granted:

@itemize @bullet
@item one task executes at a time
@item hardware events result in interrupts
@end itemize

There is no true parallelism.  Even when interrupts appear to occur at the
same time, they are processed largely in a serial fashion.  This is true even
when the interrupt service routines are allowed to nest.  From a tasking
viewpoint, it is the responsibility of the real-time operating system to
simulate parallelism by switching between tasks.  These task switches occur
in response to hardware interrupt events and explicit application events such
as blocking for a resource or delaying.

With symmetric multiprocessing, the presence of multiple processors allows
for true concurrency and provides for cost-effective performance
improvements.  Uniprocessors tend to increase performance by increasing clock
speed and complexity.  This tends to lead to hot, power hungry
microprocessors which are poorly suited for many embedded applications.

The true concurrency is in sharp contrast to the single task and interrupt
model of uniprocessor systems.  This results in a fundamental change to the
uniprocessor system characteristics listed above.  Developers are faced with
a different set of characteristics which, in turn, break some existing
assumptions and result in new challenges.  In an SMP system with N
processors, these are the new execution characteristics:

@itemize @bullet
@item N tasks execute in parallel
@item hardware events result in interrupts
@end itemize

There is true parallelism with a task executing on each processor and the
possibility of interrupts occurring on each processor.  Thus, in contrast to
there being one task and one interrupt to consider on a uniprocessor, there
are N tasks and potentially N simultaneous interrupts to consider on an SMP
system.

This increase in hardware complexity and presence of true parallelism results
in the application developer needing to be even more cautious about mutual
exclusion and shared data access than in a uniprocessor embedded system.
Race conditions that never or rarely happened when an application executed on
a uniprocessor system become much more likely due to multiple threads
executing in parallel.  On a uniprocessor system, these race conditions would
only happen when a task switch occurred at just the wrong moment.  Now there
are N-1 other tasks executing in parallel all the time and this results in
many more opportunities for small windows in critical sections to be hit.

@subsection Task Affinity

@cindex task affinity
@cindex thread affinity

RTEMS provides services to manipulate the affinity of a task.  Affinity is
used to specify the subset of processors in an SMP system on which a
particular task can execute.

By default, tasks have an affinity which allows them to execute on any
available processor.

Task affinity is a possible feature to be supported by SMP-aware schedulers.
However, only a subset of the available schedulers support affinity.
Although the behavior is scheduler specific, if the scheduler does not
support affinity, it is likely to ignore all attempts to set affinity.

@subsection Task Migration

@cindex task migration
@cindex thread migration

With more than one processor in the system tasks can migrate from one
processor to another.  There are three reasons why tasks migrate in RTEMS.

@itemize @bullet
@item The scheduler changes explicitly via @code{rtems_task_set_scheduler()}
or similar directives.
@item The task resumes execution after a blocking operation.  On a priority
based scheduler it will evict the lowest priority task currently assigned to
a processor in the processor set managed by the scheduler instance.
@item The task moves temporarily to another scheduler instance due to locking
protocols like @cite{Migratory Priority Inheritance} or the
@cite{Multiprocessor Resource Sharing Protocol}.
@end itemize

Task migration should be avoided so that the working set of a task can stay
on the most local cache level.

The current implementation of task migration in RTEMS has some implications
with respect to the interrupt latency.  It is crucial to preserve the system
invariant that a task can execute on at most one processor in the system at a
time.  This is accomplished with a boolean indicator in the task context.
The processor architecture specific low-level task context switch code will
mark that a task context is no longer executing and wait until the heir
context has stopped execution before it restores the heir context and resumes
execution of the heir task.  So there is one point in time in which a
processor is without a task.  This is essential to avoid cyclic dependencies
in case multiple tasks migrate at once.  Otherwise some supervising entity
would be necessary to prevent livelocks.  Such a global supervisor would lead
to scalability problems, so this approach is not used.  Currently the thread
dispatch is performed with interrupts disabled.  So in case the heir task is
currently executing on another processor, this prolongs the time of disabled
interrupts since one processor has to wait for another processor to make
progress.

It is difficult to avoid this issue with the interrupt latency since
interrupts normally store the context of the interrupted task on its stack.
In case a task is marked as not executing, we must not use its task stack to
store such an interrupt context.  We cannot use the heir stack before it
stopped execution on another processor.  So if we enable interrupts during
this transition, we have to provide an alternative task independent stack for
this time frame.  This issue needs further investigation.

@subsection Clustered Scheduling

Clustered scheduling means that the set of processors of a system is
partitioned into non-empty pairwise-disjoint subsets.  These subsets are
called clusters.  Clusters with a cardinality of one are partitions.  Each
cluster is owned by exactly one scheduler instance.

Clustered scheduling helps to control the worst-case latencies in
multi-processor systems, see @cite{Brandenburg, Björn B.: Scheduling and
Locking in Multiprocessor Real-Time Operating Systems. PhD thesis, 2011.
@uref{http://www.cs.unc.edu/~bbb/diss/brandenburg-diss.pdf}}.  The goal is to
reduce the amount of shared state in the system and thus prevent lock
contention.  Modern multi-processor systems tend to have several layers of
data and instruction caches.  With clustered scheduling it is possible to
honour the cache topology of a system and thus avoid expensive cache
synchronization traffic.  Clustered scheduling is easy to implement.  The
hard part is to provide synchronization primitives for inter-cluster
synchronization (where more than one cluster is involved in the
synchronization process).  In RTEMS there are currently four means
available:

@itemize @bullet
@item events,
@item message queues,
@item semaphores using the @ref{Semaphore Manager Priority Inheritance}
protocol (priority boosting), and
@item semaphores using the @ref{Semaphore Manager Multiprocessor Resource
Sharing Protocol} (MrsP).
@end itemize

The clustered scheduling approach enables separation of functions with
real-time requirements and functions that profit from fairness and high
throughput, provided the scheduler instances are fully decoupled and adequate
inter-cluster synchronization primitives are used.  This is work in progress.

For the configuration of clustered schedulers see @ref{Configuring a System
Configuring Clustered Schedulers}.

To set the scheduler of a task see @ref{Symmetric Multiprocessing Services
SCHEDULER_IDENT - Get ID of a scheduler} and @ref{Symmetric Multiprocessing
Services TASK_SET_SCHEDULER - Set scheduler of a task}.

@subsection Task Priority Queues

Due to the support for clustered scheduling the task priority queues need
special attention.  It makes no sense to compare the priority values of two
different scheduler instances.  Thus, it is not possible to simply use one
plain priority queue for tasks of different scheduler instances.

One solution to this problem is to use two levels of queues.  The top level
queue provides FIFO ordering and contains priority queues.  Each priority
queue is associated with a scheduler instance and contains only tasks of this
scheduler instance.  Tasks are enqueued in the priority queue corresponding
to their scheduler instance.  In case this priority queue was empty, then it
is appended to the FIFO.  To dequeue a task the highest priority task of the
first priority queue in the FIFO is selected.  Then the first priority queue
is removed from the FIFO.  In case the previously first priority queue is not
empty, then it is appended to the FIFO again.  So there is FIFO fairness with
respect to the highest priority task of each scheduler instance.  See also
@cite{Brandenburg, Björn B.: A fully preemptive multiprocessor semaphore
protocol for latency-sensitive real-time applications. In Proceedings of the
25th Euromicro Conference on Real-Time Systems (ECRTS 2013), pages 292–302,
2013. @uref{http://www.mpi-sws.org/~bbb/papers/pdf/ecrts13b.pdf}}.

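The two-level queue can be modeled in a few lines of C.  The following
host-runnable sketch uses fixed-size arrays and invented names
(@code{instance_queue}, @code{two_level_queue}) instead of the intrusive
lists used inside RTEMS; it only demonstrates the FIFO fairness between
scheduler instances described above.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model only: one priority queue per scheduler instance,
 * threaded through a top-level FIFO.  Lower value means higher priority. */

#define MAX_TASKS 8

typedef struct {
  int    priority[MAX_TASKS]; /* pending task priorities of this instance */
  size_t count;
  int    in_fifo;             /* nonzero if currently on the top-level FIFO */
} instance_queue;

typedef struct {
  instance_queue *fifo[MAX_TASKS]; /* top-level FIFO of non-empty queues */
  size_t          head, tail, len;
} two_level_queue;

static void fifo_append(two_level_queue *tlq, instance_queue *q)
{
  tlq->fifo[tlq->tail] = q;
  tlq->tail = (tlq->tail + 1) % MAX_TASKS;
  ++tlq->len;
  q->in_fifo = 1;
}

static void enqueue(two_level_queue *tlq, instance_queue *q, int prio)
{
  q->priority[q->count++] = prio;
  if (!q->in_fifo) /* first waiter of this instance: append queue to FIFO */
    fifo_append(tlq, q);
}

static int dequeue(two_level_queue *tlq)
{
  instance_queue *q = tlq->fifo[tlq->head]; /* first queue in FIFO order */
  size_t best = 0;

  for (size_t i = 1; i < q->count; ++i)     /* pick its highest priority task */
    if (q->priority[i] < q->priority[best])
      best = i;

  int prio = q->priority[best];
  q->priority[best] = q->priority[--q->count];

  tlq->head = (tlq->head + 1) % MAX_TASKS;  /* remove queue from the FIFO... */
  --tlq->len;
  q->in_fifo = 0;
  if (q->count > 0)                         /* ...and re-append if non-empty */
    fifo_append(tlq, q);

  return prio;
}
```

With two instances A and B, enqueueing priorities 5 and 6 for A and then 7
for B yields the dequeue order 5, 7, 6: B gets its turn after A's first task
even though A still holds a globally higher priority task.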
Such a two level queue may need a considerable amount of memory if fast
enqueue and dequeue operations are desired (this depends on the scheduler
instance count).  To mitigate this problem an approach of the FreeBSD kernel
was implemented in RTEMS.  We have the invariant that a task can be enqueued
on at most one task queue.  Thus, we need only as many queues as we have
tasks.  Each task is equipped with a spare task queue which it can give to an
object on demand.  The task queue uses a dedicated memory space independent
of the other memory used for the task itself.  In case a task needs to block,
then there are two options:

@itemize @bullet
@item the object already has a task queue; in this case the task enqueues
itself to this already present queue and the spare task queue of the task is
added to a list of free queues for this object, or
@item otherwise the queue of the task is given to the object and the task
enqueues itself to this queue.
@end itemize

In case the task is dequeued, then there are two options:

@itemize @bullet
@item the task is the last task on the queue; in this case it removes this
queue from the object and reclaims it for its own purpose, or
@item otherwise the task removes one queue from the free list of the object
and reclaims it for its own purpose.
@end itemize

Since there are usually more objects than tasks, this actually reduces the
memory demands.  In addition the objects contain only a pointer to the task
queue structure.  This helps to hide implementation details and makes it
possible to use self-contained synchronization objects in Newlib and GCC (C++
and OpenMP run-time support).

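The queue-lending scheme can be sketched as follows.  This is a hypothetical
model with invented names (@code{sync_object}, @code{block_on},
@code{wake_up}), not the RTEMS data structures; it shows the invariant that
every task always leaves a blocking operation owning exactly one queue,
though not necessarily the one it arrived with.

```c
#include <assert.h>
#include <stddef.h>

/* A queue that a task can lend to an object it blocks on. */
typedef struct queue {
  struct queue *next_free;  /* link on the object's free list */
  size_t        waiters;    /* tasks currently enqueued */
} queue;

typedef struct {
  queue *spare;             /* each task starts out owning one queue */
} task;

typedef struct {
  queue *active;            /* queue tasks block on, NULL if none */
  queue *free_list;         /* spares donated by additional waiters */
} sync_object;

static void block_on(sync_object *obj, task *t)
{
  if (obj->active != NULL) {
    /* Object already has a queue: donate the spare to its free list. */
    t->spare->next_free = obj->free_list;
    obj->free_list = t->spare;
  } else {
    /* Lend the task's queue to the object. */
    obj->active = t->spare;
  }
  t->spare = NULL;
  ++obj->active->waiters;
}

static void wake_up(sync_object *obj, task *t)
{
  --obj->active->waiters;
  if (obj->active->waiters == 0) {
    /* Last task on the queue reclaims the active queue. */
    t->spare = obj->active;
    obj->active = NULL;
  } else {
    /* Otherwise reclaim one queue from the object's free list. */
    t->spare = obj->free_list;
    obj->free_list = obj->free_list->next_free;
  }
}
```

After any sequence of block and wake operations, the object holds at most one
active queue plus its free list, and each unblocked task owns a queue again.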
@subsection Scheduler Helping Protocol

The scheduler provides a helping protocol to support locking protocols like
@cite{Migratory Priority Inheritance} or the @cite{Multiprocessor Resource
Sharing Protocol}.  Each ready task can use at least one scheduler node at a
time to gain access to a processor.  Each scheduler node has an owner, a user
and an optional idle task.  The owner of a scheduler node is determined at
task creation and never changes during the life time of a scheduler node.
The user of a scheduler node may change due to the scheduler helping
protocol.  A scheduler node is in one of the four scheduler help states:

@table @dfn

@item help yourself

This scheduler node is solely used by the owner task.  This task owns no
resources using a helping protocol and thus does not take part in the
scheduler helping protocol.  No help will be provided for other tasks.

@item help active owner

This scheduler node is owned by a task actively owning a resource and can be
used to help out tasks.

In case this scheduler node changes its state from ready to scheduled and the
task executes using another node, then an idle task will be provided as a
user of this node to temporarily execute on behalf of the owner task.  Thus
lower priority tasks are denied access to the processors of this scheduler
instance.

In case a task actively owning a resource performs a blocking operation, then
an idle task will be used also in case this node is in the scheduled state.

@item help active rival

This scheduler node is owned by a task actively obtaining a resource
currently owned by another task and can be used to help out tasks.

The task owning this node is ready and will give away its processor in case
the task owning the resource asks for help.

@item help passive

This scheduler node is owned by a task obtaining a resource currently owned
by another task and can be used to help out tasks.

The task owning this node is blocked.

@end table

The following scheduler operations return a task in need for help:

@itemize @bullet
@item unblock,
@item change priority,
@item yield, and
@item ask for help.
@end itemize

A task in need for help is a task that encounters a scheduler state change
from scheduled to ready (this is a pre-emption by a higher priority task) or
a task that cannot be scheduled in an unblock operation.  Such a task can ask
tasks which depend on resources owned by this task for help.

In case it is not possible to schedule a task in need for help, then the
scheduler nodes available for the task will be placed into the set of ready
scheduler nodes of the corresponding scheduler instances.  Once a state
change from ready to scheduled happens for one of these scheduler nodes, it
will be used to schedule the task in need for help.

The ask for help scheduler operation is used to help tasks in need for help
returned by the operations mentioned above.  This operation is also used in
case the root of a resource sub-tree owned by a task changes.

The run-time of the ask for help procedures depends on the size of the
resource tree of the task needing help and other resource trees in case tasks
in need for help are produced during this operation.  Thus the worst-case
latency in the system depends on the maximum resource tree size of the
application.

@subsection Critical Section Techniques and SMP

As discussed earlier, SMP systems have opportunities for true parallelism
which were not possible on uniprocessor systems.  Consequently, multiple
techniques that provided adequate critical sections on uniprocessor systems
are unsafe on SMP systems.  In this section, some of these unsafe techniques
will be discussed.

In general, applications must use proper operating system provided mutual
exclusion mechanisms to ensure correct behavior.  This primarily means the
use of binary semaphores or mutexes to implement critical sections.

@subsubsection Disable Interrupts and Interrupt Locks

A low overhead means to ensure mutual exclusion in uni-processor
configurations is to disable interrupts around a critical section.  This is
commonly used in device driver code and throughout the operating system core.
On SMP configurations, however, disabling the interrupts on one processor has
no effect on other processors.  So, this is insufficient to ensure system
wide mutual exclusion.  The macros
@itemize @bullet
@item @code{rtems_interrupt_disable()},
@item @code{rtems_interrupt_enable()}, and
@item @code{rtems_interrupt_flush()}
@end itemize
are disabled on SMP configurations and their use will lead to compiler
warnings and linker errors.  In the unlikely case that interrupts must be
disabled on the current processor, the
@itemize @bullet
@item @code{rtems_interrupt_local_disable()}, and
@item @code{rtems_interrupt_local_enable()}
@end itemize
macros are available in all configurations.

Since disabling of interrupts is not enough to ensure system wide mutual
exclusion on SMP, a new low-level synchronization primitive was added: the
interrupt locks.  They are a simple API layer on top of the SMP locks used
for low-level synchronization in the operating system core.  Currently they
are implemented as a ticket lock.  On uni-processor configurations they
degenerate to simple interrupt disable/enable sequences.  It is disallowed to
acquire a single interrupt lock in a nested way.  This will result in an
infinite loop with interrupts disabled.  While converting legacy code to
interrupt locks, care must be taken to avoid this situation.

@example
@group
void legacy_code_with_interrupt_disable_enable( void )
@{
  rtems_interrupt_level level;

  rtems_interrupt_disable( level );
  /* Some critical stuff */
  rtems_interrupt_enable( level );
@}

RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" )

void smp_ready_code_with_interrupt_lock( void )
@{
  rtems_interrupt_lock_context lock_context;

  rtems_interrupt_lock_acquire( &lock, &lock_context );
  /* Some critical stuff */
  rtems_interrupt_lock_release( &lock, &lock_context );
@}
@end group
@end example

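The shape of the underlying ticket lock can be sketched in portable C11
atomics.  This is an illustration of the general technique, not the RTEMS
implementation; the names are invented.  Each acquirer draws a ticket and
spins until the "now serving" counter reaches it, which grants the lock in
FIFO order.

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative ticket lock sketch; not the RTEMS SMP lock code. */
typedef struct {
  atomic_uint next_ticket;  /* next ticket to hand out */
  atomic_uint now_serving;  /* ticket currently allowed to enter */
} ticket_lock;

static void ticket_lock_acquire( ticket_lock *lock )
{
  unsigned int my_ticket =
    atomic_fetch_add_explicit( &lock->next_ticket, 1, memory_order_relaxed );

  /* Busy wait until it is our turn; acquire pairs with the release below. */
  while ( atomic_load_explicit( &lock->now_serving, memory_order_acquire )
      != my_ticket ) {
    /* spin */
  }
}

static void ticket_lock_release( ticket_lock *lock )
{
  unsigned int next =
    atomic_load_explicit( &lock->now_serving, memory_order_relaxed ) + 1;

  atomic_store_explicit( &lock->now_serving, next, memory_order_release );
}
```

Note how a nested acquire of the same lock by the same caller draws a second
ticket that can never be served, so it spins forever; this is exactly the
infinite loop with interrupts disabled that the text warns about.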
The @code{rtems_interrupt_lock} structure is empty on uni-processor
configurations.  Empty structures have a different size in C
(implementation-defined, zero in case of GCC) and C++ (implementation-defined
non-zero value, one in case of GCC).  Thus the
@code{RTEMS_INTERRUPT_LOCK_DECLARE()}, @code{RTEMS_INTERRUPT_LOCK_DEFINE()},
@code{RTEMS_INTERRUPT_LOCK_MEMBER()}, and
@code{RTEMS_INTERRUPT_LOCK_REFERENCE()} macros are provided to ensure ABI
compatibility.

@subsubsection Highest Priority Task Assumption

On a uniprocessor system, it is safe to assume that when the highest priority
task in an application executes, it will execute without being preempted
until it voluntarily blocks.  Interrupts may occur while it is executing, but
there will be no context switch to another task unless the highest priority
task voluntarily initiates it.

Given the assumption that no other tasks will have their execution
interleaved with the highest priority task, it is possible for this task to
be constructed such that it does not need to acquire a binary semaphore or
mutex for protected access to shared data.

In an SMP system, it cannot be assumed that only a single task will ever be
executing.  It should be assumed that every processor is executing another
application task.  Further, those tasks will be ones which would not have
been executed in a uniprocessor configuration and should be assumed to have
data synchronization conflicts with what was formerly the highest priority
task which executed without conflict.

@subsubsection Disable Preemption

On a uniprocessor system, disabling preemption in a task is very similar to
making the highest priority task assumption.  While preemption is disabled,
no task context switches will occur unless the task initiates them
voluntarily.  On an SMP system, however, just as with the highest priority
task assumption, there are N-1 other processors also running tasks.  Thus the
assumption that no other tasks will run while the task has preemption
disabled is violated.

@subsection Task Unique Data and SMP

Per task variables are a service commonly provided by real-time operating
systems for application use.  They work by allowing the application to
specify a location in memory (typically a @code{void *}) which is logically
added to the context of a task.  On each task switch, the location in memory
is stored and each task can have a unique value in the same memory location.
This memory location is directly accessed as a variable in a program.

This works well in a uniprocessor environment because there is one task
executing and one memory location containing a task-specific value.  But it
is fundamentally broken on an SMP system because there are always N tasks
executing.  With only one location in memory, N-1 tasks will not have the
correct value.

This paradigm for providing task unique data values is fundamentally broken
on SMP systems.

@subsubsection Classic API Per Task Variables

The Classic API provides three directives to support per task variables.
These are:

@itemize @bullet
@item @code{@value{DIRPREFIX}task_variable_add} - Associate per task variable
@item @code{@value{DIRPREFIX}task_variable_get} - Obtain value of a per task variable
@item @code{@value{DIRPREFIX}task_variable_delete} - Remove per task variable
@end itemize

As task variables are unsafe for use on SMP systems, the use of these
services must be eliminated in all software that is to be used in an SMP
environment.  The task variables API is disabled on SMP.  Its use will lead
to compile-time and link-time errors.  It is recommended that the application
developer consider the use of POSIX Keys or Thread Local Storage (TLS).
POSIX Keys are available in all RTEMS configurations.  For the availability
of TLS on a particular architecture please consult the @cite{RTEMS CPU
Architecture Supplement}.

The only remaining user of task variables in the RTEMS code base is the Ada
support.  So basically Ada is not available on RTEMS SMP.

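To illustrate the recommended alternative, the following host-runnable sketch
uses C11 thread-local storage in place of a per task variable; the variable
and function names are invented for this example, and C11 @code{<threads.h>}
stands in for the task API.  Each thread gets its own copy of the variable,
so concurrent tasks on an SMP system cannot observe each other's value.

```c
#include <assert.h>
#include <threads.h>  /* C11 threads; used here in place of a task API */

/* One independent copy of this variable exists per thread. */
static _Thread_local int per_task_value;

static int worker( void *arg )
{
  per_task_value = *(int *) arg;  /* writes only this thread's copy */
  return per_task_value;
}
```

Running two workers concurrently leaves each thread, including the main
thread, with its own unmodified copy, which is exactly the property the
single shared memory location of a per task variable cannot provide on SMP.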
@subsection Thread Dispatch Details

This section gives background information to developers interested in the
interrupt latencies introduced by thread dispatching.  A thread dispatch
consists of all work which must be done to stop the currently executing
thread on a processor and hand over this processor to an heir thread.

On SMP systems, scheduling decisions on one processor must be propagated to
other processors through inter-processor interrupts.  So, a thread dispatch
which must be carried out on another processor does not happen
instantaneously.  Thus several thread dispatch requests might be in flight at
once and it is possible that some of them may be out of date before the
corresponding processor has time to deal with them.  The thread dispatch
mechanism uses three per-processor variables,
@itemize @bullet
@item the executing thread,
@item the heir thread, and
@item a boolean flag indicating if a thread dispatch is necessary or not.
@end itemize
Updates of the heir thread and the thread dispatch necessary indicator are
synchronized via explicit memory barriers without the use of locks.  A thread
can be an heir thread on at most one processor in the system.  The thread
context is protected by a TTAS lock embedded in the context to ensure that it
is used on at most one processor at a time.  The thread post-switch actions
use a per-processor lock.  This implementation turned out to be quite
efficient and no lock contention was observed in the test suite.

The current implementation of thread dispatching has some implications with
respect to the interrupt latency.  It is crucial to preserve the system
invariant that a thread can execute on at most one processor in the system at
a time.  This is accomplished with a boolean indicator in the thread context.
The processor architecture specific context switch code will mark that a
thread context is no longer executing and wait until the heir context has
stopped execution before it restores the heir context and resumes execution
of the heir thread (the boolean indicator is basically a TTAS lock).  So,
there is one point in time in which a processor is without a thread.  This is
essential to avoid cyclic dependencies in case multiple threads migrate at
once.  Otherwise some supervising entity would be necessary to prevent
deadlocks.  Such a global supervisor would lead to scalability problems, so
this approach is not used.  Currently the context switch is performed with
interrupts disabled.  Thus in case the heir thread is currently executing on
another processor, the time of disabled interrupts is prolonged since one
processor has to wait for another processor to make progress.

It is difficult to avoid this issue with the interrupt latency since
interrupts normally store the context of the interrupted thread on its stack.
In case a thread is marked as not executing, we must not use its thread stack
to store such an interrupt context.  We cannot use the heir stack before it
stopped execution on another processor.  If we enable interrupts during this
transition, then we have to provide an alternative thread independent stack
for interrupts in this time frame.  This issue needs further investigation.

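The hand-over invariant can be sketched in C11 atomics.  This is a minimal
model with invented names, not the architecture specific context switch code;
it shows only how the per-context executing indicator serializes ownership of
a context between processors.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Minimal model: the executing indicator acts like a TTAS lock on the
 * context, so a context is used by at most one processor at a time. */
typedef struct {
  atomic_bool executing;  /* true while some processor uses this context */
} thread_context;

static void context_switch( thread_context *executing_ctx,
                            thread_context *heir_ctx )
{
  /* Mark the old context as no longer executing; from here until the heir
   * is claimed, this processor runs without a thread context. */
  atomic_store_explicit( &executing_ctx->executing, false,
                         memory_order_release );

  /* Wait until the heir context stopped execution on its old processor. */
  while ( atomic_load_explicit( &heir_ctx->executing,
                                memory_order_acquire ) ) {
    /* Spin; in the real system interrupts are disabled here, which is
     * what prolongs the interrupt latency described in the text. */
  }

  /* Claim the heir context for this processor. */
  atomic_store_explicit( &heir_ctx->executing, true, memory_order_relaxed );
}
```

Because each processor releases its old context before claiming the heir, a
chain of simultaneous migrations unwinds by itself; no global supervisor is
needed, at the price of the brief window in which a processor owns no
context.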
The problematic situation occurs in case we have a thread which executes with
thread dispatching disabled and should execute on another processor (e.g. it
is an heir thread on another processor).  In this case the interrupts on this
other processor are disabled until the thread enables thread dispatching and
starts the thread dispatch sequence.  The scheduler (an exception is the
scheduler with thread processor affinity support) tries to avoid such a
situation and checks if a newly scheduled thread already executes on a
processor.  In case the assigned processor differs from the processor on
which the thread already executes and this processor is a member of the
processor set managed by this scheduler instance, it will reassign the
processors to keep the already executing thread in place.  Therefore normal
scheduler requests will not lead to such a situation.  Explicit thread
migration requests, however, can lead to this situation.  Explicit thread
migrations may occur due to the scheduler helping protocol or explicit
scheduler instance changes.  The situation can also be provoked by interrupts
which suspend and resume threads multiple times and produce stale
asynchronous thread dispatch requests in the system.

@c
@c
@c
@section Operations

@subsection Setting Affinity to a Single Processor

On some embedded applications targeting SMP systems, it may be beneficial to
lock individual tasks to specific processors.  In this way, one can designate
a processor for I/O tasks, another for computation, etc.  The following
illustrates the code sequence necessary to assign a task an affinity for the
processor with index @code{processor_index}.

@example
@group
#include <rtems.h>
#include <assert.h>

void pin_to_processor(rtems_id task_id, int processor_index)
@{
  rtems_status_code sc;
  cpu_set_t         cpuset;

  CPU_ZERO(&cpuset);
  CPU_SET(processor_index, &cpuset);

  sc = rtems_task_set_affinity(task_id, sizeof(cpuset), &cpuset);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example

It is important to note that the @code{cpuset} is not validated until the
@code{@value{DIRPREFIX}task_set_affinity} call is made.  At that point, it is
validated against the current system configuration.

@c
@c
@c
@section Directives

This section details the symmetric multiprocessing services.  A subsection
is dedicated to each of these services and describes the calling sequence,
related constants, usage, and status codes.

@c
@c rtems_get_processor_count
@c
@page
@subsection GET_PROCESSOR_COUNT - Get processor count

@subheading CALLING SEQUENCE:

@ifset is-C
@example
uint32_t rtems_get_processor_count(void);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

The count of processors in the system.

@subheading DESCRIPTION:

On uni-processor configurations a value of one will be returned.

On SMP configurations this returns the value of a global variable set during
system initialization to indicate the count of utilized processors.  The
processor count depends on the physically or virtually available processors
and the application configuration.  The value will always be less than or
equal to the maximum count of application configured processors.

@subheading NOTES:

None.

@c
@c rtems_get_current_processor
@c
@page
@subsection GET_CURRENT_PROCESSOR - Get current processor index

@subheading CALLING SEQUENCE:

@ifset is-C
@example
uint32_t rtems_get_current_processor(void);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

The index of the current processor.

@subheading DESCRIPTION:

On uni-processor configurations a value of zero will be returned.

On SMP configurations an architecture specific method is used to obtain the
index of the current processor in the system.  The set of processor indices
is the range of integers starting with zero up to the processor count minus
one.

Outside of sections with disabled thread dispatching the current processor
index may change after every instruction since the thread may migrate from
one processor to another.  Sections with disabled interrupts are sections
with thread dispatching disabled.

@subheading NOTES:

None.

@c
@c rtems_scheduler_ident
@c
@page
@subsection SCHEDULER_IDENT - Get ID of a scheduler

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_scheduler_ident(
  rtems_name  name,
  rtems_id   *id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{id} is NULL@*
@code{@value{RPREFIX}INVALID_NAME} - invalid scheduler name@*
@code{@value{RPREFIX}UNSATISFIED} - a scheduler with this name exists, but
the processor set of this scheduler is empty

@subheading DESCRIPTION:

Identifies a scheduler by its name.  The scheduler name is determined by the
scheduler configuration.  @xref{Configuring a System Configuring Clustered
Schedulers}.

@subheading NOTES:

None.

@c
@c rtems_scheduler_get_processor_set
@c
@page
@subsection SCHEDULER_GET_PROCESSOR_SET - Get processor set of a scheduler

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_scheduler_get_processor_set(
  rtems_id   scheduler_id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid scheduler id@*
@code{@value{RPREFIX}INVALID_NUMBER} - the processor set buffer is too small
for the set of processors owned by the scheduler

@subheading DESCRIPTION:

Returns the processor set owned by the scheduler in @code{cpuset}.  A set bit
in the processor set means that this processor is owned by the scheduler and a
cleared bit means the opposite.

@subheading NOTES:

None.

@c
@c rtems_task_get_scheduler
@c
@page
@subsection TASK_GET_SCHEDULER - Get scheduler of a task

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_get_scheduler(
  rtems_id  task_id,
  rtems_id *scheduler_id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{scheduler_id} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id

@subheading DESCRIPTION:

Returns the scheduler identifier of a task identified by @code{task_id} in
@code{scheduler_id}.

@subheading NOTES:

None.

@c
@c rtems_task_set_scheduler
@c
@page
@subsection TASK_SET_SCHEDULER - Set scheduler of a task

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_set_scheduler(
  rtems_id task_id,
  rtems_id scheduler_id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ID} - invalid task or scheduler id@*
@code{@value{RPREFIX}INCORRECT_STATE} - the task is in the wrong state to
perform a scheduler change

@subheading DESCRIPTION:

Sets the scheduler of a task identified by @code{task_id} to the scheduler
identified by @code{scheduler_id}.  The scheduler of a task is initialized to
the scheduler of the task that created it.

@subheading NOTES:

None.

@subheading EXAMPLE:

@example
@group
#include <rtems.h>
#include <assert.h>

void task(rtems_task_argument arg);

void example(void)
@{
  rtems_status_code sc;
  rtems_id          task_id;
  rtems_id          scheduler_id;
  rtems_name        scheduler_name;

  scheduler_name = rtems_build_name('W', 'O', 'R', 'K');

  sc = rtems_scheduler_ident(scheduler_name, &scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_create(
    rtems_build_name('T', 'A', 'S', 'K'),
    1,
    RTEMS_MINIMUM_STACK_SIZE,
    RTEMS_DEFAULT_MODES,
    RTEMS_DEFAULT_ATTRIBUTES,
    &task_id
  );
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_set_scheduler(task_id, scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_start(task_id, task, 0);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example

@c
@c rtems_task_get_affinity
@c
@page
@subsection TASK_GET_AFFINITY - Get task processor affinity

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_get_affinity(
  rtems_id   id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
@code{@value{RPREFIX}INVALID_NUMBER} - the affinity set buffer is too small for
the current processor affinity set of the task

@subheading DESCRIPTION:

Returns the current processor affinity set of the task in @code{cpuset}.  A set
bit in the affinity set means that the task can execute on this processor and a
cleared bit means the opposite.

@subheading NOTES:

None.

@c
@c rtems_task_set_affinity
@c
@page
@subsection TASK_SET_AFFINITY - Set task processor affinity

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_set_affinity(
  rtems_id         id,
  size_t           cpusetsize,
  const cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
@code{@value{RPREFIX}INVALID_NUMBER} - invalid processor affinity set

@subheading DESCRIPTION:

Sets the processor affinity set of the task to the set specified by
@code{cpuset}.  A set bit in the affinity set means that the task can execute
on this processor and a cleared bit means the opposite.

@subheading NOTES:

This function will not change the scheduler of the task.  The intersection of
the processor affinity set and the set of processors owned by the scheduler of
the task must be non-empty.  It is not an error if the processor affinity set
contains processors that are not part of the set of processors owned by the
scheduler instance of the task.  A task will simply not run under normal
circumstances on these processors since the scheduler ignores them.  Some
locking protocols may temporarily use processors that are not included in the
processor affinity set of the task.  It is also not an error if the processor
affinity set contains processors that are not part of the system.