source: rtems/doc/user/smp.t @ 1bdf578e

@c
@c  COPYRIGHT (c) 2014.
@c  On-Line Applications Research Corporation (OAR).
@c  All rights reserved.
@c

@chapter Symmetric Multiprocessing Services

@section Introduction

The Symmetric Multiprocessing (SMP) support of RTEMS @value{VERSION} is
available on

@itemize @bullet
@item ARM,
@item PowerPC, and
@item SPARC.
@end itemize

It must be explicitly enabled via the @code{--enable-experimental-smp}
configure command line option.  To enable SMP in the application configuration
see @ref{Configuring a System Enable SMP Support for Applications}.  The
default scheduler for SMP applications supports up to 32 processors and is a
global fixed priority scheduler, see also @ref{Configuring a System Configuring
Clustered/Partitioned Schedulers}.  For example applications see
@file{testsuites/smptests}.

@strong{WARNING: The SMP support in RTEMS 4.11 is highly experimental and
incomplete.  Due to the use of the Giant lock and other implementation
shortcomings it is unsuitable for systems that need a predictable timing
behaviour.  Some issues are already fixed in the development version of RTEMS
4.12.  There are no plans to fix them in RTEMS 4.11.  Before you start using
this RTEMS version for SMP, ask on the RTEMS mailing list.  The SMP support in
4.11 is good enough for simple demonstration purposes.  It is a work in
progress and a future version of RTEMS will provide proper SMP support.}

This chapter describes the services related to Symmetric Multiprocessing
provided by RTEMS.

The application level services currently provided are:

@itemize @bullet
@item @code{rtems_get_processor_count} - Get processor count
@item @code{rtems_get_current_processor} - Get current processor index
@item @code{rtems_scheduler_ident} - Get ID of a scheduler
@item @code{rtems_scheduler_get_processor_set} - Get processor set of a scheduler
@item @code{rtems_task_get_scheduler} - Get scheduler of a task
@item @code{rtems_task_set_scheduler} - Set scheduler of a task
@item @code{rtems_task_get_affinity} - Get task processor affinity
@item @code{rtems_task_set_affinity} - Set task processor affinity
@end itemize

@c
@c
@c
@section Background

@subsection Uniprocessor versus SMP Parallelism

Uniprocessor systems have long been used in embedded systems. In this hardware
model, there are some system execution characteristics which have long been
taken for granted:

@itemize @bullet
@item one task executes at a time
@item hardware events result in interrupts
@end itemize

There is no true parallelism. Even when interrupts appear to occur
at the same time, they are processed largely in a serial fashion.
This is true even when the interrupt service routines are allowed to
nest.  From a tasking viewpoint, it is the responsibility of the real-time
operating system to simulate parallelism by switching between tasks.
These task switches occur in response to hardware interrupt events and explicit
application events such as blocking for a resource or delaying.

With symmetric multiprocessing, the presence of multiple processors
allows for true concurrency and provides for cost-effective performance
improvements. Uniprocessors tend to increase performance by increasing
clock speed and complexity. This tends to lead to hot, power hungry
microprocessors which are poorly suited for many embedded applications.

The true concurrency is in sharp contrast to the single task and
interrupt model of uniprocessor systems. This results in a fundamental
change to the uniprocessor system characteristics listed above. Developers
are faced with a different set of characteristics which, in turn, break
some existing assumptions and result in new challenges. In an SMP system
with N processors, these are the new execution characteristics.

@itemize @bullet
@item N tasks execute in parallel
@item hardware events result in interrupts
@end itemize

There is true parallelism with a task executing on each processor and
the possibility of interrupts occurring on each processor. Thus in contrast
to there being one task and one interrupt to consider on a uniprocessor,
there are N tasks and potentially N simultaneous interrupts to consider
on an SMP system.

This increase in hardware complexity and presence of true parallelism
results in the application developer needing to be even more cautious
about mutual exclusion and shared data access than in a uniprocessor
embedded system. Race conditions that never or rarely happened when an
application executed on a uniprocessor system become much more likely
due to multiple threads executing in parallel. On a uniprocessor system,
these race conditions would only happen when a task switch occurred at
just the wrong moment. Now there are N-1 other tasks executing in parallel
all the time and this results in many more opportunities for small
windows in critical sections to be hit.

@subsection Task Affinity

@cindex task affinity
@cindex thread affinity

RTEMS provides services to manipulate the affinity of a task. Affinity
is used to specify the subset of processors in an SMP system on which
a particular task can execute.

By default, tasks have an affinity which allows them to execute on any
available processor.

Task affinity is an optional feature of SMP-aware schedulers; only a
subset of the available schedulers support it. Although the behavior is
scheduler specific, if the scheduler does not support affinity, it is
likely to ignore all attempts to set affinity.
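
For illustration, a task may inspect its own affinity set.  The following
sketch assumes that @code{RTEMS_SELF} may be used to refer to the calling task;
by default every available processor will be set in the returned set.

@example
@group
#include <rtems.h>
#include <assert.h>

void print_own_affinity_size(void)
@{
  rtems_status_code sc;
  cpu_set_t         cpuset;

  CPU_ZERO(&cpuset);

  sc = rtems_task_get_affinity(RTEMS_SELF, sizeof(cpuset), &cpuset);
  assert(sc == RTEMS_SUCCESSFUL);

  /* By default, every available processor is set in cpuset */
@}
@end group
@end example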

@subsection Task Migration

@cindex task migration
@cindex thread migration

With more than one processor in the system, tasks can migrate from one
processor to another.  There are three reasons why tasks migrate in RTEMS.

@itemize @bullet
@item The scheduler changes explicitly via @code{rtems_task_set_scheduler()} or
similar directives.
@item The task resumes execution after a blocking operation.  On a priority
based scheduler it will evict the lowest priority task currently assigned to a
processor in the processor set managed by the scheduler instance.
@item The task moves temporarily to another scheduler instance due to locking
protocols like @cite{Migratory Priority Inheritance} or the
@cite{Multiprocessor Resource Sharing Protocol}.
@end itemize

Task migration should be avoided so that the working set of a task can stay on
the most local cache level.

The current implementation of task migration in RTEMS has some implications
with respect to the interrupt latency.  It is crucial to preserve the system
invariant that a task can execute on at most one processor in the system at a
time.  This is accomplished with a boolean indicator in the task context.  The
processor architecture specific low-level task context switch code will mark
that a task context is no longer executing and wait until the heir context has
stopped execution before it restores the heir context and resumes execution of
the heir task.  So there is one point in time in which a processor is without a
task.  This is essential to avoid cyclic dependencies in case multiple tasks
migrate at once.  Otherwise some supervising entity is necessary to prevent
livelocks.  Such a global supervisor would lead to scalability problems, so
this approach is not used.  Currently the thread dispatch is performed with
interrupts disabled.  So in case the heir task is currently executing on
another processor, this prolongs the time of disabled interrupts, since one
processor has to wait for another processor to make progress.

It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted task on its stack.  In case a
task is marked as not executing, we must not use its task stack to store such
an interrupt context.  We cannot use the heir stack before it stopped execution
on another processor.  So if we enable interrupts during this transition, we
have to provide an alternative task independent stack for this time frame.
This issue needs further investigation.

@subsection Scheduler Helping Protocol

The scheduler provides a helping protocol to support locking protocols like
@cite{Migratory Priority Inheritance} or the @cite{Multiprocessor Resource
Sharing Protocol}.  Each ready task can use at least one scheduler node at a
time to gain access to a processor.  Each scheduler node has an owner, a user
and an optional idle task.  The owner of a scheduler node is determined at task
creation and never changes during the lifetime of a scheduler node.  The user
of a scheduler node may change due to the scheduler helping protocol.  A
scheduler node is in one of the four scheduler help states:

@table @dfn

@item help yourself

This scheduler node is solely used by the owner task.  This task owns no
resources using a helping protocol and thus does not take part in the scheduler
helping protocol.  No help will be provided for other tasks.

@item help active owner

This scheduler node is owned by a task actively owning a resource and can be
used to help out tasks.

In case this scheduler node changes its state from ready to scheduled and the
task executes using another node, then an idle task will be provided as a user
of this node to temporarily execute on behalf of the owner task.  Thus lower
priority tasks are denied access to the processors of this scheduler instance.

In case a task actively owning a resource performs a blocking operation, then
an idle task will be used also in case this node is in the scheduled state.

@item help active rival

This scheduler node is owned by a task actively obtaining a resource currently
owned by another task and can be used to help out tasks.

The task owning this node is ready and will give away its processor in case the
task owning the resource asks for help.

@item help passive

This scheduler node is owned by a task obtaining a resource currently owned by
another task and can be used to help out tasks.

The task owning this node is blocked.

@end table

The following scheduler operations return a task in need of help:

@itemize @bullet
@item unblock,
@item change priority,
@item yield, and
@item ask for help.
@end itemize

A task in need of help is a task that encounters a scheduler state change from
scheduled to ready (this is a pre-emption by a higher priority task) or a task
that cannot be scheduled in an unblock operation.  Such a task can ask tasks
which depend on resources owned by this task for help.

In case it is not possible to schedule a task in need of help, then the
scheduler nodes available for the task will be placed into the set of ready
scheduler nodes of the corresponding scheduler instances.  Once a state change
from ready to scheduled happens for one of these scheduler nodes, it will be
used to schedule the task in need of help.

The ask for help scheduler operation is used to help tasks in need of help
returned by the operations mentioned above.  This operation is also used in
case the root of a resource sub-tree owned by a task changes.

The run-time of the ask for help procedures depends on the size of the resource
tree of the task needing help and other resource trees in case tasks in need
of help are produced during this operation.  Thus the worst-case latency in
the system depends on the maximum resource tree size of the application.

@subsection Critical Section Techniques and SMP

As discussed earlier, SMP systems have opportunities for true parallelism
which was not possible on uniprocessor systems. Consequently, multiple
techniques that provided adequate critical sections on uniprocessor
systems are unsafe on SMP systems. In this section, some of these
unsafe techniques will be discussed.

In general, applications must use proper operating system provided mutual
exclusion mechanisms to ensure correct behavior. This primarily means
the use of binary semaphores or mutexes to implement critical sections.

@subsubsection Disable Interrupts and Interrupt Locks

A low overhead means to ensure mutual exclusion in uni-processor configurations
is to disable interrupts around a critical section.  This is commonly used in
device driver code and throughout the operating system core.  On SMP
configurations, however, disabling the interrupts on one processor has no
effect on other processors.  So, this is insufficient to ensure system wide
mutual exclusion.  The macros
@itemize @bullet
@item @code{rtems_interrupt_disable()},
@item @code{rtems_interrupt_enable()}, and
@item @code{rtems_interrupt_flush()}
@end itemize
are disabled on SMP configurations and their use will lead to compiler warnings
and linker errors.  In the unlikely case that interrupts must be disabled on
the current processor, then the
@itemize @bullet
@item @code{rtems_interrupt_local_disable()}, and
@item @code{rtems_interrupt_local_enable()}
@end itemize
macros are now available in all configurations.
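
As a sketch, the local variants follow the same usage pattern as the former
@code{rtems_interrupt_disable()} and @code{rtems_interrupt_enable()} macros,
but only affect interrupts on the current processor.

@example
@group
void local_critical_section( void )
@{
  rtems_interrupt_level level;

  rtems_interrupt_local_disable( level );
  /* Protected only against interrupts on the current processor */
  rtems_interrupt_local_enable( level );
@}
@end group
@end example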

Since disabling of interrupts is not enough to ensure system wide mutual
exclusion on SMP, a new low-level synchronization primitive was added: the
interrupt locks.  They are a simple API layer on top of the SMP locks used for
low-level synchronization in the operating system core.  Currently they are
implemented as a ticket lock.  On uni-processor configurations they degenerate
to simple interrupt disable/enable sequences.  It is disallowed to acquire a
single interrupt lock in a nested way.  This will result in an infinite loop
with interrupts disabled.  While converting legacy code to interrupt locks,
care must be taken to avoid this situation.

@example
@group
void legacy_code_with_interrupt_disable_enable( void )
@{
  rtems_interrupt_level level;

  rtems_interrupt_disable( level );
  /* Some critical stuff */
  rtems_interrupt_enable( level );
@}

RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" )

void smp_ready_code_with_interrupt_lock( void )
@{
  rtems_interrupt_lock_context lock_context;

  rtems_interrupt_lock_acquire( &lock, &lock_context );
  /* Some critical stuff */
  rtems_interrupt_lock_release( &lock, &lock_context );
@}
@end group
@end example

The @code{rtems_interrupt_lock} structure is empty on uni-processor
configurations.  Empty structures have a different size in C
(implementation-defined, zero in case of GCC) and C++ (implementation-defined
non-zero value, one in case of GCC).  Thus the
@code{RTEMS_INTERRUPT_LOCK_DECLARE()}, @code{RTEMS_INTERRUPT_LOCK_DEFINE()},
@code{RTEMS_INTERRUPT_LOCK_MEMBER()}, and
@code{RTEMS_INTERRUPT_LOCK_REFERENCE()} macros are provided to ensure ABI
compatibility.
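
For example, a lock embedded in a structure may be declared with
@code{RTEMS_INTERRUPT_LOCK_MEMBER()}.  The sketch below assumes that such a
lock is initialized at run time with @code{rtems_interrupt_lock_initialize()};
the structure and function names are only illustrative.

@example
@group
typedef struct @{
  int counter;
  RTEMS_INTERRUPT_LOCK_MEMBER( lock )
@} my_object;

void my_object_initialize( my_object *obj )
@{
  obj->counter = 0;
  rtems_interrupt_lock_initialize( &obj->lock, "My Object" );
@}

void my_object_increment( my_object *obj )
@{
  rtems_interrupt_lock_context lock_context;

  rtems_interrupt_lock_acquire( &obj->lock, &lock_context );
  ++obj->counter;
  rtems_interrupt_lock_release( &obj->lock, &lock_context );
@}
@end group
@end example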

@subsubsection Highest Priority Task Assumption

On a uniprocessor system, it is safe to assume that when the highest
priority task in an application executes, it will execute without being
preempted until it voluntarily blocks. Interrupts may occur while it is
executing, but there will be no context switch to another task unless
the highest priority task voluntarily initiates it.

Given the assumption that no other tasks will have their execution
interleaved with the highest priority task, it is possible for this
task to be constructed such that it does not need to acquire a binary
semaphore or mutex for protected access to shared data.

In an SMP system, it cannot be assumed that only a single task will be
executing. It should be assumed that every processor is executing another
application task. Further, those tasks will be ones which would not have
been executed in a uniprocessor configuration and should be assumed to
have data synchronization conflicts with what was formerly the highest
priority task which executed without conflict.

@subsubsection Disable Preemption

On a uniprocessor system, disabling preemption in a task is very similar
to making the highest priority task assumption. While preemption is
disabled, no task context switches will occur unless the task initiates
them voluntarily. And, just as with the highest priority task assumption,
there are N-1 processors also running tasks. Thus the assumption that no
other tasks will run while the task has preemption disabled is violated.

@subsection Task Unique Data and SMP

Per task variables are a service commonly provided by real-time operating
systems for application use. They work by allowing the application
to specify a location in memory (typically a @code{void *}) which is
logically added to the context of a task. On each task switch, the
location in memory is stored and each task can have a unique value in
the same memory location. This memory location is directly accessed as a
variable in a program.

This works well in a uniprocessor environment because there is one task
executing and one memory location containing a task-specific value. But
it is fundamentally broken on an SMP system because there are always N
tasks executing. With only one location in memory, N-1 tasks will not
have the correct value.

This paradigm for providing task unique data values is fundamentally
broken on SMP systems.

@subsubsection Classic API Per Task Variables

The Classic API provides three directives to support per task variables. These are:

@itemize @bullet
@item @code{@value{DIRPREFIX}task_variable_add} - Associate per task variable
@item @code{@value{DIRPREFIX}task_variable_get} - Obtain value of a per task variable
@item @code{@value{DIRPREFIX}task_variable_delete} - Remove per task variable
@end itemize

As task variables are unsafe for use on SMP systems, the use of these services
must be eliminated in all software that is to be used in an SMP environment.
The task variables API is disabled on SMP. Its use will lead to compile-time
and link-time errors. It is recommended that the application developer consider
the use of POSIX Keys or Thread Local Storage (TLS). POSIX Keys are available
in all RTEMS configurations.  For the availability of TLS on a particular
architecture please consult the @cite{RTEMS CPU Architecture Supplement}.
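
As a sketch of the recommended alternatives, a per task value can be kept
either in a thread-local variable or under a POSIX key.  The @code{__thread}
storage class below is a GCC extension and is only usable where the
architecture supports TLS; the POSIX key calls are available in all
configurations.  The names are only illustrative.

@example
@group
#include <assert.h>
#include <pthread.h>

/* Thread Local Storage: each task gets its own copy of this variable */
static __thread void *tls_task_data;

/* POSIX key: per task values managed via the key */
static pthread_key_t task_data_key;

void task_data_initialize( void )
@{
  int eno = pthread_key_create( &task_data_key, NULL );
  assert( eno == 0 );
@}

void task_data_set( void *value )
@{
  tls_task_data = value;
  pthread_setspecific( task_data_key, value );
@}

void *task_data_get_via_key( void )
@{
  return pthread_getspecific( task_data_key );
@}
@end group
@end example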

The only remaining user of task variables in the RTEMS code base is the Ada
support.  So basically Ada is not available on RTEMS SMP.

@subsection Thread Dispatch Details

This section gives background information to developers interested in the
interrupt latencies introduced by thread dispatching.  A thread dispatch
consists of all work which must be done to stop the currently executing thread
on a processor and hand over this processor to an heir thread.

On SMP systems, scheduling decisions on one processor must be propagated to
other processors through inter-processor interrupts.  So, a thread dispatch
which must be carried out on another processor does not happen instantaneously.
Thus several thread dispatch requests might be in flight at the same time and
it is possible that some of them may be out of date before the corresponding
processor has time to deal with them.  The thread dispatch mechanism uses three
per-processor variables,
@itemize @bullet
@item the executing thread,
@item the heir thread, and
@item a boolean flag indicating if a thread dispatch is necessary or not.
@end itemize
Updates of the heir thread and the thread dispatch necessary indicator are
synchronized via explicit memory barriers without the use of locks.  A thread
can be an heir thread on at most one processor in the system.  The thread
context is protected by a TTAS lock embedded in the context to ensure that it
is used on at most one processor at a time.  The thread post-switch actions use
a per-processor lock.  This implementation turned out to be quite efficient and
no lock contention was observed in the test suite.

The current implementation of thread dispatching has some implications with
respect to the interrupt latency.  It is crucial to preserve the system
invariant that a thread can execute on at most one processor in the system at a
time.  This is accomplished with a boolean indicator in the thread context.
The processor architecture specific context switch code will mark that a thread
context is no longer executing and wait until the heir context has stopped
execution before it restores the heir context and resumes execution of the heir
thread (the boolean indicator is basically a TTAS lock).  So, there is one
point in time in which a processor is without a thread.  This is essential to
avoid cyclic dependencies in case multiple threads migrate at once.  Otherwise
some supervising entity is necessary to prevent deadlocks.  Such a global
supervisor would lead to scalability problems so this approach is not used.
Currently the context switch is performed with interrupts disabled.  Thus in
case the heir thread is currently executing on another processor, the time of
disabled interrupts is prolonged since one processor has to wait for another
processor to make progress.

It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted thread on its stack.  In case a
thread is marked as not executing, we must not use its thread stack to store
such an interrupt context.  We cannot use the heir stack before it stopped
execution on another processor.  If we enable interrupts during this
transition, then we have to provide an alternative thread independent stack for
interrupts in this time frame.  This issue needs further investigation.

The problematic situation occurs in case we have a thread which executes with
thread dispatching disabled and should execute on another processor (e.g. it is
an heir thread on another processor).  In this case the interrupts on this
other processor are disabled until the thread enables thread dispatching and
starts the thread dispatch sequence.  The scheduler (an exception is the
scheduler with thread processor affinity support) tries to avoid such a
situation and checks if a newly scheduled thread already executes on a
processor.  In case the assigned processor differs from the processor on which
the thread already executes and this processor is a member of the processor set
managed by this scheduler instance, it will reassign the processors to keep the
already executing thread in place.  Therefore normal scheduler requests will
not lead to such a situation.  Explicit thread migration requests, however, can
lead to this situation.  Explicit thread migrations may occur due to the
scheduler helping protocol or explicit scheduler instance changes.  The
situation can also be provoked by interrupts which suspend and resume threads
multiple times and produce stale asynchronous thread dispatch requests in the
system.

@c
@c
@c
@section Operations

@subsection Setting Affinity to a Single Processor

On some embedded applications targeting SMP systems, it may be beneficial to
lock individual tasks to specific processors.  In this way, one can designate a
processor for I/O tasks, another for computation, etc.  The following
illustrates the code sequence necessary to assign a task an affinity for the
processor with index @code{processor_index}.

@example
@group
#include <rtems.h>
#include <assert.h>

void pin_to_processor(rtems_id task_id, int processor_index)
@{
  rtems_status_code sc;
  cpu_set_t         cpuset;

  CPU_ZERO(&cpuset);
  CPU_SET(processor_index, &cpuset);

  sc = rtems_task_set_affinity(task_id, sizeof(cpuset), &cpuset);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example

It is important to note that the @code{cpuset} is not validated until the
@code{@value{DIRPREFIX}task_set_affinity} call is made. At that point,
it is validated against the current system configuration.

@c
@c
@c
@section Directives

This section details the symmetric multiprocessing services.  A subsection
is dedicated to each of these services and describes the calling sequence,
related constants, usage, and status codes.

@c
@c rtems_get_processor_count
@c
@page
@subsection GET_PROCESSOR_COUNT - Get processor count

@subheading CALLING SEQUENCE:

@ifset is-C
@example
uint32_t rtems_get_processor_count(void);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

The count of processors in the system.

@subheading DESCRIPTION:

On uni-processor configurations a value of one will be returned.

On SMP configurations this returns the value of a global variable set during
system initialization to indicate the count of utilized processors.  The
processor count depends on the physically or virtually available processors and
application configuration.  The value will always be less than or equal to the
maximum count of application configured processors.

@subheading NOTES:

None.
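
@subheading EXAMPLE:

A minimal sketch which iterates over all processor indices, for example to
print per-processor information; it assumes the kernel print routine
@code{printk()} is used for output.

@example
@group
#include <rtems.h>
#include <rtems/bspIo.h>

void print_processor_indices(void)
@{
  uint32_t processor_count = rtems_get_processor_count();
  uint32_t processor_index;

  for (
    processor_index = 0;
    processor_index < processor_count;
    ++processor_index
  ) @{
    printk("processor %u\n", (unsigned) processor_index);
  @}
@}
@end group
@end example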

@c
@c rtems_get_current_processor
@c
@page
@subsection GET_CURRENT_PROCESSOR - Get current processor index

@subheading CALLING SEQUENCE:

@ifset is-C
@example
uint32_t rtems_get_current_processor(void);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

The index of the current processor.

@subheading DESCRIPTION:

On uni-processor configurations a value of zero will be returned.

On SMP configurations an architecture specific method is used to obtain the
index of the current processor in the system.  The set of processor indices is
the range of integers starting with zero up to the processor count minus one.

Outside of sections with disabled thread dispatching the current processor
index may change after every instruction since the thread may migrate from one
processor to another.  Sections with disabled interrupts are sections with
thread dispatching disabled.

@subheading NOTES:

None.

@c
@c rtems_scheduler_ident
@c
@page
@subsection SCHEDULER_IDENT - Get ID of a scheduler

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_scheduler_ident(
  rtems_name  name,
  rtems_id   *id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{id} is NULL@*
@code{@value{RPREFIX}INVALID_NAME} - invalid scheduler name@*
@code{@value{RPREFIX}UNSATISFIED} - a scheduler with this name exists, but
the processor set of this scheduler is empty

@subheading DESCRIPTION:

Identifies a scheduler by its name.  The scheduler name is determined by the
scheduler configuration.  @xref{Configuring a System Configuring
Clustered/Partitioned Schedulers}.

@subheading NOTES:

None.

@c
@c rtems_scheduler_get_processor_set
@c
@page
@subsection SCHEDULER_GET_PROCESSOR_SET - Get processor set of a scheduler

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_scheduler_get_processor_set(
  rtems_id   scheduler_id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid scheduler id@*
@code{@value{RPREFIX}INVALID_NUMBER} - the affinity set buffer is too small for
the set of processors owned by the scheduler

@subheading DESCRIPTION:

Returns the processor set owned by the scheduler in @code{cpuset}.  A set bit
in the processor set means that this processor is owned by the scheduler and a
cleared bit means the opposite.

@subheading NOTES:

None.
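
@subheading EXAMPLE:

A minimal sketch which counts the processors owned by a scheduler; it assumes
the usual @code{CPU_ISSET()} macro is available alongside the @code{CPU_ZERO()}
and @code{CPU_SET()} macros used elsewhere in this chapter.

@example
@group
#include <rtems.h>
#include <assert.h>

uint32_t processor_count_of_scheduler(rtems_id scheduler_id)
@{
  rtems_status_code sc;
  cpu_set_t         cpuset;
  uint32_t          count = 0;
  uint32_t          index;

  CPU_ZERO(&cpuset);

  sc = rtems_scheduler_get_processor_set(
    scheduler_id,
    sizeof(cpuset),
    &cpuset
  );
  assert(sc == RTEMS_SUCCESSFUL);

  for (index = 0; index < rtems_get_processor_count(); ++index) @{
    if (CPU_ISSET((int) index, &cpuset)) @{
      ++count;
    @}
  @}

  return count;
@}
@end group
@end example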

@c
@c rtems_task_get_scheduler
@c
@page
@subsection TASK_GET_SCHEDULER - Get scheduler of a task

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_get_scheduler(
  rtems_id  task_id,
  rtems_id *scheduler_id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{scheduler_id} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id

@subheading DESCRIPTION:

Returns the scheduler identifier of a task identified by @code{task_id} in
@code{scheduler_id}.

@subheading NOTES:

None.
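
@subheading EXAMPLE:

A minimal sketch which obtains the scheduler identifier of the calling task,
assuming that @code{RTEMS_SELF} may be used to refer to the calling task.

@example
@group
#include <rtems.h>
#include <assert.h>

rtems_id own_scheduler_id(void)
@{
  rtems_status_code sc;
  rtems_id          scheduler_id;

  sc = rtems_task_get_scheduler(RTEMS_SELF, &scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  return scheduler_id;
@}
@end group
@end example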

@c
@c rtems_task_set_scheduler
@c
@page
@subsection TASK_SET_SCHEDULER - Set scheduler of a task

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_set_scheduler(
  rtems_id task_id,
  rtems_id scheduler_id
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ID} - invalid task or scheduler id@*
@code{@value{RPREFIX}INCORRECT_STATE} - the task is in the wrong state to
perform a scheduler change

@subheading DESCRIPTION:

Sets the scheduler of a task identified by @code{task_id} to the scheduler
identified by @code{scheduler_id}.  The scheduler of a task is initialized to
the scheduler of the task that created it.

@subheading NOTES:

None.

@subheading EXAMPLE:

@example
@group
#include <rtems.h>
#include <assert.h>

void task(rtems_task_argument arg);

void example(void)
@{
  rtems_status_code sc;
  rtems_id          task_id;
  rtems_id          scheduler_id;
  rtems_name        scheduler_name;

  scheduler_name = rtems_build_name('W', 'O', 'R', 'K');

  sc = rtems_scheduler_ident(scheduler_name, &scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_create(
    rtems_build_name('T', 'A', 'S', 'K'),
    1,
    RTEMS_MINIMUM_STACK_SIZE,
    RTEMS_DEFAULT_MODES,
    RTEMS_DEFAULT_ATTRIBUTES,
    &task_id
  );
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_set_scheduler(task_id, scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_start(task_id, task, 0);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example

@c
@c rtems_task_get_affinity
@c
@page
@subsection TASK_GET_AFFINITY - Get task processor affinity

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_get_affinity(
  rtems_id   id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
@code{@value{RPREFIX}INVALID_NUMBER} - the affinity set buffer is too small for
the current processor affinity set of the task

@subheading DESCRIPTION:

Returns the current processor affinity set of the task in @code{cpuset}.  A set
bit in the affinity set means that the task can execute on this processor and a
cleared bit means the opposite.

@subheading NOTES:

None.
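
@subheading EXAMPLE:

A minimal sketch which checks whether a task may execute on a particular
processor; it assumes the usual @code{CPU_ISSET()} macro is available as the
counterpart of the @code{CPU_SET()} macro used elsewhere in this chapter.

@example
@group
#include <rtems.h>
#include <assert.h>
#include <stdbool.h>

bool task_can_execute_on(rtems_id task_id, int processor_index)
@{
  rtems_status_code sc;
  cpu_set_t         cpuset;

  CPU_ZERO(&cpuset);

  sc = rtems_task_get_affinity(task_id, sizeof(cpuset), &cpuset);
  assert(sc == RTEMS_SUCCESSFUL);

  return CPU_ISSET(processor_index, &cpuset);
@}
@end group
@end example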

@c
@c rtems_task_set_affinity
@c
@page
@subsection TASK_SET_AFFINITY - Set task processor affinity

@subheading CALLING SEQUENCE:

@ifset is-C
@example
rtems_status_code rtems_task_set_affinity(
  rtems_id         id,
  size_t           cpusetsize,
  const cpu_set_t *cpuset
);
@end example
@end ifset

@ifset is-Ada
@end ifset

@subheading DIRECTIVE STATUS CODES:

@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
@code{@value{RPREFIX}INVALID_NUMBER} - invalid processor affinity set

@subheading DESCRIPTION:

Sets the processor affinity set of the task to the set specified by
@code{cpuset}.  A set bit in the affinity set means that the task can execute
on this processor and a cleared bit means the opposite.

@subheading NOTES:

This directive will not change the scheduler of the task.  The intersection of
the processor affinity set and the set of processors owned by the scheduler of
the task must be non-empty.  It is not an error if the processor affinity set
contains processors that are not part of the set of processors owned by the
scheduler instance of the task.  A task will simply not run under normal
circumstances on these processors since the scheduler ignores them.  Some
locking protocols may temporarily use processors that are not included in the
processor affinity set of the task.  It is also not an error if the processor
affinity set contains processors that are not part of the system.