source: rtems/doc/user/smp.t @ 9154c3f9

1@c
2@c  COPYRIGHT (c) 2014.
3@c  On-Line Applications Research Corporation (OAR).
4@c  All rights reserved.
5@c
6
7@chapter Symmetric Multiprocessing Services
8
9@section Introduction
10
11This chapter describes the services related to Symmetric Multiprocessing
12provided by RTEMS.
13
14The application level services currently provided are:
15
16@itemize @bullet
17@item @code{rtems_get_processor_count} - Get processor count
18@item @code{rtems_get_current_processor} - Get current processor index
19@item @code{rtems_scheduler_ident} - Get ID of a scheduler
20@item @code{rtems_scheduler_get_processor_set} - Get processor set of a scheduler
21@item @code{rtems_task_get_scheduler} - Get scheduler of a task
22@item @code{rtems_task_set_scheduler} - Set scheduler of a task
23@item @code{rtems_task_get_affinity} - Get task processor affinity
24@item @code{rtems_task_set_affinity} - Set task processor affinity
25@end itemize
26
27@c
28@c
29@c
30@section Background
31
32@subsection Uniprocessor versus SMP Parallelism
33
34Uniprocessor systems have long been used in embedded systems. In this hardware
35model, there are some system execution characteristics which have long been
36taken for granted:
37
38@itemize @bullet
39@item one task executes at a time
40@item hardware events result in interrupts
41@end itemize
42
43There is no true parallelism. Even when interrupts appear to occur
44at the same time, they are processed in largely a serial fashion.
This is true even when the interrupt service routines are allowed to
nest.  From a tasking viewpoint, it is the responsibility of the real-time
operating system to simulate parallelism by switching between tasks.
48These task switches occur in response to hardware interrupt events and explicit
49application events such as blocking for a resource or delaying.
50
51With symmetric multiprocessing, the presence of multiple processors
52allows for true concurrency and provides for cost-effective performance
53improvements. Uniprocessors tend to increase performance by increasing
54clock speed and complexity. This tends to lead to hot, power hungry
55microprocessors which are poorly suited for many embedded applications.
56
57The true concurrency is in sharp contrast to the single task and
58interrupt model of uniprocessor systems. This results in a fundamental
59change to uniprocessor system characteristics listed above. Developers
60are faced with a different set of characteristics which, in turn, break
61some existing assumptions and result in new challenges. In an SMP system
62with N processors, these are the new execution characteristics.
63
64@itemize @bullet
65@item N tasks execute in parallel
66@item hardware events result in interrupts
67@end itemize
68
69There is true parallelism with a task executing on each processor and
the possibility of interrupts occurring on each processor.  Thus, in contrast
to there being one task and one interrupt to consider on a uniprocessor,
72there are N tasks and potentially N simultaneous interrupts to consider
73on an SMP system.
74
75This increase in hardware complexity and presence of true parallelism
76results in the application developer needing to be even more cautious
77about mutual exclusion and shared data access than in a uniprocessor
embedded system.  Race conditions that never or rarely happened when an
application executed on a uniprocessor system become much more likely
due to multiple threads executing in parallel.  On a uniprocessor system,
these race conditions would only happen when a task switch occurred at
just the wrong moment.  Now there are N-1 other tasks executing in parallel
all the time, which results in many more opportunities for small
windows in critical sections to be hit.
85
86@subsection Task Affinity
87
88@cindex task affinity
89@cindex thread affinity
90
91RTEMS provides services to manipulate the affinity of a task. Affinity
92is used to specify the subset of processors in an SMP system on which
93a particular task can execute.
94
95By default, tasks have an affinity which allows them to execute on any
96available processor.
97
Task affinity is an optional feature of SMP-aware schedulers, and only a
subset of the available schedulers support it.  Although the behavior is
scheduler specific, a scheduler that does not support affinity is likely to
ignore all attempts to set it.
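
The affinity of a task can be obtained with @code{rtems_task_get_affinity()}.
The following sketch, built around the illustrative helper
@code{print_affinity()}, shows how the calling task could inspect on which
processors it is eligible to execute.

@example
@group
#include <rtems.h>
#include <assert.h>
#include <stdio.h>

void print_affinity(void)
@{
  rtems_status_code sc;
  cpu_set_t         cpuset;
  uint32_t          cpu_count;
  uint32_t          cpu;

  cpu_count = rtems_get_processor_count();

  /* RTEMS_SELF refers to the calling task */
  sc = rtems_task_get_affinity(RTEMS_SELF, sizeof(cpuset), &cpuset);
  assert(sc == RTEMS_SUCCESSFUL);

  for (cpu = 0; cpu < cpu_count; ++cpu) @{
    printf(
      "processor %lu: %s\n",
      (unsigned long) cpu,
      CPU_ISSET((int) cpu, &cpuset) ? "eligible" : "excluded"
    );
  @}
@}
@end group
@end example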
103
104@subsection Task Migration
105
106@cindex task migration
107@cindex thread migration
108
With more than one processor in the system, tasks can migrate from one processor
110to another.  There are three reasons why tasks migrate in RTEMS.
111
112@itemize @bullet
113@item The scheduler changes explicitly via @code{rtems_task_set_scheduler()} or
114similar directives.
@item The task resumes execution after a blocking operation.  On a
priority-based scheduler, it will evict the lowest priority task currently
assigned to a processor in the processor set managed by the scheduler instance.
118@item The task moves temporarily to another scheduler instance due to locking
119protocols like @cite{Migratory Priority Inheritance} or the
120@cite{Multiprocessor Resource Sharing Protocol}.
121@end itemize
122
123Task migration should be avoided so that the working set of a task can stay on
124the most local cache level.
125
The current implementation of task migration in RTEMS has some implications
with respect to interrupt latency.  It is crucial to preserve the system
invariant that a task can execute on at most one processor in the system at a
time.  This is accomplished with a boolean indicator in the task context.  The
processor architecture specific low-level task context switch code marks that a
task context is no longer executing and waits until the heir context has
stopped execution before it restores the heir context and resumes execution of
the heir task.  So there is one point in time in which a processor is without a
task.  This is essential to avoid cyclic dependencies in case multiple tasks
migrate at once.  Otherwise some supervising entity would be necessary to
prevent livelocks.  Such a global supervisor would lead to scalability
problems, so this approach is not used.  Currently the thread dispatch is
performed with interrupts disabled.  So, in case the heir task is currently
executing on another processor, the time with interrupts disabled is prolonged
since one processor has to wait for another processor to make progress.
141
142It is difficult to avoid this issue with the interrupt latency since interrupts
normally store the context of the interrupted task on its stack.  In case a
task is marked as not executing, we must not use its task stack to store such an
interrupt context.  We cannot use the heir stack before the heir has stopped
execution on another processor.  So if we enable interrupts during this
transition, we have to provide an alternative task independent stack for this
time frame.  This issue needs further investigation.
149
150@subsection Scheduler Helping Protocol
151
152The scheduler provides a helping protocol to support locking protocols like
153@cite{Migratory Priority Inheritance} or the @cite{Multiprocessor Resource
Sharing Protocol}.  Each ready task can use at least one scheduler node at a
time to gain access to a processor.  Each scheduler node has an owner, a user
and an optional idle task.  The owner of a scheduler node is determined at task
creation and never changes during the lifetime of a scheduler node.  The user
of a scheduler node may change due to the scheduler helping protocol.  A
scheduler node is in one of four scheduler help states:
160
161@table @dfn
162
163@item help yourself
164
165This scheduler node is solely used by the owner task.  This task owns no
166resources using a helping protocol and thus does not take part in the scheduler
167helping protocol.  No help will be provided for other tasks.
168
169@item help active owner
170
171This scheduler node is owned by a task actively owning a resource and can be
172used to help out tasks.
173
174In case this scheduler node changes its state from ready to scheduled and the
175task executes using another node, then an idle task will be provided as a user
176of this node to temporarily execute on behalf of the owner task.  Thus lower
177priority tasks are denied access to the processors of this scheduler instance.
178
179In case a task actively owning a resource performs a blocking operation, then
180an idle task will be used also in case this node is in the scheduled state.
181
182@item help active rival
183
184This scheduler node is owned by a task actively obtaining a resource currently
185owned by another task and can be used to help out tasks.
186
187The task owning this node is ready and will give away its processor in case the
188task owning the resource asks for help.
189
190@item help passive
191
192This scheduler node is owned by a task obtaining a resource currently owned by
193another task and can be used to help out tasks.
194
195The task owning this node is blocked.
196
197@end table
198
The following scheduler operations return a task in need of help:
200
201@itemize @bullet
202@item unblock,
203@item change priority,
204@item yield, and
205@item ask for help.
206@end itemize
207
A task in need of help is a task that encounters a scheduler state change from
scheduled to ready (this is a preemption by a higher priority task) or a task
that cannot be scheduled in an unblock operation.  Such a task can ask the tasks
which depend on resources owned by this task for help.
212
In case it is not possible to schedule a task in need of help, then the
scheduler nodes available for the task will be placed into the set of ready
scheduler nodes of the corresponding scheduler instances.  Once a state change
from ready to scheduled happens for one of these scheduler nodes, it will be
used to schedule the task in need of help.
218
The ask for help scheduler operation is used to help tasks in need of help
returned by the operations mentioned above.  This operation is also used in
case the root of a resource sub-tree owned by a task changes.

The run-time of the ask for help procedures depends on the size of the resource
tree of the task needing help and on other resource trees in case tasks in need
of help are produced during this operation.  Thus the worst-case latency in
the system depends on the maximum resource tree size of the application.
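
Application code does not interact with the helping protocol directly; it is
activated through the locking protocols mentioned above.  For example, a
semaphore using the @cite{Multiprocessor Resource Sharing Protocol} could be
created as sketched below, assuming the
@code{RTEMS_MULTIPROCESSOR_RESOURCE_SHARING} attribute described in the
Semaphore Manager chapter; the semaphore name, function name and ceiling
priority argument are illustrative only.

@example
@group
#include <rtems.h>
#include <assert.h>

rtems_id create_mrsp_semaphore(rtems_task_priority ceiling_priority)
@{
  rtems_status_code sc;
  rtems_id          id;

  sc = rtems_semaphore_create(
    rtems_build_name('M', 'R', 'S', 'P'),
    1,
    RTEMS_MULTIPROCESSOR_RESOURCE_SHARING | RTEMS_BINARY_SEMAPHORE,
    ceiling_priority,
    &id
  );
  assert(sc == RTEMS_SUCCESSFUL);

  return id;
@}
@end group
@end example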
227
228@subsection Critical Section Techniques and SMP
229
As discussed earlier, SMP systems have opportunities for true parallelism
which is not possible on uniprocessor systems.  Consequently, multiple
techniques that provided adequate critical sections on uniprocessor
systems are unsafe on SMP systems.  In this section, some of these
unsafe techniques will be discussed.
235
236In general, applications must use proper operating system provided mutual
237exclusion mechanisms to ensure correct behavior. This primarily means
238the use of binary semaphores or mutexes to implement critical sections.
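
For illustration, a critical section protected by a binary semaphore with
priority inheritance might look like the following sketch; the identifier
@code{mutex_id} and the function name are placeholders and the semaphore must
be created elsewhere with @code{rtems_semaphore_create()}.

@example
@group
#include <rtems.h>
#include <assert.h>

/* Created elsewhere, e.g. with RTEMS_BINARY_SEMAPHORE |
   RTEMS_PRIORITY | RTEMS_INHERIT_PRIORITY attributes */
extern rtems_id mutex_id;

void update_shared_data(void)
@{
  rtems_status_code sc;

  sc = rtems_semaphore_obtain(mutex_id, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  assert(sc == RTEMS_SUCCESSFUL);

  /* Access the shared data */

  sc = rtems_semaphore_release(mutex_id);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example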
239
240@subsubsection Disable Interrupts and Interrupt Locks
241
242A low overhead means to ensure mutual exclusion in uni-processor configurations
243is to disable interrupts around a critical section.  This is commonly used in
244device driver code and throughout the operating system core.  On SMP
245configurations, however, disabling the interrupts on one processor has no
246effect on other processors.  So, this is insufficient to ensure system wide
247mutual exclusion.  The macros
248@itemize @bullet
249@item @code{rtems_interrupt_disable()},
250@item @code{rtems_interrupt_enable()}, and
251@item @code{rtems_interrupt_flush()}
252@end itemize
are disabled on SMP configurations and their use will lead to compiler warnings
254and linker errors.  In the unlikely case that interrupts must be disabled on
255the current processor, then the
256@itemize @bullet
257@item @code{rtems_interrupt_local_disable()}, and
258@item @code{rtems_interrupt_local_enable()}
259@end itemize
260macros are now available in all configurations.
261
Since disabling of interrupts is not enough to ensure system wide mutual
exclusion on SMP, a new low-level synchronization primitive was added: interrupt
locks.  They are a simple API layer on top of the SMP locks used for
low-level synchronization in the operating system core.  Currently they are
implemented as ticket locks.  On uni-processor configurations they degenerate
to simple interrupt disable/enable sequences.  It is disallowed to acquire a
single interrupt lock in a nested way.  This will result in an infinite loop
with interrupts disabled.  While converting legacy code to interrupt locks, care
must be taken to avoid this situation.
271
@example
@group
void legacy_code_with_interrupt_disable_enable( void )
@{
  rtems_interrupt_level level;

  rtems_interrupt_disable( level );
  /* Some critical stuff */
  rtems_interrupt_enable( level );
@}

RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" )

void smp_ready_code_with_interrupt_lock( void )
@{
  rtems_interrupt_lock_context lock_context;

  rtems_interrupt_lock_acquire( &lock, &lock_context );
  /* Some critical stuff */
  rtems_interrupt_lock_release( &lock, &lock_context );
@}
@end group
@end example
295
296The @code{rtems_interrupt_lock} structure is empty on uni-processor
297configurations.  Empty structures have a different size in C
298(implementation-defined, zero in case of GCC) and C++ (implementation-defined
299non-zero value, one in case of GCC).  Thus the
300@code{RTEMS_INTERRUPT_LOCK_DECLARE()}, @code{RTEMS_INTERRUPT_LOCK_DEFINE()},
301@code{RTEMS_INTERRUPT_LOCK_MEMBER()}, and
302@code{RTEMS_INTERRUPT_LOCK_REFERENCE()} macros are provided to ensure ABI
303compatibility.
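
A possible use of these macros is sketched below; the structure and function
names are illustrative only and assume a lock embedded in an application data
structure.

@example
@group
#include <rtems.h>

typedef struct @{
  RTEMS_INTERRUPT_LOCK_MEMBER( lock )
  int counter;
@} shared_counter;

void shared_counter_initialize(shared_counter *counter)
@{
  rtems_interrupt_lock_initialize(&counter->lock, "Counter");
  counter->counter = 0;
@}

void shared_counter_increment(shared_counter *counter)
@{
  rtems_interrupt_lock_context lock_context;

  rtems_interrupt_lock_acquire(&counter->lock, &lock_context);
  ++counter->counter;
  rtems_interrupt_lock_release(&counter->lock, &lock_context);
@}
@end group
@end example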
304
305@subsubsection Highest Priority Task Assumption
306
307On a uniprocessor system, it is safe to assume that when the highest
308priority task in an application executes, it will execute without being
309preempted until it voluntarily blocks. Interrupts may occur while it is
310executing, but there will be no context switch to another task unless
311the highest priority task voluntarily initiates it.
312
313Given the assumption that no other tasks will have their execution
314interleaved with the highest priority task, it is possible for this
315task to be constructed such that it does not need to acquire a binary
316semaphore or mutex for protected access to shared data.
317
In an SMP system, it cannot be assumed that only a single task is executing at
any point in time.  It should be assumed that every other processor is executing
another application task.  Further, those tasks will be ones which would not
have been executed in a uniprocessor configuration and should be assumed to
have data synchronization conflicts with what was formerly the highest
priority task which executed without conflict.
324
325@subsubsection Disable Preemption
326
On a uniprocessor system, disabling preemption in a task is very similar
to making the highest priority task assumption.  While preemption is
disabled, no task context switches will occur unless the task initiates
them voluntarily.  On an SMP system, just as with the highest priority task
assumption, there are N-1 other processors also running tasks.  Thus the
assumption that no other tasks will run while the task has preemption disabled
is violated.
333
334@subsection Task Unique Data and SMP
335
336Per task variables are a service commonly provided by real-time operating
337systems for application use. They work by allowing the application
338to specify a location in memory (typically a @code{void *}) which is
339logically added to the context of a task. On each task switch, the
340location in memory is stored and each task can have a unique value in
341the same memory location. This memory location is directly accessed as a
342variable in a program.
343
344This works well in a uniprocessor environment because there is one task
345executing and one memory location containing a task-specific value. But
346it is fundamentally broken on an SMP system because there are always N
347tasks executing. With only one location in memory, N-1 tasks will not
348have the correct value.
349
350This paradigm for providing task unique data values is fundamentally
351broken on SMP systems.
352
353@subsubsection Classic API Per Task Variables
354
355The Classic API provides three directives to support per task variables. These are:
356
357@itemize @bullet
358@item @code{@value{DIRPREFIX}task_variable_add} - Associate per task variable
@item @code{@value{DIRPREFIX}task_variable_get} - Obtain value of a per task variable
360@item @code{@value{DIRPREFIX}task_variable_delete} - Remove per task variable
361@end itemize
362
363As task variables are unsafe for use on SMP systems, the use of these services
364must be eliminated in all software that is to be used in an SMP environment.
365The task variables API is disabled on SMP. Its use will lead to compile-time
366and link-time errors. It is recommended that the application developer consider
367the use of POSIX Keys or Thread Local Storage (TLS). POSIX Keys are available
in all RTEMS configurations.  For the availability of TLS on a particular
369architecture please consult the @cite{RTEMS CPU Architecture Supplement}.
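
As an illustration of the recommended replacement, the following sketch uses a
POSIX key to give each task its own value for a logically task-specific
pointer; the key variable and wrapper function names are placeholders.

@example
@group
#include <pthread.h>
#include <assert.h>

static pthread_key_t task_data_key;

/* Call once during system or application initialization */
void task_data_initialize(void)
@{
  int eno = pthread_key_create(&task_data_key, NULL);
  assert(eno == 0);
@}

/* Each task stores and retrieves its own value */
void task_data_set(void *value)
@{
  int eno = pthread_setspecific(task_data_key, value);
  assert(eno == 0);
@}

void *task_data_get(void)
@{
  return pthread_getspecific(task_data_key);
@}
@end group
@end example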
370
The only remaining user of task variables in the RTEMS code base is the Ada
support.  Consequently, Ada is not available on RTEMS SMP.
373
374@subsection Thread Dispatch Details
375
376This section gives background information to developers interested in the
377interrupt latencies introduced by thread dispatching.  A thread dispatch
378consists of all work which must be done to stop the currently executing thread
379on a processor and hand over this processor to an heir thread.
380
On SMP systems, scheduling decisions on one processor must be propagated to
other processors through inter-processor interrupts.  So, a thread dispatch
which must be carried out on another processor does not happen instantaneously.
Thus several thread dispatch requests might be in flight and it is possible that
some of them may be out of date before the corresponding processor has time to
deal with them.  The thread dispatch mechanism uses three per-processor
variables,
388@itemize @bullet
389@item the executing thread,
390@item the heir thread, and
@item a boolean flag indicating if a thread dispatch is necessary or not.
392@end itemize
393Updates of the heir thread and the thread dispatch necessary indicator are
394synchronized via explicit memory barriers without the use of locks.  A thread
395can be an heir thread on at most one processor in the system.  The thread context
396is protected by a TTAS lock embedded in the context to ensure that it is used
397on at most one processor at a time.  The thread post-switch actions use a
398per-processor lock.  This implementation turned out to be quite efficient and
399no lock contention was observed in the test suite.
400
The current implementation of thread dispatching has some implications with
respect to interrupt latency.  It is crucial to preserve the system
invariant that a thread can execute on at most one processor in the system at a
time.  This is accomplished with a boolean indicator in the thread context.
The processor architecture specific context switch code marks that a thread
context is no longer executing and waits until the heir context has stopped
execution before it restores the heir context and resumes execution of the heir
thread (the boolean indicator is basically a TTAS lock).  So, there is one
point in time in which a processor is without a thread.  This is essential to
avoid cyclic dependencies in case multiple threads migrate at once.  Otherwise
some supervising entity would be necessary to prevent deadlocks.  Such a global
supervisor would lead to scalability problems, so this approach is not used.
Currently the context switch is performed with interrupts disabled.  Thus, in
case the heir thread is currently executing on another processor, the time with
disabled interrupts is prolonged since one processor has to wait for another
processor to make progress.
417
418It is difficult to avoid this issue with the interrupt latency since interrupts
419normally store the context of the interrupted thread on its stack.  In case a
420thread is marked as not executing, we must not use its thread stack to store
such an interrupt context.  We cannot use the heir stack before the heir has
stopped execution on another processor.  If we enable interrupts during this
423transition, then we have to provide an alternative thread independent stack for
424interrupts in this time frame.  This issue needs further investigation.
425
426The problematic situation occurs in case we have a thread which executes with
427thread dispatching disabled and should execute on another processor (e.g. it is
428an heir thread on another processor).  In this case the interrupts on this
429other processor are disabled until the thread enables thread dispatching and
starts the thread dispatch sequence.  The scheduler (an exception is the
scheduler with thread processor affinity support) tries to avoid such a
situation and checks if a newly scheduled thread already executes on a processor.
433In case the assigned processor differs from the processor on which the thread
434already executes and this processor is a member of the processor set managed by
435this scheduler instance, it will reassign the processors to keep the already
436executing thread in place.  Therefore normal scheduler requests will not lead
437to such a situation.  Explicit thread migration requests, however, can lead to
438this situation.  Explicit thread migrations may occur due to the scheduler
439helping protocol or explicit scheduler instance changes.  The situation can
440also be provoked by interrupts which suspend and resume threads multiple times
441and produce stale asynchronous thread dispatch requests in the system.
442
443@c
444@c
445@c
446@section Operations
447
448@subsection Setting Affinity to a Single Processor
449
In some embedded applications targeting SMP systems, it may be beneficial to
lock individual tasks to specific processors.  In this way, one can designate a
processor for I/O tasks, another for computation, etc.  The following
illustrates the code sequence necessary to assign a task an affinity for the
processor with index @code{processor_index}.
455
@example
@group
#include <rtems.h>
#include <assert.h>

void pin_to_processor(rtems_id task_id, int processor_index)
@{
  rtems_status_code sc;
  cpu_set_t         cpuset;

  CPU_ZERO(&cpuset);
  CPU_SET(processor_index, &cpuset);

  sc = rtems_task_set_affinity(task_id, sizeof(cpuset), &cpuset);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example
474
475It is important to note that the @code{cpuset} is not validated until the
476@code{@value{DIRPREFIX}task_set_affinity} call is made. At that point,
477it is validated against the current system configuration.
478
479@c
480@c
481@c
482@section Directives
483
484This section details the symmetric multiprocessing services.  A subsection
485is dedicated to each of these services and describes the calling sequence,
486related constants, usage, and status codes.
487
488@c
489@c rtems_get_processor_count
490@c
491@page
492@subsection GET_PROCESSOR_COUNT - Get processor count
493
494@subheading CALLING SEQUENCE:
495
496@ifset is-C
497@example
uint32_t rtems_get_processor_count(void);
499@end example
500@end ifset
501
502@ifset is-Ada
503@end ifset
504
505@subheading DIRECTIVE STATUS CODES:
506
507The count of processors in the system.
508
509@subheading DESCRIPTION:
510
511On uni-processor configurations a value of one will be returned.
512
513On SMP configurations this returns the value of a global variable set during
514system initialization to indicate the count of utilized processors.  The
515processor count depends on the physically or virtually available processors and
516application configuration.  The value will always be less than or equal to the
517maximum count of application configured processors.
518
519@subheading NOTES:
520
521None.
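
@subheading EXAMPLE:

The following sketch iterates over all processors configured for use by the
application; the function name is illustrative.

@example
@group
#include <rtems.h>
#include <stdio.h>

void print_processor_indices(void)
@{
  uint32_t cpu_count = rtems_get_processor_count();
  uint32_t cpu;

  for (cpu = 0; cpu < cpu_count; ++cpu) @{
    printf("processor index %lu is available\n", (unsigned long) cpu);
  @}
@}
@end group
@end example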
522
523@c
524@c rtems_get_current_processor
525@c
526@page
527@subsection GET_CURRENT_PROCESSOR - Get current processor index
528
529@subheading CALLING SEQUENCE:
530
531@ifset is-C
532@example
uint32_t rtems_get_current_processor(void);
534@end example
535@end ifset
536
537@ifset is-Ada
538@end ifset
539
540@subheading DIRECTIVE STATUS CODES:
541
542The index of the current processor.
543
544@subheading DESCRIPTION:
545
546On uni-processor configurations a value of zero will be returned.
547
548On SMP configurations an architecture specific method is used to obtain the
549index of the current processor in the system.  The set of processor indices is
550the range of integers starting with zero up to the processor count minus one.
551
Outside of sections with disabled thread dispatching, the current processor
index may change after every instruction since the thread may migrate from one
554processor to another.  Sections with disabled interrupts are sections with
555thread dispatching disabled.
556
557@subheading NOTES:
558
559None.
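
@subheading EXAMPLE:

The following sketch obtains a snapshot of the current processor index; the
function name is illustrative and the value may already be stale when it is
used since the task may migrate.

@example
@group
#include <rtems.h>
#include <stdio.h>

void report_current_processor(void)
@{
  /* The task may migrate, so this is only a snapshot */
  uint32_t cpu_index = rtems_get_current_processor();

  printf("executing on processor %lu\n", (unsigned long) cpu_index);
@}
@end group
@end example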
560
561@c
562@c rtems_scheduler_ident
563@c
564@page
565@subsection SCHEDULER_IDENT - Get ID of a scheduler
566
567@subheading CALLING SEQUENCE:
568
569@ifset is-C
570@example
rtems_status_code rtems_scheduler_ident(
  rtems_name  name,
  rtems_id   *id
);
575@end example
576@end ifset
577
578@ifset is-Ada
579@end ifset
580
581@subheading DIRECTIVE STATUS CODES:
582
583@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
584@code{@value{RPREFIX}INVALID_ADDRESS} - @code{id} is NULL@*
585@code{@value{RPREFIX}INVALID_NAME} - invalid scheduler name@*
@code{@value{RPREFIX}UNSATISFIED} - a scheduler with this name exists, but
587the processor set of this scheduler is empty
588
589@subheading DESCRIPTION:
590
591Identifies a scheduler by its name.  The scheduler name is determined by the
592scheduler configuration.  @xref{Configuring a System Configuring
593Clustered/Partitioned Schedulers}.
594
595@subheading NOTES:
596
597None.
598
599@c
600@c rtems_scheduler_get_processor_set
601@c
602@page
603@subsection SCHEDULER_GET_PROCESSOR_SET - Get processor set of a scheduler
604
605@subheading CALLING SEQUENCE:
606
607@ifset is-C
608@example
rtems_status_code rtems_scheduler_get_processor_set(
  rtems_id   scheduler_id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
614@end example
615@end ifset
616
617@ifset is-Ada
618@end ifset
619
620@subheading DIRECTIVE STATUS CODES:
621
622@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
623@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
624@code{@value{RPREFIX}INVALID_ID} - invalid scheduler id@*
625@code{@value{RPREFIX}INVALID_NUMBER} - the affinity set buffer is too small for
the set of processors owned by the scheduler
627
628@subheading DESCRIPTION:
629
630Returns the processor set owned by the scheduler in @code{cpuset}.  A set bit
631in the processor set means that this processor is owned by the scheduler and a
632cleared bit means the opposite.
633
634@subheading NOTES:
635
636None.
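
@subheading EXAMPLE:

The following sketch obtains the processor set of a scheduler identified by its
configuration name; the scheduler name @code{'I', 'O', ' ', ' '} and the
function name are illustrative.

@example
@group
#include <rtems.h>
#include <assert.h>

void get_io_scheduler_processors(cpu_set_t *cpuset)
@{
  rtems_status_code sc;
  rtems_id          scheduler_id;

  sc = rtems_scheduler_ident(
    rtems_build_name('I', 'O', ' ', ' '),
    &scheduler_id
  );
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_scheduler_get_processor_set(
    scheduler_id,
    sizeof(*cpuset),
    cpuset
  );
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example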
637
638@c
639@c rtems_task_get_scheduler
640@c
641@page
642@subsection TASK_GET_SCHEDULER - Get scheduler of a task
643
644@subheading CALLING SEQUENCE:
645
646@ifset is-C
647@example
rtems_status_code rtems_task_get_scheduler(
  rtems_id  task_id,
  rtems_id *scheduler_id
);
652@end example
653@end ifset
654
655@ifset is-Ada
656@end ifset
657
658@subheading DIRECTIVE STATUS CODES:
659
660@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
661@code{@value{RPREFIX}INVALID_ADDRESS} - @code{scheduler_id} is NULL@*
662@code{@value{RPREFIX}INVALID_ID} - invalid task id
663
664@subheading DESCRIPTION:
665
666Returns the scheduler identifier of a task identified by @code{task_id} in
667@code{scheduler_id}.
668
669@subheading NOTES:
670
671None.
672
673@c
674@c rtems_task_set_scheduler
675@c
676@page
677@subsection TASK_SET_SCHEDULER - Set scheduler of a task
678
679@subheading CALLING SEQUENCE:
680
681@ifset is-C
682@example
rtems_status_code rtems_task_set_scheduler(
  rtems_id task_id,
  rtems_id scheduler_id
);
687@end example
688@end ifset
689
690@ifset is-Ada
691@end ifset
692
693@subheading DIRECTIVE STATUS CODES:
694
695@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
696@code{@value{RPREFIX}INVALID_ID} - invalid task or scheduler id@*
697@code{@value{RPREFIX}INCORRECT_STATE} - the task is in the wrong state to
698perform a scheduler change
699
700@subheading DESCRIPTION:
701
702Sets the scheduler of a task identified by @code{task_id} to the scheduler
703identified by @code{scheduler_id}.  The scheduler of a task is initialized to
704the scheduler of the task that created it.
705
706@subheading NOTES:
707
708None.
709
710@subheading EXAMPLE:
711
@example
@group
#include <rtems.h>
#include <assert.h>

void task(rtems_task_argument arg);

void example(void)
@{
  rtems_status_code sc;
  rtems_id          task_id;
  rtems_id          scheduler_id;
  rtems_name        scheduler_name;

  scheduler_name = rtems_build_name('W', 'O', 'R', 'K');

  sc = rtems_scheduler_ident(scheduler_name, &scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_create(
    rtems_build_name('T', 'A', 'S', 'K'),
    1,
    RTEMS_MINIMUM_STACK_SIZE,
    RTEMS_DEFAULT_MODES,
    RTEMS_DEFAULT_ATTRIBUTES,
    &task_id
  );
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_set_scheduler(task_id, scheduler_id);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_task_start(task_id, task, 0);
  assert(sc == RTEMS_SUCCESSFUL);
@}
@end group
@end example
749
750@c
751@c rtems_task_get_affinity
752@c
753@page
754@subsection TASK_GET_AFFINITY - Get task processor affinity
755
756@subheading CALLING SEQUENCE:
757
758@ifset is-C
759@example
rtems_status_code rtems_task_get_affinity(
  rtems_id   id,
  size_t     cpusetsize,
  cpu_set_t *cpuset
);
765@end example
766@end ifset
767
768@ifset is-Ada
769@end ifset
770
771@subheading DIRECTIVE STATUS CODES:
772
773@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
774@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
775@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
776@code{@value{RPREFIX}INVALID_NUMBER} - the affinity set buffer is too small for
777the current processor affinity set of the task
778
779@subheading DESCRIPTION:
780
781Returns the current processor affinity set of the task in @code{cpuset}.  A set
782bit in the affinity set means that the task can execute on this processor and a
783cleared bit means the opposite.
784
785@subheading NOTES:
786
787None.
788
789@c
790@c rtems_task_set_affinity
791@c
792@page
793@subsection TASK_SET_AFFINITY - Set task processor affinity
794
795@subheading CALLING SEQUENCE:
796
797@ifset is-C
798@example
rtems_status_code rtems_task_set_affinity(
  rtems_id         id,
  size_t           cpusetsize,
  const cpu_set_t *cpuset
);
804@end example
805@end ifset
806
807@ifset is-Ada
808@end ifset
809
810@subheading DIRECTIVE STATUS CODES:
811
812@code{@value{RPREFIX}SUCCESSFUL} - successful operation@*
813@code{@value{RPREFIX}INVALID_ADDRESS} - @code{cpuset} is NULL@*
814@code{@value{RPREFIX}INVALID_ID} - invalid task id@*
815@code{@value{RPREFIX}INVALID_NUMBER} - invalid processor affinity set
816
817@subheading DESCRIPTION:
818
Sets the processor affinity set of the task to the set specified by
@code{cpuset}.  A set bit in the affinity set means that the task can execute on
this processor and a cleared bit means the opposite.
822
823@subheading NOTES:
824
825This function will not change the scheduler of the task.  The intersection of
826the processor affinity set and the set of processors owned by the scheduler of
827the task must be non-empty.  It is not an error if the processor affinity set
828contains processors that are not part of the set of processors owned by the
829scheduler instance of the task.  A task will simply not run under normal
830circumstances on these processors since the scheduler ignores them.  Some
831locking protocols may temporarily use processors that are not included in the
832processor affinity set of the task.  It is also not an error if the processor
833affinity set contains processors that are not part of the system.