source: rtems-docs/c-user/symmetric_multiprocessing_services.rst @ 39773ce

Last change on this file since 39773ce was 39773ce, checked in by Sebastian Huber <sebastian.huber@…>, on 05/11/17 at 07:25:04

c-user: Replace pre-emption with preemption

.. comment SPDX-License-Identifier: CC-BY-SA-4.0

.. COMMENT: COPYRIGHT (c) 2014.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: Copyright (c) 2017 embedded brains GmbH.
.. COMMENT: All rights reserved.

Symmetric Multiprocessing (SMP)
*******************************

Introduction
============

The Symmetric Multiprocessing (SMP) support of RTEMS 4.12 is available on

- ARMv7-A,

- PowerPC, and

- SPARC.

.. warning::

   The SMP support must be explicitly enabled via the ``--enable-smp``
   configure command line option for the :term:`BSP` build.

RTEMS is supposed to be a real-time operating system.  What does this mean in
the context of SMP?  The RTEMS interpretation of real-time on SMP is the
support for :ref:`ClusteredScheduling` with priority based schedulers and
adequate locking protocols.  One aim is to enable a schedulability analysis
under the sporadic task model :cite:`Brandenburg:2011:SL`
:cite:`Burns:2013:MrsP`.

The directives provided by the SMP support are:

- rtems_get_processor_count_ - Get processor count

- rtems_get_current_processor_ - Get current processor index

Background
==========

Application Configuration
-------------------------

By default, the maximum processor count is set to one in the application
configuration.  To enable SMP, the application configuration option
:ref:`CONFIGURE_MAXIMUM_PROCESSORS <CONFIGURE_MAXIMUM_PROCESSORS>` must be
defined to a value greater than one.  It is recommended to use the smallest
value suitable for the application in order to save memory.  For example, each
processor needs an idle thread and an interrupt stack.
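
A minimal application configuration enabling SMP might look like the following
sketch.  The option names are standard ``<rtems/confdefs.h>`` configuration
options; the concrete values are illustrative assumptions only.

.. code-block:: c

    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

    #define CONFIGURE_MAXIMUM_TASKS 8

    /* Enable SMP support for up to four processors */
    #define CONFIGURE_MAXIMUM_PROCESSORS 4

    #define CONFIGURE_RTEMS_INIT_TASKS_TABLE
    #define CONFIGURE_INIT

    #include <rtems/confdefs.h>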

The default scheduler for SMP applications supports up to 32 processors and is
a global fixed priority scheduler, see also :ref:`Configuring Clustered
Schedulers`.

The following compile-time test can be used to check if the SMP support is
available or not.

.. code-block:: c

    #include <rtems.h>

    #ifdef RTEMS_SMP
    #warning "SMP support is enabled"
    #else
    #warning "SMP support is disabled"
    #endif

Examples
--------

For example applications see `testsuites/smptests
<https://git.rtems.org/rtems/tree/testsuites/smptests>`_.

Uniprocessor versus SMP Parallelism
-----------------------------------

Uniprocessor systems have long been used in embedded systems.  In this hardware
model, there are some system execution characteristics which have long been
taken for granted:

- one task executes at a time

- hardware events result in interrupts

There is no true parallelism.  Even when interrupts appear to occur at the same
time, they are processed in largely a serial fashion.  This is true even when
the interrupt service routines are allowed to nest.  From a tasking viewpoint,
it is the responsibility of the real-time operating system to simulate
parallelism by switching between tasks.  These task switches occur in response
to hardware interrupt events and explicit application events such as blocking
for a resource or delaying.

With symmetric multiprocessing, the presence of multiple processors allows for
true concurrency and provides for cost-effective performance
improvements.  Uniprocessors tend to increase performance by increasing clock
speed and complexity.  This tends to lead to hot, power hungry microprocessors
which are poorly suited for many embedded applications.

The true concurrency is in sharp contrast to the single task and interrupt
model of uniprocessor systems.  This results in a fundamental change to the
uniprocessor system characteristics listed above.  Developers are faced with a
different set of characteristics which, in turn, break some existing
assumptions and result in new challenges.  In an SMP system with N processors,
these are the new execution characteristics.

- N tasks execute in parallel

- hardware events result in interrupts

There is true parallelism with a task executing on each processor and the
possibility of interrupts occurring on each processor.  Thus in contrast to
there being one task and one interrupt to consider on a uniprocessor, there are
N tasks and potentially N simultaneous interrupts to consider on an SMP system.

This increase in hardware complexity and presence of true parallelism results
in the application developer needing to be even more cautious about mutual
exclusion and shared data access than in a uniprocessor embedded system.  Race
conditions that never or rarely happened when an application executed on a
uniprocessor system become much more likely due to multiple threads executing
in parallel.  On a uniprocessor system, these race conditions would only happen
when a task switch occurred at just the wrong moment.  Now there are N-1 other
tasks executing in parallel all the time and this results in many more
opportunities for small windows in critical sections to be hit.
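
To illustrate, the following host-side POSIX sketch (not RTEMS-specific code;
the function names are made up for this example) shows the kind of
lock-protected critical section required once increments of a shared counter
may really happen in parallel.

.. code-block:: c

    #include <pthread.h>

    /* Shared state protected by a mutex */
    static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;
    static long counter;

    static void *worker( void *arg )
    {
      for ( int i = 0; i < 100000; ++i ) {
        /* Without the lock, this read-modify-write is a race on SMP */
        pthread_mutex_lock( &counter_mutex );
        ++counter;
        pthread_mutex_unlock( &counter_mutex );
      }
      return NULL;
    }

    long run_two_workers( void )
    {
      pthread_t a;
      pthread_t b;

      pthread_create( &a, NULL, worker, NULL );
      pthread_create( &b, NULL, worker, NULL );
      pthread_join( a, NULL );
      pthread_join( b, NULL );
      return counter;
    }

Without the mutex, the final counter value would be unpredictable on an SMP
system, since the increments of both threads may interleave at any time.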

Task Affinity
-------------
.. index:: task affinity
.. index:: thread affinity

RTEMS provides services to manipulate the affinity of a task.  Affinity is used
to specify the subset of processors in an SMP system on which a particular task
can execute.

By default, tasks have an affinity which allows them to execute on any
available processor.

Task affinity is a possible feature to be supported by SMP-aware schedulers.
However, only a subset of the available schedulers support affinity.  Although
the behavior is scheduler specific, if the scheduler does not support affinity,
it is likely to ignore all attempts to set affinity.

The scheduler with support for arbitrary processor affinities uses a proof of
concept implementation.  See https://devel.rtems.org/ticket/2510.
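
Processor sets can be built with the portable ``CPU_*`` macros before being
handed to :ref:`rtems_task_set_affinity() <rtems_task_set_affinity>`.  The
sketch below (the helper name is made up for this example) restricts a task to
processors 0 and 2; on POSIX hosts the macros live in ``<sched.h>`` with
``_GNU_SOURCE`` defined.

.. code-block:: c

    #define _GNU_SOURCE
    #include <sched.h>

    /*
     * Build a processor set containing processors 0 and 2.  On RTEMS, the
     * resulting set would then be applied to a task, e.g.
     *
     *   sc = rtems_task_set_affinity( task_id, sizeof( cpuset ), &cpuset );
     */
    int build_affinity_set( cpu_set_t *cpuset )
    {
      CPU_ZERO( cpuset );
      CPU_SET( 0, cpuset );
      CPU_SET( 2, cpuset );
      return CPU_COUNT( cpuset );
    }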

Task Migration
--------------
.. index:: task migration
.. index:: thread migration

With more than one processor in the system, tasks can migrate from one
processor to another.  There are four reasons why tasks migrate in RTEMS.

- The scheduler changes explicitly via
  :ref:`rtems_task_set_scheduler() <rtems_task_set_scheduler>` or similar
  directives.

- The task processor affinity changes explicitly via
  :ref:`rtems_task_set_affinity() <rtems_task_set_affinity>` or similar
  directives.

- The task resumes execution after a blocking operation.  On a priority based
  scheduler it will evict the lowest priority task currently assigned to a
  processor in the processor set managed by the scheduler instance.

- The task moves temporarily to another scheduler instance due to locking
  protocols like the :ref:`MrsP` or the :ref:`OMIP`.

Task migration should be avoided so that the working set of a task can stay on
the most local cache level.

.. _ClusteredScheduling:

Clustered Scheduling
--------------------

The scheduler is responsible for assigning processors to some of the threads
which are ready to execute.  Trouble starts if more ready threads than
processors exist at the same time.  There are various rules for how the
processor assignment can be performed, attempting to fulfill additional
constraints or yield some overall system properties.  As a matter of fact it is
impossible to meet all requirements at the same time.  The way a scheduler
works distinguishes real-time operating systems from general purpose operating
systems.

We have clustered scheduling in case the set of processors of a system is
partitioned into non-empty pairwise-disjoint subsets of processors.  These
subsets are called clusters.  Clusters with a cardinality of one are
partitions.  In case the cluster size equals the processor count, this is
called global scheduling.  Each cluster is owned by exactly one scheduler
instance.

Modern SMP systems have multi-layer caches.  An operating system which neglects
cache constraints in the scheduler will not yield good performance.  Real-time
operating systems usually provide priority (fixed or job-level) based
schedulers so that each of the highest priority threads is assigned to a
processor.  Priority based schedulers have difficulties in providing cache
locality for threads and may suffer from excessive thread migrations
:cite:`Brandenburg:2011:SL` :cite:`Compagnin:2014:RUN`.  Schedulers that use
local run queues and some sort of load-balancing to improve the cache
utilization may not fulfill global constraints :cite:`Gujarati:2013:LPP` and
are more difficult to implement than one would normally expect
:cite:`Lozi:2016:LSDWC`.

Clustered scheduling was implemented for RTEMS SMP to best use the cache
topology of a system and to keep the worst-case latencies under control.  The
low-level SMP locks use FIFO ordering.  So, the worst-case run-time of
operations increases with each processor involved.  The scheduler configuration
is quite flexible and done at link-time, see :ref:`Configuring Clustered
Schedulers`.  It is possible to re-assign processors to schedulers during
run-time via :ref:`rtems_scheduler_add_processor()
<rtems_scheduler_add_processor>` and :ref:`rtems_scheduler_remove_processor()
<rtems_scheduler_remove_processor>`.  The schedulers are implemented in an
object-oriented fashion.

The problem is to provide synchronization primitives for inter-cluster
synchronization, i.e. synchronization in which more than one cluster is
involved.  In RTEMS, the following means are currently available for this
purpose:

- events,

- message queues,

- mutexes using the :ref:`OMIP`,

- mutexes using the :ref:`MrsP`, and

- binary and counting semaphores.

The clustered scheduling approach enables separation of functions with
real-time requirements and functions that profit from fairness and high
throughput, provided the scheduler instances are fully decoupled and adequate
inter-cluster synchronization primitives are used.

To set the scheduler of a task see :ref:`rtems_scheduler_ident()
<rtems_scheduler_ident>` and :ref:`rtems_task_set_scheduler()
<rtems_task_set_scheduler>`.

OpenMP
------

OpenMP support for RTEMS is available via the GCC provided libgomp.  There is
libgomp support for RTEMS in the POSIX configuration of libgomp since GCC 4.9
(requires a Newlib snapshot after 2015-03-12).  In GCC 6.1 or later (requires a
Newlib snapshot after 2015-07-30 for the self-contained synchronization objects
provided by <sys/lock.h>) there is a specialized libgomp configuration for
RTEMS which offers significantly better performance compared to the POSIX
configuration of libgomp.  In addition, application configurable thread pools
for each scheduler instance are available in GCC 6.1 or later.

The run-time configuration of libgomp is done via environment variables
documented in the `libgomp manual <https://gcc.gnu.org/onlinedocs/libgomp/>`_.
The environment variables are evaluated in a constructor function which
executes in the context of the first initialization task before the actual
initialization task function is called (just like a global C++ constructor).
To set application specific values, a higher priority constructor function must
be used to set up the environment variables.

.. code-block:: c

    #include <stdlib.h>
    void __attribute__((constructor(1000))) config_libgomp( void )
    {
        setenv( "OMP_DISPLAY_ENV", "VERBOSE", 1 );
        setenv( "GOMP_SPINCOUNT", "30000", 1 );
        setenv( "GOMP_RTEMS_THREAD_POOLS", "1$2@SCHD", 1 );
    }

The environment variable ``GOMP_RTEMS_THREAD_POOLS`` is RTEMS-specific.  It
determines the thread pools for each scheduler instance.  The format for
``GOMP_RTEMS_THREAD_POOLS`` is a list of optional
``<thread-pool-count>[$<priority>]@<scheduler-name>`` configurations separated
by ``:`` where:

- ``<thread-pool-count>`` is the thread pool count for this scheduler instance.

- ``$<priority>`` is an optional priority for the worker threads of a thread
  pool according to ``pthread_setschedparam``.  In case a priority value is
  omitted, then a worker thread will inherit the priority of the OpenMP master
  thread that created it.  The priority of the worker thread is not changed by
  libgomp after creation, even if a new OpenMP master thread using the worker
  has a different priority.

- ``@<scheduler-name>`` is the scheduler instance name according to the RTEMS
  application configuration.

In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP master thread of this scheduler instance will use its own
dynamically allocated thread pool.  To limit the worker thread count of the
thread pools, each OpenMP master thread must call ``omp_set_num_threads``.

Let us suppose we have three scheduler instances ``IO``, ``WRK0``, and ``WRK1``
with ``GOMP_RTEMS_THREAD_POOLS`` set to ``"1@WRK0:3$4@WRK1"``.  Then there are
no thread pool restrictions for scheduler instance ``IO``.  In the scheduler
instance ``WRK0`` there is one thread pool available.  Since no priority is
specified for this scheduler instance, the worker thread inherits the priority
of the OpenMP master thread that created it.  In the scheduler instance
``WRK1`` there are three thread pools available and their worker threads run at
priority four.

Application Issues
==================

Most operating system services provided by the uni-processor RTEMS are
available in SMP configurations as well.  However, applications designed for a
uni-processor environment may need some changes to run correctly in an SMP
configuration.

As discussed earlier, SMP systems have opportunities for true parallelism which
was not possible on uni-processor systems.  Consequently, multiple techniques
that provided adequate critical sections on uni-processor systems are unsafe on
SMP systems.  In this section, some of these unsafe techniques are discussed.

In general, applications must use proper operating system provided mutual
exclusion mechanisms to ensure correct behavior.

Task variables
--------------

Task variables are ordinary global variables with a dedicated value for each
thread.  During a context switch from the executing thread to the heir thread,
the value of each task variable is saved to the thread control block of the
executing thread and restored from the thread control block of the heir thread.
This is inherently broken if more than one executing thread exists.
Alternatives to task variables are POSIX keys and :ref:`TLS <TLS>`.  All use
cases of task variables in the RTEMS code base were replaced with alternatives.
The task variable API has been removed in RTEMS 4.12.
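
As a sketch of the :ref:`TLS <TLS>` alternative (host-side C11/POSIX code, the
function names are made up for this example), a ``_Thread_local`` variable
gives every thread its own instance with ordinary global variable syntax and
without any context switch cost:

.. code-block:: c

    #include <pthread.h>

    /* Each thread gets its own instance of this variable; no per-thread
     * value has to be saved or restored during a context switch. */
    static _Thread_local int per_thread_value;

    static void *worker( void *arg )
    {
      /* This assignment affects only the worker's instance */
      per_thread_value = 42;
      return NULL;
    }

    int tls_demo( void )
    {
      pthread_t thread;

      per_thread_value = 7;
      pthread_create( &thread, NULL, worker, NULL );
      pthread_join( thread, NULL );

      /* The calling thread's instance is unchanged */
      return per_thread_value;
    }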

Highest Priority Thread Never Walks Alone
-----------------------------------------

On a uni-processor system, it is safe to assume that when the highest priority
task in an application executes, it will execute without being preempted until
it voluntarily blocks.  Interrupts may occur while it is executing, but there
will be no context switch to another task unless the highest priority task
voluntarily initiates it.

Given the assumption that no other tasks will have their execution interleaved
with the highest priority task, it is possible for this task to be constructed
such that it does not need to acquire a mutex for protected access to shared
data.

In an SMP system, it cannot be assumed that only a single task is executing.
It should be assumed that every processor is executing another application
task.  Further, those tasks will be ones which would not have been executed in
a uni-processor configuration and should be assumed to have data
synchronization conflicts with what was formerly the highest priority task
which executed without conflict.

Disabling of Thread Preemption
------------------------------

A thread which disables preemption prevents a higher priority thread from
taking over its processor involuntarily.  In uni-processor configurations, this
can be used to ensure mutual exclusion at thread level.  In SMP configurations,
however, more than one executing thread may exist.  Thus, it is impossible to
ensure mutual exclusion using this mechanism.  In order to prevent applications
which use preemption disabling for this purpose from showing inappropriate
behaviour, this feature is disabled in SMP configurations and its use will
cause run-time errors.

Disabling of Interrupts
-----------------------

A low overhead means to ensure mutual exclusion in uni-processor configurations
is the disabling of interrupts around a critical section.  This is commonly
used in device driver code.  In SMP configurations, however, disabling the
interrupts on one processor has no effect on other processors.  So, this is
insufficient to ensure system-wide mutual exclusion.  The macros

* :ref:`rtems_interrupt_disable() <rtems_interrupt_disable>`,

* :ref:`rtems_interrupt_enable() <rtems_interrupt_enable>`, and

* :ref:`rtems_interrupt_flash() <rtems_interrupt_flash>`

are disabled in SMP configurations and their use will cause compile-time
warnings and link-time errors.  In the unlikely case that interrupts must be
disabled on the current processor, the

* :ref:`rtems_interrupt_local_disable() <rtems_interrupt_local_disable>` and

* :ref:`rtems_interrupt_local_enable() <rtems_interrupt_local_enable>`

macros are now available in all configurations.

Since disabling of interrupts is insufficient to ensure system-wide mutual
exclusion on SMP, a new low-level synchronization primitive was added --
interrupt locks.  The interrupt locks are a simple API layer on top of the SMP
locks used for low-level synchronization in the operating system core.
Currently, they are implemented as a ticket lock.  In uni-processor
configurations, they degenerate to simple interrupt disable/enable sequences by
means of the C pre-processor.  It is disallowed to acquire a single interrupt
lock in a nested way.  This will result in an infinite loop with interrupts
disabled.  While converting legacy code to interrupt locks, care must be taken
to avoid this situation.

.. code-block:: c
    :linenos:

    #include <rtems.h>

    void legacy_code_with_interrupt_disable_enable( void )
    {
      rtems_interrupt_level level;

      rtems_interrupt_disable( level );
      /* Critical section */
      rtems_interrupt_enable( level );
    }

    RTEMS_INTERRUPT_LOCK_DEFINE( static, lock, "Name" )

    void smp_ready_code_with_interrupt_lock( void )
    {
      rtems_interrupt_lock_context lock_context;

      rtems_interrupt_lock_acquire( &lock, &lock_context );
      /* Critical section */
      rtems_interrupt_lock_release( &lock, &lock_context );
    }

An alternative to the RTEMS-specific interrupt locks are POSIX spinlocks.  The
:c:type:`pthread_spinlock_t` is defined as a self-contained object, i.e. the
user must provide the storage for this synchronization object.

.. code-block:: c
    :linenos:

    #include <assert.h>
    #include <pthread.h>

    pthread_spinlock_t lock;

    void smp_ready_code_with_posix_spinlock( void )
    {
      int error;

      error = pthread_spin_lock( &lock );
      assert( error == 0 );
      /* Critical section */
      error = pthread_spin_unlock( &lock );
      assert( error == 0 );
    }

In contrast to the POSIX spinlock implementations on Linux or FreeBSD, it is
not allowed to call blocking operating system services inside the critical
section.  A recursive lock attempt is a severe usage error resulting in an
infinite loop with interrupts disabled.  Nesting of different locks is allowed.
The user must ensure that no deadlock can occur.  As a non-portable feature the
locks are zero-initialized, i.e. statically initialized global locks reside in
the ``.bss`` section and there is no need to call :c:func:`pthread_spin_init`.
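
Portable code should therefore initialize the lock explicitly.  The following
sketch (the function name is made up for this example) shows the standard
initialize/lock/unlock/destroy round trip:

.. code-block:: c

    #include <pthread.h>

    /* Initialize, use and destroy a POSIX spinlock explicitly instead of
     * relying on the RTEMS-specific zero-initialization. */
    int spinlock_round_trip( void )
    {
      pthread_spinlock_t lock;
      int error;

      error = pthread_spin_init( &lock, PTHREAD_PROCESS_PRIVATE );
      if ( error != 0 ) return error;

      error = pthread_spin_lock( &lock );
      if ( error != 0 ) return error;
      /* Critical section */
      error = pthread_spin_unlock( &lock );
      if ( error != 0 ) return error;

      return pthread_spin_destroy( &lock );
    }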

Interrupt Service Routines Execute in Parallel With Threads
-----------------------------------------------------------

On a machine with more than one processor, interrupt service routines (this
includes timer service routines installed via :ref:`rtems_timer_fire_after()
<rtems_timer_fire_after>`) and threads can execute in parallel.  Interrupt
service routines must take this into account and use proper locking mechanisms
to protect critical sections from interference by threads (interrupt locks or
POSIX spinlocks).  This likely requires code modifications in legacy device
drivers.

Timers Do Not Stop Immediately
------------------------------

Timer service routines run in the context of the clock interrupt.  On
uni-processor configurations, it is sufficient to disable interrupts and remove
a timer from the set of active timers to stop it.  In SMP configurations,
however, the timer service routine may already run and wait on an SMP lock
owned by the thread which is about to stop the timer.  This opens the door to
subtle synchronization issues.  During destruction of objects, special care
must be taken to ensure that timer service routines cannot access (partly or
fully) destroyed objects.

False Sharing of Cache Lines Due to Objects Table
-------------------------------------------------

The Classic API and most POSIX API objects are indirectly accessed via an
object identifier.  The user-level functions validate the object identifier and
map it to the actual object structure which resides in a global objects table
for each object class.  So, unrelated objects are packed together in a table.
This may result in false sharing of cache lines.  The effect of false sharing
of cache lines can be observed with the `TMFINE 1
<https://git.rtems.org/rtems/tree/testsuites/tmtests/tmfine01>`_ test program
on a suitable platform, e.g. QorIQ T4240.  High-performance SMP applications
need full control of the object storage :cite:`Drepper:2007:Memory`.
Therefore, self-contained synchronization objects are now available for RTEMS.
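
A common mitigation with self-contained objects, sketched below under the
assumption of 64-byte cache lines (the type and constant names are made up for
this example), is to align each object to a cache line boundary so that
unrelated objects never share a line:

.. code-block:: c

    #include <stdalign.h>
    #include <stddef.h>

    /* Assumed cache line size; real code should query the target hardware */
    #define CACHE_LINE_SIZE 64

    /* Each element occupies a full cache line, so an update by one
     * processor does not invalidate the line holding another processor's
     * counter. */
    typedef struct {
      alignas( CACHE_LINE_SIZE ) unsigned long value;
    } per_processor_counter;

    size_t per_processor_counter_size( void )
    {
      return sizeof( per_processor_counter );
    }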

Directives
==========

This section details the symmetric multiprocessing services.  A subsection is
dedicated to each of these services and describes the calling sequence, related
constants, usage, and status codes.

.. raw:: latex

   \clearpage

.. _rtems_get_processor_count:

GET_PROCESSOR_COUNT - Get processor count
-----------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        uint32_t rtems_get_processor_count(void);

DIRECTIVE STATUS CODES:
    The count of processors in the system.

DESCRIPTION:
    In uni-processor configurations, a value of one will be returned.

    In SMP configurations, this returns the value of a global variable set
    during system initialization to indicate the count of utilized processors.
    The processor count depends on the physically or virtually available
    processors and application configuration.  The value will always be less
    than or equal to the maximum count of application configured processors.

NOTES:
    None.

.. raw:: latex

   \clearpage

.. _rtems_get_current_processor:

GET_CURRENT_PROCESSOR - Get current processor index
---------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        uint32_t rtems_get_current_processor(void);

DIRECTIVE STATUS CODES:
    The index of the current processor.

DESCRIPTION:
    In uni-processor configurations, a value of zero will be returned.

    In SMP configurations, an architecture-specific method is used to obtain
    the index of the current processor in the system.  The set of processor
    indices is the range of integers starting with zero up to the processor
    count minus one.

    Outside of sections with disabled thread dispatching, the current processor
    index may change after every instruction since the thread may migrate from
    one processor to another.  Sections with disabled interrupts are sections
    with thread dispatching disabled.

NOTES:
    None.

Implementation Details
======================

This section covers some implementation details of the RTEMS SMP support.

Low-Level Synchronization
-------------------------

All low-level synchronization primitives are implemented using :term:`C11`
atomic operations, so no target-specific hand-written assembler code is
necessary.  Four synchronization primitives are currently available:

* ticket locks (mutual exclusion),

* :term:`MCS` locks (mutual exclusion),

* barriers, implemented as a sense barrier, and

* sequence locks :cite:`Boehm:2012:Seqlock`.

A vital requirement for low-level mutual exclusion is :term:`FIFO` fairness
since we are interested in a predictable system and not maximum throughput.
With this requirement, there are only a few options to resolve this problem.
For reasons of simplicity, the ticket lock algorithm was chosen to implement
the SMP locks.  However, the API is capable of supporting MCS locks, which may
be interesting in the future for systems with a processor count in the range
of 32 or more, e.g. :term:`NUMA`, many-core systems.
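
For illustration, a minimal ticket lock can be sketched with C11 atomics as
follows.  This is an explanatory sketch, not the actual RTEMS implementation;
the type and function names are made up for this example.

.. code-block:: c

    #include <stdatomic.h>

    /* A ticket lock: each acquirer draws a ticket and spins until its
     * ticket is served, which yields FIFO fairness among the waiters. */
    typedef struct {
      atomic_uint next_ticket;
      atomic_uint now_serving;
    } ticket_lock;

    void ticket_lock_acquire( ticket_lock *lock )
    {
      unsigned int my_ticket = atomic_fetch_add_explicit(
        &lock->next_ticket, 1, memory_order_relaxed );

      /* Busy wait until our ticket is served */
      while ( atomic_load_explicit( &lock->now_serving, memory_order_acquire )
          != my_ticket ) {
        /* spin */
      }
    }

    void ticket_lock_release( ticket_lock *lock )
    {
      atomic_store_explicit(
        &lock->now_serving,
        atomic_load_explicit( &lock->now_serving, memory_order_relaxed ) + 1,
        memory_order_release
      );
    }

The acquire order of waiting processors equals their ticket order, which is
the FIFO property the previous paragraph asks for.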

The test program `SMPLOCK 1
<https://git.rtems.org/rtems/tree/testsuites/smptests/smplock01>`_ can be used
to gather performance and fairness data for several scenarios.  The SMP lock
performance and fairness measured on the QorIQ T4240 follows as an example.
This chip contains three L2 caches.  Each L2 cache is shared by eight
processors.

.. image:: ../images/c_user/smplock01perf-t4240.*
   :width: 400
   :align: center

.. image:: ../images/c_user/smplock01fair-t4240.*
   :width: 400
   :align: center

Internal Locking
----------------

In SMP configurations, the operating system uses non-recursive SMP locks for
low-level mutual exclusion.  The locking domains are roughly

* a particular data structure,
* the thread queue operations,
* the thread state changes, and
* the scheduler operations.

For good average-case performance it is vital that every high-level
synchronization object, e.g. a mutex, has its own SMP lock.  In the average
case, only this SMP lock should be involved to carry out a specific operation,
e.g. obtain/release a mutex.  In general, the high-level synchronization
objects have a thread queue embedded and use its SMP lock.

In case a thread must block on a thread queue, then things get complicated.
The executing thread first acquires the SMP lock of the thread queue and then
figures out that it needs to block.  The procedure to block the thread on this
particular thread queue involves state changes of the thread itself and for
this thread-specific SMP locks must be used.

In order to determine if a thread is blocked on a thread queue or not,
thread-specific SMP locks must be used.  A thread priority change must
propagate this to the thread queue (possibly recursively).  Care must be taken
to avoid a lock order reversal between thread queue and thread-specific SMP
locks.

Each scheduler instance has its own SMP lock.  For the scheduler helping
protocol multiple scheduler instances may be in charge of a thread.  It is not
possible to acquire two scheduler instance SMP locks at the same time,
otherwise deadlocks would happen.  A thread-specific SMP lock is used to
synchronize the thread data shared by different scheduler instances.

The thread state SMP lock protects various things, e.g. the thread state, join
operations, signals, post-switch actions, the home scheduler instance, etc.

Profiling
---------

To identify the bottlenecks in the system, support for profiling of low-level
synchronization is optionally available.  The profiling support is a BSP build
time configuration option (``--enable-profiling``) and is implemented with an
acceptable overhead, even for production systems.  A low-overhead counter for
short time intervals must be provided by the hardware.

Profiling reports are generated in XML for most test programs of the RTEMS
testsuite (more than 500 test programs).  This gives a good sample set for
statistics.  For example, the maximum thread dispatch disable time, the maximum
interrupt latency or the lock contention can be determined.

.. code-block:: xml

   <ProfilingReport name="SMPMIGRATION 1">
     <PerCPUProfilingReport processorIndex="0">
       <MaxThreadDispatchDisabledTime unit="ns">36636</MaxThreadDispatchDisabledTime>
       <MeanThreadDispatchDisabledTime unit="ns">5065</MeanThreadDispatchDisabledTime>
       <TotalThreadDispatchDisabledTime unit="ns">3846635988
         </TotalThreadDispatchDisabledTime>
       <ThreadDispatchDisabledCount>759395</ThreadDispatchDisabledCount>
       <MaxInterruptDelay unit="ns">8772</MaxInterruptDelay>
       <MaxInterruptTime unit="ns">13668</MaxInterruptTime>
       <MeanInterruptTime unit="ns">6221</MeanInterruptTime>
       <TotalInterruptTime unit="ns">6757072</TotalInterruptTime>
       <InterruptCount>1086</InterruptCount>
     </PerCPUProfilingReport>
     <PerCPUProfilingReport processorIndex="1">
       <MaxThreadDispatchDisabledTime unit="ns">39408</MaxThreadDispatchDisabledTime>
       <MeanThreadDispatchDisabledTime unit="ns">5060</MeanThreadDispatchDisabledTime>
       <TotalThreadDispatchDisabledTime unit="ns">3842749508
         </TotalThreadDispatchDisabledTime>
       <ThreadDispatchDisabledCount>759391</ThreadDispatchDisabledCount>
       <MaxInterruptDelay unit="ns">8412</MaxInterruptDelay>
       <MaxInterruptTime unit="ns">15868</MaxInterruptTime>
       <MeanInterruptTime unit="ns">3525</MeanInterruptTime>
       <TotalInterruptTime unit="ns">3814476</TotalInterruptTime>
       <InterruptCount>1082</InterruptCount>
     </PerCPUProfilingReport>
     <!-- more reports omitted -->
     <SMPLockProfilingReport name="Scheduler">
       <MaxAcquireTime unit="ns">7092</MaxAcquireTime>
       <MaxSectionTime unit="ns">10984</MaxSectionTime>
       <MeanAcquireTime unit="ns">2320</MeanAcquireTime>
       <MeanSectionTime unit="ns">199</MeanSectionTime>
       <TotalAcquireTime unit="ns">3523939244</TotalAcquireTime>
       <TotalSectionTime unit="ns">302545596</TotalSectionTime>
       <UsageCount>1518758</UsageCount>
       <ContentionCount initialQueueLength="0">759399</ContentionCount>
       <ContentionCount initialQueueLength="1">759359</ContentionCount>
       <ContentionCount initialQueueLength="2">0</ContentionCount>
       <ContentionCount initialQueueLength="3">0</ContentionCount>
     </SMPLockProfilingReport>
   </ProfilingReport>
697
Scheduler Helping Protocol
--------------------------

The scheduler provides a helping protocol to support locking protocols like the
:ref:`OMIP` or the :ref:`MrsP`.  Each thread has a scheduler node for each
scheduler instance in the system; these nodes are located in its :term:`TCB`.
A thread has exactly one home scheduler instance, which is set during thread
creation.  The home scheduler instance can be changed with
:ref:`rtems_task_set_scheduler() <rtems_task_set_scheduler>`.  Due to the
locking protocols, a thread may gain access to scheduler nodes of other
scheduler instances.  This allows the thread to temporarily migrate to another
scheduler instance in case of preemption.

The scheduler infrastructure is based on an object-oriented design.  The
scheduler operations for a thread are defined as virtual functions.  For the
scheduler helping protocol, the following operations must be implemented by an
SMP-aware scheduler:

- ask a scheduler node for help,

- reconsider the help request of a scheduler node, and

- withdraw a scheduler node.

All currently available SMP-aware schedulers use a framework which is
customized via inline functions.  This eases the implementation of scheduler
variants.  Up to now, only priority-based schedulers are implemented.

In case a thread is allowed to use more than one scheduler node, it will ask
these nodes for help

- in case of preemption,

- in case an unblock operation did not schedule the thread, or

- in case a yield operation was successful.

The actual ask for help scheduler operations are carried out as a side-effect
of the thread dispatch procedure.  Once a need for help is recognized, a help
request is registered in one of the processors related to the thread and a
thread dispatch is issued.  This indirection leads to a better decoupling of
scheduler instances.  Unrelated processors are not burdened with extra work for
threads which participate in resource sharing.  Each ask for help operation
indicates whether it could help or not.  The procedure stops after the first
successful ask for help.  Unsuccessful ask for help operations register the
need for help in the scheduler context.

After a thread dispatch, the reconsider help request operation is used to clean
up stale help registrations in the scheduler contexts.

The withdraw operation takes away scheduler nodes once the thread is no longer
allowed to use them, for example after it has released a mutex.  The
availability of scheduler nodes for a thread is controlled by the thread
queues.

Thread Dispatch Details
-----------------------

This section gives background information to developers interested in the
interrupt latencies introduced by thread dispatching.  A thread dispatch
consists of all work which must be done to stop the currently executing thread
on a processor and hand over this processor to an heir thread.

In SMP systems, scheduling decisions on one processor must be propagated
to other processors through inter-processor interrupts.  A thread dispatch
which must be carried out on another processor does not happen instantaneously.
Thus, several thread dispatch requests might be in flight at the same time, and
some of them may be out of date before the corresponding processor has time to
deal with them.  The thread dispatch mechanism uses three per-processor
variables:

- the executing thread,

- the heir thread, and

- a boolean flag indicating if a thread dispatch is necessary or not.

Updates of the heir thread are done via a normal store operation.  The thread
dispatch necessary indicator of another processor is set as a side-effect of an
inter-processor interrupt.  So, this change notification works without the use
of locks.  The thread context is protected by a :term:`TTAS` lock embedded in
the context to ensure that it is used on at most one processor at a time.
Normally, only thread-specific or per-processor locks are used during a thread
dispatch.  This implementation turned out to be quite efficient and no lock
contention was observed in the testsuite.  The heavy-weight thread dispatch
sequence is only entered in case the thread dispatch necessary indicator is
set.

The context-switch is performed with interrupts enabled.  During the
transition from the executing thread to the heir thread, neither the stack of
the executing thread nor the stack of the heir thread may be used during
interrupt processing.  For this purpose, a temporary per-processor stack is
set up which may be used by the interrupt prologue before the stack is
switched to the interrupt stack.