source: rtems-docs/c-user/key_concepts.rst @ 12dccfe

Last change on this file since 12dccfe was 12dccfe, checked in by Sebastian Huber <sebastian.huber@…>, on 01/09/19 at 15:14:05

.. comment SPDX-License-Identifier: CC-BY-SA-4.0

.. Copyright (C) 1988, 2008 On-Line Applications Research Corporation (OAR)

Key Concepts
************

Introduction
============

The facilities provided by RTEMS are built upon a foundation of very powerful
concepts.  These concepts must be understood before the application developer
can efficiently utilize RTEMS.  The purpose of this chapter is to familiarize
one with these concepts.

.. _objects:

.. index:: objects

Objects
=======

RTEMS provides directives which can be used to dynamically create, delete, and
manipulate a set of predefined object types.  These types include tasks,
message queues, semaphores, memory regions, memory partitions, timers, ports,
and rate monotonic periods.  The object-oriented nature of RTEMS encourages the
creation of modular applications built upon re-usable "building block"
routines.

All objects are created on the local node as required by the application and
have an RTEMS assigned ID.  All objects have a user-assigned name.  Although a
relationship exists between an object's name and its RTEMS assigned ID, the
name and ID are not identical.  Object names are completely arbitrary and
selected by the user as a meaningful "tag" which may commonly reflect the
object's use in the application.  Conversely, object IDs are designed to
facilitate efficient object manipulation by the executive.
.. index:: object name
.. index:: rtems_name

Object Names
------------

An object name is an unsigned thirty-two bit entity associated with the object
by the user.  The data type ``rtems_name`` is used to store object
names.

.. index:: rtems_build_name

Although not required by RTEMS, object names are often composed of four ASCII
characters which help identify that object.  For example, a task which causes a
light to blink might be called "LITE".  The ``rtems_build_name`` routine is
provided to build an object name from four ASCII characters.  The following
example illustrates this:

.. code-block:: c

    rtems_name my_name;
    my_name = rtems_build_name( 'L', 'I', 'T', 'E' );

However, it is not required that the application use ASCII characters to build
object names.  For example, if an application requires one-hundred tasks, it
would be difficult to assign meaningful ASCII names to each task.  A more
convenient approach would be to name them the binary values one through
one-hundred, respectively.
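The numeric-naming approach can be sketched as follows.  The packing macro
below is purely illustrative and simply mirrors the four-byte layout produced
by ``rtems_build_name``; on an RTEMS target the real type and routine from
``<rtems.h>`` would be used instead:

.. code-block:: c

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative stand-in for rtems_name and the four-character packing
     * performed by rtems_build_name(). */
    typedef uint32_t rtems_name;

    #define BUILD_NAME( c1, c2, c3, c4 ) \
        ( (rtems_name) ( ( (uint32_t)(c1) << 24 ) | ( (uint32_t)(c2) << 16 ) | \
                         ( (uint32_t)(c3) << 8 )  |   (uint32_t)(c4) ) )

    int main( void )
    {
        /* A four-character ASCII name ... */
        rtems_name lite = BUILD_NAME( 'L', 'I', 'T', 'E' );
        assert( lite == 0x4C495445 );

        /* ... and purely numeric names for a family of one-hundred tasks. */
        rtems_name task_names[ 100 ];
        for ( uint32_t i = 0; i < 100; ++i ) {
            task_names[ i ] = i + 1;  /* binary values one through one-hundred */
        }
        assert( task_names[ 0 ] == 1 && task_names[ 99 ] == 100 );
        return 0;
    }

Either style is acceptable to RTEMS; only the application interprets the name,
so a scheme should be chosen once and used consistently.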

.. index:: rtems_object_get_name

RTEMS provides a helper routine, ``rtems_object_get_name``, which can be used
to obtain the name of any RTEMS object using just its ID.  This routine
attempts to convert the name into a printable string.

The following example illustrates the use of this method to print an object
name:

.. code-block:: c

    #include <rtems.h>
    #include <rtems/bspIo.h>

    void print_name(rtems_id id)
    {
        char  buffer[10];   /* name assumed to be 9 characters or less */
        char *result;

        result = rtems_object_get_name( id, sizeof(buffer), buffer );
        printk( "ID=0x%08x name=%s\n", id, ((result) ? result : "no name") );
    }

.. index:: object ID
.. index:: object ID composition
.. index:: rtems_id

Object IDs
----------

An object ID is a 32-bit unsigned integer value which uniquely identifies an
object instance.  Object IDs are passed as arguments to many directives in
RTEMS and RTEMS translates the ID to an internal object pointer.  The
efficient manipulation of object IDs is critical to the performance of some
RTEMS services.

Object ID Format
~~~~~~~~~~~~~~~~

The thirty-two bit format for an object ID is composed of four parts: API,
object class, node, and index.  The data type ``rtems_id`` is used to store
object IDs.

.. code-block:: c

    31      27 26   24 23          16 15                             0
    +---------+-------+--------------+-------------------------------+
    |         |       |              |                               |
    |  Class  |  API  |     Node     |             Index             |
    |         |       |              |                               |
    +---------+-------+--------------+-------------------------------+

The most significant five bits are the object class.  The next three bits
indicate the API to which the object class belongs.  The next eight bits
(16-23) are the number of the node on which this object was created.  The node
number is always one (1) in a single processor system.  The least significant
sixteen bits form an identifier within a particular object type.  This
identifier, called the object index, ranges in value from 1 to the maximum
number of objects configured for this object type.
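The layout above can be decoded with plain shifts and masks.  The macros below
are purely illustrative of the documented bit positions; applications should
prefer the ``rtems_object_id_get_*`` services described later in this section:

.. code-block:: c

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative decoding of the documented object ID layout:
     * class in bits 31-27, API in bits 26-24, node in bits 23-16,
     * index in bits 15-0. */
    #define ID_CLASS( id ) ( ( (uint32_t)(id) >> 27 ) & 0x1f )
    #define ID_API( id )   ( ( (uint32_t)(id) >> 24 ) & 0x07 )
    #define ID_NODE( id )  ( ( (uint32_t)(id) >> 16 ) & 0xff )
    #define ID_INDEX( id ) (   (uint32_t)(id)         & 0xffff )

    int main( void )
    {
        /* A hypothetical ID: class 2, API 1, node 1, index 5. */
        uint32_t id = ( 2u << 27 ) | ( 1u << 24 ) | ( 1u << 16 ) | 5u;

        assert( ID_CLASS( id ) == 2 );
        assert( ID_API( id ) == 1 );
        assert( ID_NODE( id ) == 1 );
        assert( ID_INDEX( id ) == 5 );
        return 0;
    }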

Object ID Description
---------------------

The components of an object ID make it possible to quickly locate any object in
even the most complicated multiprocessor system.  Object IDs are associated
with an object by RTEMS when the object is created and the corresponding ID is
returned by the appropriate object create directive.  The object ID is required
as input to all directives involving objects, except those which create an
object or obtain the ID of an object.

The object identification directives can be used to dynamically obtain a
particular object's ID given its name.  This mapping is accomplished by
searching the name table associated with this object type.  If the name is
non-unique, then the ID associated with the first occurrence of the name will
be returned to the application.  Since object IDs are returned when the object
is created, the object identification directives are not necessary in a
properly designed single processor application.
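For example, a task created elsewhere with the name ``"LITE"`` could be looked
up with the task manager's identification directive.  This is a sketch with
minimal error handling; the function name is an arbitrary choice:

.. code-block:: c

    #include <rtems.h>

    rtems_id find_lite_task( void )
    {
        rtems_id          id;
        rtems_status_code sc;

        /* Search all nodes for a task created with the name "LITE". */
        sc = rtems_task_ident(
            rtems_build_name( 'L', 'I', 'T', 'E' ),
            RTEMS_SEARCH_ALL_NODES,
            &id
        );
        if ( sc != RTEMS_SUCCESSFUL ) {
            id = 0;  /* not found; a real application would handle this error */
        }
        return id;
    }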

In addition, services are provided to portably examine the subcomponents of an
RTEMS ID.  These services are described in detail later in this manual but are
prototyped as follows:

.. index:: obtaining class from object ID
.. index:: obtaining node from object ID
.. index:: obtaining index from object ID
.. index:: get class from object ID
.. index:: get node from object ID
.. index:: get index from object ID
.. index:: rtems_object_id_get_api
.. index:: rtems_object_id_get_class
.. index:: rtems_object_id_get_node
.. index:: rtems_object_id_get_index

.. code-block:: c

    uint32_t rtems_object_id_get_api( rtems_id );
    uint32_t rtems_object_id_get_class( rtems_id );
    uint32_t rtems_object_id_get_node( rtems_id );
    uint32_t rtems_object_id_get_index( rtems_id );

An object control block is a data structure defined by RTEMS which contains the
information necessary to manage a particular object type.  For efficiency
reasons, the format of each object type's control block is different.  However,
many of the fields are similar in function.  The number of each type of control
block is application dependent and determined by the values specified in the
user's Configuration Table.  An object control block is allocated at object
create time and freed when the object is deleted.  With the exception of user
extension routines, object control blocks are not directly manipulated by user
applications.

.. index:: communication and synchronization

Communication and Synchronization
=================================

In real-time multitasking applications, the ability for cooperating execution
threads to communicate and synchronize with each other is imperative.  A
real-time executive should provide an application with the following
capabilities:

- Data transfer between cooperating tasks

- Data transfer between tasks and ISRs

- Synchronization of cooperating tasks

- Synchronization of tasks and ISRs

Most RTEMS managers can be used to provide some form of communication and/or
synchronization.  However, managers dedicated specifically to communication and
synchronization provide well established mechanisms which directly map to the
application's varying needs.  This level of flexibility allows the application
designer to match the features of a particular manager with the complexity of
communication and synchronization required.  The following managers were
specifically designed for communication and synchronization:

- Semaphore

- Message Queue

- Event

- Signal

The semaphore manager supports mutual exclusion involving the synchronization
of access to one or more shared user resources.  Binary semaphores may utilize
the optional priority inheritance algorithm to avoid the problem of priority
inversion.  The message manager supports both communication and
synchronization, while the event manager primarily provides a high performance
synchronization mechanism.  The signal manager supports only asynchronous
communication and is typically used for exception handling.
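As an illustration of the message queue manager's communication role, the
sketch below creates a queue and passes a fixed-size message between tasks.
The queue name, depth, and message size are arbitrary assumptions, and error
handling is minimal:

.. code-block:: c

    #include <rtems.h>

    /* Illustrative: a queue of up to 8 four-byte messages. */
    static rtems_id comm_queue;

    void create_comm_queue( void )
    {
        rtems_status_code sc;

        sc = rtems_message_queue_create(
            rtems_build_name( 'C', 'O', 'M', 'M' ),
            8,                   /* maximum pending messages */
            sizeof( uint32_t ),  /* maximum message size */
            RTEMS_FIFO,
            &comm_queue
        );
        if ( sc != RTEMS_SUCCESSFUL ) {
            /* a real application would handle the failure */
        }
    }

    void producer_send( uint32_t value )
    {
        (void) rtems_message_queue_send( comm_queue, &value, sizeof( value ) );
    }

    uint32_t consumer_receive( void )
    {
        uint32_t value = 0;
        size_t   size;

        /* Block until a message arrives. */
        (void) rtems_message_queue_receive(
            comm_queue, &value, &size, RTEMS_WAIT, RTEMS_NO_TIMEOUT
        );
        return value;
    }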

.. index:: locking protocols

Locking Protocols
=================

RTEMS supports the four locking protocols

* :ref:`PriorityCeiling`,

* :ref:`PriorityInheritance`,

* :ref:`MrsP`, and

* :ref:`OMIP`

for synchronization objects providing mutual-exclusion (mutex).  The OMIP is
only available in SMP configurations and replaces the priority inheritance
protocol in this case.  One aim of the locking protocols is to avoid priority
inversion.

Since RTEMS 5.1, priority updates due to the locking protocols take place
immediately and are propagated recursively.  The mutex ownership and the
wait-for-mutex relationships define a directed acyclic graph (DAG).  The
run-time of the mutex obtain, release and timeout operations depends on the
complexity of this resource dependency graph.

.. index:: priority inversion

.. _PriorityInversion:

Priority Inversion
------------------

Priority inversion is a form of indefinite postponement which is common in
multitasking, preemptive executives with shared resources.  Priority inversion
occurs when a high priority task requests access to a shared resource which is
currently allocated to a low priority task.  The high priority task must block
until the low priority task releases the resource.  This problem is exacerbated
when the low priority task is prevented from executing by one or more medium
priority tasks.  Because the low priority task is not executing, it cannot
complete its interaction with the resource and release that resource.  The high
priority task is effectively prevented from executing by lower priority tasks.

.. index:: priority ceiling protocol
.. index:: immediate ceiling priority protocol

.. _PriorityCeiling:

Immediate Ceiling Priority Protocol (ICPP)
------------------------------------------

Each mutex using the Immediate Ceiling Priority Protocol (ICPP) has a ceiling
priority.  The priority of the mutex owner is immediately raised to the ceiling
priority of the mutex.  In case the thread owning the mutex releases the mutex,
then the normal priority of the thread is restored.  This locking protocol is
beneficial for schedulability analysis, see also
:cite:`Burns:2001:RealTimeSystems`.

This protocol avoids the possibility of changing the priority of the mutex
owner multiple times since the ceiling priority must be set to that of the
highest priority thread which will ever attempt to acquire that mutex.  This
requires an overall knowledge of the application as a whole.  The need to
identify the highest priority thread which will attempt to obtain a particular
mutex can be a difficult task in a large, complicated system.  Although the
priority ceiling protocol is more efficient than the priority inheritance
protocol with respect to the maximum number of thread priority changes which
may occur while a thread owns a particular mutex, the priority inheritance
protocol is more forgiving in that it does not require this a priori
information.
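In the Classic API, an ICPP mutex can be created through the semaphore
manager.  The following sketch uses a ceiling priority of 5 chosen purely for
illustration:

.. code-block:: c

    #include <rtems.h>

    rtems_id create_icpp_mutex( void )
    {
        rtems_id          mutex;
        rtems_status_code sc;

        /* A binary semaphore with priority discipline and a priority
         * ceiling.  The ceiling priority of 5 is an arbitrary example and
         * must be at least as high as the priority of the highest priority
         * thread that will ever acquire this mutex. */
        sc = rtems_semaphore_create(
            rtems_build_name( 'M', 'U', 'T', 'X' ),
            1,  /* initially available */
            RTEMS_BINARY_SEMAPHORE | RTEMS_PRIORITY | RTEMS_PRIORITY_CEILING,
            5,  /* ceiling priority */
            &mutex
        );
        if ( sc != RTEMS_SUCCESSFUL ) {
            mutex = 0;  /* a real application would handle the failure */
        }
        return mutex;
    }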

.. index:: priority inheritance protocol

.. _PriorityInheritance:

Priority Inheritance Protocol
-----------------------------

The priority of the mutex owner is raised to the highest priority of all
threads that currently wait for ownership of this mutex :cite:`Sha:1990:PI`.
Since RTEMS 5.1, priority updates due to the priority inheritance protocol
take place immediately and are propagated recursively.

.. index:: Multiprocessor Resource Sharing Protocol (MrsP)

.. _MrsP:

Multiprocessor Resource Sharing Protocol (MrsP)
-----------------------------------------------

The Multiprocessor Resource Sharing Protocol (MrsP) is a generalization of the
priority ceiling protocol to clustered scheduling :cite:`Burns:2013:MrsP`.  One
of the design goals of MrsP is to enable an effective schedulability analysis
using the sporadic task model.  Each mutex using the MrsP has a ceiling
priority for each scheduler instance.  The priority of the mutex owner is
immediately raised to the ceiling priority of the mutex defined for its home
scheduler instance.  In case the thread owning the mutex releases the mutex,
then the normal priority of the thread is restored.  Threads that wait for
mutex ownership are not blocked with respect to the scheduler and instead
perform a busy wait.  The MrsP uses temporary thread migrations to foreign
scheduler instances in case of a preemption of the mutex owner.  This locking
protocol is available since RTEMS 4.11.  It was re-implemented in RTEMS 5.1 to
overcome some shortcomings of the original implementation
:cite:`Catellani:2015:MrsP`.

.. index:: O(m) Independence-Preserving Protocol (OMIP)

.. _OMIP:

O(m) Independence-Preserving Protocol (OMIP)
--------------------------------------------

The :math:`O(m)` Independence-Preserving Protocol (OMIP) is a generalization of
the priority inheritance protocol to clustered scheduling which avoids the
non-preemptive sections present with priority boosting
:cite:`Brandenburg:2013:OMIP`.  The :math:`m` denotes the number of processors
in the system.  Similar to the uniprocessor priority inheritance protocol, the
OMIP mutexes do not need any external configuration data, e.g. a ceiling
priority.  This makes them a good choice for general purpose libraries that
need internal locking.  The complex part of the implementation is contained in
the thread queues and shared with the MrsP support.  This locking protocol is
available since RTEMS 5.1.

.. index:: thread queues

Thread Queues
=============

In case more than one :term:`thread` may wait on a synchronization object, e.g.
a semaphore or a message queue, then the waiting threads are added to a data
structure called the thread queue.  Thread queues are named task wait queues in
the Classic API.  There are two thread queuing disciplines available which
define the order of the threads on a particular thread queue.  Threads can wait
in FIFO or priority order.

In uniprocessor configurations, the priority queuing discipline just orders
the threads according to their current priority and in FIFO order in case of
equal priorities.  However, in SMP configurations, the situation is a bit more
difficult due to the support for clustered scheduling.  It makes no sense to
compare the priority values of two different scheduler instances.  Thus, it is
impossible to simply use one plain priority queue for threads of different
clusters.  Two levels of queues can be used as one way to solve the problem.
The top-level queue provides FIFO ordering and contains priority queues.  Each
priority queue is associated with a scheduler instance and contains only
threads of this scheduler instance.  Threads are enqueued in the priority
queues corresponding to their scheduler instances.  To dequeue a thread, the
highest priority thread of the first priority queue is selected.  Once this is
done, the first priority queue is appended to the top-level FIFO queue.  This
guarantees fairness with respect to the scheduler instances.

Such a two-level queue needs a considerable amount of memory if fast enqueue
and dequeue operations are desired.  Providing this storage per thread queue
would waste a lot of memory in typical applications.  Instead, each thread has
a queue attached which resides in a dedicated memory space independent of other
memory used for the thread (this approach was borrowed from FreeBSD).  In case
a thread needs to block, there are two options:

* the object already has a queue, then the thread enqueues itself to this
  already present queue and the queue of the thread is added to a list of free
  queues for this object, or

* otherwise, the queue of the thread is given to the object and the thread
  enqueues itself to this queue.

In case the thread is dequeued, there are two options:

* the thread is the last thread in the queue, then it removes this queue
  from the object and reclaims it for its own purpose, or

* otherwise, the thread removes one queue from the free list of the object
  and reclaims it for its own purpose.

Since there are usually more objects than threads, this actually reduces the
memory demands.  In addition, the objects only contain a pointer to the queue
structure.  This helps to hide implementation details.  Inter-cluster priority
queues are available since RTEMS 5.1.

A doubly-linked list (chain) is used to implement the FIFO queues yielding a
:math:`O(1)` worst-case time complexity for enqueue and dequeue operations.

A red-black tree is used to implement the priority queues yielding a
:math:`O(\log(n))` worst-case time complexity for enqueue and dequeue
operations with :math:`n` being the count of threads already on the queue.

.. index:: time

Time
====

The development of responsive real-time applications requires an understanding
of how RTEMS maintains and supports time-related operations.  The basic unit of
time in RTEMS is known as a `clock tick` or simply `tick`.  The tick interval
is defined by the application configuration option
:ref:`CONFIGURE_MICROSECONDS_PER_TICK <CONFIGURE_MICROSECONDS_PER_TICK>`.  The
tick interval defines the basic resolution of all interval and calendar time
operations.  Obviously, the directives which use intervals or wall time cannot
operate without some external mechanism which provides a periodic clock tick.
This clock tick is provided by the clock driver.  The tick precision and
stability depend on the clock driver and interrupt latency.  Most clock
drivers provide a timecounter to measure the time with a higher resolution than
the tick.

.. index:: rtems_interval

By tracking time in units of ticks, RTEMS is capable of supporting interval
timing functions such as task delays, timeouts, timeslicing, the delayed
execution of timer service routines, and the rate monotonic scheduling of
tasks.  An interval is defined as a number of ticks relative to the current
time.  For example, when a task delays for an interval of ten ticks, it is
implied that the task will not execute until ten clock ticks have occurred.
All intervals are specified using data type :c:type:`rtems_interval`.
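The ten-tick delay above can be sketched with the task manager's delay
directive.  The millisecond variant assumes only the standard
``RTEMS_MILLISECONDS_TO_TICKS`` conversion macro:

.. code-block:: c

    #include <rtems.h>

    void delay_ten_ticks( void )
    {
        /* Block the calling task for an interval of ten clock ticks. */
        (void) rtems_task_wake_after( 10 );
    }

    void delay_fifty_milliseconds( void )
    {
        /* Intervals can also be derived from wall-clock units; this macro
         * converts using the configured tick interval. */
        (void) rtems_task_wake_after( RTEMS_MILLISECONDS_TO_TICKS( 50 ) );
    }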

A characteristic of interval timing is that the actual interval period may be a
fraction of a tick less than the interval requested.  This occurs because the
time at which the delay timer is set up occurs at some time between two clock
ticks.  Therefore, the first countdown tick occurs in less than the complete
time interval for a tick.  This can be a problem if the tick resolution is
large.

The rate monotonic scheduling algorithm is a hard real-time scheduling
methodology.  This methodology provides rules which allow one to guarantee
that a set of independent periodic tasks will always meet their deadlines even
under transient overload conditions.  The rate monotonic manager provides
directives built upon the Clock Manager's interval timer support routines.

Interval timing is not sufficient for the many applications which require that
time be kept in wall time or true calendar form.  Consequently, RTEMS maintains
the current date and time.  This allows selected time operations to be
scheduled at an actual calendar date and time.  For example, a task could
request to delay until midnight on New Year's Eve before lowering the ball at
Times Square.  The data type :c:type:`rtems_time_of_day` is used to specify
calendar time in RTEMS services.  See :ref:`Time and Date Data Structures`.
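The New Year's Eve example could be sketched as follows; the year is an
arbitrary assumption for this illustration:

.. code-block:: c

    #include <rtems.h>

    void wait_until_new_year( void )
    {
        /* Delay until midnight on New Year's Eve. */
        rtems_time_of_day midnight = {
            .year   = 2020,
            .month  = 1,
            .day    = 1,
            .hour   = 0,
            .minute = 0,
            .second = 0,
            .ticks  = 0
        };

        (void) rtems_task_wake_when( &midnight );
    }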

.. index:: rtems_time_of_day

Timer and Timeouts
==================

Timer and timeout services are a standard component of an operating system.
The use cases fall roughly into two categories:

* Timeouts -- used to detect if some operations need more time than expected.
  Since the unexpected happens hopefully rarely, timeout timers are usually
  removed before they expire.  The critical operations are insert and removal.
  For example, they are important for the performance of a network stack.

* Timers -- used to carry out some work in the future.  They usually expire
  and need a high resolution.  An example use case is a time driven scheduler,
  e.g. rate-monotonic or EDF.

In RTEMS versions prior to 5.1 the timer and timeout support was implemented
by means of delta chains.  This implementation was unfit for SMP systems for
several reasons.  The new implementation present since RTEMS 5.1 uses a
red-black tree with the expiration time as the key.  This leads to
:math:`O(\log(n))` worst-case insert and removal operations for :math:`n`
active timers or timeouts.  Each processor provides its own timer and timeout
service point so that it scales well with the processor count of the system.
For each operation it is sufficient to acquire and release a dedicated SMP
lock only once.  The drawback is that a 64-bit integer type is required
internally for the intervals to avoid a potential overflow of the key values.

An alternative to the red-black tree based implementation would be the use of a
timer wheel based algorithm :cite:`Varghese:1987:TimerWheel` which is used in
Linux and FreeBSD :cite:`Varghese:1995:BSDCallout` for example.  A timer wheel
based algorithm offers :math:`O(1)` worst-case time complexity for insert and
removal operations.  The drawback is that the run-time of the clock tick
procedure is unpredictable due to the use of a hash table or cascading.

The red-black tree approach was selected for RTEMS, since it offers a more
predictable run-time behaviour.  However, this sacrifices the constant insert
and removal operations offered by the timer wheel algorithms.  See also
:cite:`Gleixner:2006:Hrtimers`.  The implementation can re-use the red-black
tree support already used in other areas, e.g. for the thread priority queues.
Less code is a good thing for size, testing and verification.

.. index:: memory management

Memory Management
=================

RTEMS memory management facilities can be grouped into two classes: dynamic
memory allocation and address translation.  Dynamic memory allocation is
required by applications whose memory requirements vary through the
application's course of execution.  Address translation is needed by
applications which share memory with another CPU or an intelligent Input/Output
processor.  The following RTEMS managers provide facilities to manage memory:

- Region

- Partition

- Dual Ported Memory

RTEMS memory management features allow an application to create simple memory
pools of fixed size buffers and/or more complex memory pools of variable size
segments.  The partition manager provides directives to manage and maintain
pools of fixed size entities such as resource control blocks.  Alternatively,
the region manager provides a more general purpose memory allocation scheme
that supports variable size blocks of memory which are dynamically obtained and
freed by the application.  The dual-ported memory manager provides executive
support for address translation between internal and external dual-ported RAM
address space.
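A fixed-size pool of resource control blocks, as mentioned above, could be
sketched with the partition manager.  The block type, pool size, and names are
arbitrary assumptions; the backing storage must satisfy the partition
manager's alignment and minimum buffer size requirements:

.. code-block:: c

    #include <rtems.h>

    /* Illustrative: a pool of 16 fixed-size control blocks. */
    typedef struct {
        uint32_t id;
        uint32_t state;
        uint32_t payload[ 6 ];
    } resource_control_block;

    #define POOL_BLOCKS 16

    static resource_control_block pool_area[ POOL_BLOCKS ];
    static rtems_id               pool_id;

    void create_pool( void )
    {
        rtems_status_code sc;

        sc = rtems_partition_create(
            rtems_build_name( 'P', 'O', 'O', 'L' ),
            pool_area,
            sizeof( pool_area ),
            sizeof( resource_control_block ),
            RTEMS_DEFAULT_ATTRIBUTES,
            &pool_id
        );
        if ( sc != RTEMS_SUCCESSFUL ) {
            /* a real application would handle the failure */
        }
    }

    resource_control_block *allocate_block( void )
    {
        void *buffer;

        if ( rtems_partition_get_buffer( pool_id, &buffer ) != RTEMS_SUCCESSFUL ) {
            return NULL;
        }
        return buffer;
    }

    void free_block( resource_control_block *block )
    {
        (void) rtems_partition_return_buffer( pool_id, block );
    }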