source: rtems-docs/c-user/multiprocessing.rst @ 12dccfe

.. comment SPDX-License-Identifier: CC-BY-SA-4.0

.. Copyright (C) 1988, 2008 On-Line Applications Research Corporation (OAR)

.. index:: multiprocessing

Multiprocessing Manager
***********************

Introduction
============

In multiprocessor real-time systems, new requirements, such as sharing data and
global resources between processors, are introduced.  This requires an
efficient and reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the ramifications of
multiple processors affect each and every characteristic of a real-time system,
almost always making them more complicated.

RTEMS addresses these issues by providing simple and flexible real-time
multiprocessing capabilities.  The executive easily lends itself to both
tightly-coupled and loosely-coupled configurations of the target system
hardware.  In addition, RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the physical
boundaries of the target hardware configuration.  This goal is achieved by
presenting the application software with a logical view of the target system
where the boundaries between processor nodes are transparent.  As a result, the
application developer may designate objects such as tasks, queues, events,
signals, semaphores, and memory blocks as global objects.  These global objects
may then be accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines that the object
being accessed resides on another processor and performs the actions required
to access the desired object.  Simply stated, RTEMS allows the entire system,
both hardware and software, to be viewed logically as a single system.

The directives provided by the Multiprocessing Manager are:

- rtems_multiprocessing_announce_ - A multiprocessing communications packet has
  arrived

.. index:: multiprocessing topologies

Background
==========

RTEMS makes no assumptions regarding the connection media or topology of a
multiprocessor system.  The tasks which compose a particular application can be
spread among as many processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset of the RTEMS
directives as if they were on the same processor.  These directives allow
application tasks to exchange data, communicate, and synchronize regardless of
which processor they reside upon.

The RTEMS multiprocessor execution model is multiple instruction streams with
multiple data streams (MIMD).  This execution model has each of the processors
executing code independent of the other processors.  Because of this
parallelism, the application designer can more easily guarantee deterministic
behavior.

By supporting heterogeneous environments, RTEMS allows the systems designer to
select the most efficient processor for each subsystem of the application.
Configuring RTEMS for a heterogeneous environment is no more difficult than for
a homogeneous one.  In keeping with the RTEMS philosophy of providing
transparent physical node boundaries, the minimal heterogeneous processing
required is isolated in the MPCI layer.

.. index:: nodes, definition

Nodes
-----

A processor in an RTEMS system is referred to as a node.  Each node is assigned
a unique non-zero node number by the application designer.  RTEMS assumes that
node numbers are assigned consecutively from one to the ``maximum_nodes``
configuration parameter.  The node number, ``node``, and the maximum number of
nodes, ``maximum_nodes``, in a system are found in the Multiprocessor
Configuration Table.  The ``maximum_nodes`` field and the number of global
objects, ``maximum_global_objects``, are required to be the same on all nodes
in a system.

The node number is used by RTEMS to identify each node when performing remote
operations.  Thus, the Multiprocessor Communications Interface Layer (MPCI)
must be able to route messages based on the node number.

.. index:: global objects, definition

Global Objects
--------------

All RTEMS objects which are created with the GLOBAL attribute will be known on
all other nodes.  Global objects can be referenced from any node in the system,
although certain directive-specific restrictions (e.g. one cannot delete a
remote object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of global objects in
the system is user configurable and can be found in the
``maximum_global_objects`` field in the Multiprocessor Configuration Table.
The distribution of tasks to processors is performed during the application
design phase.  Dynamic task relocation is not supported by RTEMS.

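For example, a semaphore which tasks on any node may obtain can be created by
including the ``RTEMS_GLOBAL`` attribute.  The following minimal sketch is
illustrative only; the object name, initial count, and error handling are
arbitrary:

.. code-block:: c

    #include <rtems.h>

    rtems_id Global_semaphore_id;

    void create_global_semaphore( void )
    {
      rtems_status_code status;

      /* RTEMS_GLOBAL makes this semaphore visible to every node in the
         system; it consumes an entry in each node's global object table. */
      status = rtems_semaphore_create(
        rtems_build_name( 'G', 'S', 'E', 'M' ),
        1,                          /* initial count */
        RTEMS_GLOBAL | RTEMS_FIFO,  /* global object, FIFO wait queue */
        0,                          /* priority ceiling (unused here) */
        &Global_semaphore_id
      );

      if ( status != RTEMS_SUCCESSFUL ) {
        /* e.g. no space remains in the global object table */
      }
    }

Tasks on other nodes can then look up the identifier of this semaphore with
``rtems_semaphore_ident`` using ``RTEMS_SEARCH_ALL_NODES``.
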
.. index:: global objects table

Global Object Table
-------------------

RTEMS maintains two tables containing object information on every node in a
multiprocessor system: a local object table and a global object table.  The
local object table on each node is unique and contains information for all
objects created on this node whether those objects are local or global.  The
global object table contains information regarding all global objects in the
system and, consequently, is the same on every node.

Since each node must maintain an identical copy of the global object table, the
maximum number of entries in each copy of the table must be the same.  The
maximum number of entries in each copy is determined by the
``maximum_global_objects`` parameter in the Multiprocessor Configuration Table.
This parameter, as well as the ``maximum_nodes`` parameter, is required to be
the same on all nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion of a global
object.

.. index:: MPCI and remote operations

Remote Operations
-----------------

When an application performs an operation on a remote global object, RTEMS must
generate a Remote Request (RQ) message and send it to the appropriate node.
After completing the requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.  Messages generated
as a side-effect of a directive (such as deleting a global task) are known as
Remote Processes (RP) and do not require the receiving node to respond.

Other than taking slightly longer to execute directives on remote objects, the
application is unaware of the location of the objects it acts upon.  The exact
amount of overhead required for a remote operation is dependent on the media
connecting the nodes and, to a lesser degree, on the efficiency of the
user-provided MPCI routines.

The following shows the typical transaction sequence for a remote operation:

#. The application issues a directive accessing a remote global object.

#. RTEMS determines the node on which the object resides.

#. RTEMS calls the user-provided MPCI routine ``GET_PACKET`` to obtain a packet
   in which to build an RQ message.

#. After building a message packet, RTEMS calls the user-provided MPCI routine
   ``SEND_PACKET`` to transmit the packet to the node on which the object
   resides (referred to as the destination node).

#. The calling task is blocked until the RR message arrives, and control of the
   processor is transferred to another task.

#. The MPCI layer on the destination node senses the arrival of a packet
   (commonly in an ISR), and calls the ``rtems_multiprocessing_announce``
   directive.  This directive readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, performs the requested operation, builds an RR message,
   and returns it to the originating node.

#. The MPCI layer on the originating node senses the arrival of a packet
   (typically via an interrupt), and calls the RTEMS
   ``rtems_multiprocessing_announce`` directive.  This directive readies the
   Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, readies the original requesting task, and blocks until
   another packet arrives.  Control is transferred to the original task which
   then completes processing of the directive.

If an uncorrectable error occurs in the user-provided MPCI layer, the fatal
error handler should be invoked.  RTEMS assumes the reliable transmission and
reception of messages by the MPCI and makes no attempt to detect or correct
errors.

.. index:: proxy, definition

Proxies
-------

A proxy is an RTEMS data structure which resides on a remote node and is used
to represent a task which must block as part of a remote operation.  This
action can occur as part of the ``rtems_semaphore_obtain`` and
``rtems_message_queue_receive`` directives.  If the object were local, the
task's control block would be available for modification to indicate it was
blocking on a message queue or semaphore.  However, the task's control block
resides only on the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the Multiprocessor Configuration
Table.  Each node in a multiprocessor system may require a different number of
proxies to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of tasks.

Multiprocessor Configuration Table
----------------------------------

The Multiprocessor Configuration Table contains information needed by RTEMS
when used in a multiprocessor system.  This table is discussed in detail in the
section Multiprocessor Configuration Table of the Configuring a System chapter.

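Applications that use ``<rtems/confdefs.h>`` normally supply these parameters
through the ``CONFIGURE_MP_*`` configuration options.  The fragment below is
only a sketch with arbitrary values; the authoritative list of options is in
the Configuring a System chapter:

.. code-block:: c

    /* Illustrative multiprocessing configuration; the values are arbitrary. */
    #define CONFIGURE_MP_APPLICATION

    #define CONFIGURE_MP_NODE_NUMBER             1
    #define CONFIGURE_MP_MAXIMUM_NODES           2
    #define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS 32
    #define CONFIGURE_MP_MAXIMUM_PROXIES        32

    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

    #define CONFIGURE_MAXIMUM_TASKS      4
    #define CONFIGURE_MAXIMUM_SEMAPHORES 4

    #define CONFIGURE_RTEMS_INIT_TASKS_TABLE
    #define CONFIGURE_INIT

    #include <rtems/confdefs.h>
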
Multiprocessor Communications Interface Layer
=============================================

The Multiprocessor Communications Interface Layer (MPCI) is a set of
user-provided procedures which enable the nodes in a multiprocessor system to
communicate with one another.  These routines are invoked by RTEMS at various
times in the preparation and processing of remote requests.  Interrupts are
enabled when an MPCI procedure is invoked.  It is assumed that if the execution
mode and/or interrupt level are altered by the MPCI layer, they will be
restored prior to returning to RTEMS.

.. index:: MPCI, definition

The MPCI layer is responsible for managing a pool of buffers called packets and
for sending these packets between system nodes.  Packet buffers contain the
messages sent between the nodes.  Typically, the MPCI layer will encapsulate
the packet within an envelope which contains the information needed by the MPCI
layer.  The number of packets available is dependent on the MPCI layer
implementation.

.. index:: MPCI entry points

The entry points to the routines in the user's MPCI layer should be placed in
the Multiprocessor Communications Interface Table.  The user must provide entry
points for each of the following table entries in a multiprocessor system:

.. list-table::
 :class: rtems-table

 * - initialization
   - initialize the MPCI
 * - get_packet
   - obtain a packet buffer
 * - return_packet
   - return a packet buffer
 * - send_packet
   - send a packet to another node
 * - receive_packet
   - called to get an arrived packet

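A sketch of one plausible table definition is shown below.  It assumes the
classic ``rtems_mpci_table`` layout in which a default timeout and maximum
packet size precede the five entry points; the entry point names and numeric
values are placeholders chosen for this example:

.. code-block:: c

    #include <rtems.h>

    /* Entry points provided by a hypothetical user MPCI layer. */
    rtems_mpci_entry user_mpci_initialization( rtems_configuration_table * );
    rtems_mpci_entry user_mpci_get_packet( rtems_packet_prefix ** );
    rtems_mpci_entry user_mpci_return_packet( rtems_packet_prefix * );
    rtems_mpci_entry user_mpci_send_packet( uint32_t, rtems_packet_prefix ** );
    rtems_mpci_entry user_mpci_receive_packet( rtems_packet_prefix ** );

    rtems_mpci_table user_mpci_table = {
      100000,                              /* default timeout in ticks */
      sizeof( rtems_packet_prefix ) + 64,  /* maximum packet size */
      user_mpci_initialization,            /* initialization */
      user_mpci_get_packet,                /* get_packet */
      user_mpci_return_packet,             /* return_packet */
      user_mpci_send_packet,               /* send_packet */
      user_mpci_receive_packet             /* receive_packet */
    };

The address of this table is made known to RTEMS through the Multiprocessor
Configuration Table, for example via ``CONFIGURE_MP_MPCI_TABLE_POINTER`` when
``<rtems/confdefs.h>`` is used.
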
A packet is sent by RTEMS in each of the following situations:

- an RQ is generated on an originating node;

- an RR is generated on a destination node;

- a global object is created;

- a global object is deleted;

- a local task blocked on a remote object is deleted;

- during system initialization to check for system consistency.

If the target hardware supports it, the arrival of a packet at a node may
generate an interrupt.  Otherwise, the real-time clock ISR can check for the
arrival of a packet.  In any case, the ``rtems_multiprocessing_announce``
directive must be called to announce the arrival of a packet.  After exiting
the ISR, control will be passed to the Multiprocessing Server to process the
packet.  The Multiprocessing Server will call the ``get_packet`` entry to
obtain a packet buffer and the ``receive_packet`` entry to copy the message
into the buffer obtained.

INITIALIZATION
--------------

The INITIALIZATION component of the user-provided MPCI layer is called as part
of the ``rtems_initialize_executive`` directive to initialize the MPCI layer
and associated hardware.  It is invoked immediately after all of the device
drivers have been initialized.  This component should adhere to the following
prototype:

.. index:: rtems_mpci_entry

.. code-block:: c

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    );

where ``configuration`` is the address of the user's Configuration Table.
Operations on global objects cannot be performed until this component is
invoked.  The INITIALIZATION component is invoked only once in the life of any
system.  If the MPCI layer cannot be successfully initialized, the fatal error
manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to provide the executive with
packet buffers.  The INITIALIZATION routine must create and initialize a pool
of packet buffers.  There must be enough packet buffers so RTEMS can obtain one
whenever needed.

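The following sketch shows one way such a routine might set up a static pool of
packet buffers.  The pool, its size, and the packet layout are assumptions of
this example and not part of the RTEMS API:

.. code-block:: c

    #include <rtems.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define USER_MPCI_PACKET_COUNT 16   /* arbitrary pool size */

    /* Hypothetical packet buffer: the RTEMS prefix followed by room for
       application data. */
    typedef struct {
      rtems_packet_prefix prefix;
      uint8_t             payload[ 64 ];
    } user_mpci_packet;

    static user_mpci_packet user_mpci_pool[ USER_MPCI_PACKET_COUNT ];
    static bool             user_mpci_pool_in_use[ USER_MPCI_PACKET_COUNT ];

    rtems_mpci_entry user_mpci_initialization(
      rtems_configuration_table *configuration
    )
    {
      (void) configuration;

      /* Mark every packet buffer as free; a real layer would also bring up
         the interconnect hardware here. */
      for ( size_t i = 0; i < USER_MPCI_PACKET_COUNT; ++i ) {
        user_mpci_pool_in_use[ i ] = false;
      }
    }
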
GET_PACKET
----------

The GET_PACKET component of the user-provided MPCI layer is called when RTEMS
must obtain a packet buffer to send or broadcast a message.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    );

where ``packet`` is the address of a pointer to a packet.  This routine always
succeeds and, upon return, ``packet`` will contain the address of a packet.
If, for any reason, a packet cannot be successfully obtained, then the fatal
error manager should be invoked.

RTEMS has been optimized to avoid the need for obtaining a packet each time a
message is sent or broadcast.  For example, RTEMS sends response messages (RR)
back to the originator in the same packet in which the request message (RQ)
arrived.

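Continuing the illustrative packet pool from the initialization sketch above, a
``GET_PACKET`` routine could be as simple as the following; a real layer must
additionally protect the pool against concurrent access:

.. code-block:: c

    rtems_mpci_entry user_mpci_get_packet(
      rtems_packet_prefix **packet
    )
    {
      /* Hand out the first free buffer from the hypothetical static pool. */
      for ( size_t i = 0; i < USER_MPCI_PACKET_COUNT; ++i ) {
        if ( !user_mpci_pool_in_use[ i ] ) {
          user_mpci_pool_in_use[ i ] = true;
          *packet = &user_mpci_pool[ i ].prefix;
          return;
        }
      }

      /* The pool is exhausted; RTEMS assumes this cannot happen. */
      rtems_fatal_error_occurred( 0xdead0001 );
    }
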
RETURN_PACKET
-------------

The RETURN_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to release a packet to the free packet buffer pool.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    );

where ``packet`` is the address of a packet.  If the packet cannot be
successfully returned, the fatal error manager should be invoked.

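For the pool used in the previous sketches, returning a packet only requires
marking its slot free again; pointer validation and locking are omitted here
for brevity:

.. code-block:: c

    rtems_mpci_entry user_mpci_return_packet(
      rtems_packet_prefix *packet
    )
    {
      /* The prefix is the first member of the pool entry, so the packet
         address is also the address of the enclosing buffer. */
      user_mpci_packet *buffer = (user_mpci_packet *) packet;

      user_mpci_pool_in_use[ buffer - user_mpci_pool ] = false;
    }
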
RECEIVE_PACKET
--------------

The RECEIVE_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to obtain a packet which has previously arrived.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    );

where ``packet`` is the address of a pointer to a packet.  If a message is
available, then, upon return, ``packet`` will contain the address of the packet
holding the message from another node.  If no messages are available, then
``packet`` should be set to ``NULL``.

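A sketch of a ``RECEIVE_PACKET`` routine is shown below.  It assumes a
hypothetical helper which dequeues the next packet delivered by the
interconnect, for example filled in by the receive ISR before it announced the
arrival:

.. code-block:: c

    /* Hypothetical helper: returns the oldest arrived packet, or NULL. */
    extern rtems_packet_prefix *user_mpci_dequeue_arrived( void );

    rtems_mpci_entry user_mpci_receive_packet(
      rtems_packet_prefix **packet
    )
    {
      /* Hand the next arrived packet to RTEMS, or NULL if none is pending. */
      *packet = user_mpci_dequeue_arrived();
    }
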
SEND_PACKET
-----------

The SEND_PACKET component of the user-provided MPCI layer is called when RTEMS
needs to send a packet containing a message to another node.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t               node,
        rtems_packet_prefix  **packet
    );

where ``node`` is the node number of the destination and ``packet`` is the
address of a packet containing a message.  If the packet cannot be successfully
sent, the fatal error manager should be invoked.

If ``node`` is set to zero, the packet is to be broadcast to all other nodes in
the system.  Although some MPCI layers will be built upon hardware which
supports a broadcast mechanism, others may be required to generate a copy of
the packet for each node in the system.

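The broadcast case can be handled by iterating over the nodes when the hardware
offers no broadcast support.  In the sketch below the transmit routine, the
node count, and the local node number are hypothetical helpers that a real MPCI
layer would provide:

.. code-block:: c

    /* Hypothetical helpers provided elsewhere in the MPCI layer. */
    extern void     user_mpci_transmit( uint32_t node, rtems_packet_prefix *packet );
    extern uint32_t user_mpci_maximum_nodes;
    extern uint32_t user_mpci_local_node;

    rtems_mpci_entry user_mpci_send_packet(
      uint32_t               node,
      rtems_packet_prefix  **packet
    )
    {
      if ( node != 0 ) {
        /* Unicast to the destination node. */
        user_mpci_transmit( node, *packet );
      } else {
        /* Broadcast: send one copy to every node except this one. */
        for ( uint32_t n = 1; n <= user_mpci_maximum_nodes; ++n ) {
          if ( n != user_mpci_local_node ) {
            user_mpci_transmit( n, *packet );
          }
        }
      }
    }
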
.. COMMENT: XXX packet_prefix structure needs to be defined in this document

Many MPCI layers use the ``packet_length`` field of the ``rtems_packet_prefix``
portion of the packet to avoid sending unnecessary data.  This is especially
useful if the media connecting the nodes is relatively slow.

The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the packet
indicates how much of the packet, in 32-bit units, may require conversion in a
heterogeneous system.

.. index:: heterogeneous multiprocessing

Supporting Heterogeneous Environments
-------------------------------------

Developing an MPCI layer for a heterogeneous system requires a thorough
understanding of the differences between the processors which comprise the
system.  One difficult problem is the varying data representation schemes used
by different processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.  Processors which place
the least significant byte at the smallest address are classified as little
endian processors.  Little endian byte-ordering is shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Conversely, processors which place the most significant byte at the smallest
address are classified as big endian processors.  Big endian byte-ordering is
shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Unfortunately, sharing a data structure between big endian and little endian
processors requires translation into a common endian format.  An application
designer typically chooses the common endian format to minimize conversion
overhead.

Another issue in the design of shared data structures is the alignment of data
structure elements.  Alignment is both processor and compiler implementation
dependent.  For example, some processors allow data elements to begin on any
address boundary, while others impose restrictions.  Common restrictions are
that data elements must begin on either an even address or on a long word
boundary.  Violation of these restrictions may cause an exception or impose a
performance penalty.

Other issues which commonly impact the design of shared data structures include
the representation of floating point numbers, bit fields, decimal data, and
character strings.  In addition, the representation method for negative
integers could be one's or two's complement.  These factors combine to increase
the complexity of designing and manipulating data structures shared between
processors.

RTEMS addresses these issues in the design of the packets used to communicate
between nodes.  The RTEMS packet format is designed to allow the MPCI layer to
perform all necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer must be aware
of the following:

- All packets must begin on a four byte boundary.

- Packets are composed of both RTEMS and application data.  All RTEMS data is
  treated as 32-bit unsigned quantities and is in the first ``to_convert``
  32-bit quantities of the packet.  The ``to_convert`` field is part of the
  ``rtems_packet_prefix`` portion of the packet.

- The RTEMS data component of the packet must be in native endian format.
  Endian conversion may be performed by either the sending or receiving MPCI
  layer (see the sketch after this list).

- RTEMS makes no assumptions regarding the application data component of the
  packet.

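The following sketch shows a sending-side conversion of the RTEMS portion of a
packet; the byte-swap helper is illustrative, and a real layer might use a
CPU-specific primitive instead:

.. code-block:: c

    #include <rtems.h>
    #include <stdint.h>

    static inline uint32_t user_mpci_swap_u32( uint32_t value )
    {
      return ( value >> 24 )
           | ( ( value >> 8 ) & 0x0000ff00u )
           | ( ( value << 8 ) & 0x00ff0000u )
           | ( value << 24 );
    }

    /* Byte swap the first to_convert 32-bit words of an outgoing packet;
       the application data portion is deliberately left untouched. */
    void user_mpci_convert_packet( rtems_packet_prefix *packet )
    {
      /* Capture the count first: the word holding to_convert is itself
         swapped by the loop below. */
      uint32_t  count = packet->to_convert;
      uint32_t *words = (uint32_t *) packet;

      for ( uint32_t i = 0; i < count; ++i ) {
        words[ i ] = user_mpci_swap_u32( words[ i ] );
      }
    }
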
Operations
==========

Announcing a Packet
-------------------

The ``rtems_multiprocessing_announce`` directive is called by the MPCI layer to
inform RTEMS that a packet has arrived from another node.  This directive can
be called from an interrupt service routine or from within a polling routine.

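For example, an interrupt-driven MPCI layer might announce arrivals from the
interconnect device's interrupt handler; the device handling shown below is
purely illustrative:

.. code-block:: c

    #include <rtems.h>

    /* Hypothetical interrupt handler for the interconnect device. */
    static void user_mpci_interrupt_handler( void *arg )
    {
      (void) arg;

      /* ... acknowledge the device and queue the arrived packet ... */

      /* Tell RTEMS a packet is waiting; the Multiprocessing Server will
         fetch it later through the RECEIVE_PACKET entry point. */
      rtems_multiprocessing_announce();
    }
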
Directives
==========

This section details the additional directives required to support RTEMS in a
multiprocessor configuration.  A subsection is dedicated to each of this
manager's directives and describes the calling sequence, related constants,
usage, and status codes.

.. raw:: latex

   \clearpage

.. index:: announce arrival of packet
.. index:: rtems_multiprocessing_announce

.. _rtems_multiprocessing_announce:

MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
------------------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        void rtems_multiprocessing_announce( void );

DIRECTIVE STATUS CODES:
    NONE

DESCRIPTION:
    This directive informs RTEMS that a multiprocessing communications packet
    has arrived from another node.  This directive is called by the
    user-provided MPCI, and is only used in multiprocessor configurations.

NOTES:
    This directive is typically called from an ISR.

    This directive will almost certainly cause the calling task to be
    preempted.

    This directive does not generate activity on remote nodes.