source: rtems-docs/c-user/multiprocessing.rst @ 4da4a15

.. comment SPDX-License-Identifier: CC-BY-SA-4.0

.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Multiprocessing Manager
***********************

.. index:: multiprocessing

Introduction
============

In multiprocessor real-time systems, new requirements, such as sharing data and
global resources between processors, are introduced.  This requires an
efficient and reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the ramifications of
multiple processors affect each and every characteristic of a real-time system,
almost always making them more complicated.

RTEMS addresses these issues by providing simple and flexible real-time
multiprocessing capabilities.  The executive easily lends itself to both
tightly-coupled and loosely-coupled configurations of the target system
hardware.  In addition, RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the physical
boundaries of the target hardware configuration.  This goal is achieved by
presenting the application software with a logical view of the target system
where the boundaries between processor nodes are transparent.  As a result, the
application developer may designate objects such as tasks, queues, events,
signals, semaphores, and memory blocks as global objects.  These global objects
may then be accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines that the object
being accessed resides on another processor and performs the actions required
to access the desired object.  Simply stated, RTEMS allows the entire system,
both hardware and software, to be viewed logically as a single system.

The directives provided by the Multiprocessing Manager are:

- rtems_multiprocessing_announce_ - A multiprocessing communications packet has
  arrived

Background
==========

.. index:: multiprocessing topologies

RTEMS makes no assumptions regarding the connection media or topology of a
multiprocessor system.  The tasks which compose a particular application can be
spread among as many processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset of the RTEMS
directives as if they were on the same processor.  These directives allow
application tasks to exchange data, communicate, and synchronize regardless of
which processor they reside upon.

The RTEMS multiprocessor execution model is multiple instruction streams with
multiple data streams (MIMD).  This execution model has each of the processors
executing code independent of the other processors.  Because of this
parallelism, the application designer can more easily guarantee deterministic
behavior.

By supporting heterogeneous environments, RTEMS allows the systems designer to
select the most efficient processor for each subsystem of the application.
Configuring RTEMS for a heterogeneous environment is no more difficult than for
a homogeneous one.  In keeping with RTEMS philosophy of providing transparent
physical node boundaries, the minimal heterogeneous processing required is
isolated in the MPCI layer.

Nodes
-----
.. index:: nodes, definition

A processor in an RTEMS system is referred to as a node.  Each node is assigned
a unique non-zero node number by the application designer.  RTEMS assumes that
node numbers are assigned consecutively from one to the ``maximum_nodes``
configuration parameter.  The node number, ``node``, and the maximum number of
nodes, ``maximum_nodes``, in a system are found in the Multiprocessor
Configuration Table.  The ``maximum_nodes`` field and the number of global
objects, ``maximum_global_objects``, are required to be the same on all nodes
in a system.

The node number is used by RTEMS to identify each node when performing remote
operations.  Thus, the Multiprocessor Communications Interface Layer (MPCI)
must be able to route messages based on the node number.
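
The Multiprocessor Configuration Table is normally generated by
``<rtems/confdefs.h>``.  As a brief sketch only (the macro names are those used
in the Configuring a System chapter; the values and the ``my_mpci_table``
symbol are placeholders), node 1 of a two-node system might be configured as
follows:

.. code-block:: c

    #include <rtems.h>

    /* The application's MPCI table; see the Multiprocessor Communications
       Interface Layer section below. */
    extern rtems_mpci_table my_mpci_table;

    #define CONFIGURE_MP_APPLICATION
    #define CONFIGURE_MP_NODE_NUMBER             1
    #define CONFIGURE_MP_MAXIMUM_NODES           2
    #define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS  32
    #define CONFIGURE_MP_MAXIMUM_PROXIES         8
    #define CONFIGURE_MP_MPCI_TABLE_POINTER      &my_mpci_table

    #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_MAXIMUM_TASKS              4
    #define CONFIGURE_RTEMS_INIT_TASKS_TABLE

    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>

Every other node in the system would use the same values except for
``CONFIGURE_MP_NODE_NUMBER``.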

Global Objects
--------------
.. index:: global objects, definition

All RTEMS objects which are created with the GLOBAL attribute will be known on
all other nodes.  Global objects can be referenced from any node in the system,
although certain directive-specific restrictions (e.g. one cannot delete a
remote object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of global objects in
the system is user configurable and can be found in the
``maximum_global_objects`` field in the Multiprocessor Configuration Table.
The distribution of tasks to processors is performed during the application
design phase.  Dynamic task relocation is not supported by RTEMS.
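
As an illustrative sketch (the object name and values below are placeholders,
not part of this manual), a semaphore created with the ``RTEMS_GLOBAL``
attribute on one node can be identified and operated on from any other node:

.. code-block:: c

    #include <rtems.h>

    /* On the creating node: make the semaphore visible to every node. */
    void create_global_semaphore( void )
    {
      rtems_id          id;
      rtems_status_code sc;

      sc = rtems_semaphore_create(
        rtems_build_name( 'G', 'S', 'E', 'M' ),
        1,                 /* initial count */
        RTEMS_GLOBAL,      /* announce the object to all other nodes */
        0,                 /* priority ceiling, unused here */
        &id
      );
      if ( sc != RTEMS_SUCCESSFUL ) {
        /* handle the error */
      }
    }

    /* On any other node: look the object up and operate on it remotely. */
    void use_global_semaphore( void )
    {
      rtems_id          id;
      rtems_status_code sc;

      sc = rtems_semaphore_ident(
        rtems_build_name( 'G', 'S', 'E', 'M' ),
        RTEMS_SEARCH_ALL_NODES,
        &id
      );
      if ( sc == RTEMS_SUCCESSFUL )
        (void) rtems_semaphore_obtain( id, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
    }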

Global Object Table
-------------------
.. index:: global objects table

RTEMS maintains two tables containing object information on every node in a
multiprocessor system: a local object table and a global object table.  The
local object table on each node is unique and contains information for all
objects created on this node whether those objects are local or global.  The
global object table contains information regarding all global objects in the
system and, consequently, is the same on every node.

Since each node must maintain an identical copy of the global object table, the
maximum number of entries in each copy of the table must be the same.  The
maximum number of entries in each copy is determined by the
``maximum_global_objects`` parameter in the Multiprocessor Configuration Table.
This parameter, as well as the ``maximum_nodes`` parameter, is required to be
the same on all nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion of a global
object.

Remote Operations
-----------------
.. index:: MPCI and remote operations

When an application performs an operation on a remote global object, RTEMS must
generate a Remote Request (RQ) message and send it to the appropriate node.
After completing the requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.  Messages generated
as a side-effect of a directive (such as deleting a global task) are known as
Remote Processes (RP) and do not require the receiving node to respond.

Other than taking slightly longer to execute directives on remote objects, the
application is unaware of the location of the objects it acts upon.  The exact
amount of overhead required for a remote operation is dependent on the media
connecting the nodes and, to a lesser degree, on the efficiency of the
user-provided MPCI routines.

The following shows the typical transaction sequence during a remote
operation:

#. The application issues a directive accessing a remote global object.

#. RTEMS determines the node on which the object resides.

#. RTEMS calls the user-provided MPCI routine ``GET_PACKET`` to obtain a packet
   in which to build an RQ message.

#. After building a message packet, RTEMS calls the user-provided MPCI routine
   ``SEND_PACKET`` to transmit the packet to the node on which the object
   resides (referred to as the destination node).

#. The calling task is blocked until the RR message arrives, and control of the
   processor is transferred to another task.

#. The MPCI layer on the destination node senses the arrival of a packet
   (commonly in an ISR), and calls the ``rtems_multiprocessing_announce``
   directive.  This directive readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, performs the requested operation, builds an RR message,
   and returns it to the originating node.

#. The MPCI layer on the originating node senses the arrival of a packet
   (typically via an interrupt), and calls the RTEMS
   ``rtems_multiprocessing_announce`` directive.  This directive readies the
   Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, readies the original requesting task, and blocks until
   another packet arrives.  Control is transferred to the original task which
   then completes processing of the directive.

If an uncorrectable error occurs in the user-provided MPCI layer, the fatal
error handler should be invoked.  RTEMS assumes the reliable transmission and
reception of messages by the MPCI and makes no attempt to detect or correct
errors.
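
When such an error is detected, the MPCI layer can stop the system through the
fatal error manager.  A minimal sketch (the helper name and error code are
hypothetical, not part of the manual):

.. code-block:: c

    #include <rtems.h>

    /* Invoked by the MPCI routines on an unrecoverable transmission error;
       the error code is application-defined. */
    static void my_mpci_fatal( void )
    {
      rtems_fatal_error_occurred( 0xD0000001 );
    }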

Proxies
-------
.. index:: proxy, definition

A proxy is an RTEMS data structure which resides on a remote node and is used
to represent a task which must block as part of a remote operation.  This
action can occur as part of the ``rtems_semaphore_obtain`` and
``rtems_message_queue_receive`` directives.  If the object were local, the
task's control block would be available for modification to indicate it was
blocking on a message queue or semaphore.  However, the task's control block
resides only on the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the Multiprocessor Configuration
Table.  Each node in a multiprocessor system may require a different number of
proxies to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of tasks.

Multiprocessor Configuration Table
----------------------------------

The Multiprocessor Configuration Table contains information needed by RTEMS
when used in a multiprocessor system.  This table is discussed in detail in the
section Multiprocessor Configuration Table of the Configuring a System chapter.

Multiprocessor Communications Interface Layer
=============================================

The Multiprocessor Communications Interface Layer (MPCI) is a set of
user-provided procedures which enable the nodes in a multiprocessor system to
communicate with one another.  These routines are invoked by RTEMS at various
times in the preparation and processing of remote requests.  Interrupts are
enabled when an MPCI procedure is invoked.  It is assumed that if the execution
mode and/or interrupt level are altered by the MPCI layer, they will be
restored prior to returning to RTEMS.

.. index:: MPCI, definition

The MPCI layer is responsible for managing a pool of buffers called packets and
for sending these packets between system nodes.  Packet buffers contain the
messages sent between the nodes.  Typically, the MPCI layer will encapsulate
the packet within an envelope which contains the information needed by the MPCI
layer.  The number of packets available is dependent on the MPCI layer
implementation.

.. index:: MPCI entry points

The entry points to the routines in the user's MPCI layer should be placed in
the Multiprocessor Communications Interface Table.  The user must provide entry
points for each of the following table entries in a multiprocessor system (a
sketch of such a table follows the list):

.. list-table::
 :class: rtems-table

 * - initialization
   - initialize the MPCI
 * - get_packet
   - obtain a packet buffer
 * - return_packet
   - return a packet buffer
 * - send_packet
   - send a packet to another node
 * - receive_packet
   - called to get an arrived packet
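
The following is only a sketch of such a table; the field names follow the
Multiprocessor Communications Interface Table described in the Configuring a
System chapter, the timeout and packet size values are placeholders, and the
exact entry prototypes should be taken from the installed RTEMS headers:

.. code-block:: c

    #include <rtems.h>

    rtems_mpci_entry user_mpci_initialization( rtems_configuration_table * );
    rtems_mpci_entry user_mpci_get_packet( rtems_packet_prefix ** );
    rtems_mpci_entry user_mpci_return_packet( rtems_packet_prefix * );
    rtems_mpci_entry user_mpci_send_packet( uint32_t, rtems_packet_prefix ** );
    rtems_mpci_entry user_mpci_receive_packet( rtems_packet_prefix ** );

    rtems_mpci_table my_mpci_table = {
      .default_timeout     = 100,  /* ticks to wait for a response packet */
      .maximum_packet_size = 256,  /* largest packet this layer can move */
      .initialization      = user_mpci_initialization,
      .get_packet          = user_mpci_get_packet,
      .return_packet       = user_mpci_return_packet,
      .send_packet         = user_mpci_send_packet,
      .receive_packet      = user_mpci_receive_packet
    };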

A packet is sent by RTEMS in each of the following situations:

- an RQ is generated on an originating node;

- an RR is generated on a destination node;

- a global object is created;

- a global object is deleted;

- a local task blocked on a remote object is deleted;

- during system initialization to check for system consistency.

If the target hardware supports it, the arrival of a packet at a node may
generate an interrupt.  Otherwise, the real-time clock ISR can check for the
arrival of a packet.  In any case, the ``rtems_multiprocessing_announce``
directive must be called to announce the arrival of a packet.  After exiting
the ISR, control will be passed to the Multiprocessing Server to process the
packet.  The Multiprocessing Server will call the ``get_packet`` entry to
obtain a packet buffer and the ``receive_packet`` entry to copy the message
into the buffer obtained.

INITIALIZATION
--------------

The INITIALIZATION component of the user-provided MPCI layer is called as part
of the ``rtems_initialize_executive`` directive to initialize the MPCI layer
and associated hardware.  It is invoked immediately after all of the device
drivers have been initialized.  This component should adhere to the following
prototype:

.. index:: rtems_mpci_entry

.. code-block:: c

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    );

where ``configuration`` is the address of the user's Configuration Table.
Operations on global objects cannot be performed until this component is
invoked.  The INITIALIZATION component is invoked only once in the life of any
system.  If the MPCI layer cannot be successfully initialized, the fatal error
manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to provide the executive with
packet buffers.  The INITIALIZATION routine must create and initialize a pool
of packet buffers.  There must be enough packet buffers so RTEMS can obtain one
whenever needed.
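
A minimal sketch of such an initialization routine follows; the pool size, the
storage layout, and all names are assumptions, and any hardware setup for the
interconnect is omitted:

.. code-block:: c

    #include <rtems.h>

    /* Hypothetical pool dimensions; the packet size must be at least the
       maximum packet size declared in the MPCI table. */
    #define MY_MPCI_PACKET_COUNT 16
    #define MY_MPCI_PACKET_WORDS ( 256 / sizeof( uint32_t ) )

    /* 32-bit storage keeps every packet on a four byte boundary as RTEMS
       requires. */
    static uint32_t my_mpci_storage[ MY_MPCI_PACKET_COUNT ]
                                   [ MY_MPCI_PACKET_WORDS ];

    rtems_packet_prefix *my_mpci_pool[ MY_MPCI_PACKET_COUNT ];
    uint32_t             my_mpci_pool_count;

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    )
    {
      uint32_t i;

      (void) configuration;

      /* Interconnect hardware initialization would go here. */

      /* Fill the free pool with the statically reserved packet buffers. */
      for ( i = 0; i < MY_MPCI_PACKET_COUNT; ++i )
        my_mpci_pool[ i ] = (rtems_packet_prefix *) my_mpci_storage[ i ];
      my_mpci_pool_count = MY_MPCI_PACKET_COUNT;
    }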

GET_PACKET
----------

The GET_PACKET component of the user-provided MPCI layer is called when RTEMS
must obtain a packet buffer to send or broadcast a message.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    );

where ``packet`` is the address of a pointer to a packet.  This routine always
succeeds and, upon return, ``packet`` will contain the address of a packet.  If
for any reason a packet cannot be successfully obtained, then the fatal error
manager should be invoked.

RTEMS has been optimized to avoid the need for obtaining a packet each time a
message is sent or broadcast.  For example, RTEMS sends response messages (RR)
back to the originator in the same packet in which the request message (RQ)
arrived.

RETURN_PACKET
-------------

The RETURN_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to release a packet to the free packet buffer pool.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    );

where ``packet`` is the address of a packet.  If the packet cannot be
successfully returned, the fatal error manager should be invoked.
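
A hedged sketch of GET_PACKET and RETURN_PACKET built on the simple pool from
the INITIALIZATION sketch above (all names and the error code are assumptions;
a real layer must also protect the pool against concurrent access):

.. code-block:: c

    #include <rtems.h>

    /* The pool defined in the INITIALIZATION sketch. */
    extern rtems_packet_prefix *my_mpci_pool[];
    extern uint32_t             my_mpci_pool_count;

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    )
    {
      /* GET_PACKET must always succeed; an exhausted pool is fatal. */
      if ( my_mpci_pool_count == 0 )
        rtems_fatal_error_occurred( 0xD0000002 );

      *packet = my_mpci_pool[ --my_mpci_pool_count ];
    }

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    )
    {
      /* Push the buffer back onto the free pool. */
      my_mpci_pool[ my_mpci_pool_count++ ] = packet;
    }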

RECEIVE_PACKET
--------------

The RECEIVE_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to obtain a packet which has previously arrived.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    );

where ``packet`` is the address of a pointer in which to return a packet
containing a message from another node.  If a message is available, then upon
return ``packet`` will contain the address of that message.  If no messages are
available, ``packet`` should be set to NULL.
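
A minimal sketch, assuming a single-slot mailbox filled by the receive ISR (the
variable name is hypothetical; a real layer would normally use a queue with
interrupt protection):

.. code-block:: c

    #include <rtems.h>
    #include <stddef.h>

    /* Set by the receive ISR when a packet arrives, NULL otherwise. */
    static rtems_packet_prefix *volatile my_mpci_arrived;

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    )
    {
      /* Hand RTEMS the pending packet, or NULL if nothing has arrived. */
      *packet = my_mpci_arrived;
      my_mpci_arrived = NULL;
    }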

SEND_PACKET
-----------

The SEND_PACKET component of the user-provided MPCI layer is called when RTEMS
needs to send a packet containing a message to another node.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t               node,
        rtems_packet_prefix  **packet
    );

where ``node`` is the node number of the destination and ``packet`` is the
address of a packet which contains a message.  If the packet cannot be
successfully sent, the fatal error manager should be invoked.

If ``node`` is set to zero, the packet is to be broadcast to all other nodes in
the system.  Although some MPCI layers will be built upon hardware which
supports a broadcast mechanism, others may be required to generate a copy of
the packet for each node in the system.
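
A hedged sketch of the broadcast case (the transmit helper and the cached node
information are assumptions):

.. code-block:: c

    #include <rtems.h>

    /* Node information cached from the Multiprocessor Configuration Table
       during initialization (hypothetical variables). */
    static uint32_t my_node;
    static uint32_t my_maximum_nodes;

    /* Hardware-specific transmission of one packet to one node. */
    static void my_mpci_transmit( uint32_t node, rtems_packet_prefix *packet )
    {
      (void) node;
      (void) packet;
      /* Move the packet over the interconnect here. */
    }

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t               node,
        rtems_packet_prefix  **packet
    )
    {
      uint32_t destination;

      if ( node != 0 ) {
        my_mpci_transmit( node, *packet );
        return;
      }

      /* node == 0: emulate a broadcast by sending a copy to every other
         node in the system. */
      for ( destination = 1; destination <= my_maximum_nodes; ++destination ) {
        if ( destination != my_node )
          my_mpci_transmit( destination, *packet );
      }
    }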

.. COMMENT: XXX packet_prefix structure needs to be defined in this document

Many MPCI layers use the ``packet_length`` field of the ``rtems_packet_prefix``
portion of the packet to avoid sending unnecessary data.  This is especially
useful if the media connecting the nodes is relatively slow.

The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the packet
indicates how much of the packet in 32-bit units may require conversion in a
heterogeneous system.

Supporting Heterogeneous Environments
-------------------------------------
.. index:: heterogeneous multiprocessing

Developing an MPCI layer for a heterogeneous system requires a thorough
understanding of the differences between the processors which comprise the
system.  One difficult problem is the varying data representation schemes used
by different processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.  Processors which place
the least significant byte at the smallest address are classified as little
endian processors.  Little endian byte-ordering is shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Conversely, processors which place the most significant byte at the smallest
address are classified as big endian processors.  Big endian byte-ordering is
shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Unfortunately, sharing a data structure between big endian and little endian
processors requires translation into a common endian format.  An application
designer typically chooses the common endian format to minimize conversion
overhead.

Another issue in the design of shared data structures is the alignment of data
structure elements.  Alignment is both processor and compiler implementation
dependent.  For example, some processors allow data elements to begin on any
address boundary, while others impose restrictions.  Common restrictions are
that data elements must begin on either an even address or on a long word
boundary.  Violation of these restrictions may cause an exception or impose a
performance penalty.

Other issues which commonly impact the design of shared data structures include
the representation of floating point numbers, bit fields, decimal data, and
character strings.  In addition, the representation method for negative
integers could be one's or two's complement.  These factors combine to increase
the complexity of designing and manipulating data structures shared between
processors.

RTEMS addressed these issues in the design of the packets used to communicate
between nodes.  The RTEMS packet format is designed to allow the MPCI layer to
perform all necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer must be aware
of the following:

- All packets must begin on a four byte boundary.

- Packets are composed of both RTEMS and application data.  All RTEMS data is
  treated as 32-bit unsigned quantities and is in the first ``to_convert``
  32-bit quantities of the packet.  The ``to_convert`` field is part of the
  ``rtems_packet_prefix`` portion of the packet.

- The RTEMS data component of the packet must be in native endian format.
  Endian conversion may be performed by either the sending or receiving MPCI
  layer (a conversion sketch follows this list).

- RTEMS makes no assumptions regarding the application data component of the
  packet.
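
A minimal conversion sketch, assuming the receiving MPCI layer performs the
byte swap and treats the RTEMS portion of the packet as an array of 32-bit
words (the helper names are hypothetical):

.. code-block:: c

    #include <stdint.h>

    /* Swap the bytes of one 32-bit word; many ports provide a faster
       CPU-specific routine for this. */
    static uint32_t my_mpci_swap_u32( uint32_t value )
    {
      return ( value >> 24 ) |
             ( ( value >> 8 )  & 0x0000FF00u ) |
             ( ( value << 8 )  & 0x00FF0000u ) |
             ( value << 24 );
    }

    /* Convert the first to_convert 32-bit words of a packet arriving from a
       node with the opposite byte order; the application data that follows
       is left untouched. */
    static void my_mpci_convert_rtems_data( uint32_t *words, uint32_t to_convert )
    {
      uint32_t i;

      for ( i = 0; i < to_convert; ++i )
        words[ i ] = my_mpci_swap_u32( words[ i ] );
    }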

Operations
==========

Announcing a Packet
-------------------

The ``rtems_multiprocessing_announce`` directive is called by the MPCI layer to
inform RTEMS that a packet has arrived from another node.  This directive can
be called from an interrupt service routine or from within a polling routine.
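
A brief sketch of an announcement made from an interrupt service routine (the
handler name, vector handling, and device acknowledgement are hypothetical):

.. code-block:: c

    #include <rtems.h>

    rtems_isr my_mpci_receive_isr( rtems_vector_number vector )
    {
      (void) vector;

      /* Acknowledge the interrupt at the interconnect device here. */

      /* Tell RTEMS a packet has arrived; the Multiprocessing Server will
         then call the RECEIVE_PACKET entry to fetch it. */
      rtems_multiprocessing_announce();
    }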

Directives
==========

This section details the additional directives required to support RTEMS in a
multiprocessor configuration.  A subsection is dedicated to each of this
manager's directives and describes the calling sequence, related constants,
usage, and status codes.

.. raw:: latex

   \clearpage

.. _rtems_multiprocessing_announce:

MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
-----------------------------------------------------------
.. index:: announce arrival of packet
.. index:: rtems_multiprocessing_announce

CALLING SEQUENCE:
    .. code-block:: c

        void rtems_multiprocessing_announce( void );

DIRECTIVE STATUS CODES:
    NONE

DESCRIPTION:
    This directive informs RTEMS that a multiprocessing communications packet
    has arrived from another node.  This directive is called by the
    user-provided MPCI, and is only used in multiprocessor configurations.

NOTES:
    This directive is typically called from an ISR.

    This directive will almost certainly cause the calling task to be
    preempted.

    This directive does not generate activity on remote nodes.