source: rtems-docs/c_user/multiprocessing.rst @ 25d55d4

.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

Multiprocessing Manager
#######################

.. index:: multiprocessing

Introduction
============

In multiprocessor real-time systems, new requirements, such as sharing data and
global resources between processors, are introduced.  This requires an
efficient and reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the ramifications of
multiple processors affect each and every characteristic of a real-time system,
almost always making them more complicated.

RTEMS addresses these issues by providing simple and flexible real-time
multiprocessing capabilities.  The executive easily lends itself to both
tightly-coupled and loosely-coupled configurations of the target system
hardware.  In addition, RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the physical
boundaries of the target hardware configuration.  This goal is achieved by
presenting the application software with a logical view of the target system
where the boundaries between processor nodes are transparent.  As a result, the
application developer may designate objects such as tasks, queues, events,
signals, semaphores, and memory blocks as global objects.  These global objects
may then be accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines that the object
being accessed resides on another processor and performs the actions required
to access the desired object.  Simply stated, RTEMS allows the entire system,
both hardware and software, to be viewed logically as a single system.

The directives provided by the multiprocessing manager are:

- rtems_multiprocessing_announce_ - A multiprocessing communications packet has
  arrived

Background
==========

.. index:: multiprocessing topologies

RTEMS makes no assumptions regarding the connection media or topology of a
multiprocessor system.  The tasks which compose a particular application can be
spread among as many processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset of the RTEMS
directives as if they were on the same processor.  These directives allow
application tasks to exchange data, communicate, and synchronize regardless of
which processor they reside upon.

The RTEMS multiprocessor execution model is multiple instruction streams with
multiple data streams (MIMD).  This execution model has each of the processors
executing code independent of the other processors.  Because of this
parallelism, the application designer can more easily guarantee deterministic
behavior.

By supporting heterogeneous environments, RTEMS allows the systems designer to
select the most efficient processor for each subsystem of the application.
Configuring RTEMS for a heterogeneous environment is no more difficult than for
a homogeneous one.  In keeping with the RTEMS philosophy of providing
transparent physical node boundaries, the minimal heterogeneous processing
required is isolated in the MPCI layer.

Nodes
-----
.. index:: nodes, definition

A processor in an RTEMS system is referred to as a node.  Each node is assigned
a unique non-zero node number by the application designer.  RTEMS assumes that
node numbers are assigned consecutively from one to the ``maximum_nodes``
configuration parameter.  The node number, ``node``, and the maximum number of
nodes, ``maximum_nodes``, in a system are found in the Multiprocessor
Configuration Table.  The ``maximum_nodes`` field and the number of global
objects, ``maximum_global_objects``, are required to be the same on all nodes
in a system.

The node number is used by RTEMS to identify each node when performing remote
operations.  Thus, the Multiprocessor Communications Interface Layer (MPCI)
must be able to route messages based on the node number.
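
For example, the node number and the system-wide limits are commonly set via
the ``CONFIGURE_MP_*`` options processed by ``<rtems/confdefs.h>``.  The
following fragment is a sketch only; the values shown are arbitrary, and the
Configuring a System chapter is the authoritative reference for these options:

.. code-block:: c

    /* Sketch of a two-node configuration fragment; the remaining
     * application configuration (tasks, drivers, etc.) is omitted. */
    #define CONFIGURE_MP_APPLICATION
    #define CONFIGURE_MP_NODE_NUMBER             1   /* this node's number */
    #define CONFIGURE_MP_MAXIMUM_NODES           2   /* same on all nodes */
    #define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS 32   /* same on all nodes */
    #define CONFIGURE_MP_MAXIMUM_PROXIES        16   /* may differ per node */

    #include <rtems/confdefs.h>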

Global Objects
--------------
.. index:: global objects, definition

All RTEMS objects which are created with the GLOBAL attribute will be known on
all other nodes.  Global objects can be referenced from any node in the system,
although certain directive specific restrictions (e.g. one cannot delete a
remote object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of global objects in
the system is user configurable and can be found in the
``maximum_global_objects`` field in the Multiprocessor Configuration Table.
The distribution of tasks to processors is performed during the application
design phase.  Dynamic task relocation is not supported by RTEMS.
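
For example, creating a semaphore with the GLOBAL attribute makes it visible
to every node; a minimal sketch:

.. code-block:: c

    #include <rtems.h>

    void create_global_semaphore( void )
    {
        rtems_id          sem_id;
        rtems_status_code status;

        /* Create a counting semaphore which is visible on all nodes.
         * Any task, on any node, may look it up by name and obtain it. */
        status = rtems_semaphore_create(
            rtems_build_name( 'G', 'S', 'E', 'M' ),
            1,                                        /* initial count */
            RTEMS_GLOBAL | RTEMS_COUNTING_SEMAPHORE,
            0,                                        /* ceiling (unused) */
            &sem_id
        );
        if ( status != RTEMS_SUCCESSFUL ) {
            /* e.g. no entry was free in the global object table */
        }
    }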

Global Object Table
-------------------
.. index:: global objects table

RTEMS maintains two tables containing object information on every node in a
multiprocessor system: a local object table and a global object table.  The
local object table on each node is unique and contains information for all
objects created on this node whether those objects are local or global.  The
global object table contains information regarding all global objects in the
system and, consequently, is the same on every node.

Since each node must maintain an identical copy of the global object table, the
maximum number of entries in each copy of the table must be the same.  The
maximum number of entries in each copy is determined by the
``maximum_global_objects`` parameter in the Multiprocessor Configuration Table.
This parameter, as well as the ``maximum_nodes`` parameter, is required to be
the same on all nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion of a global
object.

Remote Operations
-----------------
.. index:: MPCI and remote operations

When an application performs an operation on a remote global object, RTEMS must
generate a Remote Request (RQ) message and send it to the appropriate node.
After completing the requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.  Messages generated
as a side-effect of a directive (such as deleting a global task) are known as
Remote Processes (RP) and do not require the receiving node to respond.

Other than taking slightly longer to execute directives on remote objects, the
application is unaware of the location of the objects it acts upon.  The exact
amount of overhead required for a remote operation is dependent on the media
connecting the nodes and, to a lesser degree, on the efficiency of the
user-provided MPCI routines.

The following shows the typical transaction sequence during a remote
application:

#. The application issues a directive accessing a remote global object.

#. RTEMS determines the node on which the object resides.

#. RTEMS calls the user-provided MPCI routine ``GET_PACKET`` to obtain a packet
   in which to build an RQ message.

#. After building a message packet, RTEMS calls the user-provided MPCI routine
   ``SEND_PACKET`` to transmit the packet to the node on which the object
   resides (referred to as the destination node).

#. The calling task is blocked until the RR message arrives, and control of the
   processor is transferred to another task.

#. The MPCI layer on the destination node senses the arrival of a packet
   (commonly in an ISR), and calls the ``rtems_multiprocessing_announce``
   directive.  This directive readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, performs the requested operation, builds an RR message,
   and returns it to the originating node.

#. The MPCI layer on the originating node senses the arrival of a packet
   (typically via an interrupt), and calls the RTEMS
   ``rtems_multiprocessing_announce`` directive.  This directive readies the
   Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, readies the original requesting task, and blocks until
   another packet arrives.  Control is transferred to the original task which
   then completes processing of the directive.

If an uncorrectable error occurs in the user-provided MPCI layer, the fatal
error handler should be invoked.  RTEMS assumes the reliable transmission and
reception of messages by the MPCI and makes no attempt to detect or correct
errors.

Proxies
-------
.. index:: proxy, definition

A proxy is an RTEMS data structure which resides on a remote node and is used
to represent a task which must block as part of a remote operation.  This
action can occur as part of the ``rtems_semaphore_obtain`` and
``rtems_message_queue_receive`` directives.  If the object were local, the
task's control block would be available for modification to indicate it was
blocking on a message queue or semaphore.  However, the task's control block
resides only on the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the Multiprocessor Configuration
Table.  Each node in a multiprocessor system may require a different number of
proxies to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of tasks.

Multiprocessor Configuration Table
----------------------------------

The Multiprocessor Configuration Table contains information needed by RTEMS
when used in a multiprocessor system.  This table is discussed in detail in the
section Multiprocessor Configuration Table of the Configuring a System chapter.

Multiprocessor Communications Interface Layer
=============================================

The Multiprocessor Communications Interface Layer (MPCI) is a set of
user-provided procedures which enable the nodes in a multiprocessor system to
communicate with one another.  These routines are invoked by RTEMS at various
times in the preparation and processing of remote requests.  Interrupts are
enabled when an MPCI procedure is invoked.  It is assumed that if the execution
mode and/or interrupt level are altered by the MPCI layer, they will be
restored prior to returning to RTEMS.

.. index:: MPCI, definition

The MPCI layer is responsible for managing a pool of buffers called packets and
for sending these packets between system nodes.  Packet buffers contain the
messages sent between the nodes.  Typically, the MPCI layer will encapsulate
the packet within an envelope which contains the information needed by the MPCI
layer.  The number of packets available is dependent on the MPCI layer
implementation.

.. index:: MPCI entry points

The entry points to the routines in the user's MPCI layer should be placed in
the Multiprocessor Communications Interface Table.  The user must provide entry
points for each of the following table entries in a multiprocessor system:

.. list-table::
 :class: rtems-table

 * - initialization
   - initialize the MPCI
 * - get_packet
   - obtain a packet buffer
 * - return_packet
   - return a packet buffer
 * - send_packet
   - send a packet to another node
 * - receive_packet
   - called to get an arrived packet
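
As a sketch, these entry points can be collected into an ``rtems_mpci_table``,
assuming the table layout described in the Configuring a System chapter; the
``my_mpci_*`` routines and the ``MY_*`` constants are hypothetical placeholders
supplied by the MPCI implementor:

.. code-block:: c

    #include <rtems.h>

    /* Sketch of an MPCI table referencing hypothetical user routines. */
    rtems_mpci_table my_mpci_table = {
        MY_DEFAULT_TIMEOUT,        /* default timeout for remote requests */
        MY_MAXIMUM_PACKET_SIZE,    /* largest packet the media can carry */
        my_mpci_initialization,    /* INITIALIZATION entry */
        my_mpci_get_packet,        /* GET_PACKET entry */
        my_mpci_return_packet,     /* RETURN_PACKET entry */
        my_mpci_send_packet,       /* SEND_PACKET entry */
        my_mpci_receive_packet     /* RECEIVE_PACKET entry */
    };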

A packet is sent by RTEMS in each of the following situations:

- an RQ is generated on an originating node;

- an RR is generated on a destination node;

- a global object is created;

- a global object is deleted;

- a local task blocked on a remote object is deleted;

- during system initialization to check for system consistency.

If the target hardware supports it, the arrival of a packet at a node may
generate an interrupt.  Otherwise, the real-time clock ISR can check for the
arrival of a packet.  In any case, the ``rtems_multiprocessing_announce``
directive must be called to announce the arrival of a packet.  After exiting
the ISR, control will be passed to the Multiprocessing Server to process the
packet.  The Multiprocessing Server will call the ``get_packet`` entry to
obtain a packet buffer and the ``receive_packet`` entry to copy the message
into the buffer obtained.

INITIALIZATION
--------------

The INITIALIZATION component of the user-provided MPCI layer is called as part
of the ``rtems_initialize_executive`` directive to initialize the MPCI layer
and associated hardware.  It is invoked immediately after all of the device
drivers have been initialized.  This component should adhere to the following
prototype:

.. index:: rtems_mpci_entry

.. code-block:: c

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    );

where configuration is the address of the user's Configuration Table.
Operations on global objects cannot be performed until this component is
invoked.  The INITIALIZATION component is invoked only once in the life of any
system.  If the MPCI layer cannot be successfully initialized, the fatal error
manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to provide the executive with
packet buffers.  The INITIALIZATION routine must create and initialize a pool
of packet buffers.  There must be enough packet buffers so RTEMS can obtain one
whenever needed.
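
As an illustration only, the following sketch initializes a hypothetical
shared-memory link and builds a free list of packet buffers;
``my_link_hardware_init`` and ``my_allocate_packet_pool`` are assumed helpers,
not RTEMS services:

.. code-block:: c

    #include <rtems.h>
    #include <stdbool.h>

    /* Hypothetical helpers supplied by the MPCI implementor. */
    extern bool my_link_hardware_init( void );
    extern rtems_packet_prefix *my_allocate_packet_pool( uint32_t count );

    #define MY_PACKET_COUNT 8              /* hypothetical pool size */

    /* Head of a simple free list of packet buffers. */
    static rtems_packet_prefix *my_free_list;

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    )
    {
        /* Bring up the connection hardware. */
        if ( !my_link_hardware_init() )
            rtems_fatal_error_occurred( 0xdead0001 );

        /* Build the pool of packet buffers handed out by GET_PACKET. */
        my_free_list = my_allocate_packet_pool( MY_PACKET_COUNT );
        if ( my_free_list == NULL )
            rtems_fatal_error_occurred( 0xdead0002 );
    }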

GET_PACKET
----------

The GET_PACKET component of the user-provided MPCI layer is called when RTEMS
must obtain a packet buffer to send or broadcast a message.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    );

where packet is the address of a pointer to a packet.  This routine always
succeeds and, upon return, packet will contain the address of a packet.  If,
for any reason, a packet cannot be successfully obtained, then the fatal error
manager should be invoked.

RTEMS has been optimized to avoid the need for obtaining a packet each time a
message is sent or broadcast.  For example, RTEMS sends response messages (RR)
back to the originator in the same packet in which the request message (RQ)
arrived.
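
A minimal sketch, continuing the hypothetical free-list pool from the
INITIALIZATION example above:

.. code-block:: c

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    )
    {
        /* A real implementation must protect the free list from
         * concurrent access, e.g. by briefly disabling interrupts. */
        if ( my_free_list == NULL )
            rtems_fatal_error_occurred( 0xdead0003 );  /* pool exhausted */

        /* Pop a buffer; the first word of each free buffer is assumed
         * to point at the next free buffer. */
        *packet      = my_free_list;
        my_free_list = *(rtems_packet_prefix **) my_free_list;
    }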

RETURN_PACKET
-------------

The RETURN_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to release a packet to the free packet buffer pool.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    );

where packet is the address of a packet.  If the packet cannot be successfully
returned, the fatal error manager should be invoked.
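
Returning a packet is the inverse of obtaining one; continuing the same
hypothetical free-list scheme:

.. code-block:: c

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    )
    {
        /* Push the buffer back onto the free list; as with GET_PACKET,
         * a real implementation must guard against concurrent access. */
        *(rtems_packet_prefix **) packet = my_free_list;
        my_free_list = packet;
    }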

RECEIVE_PACKET
--------------

The RECEIVE_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to obtain a packet which has previously arrived.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    );

where packet is a pointer to the address of a packet in which to place the
message from another node.  If a message is available, then packet will contain
the address of the message from another node.  If no messages are available,
then upon return packet should contain NULL.
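
A sketch assuming the MPCI layer keeps a queue of arrived packets;
``my_dequeue_arrived_packet`` is a hypothetical helper, not an RTEMS service:

.. code-block:: c

    /* Hypothetical helper which removes the oldest packet from the
     * driver's arrival queue, or returns NULL if the queue is empty. */
    extern rtems_packet_prefix *my_dequeue_arrived_packet( void );

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    )
    {
        /* Hand RTEMS the oldest arrived packet, or NULL if none is
         * pending. */
        *packet = my_dequeue_arrived_packet();
    }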

SEND_PACKET
-----------

The SEND_PACKET component of the user-provided MPCI layer is called when RTEMS
needs to send a packet containing a message to another node.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t               node,
        rtems_packet_prefix  **packet
    );

where node is the node number of the destination and packet is the address of a
packet which contains a message.  If the packet cannot be successfully sent,
the fatal error manager should be invoked.

If node is set to zero, the packet is to be broadcast to all other nodes in
the system.  Although some MPCI layers will be built upon hardware which
supports a broadcast mechanism, others may be required to generate a copy of
the packet for each node in the system.
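
The broadcast case can be handled by looping over the node numbers.  In this
sketch, ``my_transmit_to_node`` is a hypothetical hardware-specific helper,
and ``my_node`` and ``my_maximum_nodes`` are values assumed to have been
recorded during initialization:

.. code-block:: c

    #include <stdbool.h>

    /* Hypothetical helper and values recorded during initialization. */
    extern bool my_transmit_to_node( uint32_t node,
                                     rtems_packet_prefix *packet );
    extern uint32_t my_node;
    extern uint32_t my_maximum_nodes;

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t               node,
        rtems_packet_prefix  **packet
    )
    {
        uint32_t destination;

        if ( node != 0 ) {
            /* Directed send to a single destination node. */
            if ( !my_transmit_to_node( node, *packet ) )
                rtems_fatal_error_occurred( 0xdead0004 );
            return;
        }

        /* node == 0: broadcast one copy to every other node. */
        for ( destination = 1; destination <= my_maximum_nodes; ++destination ) {
            if ( destination != my_node &&
                 !my_transmit_to_node( destination, *packet ) )
                rtems_fatal_error_occurred( 0xdead0004 );
        }
    }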

.. COMMENT: XXX packet_prefix structure needs to be defined in this document

Many MPCI layers use the ``packet_length`` field of the ``rtems_packet_prefix``
portion of the packet to avoid sending unnecessary data.  This is especially
useful if the media connecting the nodes is relatively slow.

The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the packet
indicates how much of the packet, in 32-bit units, may require conversion in a
heterogeneous system.

Supporting Heterogeneous Environments
-------------------------------------
.. index:: heterogeneous multiprocessing

Developing an MPCI layer for a heterogeneous system requires a thorough
understanding of the differences between the processors which comprise the
system.  One difficult problem is the varying data representation schemes used
by different processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.  Processors which place
the least significant byte at the smallest address are classified as little
endian processors.  Little endian byte-ordering is shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Conversely, processors which place the most significant byte at the smallest
address are classified as big endian processors.  Big endian byte-ordering is
shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Unfortunately, sharing a data structure between big endian and little endian
processors requires translation into a common endian format.  An application
designer typically chooses the common endian format to minimize conversion
overhead.

Another issue in the design of shared data structures is the alignment of data
structure elements.  Alignment is both processor and compiler implementation
dependent.  For example, some processors allow data elements to begin on any
address boundary, while others impose restrictions.  Common restrictions are
that data elements must begin on either an even address or on a long word
boundary.  Violation of these restrictions may cause an exception or impose a
performance penalty.

Other issues which commonly impact the design of shared data structures include
the representation of floating point numbers, bit fields, decimal data, and
character strings.  In addition, the representation method for negative
integers could be one's or two's complement.  These factors combine to increase
the complexity of designing and manipulating data structures shared between
processors.

RTEMS addressed these issues in the design of the packets used to communicate
between nodes.  The RTEMS packet format is designed to allow the MPCI layer to
perform all necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer must be aware
of the following:

- All packets must begin on a four byte boundary.

- Packets are composed of both RTEMS and application data.  All RTEMS data is
  treated as 32-bit unsigned quantities and is in the first ``to_convert``
  32-bit quantities of the packet.  The ``to_convert`` field is part of the
  ``rtems_packet_prefix`` portion of the packet.

- The RTEMS data component of the packet must be in native endian format.
  Endian conversion may be performed by either the sending or receiving MPCI
  layer.

- RTEMS makes no assumptions regarding the application data component of the
  packet.
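
For example, a receiving MPCI layer on a node whose byte order differs from
the sender's might convert the RTEMS data component of an arrived packet as
follows.  This is a sketch only; ``my_swap_u32`` and ``my_convert_packet`` are
hypothetical helpers, not part of the RTEMS API:

.. code-block:: c

    #include <stdint.h>

    /* Hypothetical byte-swap helper for one 32-bit quantity. */
    static uint32_t my_swap_u32( uint32_t value )
    {
        return ( value >> 24 ) |
               ( ( value >> 8 ) & 0x0000ff00 ) |
               ( ( value << 8 ) & 0x00ff0000 ) |
               ( value << 24 );
    }

    /* Convert the first to_convert 32-bit words of an arrived packet,
     * i.e. the RTEMS data component, to this node's byte order.  The
     * application data which follows is left untouched. */
    static void my_convert_packet( uint32_t *words, uint32_t to_convert )
    {
        uint32_t i;

        for ( i = 0; i < to_convert; ++i )
            words[ i ] = my_swap_u32( words[ i ] );
    }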
Operations
==========

Announcing a Packet
-------------------

The ``rtems_multiprocessing_announce`` directive is called by the MPCI layer to
inform RTEMS that a packet has arrived from another node.  This directive can
be called from an interrupt service routine or from within a polling routine.
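
For example, an interrupt-driven MPCI layer might announce arrivals from its
receive ISR.  The ``my_*`` helpers below are hypothetical; only
``rtems_multiprocessing_announce`` itself is an RTEMS service:

.. code-block:: c

    #include <rtems.h>
    #include <stdbool.h>

    /* Hypothetical helpers for the receive hardware. */
    extern bool my_hardware_packet_pending( void );
    extern rtems_packet_prefix *my_hardware_read_packet( void );
    extern void my_enqueue_arrived_packet( rtems_packet_prefix *packet );

    rtems_isr my_packet_arrival_isr( rtems_vector_number vector )
    {
        (void) vector;   /* unused */

        /* Queue each arrived packet so a later RECEIVE_PACKET call can
         * find it. */
        while ( my_hardware_packet_pending() )
            my_enqueue_arrived_packet( my_hardware_read_packet() );

        /* Tell RTEMS at least one packet has arrived; this readies the
         * Multiprocessing Server. */
        rtems_multiprocessing_announce();
    }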
Directives
==========

This section details the additional directives required to support RTEMS in a
multiprocessor configuration.  A subsection is dedicated to each of this
manager's directives and describes the calling sequence, related constants,
usage, and status codes.

.. _rtems_multiprocessing_announce:

MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
-----------------------------------------------------------
.. index:: announce arrival of packet

**CALLING SEQUENCE:**

.. index:: rtems_multiprocessing_announce

.. code-block:: c

    void rtems_multiprocessing_announce( void );

**DIRECTIVE STATUS CODES:**

NONE

**DESCRIPTION:**

This directive informs RTEMS that a multiprocessing communications packet has
arrived from another node.  This directive is called by the user-provided MPCI,
and is only used in multiprocessor configurations.

**NOTES:**

This directive is typically called from an ISR.

This directive will almost certainly cause the calling task to be preempted.

This directive does not generate activity on remote nodes.