source: rtems-docs/c-user/multiprocessing.rst @ 969e60e

.. comment SPDX-License-Identifier: CC-BY-SA-4.0

.. COMMENT: COPYRIGHT (c) 1988-2008.
.. COMMENT: On-Line Applications Research Corporation (OAR).
.. COMMENT: All rights reserved.

.. index:: multiprocessing

Multiprocessing Manager
***********************

Introduction
============

In multiprocessor real-time systems, new requirements, such as sharing data and
global resources between processors, are introduced.  This requires an
efficient and reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the ramifications of
multiple processors affect each and every characteristic of a real-time system,
almost always making them more complicated.

RTEMS addresses these issues by providing simple and flexible real-time
multiprocessing capabilities.  The executive easily lends itself to both
tightly-coupled and loosely-coupled configurations of the target system
hardware.  In addition, RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the physical
boundaries of the target hardware configuration.  This goal is achieved by
presenting the application software with a logical view of the target system
where the boundaries between processor nodes are transparent.  As a result, the
application developer may designate objects such as tasks, queues, events,
signals, semaphores, and memory blocks as global objects.  These global objects
may then be accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines that the object
being accessed resides on another processor and performs the actions required
to access the desired object.  Simply stated, RTEMS allows the entire system,
both hardware and software, to be viewed logically as a single system.

The directives provided by the Multiprocessing Manager are:

- rtems_multiprocessing_announce_ - A multiprocessing communications packet has
  arrived

.. index:: multiprocessing topologies

Background
==========

RTEMS makes no assumptions regarding the connection media or topology of a
multiprocessor system.  The tasks which compose a particular application can be
spread among as many processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset of the RTEMS
directives as if they were on the same processor.  These directives allow
application tasks to exchange data, communicate, and synchronize regardless of
which processor they reside upon.

The RTEMS multiprocessor execution model is multiple instruction streams with
multiple data streams (MIMD).  This execution model has each of the processors
executing code independent of the other processors.  Because of this
parallelism, the application designer can more easily guarantee deterministic
behavior.

By supporting heterogeneous environments, RTEMS allows the systems designer to
select the most efficient processor for each subsystem of the application.
Configuring RTEMS for a heterogeneous environment is no more difficult than for
a homogeneous one.  In keeping with the RTEMS philosophy of providing
transparent physical node boundaries, the minimal heterogeneous processing
required is isolated in the MPCI layer.

.. index:: nodes, definition

Nodes
-----

A processor in an RTEMS system is referred to as a node.  Each node is assigned
a unique non-zero node number by the application designer.  RTEMS assumes that
node numbers are assigned consecutively from one to the ``maximum_nodes``
configuration parameter.  The node number, node, and the maximum number of
nodes, ``maximum_nodes``, in a system are found in the Multiprocessor
Configuration Table.  The ``maximum_nodes`` field and the number of global
objects, ``maximum_global_objects``, are required to be the same on all nodes
in a system.

The node number is used by RTEMS to identify each node when performing remote
operations.  Thus, the Multiprocessor Communications Interface Layer (MPCI)
must be able to route messages based on the node number.

.. index:: global objects, definition

Global Objects
--------------

All RTEMS objects which are created with the GLOBAL attribute will be known on
all other nodes.  Global objects can be referenced from any node in the system,
although certain directive specific restrictions (e.g. one cannot delete a
remote object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of global objects in
the system is user configurable and can be found in the
``maximum_global_objects`` field in the Multiprocessor Configuration Table.
The distribution of tasks to processors is performed during the application
design phase.  Dynamic task relocation is not supported by RTEMS.

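As an illustration, a task on one node might create a semaphore with the GLOBAL
attribute and a task on any other node may then locate and use it.  The
following sketch uses the Semaphore Manager directives; the object name,
initial count, and error handling are illustrative only:

.. code-block:: c

    #include <rtems.h>

    /* On the creating node: create a counting semaphore that is visible on
     * all nodes in the system. */
    rtems_id create_shared_semaphore( void )
    {
      rtems_id          id;
      rtems_status_code sc;

      sc = rtems_semaphore_create(
        rtems_build_name( 'G', 'S', 'E', 'M' ),
        1,                                        /* initial count */
        RTEMS_COUNTING_SEMAPHORE | RTEMS_GLOBAL,  /* GLOBAL => known remotely */
        0,                                        /* priority ceiling unused */
        &id
      );
      if ( sc != RTEMS_SUCCESSFUL ) {
        rtems_fatal_error_occurred( sc );
      }
      return id;
    }

    /* On any other node: look the global semaphore up by name. */
    rtems_id find_shared_semaphore( void )
    {
      rtems_id          id;
      rtems_status_code sc;

      sc = rtems_semaphore_ident(
        rtems_build_name( 'G', 'S', 'E', 'M' ),
        RTEMS_SEARCH_ALL_NODES,
        &id
      );
      if ( sc != RTEMS_SUCCESSFUL ) {
        rtems_fatal_error_occurred( sc );
      }
      return id;
    }
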
.. index:: global objects table

Global Object Table
-------------------

RTEMS maintains two tables containing object information on every node in a
multiprocessor system: a local object table and a global object table.  The
local object table on each node is unique and contains information for all
objects created on this node whether those objects are local or global.  The
global object table contains information regarding all global objects in the
system and, consequently, is the same on every node.

Since each node must maintain an identical copy of the global object table, the
maximum number of entries in each copy of the table must be the same.  The
maximum number of entries in each copy is determined by the
``maximum_global_objects`` parameter in the Multiprocessor Configuration Table.
This parameter, as well as the ``maximum_nodes`` parameter, is required to be
the same on all nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion of a global
object.

.. index:: MPCI and remote operations

Remote Operations
-----------------

When an application performs an operation on a remote global object, RTEMS must
generate a Remote Request (RQ) message and send it to the appropriate node.
After completing the requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.  Messages generated
as a side-effect of a directive (such as deleting a global task) are known as
Remote Processes (RP) and do not require the receiving node to respond.

Other than taking slightly longer to execute directives on remote objects, the
application is unaware of the location of the objects it acts upon.  The exact
amount of overhead required for a remote operation is dependent on the media
connecting the nodes and, to a lesser degree, on the efficiency of the
user-provided MPCI routines.

The following shows the typical transaction sequence during a remote
operation:

#. The application issues a directive accessing a remote global object.

#. RTEMS determines the node on which the object resides.

#. RTEMS calls the user-provided MPCI routine ``GET_PACKET`` to obtain a packet
   in which to build an RQ message.

#. After building a message packet, RTEMS calls the user-provided MPCI routine
   ``SEND_PACKET`` to transmit the packet to the node on which the object
   resides (referred to as the destination node).

#. The calling task is blocked until the RR message arrives, and control of the
   processor is transferred to another task.

#. The MPCI layer on the destination node senses the arrival of a packet
   (commonly in an ISR), and calls the ``rtems_multiprocessing_announce``
   directive.  This directive readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, performs the requested operation, builds an RR message,
   and returns it to the originating node.

#. The MPCI layer on the originating node senses the arrival of a packet
   (typically via an interrupt), and calls the RTEMS
   ``rtems_multiprocessing_announce`` directive.  This directive readies the
   Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, readies the original requesting task, and blocks until
   another packet arrives.  Control is transferred to the original task which
   then completes processing of the directive.

If an uncorrectable error occurs in the user-provided MPCI layer, the fatal
error handler should be invoked.  RTEMS assumes the reliable transmission and
reception of messages by the MPCI and makes no attempt to detect or correct
errors.

.. index:: proxy, definition

Proxies
-------

A proxy is an RTEMS data structure which resides on a remote node and is used
to represent a task which must block as part of a remote operation. This action
can occur as part of the ``rtems_semaphore_obtain`` and
``rtems_message_queue_receive`` directives.  If the object were local, the
task's control block would be available for modification to indicate it was
blocking on a message queue or semaphore.  However, the task's control block
resides only on the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the Multiprocessor Configuration
Table.  Each node in a multiprocessor system may require a different number of
proxies to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of tasks.

Multiprocessor Configuration Table
----------------------------------

The Multiprocessor Configuration Table contains information needed by RTEMS
when used in a multiprocessor system.  This table is discussed in detail in the
section Multiprocessor Configuration Table of the Configuring a System chapter.
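
The multiprocessing parameters are normally provided through the application
configuration.  The following sketch assumes the standard ``confdefs.h``
multiprocessing options described in the Configuring a System chapter;
``my_mpci_table`` is a hypothetical user-provided MPCI table and all values are
illustrative:

.. code-block:: c

    #include <rtems.h>

    extern rtems_mpci_table my_mpci_table;   /* user-provided MPCI table */

    /* Illustrative values only; the remaining (non-multiprocessing)
     * configuration options are omitted. */
    #define CONFIGURE_MP_APPLICATION                 /* enable multiprocessing */
    #define CONFIGURE_MP_NODE_NUMBER            1    /* this node's number */
    #define CONFIGURE_MP_MAXIMUM_NODES          2    /* maximum_nodes */
    #define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS 32   /* maximum_global_objects */
    #define CONFIGURE_MP_MAXIMUM_PROXIES        8    /* maximum_proxies */
    #define CONFIGURE_MP_MPCI_TABLE_POINTER     ( &my_mpci_table )

    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>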

Multiprocessor Communications Interface Layer
=============================================

The Multiprocessor Communications Interface Layer (MPCI) is a set of
user-provided procedures which enable the nodes in a multiprocessor system to
communicate with one another.  These routines are invoked by RTEMS at various
times in the preparation and processing of remote requests.  Interrupts are
enabled when an MPCI procedure is invoked.  It is assumed that if the execution
mode and/or interrupt level are altered by the MPCI layer, they will be
restored prior to returning to RTEMS.

.. index:: MPCI, definition

The MPCI layer is responsible for managing a pool of buffers called packets and
for sending these packets between system nodes.  Packet buffers contain the
messages sent between the nodes.  Typically, the MPCI layer will encapsulate
the packet within an envelope which contains the information needed by the MPCI
layer.  The number of packets available is dependent on the MPCI layer
implementation.

.. index:: MPCI entry points

The entry points to the routines in the user's MPCI layer should be placed in
the Multiprocessor Communications Interface Table.  The user must provide entry
points for each of the following table entries in a multiprocessor system (a
sketch of such a table follows the list):

.. list-table::
 :class: rtems-table

 * - initialization
   - initialize the MPCI
 * - get_packet
   - obtain a packet buffer
 * - return_packet
   - return a packet buffer
 * - send_packet
   - send a packet to another node
 * - receive_packet
   - called to get an arrived packet

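The following sketch shows how these entry points might be collected into an
MPCI table.  It assumes the ``rtems_mpci_table`` layout described in the
Configuring a System chapter; the routine names and values are illustrative and
the field names should be verified against that chapter:

.. code-block:: c

    #include <rtems.h>

    /* User-provided entry points; prototypes follow in the next sections. */
    rtems_mpci_entry my_mpci_initialization( rtems_configuration_table * );
    rtems_mpci_entry my_mpci_get_packet( rtems_packet_prefix ** );
    rtems_mpci_entry my_mpci_return_packet( rtems_packet_prefix * );
    rtems_mpci_entry my_mpci_send_packet( uint32_t, rtems_packet_prefix ** );
    rtems_mpci_entry my_mpci_receive_packet( rtems_packet_prefix ** );

    /* MPCI table referenced by the application configuration; the timeout
     * and maximum packet size are illustrative values. */
    rtems_mpci_table my_mpci_table = {
      .default_timeout     = 100,   /* in clock ticks */
      .maximum_packet_size = 128,
      .initialization      = my_mpci_initialization,
      .get_packet          = my_mpci_get_packet,
      .return_packet       = my_mpci_return_packet,
      .send_packet         = my_mpci_send_packet,
      .receive_packet      = my_mpci_receive_packet
    };
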
A packet is sent by RTEMS in each of the following situations:

- an RQ is generated on an originating node;

- an RR is generated on a destination node;

- a global object is created;

- a global object is deleted;

- a local task blocked on a remote object is deleted;

- during system initialization to check for system consistency.

If the target hardware supports it, the arrival of a packet at a node may
generate an interrupt.  Otherwise, the real-time clock ISR can check for the
arrival of a packet.  In any case, the ``rtems_multiprocessing_announce``
directive must be called to announce the arrival of a packet.  After exiting
the ISR, control will be passed to the Multiprocessing Server to process the
packet.  The Multiprocessing Server will call the get_packet entry to obtain a
packet buffer and the receive_packet entry to copy the message into the buffer
obtained.

INITIALIZATION
--------------

The INITIALIZATION component of the user-provided MPCI layer is called as part
of the ``rtems_initialize_executive`` directive to initialize the MPCI layer
and associated hardware.  It is invoked immediately after all of the device
drivers have been initialized.  This component should adhere to the following
prototype:

.. index:: rtems_mpci_entry

.. code-block:: c

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    );

where configuration is the address of the user's Configuration Table.
Operations on global objects cannot be performed until this component is
invoked.  The INITIALIZATION component is invoked only once in the life of any
system.  If the MPCI layer cannot be successfully initialized, the fatal error
manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to provide the executive with
packet buffers.  The INITIALIZATION routine must create and initialize a pool
of packet buffers.  There must be enough packet buffers so RTEMS can obtain one
whenever needed.
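
A minimal sketch of such an initialization routine is shown below.  The helper
routines, error codes, and overall structure are illustrative assumptions
rather than part of the RTEMS API:

.. code-block:: c

    #include <rtems.h>
    #include <stdbool.h>

    /* Hypothetical helpers standing in for the MPCI layer's real setup code. */
    extern bool my_packet_pool_initialize( void );
    extern bool my_link_hardware_initialize( void );

    rtems_mpci_entry my_mpci_initialization(
      rtems_configuration_table *configuration
    )
    {
      (void) configuration;

      /* Create the pool of packet buffers used by GET_PACKET and
       * RETURN_PACKET. */
      if ( !my_packet_pool_initialize() ) {
        rtems_fatal_error_occurred( 0xdead0001 );   /* illustrative code */
      }

      /* Bring up the communications hardware connecting the nodes. */
      if ( !my_link_hardware_initialize() ) {
        rtems_fatal_error_occurred( 0xdead0002 );   /* illustrative code */
      }
    }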

GET_PACKET
----------

The GET_PACKET component of the user-provided MPCI layer is called when RTEMS
must obtain a packet buffer to send or broadcast a message.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    );

where packet is the address of a pointer to a packet.  This routine always
succeeds and, upon return, packet will contain the address of a packet.  If,
for any reason, a packet cannot be successfully obtained, then the fatal error
manager should be invoked.

RTEMS has been optimized to avoid the need for obtaining a packet each time a
message is sent or broadcast.  For example, RTEMS sends response messages (RR)
back to the originator in the same packet in which the request message (RQ)
arrived.

RETURN_PACKET
-------------

The RETURN_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to release a packet to the free packet buffer pool.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    );

where packet is the address of a packet.  If the packet cannot be successfully
returned, the fatal error manager should be invoked.
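
A minimal sketch of GET_PACKET and RETURN_PACKET built on a simple free list is
shown below.  The packet layout, pool size, and names are illustrative
assumptions; only the entry prototypes above are defined by the MPCI interface:

.. code-block:: c

    #include <rtems.h>

    #define MY_PACKET_COUNT 16              /* illustrative pool size */

    typedef struct {
      rtems_packet_prefix prefix;           /* RTEMS portion of the packet */
      uint32_t            payload[ 16 ];    /* illustrative application area */
    } my_packet;

    /* Free list filled from a packet pool by this layer's INITIALIZATION
     * routine (not shown). */
    static my_packet *my_free_list[ MY_PACKET_COUNT ];
    static uint32_t   my_free_count;

    rtems_mpci_entry my_mpci_get_packet( rtems_packet_prefix **packet )
    {
      rtems_interrupt_level level;

      rtems_interrupt_disable( level );

      if ( my_free_count == 0 ) {
        /* This entry must always succeed; a correctly sized pool never runs
         * dry, otherwise the fatal error manager is invoked. */
        rtems_fatal_error_occurred( 0xdead0003 );   /* illustrative code */
      }

      *packet = &my_free_list[ --my_free_count ]->prefix;
      rtems_interrupt_enable( level );
    }

    rtems_mpci_entry my_mpci_return_packet( rtems_packet_prefix *packet )
    {
      rtems_interrupt_level level;

      rtems_interrupt_disable( level );
      my_free_list[ my_free_count++ ] = (my_packet *) packet;
      rtems_interrupt_enable( level );
    }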

RECEIVE_PACKET
--------------

The RECEIVE_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to obtain a packet which has previously arrived.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    );

where packet is a pointer to the address of a packet in which to place the
message from another node.  If a message is available, then packet will contain
the address of the message from another node.  If no messages are available,
then upon return packet should contain NULL.
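
A minimal sketch of a RECEIVE_PACKET implementation is shown below; the queue
of arrived packets and its helper routine are illustrative assumptions:

.. code-block:: c

    #include <rtems.h>
    #include <stddef.h>

    /* Hypothetical helper returning the oldest packet delivered by the
     * communications hardware, or NULL if none is pending. */
    extern rtems_packet_prefix *my_arrived_queue_get( void );

    rtems_mpci_entry my_mpci_receive_packet( rtems_packet_prefix **packet )
    {
      /* Hand RTEMS the next packet which has arrived from another node, or
       * NULL when no packet is waiting. */
      *packet = my_arrived_queue_get();
    }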

SEND_PACKET
-----------

The SEND_PACKET component of the user-provided MPCI layer is called when RTEMS
needs to send a packet containing a message to another node.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t               node,
        rtems_packet_prefix  **packet
    );

where node is the node number of the destination and packet is the address of a
packet containing the message.  If the packet cannot be successfully sent, the
fatal error manager should be invoked.

If node is set to zero, the packet is to be broadcast to all other nodes in the
system.  Although some MPCI layers will be built upon hardware which supports a
broadcast mechanism, others may be required to generate a copy of the packet
for each node in the system.
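
A minimal sketch of a SEND_PACKET implementation which emulates broadcast by
looping over the other nodes is shown below; the transmit helper and node
bookkeeping are illustrative assumptions:

.. code-block:: c

    #include <rtems.h>
    #include <stdbool.h>

    /* Hypothetical helper which moves one packet to a single destination
     * node over the connection media. */
    extern bool     my_link_transmit( uint32_t node, rtems_packet_prefix *packet );
    extern uint32_t my_maximum_nodes;   /* captured during initialization */
    extern uint32_t my_local_node;      /* captured during initialization */

    rtems_mpci_entry my_mpci_send_packet(
      uint32_t               node,
      rtems_packet_prefix  **packet
    )
    {
      uint32_t destination;
      bool     ok = true;

      if ( node != 0 ) {
        /* Directed send to a single destination node. */
        ok = my_link_transmit( node, *packet );
      } else {
        /* Node zero means broadcast: send a copy of the packet to every
         * other node when the hardware has no broadcast support. */
        for ( destination = 1; destination <= my_maximum_nodes; ++destination ) {
          if ( destination != my_local_node ) {
            ok = ok && my_link_transmit( destination, *packet );
          }
        }
      }

      if ( !ok ) {
        rtems_fatal_error_occurred( 0xdead0004 );   /* illustrative code */
      }
    }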

.. COMMENT: XXX packet_prefix structure needs to be defined in this document

Many MPCI layers use the ``packet_length`` field of the ``rtems_packet_prefix``
portion of the packet to avoid sending unnecessary data.  This is especially
useful if the media connecting the nodes is relatively slow.

The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the packet
indicates how much of the packet, in 32-bit units, may require conversion in a
heterogeneous system.

.. index:: heterogeneous multiprocessing

Supporting Heterogeneous Environments
-------------------------------------

Developing an MPCI layer for a heterogeneous system requires a thorough
understanding of the differences between the processors which comprise the
system.  One difficult problem is the varying data representation schemes used
by different processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.  Processors which place
the least significant byte at the smallest address are classified as little
endian processors.  Little endian byte-ordering is shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Conversely, processors which place the most significant byte at the smallest
address are classified as big endian processors.  Big endian byte-ordering is
shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Unfortunately, sharing a data structure between big endian and little endian
processors requires translation into a common endian format.  An application
designer typically chooses the common endian format to minimize conversion
overhead.

Another issue in the design of shared data structures is the alignment of data
structure elements.  Alignment is both processor and compiler implementation
dependent.  For example, some processors allow data elements to begin on any
address boundary, while others impose restrictions.  Common restrictions are
that data elements must begin on either an even address or on a long word
boundary.  Violation of these restrictions may cause an exception or impose a
performance penalty.

Other issues which commonly impact the design of shared data structures include
the representation of floating point numbers, bit fields, decimal data, and
character strings.  In addition, the representation method for negative
integers could be one's or two's complement.  These factors combine to increase
the complexity of designing and manipulating data structures shared between
processors.

RTEMS addressed these issues in the design of the packets used to communicate
between nodes.  The RTEMS packet format is designed to allow the MPCI layer to
perform all necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer must be aware
of the following:

- All packets must begin on a four byte boundary.

- Packets are composed of both RTEMS and application data.  All RTEMS data is
  treated as 32-bit unsigned quantities and is in the first ``to_convert``
  32-bit quantities of the packet.  The ``to_convert`` field is part of the
  ``rtems_packet_prefix`` portion of the packet.

- The RTEMS data component of the packet must be in native endian format.
  Endian conversion may be performed by either the sending or receiving MPCI
  layer (see the sketch after this list).

- RTEMS makes no assumptions regarding the application data component of the
  packet.
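
As an example, a sending MPCI layer in a mixed endian system might convert the
RTEMS data component of an outgoing packet to the destination node's byte order
as sketched below; the byte-swap helper and the decision of when to convert are
illustrative assumptions:

.. code-block:: c

    #include <rtems.h>

    /* Swap the bytes of one 32-bit quantity (illustrative helper). */
    static uint32_t my_swap_u32( uint32_t value )
    {
      return ( value >> 24 )
           | ( ( value >> 8 ) & 0x0000ff00U )
           | ( ( value << 8 ) & 0x00ff0000U )
           | ( value << 24 );
    }

    /* Convert the RTEMS data component of an outgoing packet for a node
     * with the opposite byte order.  Only the first to_convert 32-bit
     * quantities are RTEMS data; the application data component is left
     * untouched. */
    void my_convert_packet( rtems_packet_prefix *packet )
    {
      uint32_t *words = (uint32_t *) packet;    /* packets are 4-byte aligned */
      uint32_t  count = packet->to_convert;     /* read before conversion */
      uint32_t  i;

      for ( i = 0; i < count; ++i ) {
        words[ i ] = my_swap_u32( words[ i ] );
      }
    }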

Operations
==========

Announcing a Packet
-------------------

The ``rtems_multiprocessing_announce`` directive is called by the MPCI layer to
inform RTEMS that a packet has arrived from another node.  This directive can
be called from an interrupt service routine or from within a polling routine.
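
For example, an interrupt-driven MPCI layer might announce arriving packets
from its receive interrupt handler, as in the following sketch; the interrupt
handler and hardware helper are illustrative assumptions:

.. code-block:: c

    #include <rtems.h>

    /* Hypothetical helper which moves the packet out of the hardware and
     * onto the queue consulted later by the RECEIVE_PACKET entry. */
    extern void my_link_retrieve_arrived_packets( void );

    /* Receive interrupt handler of the hypothetical communications hardware. */
    void my_link_receive_isr( void *arg )
    {
      (void) arg;

      my_link_retrieve_arrived_packets();

      /* Tell RTEMS that a packet is waiting; the Multiprocessing Server
       * will process it once the ISR completes. */
      rtems_multiprocessing_announce();
    }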

Directives
==========

This section details the additional directives required to support RTEMS in a
multiprocessor configuration.  A subsection is dedicated to each of this
manager's directives and describes the calling sequence, related constants,
usage, and status codes.

.. raw:: latex

   \clearpage

.. index:: announce arrival of packet
.. index:: rtems_multiprocessing_announce

.. _rtems_multiprocessing_announce:

MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
-----------------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        void rtems_multiprocessing_announce( void );

DIRECTIVE STATUS CODES:
    NONE

DESCRIPTION:
    This directive informs RTEMS that a multiprocessing communications packet
    has arrived from another node.  This directive is called by the
    user-provided MPCI, and is only used in multiprocessor configurations.

NOTES:
    This directive is typically called from an ISR.

    This directive will almost certainly cause the calling task to be
    preempted.

    This directive does not generate activity on remote nodes.