Multiprocessing Manager
#######################

.. index:: multiprocessing

Introduction
============

In multiprocessor real-time systems, new
requirements, such as sharing data and global resources between
processors, are introduced.  This requires an efficient and
reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the
ramifications of multiple processors affect each and every
characteristic of a real-time system, almost always making them
more complicated.

RTEMS addresses these issues by providing simple and
flexible real-time multiprocessing capabilities.  The executive
easily lends itself to both tightly-coupled and loosely-coupled
configurations of the target system hardware.  In addition,
RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to
transcend the physical boundaries of the target hardware
configuration.  This goal is achieved by presenting the
application software with a logical view of the target system
where the boundaries between processor nodes are transparent.
As a result, the application developer may designate objects
such as tasks, queues, events, signals, semaphores, and memory
blocks as global objects.  These global objects may then be
accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines
that the object being accessed resides on another processor and
performs the actions required to access the desired object.
Simply stated, RTEMS allows the entire system, both hardware and
software, to be viewed logically as a single system.

Multiprocessing operations are transparent at the application level.
Operations on remote objects are implicitly processed as remote
procedure calls.  Although remote operations on objects are supported
from Ada tasks, the calls used to support the multiprocessing
communications should be implemented in C and are not supported
in the Ada binding.  Since there is no Ada binding for RTEMS
multiprocessing support services, all examples and data structures
shown in this chapter are in C.

Background
==========

.. index:: multiprocessing topologies

RTEMS makes no assumptions regarding the connection
media or topology of a multiprocessor system.  The tasks which
compose a particular application can be spread among as many
processors as needed to satisfy the application’s timing
requirements.  The application tasks can interact using a subset
of the RTEMS directives as if they were on the same processor.
These directives allow application tasks to exchange data,
communicate, and synchronize regardless of which processor they
reside upon.

The RTEMS multiprocessor execution model is multiple
instruction streams with multiple data streams (MIMD).  This
execution model has each of the processors executing code
independent of the other processors.  Because of this
parallelism, the application designer can more easily guarantee
deterministic behavior.

By supporting heterogeneous environments, RTEMS
allows the systems designer to select the most efficient
processor for each subsystem of the application.  Configuring
RTEMS for a heterogeneous environment is no more difficult than
for a homogeneous one.  In keeping with the RTEMS philosophy of
providing transparent physical node boundaries, the minimal
heterogeneous processing required is isolated in the MPCI layer.

Nodes
-----
.. index:: nodes, definition

A processor in an RTEMS system is referred to as a
node.  Each node is assigned a unique non-zero node number by
the application designer.  RTEMS assumes that node numbers are
assigned consecutively from one to the ``maximum_nodes``
configuration parameter.  The node number, ``node``, and the
maximum number of nodes, ``maximum_nodes``, in a system are
found in the Multiprocessor Configuration Table.  The
``maximum_nodes`` field and the number of global objects,
``maximum_global_objects``, are required to be the same on all
nodes in a system.

The node number is used by RTEMS to identify each
node when performing remote operations.  Thus, the
Multiprocessor Communications Interface Layer (MPCI) must be
able to route messages based on the node number.

Global Objects
--------------
.. index:: global objects, definition

All RTEMS objects which are created with the GLOBAL
attribute will be known on all other nodes.  Global objects can
be referenced from any node in the system, although certain
directive-specific restrictions (e.g. one cannot delete a remote
object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of
global objects in the system is user configurable and can be
found in the ``maximum_global_objects`` field in the Multiprocessor
Configuration Table.  The distribution of tasks to processors is
performed during the application design phase.  Dynamic task
relocation is not supported by RTEMS.
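
For example, a task on one node can create a semaphore with the
GLOBAL attribute, which makes the object visible to every other
node in the system.  The following is a minimal sketch; the
object name and error handling are illustrative only:

.. code:: c

    #include <rtems.h>

    /* On node 1: create a semaphore known on all nodes. */
    void create_shared_semaphore( void )
    {
      rtems_id          id;
      rtems_status_code status;

      status = rtems_semaphore_create(
        rtems_build_name( 'S', 'H', 'R', 'D' ),
        1,                              /* initially available */
        RTEMS_GLOBAL | RTEMS_PRIORITY,  /* GLOBAL: entered in the global object table */
        0,
        &id
      );
      if ( status != RTEMS_SUCCESSFUL ) {
        /* creation failed; for example, no global object table entry free */
      }
    }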

Global Object Table
-------------------
.. index:: global objects table

RTEMS maintains two tables containing object
information on every node in a multiprocessor system: a local
object table and a global object table.  The local object table
on each node is unique and contains information for all objects
created on this node whether those objects are local or global.
The global object table contains information regarding all
global objects in the system and, consequently, is the same on
every node.

Since each node must maintain an identical copy of
the global object table, the maximum number of entries in each
copy of the table must be the same.  The maximum number of
entries in each copy is determined by the
``maximum_global_objects`` parameter in the Multiprocessor
Configuration Table.  This parameter, as well as the
``maximum_nodes`` parameter, is required to be the same on all
nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion
of a global object.

Remote Operations
-----------------
.. index:: MPCI and remote operations

When an application performs an operation on a remote
global object, RTEMS must generate a Remote Request (RQ) message
and send it to the appropriate node.  After completing the
requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.
Messages generated as a side-effect of a directive (such as
deleting a global task) are known as Remote Processes (RP) and
do not require the receiving node to respond.

Other than taking slightly longer to execute
directives on remote objects, the application is unaware of the
location of the objects it acts upon.  The exact amount of
overhead required for a remote operation is dependent on the
media connecting the nodes and, to a lesser degree, on the
efficiency of the user-provided MPCI routines.

The following shows the typical transaction sequence
during a remote operation; a sketch of the application-side
call in the first step follows the list:

#. The application issues a directive accessing a
   remote global object.

#. RTEMS determines the node on which the object
   resides.

#. RTEMS calls the user-provided MPCI routine
   GET_PACKET to obtain a packet in which to build an RQ message.

#. After building a message packet, RTEMS calls the
   user-provided MPCI routine SEND_PACKET to transmit the packet to
   the node on which the object resides (referred to as the
   destination node).

#. The calling task is blocked until the RR message
   arrives, and control of the processor is transferred to another
   task.

#. The MPCI layer on the destination node senses the
   arrival of a packet (commonly in an ISR), and calls the
   ``rtems_multiprocessing_announce`` directive.  This directive
   readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided
   MPCI routine RECEIVE_PACKET, performs the requested operation,
   builds an RR message, and returns it to the originating node.

#. The MPCI layer on the originating node senses the
   arrival of a packet (typically via an interrupt), and calls the
   ``rtems_multiprocessing_announce`` directive.  This directive
   readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided
   MPCI routine RECEIVE_PACKET, readies the original requesting
   task, and blocks until another packet arrives.  Control is
   transferred to the original task which then completes processing
   of the directive.
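
The application-side call that starts this sequence is an
ordinary directive.  The following sketch assumes the global
semaphore created in the earlier example; RTEMS resolves the
location of the object automatically:

.. code:: c

    #include <rtems.h>

    /* On node 2: locate and operate on the remote global semaphore. */
    void use_remote_semaphore( void )
    {
      rtems_id          id;
      rtems_status_code status;

      /* Look up the name in the global object table. */
      status = rtems_semaphore_ident(
        rtems_build_name( 'S', 'H', 'R', 'D' ),
        RTEMS_SEARCH_ALL_NODES,
        &id
      );
      if ( status != RTEMS_SUCCESSFUL )
        return;

      /* The object is remote, so RTEMS generates an RQ message and
         blocks this task (via a proxy on the remote node) until the
         RR message arrives. */
      status = rtems_semaphore_obtain( id, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
      if ( status == RTEMS_SUCCESSFUL )
        (void) rtems_semaphore_release( id );
    }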

If an uncorrectable error occurs in the user-provided
MPCI layer, the fatal error handler should be invoked.  RTEMS
assumes the reliable transmission and reception of messages by
the MPCI and makes no attempt to detect or correct errors.

Proxies
-------
.. index:: proxy, definition

A proxy is an RTEMS data structure which resides on a
remote node and is used to represent a task which must block as
part of a remote operation.  This action can occur as part of the
``rtems.semaphore_obtain`` and ``rtems.message_queue_receive``
directives.  If the object were local, the task’s control block
would be available for modification to indicate it was blocking
on a message queue or semaphore.  However, the task’s control
block resides only on the same node as the task.  As a result,
the remote node must allocate a proxy to represent the task
until it can be readied.

The maximum number of proxies is defined in the
Multiprocessor Configuration Table.  Each node in a
multiprocessor system may require a different number of proxies
to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of
tasks.

Multiprocessor Configuration Table
----------------------------------

The Multiprocessor Configuration Table contains
information needed by RTEMS when used in a multiprocessor
system.  This table is discussed in detail in the section
Multiprocessor Configuration Table of the Configuring a System
chapter.
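
As an illustration, a two-node system might be configured with
``<rtems/confdefs.h>`` as sketched below.  The ``CONFIGURE_MP_*``
macro names follow the classic configuration scheme; verify them,
and the remaining application configuration, against the
Configuring a System chapter for your RTEMS version:

.. code:: c

    #include <rtems.h>

    /* Multiprocessing configuration for node 1 of a 2-node system. */
    #define CONFIGURE_MP_APPLICATION
    #define CONFIGURE_MP_NODE_NUMBER            1
    #define CONFIGURE_MP_MAXIMUM_NODES          2
    #define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS 32  /* same on all nodes */
    #define CONFIGURE_MP_MAXIMUM_PROXIES        8   /* may differ per node */

    /* Minimal remaining application configuration (illustrative). */
    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
    #define CONFIGURE_MAXIMUM_TASKS             4
    #define CONFIGURE_RTEMS_INIT_TASKS_TABLE

    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>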

Multiprocessor Communications Interface Layer
=============================================

The Multiprocessor Communications Interface Layer
(MPCI) is a set of user-provided procedures which enable the
nodes in a multiprocessor system to communicate with one
another.  These routines are invoked by RTEMS at various times
in the preparation and processing of remote requests.
Interrupts are enabled when an MPCI procedure is invoked.  It is
assumed that if the execution mode and/or interrupt level are
altered by the MPCI layer, they will be restored prior to
returning to RTEMS.

.. index:: MPCI, definition

The MPCI layer is responsible for managing a pool of
buffers called packets and for sending these packets between
system nodes.  Packet buffers contain the messages sent between
the nodes.  Typically, the MPCI layer will encapsulate the
packet within an envelope which contains the information needed
by the MPCI layer.  The number of packets available is dependent
on the MPCI layer implementation.

.. index:: MPCI entry points

The entry points to the routines in the user’s MPCI
layer should be placed in the Multiprocessor Communications
Interface Table.  The user must provide entry points for each of
the following table entries in a multiprocessor system; a sketch
of such a table follows the list:

- initialization - initialize the MPCI

- get_packet - obtain a packet buffer

- return_packet - return a packet buffer

- send_packet - send a packet to another node

- receive_packet - called to get an arrived packet
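
These entry points are collected in an ``rtems_mpci_table``.  The
following sketch assumes the routine names used by the prototypes
later in this chapter; the timeout and maximum packet size values
are placeholders, and the exact field and entry types should be
checked against your RTEMS headers:

.. code:: c

    #include <rtems.h>

    /* Prototypes as shown in the following sections. */
    rtems_mpci_entry user_mpci_initialization( rtems_configuration_table * );
    rtems_mpci_entry user_mpci_get_packet( rtems_packet_prefix ** );
    rtems_mpci_entry user_mpci_return_packet( rtems_packet_prefix * );
    rtems_mpci_entry user_mpci_send_packet( uint32_t, rtems_packet_prefix ** );
    rtems_mpci_entry user_mpci_receive_packet( rtems_packet_prefix ** );

    rtems_mpci_table user_mpci_table = {
      100,                                 /* default timeout in ticks */
      sizeof( rtems_packet_prefix ) + 64,  /* maximum packet size (placeholder) */
      user_mpci_initialization,
      user_mpci_get_packet,
      user_mpci_return_packet,
      user_mpci_send_packet,
      user_mpci_receive_packet
    };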

A packet is sent by RTEMS in each of the following situations:

- an RQ is generated on an originating node;

- an RR is generated on a destination node;

- a global object is created;

- a global object is deleted;

- a local task blocked on a remote object is deleted;

- during system initialization to check for system consistency.

If the target hardware supports it, the arrival of a
packet at a node may generate an interrupt.  Otherwise, the
real-time clock ISR can check for the arrival of a packet.  In
any case, the ``rtems_multiprocessing_announce`` directive must be
called to announce the arrival of a packet.  After exiting the
ISR, control will be passed to the Multiprocessing Server to
process the packet.  The Multiprocessing Server will call the
get_packet entry to obtain a packet buffer and the
receive_packet entry to copy the message into the buffer obtained.

INITIALIZATION
--------------

The INITIALIZATION component of the user-provided
MPCI layer is called as part of the ``rtems_initialize_executive``
directive to initialize the MPCI layer and associated hardware.
It is invoked immediately after all of the device drivers have
been initialized.  This component should adhere to the
following prototype:

.. index:: rtems_mpci_entry

.. code:: c

    rtems_mpci_entry user_mpci_initialization(
      rtems_configuration_table *configuration
    );

where configuration is the address of the user’s
Configuration Table.  Operations on global objects cannot be
performed until this component is invoked.  The INITIALIZATION
component is invoked only once in the life of any system.  If
the MPCI layer cannot be successfully initialized, the fatal
error manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to
provide the executive with packet buffers.  The INITIALIZATION
routine must create and initialize a pool of packet buffers.
There must be enough packet buffers so RTEMS can obtain one
whenever needed.
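
The following is a minimal sketch of an INITIALIZATION routine
which chains a static pool of packet buffers onto a free list.
The pool size, buffer layout, and list handling are illustrative
assumptions, and any hardware setup is elided:

.. code:: c

    #include <rtems.h>

    #define PACKET_POOL_SIZE 16
    #define MAX_PACKET_SIZE  ( sizeof( rtems_packet_prefix ) + 64 )

    typedef struct packet_node {
      struct packet_node *next;
      /* buffer follows a pointer member, so it is at least
         pointer-aligned, satisfying the four byte packet rule */
      uint8_t             buffer[ MAX_PACKET_SIZE ];
    } packet_node;

    static packet_node  packet_pool[ PACKET_POOL_SIZE ];
    static packet_node *free_list;

    rtems_mpci_entry user_mpci_initialization(
      rtems_configuration_table *configuration
    )
    {
      int i;

      (void) configuration;

      /* Chain every packet buffer onto the free list. */
      free_list = NULL;
      for ( i = 0; i < PACKET_POOL_SIZE; i++ ) {
        packet_pool[ i ].next = free_list;
        free_list = &packet_pool[ i ];
      }

      /* Initialization of the connection medium (shared memory,
         serial link, etc.) would also be performed here.  On an
         uncorrectable failure, invoke the fatal error manager
         instead of returning. */
    }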

GET_PACKET
----------

The GET_PACKET component of the user-provided MPCI
layer is called when RTEMS must obtain a packet buffer to send
or broadcast a message.  This component should adhere to the
following prototype:

.. code:: c

    rtems_mpci_entry user_mpci_get_packet(
      rtems_packet_prefix **packet
    );

where packet is the address of a pointer to a packet.
This routine always succeeds and, upon return, packet will
contain the address of a packet.  If, for any reason, a packet
cannot be successfully obtained, then the fatal error manager
should be invoked.
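
Continuing the free list sketch from the INITIALIZATION example
above (same source file), a GET_PACKET routine might look as
follows.  Locking around the list is omitted for brevity; a real
implementation must guard against concurrent access:

.. code:: c

    rtems_mpci_entry user_mpci_get_packet(
      rtems_packet_prefix **packet
    )
    {
      /* free_list and packet_node are defined in the
         INITIALIZATION sketch above. */
      packet_node *node = free_list;

      if ( node == NULL ) {
        /* This routine must always succeed, so an exhausted
           pool is treated as a fatal error. */
        rtems_fatal_error_occurred( 0xBADBADBD );
      }

      free_list = node->next;
      *packet = (rtems_packet_prefix *) node->buffer;
    }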

RTEMS has been optimized to avoid the need for
obtaining a packet each time a message is sent or broadcast.
For example, RTEMS sends response messages (RR) back to the
originator in the same packet in which the request message (RQ)
arrived.

RETURN_PACKET
-------------

The RETURN_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to release a packet to the free
packet buffer pool.  This component should adhere to the
following prototype:

.. code:: c

    rtems_mpci_entry user_mpci_return_packet(
      rtems_packet_prefix *packet
    );

where packet is the address of a packet.  If the
packet cannot be successfully returned, the fatal error manager
should be invoked.
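
A matching RETURN_PACKET sketch, again building on the free list
from the INITIALIZATION example and omitting locking:

.. code:: c

    #include <stddef.h>  /* offsetof */

    rtems_mpci_entry user_mpci_return_packet(
      rtems_packet_prefix *packet
    )
    {
      /* Recover the pool node from the buffer address and push
         it back onto the free list. */
      packet_node *node = (packet_node *)
        ( (uint8_t *) packet - offsetof( packet_node, buffer ) );

      node->next = free_list;
      free_list  = node;
    }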

RECEIVE_PACKET
--------------

The RECEIVE_PACKET component of the user-provided
MPCI layer is called when RTEMS needs to obtain a packet which
has previously arrived.  This component should adhere to the
following prototype:

.. code:: c

    rtems_mpci_entry user_mpci_receive_packet(
      rtems_packet_prefix **packet
    );

where packet is a pointer to the address of a packet
in which to place a message from another node.  If a message is
available, then, upon return, packet will contain the address of
the message from another node.  If no messages are available,
then packet should be set to NULL.
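
A sketch of a RECEIVE_PACKET routine; the queue of packets
already copied off the medium, and its dequeue helper, are
hypothetical:

.. code:: c

    #include <rtems.h>
    #include <stddef.h>

    /* Hypothetical: returns the next arrived packet, or NULL. */
    extern rtems_packet_prefix *dequeue_arrived_packet( void );

    rtems_mpci_entry user_mpci_receive_packet(
      rtems_packet_prefix **packet
    )
    {
      /* Hand RTEMS the next pending packet; the Multiprocessing
         Server calls this entry repeatedly until it sees NULL. */
      *packet = dequeue_arrived_packet();
    }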

SEND_PACKET
-----------

The SEND_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to send a packet containing a
message to another node.  This component should adhere to the
following prototype:

.. code:: c

    rtems_mpci_entry user_mpci_send_packet(
      uint32_t               node,
      rtems_packet_prefix  **packet
    );

where node is the node number of the destination and packet is the
address of a packet containing a message.  If the packet cannot
be successfully sent, the fatal error manager should be invoked.

If node is set to zero, the packet is to be
broadcast to all other nodes in the system.  Although some
MPCI layers will be built upon hardware which supports a
broadcast mechanism, others may be required to generate a copy
of the packet for each node in the system.
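
A sketch of a SEND_PACKET routine which emulates broadcast by
looping over all nodes.  The transmit helper and the two
configuration variables are hypothetical stand-ins for whatever
the medium and the MPCI layer actually provide:

.. code:: c

    #include <rtems.h>

    /* Hypothetical medium-specific transmit routine. */
    extern void transmit_to_node( uint32_t node, rtems_packet_prefix *packet );

    /* Hypothetical copies of Multiprocessor Configuration Table values. */
    extern uint32_t mp_local_node;
    extern uint32_t mp_maximum_nodes;

    rtems_mpci_entry user_mpci_send_packet(
      uint32_t               node,
      rtems_packet_prefix  **packet
    )
    {
      uint32_t destination;

      if ( node != 0 ) {
        transmit_to_node( node, *packet );
        return;
      }

      /* node == 0 means broadcast: without hardware support,
         send one copy to every node except this one. */
      for ( destination = 1; destination <= mp_maximum_nodes; destination++ ) {
        if ( destination != mp_local_node )
          transmit_to_node( destination, *packet );
      }
    }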

.. COMMENT: XXX packet_prefix structure needs to be defined in this document

Many MPCI layers use the ``packet_length`` field of the
``rtems_packet_prefix`` portion of the packet to avoid sending
unnecessary data.  This is especially useful if the media
connecting the nodes is relatively slow.

The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the
packet indicates how much of the packet, in 32-bit units, may require
conversion in a heterogeneous system.

Supporting Heterogeneous Environments
-------------------------------------
.. index:: heterogeneous multiprocessing

Developing an MPCI layer for a heterogeneous system
requires a thorough understanding of the differences between the
processors which comprise the system.  One difficult problem is
the varying data representation schemes used by different
processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.
Processors which place the least significant byte at the
smallest address are classified as little endian processors.
Little endian byte-ordering is shown below:

.. code:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Conversely, processors which place the most
significant byte at the smallest address are classified as big
endian processors.  Big endian byte-ordering is shown below:

.. code:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Unfortunately, sharing a data structure between big
endian and little endian processors requires translation into a
common endian format.  An application designer typically chooses
the common endian format to minimize conversion overhead.

Another issue in the design of shared data structures
is the alignment of data structure elements.  Alignment is both
processor and compiler implementation dependent.  For example,
some processors allow data elements to begin on any address
boundary, while others impose restrictions.  Common restrictions
are that data elements must begin on either an even address or
on a long word boundary.  Violation of these restrictions may
cause an exception or impose a performance penalty.

Other issues which commonly impact the design of
shared data structures include the representation of floating
point numbers, bit fields, decimal data, and character strings.
In addition, the representation method for negative integers
could be one’s or two’s complement.  These factors combine to
increase the complexity of designing and manipulating data
structures shared between processors.

RTEMS addressed these issues in the design of the
packets used to communicate between nodes.  The RTEMS packet
format is designed to allow the MPCI layer to perform all
necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer
must be aware of the following; a conversion sketch follows the
list:

- All packets must begin on a four byte boundary.

- Packets are composed of both RTEMS and application data.  All RTEMS data
  is treated as 32-bit unsigned quantities and is in the first ``to_convert``
  32-bit quantities of the packet.  The ``to_convert`` field is part of the
  ``rtems_packet_prefix`` portion of the packet.

- The RTEMS data component of the packet must be in native
  endian format.  Endian conversion may be performed by either the
  sending or receiving MPCI layer.

- RTEMS makes no assumptions regarding the application
  data component of the packet.
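
The following sketch shows the conversion a receiving MPCI layer
might perform when the sending node has the opposite byte order.
The swap helper is illustrative, and the caller is assumed to
have already determined the sender's endianness and the number
of words to convert:

.. code:: c

    #include <rtems.h>

    /* Swap the bytes of one 32-bit word (illustrative helper). */
    static uint32_t swap_u32( uint32_t value )
    {
      return ( ( value & 0x000000FFU ) << 24 ) |
             ( ( value & 0x0000FF00U ) <<  8 ) |
             ( ( value & 0x00FF0000U ) >>  8 ) |
             ( ( value & 0xFF000000U ) >> 24 );
    }

    /* Convert the RTEMS portion of an incoming packet to native
       endian format.  Only the first to_convert 32-bit words hold
       RTEMS data; application data is left untouched. */
    static void convert_packet( rtems_packet_prefix *packet, uint32_t to_convert )
    {
      uint32_t *words = (uint32_t *) packet;
      uint32_t  i;

      for ( i = 0; i < to_convert; i++ )
        words[ i ] = swap_u32( words[ i ] );
    }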

Operations
==========

Announcing a Packet
-------------------

The ``rtems_multiprocessing_announce`` directive is called by
the MPCI layer to inform RTEMS that a packet has arrived from
another node.  This directive can be called from an interrupt
service routine or from within a polling routine.
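
For instance, an interrupt-driven MPCI layer might announce the
arrival as sketched below; the handler name and any queueing of
the packet are illustrative:

.. code:: c

    #include <rtems.h>

    /* Hypothetical ISR for the interconnect hardware. */
    void mpci_interrupt_handler( void *arg )
    {
      (void) arg;

      /* Copy or queue the arrived packet as the medium requires,
         then notify RTEMS so the Multiprocessing Server runs. */
      rtems_multiprocessing_announce();
    }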

Directives
==========

This section details the additional directives
required to support RTEMS in a multiprocessor configuration.  A
subsection is dedicated to each of this manager’s directives and
describes the calling sequence, related constants, usage, and
status codes.

MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
-----------------------------------------------------------
.. index:: announce arrival of packet

**CALLING SEQUENCE:**

.. index:: rtems_multiprocessing_announce

.. code:: c

    void rtems_multiprocessing_announce( void );

**DIRECTIVE STATUS CODES:**

NONE

**DESCRIPTION:**

This directive informs RTEMS that a multiprocessing
communications packet has arrived from another node.  This
directive is called by the user-provided MPCI, and is only used
in multiprocessor configurations.

**NOTES:**

This directive is typically called from an ISR.

This directive will almost certainly cause the
calling task to be preempted.

This directive does not generate activity on remote nodes.

.. COMMENT: COPYRIGHT (c) 2014.

.. COMMENT: On-Line Applications Research Corporation (OAR).

.. COMMENT: All rights reserved.