@c
@c  COPYRIGHT (c) 1996.
@c  On-Line Applications Research Corporation (OAR).
@c  All rights reserved.
@c
@c  $Id$
@c

@ifinfo
@node Multiprocessing Manager, Multiprocessing Manager Introduction, Configuring a System Sizing the RTEMS RAM Workspace, Top
@end ifinfo
@chapter Multiprocessing Manager
@ifinfo
@menu
* Multiprocessing Manager Introduction::
* Multiprocessing Manager Background::
* Multiprocessing Manager Multiprocessor Communications Interface Layer::
* Multiprocessing Manager Operations::
* Multiprocessing Manager Directives::
@end menu
@end ifinfo

@ifinfo
@node Multiprocessing Manager Introduction, Multiprocessing Manager Background, Multiprocessing Manager, Multiprocessing Manager
@end ifinfo
@section Introduction

In multiprocessor real-time systems, new
requirements, such as sharing data and global resources between
processors, are introduced.  This requires an efficient and
reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the
ramifications of multiple processors affect each and every
characteristic of a real-time system, almost always making them
more complicated.

RTEMS addresses these issues by providing simple and
flexible real-time multiprocessing capabilities.  The executive
easily lends itself to both tightly-coupled and loosely-coupled
configurations of the target system hardware.  In addition,
RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to
transcend the physical boundaries of the target hardware
configuration.  This goal is achieved by presenting the
application software with a logical view of the target system
where the boundaries between processor nodes are transparent.
As a result, the application developer may designate objects
such as tasks, queues, events, signals, semaphores, and memory
blocks as global objects.  These global objects may then be
accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines
that the object being accessed resides on another processor and
performs the actions required to access the desired object.
Simply stated, RTEMS allows the entire system, both hardware and
software, to be viewed logically as a single system.

@ifinfo
@node Multiprocessing Manager Background, Nodes, Multiprocessing Manager Introduction, Multiprocessing Manager
@end ifinfo
@section Background
@ifinfo
@menu
* Nodes::
* Global Objects::
* Global Object Table::
* Remote Operations::
* Proxies::
* Multiprocessor Configuration Table::
@end menu
@end ifinfo

RTEMS makes no assumptions regarding the connection
media or topology of a multiprocessor system.  The tasks which
compose a particular application can be spread among as many
processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset
of the RTEMS directives as if they were on the same processor.
These directives allow application tasks to exchange data,
communicate, and synchronize regardless of which processor they
reside upon.

The RTEMS multiprocessor execution model is multiple
instruction streams with multiple data streams (MIMD).  This
execution model has each of the processors executing code
independent of the other processors.  Because of this
parallelism, the application designer can more easily guarantee
deterministic behavior.

By supporting heterogeneous environments, RTEMS
allows the systems designer to select the most efficient
processor for each subsystem of the application.  Configuring
RTEMS for a heterogeneous environment is no more difficult than
for a homogeneous one.  In keeping with the RTEMS philosophy of
providing transparent physical node boundaries, the minimal
heterogeneous processing required is isolated in the MPCI layer.

@ifinfo
@node Nodes, Global Objects, Multiprocessing Manager Background, Multiprocessing Manager Background
@end ifinfo
@subsection Nodes

A processor in an RTEMS system is referred to as a
node.  Each node is assigned a unique non-zero node number by
the application designer.  RTEMS assumes that node numbers are
assigned consecutively from one to maximum_nodes.  The node
number, node, and the maximum number of nodes, maximum_nodes, in
a system are found in the Multiprocessor Configuration Table.
The maximum_nodes field and the number of global objects,
maximum_global_objects, are required to be the same on all nodes
in a system.

The node number is used by RTEMS to identify each
node when performing remote operations.  Thus, the
Multiprocessor Communications Interface Layer (MPCI) must be
able to route messages based on the node number.

@ifinfo
@node Global Objects, Global Object Table, Nodes, Multiprocessing Manager Background
@end ifinfo
@subsection Global Objects

All RTEMS objects which are created with the GLOBAL
attribute will be known on all other nodes.  Global objects can
be referenced from any node in the system, although certain
directive-specific restrictions (e.g. one cannot delete a remote
object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of
global objects in the system is user configurable and can be
found in the maximum_global_objects field in the Multiprocessor
Configuration Table.  The distribution of tasks to processors is
performed during the application design phase.  Dynamic task
relocation is not supported by RTEMS.

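For example, the following fragment is a minimal sketch of
creating a semaphore with the GLOBAL attribute; the name and
initial count are arbitrary, and error checking is omitted for
brevity:

@example
@group
rtems_status_code  status;
rtems_id           semaphore_id;

/* create a semaphore which will be known on all nodes */
status = rtems_semaphore_create(
  rtems_build_name( 'S', 'E', 'M', '1' ),  /* arbitrary name */
  1,                                       /* initial count */
  RTEMS_GLOBAL | RTEMS_PRIORITY,           /* global object */
  0,                                       /* ceiling (unused here) */
  &semaphore_id
);
@end group
@end example
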
@ifinfo
@node Global Object Table, Remote Operations, Global Objects, Multiprocessing Manager Background
@end ifinfo
@subsection Global Object Table

RTEMS maintains two tables containing object
information on every node in a multiprocessor system: a local
object table and a global object table.  The local object table
on each node is unique and contains information for all objects
created on this node whether those objects are local or global.
The global object table contains information regarding all
global objects in the system and, consequently, is the same on
every node.

Since each node must maintain an identical copy of
the global object table, the maximum number of entries in each
copy of the table must be the same.  The maximum number of
entries in each copy is determined by the
maximum_global_objects parameter in the Multiprocessor
Configuration Table.  This parameter, as well as the
maximum_nodes parameter, is required to be the same on all
nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion
of a global object.

@ifinfo
@node Remote Operations, Proxies, Global Object Table, Multiprocessing Manager Background
@end ifinfo
@subsection Remote Operations

When an application performs an operation on a remote
global object, RTEMS must generate a Remote Request (RQ) message
and send it to the appropriate node.  After completing the
requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.
Messages generated as a side-effect of a directive (such as
deleting a global task) are known as Remote Processes (RP) and
do not require the receiving node to respond.

Other than taking slightly longer to execute
directives on remote objects, the application is unaware of the
location of the objects it acts upon.  The exact amount of
overhead required for a remote operation is dependent on the
media connecting the nodes and, to a lesser degree, on the
efficiency of the user-provided MPCI routines.

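For instance, a task may operate on a global semaphore created
on another node exactly as it would on a local one.  The sketch
below assumes a semaphore named SEM1 was created elsewhere with
the GLOBAL attribute and omits error checking:

@example
@group
rtems_status_code  status;
rtems_id           semaphore_id;

/* locate the global semaphore, whichever node it lives on */
status = rtems_semaphore_ident(
  rtems_build_name( 'S', 'E', 'M', '1' ),
  RTEMS_SEARCH_ALL_NODES,
  &semaphore_id
);

/* if the semaphore is remote, RTEMS generates the RQ message */
status = rtems_semaphore_obtain(
  semaphore_id,
  RTEMS_WAIT,
  RTEMS_NO_TIMEOUT
);
@end group
@end example
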
The following outlines the typical transaction sequence
for an operation on a remote global object:

@enumerate

@item The application issues a directive accessing a
remote global object.

@item RTEMS determines the node on which the object
resides.

@item RTEMS calls the user-provided MPCI routine
GET_PACKET to obtain a packet in which to build an RQ message.

@item After building a message packet, RTEMS calls the
user-provided MPCI routine SEND_PACKET to transmit the packet to
the node on which the object resides (referred to as the
destination node).

@item The calling task is blocked until the RR message
arrives, and control of the processor is transferred to another
task.

@item The MPCI layer on the destination node senses the
arrival of a packet (commonly in an ISR), and calls the
multiprocessing_announce directive.  This directive readies the
Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided
MPCI routine RECEIVE_PACKET, performs the requested operation,
builds an RR message, and returns it to the originating node.

@item The MPCI layer on the originating node senses the
arrival of a packet (typically via an interrupt), and calls the
RTEMS multiprocessing_announce directive.  This directive
readies the Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided
MPCI routine RECEIVE_PACKET, readies the original requesting
task, and blocks until another packet arrives.  Control is
transferred to the original task which then completes processing
of the directive.

@end enumerate

If an uncorrectable error occurs in the user-provided
MPCI layer, the fatal error handler should be invoked.  RTEMS
assumes the reliable transmission and reception of messages by
the MPCI and makes no attempt to detect or correct errors.

@ifinfo
@node Proxies, Multiprocessor Configuration Table, Remote Operations, Multiprocessing Manager Background
@end ifinfo
@subsection Proxies

A proxy is an RTEMS data structure which resides on a
remote node and is used to represent a task which must block as
part of a remote operation.  This action can occur as part of the
semaphore_obtain and message_queue_receive directives.  If the
object were local, the task's control block would be available
for modification to indicate it was blocking on a message queue
or semaphore.  However, the task's control block resides only on
the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the
Multiprocessor Configuration Table.  Each node in a
multiprocessor system may require a different number of proxies
to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of
tasks.

@ifinfo
@node Multiprocessor Configuration Table, Multiprocessing Manager Multiprocessor Communications Interface Layer, Proxies, Multiprocessing Manager Background
@end ifinfo
@subsection Multiprocessor Configuration Table

The Multiprocessor Configuration Table contains
information needed by RTEMS when used in a multiprocessor
system.  This table is discussed in detail in the section
Multiprocessor Configuration Table of the Configuring a System
chapter.

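For orientation, the following sketch shows the fields this
chapter refers to; the values are placeholders, and the
authoritative layout is given in the Configuring a System
chapter:

@example
@group
rtems_multiprocessing_table Multiprocessing_configuration = @{
  1,                 /* node: this node's number */
  2,                 /* maximum_nodes in the system */
  32,                /* maximum_global_objects (same on all nodes) */
  8,                 /* maximum_proxies for this node */
  &user_mpci_table   /* the user's MPCI entry points */
@};
@end group
@end example
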
@ifinfo
@node Multiprocessing Manager Multiprocessor Communications Interface Layer, INITIALIZATION, Multiprocessor Configuration Table, Multiprocessing Manager
@end ifinfo
@section Multiprocessor Communications Interface Layer
@ifinfo
@menu
* INITIALIZATION::
* GET_PACKET::
* RETURN_PACKET::
* RECEIVE_PACKET::
* SEND_PACKET::
* Supporting Heterogeneous Environments::
@end menu
@end ifinfo

The Multiprocessor Communications Interface Layer
(MPCI) is a set of user-provided procedures which enable the
nodes in a multiprocessor system to communicate with one
another.  These routines are invoked by RTEMS at various times
in the preparation and processing of remote requests.
Interrupts are enabled when an MPCI procedure is invoked.  It is
assumed that if the execution mode and/or interrupt level are
altered by the MPCI layer, they will be restored prior to
returning to RTEMS.

The MPCI layer is responsible for managing a pool of
buffers called packets and for sending these packets between
system nodes.  Packet buffers contain the messages sent between
the nodes.  Typically, the MPCI layer will encapsulate the
packet within an envelope which contains the information needed
by the MPCI layer.  The number of packets available is dependent
on the MPCI layer implementation.

The entry points to the routines in the user's MPCI
layer should be placed in the Multiprocessor Communications
Interface Table.  The user must provide entry points for each of
the following table entries in a multiprocessor system, as
illustrated in the sketch after this list:

@itemize @bullet
@item initialization    initialize the MPCI
@item get_packet        obtain a packet buffer
@item return_packet     return a packet buffer
@item send_packet       send a packet to another node
@item receive_packet    called to get an arrived packet
@end itemize

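A minimal sketch of such a table follows; the timeout and
packet size values are hypothetical placeholders, and the field
layout is described in the Configuring a System chapter:

@example
@group
rtems_mpci_table user_mpci_table = @{
  MPCI_DEFAULT_TIMEOUT,      /* default timeout (placeholder) */
  MAX_PACKET_SIZE,           /* maximum packet size (placeholder) */
  user_mpci_initialization,  /* initialization entry */
  user_mpci_get_packet,      /* get_packet entry */
  user_mpci_return_packet,   /* return_packet entry */
  user_mpci_send_packet,     /* send_packet entry */
  user_mpci_receive_packet   /* receive_packet entry */
@};
@end group
@end example
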
A packet is sent by RTEMS in each of the following
situations:

@itemize @bullet
@item an RQ is generated on an originating node;
@item an RR is generated on a destination node;
@item a global object is created;
@item a global object is deleted;
@item a local task blocked on a remote object is deleted;
@item during system initialization to check for system consistency.
@end itemize

If the target hardware supports it, the arrival of a
packet at a node may generate an interrupt.  Otherwise, the
real-time clock ISR can check for the arrival of a packet.  In
any case, the multiprocessing_announce directive must be called
to announce the arrival of a packet.  After exiting the ISR,
control will be passed to the Multiprocessing Server to process
the packet.  The Multiprocessing Server will call the get_packet
entry to obtain a packet buffer and the receive_packet entry to
copy the message into the buffer obtained.

@ifinfo
@node INITIALIZATION, GET_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection INITIALIZATION

The INITIALIZATION component of the user-provided
MPCI layer is called as part of the initialize_executive
directive to initialize the MPCI layer and associated hardware.
It is invoked immediately after all of the device drivers have
been initialized.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_initialization(
  rtems_configuration_table *configuration
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Initialization (
   Configuration : in     RTEMS.Configuration_Table_Pointer
);
@end example
@end ifset

where configuration is the address of the user's
Configuration Table.  Operations on global objects cannot be
performed until this component is invoked.  The INITIALIZATION
component is invoked only once in the life of any system.  If
the MPCI layer cannot be successfully initialized, the fatal
error manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to
provide the executive with packet buffers.  The INITIALIZATION
routine must create and initialize a pool of packet buffers.
There must be enough packet buffers so RTEMS can obtain one
whenever needed.

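A skeletal INITIALIZATION routine might therefore look like
the following sketch, in which the hardware setup and pool
creation helpers are hypothetical:

@example
@group
rtems_mpci_entry user_mpci_initialization(
  rtems_configuration_table *configuration
)
@{
  /* bring up the interconnect hardware (hypothetical helper) */
  initialize_interconnect_hardware();

  /* build the pool of packet buffers (hypothetical helper) */
  initialize_packet_pool( NUMBER_OF_PACKETS );

  /* on an unrecoverable failure, invoke the fatal error manager */
@}
@end group
@end example
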
@ifinfo
@node GET_PACKET, RETURN_PACKET, INITIALIZATION, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection GET_PACKET

The GET_PACKET component of the user-provided MPCI
layer is called when RTEMS must obtain a packet buffer to send
or broadcast a message.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_get_packet(
  rtems_packet_prefix **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Get_Packet (
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a pointer to a packet.
This routine always succeeds and, upon return, packet will
contain the address of a packet.  If, for any reason, a packet
cannot be successfully obtained, then the fatal error manager
should be invoked.

RTEMS has been optimized to avoid the need for
obtaining a packet each time a message is sent or broadcast.
For example, RTEMS sends response messages (RR) back to the
originator in the same packet in which the request message (RQ)
arrived.

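A sketch of a GET_PACKET implementation which satisfies these
rules follows; the free pool routine and the fatal error code
are hypothetical:

@example
@group
rtems_mpci_entry user_mpci_get_packet(
  rtems_packet_prefix **packet
)
@{
  *packet = dequeue_free_packet();   /* hypothetical pool routine */

  if ( *packet == NULL )             /* pool exhausted: fatal */
    rtems_fatal_error_occurred( NO_PACKET_AVAILABLE );
@}
@end group
@end example
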
@ifinfo
@node RETURN_PACKET, RECEIVE_PACKET, GET_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection RETURN_PACKET

The RETURN_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to release a packet to the free
packet buffer pool.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_return_packet(
  rtems_packet_prefix *packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Return_Packet (
   Packet : in     RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a packet.  If the
packet cannot be successfully returned, the fatal error manager
should be invoked.

@ifinfo
@node RECEIVE_PACKET, SEND_PACKET, RETURN_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection RECEIVE_PACKET

The RECEIVE_PACKET component of the user-provided
MPCI layer is called when RTEMS needs to obtain a packet which
has previously arrived.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_receive_packet(
  rtems_packet_prefix **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Receive_Packet (
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a pointer in which to
place the address of a packet received from another node.  If a
message is available, then packet will contain the address of
the message from another node.  If no messages are available,
this entry should set packet to NULL.

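In other words, the routine behaves like the following sketch,
where the arrival queue routine is hypothetical and returns
NULL when no packet is waiting:

@example
@group
rtems_mpci_entry user_mpci_receive_packet(
  rtems_packet_prefix **packet
)
@{
  /* hand RTEMS the oldest arrived packet, or NULL if none */
  *packet = dequeue_arrived_packet();  /* hypothetical routine */
@}
@end group
@end example
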
@ifinfo
@node SEND_PACKET, Supporting Heterogeneous Environments, RECEIVE_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection SEND_PACKET

The SEND_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to send a packet containing a
message to another node.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_send_packet(
  rtems_unsigned32       node,
  rtems_packet_prefix  **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Send_Packet (
   Node   : in     RTEMS.Unsigned32;
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where node is the node number of the destination and packet is the
address of a packet containing a message.  If the packet cannot
be successfully sent, the fatal error manager should be invoked.

If node is set to zero, the packet is to be
broadcast to all other nodes in the system.  Although some
MPCI layers will be built upon hardware which supports a
broadcast mechanism, others may be required to generate a copy
of the packet for each node in the system.

Many MPCI layers use the packet_length field of the MP_packet_prefix
portion of the packet to avoid sending unnecessary data.  This is
especially useful if the media connecting the nodes is relatively slow.

The to_convert field of the MP_packet_prefix portion of the packet indicates
how much of the packet (in unsigned32's) may require conversion in a
heterogeneous system.

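A send routine for hardware without a broadcast mechanism
might follow the outline below; the transmit helper is
hypothetical, the packet_length field name follows the
discussion above, and Multiprocessing_configuration is the
configuration table sketched earlier:

@example
@group
rtems_mpci_entry user_mpci_send_packet(
  rtems_unsigned32       node,
  rtems_packet_prefix  **packet
)
@{
  rtems_unsigned32 destination;

  if ( node != 0 ) @{
    /* send only the valid portion of the packet */
    transmit_to_node( node, *packet, (*packet)->packet_length );
    return;
  @}

  /* node zero means broadcast: copy the packet to every other node */
  for ( destination = 1 ;
        destination <= Multiprocessing_configuration.maximum_nodes ;
        destination++ )
    if ( destination != Multiprocessing_configuration.node )
      transmit_to_node( destination, *packet, (*packet)->packet_length );
@}
@end group
@end example
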
@ifinfo
@node Supporting Heterogeneous Environments, Multiprocessing Manager Operations, SEND_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection Supporting Heterogeneous Environments

Developing an MPCI layer for a heterogeneous system
requires a thorough understanding of the differences between the
processors which comprise the system.  One difficult problem is
the varying data representation schemes used by different
processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.
Processors which place the least significant byte at the
smallest address are classified as little endian processors.
Little endian byte-ordering is shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Conversely, processors which place the most
significant byte at the smallest address are classified as big
endian processors.  Big endian byte-ordering is shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Unfortunately, sharing a data structure between big
endian and little endian processors requires translation into a
common endian format.  An application designer typically chooses
the common endian format to minimize conversion overhead.

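For example, a thirty-two bit quantity can be converted from
one byte ordering to the other with a routine such as this:

@example
@group
/* reverse the byte order of a thirty-two bit quantity */
rtems_unsigned32 swap_unsigned32(
  rtems_unsigned32 value
)
@{
  return ( ( value >> 24 ) & 0x000000ff ) |
         ( ( value >>  8 ) & 0x0000ff00 ) |
         ( ( value <<  8 ) & 0x00ff0000 ) |
         ( ( value << 24 ) & 0xff000000 );
@}
@end group
@end example
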
Another issue in the design of shared data structures
is the alignment of data structure elements.  Alignment is both
processor and compiler implementation dependent.  For example,
some processors allow data elements to begin on any address
boundary, while others impose restrictions.  Common restrictions
are that data elements must begin on either an even address or
on a long word boundary.  Violation of these restrictions may
cause an exception or impose a performance penalty.

Other issues which commonly impact the design of
shared data structures include the representation of floating
point numbers, bit fields, decimal data, and character strings.
In addition, the representation method for negative integers
could be one's or two's complement.  These factors combine to
increase the complexity of designing and manipulating data
structures shared between processors.

RTEMS addressed these issues in the design of the
packets used to communicate between nodes.  The RTEMS packet
format is designed to allow the MPCI layer to perform all
necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer
must be aware of the following:

@itemize @bullet
@item All packets must begin on a four byte boundary.

@item Packets are composed of both RTEMS and application data.
All RTEMS data is treated as thirty-two (32) bit unsigned
quantities and is in the first @code{MINIMUM_UNSIGNED32S_TO_CONVERT}
thirty-two (32) bit quantities of the packet.

@item The RTEMS data component of the packet must be in native
endian format.  Endian conversion may be performed by either the
sending or receiving MPCI layer, as sketched after this list.

@item RTEMS makes no assumptions regarding the application
data component of the packet.
@end itemize

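As an illustration, an MPCI layer receiving a packet from a
node of opposite byte order could convert the RTEMS data
component with a loop like the following, reusing the swap
routine shown earlier:

@example
@group
/* convert the RTEMS data component of an incoming packet;
   assumes the sender's byte order is opposite to this node's */
void convert_rtems_data(
  rtems_packet_prefix *packet
)
@{
  rtems_unsigned32 *word = (rtems_unsigned32 *) packet;
  rtems_unsigned32  index;

  for ( index = 0 ; index < MINIMUM_UNSIGNED32S_TO_CONVERT ; index++ )
    word[ index ] = swap_unsigned32( word[ index ] );
@}
@end group
@end example
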
@ifinfo
@node Multiprocessing Manager Operations, Announcing a Packet, Supporting Heterogeneous Environments, Multiprocessing Manager
@end ifinfo
@section Operations
@ifinfo
@menu
* Announcing a Packet::
@end menu
@end ifinfo

@ifinfo
@node Announcing a Packet, Multiprocessing Manager Directives, Multiprocessing Manager Operations, Multiprocessing Manager Operations
@end ifinfo
@subsection Announcing a Packet

The multiprocessing_announce directive is called by
the MPCI layer to inform RTEMS that a packet has arrived from
another node.  This directive can be called from an interrupt
service routine or from within a polling routine.

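For example, an interrupt-driven MPCI layer might announce
arrivals with an ISR as simple as the following sketch, in
which the hardware acknowledgement is hypothetical:

@example
@group
rtems_isr packet_arrival_isr(
  rtems_vector_number vector
)
@{
  acknowledge_packet_interrupt();   /* hypothetical hardware ack */

  /* ready the Multiprocessing Server to process the packet */
  rtems_multiprocessing_announce();
@}
@end group
@end example
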
@ifinfo
@node Multiprocessing Manager Directives, MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet, Announcing a Packet, Multiprocessing Manager
@end ifinfo
@section Directives
@ifinfo
@menu
* MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet::
@end menu
@end ifinfo

This section details the additional directives
required to support RTEMS in a multiprocessor configuration.  A
subsection is dedicated to each of this manager's directives and
describes the calling sequence, related constants, usage, and
status codes.

@page
@ifinfo
@node MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet, Directive Status Codes, Multiprocessing Manager Directives, Multiprocessing Manager Directives
@end ifinfo
@subsection MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet

@subheading CALLING SEQUENCE:

@ifset is-C
@example
void rtems_multiprocessing_announce( void );
@end example
@end ifset

@ifset is-Ada
@example
procedure Multiprocessing_Announce;
@end example
@end ifset

@subheading DIRECTIVE STATUS CODES:

NONE

@subheading DESCRIPTION:

This directive informs RTEMS that a multiprocessing
communications packet has arrived from another node.  This
directive is called by the user-provided MPCI, and is only used
in multiprocessor configurations.

@subheading NOTES:

This directive is typically called from an ISR.

This directive will almost certainly cause the
calling task to be preempted.

This directive does not generate activity on remote nodes.