@c
@c  COPYRIGHT (c) 1996.
@c  On-Line Applications Research Corporation (OAR).
@c  All rights reserved.
@c

@ifinfo
@node Multiprocessing Manager, Multiprocessing Manager Introduction, Configuring a System Sizing the RTEMS RAM Workspace, Top
@end ifinfo
@chapter Multiprocessing Manager
@ifinfo
@menu
* Multiprocessing Manager Introduction::
* Multiprocessing Manager Background::
* Multiprocessing Manager Multiprocessor Communications Interface Layer::
* Multiprocessing Manager Operations::
* Multiprocessing Manager Directives::
@end menu
@end ifinfo

@ifinfo
@node Multiprocessing Manager Introduction, Multiprocessing Manager Background, Multiprocessing Manager, Multiprocessing Manager
@end ifinfo
@section Introduction

Multiprocessor real-time systems introduce new
requirements, such as sharing data and global resources between
processors.  Meeting these requirements demands an efficient and
reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the
ramifications of multiple processors affect each and every
characteristic of a real-time system, almost always making them
more complicated.

RTEMS addresses these issues by providing simple and
flexible real-time multiprocessing capabilities.  The executive
easily lends itself to both tightly-coupled and loosely-coupled
configurations of the target system hardware.  In addition,
RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to
transcend the physical boundaries of the target hardware
configuration.  This goal is achieved by presenting the
application software with a logical view of the target system
where the boundaries between processor nodes are transparent.
As a result, the application developer may designate objects
such as tasks, queues, events, signals, semaphores, and memory
blocks as global objects.  These global objects may then be
accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines
that the object being accessed resides on another processor and
performs the actions required to access the desired object.
Simply stated, RTEMS allows the entire system, both hardware and
software, to be viewed logically as a single system.

@ifinfo
@node Multiprocessing Manager Background, Nodes, Multiprocessing Manager Introduction, Multiprocessing Manager
@end ifinfo
@section Background
@ifinfo
@menu
* Nodes::
* Global Objects::
* Global Object Table::
* Remote Operations::
* Proxies::
* Multiprocessor Configuration Table::
@end menu
@end ifinfo

RTEMS makes no assumptions regarding the connection
media or topology of a multiprocessor system.  The tasks which
compose a particular application can be spread among as many
processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset
of the RTEMS directives as if they were on the same processor.
These directives allow application tasks to exchange data,
communicate, and synchronize regardless of which processor they
reside upon.

The RTEMS multiprocessor execution model is multiple
instruction streams with multiple data streams (MIMD).  In this
execution model, each processor executes code independently of
the other processors.  Because of this parallelism, the
application designer can more easily guarantee deterministic
behavior.

By supporting heterogeneous environments, RTEMS
allows the systems designer to select the most efficient
processor for each subsystem of the application.  Configuring
RTEMS for a heterogeneous environment is no more difficult than
for a homogeneous one.  In keeping with the RTEMS philosophy of
providing transparent physical node boundaries, the minimal
heterogeneous processing required is isolated in the MPCI layer.

@ifinfo
@node Nodes, Global Objects, Multiprocessing Manager Background, Multiprocessing Manager Background
@end ifinfo
@subsection Nodes

A processor in an RTEMS system is referred to as a
node.  Each node is assigned a unique non-zero node number by
the application designer.  RTEMS assumes that node numbers are
assigned consecutively from one to maximum_nodes.  The node
number, node, and the maximum number of nodes, maximum_nodes, in
a system are found in the Multiprocessor Configuration Table.
The maximum_nodes field and the maximum number of global
objects, maximum_global_objects, are required to be the same on
all nodes in a system.

The node number is used by RTEMS to identify each
node when performing remote operations.  Thus, the
Multiprocessor Communications Interface Layer (MPCI) must be
able to route messages based on the node number.

@ifinfo
@node Global Objects, Global Object Table, Nodes, Multiprocessing Manager Background
@end ifinfo
@subsection Global Objects

All RTEMS objects which are created with the GLOBAL
attribute will be known on all other nodes.  Global objects can
be referenced from any node in the system, although certain
directive-specific restrictions (e.g. one cannot delete a remote
object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of
global objects in the system is user configurable and can be
found in the maximum_global_objects field in the Multiprocessor
Configuration Table.  The distribution of tasks to processors is
performed during the application design phase.  Dynamic task
relocation is not supported by RTEMS.
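
For example, a GLOBAL semaphore created on one node can be
identified and obtained from any other node.  The following C
sketch illustrates this; the name, initial count, and omitted
error handling are illustrative only.

@example
@group
rtems_name        name = rtems_build_name( 'S', 'E', 'M', '1' );
rtems_id          id;
rtems_status_code status;

/* On the creating node: the GLOBAL attribute makes this
   semaphore visible to every node in the system. */
status = rtems_semaphore_create( name, 1, RTEMS_GLOBAL, 0, &id );

/* On any other node: locate the remote semaphore by name and
   operate on it exactly as if it were local. */
status = rtems_semaphore_ident( name, RTEMS_SEARCH_ALL_NODES, &id );
status = rtems_semaphore_obtain( id, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
@end group
@end example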

@ifinfo
@node Global Object Table, Remote Operations, Global Objects, Multiprocessing Manager Background
@end ifinfo
@subsection Global Object Table

RTEMS maintains two tables containing object
information on every node in a multiprocessor system: a local
object table and a global object table.  The local object table
on each node is unique and contains information for all objects
created on this node whether those objects are local or global.
The global object table contains information regarding all
global objects in the system and, consequently, is the same on
every node.

Since each node must maintain an identical copy of
the global object table, the maximum number of entries in each
copy of the table must be the same.  The maximum number of
entries in each copy is determined by the
maximum_global_objects parameter in the Multiprocessor
Configuration Table.  This parameter, as well as the
maximum_nodes parameter, is required to be the same on all
nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion
of a global object.

@ifinfo
@node Remote Operations, Proxies, Global Object Table, Multiprocessing Manager Background
@end ifinfo
@subsection Remote Operations

When an application performs an operation on a remote
global object, RTEMS must generate a Remote Request (RQ) message
and send it to the appropriate node.  After completing the
requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.
Messages generated as a side-effect of a directive (such as
deleting a global task) are known as Remote Processes (RP) and
do not require the receiving node to respond.

Other than taking slightly longer to execute
directives on remote objects, the application is unaware of the
location of the objects it acts upon.  The exact amount of
overhead required for a remote operation is dependent on the
media connecting the nodes and, to a lesser degree, on the
efficiency of the user-provided MPCI routines.

The following shows the typical transaction sequence
for a remote operation:

@enumerate

@item The application issues a directive accessing a
remote global object.

@item RTEMS determines the node on which the object
resides.

@item RTEMS calls the user-provided MPCI routine
GET_PACKET to obtain a packet in which to build an RQ message.

@item After building a message packet, RTEMS calls the
user-provided MPCI routine SEND_PACKET to transmit the packet to
the node on which the object resides (referred to as the
destination node).

@item The calling task is blocked until the RR message
arrives, and control of the processor is transferred to another
task.

@item The MPCI layer on the destination node senses the
arrival of a packet (commonly in an ISR), and calls the
multiprocessing_announce directive.  This directive readies the
Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided
MPCI routine RECEIVE_PACKET, performs the requested operation,
builds an RR message, and returns it to the originating node.

@item The MPCI layer on the originating node senses the
arrival of a packet (typically via an interrupt), and calls the
RTEMS multiprocessing_announce directive.  This directive
readies the Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided
MPCI routine RECEIVE_PACKET, readies the original requesting
task, and blocks until another packet arrives.  Control is
transferred to the original task which then completes processing
of the directive.

@end enumerate

If an uncorrectable error occurs in the user-provided
MPCI layer, the fatal error handler should be invoked.  RTEMS
assumes the reliable transmission and reception of messages by
the MPCI and makes no attempt to detect or correct errors.

@ifinfo
@node Proxies, Multiprocessor Configuration Table, Remote Operations, Multiprocessing Manager Background
@end ifinfo
@subsection Proxies

A proxy is an RTEMS data structure which resides on a
remote node and is used to represent a task which must block as
part of a remote operation.  This action can occur as part of the
semaphore_obtain and message_queue_receive directives.  If the
object were local, the task's control block would be available
for modification to indicate it was blocking on a message queue
or semaphore.  However, the task's control block resides only on
the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the
Multiprocessor Configuration Table.  Each node in a
multiprocessor system may require a different number of proxies
to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of
tasks.

@ifinfo
@node Multiprocessor Configuration Table, Multiprocessing Manager Multiprocessor Communications Interface Layer, Proxies, Multiprocessing Manager Background
@end ifinfo
@subsection Multiprocessor Configuration Table

The Multiprocessor Configuration Table contains
information needed by RTEMS when used in a multiprocessor
system.  This table is discussed in detail in the section
Multiprocessor Configuration Table of the Configuring a System
chapter.
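
As a preview, this table is normally defined as a C structure
along the following lines; the exact field layout is specified in
that chapter, and the values shown here are purely illustrative.

@example
@group
rtems_multiprocessing_table Multiprocessing_configuration = {
  1,                /* node: this processor's unique node number  */
  2,                /* maximum_nodes: same value on every node    */
  32,               /* maximum_global_objects: same on every node */
  32,               /* maximum_proxies: may differ per node       */
  &user_mpci_table  /* pointer to the user's MPCI table           */
};
@end group
@end example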

@ifinfo
@node Multiprocessing Manager Multiprocessor Communications Interface Layer, INITIALIZATION, Multiprocessor Configuration Table, Multiprocessing Manager
@end ifinfo
@section Multiprocessor Communications Interface Layer
@ifinfo
@menu
* INITIALIZATION::
* GET_PACKET::
* RETURN_PACKET::
* RECEIVE_PACKET::
* SEND_PACKET::
* Supporting Heterogeneous Environments::
@end menu
@end ifinfo

The Multiprocessor Communications Interface Layer
(MPCI) is a set of user-provided procedures which enable the
nodes in a multiprocessor system to communicate with one
another.  These routines are invoked by RTEMS at various times
in the preparation and processing of remote requests.
Interrupts are enabled when an MPCI procedure is invoked.  It is
assumed that if the execution mode and/or interrupt level are
altered by the MPCI layer, they will be restored prior to
returning to RTEMS.

The MPCI layer is responsible for managing a pool of
buffers called packets and for sending these packets between
system nodes.  Packet buffers contain the messages sent between
the nodes.  Typically, the MPCI layer will encapsulate the
packet within an envelope which contains the information needed
by the MPCI layer.  The number of packets available is dependent
on the MPCI layer implementation.

The entry points to the routines in the user's MPCI
layer should be placed in the Multiprocessor Communications
Interface Table.  The user must provide entry points for each of
the following table entries in a multiprocessor system:

@itemize @bullet
@item initialization    initialize the MPCI
@item get_packet        obtain a packet buffer
@item return_packet     return a packet buffer
@item send_packet       send a packet to another node
@item receive_packet    obtain an arrived packet
@end itemize
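
As an illustration, these entry points might be collected into
the table as follows in C.  The exact field layout is given in
the Configuring a System chapter; MAX_PACKET_SIZE is an
application-defined placeholder.

@example
@group
rtems_mpci_table user_mpci_table = {
  RTEMS_NO_TIMEOUT,          /* default_timeout for remote operations */
  MAX_PACKET_SIZE,           /* maximum_packet_size (placeholder)     */
  user_mpci_initialization,  /* initialization entry                  */
  user_mpci_get_packet,      /* get_packet entry                      */
  user_mpci_return_packet,   /* return_packet entry                   */
  user_mpci_send_packet,     /* send_packet entry                     */
  user_mpci_receive_packet   /* receive_packet entry                  */
};
@end group
@end example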

A packet is sent by RTEMS in each of the following
situations:

@itemize @bullet
@item an RQ is generated on an originating node;
@item an RR is generated on a destination node;
@item a global object is created;
@item a global object is deleted;
@item a local task blocked on a remote object is deleted;
@item during system initialization to check for system consistency.
@end itemize

If the target hardware supports it, the arrival of a
packet at a node may generate an interrupt.  Otherwise, the
real-time clock ISR can check for the arrival of a packet.  In
any case, the multiprocessing_announce directive must be called
to announce the arrival of a packet.  After exiting the ISR,
control will be passed to the Multiprocessing Server to process
the packet.  The Multiprocessing Server will call the get_packet
entry to obtain a packet buffer and the receive_packet entry to
copy the message into the buffer obtained.

@ifinfo
@node INITIALIZATION, GET_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection INITIALIZATION

The INITIALIZATION component of the user-provided
MPCI layer is called as part of the initialize_executive
directive to initialize the MPCI layer and associated hardware.
It is invoked immediately after all of the device drivers have
been initialized.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_initialization(
  rtems_configuration_table *configuration
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Initialization (
   Configuration : in     RTEMS.Configuration_Table_Pointer
);
@end example
@end ifset

where configuration is the address of the user's
Configuration Table.  Operations on global objects cannot be
performed until this component is invoked.  The INITIALIZATION
component is invoked only once in the life of any system.  If
the MPCI layer cannot be successfully initialized, the fatal
error manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to
provide the executive with packet buffers.  The INITIALIZATION
routine must create and initialize a pool of packet buffers.
There must be enough packet buffers so RTEMS can obtain one
whenever needed.
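
A minimal sketch of such a routine appears below;
hardware_initialize() and build_packet_pool() are hypothetical
helpers standing in for whatever the target media requires.

@example
@group
#define NUMBER_OF_PACKETS 8            /* illustrative pool size */

rtems_mpci_entry user_mpci_initialization(
  rtems_configuration_table *configuration
)
{
  /* Prepare the communications hardware; if the MPCI cannot
     be brought up, invoke the fatal error manager. */
  if ( !hardware_initialize() )           /* hypothetical */
    rtems_fatal_error_occurred( 0x4D504349 );

  /* Create and initialize the pool of packet buffers from
     which get_packet will later allocate. */
  build_packet_pool( NUMBER_OF_PACKETS ); /* hypothetical */
}
@end group
@end example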

@ifinfo
@node GET_PACKET, RETURN_PACKET, INITIALIZATION, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection GET_PACKET

The GET_PACKET component of the user-provided MPCI
layer is called when RTEMS must obtain a packet buffer to send
or broadcast a message.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_get_packet(
  rtems_packet_prefix **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Get_Packet (
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a pointer to a packet.
This routine always succeeds and, upon return, packet will
contain the address of a packet.  If, for any reason, a packet
cannot be successfully obtained, then the fatal error manager
should be invoked.

RTEMS has been optimized to avoid the need for
obtaining a packet each time a message is sent or broadcast.
For example, RTEMS sends response messages (RR) back to the
originator in the same packet in which the request message (RQ)
arrived.
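
A minimal sketch of this entry follows, assuming the packet pool
built by the INITIALIZATION component; take_from_free_list() is
a hypothetical helper.

@example
@group
rtems_mpci_entry user_mpci_get_packet(
  rtems_packet_prefix **packet
)
{
  /* Allocate a buffer from the pool created at initialization. */
  *packet = take_from_free_list();        /* hypothetical */

  /* An exhausted pool is unrecoverable: this entry must either
     succeed or invoke the fatal error manager. */
  if ( *packet == NULL )
    rtems_fatal_error_occurred( 0x4D504349 );
}
@end group
@end example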

@ifinfo
@node RETURN_PACKET, RECEIVE_PACKET, GET_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection RETURN_PACKET

The RETURN_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to release a packet to the free
packet buffer pool.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_return_packet(
  rtems_packet_prefix *packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Return_Packet (
   Packet : in     RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a packet.  If the
packet cannot be successfully returned, the fatal error manager
should be invoked.
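
A matching sketch of this entry, again assuming the hypothetical
free list helpers used above:

@example
@group
rtems_mpci_entry user_mpci_return_packet(
  rtems_packet_prefix *packet
)
{
  /* Return the buffer to the pool so that a subsequent
     get_packet call can reuse it. */
  add_to_free_list( packet );             /* hypothetical */
}
@end group
@end example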

@ifinfo
@node RECEIVE_PACKET, SEND_PACKET, RETURN_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection RECEIVE_PACKET

The RECEIVE_PACKET component of the user-provided
MPCI layer is called when RTEMS needs to obtain a packet which
has previously arrived.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_receive_packet(
  rtems_packet_prefix **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Receive_Packet (
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where packet is the address of a pointer in which this
routine places the address of an arrived packet.  If a message
is available, then packet will contain the address of the
message from another node.  If no messages are available, then
packet should be set to NULL.
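
A sketch of this entry is shown below; dequeue_arrived_packet()
is a hypothetical helper fed by the ISR or polling routine which
detects packet arrival.

@example
@group
rtems_mpci_entry user_mpci_receive_packet(
  rtems_packet_prefix **packet
)
{
  /* Hand RTEMS the next packet which has arrived from another
     node, or NULL if none is currently pending. */
  *packet = dequeue_arrived_packet();     /* hypothetical */
}
@end group
@end example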

@ifinfo
@node SEND_PACKET, Supporting Heterogeneous Environments, RECEIVE_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection SEND_PACKET

The SEND_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to send a packet containing a
message to another node.  This component should adhere to the
following prototype:

@ifset is-C
@example
@group
rtems_mpci_entry user_mpci_send_packet(
  rtems_unsigned32       node,
  rtems_packet_prefix  **packet
);
@end group
@end example
@end ifset

@ifset is-Ada
@example
procedure User_MPCI_Send_Packet (
   Node   : in     RTEMS.Unsigned32;
   Packet : access RTEMS.Packet_Prefix_Pointer
);
@end example
@end ifset

where node is the node number of the destination and packet is the
address of a packet which contains a message.  If the packet cannot
be successfully sent, the fatal error manager should be invoked.

If node is set to zero, the packet is to be
broadcast to all other nodes in the system.  Although some
MPCI layers will be built upon hardware which supports a
broadcast mechanism, others may be required to generate a copy
of the packet for each node in the system.
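
The broadcast case might be handled as in the sketch below;
transmit_packet(), local_node, and maximum_nodes are hypothetical
stand-ins for media-specific code and configuration values.

@example
@group
rtems_mpci_entry user_mpci_send_packet(
  rtems_unsigned32       node,
  rtems_packet_prefix  **packet
)
{
  rtems_unsigned32 destination;

  if ( node != 0 ) {
    transmit_packet( node, *packet );     /* hypothetical */
    return;
  }

  /* node == 0: broadcast.  Without a hardware broadcast
     mechanism, transmit a copy to every other node in turn. */
  for ( destination = 1; destination <= maximum_nodes; destination++ )
    if ( destination != local_node )
      transmit_packet( destination, *packet );
}
@end group
@end example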

Many MPCI layers use the packet_length field of the
MP_packet_prefix portion of the packet to avoid sending
unnecessary data.  This is especially useful if the media
connecting the nodes is relatively slow.

The to_convert field of the MP_packet_prefix portion of the packet
indicates how much of the packet (in unsigned32's) may require
conversion in a heterogeneous system.
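
For example, a sending MPCI layer in a mixed-endian system might
byte-swap only the RTEMS portion of an outgoing packet, as in the
sketch below.  byte_swap_u32() is a hypothetical helper, and the
conversion is shown on the sending side, where to_convert is
still in native byte order.

@example
@group
void convert_outgoing_packet( rtems_packet_prefix *the_packet )
{
  /* Capture to_convert before the word containing it is
     itself byte-swapped. */
  rtems_unsigned32  limit = the_packet->to_convert;
  rtems_unsigned32  index;
  rtems_unsigned32 *word  = (rtems_unsigned32 *) the_packet;

  /* Only the first to_convert 32-bit words hold RTEMS data
     which may need conversion; the application data which
     follows is passed through untouched. */
  for ( index = 0; index < limit; index++ )
    word[ index ] = byte_swap_u32( word[ index ] ); /* hypothetical */
}
@end group
@end example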

@ifinfo
@node Supporting Heterogeneous Environments, Multiprocessing Manager Operations, SEND_PACKET, Multiprocessing Manager Multiprocessor Communications Interface Layer
@end ifinfo
@subsection Supporting Heterogeneous Environments

Developing an MPCI layer for a heterogeneous system
requires a thorough understanding of the differences between the
processors which comprise the system.  One difficult problem is
the varying data representation schemes used by different
processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.
Processors which place the least significant byte at the
smallest address are classified as little endian processors.
Little endian byte-ordering is shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Conversely, processors which place the most
significant byte at the smallest address are classified as big
endian processors.  Big endian byte-ordering is shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Unfortunately, sharing a data structure between big
endian and little endian processors requires translation into a
common endian format.  An application designer typically chooses
the common endian format to minimize conversion overhead.

Another issue in the design of shared data structures
is the alignment of data structure elements.  Alignment is both
processor and compiler implementation dependent.  For example,
some processors allow data elements to begin on any address
boundary, while others impose restrictions.  Common restrictions
are that data elements must begin on either an even address or
on a long word boundary.  Violation of these restrictions may
cause an exception or impose a performance penalty.

Other issues which commonly impact the design of
shared data structures include the representation of floating
point numbers, bit fields, decimal data, and character strings.
In addition, the representation method for negative integers
could be one's or two's complement.  These factors combine to
increase the complexity of designing and manipulating data
structures shared between processors.

RTEMS addresses these issues in the design of the
packets used to communicate between nodes.  The RTEMS packet
format is designed to allow the MPCI layer to perform all
necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer
must be aware of the following:

@itemize @bullet
@item All packets must begin on a four byte boundary.

@item Packets are composed of both RTEMS and application data.
All RTEMS data is treated as thirty-two (32) bit unsigned
quantities and is in the first MINIMUM_UNSIGNED32S_TO_CONVERT
thirty-two (32) bit quantities of the packet.

@item The RTEMS data component of the packet must be in native
endian format.  Endian conversion may be performed by either the
sending or receiving MPCI layer.

@item RTEMS makes no assumptions regarding the application
data component of the packet.
@end itemize

@ifinfo
@node Multiprocessing Manager Operations, Announcing a Packet, Supporting Heterogeneous Environments, Multiprocessing Manager
@end ifinfo
@section Operations
@ifinfo
@menu
* Announcing a Packet::
@end menu
@end ifinfo

@ifinfo
@node Announcing a Packet, Multiprocessing Manager Directives, Multiprocessing Manager Operations, Multiprocessing Manager Operations
@end ifinfo
@subsection Announcing a Packet

The multiprocessing_announce directive is called by
the MPCI layer to inform RTEMS that a packet has arrived from
another node.  This directive can be called from an interrupt
service routine or from within a polling routine.
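
For instance, an interrupt-driven MPCI layer might announce
arrivals with an ISR similar to this hedged sketch, where
acknowledge_packet_interrupt() stands in for the
hardware-specific acknowledgement.

@example
@group
rtems_isr packet_arrival_isr(
  rtems_vector_number vector
)
{
  /* Clear the interrupt at the device, then tell RTEMS a
     packet is waiting.  The Multiprocessing Server, not this
     ISR, will actually receive and process the packet. */
  acknowledge_packet_interrupt();         /* hypothetical */

  rtems_multiprocessing_announce();
}
@end group
@end example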

@ifinfo
@node Multiprocessing Manager Directives, MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet, Announcing a Packet, Multiprocessing Manager
@end ifinfo
@section Directives
@ifinfo
@menu
* MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet::
@end menu
@end ifinfo

This section details the additional directives
required to support RTEMS in a multiprocessor configuration.  A
subsection is dedicated to each of this manager's directives and
describes the calling sequence, related constants, usage, and
status codes.

@page
@ifinfo
@node MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet, Directive Status Codes, Multiprocessing Manager Directives, Multiprocessing Manager Directives
@end ifinfo
@subsection MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet

@subheading CALLING SEQUENCE:

@ifset is-C
@example
void rtems_multiprocessing_announce( void );
@end example
@end ifset

@ifset is-Ada
@example
procedure Multiprocessing_Announce;
@end example
@end ifset

@subheading DIRECTIVE STATUS CODES:

NONE

@subheading DESCRIPTION:

This directive informs RTEMS that a multiprocessing
communications packet has arrived from another node.  This
directive is called by the user-provided MPCI, and is only used
in multiprocessor configurations.

@subheading NOTES:

This directive is typically called from an ISR.

This directive will almost certainly cause the
calling task to be preempted.

This directive does not generate activity on remote nodes.