@c
@c  COPYRIGHT (c) 1988-2007.
@c  On-Line Applications Research Corporation (OAR).
@c  All rights reserved.
@c
@c  $Id$
@c

@chapter Multiprocessing Manager

@cindex multiprocessing

@section Introduction

In multiprocessor real-time systems, new
requirements, such as sharing data and global resources between
processors, are introduced.  This requires an efficient and
reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the
ramifications of multiple processors affect each and every
characteristic of a real-time system, almost always making them
more complicated.

RTEMS addresses these issues by providing simple and
flexible real-time multiprocessing capabilities.  The executive
easily lends itself to both tightly-coupled and loosely-coupled
configurations of the target system hardware.  In addition,
RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to
transcend the physical boundaries of the target hardware
configuration.  This goal is achieved by presenting the
application software with a logical view of the target system
where the boundaries between processor nodes are transparent.
As a result, the application developer may designate objects
such as tasks, queues, events, signals, semaphores, and memory
blocks as global objects.  These global objects may then be
accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines
that the object being accessed resides on another processor and
performs the actions required to access the desired object.
Simply stated, RTEMS allows the entire system, both hardware and
software, to be viewed logically as a single system.

@ifset is-Ada
Multiprocessing operations are transparent at the application level.
Operations on remote objects are implicitly processed as remote
procedure calls.  Although remote operations on objects are supported
from Ada tasks, the calls used to support the multiprocessing
communications should be implemented in C and are not supported
in the Ada binding.  Since there is no Ada binding for RTEMS
multiprocessing support services, all examples and data structures
shown in this chapter are in C.
@end ifset

@section Background

@cindex multiprocessing topologies

RTEMS makes no assumptions regarding the connection
media or topology of a multiprocessor system.  The tasks which
compose a particular application can be spread among as many
processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset
of the RTEMS directives as if they were on the same processor.
These directives allow application tasks to exchange data,
communicate, and synchronize regardless of which processor they
reside upon.

The RTEMS multiprocessor execution model is multiple
instruction streams with multiple data streams (MIMD).  This
execution model has each of the processors executing code
independent of the other processors.  Because of this
parallelism, the application designer can more easily guarantee
deterministic behavior.

By supporting heterogeneous environments, RTEMS
allows the systems designer to select the most efficient
processor for each subsystem of the application.  Configuring
RTEMS for a heterogeneous environment is no more difficult than
for a homogeneous one.  In keeping with the RTEMS philosophy of
providing transparent physical node boundaries, the minimal
heterogeneous processing required is isolated in the MPCI layer.

@subsection Nodes

@cindex nodes, definition

A processor in an RTEMS system is referred to as a
node.  Each node is assigned a unique non-zero node number by
the application designer.  RTEMS assumes that node numbers are
assigned consecutively from one to the @code{maximum_nodes}
configuration parameter.  The node
number, @code{node}, and the maximum number of nodes,
@code{maximum_nodes}, in a system are found in the
Multiprocessor Configuration Table.  The @code{maximum_nodes}
field and the number of global objects,
@code{maximum_global_objects}, are required to be the same on
all nodes in a system.

The node number is used by RTEMS to identify each
node when performing remote operations.  Thus, the
Multiprocessor Communications Interface Layer (MPCI) must be
able to route messages based on the node number.

@subsection Global Objects

@cindex global objects, definition

All RTEMS objects which are created with the GLOBAL
attribute will be known on all other nodes.  Global objects can
be referenced from any node in the system, although certain
directive-specific restrictions (e.g. one cannot delete a remote
object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of
global objects in the system is user configurable and can be
found in the @code{maximum_global_objects} field in the Multiprocessor
Configuration Table.  The distribution of tasks to processors is
performed during the application design phase.  Dynamic task
relocation is not supported by RTEMS.
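
For example, a semaphore intended for system-wide use is created
with the GLOBAL attribute just like a local one.  The following
sketch is illustrative only; the name, initial count, and attribute
combination are arbitrary:

@example
@group
rtems_id          sem_id;
rtems_status_code status;

status = rtems_semaphore_create(
  rtems_build_name( 'G', 'S', 'E', 'M' ),   /* arbitrary name */
  1,                                        /* initial count */
  RTEMS_GLOBAL | RTEMS_PRIORITY,            /* global object */
  0,                                        /* no priority ceiling */
  &sem_id
);
@end group
@end example

A task on any other node may then locate this semaphore with
@code{rtems_semaphore_ident} using @code{RTEMS_SEARCH_ALL_NODES}
and operate on it as if it were local.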

@subsection Global Object Table

@cindex global objects table

RTEMS maintains two tables containing object
information on every node in a multiprocessor system: a local
object table and a global object table.  The local object table
on each node is unique and contains information for all objects
created on this node whether those objects are local or global.
The global object table contains information regarding all
global objects in the system and, consequently, is the same on
every node.

Since each node must maintain an identical copy of
the global object table, the maximum number of entries in each
copy of the table must be the same.  The maximum number of
entries in each copy is determined by the
@code{maximum_global_objects} parameter in the Multiprocessor
Configuration Table.  This parameter, as well as the
@code{maximum_nodes} parameter, is required to be the same on all
nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion
of a global object.


@subsection Remote Operations

@cindex MPCI and remote operations

When an application performs an operation on a remote
global object, RTEMS must generate a Remote Request (RQ) message
and send it to the appropriate node.  After completing the
requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.
Messages generated as a side-effect of a directive (such as
deleting a global task) are known as Remote Processes (RP) and
do not require the receiving node to respond.

Other than taking slightly longer to execute
directives on remote objects, the application is unaware of the
location of the objects it acts upon.  The exact amount of
overhead required for a remote operation is dependent on the
media connecting the nodes and, to a lesser degree, on the
efficiency of the user-provided MPCI routines.

The following shows the typical transaction sequence
for a remote operation:

@enumerate

@item The application issues a directive accessing a
remote global object.

@item RTEMS determines the node on which the object
resides.

@item RTEMS calls the user-provided MPCI routine
GET_PACKET to obtain a packet in which to build an RQ message.

@item After building a message packet, RTEMS calls the
user-provided MPCI routine SEND_PACKET to transmit the packet to
the node on which the object resides (referred to as the
destination node).

@item The calling task is blocked until the RR message
arrives, and control of the processor is transferred to another
task.

@item The MPCI layer on the destination node senses the
arrival of a packet (commonly in an ISR), and calls the
@code{rtems_multiprocessing_announce}
directive.  This directive readies the Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided
MPCI routine RECEIVE_PACKET, performs the requested operation,
builds an RR message, and returns it to the originating node.

@item The MPCI layer on the originating node senses the
arrival of a packet (typically via an interrupt), and calls the RTEMS
@code{rtems_multiprocessing_announce} directive.  This directive
readies the Multiprocessing Server.

@item The Multiprocessing Server calls the user-provided
MPCI routine RECEIVE_PACKET, readies the original requesting
task, and blocks until another packet arrives.  Control is
transferred to the original task which then completes processing
of the directive.

@end enumerate

If an uncorrectable error occurs in the user-provided
MPCI layer, the fatal error handler should be invoked.  RTEMS
assumes the reliable transmission and reception of messages by
the MPCI and makes no attempt to detect or correct errors.

@subsection Proxies

@cindex proxy, definition

A proxy is an RTEMS data structure which resides on a
remote node and is used to represent a task which must block as
part of a remote operation.  This action can occur as part of the
@code{@value{DIRPREFIX}semaphore_obtain} and
@code{@value{DIRPREFIX}message_queue_receive} directives.  If the
object were local, the task's control block would be available
for modification to indicate it was blocking on a message queue
or semaphore.  However, the task's control block resides only on
the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the
Multiprocessor Configuration Table.  Each node in a
multiprocessor system may require a different number of proxies
to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of
tasks.

@subsection Multiprocessor Configuration Table

The Multiprocessor Configuration Table contains
information needed by RTEMS when used in a multiprocessor
system.  This table is discussed in detail in the section
Multiprocessor Configuration Table of the Configuring a System
chapter.
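
When the standard @code{confdefs.h} configuration mechanism is
used, this table is normally generated from a few macros.  The
following is only a sketch; the values shown are arbitrary, and
the authoritative list of @code{CONFIGURE_MP_*} macros is given
in the Configuring a System chapter:

@example
@group
/* Sketch only -- consult the Configuring a System chapter for
 * the complete set of multiprocessing configuration macros.
 */
#define CONFIGURE_MP_APPLICATION
#define CONFIGURE_MP_NODE_NUMBER             1
#define CONFIGURE_MP_MAXIMUM_NODES           2
#define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS  32
#define CONFIGURE_MP_MAXIMUM_PROXIES         32

#include <rtems/confdefs.h>
@end group
@end example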

@section Multiprocessor Communications Interface Layer

The Multiprocessor Communications Interface Layer
(MPCI) is a set of user-provided procedures which enable the
nodes in a multiprocessor system to communicate with one
another.  These routines are invoked by RTEMS at various times
in the preparation and processing of remote requests.
Interrupts are enabled when an MPCI procedure is invoked.  It is
assumed that if the execution mode and/or interrupt level are
altered by the MPCI layer, they will be restored prior to
returning to RTEMS.

@cindex MPCI, definition

The MPCI layer is responsible for managing a pool of
buffers called packets and for sending these packets between
system nodes.  Packet buffers contain the messages sent between
the nodes.  Typically, the MPCI layer will encapsulate the
packet within an envelope which contains the information needed
by the MPCI layer.  The number of packets available is dependent
on the MPCI layer implementation.

@cindex MPCI entry points

The entry points to the routines in the user's MPCI
layer should be placed in the Multiprocessor Communications
Interface Table.  The user must provide entry points for each of
the following table entries in a multiprocessor system (a sketch
of such a table follows the list):

@itemize @bullet
@item initialization    initialize the MPCI
@item get_packet        obtain a packet buffer
@item return_packet     return a packet buffer
@item send_packet       send a packet to another node
@item receive_packet    called to get an arrived packet
@end itemize
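
The following sketch shows how these entry points might be
collected into an MPCI table.  The field order follows the
Multiprocessor Communications Interface Table described in the
Configuring a System chapter; the routine names, timeout, and
maximum packet size are placeholders chosen for this illustration:

@example
@group
rtems_mpci_table user_mpci_table = @{
  100,                        /* default timeout in ticks */
  MAX_PACKET_SIZE,            /* maximum packet size (application defined) */
  user_mpci_initialization,   /* initialization */
  user_mpci_get_packet,       /* get_packet */
  user_mpci_return_packet,    /* return_packet */
  user_mpci_send_packet,      /* send_packet */
  user_mpci_receive_packet    /* receive_packet */
@};
@end group
@end example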

A packet is sent by RTEMS in each of the following situations:

@itemize @bullet
@item an RQ is generated on an originating node;
@item an RR is generated on a destination node;
@item a global object is created;
@item a global object is deleted;
@item a local task blocked on a remote object is deleted;
@item during system initialization to check for system consistency.
@end itemize

If the target hardware supports it, the arrival of a
packet at a node may generate an interrupt.  Otherwise, the
real-time clock ISR can check for the arrival of a packet.  In
any case, the
@code{rtems_multiprocessing_announce} directive must be called
to announce the arrival of a packet.  After exiting the ISR,
control will be passed to the Multiprocessing Server to process
the packet.  The Multiprocessing Server will call the get_packet
entry to obtain a packet buffer and the receive_packet entry to
copy the message into the buffer obtained.

@subsection INITIALIZATION

The INITIALIZATION component of the user-provided
MPCI layer is called as part of the @code{rtems_initialize_executive}
directive to initialize the MPCI layer and associated hardware.
It is invoked immediately after all of the device drivers have
been initialized.  This component should adhere to the
following prototype:

@findex rtems_mpci_entry
@example
@group
rtems_mpci_entry user_mpci_initialization(
  rtems_configuration_table *configuration
);
@end group
@end example

where configuration is the address of the user's
Configuration Table.  Operations on global objects cannot be
performed until this component is invoked.  The INITIALIZATION
component is invoked only once in the life of any system.  If
the MPCI layer cannot be successfully initialized, the fatal
error manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to
provide the executive with packet buffers.  The INITIALIZATION
routine must create and initialize a pool of packet buffers.
There must be enough packet buffers so RTEMS can obtain one
whenever needed.
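
A minimal sketch of an initialization routine for a hypothetical
shared-memory MPCI is shown below.  The routines
@code{init_hardware}, @code{allocate_shared_packet}, and
@code{push_free_packet}, as well as the pool size and error code,
are placeholders invented for this illustration:

@example
@group
#define NUMBER_OF_PACKETS 16    /* arbitrary pool size */

rtems_mpci_entry user_mpci_initialization(
  rtems_configuration_table *configuration
)
@{
  uint32_t i;

  /* Bring up the communications hardware (placeholder routine). */
  if ( !init_hardware() )
    rtems_fatal_error_occurred( 0xdead0001 );

  /* Build the pool of packet buffers RTEMS will draw from.
   * The allocation and free-list routines are placeholders for
   * board-specific shared-memory management.
   */
  for ( i = 0 ; i < NUMBER_OF_PACKETS ; i++ )
    push_free_packet( allocate_shared_packet() );
@}
@end group
@end example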

@subsection GET_PACKET

The GET_PACKET component of the user-provided MPCI
layer is called when RTEMS must obtain a packet buffer to send
or broadcast a message.  This component should adhere to the
following prototype:

@example
@group
rtems_mpci_entry user_mpci_get_packet(
  rtems_packet_prefix **packet
);
@end group
@end example

where packet is the address of a pointer to a packet.
This routine always succeeds and, upon return, packet will
contain the address of a packet.  If, for any reason, a packet
cannot be successfully obtained, then the fatal error manager
should be invoked.

RTEMS has been optimized to avoid the need for
obtaining a packet each time a message is sent or broadcast.
For example, RTEMS sends response messages (RR) back to the
originator in the same packet in which the request message (RQ)
arrived.
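
Continuing the shared-memory sketch above, GET_PACKET might simply
hand out the next buffer from the pool built during initialization.
The routine @code{pop_free_packet} and the error code are
placeholders for this illustration:

@example
@group
rtems_mpci_entry user_mpci_get_packet(
  rtems_packet_prefix **packet
)
@{
  /* Placeholder free-list operation; returns NULL when exhausted. */
  *packet = pop_free_packet();

  if ( *packet == NULL )
    rtems_fatal_error_occurred( 0xdead0002 );
@}
@end group
@end example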

@subsection RETURN_PACKET

The RETURN_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to release a packet to the free
packet buffer pool.  This component should adhere to the
following prototype:

@example
@group
rtems_mpci_entry user_mpci_return_packet(
  rtems_packet_prefix *packet
);
@end group
@end example

where packet is the address of a packet.  If the
packet cannot be successfully returned, the fatal error manager
should be invoked.
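
In the same sketch, RETURN_PACKET is the inverse of GET_PACKET and
simply places the buffer back on the free list
(@code{push_free_packet} is again a placeholder):

@example
@group
rtems_mpci_entry user_mpci_return_packet(
  rtems_packet_prefix *packet
)
@{
  /* Placeholder free-list operation. */
  push_free_packet( packet );
@}
@end group
@end example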

@subsection RECEIVE_PACKET

The RECEIVE_PACKET component of the user-provided
MPCI layer is called when RTEMS needs to obtain a packet which
has previously arrived.  This component should adhere to the
following prototype:

@example
@group
rtems_mpci_entry user_mpci_receive_packet(
  rtems_packet_prefix **packet
);
@end group
@end example

where packet is a pointer to the address of a packet
in which to place the message from another node.  If a message is
available, then packet will contain the address of the message
from another node.  If no messages are available, this entry
should set packet to NULL.
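
A sketch of RECEIVE_PACKET for the hypothetical shared-memory
layer follows; @code{dequeue_arrived_packet} is a placeholder for
however the layer tracks packets that have already arrived:

@example
@group
rtems_mpci_entry user_mpci_receive_packet(
  rtems_packet_prefix **packet
)
@{
  /* Hand RTEMS the oldest arrived packet, or NULL if none. */
  *packet = dequeue_arrived_packet();
@}
@end group
@end example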

@subsection SEND_PACKET

The SEND_PACKET component of the user-provided MPCI
layer is called when RTEMS needs to send a packet containing a
message to another node.  This component should adhere to the
following prototype:

@example
@group
rtems_mpci_entry user_mpci_send_packet(
  uint32_t               node,
  rtems_packet_prefix  **packet
);
@end group
@end example

where node is the node number of the destination and packet is the
address of a packet which contains a message.  If the packet cannot
be successfully sent, the fatal error manager should be invoked.

If node is set to zero, the packet is to be
broadcast to all other nodes in the system.  Although some
MPCI layers will be built upon hardware which supports a
broadcast mechanism, others may be required to generate a copy
of the packet for each node in the system.
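
A sketch of a SEND_PACKET routine which emulates broadcast in
software is shown below.  Here @code{deliver_packet},
@code{number_of_nodes}, and @code{local_node_number} are
placeholders; in a real layer the node counts would come from the
Multiprocessor Configuration Table:

@example
@group
rtems_mpci_entry user_mpci_send_packet(
  uint32_t               node,
  rtems_packet_prefix  **packet
)
@{
  uint32_t destination;

  if ( node != 0 ) @{
    /* Directed send to a single node (placeholder routine). */
    deliver_packet( node, *packet );
    return;
  @}

  /* node == 0: emulate broadcast by copying to every other node. */
  for ( destination = 1 ; destination <= number_of_nodes ; destination++ )
    if ( destination != local_node_number )
      deliver_packet( destination, *packet );
@}
@end group
@end example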

@c XXX packet_prefix structure needs to be defined in this document
Many MPCI layers use the @code{packet_length} field of the
@code{rtems_packet_prefix} portion
of the packet to avoid sending unnecessary data.  This is especially
useful if the media connecting the nodes is relatively slow.

The @code{to_convert} field of the @code{rtems_packet_prefix} portion
of the packet indicates
how much of the packet (in @code{uint32_t}'s) may require
conversion in a heterogeneous system.

@subsection Supporting Heterogeneous Environments

@cindex heterogeneous multiprocessing

Developing an MPCI layer for a heterogeneous system
requires a thorough understanding of the differences between the
processors which comprise the system.  One difficult problem is
the varying data representation schemes used by different
processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.
Processors which place the least significant byte at the
smallest address are classified as little endian processors.
Little endian byte-ordering is shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 3     |     Byte 2     |    Byte 1     |    Byte 0      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Conversely, processors which place the most
significant byte at the smallest address are classified as big
endian processors.  Big endian byte-ordering is shown below:

@example
@group
+---------------+----------------+---------------+----------------+
|               |                |               |                |
|    Byte 0     |     Byte 1     |    Byte 2     |    Byte 3      |
|               |                |               |                |
+---------------+----------------+---------------+----------------+
@end group
@end example

Unfortunately, sharing a data structure between big
endian and little endian processors requires translation into a
common endian format.  An application designer typically chooses
the common endian format to minimize conversion overhead.

Another issue in the design of shared data structures
is the alignment of data structure elements.  Alignment is both
processor and compiler implementation dependent.  For example,
some processors allow data elements to begin on any address
boundary, while others impose restrictions.  Common restrictions
are that data elements must begin on either an even address or
on a long word boundary.  Violation of these restrictions may
cause an exception or impose a performance penalty.

Other issues which commonly impact the design of
shared data structures include the representation of floating
point numbers, bit fields, decimal data, and character strings.
In addition, the representation method for negative integers
could be one's or two's complement.  These factors combine to
increase the complexity of designing and manipulating data
structures shared between processors.

RTEMS addresses these issues in the design of the
packets used to communicate between nodes.  The RTEMS packet
format is designed to allow the MPCI layer to perform all
necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer
must be aware of the following:

@itemize @bullet
@item All packets must begin on a four byte boundary.

@item Packets are composed of both RTEMS and application data.
All RTEMS data is treated as thirty-two (32) bit unsigned
quantities and is in the first @code{@value{RPREFIX}MINIMUM_UNSIGNED32S_TO_CONVERT}
thirty-two (32) bit quantities of the packet.

@item The RTEMS data component of the packet must be in native
endian format.  Endian conversion may be performed by either the
sending or receiving MPCI layer.

@item RTEMS makes no assumptions regarding the application
data component of the packet.
@end itemize
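
As a sketch of what such conversion can look like, a sending MPCI
layer whose native byte order differs from the agreed transfer
order might swap the RTEMS portion of each outgoing packet before
transmission.  Here @code{swap_uint32} is a placeholder for a
CPU-specific byte-swap helper, and the loop bound comes from the
packet's @code{to_convert} field described above:

@example
@group
/* Sketch: performed by the sending MPCI layer, so the to_convert
 * field can still be read in the sender's native byte order.
 */
void convert_packet_for_transmission( rtems_packet_prefix *packet )
@{
  uint32_t *word  = (uint32_t *) packet;
  uint32_t  count = packet->to_convert;
  uint32_t  i;

  for ( i = 0 ; i < count ; i++ )
    word[ i ] = swap_uint32( word[ i ] );
@}
@end group
@end example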

@section Operations

@subsection Announcing a Packet

The @code{rtems_multiprocessing_announce} directive is called by
the MPCI layer to inform RTEMS that a packet has arrived from
another node.  This directive can be called from an interrupt
service routine or from within a polling routine.
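
For example, an interrupt-driven MPCI layer might announce
arrivals from its receive ISR as in the following sketch, where
@code{packet_has_arrived} and the interrupt wiring are placeholders
for hardware-specific details:

@example
@group
rtems_isr mpci_receive_isr(
  rtems_vector_number vector
)
@{
  /* Hardware-specific check for an arrived packet (placeholder). */
  if ( packet_has_arrived() )
    rtems_multiprocessing_announce();
@}
@end group
@end example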

@section Directives

This section details the additional directives
required to support RTEMS in a multiprocessor configuration.  A
subsection is dedicated to each of this manager's directives and
describes the calling sequence, related constants, usage, and
status codes.

@c
@c
@c
@page
@subsection MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet

@cindex announce arrival of packet

@subheading CALLING SEQUENCE:

@findex rtems_multiprocessing_announce
@example
void rtems_multiprocessing_announce( void );
@end example

@subheading DIRECTIVE STATUS CODES:

NONE

@subheading DESCRIPTION:

This directive informs RTEMS that a multiprocessing
communications packet has arrived from another node.  This
directive is called by the user-provided MPCI, and is only used
in multiprocessor configurations.

@subheading NOTES:

This directive is typically called from an ISR.

This directive will almost certainly cause the
calling task to be preempted.

This directive does not generate activity on remote nodes.