Multiprocessing Manager
#######################

.. index:: multiprocessing

Introduction
============

In multiprocessor real-time systems, new requirements, such as
sharing data and global resources between processors, are
introduced.  This requires an efficient and reliable
communications vehicle which allows all processors to communicate
with each other as necessary.  In addition, the ramifications of
multiple processors affect each and every characteristic of a
real-time system, almost always making them more complicated.

RTEMS addresses these issues by providing simple and flexible
real-time multiprocessing capabilities.  The executive easily
lends itself to both tightly-coupled and loosely-coupled
configurations of the target system hardware.  In addition, RTEMS
supports systems composed of both homogeneous and heterogeneous
mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the
physical boundaries of the target hardware configuration.  This
goal is achieved by presenting the application software with a
logical view of the target system where the boundaries between
processor nodes are transparent.  As a result, the application
developer may designate objects such as tasks, queues, events,
signals, semaphores, and memory blocks as global objects.  These
global objects may then be accessed by any task regardless of the
physical location of the object and the accessing task.  RTEMS
automatically determines that the object being accessed resides
on another processor and performs the actions required to access
the desired object.  Simply stated, RTEMS allows the entire
system, both hardware and software, to be viewed logically as a
single system.

Background
==========

.. index:: multiprocessing topologies

RTEMS makes no assumptions regarding the connection media or
topology of a multiprocessor system.  The tasks which compose a
particular application can be spread among as many processors as
needed to satisfy the application's timing requirements.  The
application tasks can interact using a subset of the RTEMS
directives as if they were on the same processor.  These
directives allow application tasks to exchange data, communicate,
and synchronize regardless of which processor they reside upon.

The RTEMS multiprocessor execution model is multiple instruction
streams with multiple data streams (MIMD).  This execution model
has each of the processors executing code independent of the
other processors.  Because of this parallelism, the application
designer can more easily guarantee deterministic behavior.

By supporting heterogeneous environments, RTEMS allows the
systems designer to select the most efficient processor for each
subsystem of the application.  Configuring RTEMS for a
heterogeneous environment is no more difficult than for a
homogeneous one.  In keeping with the RTEMS philosophy of
providing transparent physical node boundaries, the minimal
heterogeneous processing required is isolated in the MPCI layer.

Nodes
-----
.. index:: nodes, definition

A processor in an RTEMS system is referred to as a node.  Each
node is assigned a unique non-zero node number by the application
designer.  RTEMS assumes that node numbers are assigned
consecutively from one to the ``maximum_nodes`` configuration
parameter.  The node number, ``node``, and the maximum number of
nodes, ``maximum_nodes``, in a system are found in the
Multiprocessor Configuration Table.  The ``maximum_nodes`` field
and the maximum number of global objects,
``maximum_global_objects``, are required to be the same on all
nodes in a system.

The node number is used by RTEMS to identify each node when
performing remote operations.  Thus, the Multiprocessor
Communications Interface Layer (MPCI) must be able to route
messages based on the node number.

89 | |
---|
90 | Global Objects |
---|
91 | -------------- |
---|
92 | .. index:: global objects, definition |
---|
93 | |
---|
94 | All RTEMS objects which are created with the GLOBAL |
---|
95 | attribute will be known on all other nodes. Global objects can |
---|
96 | be referenced from any node in the system, although certain |
---|
97 | directive specific restrictions (e.g. one cannot delete a remote |
---|
98 | object) may apply. A task does not have to be global to perform |
---|
99 | operations involving remote objects. The maximum number of |
---|
100 | global objects is the system is user configurable and can be |
---|
101 | found in the maximum_global_objects field in the Multiprocessor |
---|
102 | Configuration Table. The distribution of tasks to processors is |
---|
103 | performed during the application design phase. Dynamic task |
---|
104 | relocation is not supported by RTEMS. |
---|
105 | |
---|
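The following is a minimal sketch of this transparency using a
counting semaphore; the name ``SHRD`` and the count value are
arbitrary choices for illustration:

.. code:: c

    #include <rtems.h>

    /* Executed on the creating node: the GLOBAL attribute makes the
     * semaphore visible in every node's global object table. */
    void create_shared_semaphore( void )
    {
      rtems_id          sem_id;
      rtems_status_code status;

      status = rtems_semaphore_create(
        rtems_build_name( 'S', 'H', 'R', 'D' ),
        1,                                        /* initial count */
        RTEMS_GLOBAL | RTEMS_COUNTING_SEMAPHORE,
        0,                                        /* priority ceiling (unused) */
        &sem_id
      );
    }

    /* Executed on any node: the name lookup searches the global
     * object table, so the semaphore's location is transparent. */
    void use_shared_semaphore( void )
    {
      rtems_id          sem_id;
      rtems_status_code status;

      status = rtems_semaphore_ident(
        rtems_build_name( 'S', 'H', 'R', 'D' ),
        RTEMS_SEARCH_ALL_NODES,
        &sem_id
      );

      status = rtems_semaphore_obtain( sem_id, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
      /* ... use the shared resource ... */
      status = rtems_semaphore_release( sem_id );
    }
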
Global Object Table
-------------------
.. index:: global objects table

RTEMS maintains two tables containing object information on every
node in a multiprocessor system: a local object table and a
global object table.  The local object table on each node is
unique and contains information for all objects created on this
node whether those objects are local or global.  The global
object table contains information regarding all global objects in
the system and, consequently, is the same on every node.

Since each node must maintain an identical copy of the global
object table, the maximum number of entries in each copy of the
table must be the same.  The maximum number of entries in each
copy is determined by the ``maximum_global_objects`` parameter in
the Multiprocessor Configuration Table.  This parameter, as well
as the ``maximum_nodes`` parameter, is required to be the same on
all nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion
of a global object.

Remote Operations
-----------------
.. index:: MPCI and remote operations

When an application performs an operation on a remote global
object, RTEMS must generate a Remote Request (RQ) message and
send it to the appropriate node.  After completing the requested
operation, the remote node will build a Remote Response (RR)
message and send it to the originating node.  Messages generated
as a side-effect of a directive (such as deleting a global task)
are known as Remote Processes (RP) and do not require the
receiving node to respond.

Other than taking slightly longer to execute directives on remote
objects, the application is unaware of the location of the
objects it acts upon.  The exact amount of overhead required for
a remote operation is dependent on the media connecting the nodes
and, to a lesser degree, on the efficiency of the user-provided
MPCI routines.

The following shows the typical transaction sequence during a
remote application:

#. The application issues a directive accessing a remote global
   object.

#. RTEMS determines the node on which the object resides.

#. RTEMS calls the user-provided MPCI routine GET_PACKET to
   obtain a packet in which to build an RQ message.

#. After building a message packet, RTEMS calls the user-provided
   MPCI routine SEND_PACKET to transmit the packet to the node on
   which the object resides (referred to as the destination node).

#. The calling task is blocked until the RR message arrives, and
   control of the processor is transferred to another task.

#. The MPCI layer on the destination node senses the arrival of a
   packet (commonly in an ISR), and calls the
   ``rtems_multiprocessing_announce`` directive.  This directive
   readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI
   routine RECEIVE_PACKET, performs the requested operation,
   builds an RR message, and returns it to the originating node.

#. The MPCI layer on the originating node senses the arrival of a
   packet (typically via an interrupt), and calls the
   ``rtems_multiprocessing_announce`` directive.  This directive
   readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI
   routine RECEIVE_PACKET, readies the original requesting task,
   and blocks until another packet arrives.  Control is
   transferred to the original task which then completes
   processing of the directive.

If an uncorrectable error occurs in the user-provided MPCI layer,
the fatal error handler should be invoked.  RTEMS assumes the
reliable transmission and reception of messages by the MPCI and
makes no attempt to detect or correct errors.

Proxies
-------
.. index:: proxy, definition

A proxy is an RTEMS data structure which resides on a remote node
and is used to represent a task which must block as part of a
remote operation.  This action can occur as part of the
``rtems_semaphore_obtain`` and ``rtems_message_queue_receive``
directives.  If the object were local, the task's control block
would be available for modification to indicate it was blocking
on a message queue or semaphore.  However, the task's control
block resides only on the same node as the task.  As a result,
the remote node must allocate a proxy to represent the task until
it can be readied.

The maximum number of proxies is defined in the Multiprocessor
Configuration Table.  Each node in a multiprocessor system may
require a different number of proxies to be configured.  The
distribution of proxy control blocks is application dependent and
is different from the distribution of tasks.

Multiprocessor Configuration Table
----------------------------------

The Multiprocessor Configuration Table contains information
needed by RTEMS when used in a multiprocessor system.  This table
is discussed in detail in the section Multiprocessor
Configuration Table of the Configuring a System chapter.

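As an illustration only, the following sketch shows how a node
might supply this table through the ``rtems/confdefs.h``
configuration mechanism; the node number and object counts are
arbitrary values for a hypothetical two-node system:

.. code:: c

    /* Application configuration for node 1 of a two-node system. */
    #define CONFIGURE_MP_APPLICATION
    #define CONFIGURE_MP_NODE_NUMBER             1
    #define CONFIGURE_MP_MAXIMUM_NODES           2
    #define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS  32
    #define CONFIGURE_MP_MAXIMUM_PROXIES         8

    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>
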
Multiprocessor Communications Interface Layer
=============================================

.. index:: MPCI, definition

The Multiprocessor Communications Interface Layer (MPCI) is a set
of user-provided procedures which enable the nodes in a
multiprocessor system to communicate with one another.  These
routines are invoked by RTEMS at various times in the preparation
and processing of remote requests.  Interrupts are enabled when
an MPCI procedure is invoked.  It is assumed that if the
execution mode and/or interrupt level are altered by the MPCI
layer, they will be restored prior to returning to RTEMS.

The MPCI layer is responsible for managing a pool of buffers
called packets and for sending these packets between system
nodes.  Packet buffers contain the messages sent between the
nodes.  Typically, the MPCI layer will encapsulate the packet
within an envelope which contains the information needed by the
MPCI layer.  The number of packets available is dependent on the
MPCI layer implementation.

.. index:: MPCI entry points

The entry points to the routines in the user's MPCI layer should
be placed in the Multiprocessor Communications Interface Table (a
sketch of such a table follows the list below).  The user must
provide entry points for each of the following table entries in a
multiprocessor system:

- initialization - initialize the MPCI

- get_packet - obtain a packet buffer

- return_packet - return a packet buffer

- send_packet - send a packet to another node

- receive_packet - called to get an arrived packet

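The sketch below shows one plausible way to populate such a
table.  The field ordering follows the ``rtems_mpci_table``
structure as assumed here, and the timeout and packet-size values
are placeholders:

.. code:: c

    #include <rtems.h>

    /* The five user-provided entries, using the prototypes detailed
     * in the following sections. */
    rtems_mpci_entry user_mpci_initialization( rtems_configuration_table *configuration );
    rtems_mpci_entry user_mpci_get_packet( rtems_packet_prefix **packet );
    rtems_mpci_entry user_mpci_return_packet( rtems_packet_prefix *packet );
    rtems_mpci_entry user_mpci_send_packet( uint32_t node, rtems_packet_prefix **packet );
    rtems_mpci_entry user_mpci_receive_packet( rtems_packet_prefix **packet );

    /* Wire the entry points into the table; timeout and maximum
     * packet size are illustrative values. */
    rtems_mpci_table user_mpci_table = {
      RTEMS_NO_TIMEOUT,           /* default timeout for MPCI operations */
      256,                        /* maximum packet size in bytes        */
      user_mpci_initialization,   /* initialization entry                */
      user_mpci_get_packet,       /* get_packet entry                    */
      user_mpci_return_packet,    /* return_packet entry                 */
      user_mpci_send_packet,      /* send_packet entry                   */
      user_mpci_receive_packet    /* receive_packet entry                */
    };
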
A packet is sent by RTEMS in each of the following situations:

- an RQ is generated on an originating node;

- an RR is generated on a destination node;

- a global object is created;

- a global object is deleted;

- a local task blocked on a remote object is deleted;

- during system initialization to check for system consistency.

If the target hardware supports it, the arrival of a packet at a
node may generate an interrupt.  Otherwise, the real-time clock
ISR can check for the arrival of a packet.  In any case, the
``rtems_multiprocessing_announce`` directive must be called to
announce the arrival of a packet.  After exiting the ISR, control
will be passed to the Multiprocessing Server to process the
packet.  The Multiprocessing Server will call the get_packet
entry to obtain a packet buffer and the receive_packet entry to
copy the message into the buffer obtained.

INITIALIZATION
--------------

.. index:: rtems_mpci_entry

The INITIALIZATION component of the user-provided MPCI layer is
called as part of the ``rtems_initialize_executive`` directive to
initialize the MPCI layer and associated hardware.  It is invoked
immediately after all of the device drivers have been
initialized.  This component should adhere to the following
prototype:

.. code:: c

    rtems_mpci_entry user_mpci_initialization(
      rtems_configuration_table *configuration
    );

where ``configuration`` is the address of the user's
Configuration Table.  Operations on global objects cannot be
performed until this component is invoked.  The INITIALIZATION
component is invoked only once in the life of any system.  If the
MPCI layer cannot be successfully initialized, the fatal error
manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to provide the
executive with packet buffers.  The INITIALIZATION routine must
create and initialize a pool of packet buffers.  There must be
enough packet buffers so RTEMS can obtain one whenever needed.

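The following is a minimal sketch of such a routine.  The helpers
``shared_memory_setup()`` and ``allocate_shared()``, the pool
sizes, and the fatal error code are all hypothetical:

.. code:: c

    #include <rtems.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define PACKET_POOL_SIZE  16    /* hypothetical pool depth  */
    #define MAX_PACKET_SIZE   256   /* hypothetical packet size */

    /* Assumed application-specific helpers. */
    extern bool  shared_memory_setup( void );
    extern void *allocate_shared( size_t size );

    static rtems_packet_prefix *packet_pool[ PACKET_POOL_SIZE ];

    rtems_mpci_entry user_mpci_initialization(
      rtems_configuration_table *configuration
    )
    {
      int i;

      /* Bring up the physical medium connecting the nodes. */
      if ( !shared_memory_setup() )
        rtems_fatal_error_occurred( 0xDEAD0001 );

      /* Build the pool of packet buffers from memory which is
       * reachable by the communications medium. */
      for ( i = 0 ; i < PACKET_POOL_SIZE ; i++ )
        packet_pool[ i ] = allocate_shared( MAX_PACKET_SIZE );
    }
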
GET_PACKET
----------

The GET_PACKET component of the user-provided MPCI layer is
called when RTEMS must obtain a packet buffer to send or
broadcast a message.  This component should adhere to the
following prototype:

.. code:: c

    rtems_mpci_entry user_mpci_get_packet(
      rtems_packet_prefix **packet
    );

where ``packet`` is the address of a pointer to a packet.  This
routine always succeeds and, upon return, ``packet`` will contain
the address of a packet.  If for any reason a packet cannot be
successfully obtained, then the fatal error manager should be
invoked.

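A minimal sketch, assuming a ``pool_take()`` helper which pops a
buffer from the pool built during initialization and returns
``NULL`` when the pool is empty:

.. code:: c

    #include <rtems.h>
    #include <stddef.h>

    /* Assumed helper managing the free packet pool. */
    extern rtems_packet_prefix *pool_take( void );

    rtems_mpci_entry user_mpci_get_packet(
      rtems_packet_prefix **packet
    )
    {
      *packet = pool_take();

      /* This entry is not allowed to fail; an exhausted pool is an
       * unrecoverable configuration error. */
      if ( *packet == NULL )
        rtems_fatal_error_occurred( 0xDEAD0002 );
    }
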
RTEMS has been optimized to avoid the need for obtaining a packet
each time a message is sent or broadcast.  For example, RTEMS
sends response messages (RR) back to the originator in the same
packet in which the request message (RQ) arrived.

RETURN_PACKET
-------------

The RETURN_PACKET component of the user-provided MPCI layer is
called when RTEMS needs to release a packet to the free packet
buffer pool.  This component should adhere to the following
prototype:

.. code:: c

    rtems_mpci_entry user_mpci_return_packet(
      rtems_packet_prefix *packet
    );

where ``packet`` is the address of a packet.  If the packet
cannot be successfully returned, the fatal error manager should
be invoked.

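A matching sketch, assuming a ``pool_give()`` helper which pushes
the buffer back onto the free pool:

.. code:: c

    #include <rtems.h>
    #include <stdbool.h>

    /* Assumed helper managing the free packet pool. */
    extern bool pool_give( rtems_packet_prefix *packet );

    rtems_mpci_entry user_mpci_return_packet(
      rtems_packet_prefix *packet
    )
    {
      if ( !pool_give( packet ) )
        rtems_fatal_error_occurred( 0xDEAD0003 );
    }
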
353 | |
---|
354 | RECEIVE_PACKET |
---|
355 | -------------- |
---|
356 | |
---|
357 | The RECEIVE_PACKET component of the user-provided |
---|
358 | MPCI layer is called when RTEMS needs to obtain a packet which |
---|
359 | has previously arrived. This component should be adhere to the |
---|
360 | following prototype: |
---|
361 | .. code:: c |
---|
362 | |
---|
363 | rtems_mpci_entry user_mpci_receive_packet( |
---|
364 | rtems_packet_prefix \**packet |
---|
365 | ); |
---|
366 | |
---|
367 | where packet is a pointer to the address of a packet |
---|
368 | to place the message from another node. If a message is |
---|
369 | available, then packet will contain the address of the message |
---|
370 | from another node. If no messages are available, this entry |
---|
371 | packet should contain NULL. |
---|
372 | |
---|
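A minimal sketch, assuming an ``inbound_dequeue()`` helper which
returns the oldest packet that has arrived from another node, or
``NULL`` when the inbound queue is empty:

.. code:: c

    #include <rtems.h>
    #include <stddef.h>

    /* Assumed helper draining the queue of arrived packets. */
    extern rtems_packet_prefix *inbound_dequeue( void );

    rtems_mpci_entry user_mpci_receive_packet(
      rtems_packet_prefix **packet
    )
    {
      /* NULL simply tells the Multiprocessing Server that no more
       * packets are waiting; it is not an error. */
      *packet = inbound_dequeue();
    }
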
SEND_PACKET
-----------

The SEND_PACKET component of the user-provided MPCI layer is
called when RTEMS needs to send a packet containing a message to
another node.  This component should adhere to the following
prototype:

.. code:: c

    rtems_mpci_entry user_mpci_send_packet(
      uint32_t              node,
      rtems_packet_prefix **packet
    );

where ``node`` is the node number of the destination and
``packet`` is the address of a packet which contains a message.
If the packet cannot be successfully sent, the fatal error
manager should be invoked.

If ``node`` is set to zero, the packet is to be broadcast to all
other nodes in the system.  Although some MPCI layers will be
built upon hardware which supports a broadcast mechanism, others
may be required to generate a copy of the packet for each node in
the system.

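A sketch of both cases follows.  The ``medium_transmit()`` helper
and the ``local_node`` and ``total_nodes`` variables, assumed to
mirror the Multiprocessor Configuration Table, are hypothetical:

.. code:: c

    #include <rtems.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed helper which moves one packet to one node. */
    extern bool medium_transmit( uint32_t node, rtems_packet_prefix *packet );

    extern uint32_t local_node;    /* this node's number      */
    extern uint32_t total_nodes;   /* maximum_nodes parameter */

    rtems_mpci_entry user_mpci_send_packet(
      uint32_t              node,
      rtems_packet_prefix **packet
    )
    {
      uint32_t destination;

      if ( node != 0 ) {
        if ( !medium_transmit( node, *packet ) )
          rtems_fatal_error_occurred( 0xDEAD0004 );
        return;
      }

      /* node == 0: broadcast.  With no hardware broadcast support,
       * send a copy to every node except this one. */
      for ( destination = 1 ; destination <= total_nodes ; destination++ ) {
        if ( destination == local_node )
          continue;
        if ( !medium_transmit( destination, *packet ) )
          rtems_fatal_error_occurred( 0xDEAD0004 );
      }
    }
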
.. COMMENT: XXX packet_prefix structure needs to be defined in this document

Many MPCI layers use the ``packet_length`` field of the
``rtems_packet_prefix`` portion of the packet to avoid sending
unnecessary data.  This is especially useful if the media
connecting the nodes is relatively slow.

The ``to_convert`` field of the ``rtems_packet_prefix`` portion
of the packet indicates how much of the packet in 32-bit units
may require conversion in a heterogeneous system.

Supporting Heterogeneous Environments
-------------------------------------
.. index:: heterogeneous multiprocessing

Developing an MPCI layer for a heterogeneous system requires a
thorough understanding of the differences between the processors
which comprise the system.  One difficult problem is the varying
data representation schemes used by different processor types.
The most pervasive data representation problem is the order of
the bytes which compose a data entity.  Processors which place
the least significant byte at the smallest address are classified
as little endian processors.  Little endian byte-ordering is
shown below:

.. code:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |    Byte 1     |     Byte 0     |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Conversely, processors which place the most significant byte at
the smallest address are classified as big endian processors.
Big endian byte-ordering is shown below:

.. code:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |    Byte 2     |     Byte 3     |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Unfortunately, sharing a data structure between big endian and
little endian processors requires translation into a common
endian format.  An application designer typically chooses the
common endian format to minimize conversion overhead.

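As a sketch of such a conversion, the following helper swaps the
byte order of the first ``to_convert`` 32-bit words of a packet,
leaving any application data that follows untouched; the function
names are illustrative:

.. code:: c

    #include <stdint.h>

    /* Reverse the byte order of one 32-bit word. */
    static uint32_t swap_u32( uint32_t value )
    {
      return ( value >> 24 )                |
             ( ( value >> 8 )  & 0xFF00 )   |
             ( ( value << 8 )  & 0xFF0000 ) |
             ( value << 24 );
    }

    /* Convert the RTEMS-owned prefix of a packet between endian
     * formats.  Only the first to_convert words are RTEMS data. */
    void convert_packet_prefix( uint32_t *words, uint32_t to_convert )
    {
      uint32_t i;

      for ( i = 0 ; i < to_convert ; i++ )
        words[ i ] = swap_u32( words[ i ] );
    }
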
Another issue in the design of shared data structures is the
alignment of data structure elements.  Alignment is both
processor and compiler implementation dependent.  For example,
some processors allow data elements to begin on any address
boundary, while others impose restrictions.  Common restrictions
are that data elements must begin on either an even address or on
a long word boundary.  Violation of these restrictions may cause
an exception or impose a performance penalty.

Other issues which commonly impact the design of shared data
structures include the representation of floating point numbers,
bit fields, decimal data, and character strings.  In addition,
the representation method for negative integers could be one's or
two's complement.  These factors combine to increase the
complexity of designing and manipulating data structures shared
between processors.

RTEMS addressed these issues in the design of the packets used to
communicate between nodes.  The RTEMS packet format is designed
to allow the MPCI layer to perform all necessary conversion
without burdening the developer with the details of the RTEMS
packet format.  As a result, the MPCI layer must be aware of the
following:

- All packets must begin on a four byte boundary.

- Packets are composed of both RTEMS and application data.  All
  RTEMS data is treated as 32-bit unsigned quantities and is in
  the first ``to_convert`` 32-bit quantities of the packet.  The
  ``to_convert`` field is part of the ``rtems_packet_prefix``
  portion of the packet.

- The RTEMS data component of the packet must be in native endian
  format.  Endian conversion may be performed by either the
  sending or receiving MPCI layer.

- RTEMS makes no assumptions regarding the application data
  component of the packet.

Operations
==========

Announcing a Packet
-------------------

The ``rtems_multiprocessing_announce`` directive is called by the
MPCI layer to inform RTEMS that a packet has arrived from another
node.  This directive can be called from an interrupt service
routine or from within a polling routine.

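As a sketch, an interrupt-driven MPCI might announce arrivals
from its receive ISR; the device helper name below is
hypothetical:

.. code:: c

    #include <rtems.h>

    /* Assumed helper which acknowledges the receive interrupt. */
    extern void medium_acknowledge_interrupt( void );

    rtems_isr packet_arrival_isr(
      rtems_vector_number vector
    )
    {
      medium_acknowledge_interrupt();

      /* Ready the Multiprocessing Server to process the packet
       * after the ISR exits. */
      rtems_multiprocessing_announce();
    }
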
Directives
==========

This section details the additional directives required to
support RTEMS in a multiprocessor configuration.  A subsection is
dedicated to each of this manager's directives and describes the
calling sequence, related constants, usage, and status codes.

MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
-----------------------------------------------------------
.. index:: announce arrival of packet

**CALLING SEQUENCE:**

.. index:: rtems_multiprocessing_announce

.. code:: c

    void rtems_multiprocessing_announce( void );

**DIRECTIVE STATUS CODES:**

NONE

**DESCRIPTION:**

This directive informs RTEMS that a multiprocessing
communications packet has arrived from another node.  This
directive is called by the user-provided MPCI, and is only used
in multiprocessor configurations.

**NOTES:**

This directive is typically called from an ISR.

This directive will almost certainly cause the calling task to be
preempted.

This directive does not generate activity on remote nodes.

.. COMMENT: COPYRIGHT (c) 2014.

.. COMMENT: On-Line Applications Research Corporation (OAR).

.. COMMENT: All rights reserved.