.. comment SPDX-License-Identifier: CC-BY-SA-4.0

.. Copyright (C) 1988, 2008 On-Line Applications Research Corporation (OAR)
.. COMMENT: All rights reserved.

.. index:: multiprocessing

Multiprocessing Manager
***********************

Introduction
============

Multiprocessor real-time systems introduce new requirements, such as sharing
data and global resources between processors.  These requirements call for an
efficient and reliable communications vehicle which allows all processors to
communicate with each other as necessary.  In addition, the ramifications of
multiple processors affect every characteristic of a real-time system, almost
always making it more complicated.

RTEMS addresses these issues by providing simple and flexible real-time
multiprocessing capabilities.  The executive easily lends itself to both
tightly-coupled and loosely-coupled configurations of the target system
hardware.  In addition, RTEMS supports systems composed of both homogeneous and
heterogeneous mixtures of processors and target boards.

A major design goal of the RTEMS executive was to transcend the physical
boundaries of the target hardware configuration.  This goal is achieved by
presenting the application software with a logical view of the target system
where the boundaries between processor nodes are transparent.  As a result, the
application developer may designate objects such as tasks, queues, events,
signals, semaphores, and memory blocks as global objects.  These global objects
may then be accessed by any task regardless of the physical location of the
object and the accessing task.  RTEMS automatically determines that the object
being accessed resides on another processor and performs the actions required
to access the desired object.  Simply stated, RTEMS allows the entire system,
both hardware and software, to be viewed logically as a single system.

The directives provided by the Multiprocessing Manager are:

- rtems_multiprocessing_announce_ - A multiprocessing communications packet has
  arrived

.. index:: multiprocessing topologies

Background
==========

RTEMS makes no assumptions regarding the connection media or topology of a
multiprocessor system.  The tasks which compose a particular application can be
spread among as many processors as needed to satisfy the application's timing
requirements.  The application tasks can interact using a subset of the RTEMS
directives as if they were on the same processor.  These directives allow
application tasks to exchange data, communicate, and synchronize regardless of
which processor they reside upon.

The RTEMS multiprocessor execution model is multiple instruction streams with
multiple data streams (MIMD).  In this execution model, each of the processors
executes code independently of the other processors.  Because of this
parallelism, the application designer can more easily guarantee deterministic
behavior.

By supporting heterogeneous environments, RTEMS allows the systems designer to
select the most efficient processor for each subsystem of the application.
Configuring RTEMS for a heterogeneous environment is no more difficult than for
a homogeneous one.  In keeping with the RTEMS philosophy of providing
transparent physical node boundaries, the minimal heterogeneous processing
required is isolated in the MPCI layer.

.. index:: nodes, definition

Nodes
-----

A processor in an RTEMS system is referred to as a node.  Each node is assigned
a unique non-zero node number by the application designer.  RTEMS assumes that
node numbers are assigned consecutively from one to the ``maximum_nodes``
configuration parameter.  The node number, node, and the maximum number of
nodes, ``maximum_nodes``, in a system are found in the Multiprocessor
Configuration Table.  The ``maximum_nodes`` field and the number of global
objects, ``maximum_global_objects``, are required to be the same on all nodes
in a system.

The node number is used by RTEMS to identify each node when performing remote
operations.  Thus, the Multiprocessor Communications Interface Layer (MPCI)
must be able to route messages based on the node number.

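The node number and the multiprocessing limits are normally supplied through
the application configuration.  The following sketch shows how one node of a
two-node application might be configured with ``<rtems/confdefs.h>``; the
numeric values and the MPCI table name ``my_mpci_table`` are illustrative
assumptions, and the usual non-multiprocessing configuration options are
omitted.

.. code-block:: c

    #include <rtems.h>

    /* User-provided MPCI layer description; see the MPCI section below. */
    extern rtems_mpci_table my_mpci_table;            /* hypothetical name */

    #define CONFIGURE_MP_APPLICATION                  /* enable multiprocessing */
    #define CONFIGURE_MP_NODE_NUMBER            1     /* this node's number */
    #define CONFIGURE_MP_MAXIMUM_NODES          2     /* maximum_nodes */
    #define CONFIGURE_MP_MAXIMUM_GLOBAL_OBJECTS 32    /* maximum_global_objects */
    #define CONFIGURE_MP_MAXIMUM_PROXIES        8     /* see the Proxies section */
    #define CONFIGURE_MP_MPCI_TABLE_POINTER     &my_mpci_table

    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>
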
.. index:: global objects, definition

Global Objects
--------------

All RTEMS objects which are created with the GLOBAL attribute will be known on
all other nodes.  Global objects can be referenced from any node in the system,
although certain directive specific restrictions (e.g. one cannot delete a
remote object) may apply.  A task does not have to be global to perform
operations involving remote objects.  The maximum number of global objects in
the system is user configurable and can be found in the
``maximum_global_objects`` field in the Multiprocessor Configuration Table.
The distribution of tasks to processors is performed during the application
design phase.  Dynamic task relocation is not supported by RTEMS.

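For example, a semaphore created with the ``RTEMS_GLOBAL`` attribute on one
node can be located and used from any other node.  The sketch below, with error
checking omitted for brevity, is only an illustration of this pattern:

.. code-block:: c

    #include <rtems.h>

    /* On node 1: create a semaphore which is visible on all nodes. */
    void create_shared_semaphore( void )
    {
      rtems_id id;

      (void) rtems_semaphore_create(
        rtems_build_name( 'S', 'H', 'R', 'D' ),
        1,                                   /* initial count */
        RTEMS_GLOBAL | RTEMS_PRIORITY | RTEMS_BINARY_SEMAPHORE,
        0,
        &id
      );
    }

    /* On node 2: look up the global semaphore and obtain it. */
    void use_shared_semaphore( void )
    {
      rtems_id id;

      (void) rtems_semaphore_ident(
        rtems_build_name( 'S', 'H', 'R', 'D' ),
        RTEMS_SEARCH_ALL_NODES,
        &id
      );
      (void) rtems_semaphore_obtain( id, RTEMS_WAIT, RTEMS_NO_TIMEOUT );
      /* ... access the shared resource ... */
      (void) rtems_semaphore_release( id );
    }
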
.. index:: global objects table

Global Object Table
-------------------

RTEMS maintains two tables containing object information on every node in a
multiprocessor system: a local object table and a global object table.  The
local object table on each node is unique and contains information for all
objects created on this node whether those objects are local or global.  The
global object table contains information regarding all global objects in the
system and, consequently, is the same on every node.

Since each node must maintain an identical copy of the global object table, the
maximum number of entries in each copy of the table must be the same.  The
maximum number of entries in each copy is determined by the
``maximum_global_objects`` parameter in the Multiprocessor Configuration Table.
This parameter, as well as the ``maximum_nodes`` parameter, is required to be
the same on all nodes.  To maintain consistency among the table copies, every
node in the system must be informed of the creation or deletion of a global
object.

.. index:: MPCI and remote operations

Remote Operations
-----------------

When an application performs an operation on a remote global object, RTEMS must
generate a Remote Request (RQ) message and send it to the appropriate node.
After completing the requested operation, the remote node will build a Remote
Response (RR) message and send it to the originating node.  Messages generated
as a side-effect of a directive (such as deleting a global task) are known as
Remote Processes (RP) and do not require the receiving node to respond.

Other than taking slightly longer to execute directives on remote objects, the
application is unaware of the location of the objects it acts upon.  The exact
amount of overhead required for a remote operation is dependent on the media
connecting the nodes and, to a lesser degree, on the efficiency of the
user-provided MPCI routines.

The following shows the typical transaction sequence during a remote operation:

#. The application issues a directive accessing a remote global object.

#. RTEMS determines the node on which the object resides.

#. RTEMS calls the user-provided MPCI routine ``GET_PACKET`` to obtain a packet
   in which to build an RQ message.

#. After building a message packet, RTEMS calls the user-provided MPCI routine
   ``SEND_PACKET`` to transmit the packet to the node on which the object
   resides (referred to as the destination node).

#. The calling task is blocked until the RR message arrives, and control of the
   processor is transferred to another task.

#. The MPCI layer on the destination node senses the arrival of a packet
   (commonly in an ISR), and calls the ``rtems_multiprocessing_announce``
   directive.  This directive readies the Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, performs the requested operation, builds an RR message,
   and returns it to the originating node.

#. The MPCI layer on the originating node senses the arrival of a packet
   (typically via an interrupt), and calls the RTEMS
   ``rtems_multiprocessing_announce`` directive.  This directive readies the
   Multiprocessing Server.

#. The Multiprocessing Server calls the user-provided MPCI routine
   ``RECEIVE_PACKET``, readies the original requesting task, and blocks until
   another packet arrives.  Control is transferred to the original task which
   then completes processing of the directive.

If an uncorrectable error occurs in the user-provided MPCI layer, the fatal
error handler should be invoked.  RTEMS assumes the reliable transmission and
reception of messages by the MPCI and makes no attempt to detect or correct
errors.

.. index:: proxy, definition

Proxies
-------

A proxy is an RTEMS data structure which resides on a remote node and is used
to represent a task which must block as part of a remote operation.  This
action can occur as part of the ``rtems_semaphore_obtain`` and
``rtems_message_queue_receive`` directives.  If the object were local, the
task's control block would be available for modification to indicate it was
blocking on a message queue or semaphore.  However, the task's control block
resides only on the same node as the task.  As a result, the remote node must
allocate a proxy to represent the task until it can be readied.

The maximum number of proxies is defined in the Multiprocessor Configuration
Table.  Each node in a multiprocessor system may require a different number of
proxies to be configured.  The distribution of proxy control blocks is
application dependent and is different from the distribution of tasks.

Multiprocessor Configuration Table
----------------------------------

The Multiprocessor Configuration Table contains information needed by RTEMS
when used in a multiprocessor system.  This table is discussed in detail in the
section Multiprocessor Configuration Table of the Configuring a System chapter.

Multiprocessor Communications Interface Layer
=============================================

The Multiprocessor Communications Interface Layer (MPCI) is a set of
user-provided procedures which enable the nodes in a multiprocessor system to
communicate with one another.  These routines are invoked by RTEMS at various
times in the preparation and processing of remote requests.  Interrupts are
enabled when an MPCI procedure is invoked.  It is assumed that if the execution
mode and/or interrupt level are altered by the MPCI layer, they will be
restored prior to returning to RTEMS.

.. index:: MPCI, definition

The MPCI layer is responsible for managing a pool of buffers called packets and
for sending these packets between system nodes.  Packet buffers contain the
messages sent between the nodes.  Typically, the MPCI layer will encapsulate
the packet within an envelope which contains the information needed by the MPCI
layer.  The number of packets available is dependent on the MPCI layer
implementation.

.. index:: MPCI entry points

The entry points to the routines in the user's MPCI layer should be placed in
the Multiprocessor Communications Interface Table.  The user must provide entry
points for each of the following table entries in a multiprocessor system:

.. list-table::
   :class: rtems-table

   * - initialization
     - initialize the MPCI
   * - get_packet
     - obtain a packet buffer
   * - return_packet
     - return a packet buffer
   * - send_packet
     - send a packet to another node
   * - receive_packet
     - called to get an arrived packet

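These entry points are collected in an ``rtems_mpci_table``.  The sketch below
assumes the ``user_mpci_*`` routine names used by the prototypes later in this
section; the timeout and packet size values are placeholders, and the field
order follows the table above (consult the Configuring a System chapter for the
authoritative layout).

.. code-block:: c

    #include <rtems.h>

    /* Prototypes as given later in this section. */
    extern rtems_mpci_entry user_mpci_initialization( rtems_configuration_table * );
    extern rtems_mpci_entry user_mpci_get_packet( rtems_packet_prefix ** );
    extern rtems_mpci_entry user_mpci_return_packet( rtems_packet_prefix * );
    extern rtems_mpci_entry user_mpci_send_packet( uint32_t, rtems_packet_prefix ** );
    extern rtems_mpci_entry user_mpci_receive_packet( rtems_packet_prefix ** );

    rtems_mpci_table my_mpci_table = {   /* hypothetical name */
      100,                               /* default timeout in ticks (placeholder) */
      512,                               /* maximum packet size (placeholder) */
      user_mpci_initialization,          /* initialization */
      user_mpci_get_packet,              /* get_packet */
      user_mpci_return_packet,           /* return_packet */
      user_mpci_send_packet,             /* send_packet */
      user_mpci_receive_packet           /* receive_packet */
    };
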
A packet is sent by RTEMS in each of the following situations:

- an RQ is generated on an originating node;

- an RR is generated on a destination node;

- a global object is created;

- a global object is deleted;

- a local task blocked on a remote object is deleted;

- during system initialization to check for system consistency.

If the target hardware supports it, the arrival of a packet at a node may
generate an interrupt.  Otherwise, the real-time clock ISR can check for the
arrival of a packet.  In any case, the ``rtems_multiprocessing_announce``
directive must be called to announce the arrival of a packet.  After exiting
the ISR, control will be passed to the Multiprocessing Server to process the
packet.  The Multiprocessing Server will call the get_packet entry to obtain a
packet buffer and the receive_packet entry to copy the message into the buffer
obtained.

INITIALIZATION
--------------

The INITIALIZATION component of the user-provided MPCI layer is called as part
of the ``rtems_initialize_executive`` directive to initialize the MPCI layer
and associated hardware.  It is invoked immediately after all of the device
drivers have been initialized.  This component should adhere to the following
prototype:

.. index:: rtems_mpci_entry

.. code-block:: c

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    );

where configuration is the address of the user's Configuration Table.
Operations on global objects cannot be performed until this component is
invoked.  The INITIALIZATION component is invoked only once in the life of any
system.  If the MPCI layer cannot be successfully initialized, the fatal error
manager should be invoked by this routine.

One of the primary functions of the MPCI layer is to provide the executive with
packet buffers.  The INITIALIZATION routine must create and initialize a pool
of packet buffers.  There must be enough packet buffers so RTEMS can obtain one
whenever needed.

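A minimal sketch of such a routine is shown below.  It assumes a statically
allocated array of packet buffers chained into a free list which the other MPCI
routines draw from; the pool size, the ``my_*`` names, and the omitted
transport hardware setup are illustrative only.

.. code-block:: c

    #include <rtems.h>

    #define MY_PACKET_COUNT 16                 /* assumed pool size */

    typedef struct my_packet {
      struct my_packet   *next;                /* free-list link */
      rtems_packet_prefix prefix;              /* RTEMS packet data */
      unsigned char       payload[ 64 ];       /* application data area */
    } my_packet;

    static my_packet  my_packet_pool[ MY_PACKET_COUNT ];
    my_packet        *my_free_list;            /* consumed by GET_PACKET */

    rtems_mpci_entry user_mpci_initialization(
        rtems_configuration_table *configuration
    )
    {
      size_t i;

      (void) configuration;

      /* Chain every packet buffer onto the free list. */
      my_free_list = NULL;
      for ( i = 0; i < MY_PACKET_COUNT; ++i ) {
        my_packet_pool[ i ].next = my_free_list;
        my_free_list = &my_packet_pool[ i ];
      }

      /* Board-specific transport hardware initialization would go here. */
    }
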
GET_PACKET
----------

The GET_PACKET component of the user-provided MPCI layer is called when RTEMS
must obtain a packet buffer to send or broadcast a message.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    );

where packet is the address of a pointer to a packet.  This routine always
succeeds and, upon return, packet will contain the address of a packet.  If,
for any reason, a packet cannot be successfully obtained, then the fatal error
manager should be invoked.

RTEMS has been optimized to avoid the need for obtaining a packet each time a
message is sent or broadcast.  For example, RTEMS sends response messages (RR)
back to the originator in the same packet in which the request message (RQ)
arrived.

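A minimal sketch, reusing the free list built by the INITIALIZATION sketch
above (the ``my_packet`` definitions are repeated so the fragment stands
alone); treating pool exhaustion as a fatal error follows the rule stated
above:

.. code-block:: c

    #include <rtems.h>

    /* Same definitions as in the INITIALIZATION sketch. */
    typedef struct my_packet {
      struct my_packet   *next;
      rtems_packet_prefix prefix;
      unsigned char       payload[ 64 ];
    } my_packet;

    extern my_packet *my_free_list;

    rtems_mpci_entry user_mpci_get_packet(
        rtems_packet_prefix **packet
    )
    {
      my_packet *p = my_free_list;

      if ( p == NULL ) {
        /* RTEMS expects this routine to always succeed. */
        rtems_fatal_error_occurred( 0xDEAD0001 );   /* arbitrary error code */
      }

      /* A real layer may need to protect this list from concurrent access. */
      my_free_list = p->next;
      *packet = &p->prefix;
    }
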
RETURN_PACKET
-------------

The RETURN_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to release a packet to the free packet buffer pool.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_return_packet(
        rtems_packet_prefix *packet
    );

where packet is the address of a packet.  If the packet cannot be successfully
returned, the fatal error manager should be invoked.

RECEIVE_PACKET
--------------

The RECEIVE_PACKET component of the user-provided MPCI layer is called when
RTEMS needs to obtain a packet which has previously arrived.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    );

where packet is a pointer to the address of a packet in which to place the
message from another node.  If a message is available, then, upon return,
packet will contain the address of the message from another node.  If no
messages are available, this entry should set packet to NULL.

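A minimal sketch, assuming the transport ISR places arrived packets on a
receive queue drained by a hypothetical helper ``my_receive_queue_get()``:

.. code-block:: c

    #include <rtems.h>
    #include <stddef.h>

    /* Hypothetical helper: returns the oldest packet queued by the transport
     * ISR, or NULL if no packet is waiting.
     */
    extern rtems_packet_prefix *my_receive_queue_get( void );

    rtems_mpci_entry user_mpci_receive_packet(
        rtems_packet_prefix **packet
    )
    {
      /* Hand RTEMS the next arrived packet, or NULL if none is pending. */
      *packet = my_receive_queue_get();
    }
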
SEND_PACKET
-----------

The SEND_PACKET component of the user-provided MPCI layer is called when RTEMS
needs to send a packet containing a message to another node.  This component
should adhere to the following prototype:

.. code-block:: c

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t node,
        rtems_packet_prefix **packet
    );

where node is the node number of the destination and packet is the address of a
packet which contains a message.  If the packet cannot be successfully sent,
the fatal error manager should be invoked.

If node is set to zero, the packet is to be broadcast to all other nodes in the
system.  Although some MPCI layers will be built upon hardware which supports a
broadcast mechanism, others may be required to generate a copy of the packet
for each node in the system.

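A minimal sketch of the broadcast convention is shown below; the driver routine
``my_transport_write()``, the node bookkeeping variables, and the absence of
error handling are assumptions of this fragment.

.. code-block:: c

    #include <rtems.h>

    /* Hypothetical board-specific routine: transmit one packet to one node. */
    extern void my_transport_write( uint32_t node, rtems_packet_prefix *packet );

    /* Node numbers as configured for this application (assumed). */
    extern uint32_t my_local_node;
    extern uint32_t my_maximum_nodes;

    rtems_mpci_entry user_mpci_send_packet(
        uint32_t node,
        rtems_packet_prefix **packet
    )
    {
      if ( node != 0 ) {
        my_transport_write( node, *packet );      /* directed send */
        return;
      }

      /* node == 0: broadcast.  This transport has no hardware broadcast, so
       * send a copy of the packet to every other node in the system.
       */
      for ( node = 1; node <= my_maximum_nodes; ++node ) {
        if ( node != my_local_node ) {
          my_transport_write( node, *packet );
        }
      }
    }
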
.. COMMENT: XXX packet_prefix structure needs to be defined in this document

Many MPCI layers use the ``packet_length`` field of the ``rtems_packet_prefix``
portion of the packet to avoid sending unnecessary data.  This is especially
useful if the media connecting the nodes is relatively slow.

The ``to_convert`` field of the ``rtems_packet_prefix`` portion of the packet
indicates how much of the packet, in 32-bit units, may require conversion in a
heterogeneous system.

.. index:: heterogeneous multiprocessing

Supporting Heterogeneous Environments
-------------------------------------

Developing an MPCI layer for a heterogeneous system requires a thorough
understanding of the differences between the processors which comprise the
system.  One difficult problem is the varying data representation schemes used
by different processor types.  The most pervasive data representation problem
is the order of the bytes which compose a data entity.  Processors which place
the least significant byte at the smallest address are classified as little
endian processors.  Little endian byte-ordering is shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 3     |     Byte 2     |    Byte 1     |     Byte 0     |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Conversely, processors which place the most significant byte at the smallest
address are classified as big endian processors.  Big endian byte-ordering is
shown below:

.. code-block:: c

    +---------------+----------------+---------------+----------------+
    |               |                |               |                |
    |    Byte 0     |     Byte 1     |    Byte 2     |     Byte 3     |
    |               |                |               |                |
    +---------------+----------------+---------------+----------------+

Unfortunately, sharing a data structure between big endian and little endian
processors requires translation into a common endian format.  An application
designer typically chooses the common endian format to minimize conversion
overhead.

Another issue in the design of shared data structures is the alignment of data
structure elements.  Alignment is both processor and compiler implementation
dependent.  For example, some processors allow data elements to begin on any
address boundary, while others impose restrictions.  Common restrictions are
that data elements must begin on either an even address or on a long word
boundary.  Violation of these restrictions may cause an exception or impose a
performance penalty.

Other issues which commonly impact the design of shared data structures include
the representation of floating point numbers, bit fields, decimal data, and
character strings.  In addition, the representation method for negative
integers could be one's or two's complement.  These factors combine to increase
the complexity of designing and manipulating data structures shared between
processors.

RTEMS addresses these issues in the design of the packets used to communicate
between nodes.  The RTEMS packet format is designed to allow the MPCI layer to
perform all necessary conversion without burdening the developer with the
details of the RTEMS packet format.  As a result, the MPCI layer must be aware
of the following:

- All packets must begin on a four byte boundary.

- Packets are composed of both RTEMS and application data.  All RTEMS data is
  treated as 32-bit unsigned quantities and is in the first ``to_convert``
  32-bit quantities of the packet.  The ``to_convert`` field is part of the
  ``rtems_packet_prefix`` portion of the packet.

- The RTEMS data component of the packet must be in native endian format.
  Endian conversion may be performed by either the sending or receiving MPCI
  layer (see the sketch after this list).

- RTEMS makes no assumptions regarding the application data component of the
  packet.

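As an illustration, a receiving MPCI layer in a mixed big endian and little
endian system might convert the RTEMS data component of each arrived packet in
place, as sketched below; the helper names are assumptions, and whether the
sender or the receiver converts is a design choice of the MPCI layer.

.. code-block:: c

    #include <rtems.h>
    #include <stdint.h>

    /* Swap one 32-bit word between big and little endian representations. */
    static uint32_t my_swap_u32( uint32_t value )
    {
      return ( value >> 24 )
           | ( ( value >> 8 ) & 0x0000FF00U )
           | ( ( value << 8 ) & 0x00FF0000U )
           | ( value << 24 );
    }

    /* Convert the first to_convert 32-bit words of a packet in place.  Only
     * the RTEMS data component is touched; the application data is left to
     * the application.  A real layer must also ensure that to_convert itself
     * is read in the correct byte order before it is used as a loop bound;
     * that detail is glossed over here.
     */
    static void my_convert_packet( rtems_packet_prefix *packet )
    {
      uint32_t *words = (uint32_t *) packet;
      uint32_t  i;

      for ( i = 0; i < packet->to_convert; ++i ) {
        words[ i ] = my_swap_u32( words[ i ] );
      }
    }
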
Operations
==========

Announcing a Packet
-------------------

The ``rtems_multiprocessing_announce`` directive is called by the MPCI layer to
inform RTEMS that a packet has arrived from another node.  This directive can
be called from an interrupt service routine or from within a polling routine.

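For example, the receive interrupt handler of an MPCI transport might look like
the following sketch; the board-specific helper routines are assumptions of
this fragment.

.. code-block:: c

    #include <rtems.h>
    #include <stdbool.h>

    /* Hypothetical board-specific helpers. */
    extern bool my_transport_packet_arrived( void );
    extern void my_transport_clear_interrupt( void );

    /* ISR attached to the transport's receive interrupt. */
    void my_transport_receive_isr( void *arg )
    {
      (void) arg;

      if ( my_transport_packet_arrived() ) {
        my_transport_clear_interrupt();

        /* Tell RTEMS a packet is waiting; the Multiprocessing Server will
         * then call the MPCI RECEIVE_PACKET entry to fetch it.
         */
        rtems_multiprocessing_announce();
      }
    }
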
Directives
==========

This section details the additional directives required to support RTEMS in a
multiprocessor configuration.  A subsection is dedicated to each of this
manager's directives and describes the calling sequence, related constants,
usage, and status codes.

.. raw:: latex

    \clearpage

.. index:: announce arrival of packet
.. index:: rtems_multiprocessing_announce

.. _rtems_multiprocessing_announce:

MULTIPROCESSING_ANNOUNCE - Announce the arrival of a packet
------------------------------------------------------------

CALLING SEQUENCE:
    .. code-block:: c

        void rtems_multiprocessing_announce( void );

DIRECTIVE STATUS CODES:
    NONE

DESCRIPTION:
    This directive informs RTEMS that a multiprocessing communications packet
    has arrived from another node.  This directive is called by the
    user-provided MPCI, and is only used in multiprocessor configurations.

NOTES:
    This directive is typically called from an ISR.

    This directive will almost certainly cause the calling task to be
    preempted.

    This directive does not generate activity on remote nodes.