#2556 closed enhancement (fixed)

Implement the O(m) Independence-Preserving Protocol (OMIP)

Reported by: Sebastian Huber Owned by: Sebastian Huber
Priority: high Milestone: 5.1
Component: score Version:
Severity: normal Keywords:
Cc: Blocked By:
Blocking:

Description (last modified by Sebastian Huber)

Background

The O(m) Independence-Preserving Protocol (OMIP) is a generalization of the priority inheritance protocol to clustered scheduling which avoids the non-preemptive sections present with priority boosting. The m denotes the number of processors in the system. Its implementation requires an extension of the scheduler helping protocol already used for the MrsP semaphores. However, the current implementation of the scheduler helping protocol has two major issues, see Catellani, Sebastiano, Luca Bonato, Sebastian Huber, and Enrico Mezzetti: Challenges in the Implementation of MrsP. In Reliable Software Technologies - Ada-Europe 2015, pages 179–195, 2015. Firstly, the run-time of some scheduler operations depends on the size of the resource dependency tree. Secondly, the scheduler operations of threads which do not use shared resources must deal with the scheduler helping protocol in case an owner of a shared resource is somehow involved.

To illustrate the second issue, let us look at the following example. We have a system with eight processors and two L2 caches. We assign processor 0 to a partition P for latency sensitive real-time tasks (e.g. sensor and actuator handling), processors 1, 2 and 3 are assigned to a cluster CA and the remaining processors are assigned to a cluster CB for soft real-time worker tasks. The worker tasks use a shared resource, e.g. a file system for data storage. Let us suppose a task R of partition P sends a message to the workers. This may make a waiting worker ready, which in turn pre-empts the owner of a shared resource. In this case the scheduler helping protocol takes action and is carried out by the task R. This contradicts the intended isolation of scheduler instances.

The reason for this unfortunate coupling is a design issue of the scheduler helping protocol implementation. Some scheduler operations may return a thread in need of help. For example, if a thread is unblocked which pre-empts an owner of a shared resource, then the pre-empted thread is returned. Once a thread in need of help is returned, the ask for help operation of the scheduler is executed. An alternative to this return value based approach is the introduction of a pre-emption intervention during thread dispatching. Threads taking part in the scheduler helping protocol indicate this with a positive resource count value. In case a thread dispatch occurs and pre-empts an owner of a shared resource, the scheduler ask for help operation is invoked. So, the work is carried out on behalf of the thread which takes part in the scheduler helping protocol.

To overcome the first issue, an improved resource dependency tracking is required. One approach is to use a recursive red-black tree based data structure, see #2412.

Implementation

There are several steps necessary to implement OMIP.

  • Introduce per-scheduler locks.
  • Enable context switches with interrupts enabled.
  • Add a pre-emption intervention to the thread dispatch.
  • Add a table for priority nodes to the thread control block. For each scheduler instance there is one priority node.
  • Update the table in case the thread blocks on a resource, a timeout while waiting for a resource occurs, or ownership of a resource is transferred to the thread.
  • Use this table in the pre-emption intervention.
  • Update the MrsP implementation to the new infrastructure.

Currently, only one scheduler lock is used for all scheduler instances. This simplified the MrsP implementation and, due to the presence of a Giant lock, was not an issue. With the elimination of the Giant lock, however, we need one scheduler lock per scheduler instance to really profit from the decoupling offered by clustered scheduling.

The current implementation of thread dispatching has some implications with respect to interrupt latency. It is crucial to preserve the system invariant that a thread can execute on at most one processor in the system at a time. This is accomplished with a boolean indicator in the thread context. The processor architecture specific context switch code marks the thread context as no longer executing and waits until the heir context has stopped execution before it restores the heir context and resumes execution of the heir thread (the boolean indicator is basically a TTAS lock). So, there is one point in time in which a processor is without a thread. This is essential to avoid cyclic dependencies in case multiple threads migrate at once; otherwise some supervising entity would be necessary to prevent deadlocks. Such a global supervisor would lead to scalability problems, so this approach is not used. Currently the context switch is performed with interrupts disabled. Thus, in case the heir thread is currently executing on another processor, the time of disabled interrupts is prolonged, since one processor has to wait for another processor to make progress.
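
The hand-over described above can be sketched with C11 atomics. This is illustrative only, with hypothetical names; the real indicator lives in the architecture-specific context switch code of RTEMS.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative thread context with the boolean indicator. */
typedef struct {
  atomic_bool is_executing;
} Context_Control;

/* Mark the previously executing context as stopped.  After this store
 * the processor is briefly without a thread. */
static void context_mark_stopped(Context_Control *executing)
{
  atomic_store_explicit(&executing->is_executing, false,
                        memory_order_release);
}

/* Wait until the heir context stopped execution on another processor
 * and claim it (the TTAS-like hand-over mentioned above). */
static void context_wait_and_claim(Context_Control *heir)
{
  while (atomic_load_explicit(&heir->is_executing, memory_order_acquire)) {
    /* busy wait; with interrupts disabled this prolongs the
     * interrupt latency */
  }

  atomic_store_explicit(&heir->is_executing, true, memory_order_relaxed);
}
```

The release/acquire pairing ensures that all writes to the old thread's context are visible before another processor restores it.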

If we add a pre-emption intervention to the thread dispatch sequence, then there is an even greater need to avoid this issue with the interrupt latency. Interrupts normally store the context of the interrupted thread on its stack. In case a thread is marked as not executing, we must not use its thread stack to store such an interrupt context. We cannot use the heir stack before it has stopped execution on another processor. If we enable interrupts during this transition, then we have to provide an alternative thread-independent stack for interrupts in this time frame.

The pre-emption intervention should be added to _Thread_Do_dispatch() before the heir is read and perform the following pseudo-code actions.

pre_emption_intervention(executing):
	if executing.resource_count > 0:
		executing.lock()
		if executing.is_ready():
			for scheduler in executing.schedulers:
				scheduler.lock()
			if !executing.is_scheduled():
				for scheduler in executing.schedulers:
					scheduler.ask_for_help(executing)
			for scheduler in executing.schedulers:
				scheduler.unlock()
		else if executing.active_help_level > 0:
			idle.use(executing.scheduler_node)
		executing.unlock()

The scheduler help operation affects multiple scheduler instances. In terms of locking we have only two options,

  • use a global scheduler lock, or
  • obtain multiple per-scheduler locks at once.

A global scheduler lock is not an option. To avoid deadlocks, the per-scheduler locks must be obtained in a fixed order. However, in this case the per-scheduler locks will observe different worst-case and average-case acquire times (depending on the order).
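
The fixed-order acquisition can be sketched as follows. All names are illustrative and not part of the RTEMS API, and the SMP lock is reduced to a plain flag; ordering by address yields one global fixed order.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: a scheduler instance reduced to its lock flag. */
typedef struct {
  int locked;
} Scheduler_Instance;

/* Order the locks by address to obtain one global fixed order. */
static int scheduler_address_compare(const void *a, const void *b)
{
  uintptr_t pa = (uintptr_t) *(Scheduler_Instance *const *) a;
  uintptr_t pb = (uintptr_t) *(Scheduler_Instance *const *) b;
  return (pa > pb) - (pa < pb);
}

/* Acquire the per-scheduler locks of all instances involved in a help
 * operation.  Since every thread uses the same global order, two
 * threads acquiring overlapping lock sets cannot deadlock. */
static void scheduler_acquire_all(Scheduler_Instance **schedulers, size_t n)
{
  qsort(schedulers, n, sizeof(*schedulers), scheduler_address_compare);

  for (size_t i = 0; i < n; ++i) {
    schedulers[i]->locked = 1; /* stands in for a real SMP lock acquire */
  }
}

static void scheduler_release_all(Scheduler_Instance **schedulers, size_t n)
{
  for (size_t i = n; i > 0; --i) {
    schedulers[i - 1]->locked = 0; /* release in reverse order */
  }
}
```

The drawback noted above follows directly: locks late in the address order are always acquired after earlier ones, so their holders observe longer acquire times.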

Use a recursive data structure to determine the highest priority available to a thread for each scheduler instance, e.g.

typedef struct Thread_Priority_node {
	Priority_Control current_priority;
	Priority_Control real_priority;
	struct Thread_Priority_node *owner;
	RBTree_Node Node;
	RBTree_Control Inherited_priorities;
} Thread_Priority_node;

typedef struct {
	...
	Thread_Priority_node *priority_nodes; /* One per scheduler instance */
	...
} Thread_Control;

Initially a thread has a priority node reflecting its real priority. The Thread_Priority_node::owner is NULL. The Thread_Priority_node::current_priority is set to the real priority. The Thread_Priority_node::Inherited_priorities is empty.

In case the thread must wait for ownership of a mutex, then it enqueues its priority node in Thread_Priority_node::Inherited_priorities of the mutex owner.

In case the thread is dequeued from the wait queue of a mutex, then it dequeues its priority node from Thread_Priority_node::Inherited_priorities of the previous mutex owner (ownership transfer) or the current mutex owner (acquire timeout).

In case the minimum of the Thread_Priority_node::real_priority and the Thread_Priority_node::Inherited_priorities changes, then Thread_Priority_node::current_priority is updated. In case the Thread_Priority_node::owner is not NULL, the priority change propagates to the owner, and so on. In case Thread_Priority_node::current_priority changes, the corresponding scheduler is notified.
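
A sketch of this propagation follows, assuming lower values mean higher priority and reducing the Inherited_priorities tree to a precomputed minimum. It covers only the raise direction; a full implementation would re-evaluate the tree on any change. All names are illustrative.

```c
#include <limits.h>
#include <stddef.h>

typedef unsigned int Priority_Control;

typedef struct Thread_Priority_node {
  Priority_Control current_priority;
  Priority_Control real_priority;
  /* Minimum of Inherited_priorities, or UINT_MAX if the tree is empty. */
  Priority_Control inherited_minimum;
  struct Thread_Priority_node *owner;
} Thread_Priority_node;

/* Recompute the current priority as the minimum of the real priority
 * and the inherited priorities; on change, propagate along the owner
 * chain until a node's priority is unaffected. */
static void priority_node_update(Thread_Priority_node *node)
{
  while (node != NULL) {
    Priority_Control new_priority = node->real_priority;

    if (node->inherited_minimum < new_priority) {
      new_priority = node->inherited_minimum;
    }

    if (new_priority == node->current_priority) {
      break; /* nothing changed, stop the propagation */
    }

    node->current_priority = new_priority;
    /* here the corresponding scheduler instance would be notified */

    if (node->owner != NULL
        && new_priority < node->owner->inherited_minimum) {
      /* this node's entry in the owner's Inherited_priorities tree
       * became the new minimum (raise direction only in this sketch) */
      node->owner->inherited_minimum = new_priority;
    }

    node = node->owner;
  }
}
```

The early break bounds the propagation: it walks the owner chain only as far as the priority change is actually visible, which is the point of the improved dependency tracking.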

Use the thread lock to protect the priority nodes.

Attachments (2)

resource.png (31.0 KB) - added by Sebastian Huber on 01/27/16 at 08:49:03.
help.png (12.9 KB) - added by Sebastian Huber on 01/27/16 at 08:49:16.


Change History (59)

comment:1 Changed on 01/27/16 at 08:46:27 by Sebastian Huber

Description: modified (diff)

Changed on 01/27/16 at 08:49:03 by Sebastian Huber

Attachment: resource.png added

Changed on 01/27/16 at 08:49:16 by Sebastian Huber

Attachment: help.png added

comment:2 Changed on 01/27/16 at 08:52:02 by Sebastian Huber


This is an example resource dependency tree with sixteen threads t0 up to t15 and sixteen resources r0 up to r15. The root of this tree is t0. The thread t0 owns the resources r0, r1, r2, r3, r6, r11 and r12 and is in the ready state. The threads t1 up to t15 wait directly or indirectly via resources owned by t0 and are in a blocked state. The colour of the thread nodes indicate the scheduler instance.

comment:3 Changed on 01/27/16 at 08:54:45 by Sebastian Huber


This is an example of a table of priority nodes with sixteen threads t0 up to t15 and three scheduler instances s0 up to s2 corresponding to the previous example. The overall resource owner is t0. The colour of the nodes indicate the scheduler instance. Several threads of different scheduler instances depend on thread t10. So, the thread t10 contributes for example the highest priority node of scheduler instance s2 to thread t0 even though it uses scheduler instance s0.

comment:4 Changed on 02/04/16 at 12:00:29 by Sebastian Huber

Status: new → accepted

comment:5 Changed on 05/12/16 at 11:34:06 by Sebastian Huber <sebastian.huber@…>

In 6e4f929296b1cfd50fc8f41f117459e65214b816/rtems:

score: Introduce thread state lock

Update #2556.

comment:6 Changed on 05/20/16 at 14:13:25 by Sebastian Huber <sebastian.huber@…>

In 7dfb4b970cbd22cef170b2f45a41f445406a2ce5/rtems:

score: Add per scheduler instance maximum priority

The priority values are only valid within a scheduler instance. Thus,
the maximum priority value must be defined per scheduler instance. The
first scheduler instance defines PRIORITY_MAXIMUM. This implies that
RTEMS_MAXIMUM_PRIORITY and POSIX_SCHEDULER_MAXIMUM_PRIORITY are only
valid for threads of the first scheduler instance. Further
API/implementation changes are necessary to fix this.

Update #2556.

comment:7 Changed on 05/20/16 at 14:13:35 by Sebastian Huber <sebastian.huber@…>

In 8a040fe4eeb9f7ba5c9f95f8abd45b9b6d5f7c4b/rtems:

score: Use _RBTree_Insert_inline()

Use _RBTree_Insert_inline() for priority thread queues.

Update #2556.

comment:8 Changed on 06/22/16 at 12:47:47 by Sebastian Huber <sebastian.huber@…>

In 77ff5599e0d8e6d91190a379be21a332f83252b0/rtems:

score: Introduce map priority scheduler operation

Introduce map/unmap priority scheduler operations to map thread priority
values from/to the user domain to/from the scheduler domain. Use the
map priority operation to validate the thread priority. The EDF
schedulers use this new operation to distinguish between normal
priorities and priorities obtained through a job release.

Update #2173.
Update #2556.

comment:9 Changed on 06/22/16 at 12:48:45 by Sebastian Huber <sebastian.huber@…>

In 9bfad8cd519f17cbb26a672868169fcd304d5bd5/rtems:

score: Add thread priority to scheduler nodes

The thread priority is manifest in two independent areas. One area is
the user visible thread priority along with a potential thread queue.
The other is the scheduler. Currently, a thread priority update via
_Thread_Change_priority() first updates the user visible thread priority
and the thread queue, then the scheduler is notified if necessary. The
priority is passed to the scheduler via a local variable. A generation
counter ensures that the scheduler discards out-of-date priorities.

This use of a local variable ties the update in these two areas close
together. For later enhancements and the OMIP locking protocol
implementation we need more flexibility. Add a thread priority
information block to Scheduler_Node and synchronize priority value
updates via a sequence lock on SMP configurations.

Update #2556.

comment:10 Changed on 07/27/16 at 08:56:15 by Sebastian Huber <sebastian.huber@…>

In f4d1f307926b6319e5d3b325dbe424901285dec1/rtems:

score: Split _Thread_Change_priority()

Update #2412.
Update #2556.
Update #2765.

comment:11 Changed on 07/27/16 at 08:56:26 by Sebastian Huber <sebastian.huber@…>

In ac8402ddd6e4a8eb6defb98220d39d4c20a6f025/rtems:

score: Simplify _Thread_queue_Boost_priority()

Raise the priority under thread queue lock protection and omit the
superfluous thread queue priority change, since the thread is extracted
anyway. The unblock operation will pick up the new priority.

Update #2412.
Update #2556.
Update #2765.

comment:12 Changed on 07/27/16 at 08:56:36 by Sebastian Huber <sebastian.huber@…>

In 3a58dc863157bb21054a144c1a21b690544c0d23/rtems:

score: Priority inherit thread queue operations

Move the priority change due to priority inheritance to the thread queue
enqueue operation to simplify the locking on SMP configurations.

Update #2412.
Update #2556.
Update #2765.

comment:13 Changed on 07/27/16 at 08:56:46 by Sebastian Huber <sebastian.huber@…>

In 1fcac5adc5ed38fb88ce4c6d24b2ca2e27e3cd10/rtems:

score: Turn thread lock into thread wait lock

The _Thread_Lock_acquire() function had a potentially infinite run-time
due to the lack of fairness at atomic operations level.

Update #2412.
Update #2556.
Update #2765.

comment:14 Changed on 07/27/16 at 08:56:57 by Sebastian Huber <sebastian.huber@…>

In d79df38c2bea50112214ade95776cb90d693e390/rtems:

score: Add deadlock detection

The mutex objects use the owner field of the thread queues for the mutex
owner. Use this and add a deadlock detection to
_Thread_queue_Enqueue_critical() for thread queues with an owner.

Update #2412.
Update #2556.
Close #2765.

comment:15 Changed on 08/03/16 at 11:58:00 by Sebastian Huber <sebastian.huber@…>

In ff2e6c647d166fa54769f3c300855ef7f8020668/rtems:

score: Fix and simplify thread wait locks

There was a subtle race condition in _Thread_queue_Do_extract_locked().
It must first update the thread wait flags and then restore the default
thread wait state. In the previous implementation this could lead under
rare timing conditions to an ineffective _Thread_Wait_tranquilize()
resulting in a corrupt system state.

Update #2556.

comment:16 Changed on 08/04/16 at 06:29:02 by Sebastian Huber <sebastian.huber@…>

In 1c1e31f788b85bf3bcadea675110eec35a612eb4/rtems:

score: Optimize _Thread_queue_Path_release()

Update #2556.

comment:17 Changed on 09/08/16 at 07:57:08 by Sebastian Huber <sebastian.huber@…>

In e27421f38661ea18b2a663776ad524afadeba607/rtems:

score: Move scheduler node to own header file

This makes it possible to add scheduler nodes to structures defined in
<rtems/score/thread.h>.

Update #2556.

comment:18 Changed on 09/08/16 at 07:57:18 by Sebastian Huber <sebastian.huber@…>

In 15b5678dcd72a11909a54b63ddc8e57869d63244/rtems:

score: Move thread wait node to scheduler node

Update #2556.

comment:19 Changed on 09/08/16 at 07:57:27 by Sebastian Huber <sebastian.huber@…>

In 52a661e8f8124b77b29a2ed44c7814fd0a7cf358/rtems:

score: Add scheduler node implementation header

Update #2556.

comment:20 Changed on 09/21/16 at 07:05:43 by Sebastian Huber <sebastian.huber@…>

In 300f6a481aaf9e6d29811faca71bf7104a01492c/rtems:

score: Rework thread priority management

Add priority nodes which contribute to the overall thread priority.

The actual priority of a thread is now an aggregation of priority nodes.
The thread priority aggregation for the home scheduler instance of a
thread consists of at least one priority node, which is normally the
real priority of the thread. The locking protocols (e.g. priority
ceiling and priority inheritance), rate-monotonic period objects and the
POSIX sporadic server add, change and remove priority nodes.

A thread changes its priority now immediately, e.g. priority changes are
not deferred until the thread releases its last resource.

Replace the _Thread_Change_priority() function with

  • _Thread_Priority_perform_actions(),
  • _Thread_Priority_add(),
  • _Thread_Priority_remove(),
  • _Thread_Priority_change(), and
  • _Thread_Priority_update().

Update #2412.
Update #2556.

comment:21 Changed on 09/21/16 at 07:05:58 by Sebastian Huber <sebastian.huber@…>

In 5d6b21198140f406a71599a2d388b6ec47ee3337/rtems:

score: Add scheduler node table for each thread

Update #2556.

comment:22 Changed on 09/21/16 at 07:06:10 by Sebastian Huber <sebastian.huber@…>

In 266d3835d883f908c0e4cbf547359d683f72dcc4/rtems:

score: Manage scheduler nodes via thread queues

Update #2556.

comment:23 Changed on 09/21/16 at 07:06:34 by Sebastian Huber <sebastian.huber@…>

In 8123cae864579219e5003a67b451ca4cc07d998b/rtems:

rtems: Add rtems_task_get_priority()

Update #2556.
Update #2784.

comment:24 Changed on 09/21/16 at 07:06:48 by Sebastian Huber <sebastian.huber@…>

In f6142c19f192e40ee1aa9ff67eb1c711343c157d/rtems:

score: Scheduler node awareness for thread queues

Maintain the priority of a thread for each scheduler instance via the
thread queue enqueue, extract, priority actions and surrender
operations. This replaces the primitive priority boosting.

Update #2556.

comment:25 Changed on 11/02/16 at 09:06:30 by Sebastian Huber <sebastian.huber@…>

In d097b54633fe98d4370154de5bdea44c32e81648/rtems:

score: Rename scheduler ask for help stuff

Rename the scheduler ask for help stuff since this will be replaced step
by step with a second generation of the scheduler helping protocol.
Keep the old one for now in parallel to reduce the patch set sizes.

Update #2556.

comment:26 Changed on 11/02/16 at 09:06:40 by Sebastian Huber <sebastian.huber@…>

In 501043a18bae037ca7195ce6989d3ffa8cc72660/rtems:

score: Pass scheduler node to update priority op

This enables calling this scheduler operation for all scheduler nodes
available to a thread.

Update #2556.

comment:27 Changed on 11/02/16 at 09:06:50 by Sebastian Huber <sebastian.huber@…>

In 2df4abcee2fd762a9688bef13e152d5b81cc763e/rtems:

score: Pass scheduler node to yield operation

Changed for consistency with other scheduler operations.

Update #2556.

comment:28 Changed on 11/02/16 at 09:07:00 by Sebastian Huber <sebastian.huber@…>

In e382a1bfccdecf1dcf01c452ee0edb5afa0660b3/rtems:

score: Pass scheduler node to block operation

Changed for consistency with other scheduler operations.

Update #2556.

comment:29 Changed on 11/02/16 at 09:07:10 by Sebastian Huber <sebastian.huber@…>

In 72e0bdba4580072c33da09fcacbd3063dbc4f2c1/rtems:

score: Pass scheduler node to unblock operation

Changed for consistency with other scheduler operations.

Update #2556.

comment:30 Changed on 11/02/16 at 09:07:20 by Sebastian Huber <sebastian.huber@…>

In 3a724113f953490e05704582fb1effbf6c8e9601/rtems:

score: Simplify _Scheduler_SMP_Node_change_state()

Update #2556.

comment:31 Changed on 11/02/16 at 09:07:30 by Sebastian Huber <sebastian.huber@…>

In 1c9688a9a11c08eabd6443d8bb9ccd439dce82e5/rtems:

score: Add _Scheduler_Node_get_scheduler()

Update #2556.

comment:32 Changed on 11/02/16 at 09:07:40 by Sebastian Huber <sebastian.huber@…>

In 36d7abad13474c6c7211dc07643287747d4594bb/rtems:

score: Add _Thread_Scheduler_add_wait_node()

Update #2556.

comment:33 Changed on 11/02/16 at 09:07:50 by Sebastian Huber <sebastian.huber@…>

In 70c22d939513dd05171d99cb053dc8f71135ee25/rtems:

score: Add _Thread_Scheduler_remove_wait_node()

Update #2556.

comment:34 Changed on 11/02/16 at 09:08:00 by Sebastian Huber <sebastian.huber@…>

In 07a32d193257f150e5237970a7fa864ab71817b3/rtems:

score: Add thread scheduler lock

Update #2556.

comment:35 Changed on 11/02/16 at 09:08:10 by Sebastian Huber <sebastian.huber@…>

In a7a8ec03258a7e59a300919485cbbd5f37782416/rtems:

score: Protect thread scheduler state changes

Update #2556.

comment:36 Changed on 11/02/16 at 09:08:20 by Sebastian Huber <sebastian.huber@…>

In edb020ca678c78e4a1a7ba4becbc46a2d6bf24c7/rtems:

score: Protect thread CPU by thread scheduler lock

Update #2556.

comment:37 Changed on 11/02/16 at 09:08:30 by Sebastian Huber <sebastian.huber@…>

In ebdd2a343181ef5f3fc2f1330930b0ea5c0ed8a4/rtems:

score: Add scheduler node requests

Add the ability to add/remove scheduler nodes to/from the set of
scheduler nodes available to the schedulers for a particular thread.

Update #2556.

comment:38 Changed on 11/02/16 at 09:08:39 by Sebastian Huber <sebastian.huber@…>

In 240347331d45b0d424077a8b74ee02efc651e003/rtems:

score: Add _Thread_Scheduler_process_requests()

Update #2556.

comment:39 Changed on 11/02/16 at 09:08:50 by Sebastian Huber <sebastian.huber@…>

In 351c14dfd00e1bdaced2823242532cab4bccb58c/rtems:

score: Add new SMP scheduler helping protocol

Update #2556.

comment:40 Changed on 11/02/16 at 09:08:59 by Sebastian Huber <sebastian.huber@…>

In 6a82f1ae8c1cd3d24b4ad6dc78431ffffb214151/rtems:

score: Yield support for new SMP helping protocol

Update #2556.

comment:41 Changed on 11/02/16 at 09:09:09 by Sebastian Huber <sebastian.huber@…>

In 913864c0b85c9e94140515a44e79d13e999ff9a2/rtems:

score: Use scheduler instance specific locks

Update #2556.

comment:42 Changed on 11/02/16 at 09:09:19 by Sebastian Huber <sebastian.huber@…>

In 3a2724805421098df505c0acea106fb294bc2f6a/rtems:

score: First part of new MrsP implementation

Update #2556.

comment:43 Changed on 11/02/16 at 09:09:30 by Sebastian Huber <sebastian.huber@…>

In 73a193fdd672486f57ec6db5f9beb50e5264ffac/rtems:

score: Delete unused functions

Delete _Scheduler_Thread_change_resource_root() and
_Scheduler_Thread_change_help_state().

Update #2556.

comment:44 Changed on 11/02/16 at 09:09:40 by Sebastian Huber <sebastian.huber@…>

In 97f7dac6604b448f0c4ee10f02d192ea42bc6aaa/rtems:

score: Delete _Scheduler_Ask_for_help_if_necessary

Delete Thread_Control::Resource_node.

Update #2556.

comment:45 Changed on 11/02/16 at 09:10:09 by Sebastian Huber <sebastian.huber@…>

In 6771359fa1488598ccba3fd3c440b95f64965340/rtems:

score: Second part of new MrsP implementation

Update #2556.

comment:46 Changed on 11/02/16 at 09:10:19 by Sebastian Huber <sebastian.huber@…>

In 1cafc4664689040d67033d81c9d2e25929d44477/rtems:

score: Delete Resource Handler

Update #2556.

comment:47 Changed on 11/02/16 at 09:10:29 by Sebastian Huber <sebastian.huber@…>

In b5f1b249028ea2be69a4ad06aa822c16cb4ac57e/rtems:

score: Delete Scheduler_Node::accepts_help

Update #2556.

comment:48 Changed on 11/02/16 at 09:10:39 by Sebastian Huber <sebastian.huber@…>

In c0f1f52763b3a231a329da0162979207519a6db6/rtems:

score: Delete Thread_Scheduler_control::node

Update #2556.

comment:49 Changed on 11/02/16 at 09:10:48 by Sebastian Huber <sebastian.huber@…>

In 7f7424329eafab755381bc638c2cdddc152a909b/rtems:

score: Delete Thread_Scheduler_control::own_node

Update #2556.

comment:50 Changed on 11/02/16 at 09:10:59 by Sebastian Huber <sebastian.huber@…>

In 2dd098a6359d9df132da09201ea0506c5389dc80/rtems:

score: Introduce Thread_Scheduler_control::home

Replace Thread_Scheduler_control::control and
Thread_Scheduler_control::own_control with new
Thread_Scheduler_control::home.

Update #2556.

comment:51 Changed on 11/02/16 at 09:11:08 by Sebastian Huber <sebastian.huber@…>

In 63e2ca1b8b0a651a733d4ac3e0517397d0681214/rtems:

score: Simplify yield and unblock scheduler ops

Update #2556.

comment:52 Changed on 12/23/16 at 14:10:09 by Sebastian Huber

Priority: normal → high

comment:53 Changed on 02/01/17 at 06:59:55 by Sebastian Huber

Resolution: fixed
Status: accepted → closed
Version: 4.12

comment:54 Changed on 02/03/17 at 09:58:05 by Sebastian Huber <sebastian.huber@…>

In ca1e546e7772838b20d0792155e2c71514d6b5d3/rtems:

score: Improve scheduler helping protocol

Only register ask for help requests in the scheduler unblock and yield
operations. The actual ask for help operation is carried out during
_Thread_Do_dispatch() on a processor related to the thread. This yields
a better separation of scheduler instances. A thread of one scheduler
instance should not be forced to carry out too much work for threads on
other scheduler instances.

Update #2556.

comment:55 Changed on 03/07/17 at 12:21:24 by Sebastian Huber <sebastian.huber@…>

In 088acbb0/rtems:

score: Fix scheduler yield in SMP configurations

Check that no ask for help request is registered during unblock and yield
scheduler operations. There is no need to ask for help if a scheduled
thread yields, since this is already covered by the pre-emption
detection.

Update #2556.

comment:56 Changed on 05/11/17 at 07:31:02 by Sebastian Huber

Milestone: 4.12 → 4.12.0

comment:57 Changed on 11/09/17 at 06:27:14 by Sebastian Huber

Milestone: 4.12.0 → 5.1

Milestone renamed
