Notice: We have migrated to GitLab, launching 2024-05-01. See here: https://gitlab.rtems.org/

#3361 assigned defect

4.11: Improved Priority Inheritance Protocol

Reported by: Gedare Bloom Owned by: Needs Funding
Priority: normal Milestone: Indefinite
Component: score Version: 4.11
Severity: normal Keywords:
Cc: Blocked By:
Blocking:

Description

See also #2412 and #3359.

Problem

The RTEMS mutexes implement only a very simple approximation of the priority inheritance protocol. The real priority of a thread is restored only once it releases its last mutex. Consider this scenario: we have a file system instance protected by one mutex and a dynamic memory allocator protected by another mutex. A low priority thread writes some log data into a file, thus it acquires the file system instance mutex. The file system allocates dynamic memory. Now a high priority thread interrupts and tries to allocate dynamic memory. The allocator mutex is already owned, so the priority of the low priority thread is raised to the priority of the high priority thread. The memory allocation completes and the allocator mutex is released. Since the low priority thread still owns the file system instance mutex, it continues to execute with the high priority (the high priority thread is not scheduled). It may now perform complex and long file system operations (e.g. garbage collection, polled flash erase and write functions) at high priority.

Functional requirements

  • The mutex shall use the priority inheritance protocol to prevent priority inversion.
  • The mutex shall allow vertical nesting (a thread owns multiple mutexes).
  • The mutex shall allow horizontal nesting (a thread waits for ownership of a mutex whose owner waits for ownership of another mutex, and so on).
  • Threads shall wait in priority order. The highest priority thread shall be dequeued first.
  • Waiting threads of equal priority shall be dequeued in FIFO order.
  • The mutex shall provide an acquire operation with timeout.
  • In case a mutex is released, then the previous owner shall no longer use the priorities inherited through this mutex.
  • In case a mutex acquire operation timeout occurs, then the current owner of the mutex shall no longer use the priorities inherited by the acquiring thread.
  • The order of the mutex release operations may differ from the order of the mutex acquire operations.
  • Priority changes not originating from the priority inheritance protocol shall take place immediately.

Performance requirements

  • The mutex acquire operation shall use only object-specific locks in case the mutex is not owned currently.
  • The mutex release operation shall use only object-specific locks in case no threads wait for ownership of this mutex.

Invariants

  • A mutex shall be owned by at most one thread.
  • A thread shall wait for ownership of at most one mutex.

Proposed Implementation

Use a recursive data structure to determine the highest priority available to a thread for each scheduler instance, e.g.

typedef struct Thread_Priority_node {
  Chain_Node Node;
  Priority_Control current_priority;
  Priority_Control real_priority;
  struct CORE_mutex_Control *waiting_to_hold;
  Chain_Control Inherited_priorities;
} Thread_Priority_node;

typedef struct {
  ...
  Thread_Priority_node Priority_node;
  ...
} Thread_Control;

Note: An RBTree could be used instead, but we expect the Inherited_priorities chains to be quite short in practice.

Initially a thread has a priority node reflecting its real priority. Thread_Priority_node::waiting_to_hold is NULL, Thread_Priority_node::current_priority is set to the real priority, and Thread_Priority_node::Inherited_priorities is empty.

In case the thread must wait for ownership of a mutex, then it enqueues its priority node in Thread_Priority_node::Inherited_priorities of the mutex owner.

In case the thread is dequeued from the wait queue of a mutex, then it dequeues its priority node in Thread_Priority_node::Inherited_priorities of the previous mutex owner (ownership transfer) or the current mutex owner (acquire timeout).

In case the minimum of Thread_Priority_node::real_priority and Thread_Priority_node::Inherited_priorities changes, then Thread_Priority_node::current_priority is updated. In case Thread_Priority_node::waiting_to_hold is not NULL, the priority change propagates to the owner of that mutex, and so on. In case Thread_Priority_node::current_priority changes, the scheduler is notified.

The biggest issue is the locking on SMP configurations in case of recursive minimum updates.

Change History (4)

comment:1 Changed on 03/23/18 at 15:47:01 by Gedare Bloom

Milestone: 4.11.5 → 4.11.4

comment:2 Changed on 10/01/20 at 01:28:34 by Chris Johns

Milestone: 4.11.4 → 4.11.5

comment:3 Changed on 10/02/20 at 15:42:02 by Gedare Bloom

Owner: changed from Gedare Bloom to Needs Funding

I can't fix this one trivially. The fix for 4.10 (#3359) does not work directly in 4.11 because of scheduler/thread changes and the SMP scheduling. We could probably fix the UP scheduling in 4.11, but I'm doubtful that the SMP scheduler can be fixed for this issue. Nested PIP mutexes should be documented as not working in 4.11 as per this ticket. I believe nested PCP mutexes will work properly, but I haven't confirmed that yet. The behavior works fine in 5+ due to SMP scheduler improvements.

Last edited on 10/02/20 at 15:42:34 by Gedare Bloom (previous) (diff)

comment:4 Changed on 10/07/20 at 00:33:07 by Chris Johns

Milestone: 4.11.5 → Indefinite

Thanks. I have moved the milestone to Indefinite.
