#2510 new enhancement

Improve the SMP scheduler with arbitrary processor affinity support

Reported by: Sebastian Huber
Owned by:
Priority: normal
Milestone: Indefinite
Component: score
Version:
Severity: normal
Keywords: SoC executive
Cc:
Blocked By:
Blocking:

Description

The SMP scheduler with arbitrary processor affinity support uses a proof of concept implementation.

A potentially more efficient implementation is available here:

  F. Cerqueira, A. Gujarati, and B. Brandenburg, "Linux's Processor Affinity API, Refined: Shifting Real-Time Tasks Towards Higher Schedulability", Proceedings of the 35th IEEE Real-Time Systems Symposium (RTSS 2014), pp. 249-259, December 2014.

http://www.mpi-sws.org/~bbb/papers/pdf/rtss14f.pdf
http://www.mpi-sws.org/~bbb/papers/talks/rtss14f.pdf

Change History (14)

comment:1 Changed on Jan 5, 2016 at 5:35:28 PM by Gedare

Do you intend to implement this?

comment:2 Changed on Jan 5, 2016 at 8:23:56 PM by Sebastian Huber

I currently do not plan to implement this. I came across this paper by accident. It would be a useful improvement from my point of view.

comment:3 Changed on Feb 1, 2016 at 1:17:56 PM by Darshit

We would first need to identify the potential functional changes that this would bring into the existing scheduler and the expected performance improvements. Irrespective of expected improvements, it would still be nice to actually provide the option for such a scheduler.

I've been looking into the existing schedulers implemented in RTEMS but am unsure on how exactly to evaluate them with respect to the new scheduler being proposed here.

comment:4 Changed on Feb 2, 2016 at 6:56:58 AM by Sebastian Huber

You can find the basic SMP scheduler implementation here:

https://git.rtems.org/rtems/tree/cpukit/score/include/rtems/score/schedulersmpimpl.h?id=265312a5be98d59e9758cc78c0f313920fa40016

We currently have three specialized implementations. All are clustered fixed-priority (FP) schedulers.

Simple Priority SMP

https://git.rtems.org/rtems/tree/cpukit/score/src/schedulersimplesmp.c?id=265312a5be98d59e9758cc78c0f313920fa40016

It uses a sorted list for the ready threads.

Priority SMP

https://git.rtems.org/rtems/tree/cpukit/score/src/schedulerprioritysmp.c?id=265312a5be98d59e9758cc78c0f313920fa40016

It uses a table based priority queue.

Priority Affinity SMP

https://git.rtems.org/rtems/tree/cpukit/score/src/schedulerpriorityaffinitysmp.c?id=265312a5be98d59e9758cc78c0f313920fa40016

This is the one that should be improved. To evaluate its current time complexity, just look at the while and for loops in the code.

comment:5 Changed on May 2, 2016 at 10:09:30 AM by Sebastian Huber <sebastian.huber@…>

In 981eed21761cd0036f0ac61042adea09d8a595a2/rtems:

score: Add dummy Strong APA scheduler

Start with a copy of the Priority SMP scheduler implementation.

Update #2510.

comment:6 Changed on Jul 5, 2016 at 10:28:50 AM by Darshit

Processor Affinity Set Data Structures

For the scheduler, we need to store the "processor affinity sets" in the nodes. I have reused the code of the Priority Affinity SMP scheduler to add support for affinity sets. It uses the cpu_set_t data structure to store the affinity set information in Scheduler_strong_APA_Node. For the implementation of the algorithm, certain logical operations on the sets will be useful, especially set intersection and set union. These operations are already supported for the cpu_set_t data structure, which is why I decided to stick with this structure instead of using Processor_mask or writing something new.

The Algorithm

On arrival of a new task in the system, the _Scheduler_strong_APA_Unblock action will be triggered. The core of the algorithm, the iterative MVM solution, will be implemented in this method. The output of the algorithm will be a mapping from each processor to the thread it is executing. This mapping will look something like this:

Thread_control *processor_to_task_mapping [ _SMP_Get_processor_count() ];

where index 0 points to the thread executing on CPU 0, and so on. Initially, every entry of the array will point to the idle thread.

Then, during the Enqueue_ordered operation, if _get_lowest_scheduled() returns a thread that has a lower priority than the arriving task, the case becomes trivial and we only need to carry out the standard forced migrations during the call to move_from_scheduled_to_ready.

However, if the lowest scheduled task according to the affinity set has a higher priority than the arriving task, then we may need to carry out non-standard task migrations to reach an optimal configuration. These migrations will be performed based on the MVM solution set created during the Unblock operation and will be implemented in the _Scheduler_strong_APA_Insert_ready_fifo method.

comment:7 Changed on Aug 13, 2017 at 11:46:52 PM by Chris Johns

Milestone: 5.0 → Indefinite
Version: 4.11

comment:8 Changed on Oct 10, 2017 at 6:27:10 AM by Sebastian Huber

Component: SMP → score

comment:9 Changed on Oct 10, 2017 at 6:29:01 AM by Sebastian Huber

Component: score → cpukit

comment:10 Changed on Mar 15, 2019 at 3:22:18 PM by Joel Sherrill

Keywords: SoC added

comment:11 Changed on Mar 15, 2019 at 3:23:34 PM by Joel Sherrill

Keywords: kernel added

comment:12 Changed on Mar 15, 2019 at 3:24:26 PM by Joel Sherrill

Keywords: SoC, kernel → SoC kernel

comment:13 Changed on Mar 15, 2019 at 8:00:06 PM by Gedare

Keywords: SoC kernel → SoC,kernel

comment:14 Changed on Mar 15, 2019 at 8:01:21 PM by Gedare

Keywords: executive added; kernel removed