Version 16 (modified by Chris Johns, on 11/25/14 at 06:10:07)


RTEMS Algorithmic Complexity

RTEMS is composed of multiple software classes and is layered. Each software class has its own execution characteristics. Every effort was made to ensure that, once an instance of an object is created, as many operations as possible on it execute in a fixed length of time. To this end, the designers of RTEMS focused on the algorithms used and their algorithmic complexity. Ideally, every algorithm in RTEMS would be O(constant), which is also known as stable. However, some algorithms vary based upon what they are doing (e.g. blocking or non-blocking) or upon how many instances of an object are in the system. If an algorithm's execution time varies with the number of instances of an object, it is said to be O(n), where n is the number of instances.

This page attempts to capture information regarding the algorithmic complexity of the algorithms in RTEMS, with a focus on those that are not O(constant). We do not include operations that create or delete objects. We ignore variations due to a particular call blocking rather than returning immediately, since blocking obviously takes longer. We also ignore variations caused by an interrupt occurring during the execution of a call.


SuperCore

This section presents the non-constant order algorithms found in the SuperCore. The SuperCore is used to implement all supported RTEMS APIs, so they inherit these core behaviours.

Message Queue Handler

Variable Length Messages

Longer messages take longer to copy.

Insertion of Message by Priority

When priority ordering of messages is used, the insertion is into a simple list and is O(n) where n is the number of messages pending.
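
As a rough sketch of where the O(n) cost comes from (the names below are illustrative, not the actual SuperCore identifiers), insertion must walk the pending-message list until it finds a lower-priority message:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical priority-ordered pending-message list.
 * Lower numeric value = higher priority, FIFO among equals. */
typedef struct message {
    int             priority;
    struct message *next;
} message;

/* Walk the list to the insertion point: O(n) in the number
 * of messages already pending on the queue. */
static void insert_by_priority(message **head, message *msg)
{
    while (*head != NULL && (*head)->priority <= msg->priority)
        head = &(*head)->next;      /* keep FIFO order for equal priorities */
    msg->next = *head;
    *head = msg;
}
```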

Pavel Pisa's Suggestion

Posted to the RTEMS mailing list:

GAVL can solve this

insert    = search O(log(n)),
            max one re-balance O(1) with max O(log(n)) heights update.

cut first = O(log(n)) find first and O(1) cut, zero re-balances
            O(log(n)) heights update
   First Last Enhanced Speed (FLES) alternative
            O(1) find first and O(1) cut, zero rebalance
            O(log(n)) heights update and next prepare

premature delete = O(1) detach, O(log(n)) heights update
            and max O(log(n)) updates and re-balances,
            maybe it is possible to prove that the count of re-balances
            is lower or can be decreased. It is possible to omit
            re-balances entirely, but AVL tree properties would be lost
            => height is not in range from
               log2(n+1)+1 to 1.44*log2(n+2)-0.328+1.
            The +1 in GAVL is the cost for the zero-re-balance
            cut first operation.

More about this code is on my page:

There is no problem with an RTEMS-compatible license; in fact, uLUt is extensively used in our RTEMS-based application for driver and GUI code.

The code is used in more components from the OCERA project and has proven to be extremely stable. It is even used in the SuiTk? sources. The GAVL code does not need any dynamic memory allocation, and it significantly outperforms even a heap-tree algorithm with a preallocated array for all elements (a heap tree is the suggested solution for priority queues in the classic theory texts).

GAVL is the result of my personal rethinking of the AVL algorithm. My C implementation can even ensure type safety, but the macro magic can be too much for you. The base of the algorithm can be rewritten from generic code into a single-purpose version. In fact, "gcc -E" does the work, and I have ported some uLUt code this way, with some manual simplifications, even to 8051-based chips.

Message Broadcast

Broadcasting a message results in every task currently blocked on a particular message queue receiving the same message. The time required to execute this service is O(n), where n is the number of blocked tasks.
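
A minimal sketch of why broadcast is O(n), assuming a hypothetical list of blocked tasks, each with its own receive buffer (not the actual SuperCore structures):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical task control record: one receive buffer per blocked task. */
typedef struct waiting_task {
    char                 buffer[64];
    struct waiting_task *next;
} waiting_task;

/* One copy per blocked task, so the service is O(n) overall.
 * Returns the number of tasks that received the message. */
static int broadcast(waiting_task *blocked, const char *msg, size_t len)
{
    int count = 0;
    for (waiting_task *t = blocked; t != NULL; t = t->next) {
        memcpy(t->buffer, msg, len);
        count++;
    }
    return count;
}
```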

Watchdog Timer

Insertion of a Timer

Timers are managed as a delta chain. Insertion is O(n), where n is the number of timers on the chain (i.e., the active timers).
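
In a delta chain, each timer stores its expiry relative to its predecessor, so insertion must walk the chain consuming deltas until the right slot is found. A sketch under assumed, hypothetical type names (not the actual Watchdog Handler code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical delta-chain node: 'delta' is the tick count
 * relative to the previous entry on the chain. */
typedef struct watchdog {
    unsigned         delta;
    struct watchdog *next;
} watchdog;

/* Walk the chain subtracting predecessors' deltas: O(n) in the
 * number of active timers.  The successor's delta is adjusted so
 * absolute expiry times are preserved. */
static void delta_insert(watchdog **head, watchdog *w, unsigned ticks)
{
    while (*head != NULL && (*head)->delta <= ticks) {
        ticks -= (*head)->delta;        /* consume predecessor's delta */
        head = &(*head)->next;
    }
    w->delta = ticks;
    if (*head != NULL)
        (*head)->delta -= ticks;        /* keep successor relative */
    w->next = *head;
    *head = w;
}
```

With this layout, the tick interrupt only needs to decrement the first node's delta, which is what makes the chain attractive despite the O(n) insert.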

Pavel Pisa's Suggestion

See GAVL and Htimer. Based on FLES GAVL variant, same timing.

For high-resolution timers in Linux, Ingo Molnar decided to go with an RB-tree, but it has no cut-first primitive, and I believe that GAVL can outperform it in all aspects except premature timer deletion. The AVL height is even lower than the RB-tree height there.


Heap Handler

The heap is used for variable length memory allocation.

Memory Allocation

The list of free blocks is managed as an unsorted list. Allocating a memory block requires a linear search of the free blocks until one large enough to satisfy the request is located.
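
The search described above amounts to a first-fit scan, sketched here with illustrative types (not the actual Heap Handler structures):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical free-list node: size of the block plus a link. */
typedef struct free_block {
    size_t             size;
    struct free_block *next;
} free_block;

/* Linear scan of the unsorted free list: O(n) in the number of
 * free blocks.  The first block large enough wins. */
static free_block *first_fit(free_block *head, size_t wanted)
{
    for (free_block *b = head; b != NULL; b = b->next)
        if (b->size >= wanted)
            return b;
    return NULL;        /* no block can satisfy the request */
}
```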

Pavel Pisa's Suggestion

Posted as

TLSF dynamic storage allocator

This is a nice approach: O(1) allocate, O(1) free. Allocation needs a find-first-bit-set to compute the log of the size, then two more find-first-bit-set instructions and an atomic doubly linked list delete.

Freeing needs only the log computation, some bit sets, and a list insert. Fragmentation is kept at a reasonable level.
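
The size-to-bucket mapping at the heart of TLSF can be sketched as a two-level index: the first level selects a power-of-two range, the second subdivides it linearly. The constant SL_LOG2 and the helper below are illustrative assumptions; a real implementation would use a hardware find-first-set instruction (e.g. a single `__builtin_clz()`) rather than the portable loop shown:

```c
#include <assert.h>
#include <stddef.h>

#define SL_LOG2 4   /* 2^4 = 16 second-level subdivisions per power of two */

/* Portable index-of-highest-set-bit, i.e. floor(log2(size)).
 * Stands in for a single find-first-set instruction. */
static int fls_index(size_t size)
{
    int i = -1;
    while (size) {
        size >>= 1;
        i++;
    }
    return i;
}

/* TLSF-style two-level mapping (a sketch, not the reference code):
 * fl selects the power-of-two range containing 'size', sl selects
 * one of 2^SL_LOG2 equal subranges inside it.  Both are O(1).
 * Assumes size >= 2^SL_LOG2 so the shift amount is non-negative. */
static void tlsf_mapping(size_t size, int *fl, int *sl)
{
    *fl = fls_index(size);
    *sl = (int)((size >> (*fl - SL_LOG2)) & ((1 << SL_LOG2) - 1));
}
```

Each (fl, sl) pair indexes a free list and a bit in a two-level bitmap, so finding a suitable non-empty list is again just two find-first-set operations.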

As for the license, I know the people and I expect no problems. Even a rewrite of the code onto RTEMS primitives from scratch is possible. The idea is so clean, simple, and unique that there is no problem re-implementing it.

Semaphores with Simple Priority Inheritance

Currently, RTEMS does not correctly manage thread priorities when a thread holds multiple semaphores with simple priority inheritance. As a note: in the literature, priority inheritance is divided into "simple priority inheritance" and priority ceiling. In this section we discuss "simple priority inheritance".

According to the definition in the literature, when using the simple priority inheritance protocol, the priority of taskA (the task which holds the semaphores) must be equal to the highest priority among the tasks blocked on any of the semaphores that taskA holds.

Currently, RTEMS supports the priority increase correctly, i.e., the thread that holds the semaphore (the holder thread) has its priority increased as new threads try to obtain the semaphore(s). However, when the thread releases a semaphore, its priority is not always lowered to the highest priority of the threads which are blocked on other semaphores held by the same thread (the thread that released the semaphore).

The straightforward implementation to solve this problem consists of searching through all the threads blocked on the holder's other semaphores for the one with the greatest priority, and assigning that priority to the holder thread. This solution is O(n), where n is the number of threads blocked on the other semaphores.

A possible solution to make it O(1) would be to create a table with 256 positions (like the thread ready queue) for every thread. When a thread blocks while trying to obtain a semaphore, it places itself in the holder thread's table. This way, when the holder thread releases the semaphore, it can easily find in the table the "second" highest priority thread (the "first" highest priority thread will, in principle - unless it is suspended - obtain the semaphore). The downside of this approach is the memory increase: it needs 4*256 = 1 KiB more in the TCB for each thread. Since the minimum stack size for each thread is about 4 KiB, this value may not be too significant.
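
The O(1) lookup over such a table could be sketched with a two-level bitmap over the 256 priority levels, similar in spirit to the ready queue's technique. All names below are hypothetical, and the portable bit-scan loop stands in for a hardware find-first-set instruction:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-thread structure: one bit per priority level.
 * A major bit i is set when any bit in minor[i] is set. */
typedef struct {
    uint32_t major;
    uint32_t minor[8];      /* 8 * 32 = 256 priority levels */
} priority_bitmap;

/* Record that a thread of priority 'prio' (0..255) is blocked. */
static void bitmap_set(priority_bitmap *bm, unsigned prio)
{
    bm->minor[prio >> 5] |= 1u << (prio & 31);
    bm->major            |= 1u << (prio >> 5);
}

/* Index of the least-significant set bit; caller ensures w != 0. */
static int lowest_bit(uint32_t w)
{
    int i = 0;
    while (!(w & 1u)) {
        w >>= 1;
        i++;
    }
    return i;
}

/* O(1): two bit scans, independent of how many threads are blocked.
 * Lower numeric value = higher priority, per the RTEMS convention. */
static int highest_priority(const priority_bitmap *bm)
{
    if (bm->major == 0)
        return -1;                      /* no blocked thread */
    int word = lowest_bit(bm->major);
    return (word << 5) | lowest_bit(bm->minor[word]);
}
```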

Classic API

The Classic API inherits the following non-constant time algorithms from the SuperCore:

  • Variable length message support
  • Message broadcast
  • Timer insertion (timer manager, delays, and timeouts)
  • Memory Allocation

Region Manager

Freeing Memory

Although freeing memory is O(constant), freeing a large block of memory may result in the outstanding requests of multiple blocked tasks being satisfied.


POSIX API

The POSIX API inherits the following non-constant time algorithms from the SuperCore:

  • Variable length message support
  • Priority based message insertion
  • Timer insertion (timer manager, delays, and timeouts)

Message Queue Manager


There is a linear search for the message queue name (e.g. when a message queue is opened or unlinked by name).

Semaphore Manager


There is a linear search for the semaphore name (e.g. when a semaphore is opened or unlinked by name).

Timer Manager

Timer Id Management

There is a linear search of slots in a timer array.

FIX: This is a bad implementation which was originally submitted by a user who did not understand the standard pattern of object management. This pattern requires that objects be preallocated at system initialization time and that object creation only remove one from an inactive object list. Normally, RTEMS object Id to pointer translation is constant order, so this manager is fixable.

UPDATE: As of 1 Feb 2006, Joel Sherrill is testing a new implementation of POSIX timers which will not have O(n) object Id management.