UDI Regions

From OSDev Wiki

This article seeks to explain a foundational principle of UDI drivers: the UDI region. It can be considered advanced reading, and is probably not useful to readers who do not understand asynchronous programming, or who are not interested in implementing a UDI environment.


Introduction to UDI driver regions

A region is a set of readable/writable data. UDI drivers do not have global, writable variables or data structures. All writable data is to be kept within a region and allocated at runtime. All global data is to be purely read-only and treated as "constant" and immutable.

If a variable is shared between two "threads" of execution ("thread" being used loosely here, since UDI does not dictate any particular threading model), that variable must be owned by exactly one of the threads and modified only via IPC between them: the other thread sends IPC messages to the owning thread whenever it wishes to modify the data.

As a result of this, it is impossible for two threads to modify a single variable at the same time. The UDI "Region" is the unit of synchronization of UDI shared data.
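The ownership rule above can be sketched as follows. This is a minimal illustration, not part of the UDI specification: real UDI environments are written in C, and the `Region` class, its `counter` field, and the shutdown sentinel are hypothetical stand-ins, with Python's `queue` playing the role of a channel's message queue.

```python
import threading
import queue

class Region:
    """Each region owns its writable data; only the owning thread touches it."""
    def __init__(self):
        self.counter = 0            # region-local, writable data
        self.inbox = queue.Queue()  # stands in for the region's IPC endpoint

    def run(self):
        # The owning thread serializes every modification by draining
        # its inbox and handling one message at a time.
        while True:
            msg = self.inbox.get()
            if msg is None:         # hypothetical shutdown sentinel
                break
            op, amount = msg
            if op == "add":
                self.counter += amount

region = Region()
owner = threading.Thread(target=region.run)
owner.start()

# Other threads never write region.counter directly; they send messages.
for _ in range(1000):
    region.inbox.put(("add", 1))
region.inbox.put(None)
owner.join()
```

Because every write travels through the inbox, the counter needs no lock: the owning thread is the only writer.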

Threading models for UDI drivers

In the above introduction, the article loosely refers to regions as having "threads" attached to them, as if a "region" implies a new "thread". This is not necessarily true, and is simply one model of implementation. The key requirement is that region data not be writable by two threads at the same time, regardless of the threading model used in a kernel.

To be as accommodating as possible to the reader, this article discusses some of the conceivable threading models that can be used with UDI drivers. Note that in every case, the driver neither knows nor cares how the kernel applies threading to it, if threads are used at all.

One thread per region

In this model, the requirement that no two threads execute on the same region data at once is met by instantiating a thread for each region in the driver. Each region's thread de-queues the messages sent to that region (via channel IPC calls) and dispatches them one by one. It is impossible for one region's thread to read or write another region's data, since each thread is responsible for, and interested in, only its own data.
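A rough sketch of this model, under the same caveats as before (Python threads and queues as stand-ins for kernel threads and IPC channels; the region names and the `events` counter are invented for illustration):

```python
import threading
import queue

def region_thread(inbox, data):
    # Each region's thread dequeues that region's channel messages and
    # dispatches them one by one; no other thread touches this data.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        data["events"] = data.get("events", 0) + 1

regions = {}
for name in ("rx_region", "tx_region"):   # hypothetical region names
    inbox, data = queue.Queue(), {}
    t = threading.Thread(target=region_thread, args=(inbox, data))
    t.start()
    regions[name] = (inbox, data, t)

# Deliver some channel messages to each region, then shut both down.
for inbox, _, _ in regions.values():
    for _ in range(5):
        inbox.put("channel_op")
    inbox.put(None)
for _, _, t in regions.values():
    t.join()
```

Each region's data is confined to one thread, so no locks appear anywhere, which is exactly the property the list below trades on.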


Pros:

  • Very naturally supports asynchronous programming.
  • Scales very well to compute clusters, NUMA machines, etc.
  • Migrating regions as single units across NUMA nodes, machines in a cluster, etc is very simple.
  • Requires no locking, so the full benefits of asynchronous programming are gained without that cost.


Cons:

  • IPC messages between regions may incur overhead that can be avoided in many cases.

This approach is best suited to highly scalable microkernels or hybrid kernels, and to distributed systems.

One thread per channel

In this model, every IPC channel established between regions is given its own thread. Each channel thread de-queues messages from its IPC queue and modifies whatever region data the request requires. When a channel thread reads or writes region data, it may encounter another channel thread already accessing that data, so a lock is needed on each region data block to ensure that two channel threads never access it at once.
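The per-region lock this model requires can be sketched like so (again a hypothetical illustration: four channels feeding one region, with a `frames` counter invented for the example):

```python
import threading
import queue

region_data = {"frames": 0}
region_lock = threading.Lock()   # needed: channel threads share this data

def channel_thread(inbox):
    # Each IPC channel gets its own thread; the region's data must be
    # locked, because two channel threads may target it concurrently.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        with region_lock:
            region_data["frames"] += 1

channels = []
for _ in range(4):               # four channels into the same region
    inbox = queue.Queue()
    t = threading.Thread(target=channel_thread, args=(inbox,))
    t.start()
    channels.append((inbox, t))

for inbox, _ in channels:
    for _ in range(250):
        inbox.put("op")
    inbox.put(None)
for _, t in channels:
    t.join()
```

The lock restores correctness, but every acquisition is a contention point that the one-thread-per-region model avoids entirely.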


Pros:

  • Very naturally supports an asynchronous programming model.
  • Conceptually (not practically) scales well to NUMA systems and clusters.
  • Migrating regions to other nodes is "easy", though more difficult than the "One thread per region" approach.


Cons:

  • Requires the use of locking on per-region data.
  • Introduces a large amount of contention as the number of IPC channels and the complexity of the driver increase.
  • Introduces a large amount of kernel metadata overhead since Thread Control Blocks and Thread Stacks require kernel memory.
  • Uses resources which could have been saved by a better design.

This approach is probably not the best fit for any implementation, but it is one way to preserve and benefit from the asynchronous nature of UDI drivers.

One thread per driver

In this model, every driver is given a single thread, regardless of how many region data blocks it has or how many IPC channels it uses. This may be the case for a kernel that implements separate-address-space drivers but does not support multithreading. The single thread sleeps on all of the IPC queues and wakes whenever one of them receives a new request message. It then services the request, modifying whatever per-region data is required, and sleeps again until the next request arrives. Because there is only one thread in the process, it is impossible for two threads to modify any region's data at the same time, so locking is not needed in this case either.
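A sketch of this model follows, with one simplification worth flagging: Python's `queue` cannot sleep on several queues at once, so this illustration multiplexes all channels into a single wakeup queue, which preserves the property that matters here (one thread services everything). The region names and payloads are invented for the example.

```python
import threading
import queue

wakeup = queue.Queue()           # the driver's single thread sleeps here
region_data = {"a": 0, "b": 0}   # data for two hypothetical regions

def driver_thread():
    # One thread services every channel and every region, so no locking
    # is needed: requests are handled strictly one at a time.
    while True:
        msg = wakeup.get()       # sleep until any channel posts a request
        if msg is None:
            break
        region, amount = msg
        region_data[region] += amount

t = threading.Thread(target=driver_thread)
t.start()
for _ in range(100):
    wakeup.put(("a", 1))         # requests arriving on different channels
    wakeup.put(("b", 2))
wakeup.put(None)
t.join()
```

Serialization comes for free, but so does the ceiling: the loop above can never occupy more than one CPU, which is precisely the con listed below.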


Pros:

  • Very simple to implement.
  • Does not require locking.
  • Takes advantage of the asynchronous UDI programming model.
  • Easy to debug.
  • No concurrency issues whatsoever, because the kernel only initializes one thread per driver.


Cons:

  • Under heavy load, this model fails to take advantage of multiple processors or cores, because there is only one thread per driver. Any other CPUs available to ease the load will sit idle while throughput suffers.

This approach is best suited to a hybrid kernel which uses separate address spaces for drivers, but has no interest in, or support for, multithreaded drivers. It can also work well for monolithic kernels which can load a UDI driver into their address space as a monolithic module and then instantiate a thread for the driver, treating it as an autonomous kernel thread rather than a separate process.

No threads per driver

This is the general case for drivers in a purely monolithic kernel. All "IPC calls" become direct function calls between the kernel, drivers, and calling processes. Drivers are either linked directly into the kernel or loaded as re-entrant monolithic modules. The kernel attaches a lock to each region, and synchronization is achieved through those locks.

No explicit threads are spawned and bound to the driver; rather, userspace and kernel threads enter the driver code directly via syscalls.
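This model can be sketched as follows: the channel operation is an ordinary function call, and the environment's per-region lock serializes the callers. As before, the class, its `packets` counter, and the caller counts are hypothetical; the caller threads here stand in for userspace and kernel threads entering via syscalls.

```python
import threading

class MonolithicRegion:
    """In a monolithic kernel the "IPC call" is just a function call;
    the environment wraps each region's data in a lock instead."""
    def __init__(self):
        self._lock = threading.Lock()
        self.packets = 0

    def channel_op(self):
        # Any caller may enter here directly (via a syscall path), so the
        # per-region lock provides the serialization a region requires.
        with self._lock:
            self.packets += 1

def caller():
    for _ in range(500):
        region.channel_op()      # synchronous call, no message queue

region = MonolithicRegion()
callers = [threading.Thread(target=caller) for _ in range(4)]
for t in callers:
    t.start()
for t in callers:
    t.join()
```

Note that the call blocks until the lock is free: the asynchronous queueing of the other models has disappeared, which is the first con listed below.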


Pros:

  • Directly complements an existing monolithic kernel design.


Cons:

  • Receives none of the benefits of the UDI asynchronous programming model, since all calls are treated as synchronous calls.
  • Does not scale well at all.

This approach probably best suits monolithic kernels seeking to port a UDI environment.
