Uniform Driver Interface

[Image: the official Project UDI logo]

The UDI revival effort maintains an IRC channel on Freenode (irc.freenode.net), called #udi. Feel free to join and ask questions.

UDI stands for "Uniform Driver Interface". It is the specification of a framework and driver API / ABI that enables different operating systems (implementing the UDI framework) to use the same drivers. Conceived by several large industry corporations, it has fallen dormant, despite being functional and delivering on its promise.

UDI drivers are binary compatible across all UDI-compliant operating systems running on the same CPU family. They are also source compatible across all UDI-implementing operating systems. This means a driver only has to be developed once.

While Microsoft Windows gets all the hardware drivers it wants, and the GNU project discourages UDI for philosophical reasons, UDI's advantages for hobbyist OS developers are obvious.

Why UDI?

The Uniform Driver Interface would, should it be widely adopted, provide a common driver framework for implementation across kernels and platforms, enabling drivers to be written independently of the target kernel and, to a large extent, of the target hardware platform. UDI has several projected advantages over existing driver interfaces which may motivate the reader to adopt it:

Advantages

  • Portability (both cross-OS and cross-platform), mentioned in the section above, is perhaps the primary reason UDI was developed in the first place. All we can hope for is that enough operating systems embrace the model so we can actually take advantage of it.
  • Performance is comparable to, or better than, that of custom native-API drivers for a native UDI implementation. For environments where performance is critical, UDI does not inhibit service quality. UDI is explicitly designed to be non-blocking and lockless, featuring a synchronization model that increases MP scalability without locking, among many other scalability-focused features.
  • UDI can integrate seamlessly into existing kernel environments regardless of the OS architecture (monolithic kernel vs. microkernel, POSIX vs. non-POSIX, etc.) with little or no extra performance overhead.
  • Reliability and stability have been explicitly provided for by the design. UDI tries to eliminate some categories of potential bugs, such as (but not limited to) resource leaks and deadlocks (all interfaces can potentially be implemented without any locking at all).
  • Flexibility is another thing UDI has been designed with in mind: not only in the way the specification was conceived (i.e., to be extensible), but also in the sense that it permits system programmers to apply techniques such as driver isolation, shadow drivers, etc. if they see fit to do so.
  • The interface is fully asynchronous, in every respect; high-scaling systems are becoming increasingly predominant, and asynchronicity is slowly becoming an "expected" feature of modern kernels. UDI moves ahead of the herd, enabling a compliant kernel to adopt asynchronous interfaces gradually without major redesign later on.

Disadvantages

  • Moderately complex: it will generally take a while to understand the specification.
  • It cannot simply be ported to immature kernels. A kernel must have a certain minimum level of maturity to reasonably attempt to become UDI compliant.
  • Not at all viable for "casual" projects: requires a significant amount of foreknowledge and prior work.

Core components of UDI Drivers

Environment
High level view of UDI environments

An implementation of the Uniform Driver Interface specification is known as a UDI Environment. There is a reference implementation available (see link below) which provides usable code for several existing kernels (Linux, BSD, Solaris), and it can be used as the basis for a fresh implementation. Kernel environment implementations are responsible for providing the Service Call interfaces specified by the UDI Standard; a kernel may choose to implement these as native system calls, or via library extensions -- the decision is up to the implementer. There are two types of service calls recognized by the UDI paradigm: synchronous (which return immediately to the caller, i.e., to the driver) and asynchronous (which work through a callback mechanism).
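
As a concrete illustration of the asynchronous flavor, consider memory allocation. The sketch below uses the udi_mem_alloc() service call and the UDI_MEM_NOZERO flag from the core specification; the function names are hypothetical:

  #include <udi.h>

  /* Continuation invoked by the environment once the allocation
   * completes; it receives the original control block back. */
  static void
  alloc_done(udi_cb_t *gcb, void *new_mem)
  {
      /* ... continue the interrupted operation using new_mem ... */
  }

  static void
  request_scratch_buffer(udi_cb_t *gcb)
  {
      /* Asynchronous service call: control returns to the driver at
       * once; the environment later calls alloc_done() with the new
       * memory (possibly even before udi_mem_alloc() returns). */
      udi_mem_alloc(alloc_done, gcb, 512, UDI_MEM_NOZERO);
  }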

UDI drivers also take an active part in identifying their child devices and helping to build the host kernel's device tree: bus drivers enumerate their children, each bus may have several controllers attached, and so on. UDI drivers for these devices interact in a tree-like fashion, just as the hardware does. Let's take a closer look at the drivers themselves!

Drivers are split into one or more modules, and each module has at least one region. A driver that has been instantiated (executed, so to speak) uses IPC calls ("channel operations") to communicate between modules and regions. If a driver is used to instantiate more than one device (say, a disk driver used to instantiate two separate disk devices), the choice of whether the actual driver code is mapped using copy-on-write, duplicated in memory, etc., is up to the environment.

Modules

A module is essentially a single executable code object. Specifically, drivers can be broken into multiple executables. A large driver that does not need all of its code in memory all the time may be implemented as a multi-module driver, loading only the components it needs. This partitioning of the driver code into modules is up to the driver vendor, of course. Most UDI drivers are expected to be single-module drivers, but complex drivers, such as graphics card drivers, may be best implemented as multi-module drivers. For example, if a graphics driver exports an OpenGL 3D API along with a Direct3D API, it is very likely that both front-ends have a lot of code behind them that would occupy a lot of memory should both be loaded. Most kernels will use either OpenGL or Direct3D, so if such a graphics driver splits its OpenGL and Direct3D implementations into separate modules, a kernel loading that driver can avoid allocating memory for the code and data of the API it isn't using.
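
As a rough illustration, a udiprops.txt fragment for such a split might look like the following (module and file names are invented; a full sample file appears further below):

  module gfx_core
  source_files gfx_core.c

  module gfx_opengl
  source_files gfx_gl.c

  module gfx_direct3d
  source_files gfx_d3d.c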

Regions

Main article: UDI Regions

Regions are nothing more than blocks of related data. For example, a network card may have a set of register states specific to its send() function, and a different set of state and variables specific to its receive() function. Data is explicitly separated into functionality regions in UDI. A region is nothing more than driver-allocated data for its state variables. The most intuitive way to split driver data is into functionality sub-components of the device in question. So a network card driver may choose to have a send region and a receive region. A graphics driver writer may choose to partition the driver into a framebuffer-writing region, a transformation region, and so on. IPC request messages can then be sent over UDI IPC channels to each region based on the purpose of that region.
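
In code, a region's data area is simply a C structure whose first member must be a udi_init_context_t (the environment allocates it using the size the driver declares in its udi_init_info). A minimal sketch for the hypothetical receive region of a network driver:

  typedef struct {
      udi_init_context_t init_context;   /* required first member,
                                          * filled in by the environment */
      udi_channel_t      tx_channel;     /* channel to the send region */
      udi_ubit32_t       frames_received;
  } rx_region_data_t;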

Regions also form the unit of concurrent execution in UDI. Since Regions are nothing more than data, they are also the units which must be synchronized against concurrent writes. Generally, this means making sure that no two threads can modify region data at once. The design of the UDI interfaces is perfectly capable of working without the use of locking, and it is left up to the host OS Environment to choose whether it will use lockless algorithms, spinlocks, waitqueues, or some other method to ensure that no two threads modify region data at the same time. See the main article for a detailed explanation of several practically usable UDI synchronization models.

Another attribute of UDI regions is that they are location- and instance-independent, meaning that they can be moved from one place to another without affecting any of the other regions, because they share no common state. That is, a driver can be marshaled and moved from one NUMA node to another, or from one physical machine to another over a network, or undergo any other similar type of migration. This is particularly interesting in multiprocessor systems (especially NUMA) and high-scaling compute clusters, because an environment may separate regions due to performance and resource constraints. It's worth mentioning that, because of the separate states, the tasks performed by regions are mutually exclusive (for instance, a network driver might have one region that handles sending packets and another that handles receiving). This is a potential area where host OSs can make huge optimizations to remove performance bottlenecks.

Channels

Main article: UDI Channels

The only way for regions to communicate is through channels. Channels are an IPC-agnostic abstraction of a bi-directional communication mechanism. Each of the two channel endpoints provides an ops vector, which is a set of entry points. They are referenced via handles of type udi_channel_t (check the definition of handles below). The channel operations, along with the associated functionality, are defined by metalanguages. Metalanguages are separately defined for each class of drivers, but we'll get to that soon.
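
For instance, the Generic I/O Metalanguage (introduced below) defines an ops vector type for the provider end of a channel. A sketch of how a driver might populate it, assuming the member layout from the GIO metalanguage specification (the entry-point names are hypothetical):

  /* Entry points the environment invokes when the peer endpoint
   * issues the corresponding channel operation. */
  static udi_channel_event_ind_op_t pseudod_channel_event_ind;
  static udi_gio_bind_req_op_t      pseudod_gio_bind_req;
  static udi_gio_unbind_req_op_t    pseudod_gio_unbind_req;
  static udi_gio_xfer_req_op_t      pseudod_gio_xfer_req;
  static udi_gio_event_res_op_t     pseudod_gio_event_res;

  static const udi_gio_provider_ops_t pseudod_gio_ops = {
      pseudod_channel_event_ind,
      pseudod_gio_bind_req,
      pseudod_gio_unbind_req,
      pseudod_gio_xfer_req,
      pseudod_gio_event_res
  };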

Metalanguages

Main article: Metalanguages

Metalanguages define extensions to the core specification for various purposes, and can also be used to define custom IPC protocol APIs between modules/regions. An example of a case where a custom protocol API may be needed is where, for example, a network card driver has a "Control" region which takes commands from the kernel for power management ("Go to sleep", "prepare to shutdown", etc), and then it has a Send() region and Receive() region, which handle its send() and receive() functions respectively.

It follows naturally that if the driver receives a "Go to sleep" command from the kernel on its Control region, it would need to send messages to its Send and Receive regions to cause them to cease operation. There is no generic IPC_Send() function defined for IPC across UDI channels -- all IPC must be done according to the protocol APIs defined by a Metalanguage, whether standardized by the UDI spec or custom-defined. Thankfully, driver writers do not need to define custom protocols for every case where they simply want to send custom messages between regions: the UDI Core specification defines a "Generic I/O Metalanguage" IPC protocol API which covers a wide range of generic IPC needs and can be extended with custom messages as desired.
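
A sketch of the Control region asking a peer region to quiesce via a vendor-defined operation code: udi_gio_xfer_req() and UDI_GIO_OP_CUSTOM come from the GIO metalanguage, while the op-code name and the control block's earlier allocation are assumptions:

  /* Driver-specific GIO op codes start at UDI_GIO_OP_CUSTOM. */
  #define PSEUDOD_GIO_OP_QUIESCE  UDI_GIO_OP_CUSTOM

  static void
  control_quiesce_peer(udi_gio_xfer_cb_t *cb)
  {
      /* cb was allocated earlier against the channel connecting the
       * Control region to the peer region. */
      cb->op       = PSEUDOD_GIO_OP_QUIESCE;
      cb->data_buf = NULL;            /* no payload for this message */
      udi_gio_xfer_req(cb);           /* asynchronous; completion comes
                                       * back as a gio_xfer_ack/nak */
  }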

Apart from APIs/IPC protocols, Metalanguages also cover extensions to the core specification. For example, the already-defined UDI Bus/Bridge Metalanguage can be extended to support new buses as needed; PCI bus drivers, ISA bus drivers, etc. do not each need a new Metalanguage, because the core UDI Bus/Bridge Metalanguage can be extended with Bus/Bridge Metalanguage extensions specific to each bus. This is a case where a Metalanguage already defined by the UDI Standard is itself extended as needed for each bus.

Entirely new Metalanguages can also be created where necessary. For example, a SCSI Host Bus Adapter is not really a bus, but an I/O microcontroller device that acts as a parent device to SCSI devices (mostly disks). It looks like a bus, but isn't one, and is better handled with an IPC protocol and API of its own. So the UDI specification defines a SCSI Host Bus Adapter Metalanguage API which manages communication (IPC) between SCSI Peripheral Devices (disks) and SCSI Host Bus Adapters. On any given motherboard, a commonly seen arrangement is shown in the ASCII art below. The SCSI HBA is not a bus, and the IPC between SCSI disks and the SCSI HBA cannot be constrained to follow the same format as communication between a bus and its child devices; this is a case where a new Metalanguage API for communication is a good idea.

As an honourable mention, it would also have been possible to just use the UDI Generic I/O Metalanguage for communication between the SCSI disks and their parent SCSI HBA -- the Generic I/O Metalanguage is equally adequate for that purpose as well.

RootNode
|- PCI-Bus-0
|  |- ...
|  +- ...
|
|- PCI-Bus-1
|  +- SCSI HBA
|     |- SCSI-Peripheral-0 (disk)
|     +- SCSI-Peripheral-1 (disk)
|
+- PCI-Bus-2

Metalanguages are essentially UDI IPC Channel protocol definitions or API definitions, and definitions of extensions to the core specification. Hence the name: Meta-LANGUAGES.

Driver configuration

There's a special configuration method for static properties of UDI drivers using a file called udiprops.txt. This file is distributed independently in each driver package for source code distributions and linked into a special section (called .udiprops) for binary distributions.

The udiprops.txt file doesn't only hold static configuration options; it is also used in the build process for UDI drivers, since they do not use makefiles (not that it would be technically infeasible). The UDI specification defines tools for building, packaging and installing UDI drivers for simplicity's sake: unlike POSIX tools, they don't require the operating system to have any extra functionality (e.g., a VFS). Luckily, these tools are available in the reference implementation; all you need to do is build them.

Below you can see a sample udiprops.txt:

  properties_version 0x101
  
  message 1 Project UDI
  message 2 http://www.project-UDI.org/participants.html
  message 3 Pseudo-Driver
  message 4 Generic UDI Pseudo-Driver
  release 3 1.01
  
  supplier	1
  contact	2
  name		3
  shortname	pseudod
  
  ##
  ## Interface dependencies
  ##
  requires udi	 	0x101
  requires udi_gio 	0x101
  
  ##
  ## Build instructions.
  ##
  
  module pseudod
  compile_options -DPSEUDO_GIO_META=1
  source_files pseudo.c pseudo.h
  region 0
  
  ##
  ## Metalanguage usage
  ##
  
  meta 1 udi_gio		# Generic I/O Metalanguage
  
  child_bind_ops 1 0 1		# GIO meta, primary region, ops_index 1
  
  # Orphan driver; no device line
  
  #
  # Initialization, shutdown messages
  #
  message 1100  pseudod: devmgmt_req %d
  message 1500  pseudod: final_cleanup_req

Of course, udiprops.txt can be a lot more complex than this; the sample above is just meant to show what one looks like. You should check the specification for all compile options, statements and configuration options.

Programming Model

All UDI function calls are asynchronous in nature, meaning they implicitly do not block; a compliant UDI driver is always implicitly non-blocking. Whether or not the host kernel supports non-blocking programming models is up to that kernel, and for any particular kernel it may be necessary to use locking, mutexes and blocking internally. Naturally, for a kernel that fully supports a non-blocking, asynchronous model, UDI will simply scale seamlessly.

Because of their asynchronous nature, UDI drivers behave like servers to a large extent and have very good throughput: the driver itself will only block if the host kernel imposes a limitation on it. For a host kernel which does not have scaling limitations, UDI drivers will innately also scale without limitations -- the throughput of a compliant UDI driver depends solely on the limitations of the host kernel.

UDI drivers do not implicitly assume the use of locking, blocking, or any specific threading or synchronization model. They fit perfectly into any kind of host environment. As such, the UDI specification does not define any locking operations. It is completely possible for a host kernel to run UDI drivers locklessly.
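
The resulting style shows in a typical request handler: it starts the work and returns to the environment instead of waiting. A sketch reusing the hypothetical GIO entry point declared earlier (the helper is also hypothetical):

  static void
  pseudod_gio_xfer_req(udi_gio_xfer_cb_t *cb)
  {
      /* Program the hardware and stash cb for later (hypothetical
       * helper); no spinning or sleeping in the handler itself. */
      start_device_transfer(UDI_GCB(cb)->context, cb);

      /* Return immediately; udi_gio_xfer_ack(cb) is issued from the
       * transfer-complete path once the device finishes. */
  }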

Driver failures

When illegal behavior is detected by the environment, the misbehaving region will usually be region-killed and all neighbouring regions will be notified. All channels to that region will be closed and all resources owned by that region will be freed.
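
A neighbouring region typically observes such a region-kill as a channel event. A sketch of a handler, using the core specification's channel event operation (the cleanup helper is hypothetical):

  static void
  pseudod_channel_event_ind(udi_channel_event_cb_t *cb)
  {
      switch (cb->event) {
      case UDI_CHANNEL_CLOSED:
          /* The peer region was killed or unbound: drop any state
           * referring to it (hypothetical helper). */
          forget_peer(UDI_GCB(cb)->context);
          break;
      default:
          break;
      }
      udi_channel_event_complete(cb, UDI_OK);
  }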

See also

Existing Implementations

External Links
