Silcos Kernel

Introduction

The Silcos kernel is designed as the foundation for a portable and reliable software platform built above it. It abstracts system services and policies within the kernel using an object-oriented, modular approach. It is written in C++, C, and Intel-syntax assembly, with occasional use of GCC's extended inline assembly. It is an open-source project that can be developed further by the open-source community: it was first committed to GitHub on October 19, 2017 and has been under rapid development since then. It has become a fairly mature kernel that is working toward supporting user-level software.

The Silcos kernel project is part of a larger vision of an open-source, modular, and secure microkernel-based operating system. It incorporates several system services in kernel space to reduce the IPC overhead seen in traditional microkernels, but it keeps drivers strictly outside the kernel (except for interrupt handlers and special cases) and also aims to implement the Uniform Driver Interface. It follows the key principles and techniques of other kernel-mode software (such as Linux) and aims to present a similar interface to user-level software, allowing programs to be ported seamlessly to the Silcos application environment. At the same time, it provides a generic interface for most services, allowing them to be extended easily through loadable kernel modules.

Supported Architectures

The kernel is intended to eventually reach all major hardware architectures. It is currently being built for the Intel IA32 architecture and tracks recent hardware developments. For now, it does not work around older hardware bugs and targets only CPUs released after about 2010, preferring compatibility with modern hardware over being bloated with legacy-compatibility code. Eventually it will support the following architectures -

1. IA32 and IA64

2. ARM

3. PowerPC

Overview

The kernel source is hosted on GitHub at https://github.com/SukantPal/Silcos-Kernel. The kernel is highly modular in both its source code and its binary interface. It is divided into various modules, which separates heavily used code from low-priority code and so improves memory locality and cache efficiency. All of the boot-time modules are stored in the first 2MB huge page of the kernel space (the 3GB higher-half space). This means only one TLB entry is required to map all of the kernel's boot-time modules.
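As an illustration of that mapping, here is a minimal sketch assuming IA32 with PAE paging, where setting the PS bit in a page-directory entry maps a 2MB page; the constants and the function name are illustrative, not taken from the Silcos source:

#include <cstdint>

// Illustrative bits of an IA32/PAE page-directory entry.
constexpr uint64_t PDE_PRESENT   = 1ull << 0;
constexpr uint64_t PDE_WRITABLE  = 1ull << 1;
constexpr uint64_t PDE_HUGE_PAGE = 1ull << 7;   // PS bit: entry maps a 2MB page

// Build the single PDE that maps the 2MB region holding all boot-time
// modules; once cached, this mapping occupies just one TLB entry.
uint64_t makeKernelHugePDE(uint64_t physBase)
{
    return (physBase & ~0x1FFFFFull)            // 2MB-aligned physical base
           | PDE_PRESENT | PDE_WRITABLE | PDE_HUGE_PAGE;
}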

Initialization

Although not listed in the section below, the Initor module, found in the /Initializer directory of the workspace, is what the bootloader actually loads. It is listed as the main kernel in the grub.cfg configuration file, but it is really responsible for loading the rest of the kernel modules.
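As a rough sketch of what such a loader stage does: the structure layouts below follow the Multiboot 1 specification, but loadKernelModules() and loadModule() are hypothetical names, not the actual Initor code.

#include <cstdint>

// Multiboot 1 module entry (one per module listed in grub.cfg).
struct MultibootModule {
    uint32_t modStart;    // physical start of the module's file image
    uint32_t modEnd;      // physical end (exclusive)
    uint32_t string;      // physical address of the module's command line
    uint32_t reserved;
};

// Multiboot 1 information structure; only the fields used here are named.
struct MultibootInfo {
    uint32_t flags;
    uint32_t unused[4];   // mem_lower, mem_upper, boot_device, cmdline
    uint32_t modsCount;   // number of modules the bootloader loaded
    uint32_t modsAddr;    // physical address of the module-entry array
};

// Hypothetical helper: maps a module's file image and links it in.
void loadModule(uint32_t start, uint32_t end, uint32_t cmdline);

// Walk the module list GRUB hands over and load each kernel module
// (KernelHost, HAL, ModuleFramework, ...) from it.
void loadKernelModules(const MultibootInfo *info)
{
    auto *mods = reinterpret_cast<const MultibootModule *>(
            static_cast<uintptr_t>(info->modsAddr));
    for (uint32_t i = 0; i < info->modsCount; ++i)
        loadModule(mods[i].modStart, mods[i].modEnd, mods[i].string);
}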

Modules

The kernel currently divides itself into the following modules -

1. KernelHost

2. HAL

3. ModuleFramework

4. ObjectManager

5. ResourceManager

6. ExecutionManager (formerly the Microkernel, which also comprised the KernelHost and HAL before being split into the three)

Each module is loaded at boot time by the bootloader and passed to the kernel through the multiboot structure. The KernelHost initializes its own services first - the physical memory allocator, kernel-page allocator, object (slab) allocator, and module loader. After initializing itself, the KernelHost links the other kernel modules; if there is a linking error, it reports it and stops. Otherwise, it continues and calls the initialization function of each module, which takes the form of an __init function in each module. This function can be used to set up any objects that have their own dedicated allocator (using slab-allocation techniques).
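A minimal sketch of this initialization pass, assuming a hypothetical KernelModule record (the real module-loader structures differ):

// Hypothetical record kept by the module loader for each boot-time module.
struct KernelModule {
    const char *name;
    void (*init)();          // resolved address of the module's __init symbol
    KernelModule *next;
};

// After linking succeeds, call every module's __init so it can create its
// own objects and slab allocators.
void initializeModules(KernelModule *list)
{
    for (KernelModule *mod = list; mod != nullptr; mod = mod->next)
        if (mod->init != nullptr)
            mod->init();
}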

KernelHost

The KernelHost module is the one given control by the bootloader. It uses multiboot and expects protected mode with uniform segments. The first thing it does is load the defaultBootGDT and enable higher-half paging. It then parses the multiboot table and calls the Main() function. This early-boot code is found in KernelHost/Source/Boot/IA32/InitRuntime.asm, which cooperates with the paging code. Main() is given control after ESP is loaded with a 4KB kernel boot stack.

The KernelHost comprises the following services -

1. Physical Memory Allocator (see Memory/KFrameManager.h)

2. Kernel Page Allocator (see Memory/KMemoryManager.h)

3. Slab Allocator (see Memory/KObjectManager.h)

4. Module Loader & Linker (see ModuleLoader/*)

Memory Allocation Techniques

The Silcos kernel uses the same technique for allocating kernel pages and page frames of physical memory. To save space, the kernel developer(s) have separated the allocation algorithm from the actual allocators, so the page-frame allocator and the kernel-page allocator share the code that allocates their memory blocks. This saves binary size and improves memory locality. The kernel uses the zone allocation algorithm to take blocks from individual memory zones, which in turn relies on the buddy allocation technique to manage power-of-2-sized blocks within each zone.

BuddyAllocator - Memory/BuddyAllocator.hpp

ZoneAllocator - Memory/ZoneAllocator.hpp

Buddy Allocator - The buddy allocator operates on a table of block descriptors provided by the client allocator, where each block is the smallest unit of allocation. In both cases, the page-frame allocator and the kernel-page allocator pass a table of descriptors holding information on 4KB blocks of page frames or kernel pages. An allocation is returned as a pointer to the descriptor of the first block of the allocated memory; for example, if 16KB is allocated at the 64KB address, the block descriptor for 64KB is returned. The client allocator must convert the block descriptor back to its address.
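A minimal sketch of this descriptor-to-address conversion, with hypothetical type and field names (the real Memory/BuddyAllocator.hpp interface differs):

#include <cstddef>
#include <cstdint>

// One descriptor per 4KB block; the buddy allocator deals only in these.
struct BuddyBlock {
    uint8_t order;                 // log2 of the block's size, in 4KB units
    bool    free;
    BuddyBlock *next, *prev;       // links in the free list for this order
};

struct ClientAllocator {
    BuddyBlock *table;             // descriptor table handed to the buddy allocator
    uintptr_t   base;              // address that table[0] describes
    static constexpr size_t kBlockSize = 4096;

    // e.g. a 16KB allocation at 64KB returns &table[16]; convert it back:
    uintptr_t addressOf(const BuddyBlock *blk) const {
        return base + static_cast<uintptr_t>(blk - table) * kBlockSize;
    }

    BuddyBlock *blockAt(uintptr_t addr) const {
        return &table[(addr - base) / kBlockSize];
    }
};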

The Silcos kernel uses improvements over the traditional binary buddy allocator. It introduces the concept of super-blocks and lists for each (lowerOrder, upperOrder) pair of buddy blocks. It is explained in greater detail in the GitHub wiki.

Zone Allocator - The zone allocator divides memory into various zones with fixed, aligned boundaries. These boundaries are aligned to the maximum size of block that can be allocated; for example, the page-frame allocator allows allocations of up to 2MB, so all of its zones are aligned to 2MB boundaries. A zone allocator configures itself from a global block-descriptor table. In the page-frame allocator, for instance, page frames are represented by MMFRAME descriptors located at a fixed address in physical memory (depending on the architecture). The zone allocator keeps a separate buddy allocator for each memory zone, and each buddy allocator holds a pointer into the table of blocks from which it allocates.

The zone allocator also provides zone preferences. For example, the page-frame allocator provides separate zones for kernel, code, and data allocations, which reduces concurrent access to any one zone on multiprocessor systems and lowers the overall synchronization overhead. If the CODE zone is full, allocating from the DATA zone is acceptable, and vice versa; the same holds among all three zones - KERNEL, DATA, and CODE - so they all share the same preference.

Zones of the same preference are kept in the same circular list, and each preference level is given a separate list of zones, stored at a specific index in an array. For example, the DRIVER zone on a 32-bit system serves 32-bit devices that can address at most 4GB. If the DRIVER zone is fully used, spilling into the CODE zone above it would be essentially wrong, so it is given a lower preference. The same applies to the DMA zone, whose limit is 16MB. The page-frame allocator uses only three preferences - (DMA), (DRIVER), and (CODE, DATA, KERNEL).

The kernel-page allocator has two zones under a single preference - (ZONE_KOBJECT, ZONE_KMODULE) - for objects and modules respectively.
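A minimal sketch of the preference scheme, with hypothetical names and a simplified allocation path (this sketch never spills into another preference level):

// Each zone owns its own buddy allocator; zones of equal preference form a
// circular list.
struct MemoryZone {
    const char *name;
    MemoryZone *nextInPreference;   // circular list of equally preferred zones
    // ... per-zone buddy allocator state ...
};

// Hypothetical helper: attempt a power-of-2 allocation from one zone.
void *tryAllocate(MemoryZone *zone, unsigned order);

constexpr int kPreferenceCount = 3;

// Index 0: (DMA), 1: (DRIVER), 2: (CODE, DATA, KERNEL) - mirroring the
// page-frame allocator's three preference levels.
MemoryZone *preferenceList[kPreferenceCount];

void *allocateFrom(int preference, unsigned order)
{
    MemoryZone *start = preferenceList[preference];
    MemoryZone *zone  = start;
    do {
        if (void *mem = tryAllocate(zone, order))
            return mem;                  // satisfied within this preference
        zone = zone->nextInPreference;   // same-preference fallback only
    } while (zone != start);
    return nullptr;   // no cross-preference fallback in this simplified sketch
}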

Object Allocation

The kernel uses a slab allocator to obtain space for all kernel objects. Interfaces are provided for creating and deleting object allocators at runtime. The allocator has a front end and a back end. The back end allocates kernel pages as 'slabs' and places a slab descriptor at the very end of each page. A slab is just a 4KB block of raw memory, mapped to an arbitrary physical address, with the descriptor at its end. The maximum number of objects is carved out of the slab, and the free objects are linked into a stack rooted in the descriptor. Whenever an object is allocated from the slab it is popped off this stack, and whenever an object is freed back to the slab it is pushed onto it.

The allocator maintains two lists of slabs - a partial list and a full list. The partial list contains slabs that still have free objects in their stacks and can be used for allocation; the full list contains slabs that are saturated and cannot serve fresh objects. The allocator also caches one empty slab, to reduce slab-allocation overhead during small fluctuations in object demand.
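A minimal sketch of the slab layout and its free-object stack, with hypothetical names (the real Memory/KObjectManager.h interface differs):

#include <cstdint>

struct ObjectFree { ObjectFree *next; };    // overlays each free object

struct SlabDescriptor {
    ObjectFree *freeStack;          // top of the stack of free objects
    unsigned    freeCount;
    SlabDescriptor *next, *prev;    // links in the partial or full list
};

constexpr uintptr_t kPageSize = 4096;

// The descriptor is carved from the very end of the slab's 4KB page.
SlabDescriptor *descriptorOf(uintptr_t slabPage)
{
    return reinterpret_cast<SlabDescriptor *>(
            slabPage + kPageSize - sizeof(SlabDescriptor));
}

void *allocateObject(SlabDescriptor *slab)
{
    ObjectFree *obj = slab->freeStack;      // pop from the free stack
    if (obj != nullptr) {
        slab->freeStack = obj->next;
        --slab->freeCount;                  // slab may move partial -> full
    }
    return obj;
}

void freeObject(SlabDescriptor *slab, void *mem)
{
    ObjectFree *obj = static_cast<ObjectFree *>(mem);
    obj->next = slab->freeStack;            // push back onto the free stack
    slab->freeStack = obj;
    ++slab->freeCount;                      // slab may move full -> partial
}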

Module Loader

The module loader is an essential component of the KernelHost, and the main reason for its name - it loads and manages the other modules. The module loader maintains records of the other modules and their dynamic-link structures, which hold information about their symbols. It unpacks a module's file image and forms its segments in kernel memory. This part of the KernelHost makes it large, which is why the former Silcos microkernel was divided into the KernelHost, HAL, and ExecutionManager.

The module loader uses the Executable and Linkable Format (ELF) as the basis of its ABI and module infrastructure. Each module is implemented as a shared library, except for the KernelHost itself, which is a position-independent executable (PIE).

Recent changes to the kernel have allowed the KernelHost to contain undefined symbols that resolve against other modules. This means the memory allocators can use features of other modules once a flag indicating that the modules have been linked is set.

ExecutionManager

The kernel abstracts all execution-related services into this module. It controls critical aspects of the system that directly affect its performance and reliability. All of these services were packed into a single module small enough to fit in one 32KB L1 cache. Scheduling and task management come under its jurisdiction.

Task Management

The kernel manages tasks in a generic manner. A task descriptor can refer to a thread group, a scheduler activation, or just a traditional thread; this generalization builds on the techniques of group scheduling and user-level scheduling. A task is more primitive than a thread and may or may not have a user stack (a kernel stack is always required). It is dispatched by a dispatcher function, which can either continue executing the thread or jump to another scheduling context (which may be in user space).
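A minimal sketch of the idea, with hypothetical names (the actual task descriptor holds much more state):

struct Task;

// The dispatcher decides what "running" a task means: continue the thread,
// or jump to another scheduling context, possibly in user space.
using Dispatcher = void (*)(Task *self);

struct Task {
    Dispatcher dispatch;    // how this task resumes execution
    void *kernelStack;      // always required
    void *userStack;        // may be null for kernel-only tasks
};

void runTask(Task *task)
{
    task->dispatch(task);   // one descriptor type covers threads,
                            // thread groups, and scheduler activations
}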

Scheduler

The kernel generalizes scheduling through the concept of a scheduling class, which is already present in most modern kernels. Each processor has three scheduler task queues, abstracted by the (C++) class ScheduleRoller, which consists mostly of pure virtual functions. Each processor struct contains, at a fixed offset, three pointers to unique ScheduleRoller objects that are really instances of its child classes. This indirection allows the size of each scheduler's task queue to vary freely.
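A minimal sketch of this arrangement, with hypothetical member names (the real ScheduleRoller interface differs):

struct Task;

// Abstract task queue; each scheduling class provides a concrete subclass.
class ScheduleRoller {
public:
    virtual Task *pickNext() = 0;        // choose the next task to run
    virtual void  enqueue(Task *t) = 0;  // add a runnable task
    virtual ~ScheduleRoller() = default;
};

class RoundRobinRoller : public ScheduleRoller {
    // ... circular run queue state ...
public:
    Task *pickNext() override;
    void  enqueue(Task *t) override;
};

struct Processor {
    // One slot per scheduling class, at a fixed offset in the processor
    // struct; each points to a concrete child class of ScheduleRoller, so
    // the queue implementations can be any size.
    ScheduleRoller *rollers[3];   // e.g. round-robin, CFS, soft real-time
};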

The kernel provides, or will provide, these three scheduling classes -

1. RoundRobin

2. CFS

3. Soft Real-time Scheduling

TODO: Finish documentation on all other modules
