Linux Kernel Symbols

Kernel symbols are the names of functions and variables. Global symbols are those
which are available outside the file they are declared in. Global symbols
in the Linux kernel currently running on a system are listed in the
`/proc/kallsyms` file. This includes symbols defined inside currently loaded
kernel modules.

Global symbols are of two types:

  1. those explicitly exported through the EXPORT_SYMBOL_GPL and EXPORT_SYMBOL
    macros, and
  2. those which are not declared with the C `static` keyword and are hence
    visible to code statically linked with the kernel itself, and may be
    available outside the kernel image.

The first type, the explicitly exported symbols, are denoted with a capital
letter in the output of `cat /proc/kallsyms` – e.g. T if the symbol is in the
text section, i.e. a function name. The second type are denoted with a small
letter – e.g. t for a function which isn’t exported via EXPORT_SYMBOL_GPL or
EXPORT_SYMBOL.

Inside kernel code, we can access symbols which are exported explicitly by
simply using them like other identifiers, e.g. by calling the printk() function.

For global symbols which aren’t explicitly exported, but are still available,
we can attempt to access them by calling kallsyms_lookup_name() function,
defined in kernel/kallsyms.c:

unsigned long kallsyms_lookup_name(const char *name);

This takes a symbol name as argument and returns its address in memory, i.e. a
pointer to it. The calling code can cast and dereference the pointer to make use
of that symbol. If the symbol isn’t found, the function returns 0.


try/catch in Linux Kernel

In higher-level languages like Java and C#, one can recover from unexpected behaviour using a try/catch-like mechanism. Things are different inside the Linux kernel. The code is considered trusted and reliable, and faults and exceptions carry severe penalties. For example, a division by zero will likely cause a kernel oops and hang the whole system. So how would you run some code which you know can fault? Enter extable, a kernel mechanism which provides a try/catch-like facility. It is not a high-level extension to the C programming language; instead it works at the assembly level.

We will take the following piece of blatant division by zero and make it safe to run inside the kernel using this exception handling mechanism. We will assume the x86_64 platform in this article.

void blatant_div_by_zero(void)
{
        /* quotient and divisor */
        int q, d;

        d = 0;
        asm ("movl $20, %%eax;"
             "movl $0, %%edx;"
             "div %1;"
             "movl %%eax, %0;"
             : "=r"(q)
             : "b"(d)
             : "%eax", "%edx"
             );

        pr_debug("quotient is %d\n", q);
}

extable

extable is basically an ELF section inside the Linux kernel binary image which contains mappings between potentially faulting instructions and their respective handlers. Users interface with extable through the _ASM_EXTABLE* family of macros. All of these macros delegate to one macro:

_ASM_EXTABLE_HANDLE(from, to, handler)

`from`: the address of the faulting instruction, i.e. the ‘try’ part
`to`: the address to which control is transferred when the fault occurs, i.e. the ‘catch’ part
`handler`: the function which transfers execution to the catch part

The `handler` function is important here. It has the following signature:

bool handler(const struct exception_table_entry *fixup,
        struct pt_regs *regs, int trapnr)

`fixup` contains the address of the catch part, i.e. the ‘to’ argument of _ASM_EXTABLE_HANDLE. `regs` contains the registers as they were when the fault happened, and `trapnr` is the fault number. The handler transfers control to the catch part by simply setting the instruction pointer register in the `regs` argument to the fixup address. However, before doing that, it has an opportunity to set up the environment for the catch part. That’s where the different wrappers around _ASM_EXTABLE_HANDLE come in: each wrapper uses a different handler function. Let’s take a couple of examples from arch/x86/include/asm/asm.h.

# define _ASM_EXTABLE(from, to) \
        _ASM_EXTABLE_HANDLE(from, to, ex_handler_default)

# define _ASM_EXTABLE_FAULT(from, to) \
        _ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)

ex_handler_default() is a handler function which simply sets the instruction pointer to the catch part. ex_handler_fault() additionally moves the fault number into the rax register, so that the catch part can access the fault number.

Context of handler()

At this point, some might be wondering which context the handler() function executes in. The short answer is interrupt context. If that works for you, you may jump to the next section. Here we will take a very quick detour through how this interrupt context comes about.

After executing an instruction and before moving to the next one, the control unit checks whether the just-executed instruction resulted in an interrupt or exception. If an exception did occur, it determines the vector associated with it. In the case of our divide-by-zero fault, that is vector 0. The control unit then reads the corresponding entry – the zeroth entry for divide-by-zero – of the Interrupt Descriptor Table (IDT), an array of entries in memory whose address is stored in the ‘idtr’ register. That entry in the IDT in turn points to an entry in another table called the Global Descriptor Table (GDT). The entry inside the GDT then provides information – such as the base address of the segment – related to the handler of the exception whose vector we started out with. The processor performs some checks, stores the context at the time of the exception (e.g. register contents) on the stack, and then jumps to the interrupt handler.

It is this interrupt handler which looks up the extable section to find the handler() function for the faulting instruction. If such a handler function is found, it is executed – so our handler function runs inside interrupt context. When the handler() function discussed above sets the instruction pointer, it actually sets the instruction pointer in the saved context. Ultimately, when execution returns from interrupt context, the context that was saved on the stack before the interrupt handler ran is restored. Since our handler function sets the instruction pointer – and potentially other registers – in that pre-interrupt context, execution jumps to the ‘catch’ part after the interrupt handler returns.

Solution and How it Works

Using the extable macros we can rewrite our blatant_div_by_zero() function in a safe way. Let’s first see what the final solution looks like; then we will break it down into easily understandable parts.

void blatant_div_by_zero(void)
{
        /* quotient and divisor */
        int q, d;

        d = 0;
        asm volatile ("movl $20, %%eax;"
             "movl $0, %%edx;"
             "1: div %1;"
             "movl %%eax, %0;"
             "2:\n"
             "\t.section .fixup,\"ax\"\n"
             "3:\tmov\t$-1, %0\n"
             "\tjmp\t2b\n"
             "\t.previous\n"
             _ASM_EXTABLE(1b, 3b)
             : "=r"(q)
             : "b"(d)
             : "%eax", "%edx"
             );

        pr_debug("quotient is %d\n", q);
}

There are two key differences here. First, we add some code to a section called .fixup. Second, the _ASM_EXTABLE() macro. The code in the .fixup section is the ‘catch’ part. _ASM_EXTABLE(), as we saw above, uses a default handler which merely sets the instruction pointer to the ‘catch’ part:

__visible bool ex_handler_default(const struct exception_table_entry *fixup,
        struct pt_regs *regs, int trapnr)
{
        regs->ip = ex_fixup_addr(fixup);
        return true;
}

The first argument of _ASM_EXTABLE is the label 1b, for the faulting instruction “1: div %1;”. The letter ‘b’ in 1b means backwards and has no functional significance in our context. The second argument, 3b, is the catch part, which is inside the .fixup section. Now let’s look at the execution flow with this set-up. The division-by-zero instruction executes and faults; control goes into the exception handler at vector 0, which finds the correct handler – in our case ex_handler_default() – which in turn sets the instruction pointer to the address at label 3, our catch part. As noted, label 3 is inside the .fixup section. Inside label 3, we set the quotient q to -1 and jump to label 2, which is a known safe point after the faulting instruction. From there, execution continues to the next instruction as normal. Note that the next instruction is neither “\t.section .fixup,\”ax\”\n” (which is a directive to the assembler) nor “3:\tmov\t$-1, %0\n” (which lives in a different section from the instruction at label 2).

Now let’s see what a real object file looks like when compiled with the above extable code. Here is a hello-world kernel module, modified to contain the blatant_div_by_zero() function. After compiling it, we can inspect its sections using readelf*:

$ readelf -S hello.ko
There are 34 section headers, starting at offset 0x10b8:

Section Headers:
[Nr] Name   Type Address Offset
        Size EntSize Flags Link Info Align
[ 0] NULL 0000000000000000 00000000
       0000000000000000 0000000000000000 0 0 0
[ 1] .note.gnu.build-i NOTE 0000000000000000 00000040
       0000000000000024 0000000000000000 A 0 0 4
[ 2] .text PROGBITS 0000000000000000 00000064
       0000000000000000 0000000000000000 AX 0 0 1
[ 3] .init.text PROGBITS 0000000000000000 00000064
       0000000000000034 0000000000000000 AX 0 0 1
[ 4] .rela.init.text RELA 0000000000000000 00000cd8
       0000000000000060 0000000000000018 I 31 3 8
[ 5] .fixup PROGBITS 0000000000000000 00000098
       000000000000000a 0000000000000000 AX 0 0 1
[ 6] .rela.fixup RELA 0000000000000000 00000d38
       0000000000000018 0000000000000018 I 31 5 8
[ 7] .exit.text PROGBITS 0000000000000000 000000a2
       000000000000000c 0000000000000000 AX 0 0 1
[ 8] .rela.exit.text RELA 0000000000000000 00000d50
       0000000000000030 0000000000000018 I 31 7 8
[ 9] .rodata.str1.1 PROGBITS 0000000000000000 000000ae
       000000000000001c 0000000000000001 AMS 0 0 1
[10] __ex_table PROGBITS 0000000000000000 000000cc
       000000000000000c 0000000000000000 A 0 0 4
[11] .rela__ex_table RELA 0000000000000000 00000d80
       0000000000000048 0000000000000018 I 31 10 8
[...]

The above is partial output, containing the sections relevant to our discussion. First, there is the .fixup section at number 5. Its type, PROGBITS, means it will be loaded into memory, and the flag X means it will be executable – which is what we want, since this section executes the catch part. Further down, at number 10, is the __ex_table section. This is loaded into memory (type PROGBITS) but not marked executable: we only use it to read the address of the handler function and never need to execute the section itself. There is a function in kernel/extable.c which searches the extables for the correct handler for a given faulting address. It is reproduced below:

/* Given an address, look for it in the exception tables. */
const struct exception_table_entry *search_exception_tables(unsigned long addr)
{
        const struct exception_table_entry *e;

        e = search_extable(__start___ex_table,
                           __stop___ex_table - __start___ex_table, addr);
        if (!e)
                e = search_module_extables(addr);
        return e;
}

As you can see, if the search in the kernel image fails, the function looks in the kernel modules’ extables through the call to search_module_extables(addr). This is the function which will search our hello.ko module for the entry corresponding to the division-by-zero instruction.


* Apologies for the formatting. Please bear with me until I find a way to fix it in the WordPress editor.

Title Image: Catch taken when playing cricket in street – a popular pastime in South Asia. Taken from https://www.flickr.com/photos/flickcoolpix/

Intel Virtualisation: How VT-x, KVM and QEMU Work Together

VT-x is the name of Intel’s CPU virtualisation technology. KVM is a component of the Linux kernel which makes use of VT-x. And QEMU is a user-space application which allows users to create virtual machines. QEMU makes use of KVM to achieve efficient virtualisation. In this article we will talk about how these three technologies work together. Don’t expect an in-depth exposition of all aspects here; in future I might follow this up with more focused posts about specific parts.

Something About Virtualisation First

Let’s first touch upon some theory before going into the main discussion. Related to virtualisation is the concept of emulation – in simple words, faking the hardware. When you use QEMU or VMWare to create a virtual machine with an ARM processor while your host machine has an x86 processor, QEMU or VMWare emulates, or fakes, the ARM processor. When we talk about virtualisation, we mean hardware-assisted virtualisation, where the VM’s processor matches the host computer’s processor. Often conflated with virtualisation is the distinct concept of containerisation. Containerisation is mostly a software concept: it builds on top of operating system abstractions like process identifiers, file systems and memory consumption limits. In this post we won’t discuss containers any more.

A typical VM set up looks like below:

[Diagram: a typical VM architecture – hardware at the bottom, hypervisor above it, virtualisation applications and guest VMs on top]

At the lowest level is hardware which supports virtualisation. Above it sits the hypervisor, or virtual machine monitor (VMM). In the case of KVM, this is actually the Linux kernel with the KVM modules loaded into it. In other words, KVM is a set of kernel modules that, when loaded into the Linux kernel, turn the kernel into a hypervisor. Above the hypervisor, in user space, sit the virtualisation applications that end users directly interact with – QEMU, VMWare etc. These applications then create virtual machines, which run their own operating systems with cooperation from the hypervisor.

Finally, there is the “full” vs. “para” virtualisation dichotomy. Full virtualisation is when the OS running inside a VM is exactly the same as would run on real hardware. Paravirtualisation is when the OS inside the VM is aware that it is being virtualised and thus runs in a slightly modified way compared to real hardware.

VT-x

VT-x is CPU virtualisation for the Intel 64 and IA-32 architectures. For Intel’s Itanium there is VT-i. For I/O virtualisation there is VT-d. AMD also has its own virtualisation technology, called AMD-V. We will only concern ourselves with VT-x.

Under VT-x a CPU operates in one of two modes: root and non-root. These modes are orthogonal to real, protected, long etc. modes, and also orthogonal to the privilege rings (0–3); they form a new “plane”, so to speak. The hypervisor runs in root mode and VMs run in non-root mode. In non-root mode, CPU-bound code mostly executes the same way it would in root mode, which means that a VM’s CPU-bound operations run mostly at native speed. However, it doesn’t have full freedom.

Privileged instructions form a subset of all available instructions on a CPU. These are instructions that can only be executed in a higher privileged state, e.g. current privilege level (CPL) 0 (where CPL 3 is least privileged). A subset of these privileged instructions are what we can call “global state-changing” instructions – those which affect the overall state of the CPU. Examples are instructions which modify the clock or interrupt registers, or write to control registers in a way that would change the operation of root mode. This smaller subset of sensitive instructions is what non-root mode can’t execute.

VMX and VMCS

Virtual Machine Extensions (VMX) are instructions that were added to facilitate VT-x. Let’s look at some of them to gain a better understanding of how VT-x works.

VMXON: Before this instruction is executed, there is no concept of root vs non-root modes. The CPU operates as if there was no virtualisation. VMXON must be executed in order to enter virtualisation. Immediately after VMXON, the CPU is in root mode.

VMXOFF: Converse of VMXON, VMXOFF exits virtualisation.

VMLAUNCH: Creates an instance of a VM and enters non-root mode. We will explain what we mean by “instance of VM” in a short while, when covering VMCS. For now think of it as a particular VM created inside QEMU or VMWare.

VMRESUME: Enters non-root mode for an existing VM instance.

When a VM attempts to execute an instruction that is prohibited in non-root mode, CPU immediately switches to root mode in a trap-like way. This is called a VM exit.

Let’s synthesise the above information. CPU starts in a normal mode, executes VMXON to start virtualisation in root mode, executes VMLAUNCH to create and enter non-root mode for a VM instance, VM instance runs its own code as if running natively until it attempts something that is prohibited, that causes a VM exit and a switch to root mode. Recall that the software running in root mode is hypervisor. Hypervisor takes action to deal with the reason for VM exit and then executes VMRESUME to re-enter non-root mode for that VM instance, which lets the VM instance resume its operation. This interaction between root and non-root mode is the essence of hardware virtualisation support.

Of course the above description leaves some gaps. For example, how does hypervisor know why VM exit happened? And what makes one VM instance different from another? This is where VMCS comes in. VMCS stands for Virtual Machine Control Structure. It is basically a 4KiB part of physical memory which contains information needed for the above process to work. This information includes reasons for VM exit as well as information unique to each VM instance so that when CPU is in non-root mode, it is the VMCS which determines which instance of VM it is running.

As you may know, in QEMU or VMWare we can decide how many CPUs a particular VM will have. Each such CPU is called a virtual CPU, or vCPU. For each vCPU there is one VMCS; that is, a VMCS stores information at CPU-level granularity, not VM level. To read and write a particular VMCS, the VMREAD and VMWRITE instructions are used. They require root mode, so only the hypervisor can modify a VMCS. A non-root VM can perform VMWRITE, but only to a “shadow” VMCS, not the actual one – something that doesn’t concern us immediately.

There are also instructions that operate on a VMCS as a whole rather than on individual fields. These are used when switching between vCPUs, where a vCPU could belong to any VM instance. VMPTRLD is used to load the address of a VMCS and VMPTRST is used to store this address to a specified memory location. There can be many VMCS instances, but only one is marked as current and active at any point. VMPTRLD marks a particular VMCS as active. Then, when VMRESUME is executed, the non-root mode uses that active VMCS instance to know which particular VM and vCPU it is entering.

Here it’s worth noting that all the VMX instructions above require CPL 0, so they can only be executed from inside the Linux kernel (or another OS kernel).

VMCS basically stores two types of information:

  1. Context info which contains things like CPU register values to save and restore during transitions between root and non-root.
  2. Control info which determines behaviour of the VM inside non-root mode.

More specifically, VMCS is divided into six parts.

  1. Guest-state stores vCPU state on VM exit. On VMRESUME, vCPU state is restored from here.
  2. Host-state stores host CPU state on VMLAUNCH and VMRESUME. On VM exit, host CPU state is restored from here.
  3. VM execution control fields determine the behaviour of VM in non-root mode. For example hypervisor can set a bit in a VM execution control field such that whenever VM attempts to execute RDTSC instruction to read timestamp counter, the VM exits back to hypervisor.
  4. VM exit control fields determine the behaviour of VM exits. For example, when a bit in VM exit control part is set then debug register DR7 is saved whenever there is a VM exit.
  5. VM entry control fields determine the behaviour of VM entries. This is counterpart of VM exit control fields. A symmetric example is that setting a bit inside this field will cause the VM to always load DR7 debug register on VM entry.
  6. VM exit information fields tell hypervisor why the exit happened and provide additional information.

There are other aspects of hardware virtualisation support that we will conveniently gloss over in this post. Virtual to physical address conversion inside VM is done using a VT-x feature called Extended Page Tables (EPT). Translation Lookaside Buffer (TLB) is used to cache virtual to physical mappings in order to save page table lookups. TLB semantics also change to accommodate virtual machines. Advanced Programmable Interrupt Controller (APIC) on a real machine is responsible for managing interrupts. In VM this too is virtualised and there are virtual interrupts which can be controlled by one of the control fields in VMCS. I/O is a major part of any machine’s operations. Virtualising I/O is not covered by VT-x and is usually emulated in user space or accelerated by VT-d.

KVM

Kernel-based Virtual Machine (KVM) is a set of Linux kernel modules that, when loaded, turn the Linux kernel into a hypervisor. Linux continues its normal operation as an OS but also provides hypervisor facilities to user space. KVM modules come in two types: the core module and machine-specific modules. kvm.ko is the core module, which is always needed. Depending on the host machine’s CPU, a machine-specific module like kvm-intel.ko or kvm-amd.ko will also be needed. As you can guess, kvm-intel.ko uses the functionality we described in the VT-x section above. It is KVM which executes VMLAUNCH/VMRESUME, sets up VMCSs, deals with VM exits, etc. Let’s also mention that AMD’s virtualisation technology AMD-V has its own instructions, called Secure Virtual Machine (SVM). Under `arch/x86/kvm/` you will find files named `svm.c` and `vmx.c`, containing the code which deals with the virtualisation facilities of AMD and Intel respectively.

KVM interacts with user space – in our case QEMU – in two ways: through the device file `/dev/kvm` and through memory-mapped pages. The latter are used for bulk data transfer between QEMU and KVM. More specifically, there are two memory-mapped pages per vCPU, used for high-volume data transfer between QEMU and the VM in the kernel.

`/dev/kvm` is the main API exposed by KVM. It supports a set of `ioctl`s which allow QEMU to manage VMs and interact with them. The lowest unit of virtualisation in KVM is a vCPU. Everything builds on top of it. The `/dev/kvm` API is a three-level hierarchy.

  1. System Level: Calls to this API manipulate the global state of the whole KVM subsystem. Among other things, this is used to create VMs.
  2. VM Level: Calls to this API deal with a specific VM. vCPUs are created through calls to this API.
  3. vCPU Level: This is lowest granularity API and deals with a specific vCPU. Since QEMU dedicates one thread to each vCPU (see QEMU section below), calls to this API are done in the same thread that was used to create the vCPU.

After creating a vCPU, QEMU continues interacting with it using the ioctls and memory-mapped pages.

QEMU

Quick Emulator (QEMU) is the only user-space component we are considering in our VT-x/KVM/QEMU stack. With QEMU one can run a virtual machine with an ARM or MIPS core on an Intel host. How is this possible? Basically QEMU has two modes: emulator and virtualiser. As an emulator, it can fake the hardware: it can make itself look like a MIPS machine to the software running inside its VM. It does that through binary translation. QEMU comes with the Tiny Code Generator (TCG), which can be thought of as a sort of high-level-language VM, like the JVM: it takes, for instance, MIPS code and converts it to an intermediate bytecode, which then gets executed on the host hardware.

The other mode of QEMU – as a virtualiser – is what achieves the type of virtualisation that we are discussing here. As virtualiser it gets help from KVM. It talks to KVM using ioctl’s as described above.

QEMU creates one process for every VM, and one thread for each vCPU. These are regular threads and get scheduled by the OS like any other thread. As these threads get run time, QEMU creates the impression of multiple CPUs for the software running inside its VM. Given QEMU’s roots in emulation, it can emulate I/O, which is something KVM may not fully support – take the example of a VM with a particular serial port on a host that doesn’t have one. When software inside the VM performs I/O, the VM exits to KVM. KVM looks at the reason and passes control to QEMU along with a pointer to information about the I/O request. QEMU emulates the I/O device for that request – thus fulfilling it for the software inside the VM – and passes control back to KVM. KVM executes a VMRESUME to let the VM proceed.

In the end, let us summarise the overall picture in a diagram:

[Diagram: the overall picture of the VT-x/KVM/QEMU stack]

How Does an Intel Processor Boot?

When we switch on a computer, it goes through a series of steps before it is able to load an operating system. In this post we will see how a typical x86 processor boots. This is a complex and involved process; we will only present its basic overall structure. The exact path the processor takes to reach a state where it can load an OS also depends on the boot firmware. We will follow the example of coreboot, an open-source boot firmware.

Before Power is Applied

Let us start with the BIOS chip, also known as the boot ROM. The BIOS chip is a piece of silicon on the motherboard of a computer which can store bytes. It has two characteristics of interest to us. First, it (or a part of it) is memory-mapped into the CPU’s address space, which means the CPU can access it the same way it would access RAM; in particular, the CPU can point its instruction pointer at code inside the BIOS chip to execute it. Second, the bytes that the BIOS chip stores represent the very first instructions executed by the CPU. The BIOS chip also contains other pieces of code and data. A typical BIOS chip contains a flash descriptor (a table of contents for the chip), the BIOS region (the first instructions to be executed), the Intel ME (Intel Management Engine) region and the GbE (gigabit ethernet) region. As you can see, the BIOS chip is shared between several components of the system and is not exclusive to the CPU.

When Power is Applied

Modern Intel chips come with what is called the Intel Management Engine. As soon as power is available – from battery or mains – the Intel ME comes on. It does its own initialisations, which requires it to read the BIOS flash descriptor to find where the Intel ME region is, and then read in code and config data from that region. Next, when we press the power button, the CPU comes on. On a multiprocessor system there is always a designated processor, called the Bootstrap Processor (BSP), which comes on first. In either case, the processor always comes on in what is called 16-bit Real Mode, with the instruction pointer pointing to address 0xffff.fff0, the reset vector.

EDIT: (thanks to burfog for pointing out that this needs explanation)

You might be wondering how a 16-bit system could address 0xffff.fff0, which is clearly beyond 0xffff, the maximum 16-bit value. In 16-bit mode, the physical address is calculated by left-shifting the code segment (CS) selector register by 4 bits and then adding the instruction pointer (IP). On reset, IP contains the value 0xfff0 and CS has the value 0xf000 [1]. By the above formula, the physical address should be:

(CS << 4) + IP = 0x000f.0000 + 0xfff0 = 0x000f.fff0

which is still not what we expected. This is because on reset the system is in a “special” Real Mode, where the top 12 address lines are asserted, so all addresses look like 0xfffx.xxxx. This means that in our case we need to set the most significant 12 bits of the address we derived, which yields the expected address 0xffff.fff0. These 12 address lines remain asserted until a long JMP is executed, after which they are de-asserted and normal Real Mode address calculations resume.

The BIOS chip is also set up in such a way that the first instruction to be executed from the BIOS sits at physical address 0xffff.fff0. Hence the processor is able to execute its first instruction from the BIOS region of the BIOS chip. This region contains what is called boot firmware. Examples of boot firmware are UEFI implementations, coreboot and the classic BIOS.

One of the first things the boot firmware does is switch to 32-bit mode. This is also “protected mode”, i.e. segmentation is turned on and various segments of the processor’s address space can be given different access permissions. The boot firmware, however, sets up just one segment, effectively turning segmentation off. This is called flat mode.

Early Initialisations

It is worth noting that at this point in the boot process, DRAM is not available. DRAM initialisation is one of the main objectives of the boot firmware, but before it can initialise DRAM, it needs to do some preparation.

Microcode patches are updates the CPU needs in order to function correctly; Intel keeps publishing microcode patches for different CPUs, and the boot firmware applies them very early in the boot process. Part of the chipset is what is called the south bridge, I/O controller hub (ICH) or peripheral controller hub (PCH). Some initialisations need to be performed for the ICH as well. For example, the ICH may contain a watchdog timer which can go off while DRAM is being initialised; that watchdog timer must be turned off first.

Of course, all of this is done by firmware, which is code written by someone. Most code we know uses a stack, but DRAM hasn’t been initialised yet, so there is no memory. So how is this code written and run? The answer is that it is stackless code: either hand-written x86 assembly or, as in the case of coreboot, C compiled with a special compiler called ROMCC, which translates C into stackless assembly. This comes with restrictions, so ROMCC-compiled code is not how we want to execute everything – we need a stack as soon as possible.

So the next step is setting up what is called cache-as-RAM (CAR). The boot firmware sets up the CPU caches so that they can temporarily be used as RAM. This way the firmware can run code which is not stackless, though still restricted in terms of stack size and the general amount of memory available.

Memory Initialisation and Intel FSP

On Intel systems, memory initialisation is performed using a blob called the Intel Firmware Support Package (FSP), supplied by Intel in binary form. The Intel FSP does a lot of the heavy lifting when it comes to bootstrapping Intel processors and is not limited to memory init. It is basically a three-stage API. The boot firmware interacts with the FSP by setting up some parameters and a return address and jumping into an FSP stage. The FSP stage executes, taking the parameters into account, and then uses the return address to jump back into the boot firmware. This continues across the three FSP stages, in this order:

  • TempRamInit(): This performs some init for RAM and hand control back to boot firmware. Boot firmware can kick off some actions and then go on to next stage. This is because the next step performs chipset and memory initialisation which may take some time. For example memory training is a time consuming operation. So this is an opportunity for boot firmware to kick off other initialisations, like spinning up hard drive, which can take time to stabilise.
  • FspInitEntry(): This is where actual DRAM is achieved. This also performs other silicon init, like PCH and CPU itself. After this finishes, it passes control back to boot firmware. However, since this time, the memory has been initialised, the passing back of control and data is different from TempRamInit stage. After this stage, firmware does most of the rest of initialisations – described in the next section ‘After Memory Init’ – before passing control to the next stage of FSP.
  • NotifyPhase(): This is where the boot firmware passes control back to FSP, with parameters telling FSP what sort of actions it needs to take before winding down. The things FSP can do here are platform dependent, but they include tasks like post-PCI-enumeration work.

After Memory Init

Once DRAM is ready, it breathes new life into the boot process. The first thing the firmware does is copy itself into DRAM. This is done with the help of “memory aliasing”, meaning that reads and writes to addresses below 1MB are routed to and from DRAM. Then, the firmware sets up the stack and transfers control to the copy in DRAM.

Next, some platform-specific inits are done, such as GPIO configuration and re-enabling the watchdog timer in the ICH, which was disabled before memory init, paving the way for enabling interrupts. The Local Advanced Programmable Interrupt Controller (LAPIC) sits inside each processor, i.e. it is local to each CPU in a multiprocessor system. The LAPIC determines how each interrupt is delivered to that particular CPU. The I/O APIC (IOxAPIC) lives inside the ICH, and there is one IOxAPIC for all processors. There can also be a Programmable Interrupt Controller (PIC), which is used in Real Mode, as is the Interrupt Vector Table (IVT), which contains 256 interrupt vectors – pointers to the handlers for the corresponding interrupts. The Interrupt Descriptor Table (IDT), on the other hand, holds the interrupt vectors in Protected Mode.

The firmware then sets up various timers, depending on the platform and the firmware. The Programmable Interval Timer (PIT) is the system timer and sits on IRQ0; it lives inside the ICH. The High Precision Event Timer (HPET) also sits inside the ICH, but boot firmware may not initialise it, letting the OS set it up if needed. There is also a clock, the Real Time Clock (RTC), which too resides in the ICH. There are other timers as well, notably the LAPIC timer inside each CPU. Next, the firmware sets up memory caching. This basically means setting up different cache characteristics – write-back, uncached, etc. – for different ranges of memory.

Other Processors, I/O Devices and PCI

Finally, it is time to bring up the other processors, as all the work so far has been handled by the bootstrap processor (BSP). To find out about other application processors (APs) on the same package, the BSP runs the CPUID instruction. Then, using its LAPIC, the BSP sends an interrupt called a Startup Inter-Processor Interrupt (SIPI) to each AP. Each SIPI carries the physical address at which the receiving AP should start executing. It is worth noting that each AP comes up in Real Mode, so the SIPI address must be below 1MB, the maximum addressable in Real Mode. Usually, soon after initialisation, each AP executes the HLT instruction and enters the halt state, waiting for further instructions from the BSP. However, just before the OS gains control, the APs are supposed to be in the “wait-for-SIPI” state. The BSP achieves this by sending a couple of inter-processor interrupts to each AP.

Next come I/O devices like the Embedded Controller (EC) and the Super I/O, and after that PCI init. PCI init basically boils down to:

  1. enumerating all PCI devices
  2. allocating resources to each PCI device

This discussion applies to PCIe as well. PCI is a hierarchical bus system where, for each bus, a leaf is either a PCI device or a PCI bridge leading to another PCI bus. The CPU communicates with PCI devices by reading and writing their PCI registers. The resources needed by PCI devices are ranges inside the memory address space, ranges inside the I/O address space, and IRQ assignments. The CPU finds out about the address ranges and their types (memory-mapped or I/O) by writing to and reading from the Base Address Registers (BARs) of the PCI devices. IRQs are usually set up based on how the board is designed.

During PCI enumeration, the firmware also reads the Option ROM register (the Expansion ROM Base Address Register). If that register is not empty, it contains the address of an Option ROM – a ROM chip physically situated on the PCI device. For example, a network card may contain an Option ROM holding iPXE firmware. When an Option ROM is encountered, it is read into DRAM and executed.

Handing Control to OS loader

Before handing over control to the next-stage loader – usually an OS loader like GRUB2 or LILO – the firmware sets up some information inside memory which is later consumed by the OS. This includes things like the Advanced Configuration and Power Interface (ACPI) tables and the memory map itself. The memory map tells the OS what address ranges have been set up for what purposes. The regions can be general memory for OS use, ACPI-related address ranges, reserved (i.e. not to be used by the OS), IOxAPIC (to be used by the IOxAPIC) and LAPIC (to be used by the LAPICs). The boot firmware also sets up handlers for System Management Mode (SMM) interrupts. SMM is an operating mode of Intel CPUs, just like Real, Protected and Long (64-bit) modes. A CPU enters SMM upon receipt of an SMM interrupt, which can be triggered by a number of things, like the chip’s temperature reaching a certain level. Before handing control to the OS loader, the firmware also locks down some registers and CPU capabilities, so that they can’t be changed afterwards by the OS.

The actual transfer of control to the OS loader usually takes the form of a JMP to that part of memory. An OS loader like GRUB2 will perform actions based on its config and ultimately pass control to an operating system like Linux. For Linux, this will usually be a bzImage (big zImage, not bz compression). It is worth noting that an OS like Linux will enumerate PCI devices again and may overlap in other ways with some of the final initialisations done by the boot firmware. Linux usually picks up the system in 32-bit mode with paging turned off and performs its own initialisations, which include setting up page tables, enabling paging and switching to Long Mode, i.e. 64-bit.


[1] userbinator on Hacker News pointed out that IP hasn’t always held the value 0xfff0 on reset. On the 8086/8088 it was 0x0. Here’s what he found in Intel’s documentation:

8086/88:   CS:IP = FFFF:0000 first instruction at FFFF0
80186/188: CS:IP = FFFF:0000 first instruction at FFFF0
80286:     CS:IP = F000:FFF0 first instruction at FFFF0
80386:     CS:IP = 0000:0000FFF0 or F000:0000FFF0[1], first instruction at FFFFFFF0
80486+:    CS:IP = F000:0000FFF0(?) first instruction at FFFFFFF0


Paging in Linux on x86 – part 2

Our last post left the story of paging in Linux on x86 incomplete; today we will cover the rest. This is a vast topic whose tentacles reach into almost all aspects of the kernel, so we will cover only some interesting highlights. Our focus is on x86 without Physical Address Extension (PAE) enabled.

Data structures:

Although x86 uses two-level page tables, there are architectures which use three or four levels. For example, x86 with Physical Address Extension (PAE) uses three levels of page tables and the 64-bit x86_64 uses four. Linux aims to cover all architectures and therefore uses four levels. They are:

  • Page Global Directory
  • Page Upper Directory
  • Page Middle Directory
  • Page Table

This means Linux divides a virtual address into five parts – one for indexing into each of the tables above, and an offset into the physical page frame. An entry in each of the tables above is a 32-bit unsigned int on x86 (without PAE). The data types pgd_t, pud_t, pmd_t and pte_t are used to represent entries in the respective tables. There is also a set of helper macros and functions used to manipulate them.

On x86:

On x86, there are only two levels of page tables. Linux reconciles that with its four levels by nullifying the effect of the PUD and the PMD: it keeps just one entry in each of them. So, practically, it uses only the PGD and the Page Table.

The kernel typically divides the 4GB virtual address space into 3GB, from 0x00000000 to 0xbfffffff, for user space and 1GB, from 0xc0000000 to 0xffffffff, for kernel space. In kernel code, the macro PAGE_OFFSET holds the virtual address at which kernel space starts, i.e. 0xc0000000 on a typical x86 setup.

Early on during boot, the kernel learns the size of RAM by querying the BIOS. Then it scans physical addresses to find those which are unavailable. They can be:

  • addresses which are mapped with hardware devices’ I/O (memory-mapped I/O)
  • addresses pointing to page frames containing BIOS data

Typically, the kernel lodges itself at physical address 0x00100000, i.e. from the 2nd MB onwards. The reasons for skipping the first MB are architecture specific – not just x86, but other machines also do some special things in that first MB of physical memory. The page frames in which the kernel sits never get swapped out to disk. The kernel also never swaps out the unavailable addresses mentioned above.

Kernel address mapping:

Of the 1GB of virtual address space that the kernel occupies, 896MB is directly mapped to physical addresses, i.e. there is a one-to-one mapping. Since kernel pages are never swapped out, this mapping always holds true. The macro __pa() converts a kernel virtual address into a physical address. It basically does simple maths like:

Physical address = virtual address – PAGE_OFFSET

Another macro, __va(), does the same thing in reverse.

The kernel sets aside the highest 128MB of its 1GB address space for non-contiguous allocations (high mem) and what are called fix-mapped linear addresses. Non-contiguous allocation is a separate topic on its own, but we will quickly describe fix-mapped linear addresses before concluding this article. They are constant mappings from virtual to physical addresses, but unlike the first 896MB, they don’t follow the simple offset formula above. Their mapping is arbitrary, but fixed nonetheless. The kernel uses them in place of pointer variables because they are more efficient in terms of the memory accesses required.

80x86 segmentation & what Linux does with it

Background:

Address space segmentation basically means dividing all possible virtual addresses into groups – segments – and applying some properties to those segments, e.g. the privilege level required to access them. Segmentation applies to virtual addresses, so it comes into play before virtual-to-physical address translation takes place. In x86, segmentation is a relic of the past. The 286 didn’t have virtual addressing, so it divided the address space into segments so that processes could keep themselves to addresses in their own segments. The 386 then added virtual addresses but still kept segments.

Different types of addresses

In x86, there are three different types of addresses.

  • Logical
  • Linear
  • Physical

Translating a logical address into a physical one therefore requires two steps. Translation from logical to linear is described in this article; translation from linear to physical is done using page tables, and we might cover it in a follow-up article.

A logical address consists of two parts: a segment and an offset. The segment is basically an index into an array of 8-byte records (descriptors) stored in RAM. This array is called the Global Descriptor Table (GDT). There is also a per-process Local Descriptor Table (LDT), but we will ignore it as it doesn’t play a significant role in this discussion.

Each entry inside the GDT contains information about the segment it represents: its base address, its range (maximum address), the CPU privilege level needed to access it, and some other info.

Linear address = base address from segment entry in GDT + offset part of logical address

So, to convert a logical address into a linear one, take the base address from the segment’s entry in the GDT and add the offset to it.

What Linux does with it

Linux prefers to group addresses and manage their properties during the linear-to-physical translation phase, instead of the logical-to-linear phase. Therefore, it pretty much nullifies the segment part of the logical address, so that the offset alone represents the linear address. It does create four different segments: two (code and data) each for user space and kernel space. But each segment’s base is zero and its maximum range is 2^32 – 1, thereby nullifying segmentation. It does, however, use the CPU privilege level, so that the CPU has to be at the right privilege level to access the segments in kernel space – the kernel code and kernel data segments.