ELF Format: Segments and Program Header Table After understanding the ELF Header, the next critical component is the Program Header Table. This table describes segments - the portions of the file that will be loaded into memory when the program executes. What Are Segments? Segments are the runtime view of an ELF file. While sections (which we’ll cover later) are used during linking and debugging, segments are what the operating system cares about when loading and executing a program. ...
ELF Format: Part 1
ELF Format: ELF Header What is ELF? ELF (Executable and Linkable Format) is the standard binary format used by Unix-like systems (Linux, BSD, etc.) for: Executable files (a.out, /bin/ls) Object files (.o) Shared libraries (.so) Core dumps It’s a container format that describes: What parts of the file get loaded into memory, Where execution starts, How relocations and dynamic linking are handled. It also contains useful information for debuggers. General Structure of an ELF File An ELF file is organized into several key components that serve different purposes during compilation, linking, and execution. ...
x86 Assembly Part 1: Registers
When learning assembly, it’s easy to get lost in the “why” of CPU design, but this blog will stay focused on the x86 instruction set itself. The goal here isn’t to study computer architecture or dive into microarchitectural details — instead, we’ll build a working reference for how to write and understand x86 assembly code. Everything that follows is about the x86 family of processors, starting from the registers that form the foundation of all instructions. ...
Hello World in Real Mode
When your x86 computer first starts up, it’s in a surprisingly primitive state: No operating system - Obviously, since we haven’t loaded one yet No memory management - No virtual memory, no protection between processes No file system - Can’t open files, no directories, no abstraction layer No network stack - No TCP/IP, no internet connectivity No device drivers - No USB drivers, no graphics drivers, nothing What Services Are Available at Boot Time? Despite the barren landscape, the BIOS (Basic Input/Output System) gives us a few essential tools: ...
How Does the CPU Communicate With Peripheral Devices
Introduction: The Communication Challenges At its core, a CPU is designed for one primary task: processing data and executing instructions at incredible speed. But this processing power becomes meaningful only when it can interact with the rich ecosystem of peripheral devices that extend its capabilities. Why Do CPUs Need to Talk to Many Different Devices? Your CPU must read input from your mouse or keyboard, process that input to understand your intent, communicate with memory to load the browser application, send rendering commands to your graphics card, request data from your network interface to load the webpage, and potentially write temporary files to your storage device. Each of these interactions involves a different type of peripheral device, each with its own communication requirements, data formats, and timing constraints. ...
Processor Modes in x86
The 8086 Processor A Brief History The Intel 8086, released in 1978, marked a pivotal moment in computing history as Intel’s first 16-bit microprocessor. Designed by a team led by Stephen Morse, the 8086 was Intel’s answer to the growing demand for more powerful processors that could handle larger programs and address more memory than the existing 8-bit chips of the era. The processor introduced the x86 architecture that would become the foundation for decades of computing evolution. With its 16-bit registers and 20-bit address bus, the 8086 could access up to 1 megabyte of memory—a massive improvement over the 64KB limitation of 8-bit processors. However, it retained backward compatibility concepts that would prove both beneficial and constraining for future generations. ...
Characteristics of MBR Code
BIOS Boot Recap Previously, we saw that after the BIOS firmware is loaded, it searches for a bootable device from a list of storage options, such as a hard drive, SSD, USB, or network interface. The BIOS identifies a valid bootable device by checking for the 0x55AA signature at the end of the first sector. Once found, it loads the 512 bytes from this sector (LBA 0), which is known as the Master Boot Record (MBR). ...
What Happens When You Turn On a Computer?
1. Power‑On & Hardware Reset 1. Power‑Good Signal The power supply stabilizes voltages and asserts a “Power‑Good” (PWR_OK) line to the motherboard. All devices receive power and begin to initialize themselves. The Central Processing Unit (CPU) is initially held in a reset state, meaning it’s not yet executing instructions. The memory subsystem is powered up, although the RAM itself holds no content since it’s volatile. 2. CPU Reset Vector The reset vector is a predetermined memory address where the CPU begins execution after being powered on or reset. On x86 processors, this address is typically 0xFFFFFFF0 (near the top of the 4GB address space). When the CPU comes out of reset, its program counter (instruction pointer) is automatically set to this address. The motherboard’s memory mapping ensures that this address points to the BIOS/UEFI firmware ROM chip, so the very first instruction the CPU executes comes from the firmware. ...
Representation of Negative Numbers in Hardware
Representing negative numbers in binary poses unique challenges due to the inherent nature of binary systems. Unlike decimal systems, which can easily use a minus sign to indicate negative values, binary systems must encode this information within a fixed number of bits. This requirement leads to various methods of representation, each with its own set of advantages and limitations. The main challenge lies in developing a system that can accurately represent both positive and negative values while ensuring that arithmetic operations remain efficient and straightforward. In the following sections, we will explore several common approaches to representing negative numbers in binary, including their respective challenges and trade-offs. ...
Overview of MIPS Assembly
MIPS (Microprocessor without Interlocked Pipeline Stages) assembly is one of the RISC ISAs. It was developed in the early 1980s at Stanford University by a team led by Professor John L. Hennessy. MIPS is widely used in academic research and industry, particularly in computer architecture courses due to its straightforward design and in various embedded systems applications for its efficiency and performance. History The first MIPS processor, the R2000, implemented the MIPS I architecture and was one of the earliest commercial RISC processors. There are multiple versions of MIPS, including MIPS I, II, III, IV, and V, as well as five releases of MIPS32/64. MIPS I had a 32-bit architecture with a basic instruction set and addressing modes. MIPS III introduced a 64-bit architecture in 1991, increasing the address space and register width. ...