The programmer creates a source file with an editor and saves it with the .cpp suffix.
Most computer systems represent text characters using the ASCII standard, which represents each character with a unique byte-size integer value.
The compiler driver reads the source file and translates it into an executable object file.
The preprocessor modifies the original C++ program according to directives that begin with the ‘#’ character. The result is another C++ program with the .i suffix.
The compiler translates the file into a file that contains an assembly-language program. Assembly language provides a common output language for different compilers for different high-level languages.
The assembler translates the file into machine-language instructions, packaged in a binary relocatable object file with the .o suffix.
The linker merges separately precompiled object files and generates the executable program.
Buses: Transfer fixed-size chunks of bytes known as words. The number of bytes in a word (word size) varies across systems. Most machines have word sizes of 4 bytes or 8 bytes.
I/O Devices: System connection to the external world. Each device is connected to the I/O bus by a controller or an adapter.
Main memory: A temporary storage device that holds a program and the data it manipulates while the processor is executing the program. It is organized as a linear array of bytes, each with its own unique address starting at zero.
Processor: Engine that interprets and executes instructions stored in main memory. At its core is a word-size storage device (register) called the program counter (PC). The PC points at some machine-language instruction in main memory. The processor reads the instruction pointed to by the PC, interprets the bits in the instruction, performs some simple operation dictated by the instruction, and then updates the PC to point to the next instruction.
The register file is a small storage device that consists of a collection of word-size registers, each with its own unique name.
Load: Copy a byte or a word from main memory into a register, overwriting the previous contents of the register
Store: Copy a byte or a word from a register to a location in main memory, overwriting the previous contents of that location.
Operate: Copy the contents of two registers to the ALU, perform an arithmetic operation on the two words, and store the result in a register, overwriting the previous contents of that register.
Jump: Extract a word from the instruction itself and copy that word into the program counter (PC), overwriting the previous value of the PC.
As we type the characters ./hello at the keyboard, the shell program reads each one into a register and then stores it in memory.
When we press the Enter key, the shell then loads the executable file. Direct memory access (DMA) is used to move the data directly from disk to main memory.
The processor begins executing the machine-language instructions in hello’s main routine. These instructions copy the bytes of the “Hello world” string from memory to the register file, and from there to the display device.
A major goal for system designers is to make copy operations run as fast as possible.
Because of physical laws, larger storage devices are slower than smaller storage devices, and faster devices are more expensive to build than their slower counterparts. A typical register file stores only a few hundred bytes of information, as opposed to billions of bytes in main memory. The processor can read data from the register file almost 100 times faster than from memory, and it is easier and cheaper to make processors run faster than it is to make main memory run faster.
To deal with the processor-memory gap, system designers include smaller, faster storage devices called cache memories. Cache memories serve as temporary staging areas for information that the processor is likely to need shortly. By setting up caches to hold data that are likely to be accessed often, we can perform most memory operations using the fast caches. The main idea of a memory hierarchy is that storage at one level serves as a cache for storage at the next lower level.