COMPUTING AS WE know it today is a wild dance between the central processing unit (CPU) and memory. Instructions in memory are fetched, and the CPU executes them. In executing instructions, the CPU reads data from memory, changes it, and then writes it back. Data and instructions that are used a lot are pulled in closer to the CPU, via cache. Data and instructions that aren’t needed for the time being are swapped out to disk by the virtual memory system.
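The fetch-execute dance described above can be sketched as a toy simulation. Everything here is invented for illustration: the opcode names, the single-accumulator design, and the flat list standing in for memory correspond to no real CPU, but the loop shows the rhythm of fetch, decode, execute, and write-back.

```python
# A toy machine: memory is a flat list of integers holding both
# instructions and data. The "CPU" repeatedly fetches the instruction
# at the program counter, decodes its opcode, and executes it.
# Opcodes (LOAD/ADD/STORE/HALT) are hypothetical, for illustration only.

LOAD, ADD, STORE, HALT = 1, 2, 3, 0

def run(memory):
    pc = 0    # program counter: address of the next instruction
    acc = 0   # accumulator: the CPU's single working register
    while True:
        opcode, operand = memory[pc], memory[pc + 1]   # fetch
        pc += 2                                        # advance past it
        if opcode == LOAD:        # read data from memory into the CPU
            acc = memory[operand]
        elif opcode == ADD:       # change it inside the CPU...
            acc += memory[operand]
        elif opcode == STORE:     # ...and write it back to memory
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Program: load mem[8], add mem[9], store the sum at mem[10], halt.
# Cells 0-7 hold instructions; cells 8-10 hold data (5, 7, and a result slot).
program = [LOAD, 8, ADD, 9, STORE, 10, HALT, 0, 5, 7, 0]
run(program)
```

Note that instructions and data share the same memory, just as they do in a real von Neumann machine; only the CPU's interpretation distinguishes them.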
To understand this dance you need an understanding of both the CPU and memory. Which, then, to study first? In most cases the CPU is considered the star of the show and is assumed to lead the dance. That’s a mistake. There are multitudes of CPU designs out there, all of them different and all stuffed to the gills with tricks to make their own parts of the dance move more quickly. Memory, on the other hand, is a simpler and less diverse technology. Its moves in the dance are fairly simple: store data from the CPU and hand it back when requested, as quickly as possible. To a great extent, memory dictates the speed at which the dance proceeds. The designs of our CPUs are heavily influenced by the speed limitations of system memory.
That being the case, it makes sense to study memory first. If you understand memory technology thoroughly, you’re halfway to understanding anything else in a modern computer system.
For a long time, computers were really special-purpose haywire calculators. ...