Chapter 4. Midcentury Classical
In natural science, Nature has given us a world and we’re just to discover its laws. In computers, we can stuff laws into it and create a world.
Alan Kay
While the debate over the completeness and interpretation of quantum mechanics was captivating the attention of the physics community, the rest of the world was preoccupied with a very different topic in 1935. The previous two years had seen Hitler become chancellor and then president of Germany, styling himself “Führer,” and in 1935 he announced that the country would begin rearming itself, violating the Treaty of Versailles. Also in 1935, Mussolini invaded Ethiopia in defiance of the League of Nations, revealing its powerlessness to stop aggression, and the world braced itself for war. In the same year, Robert Watson-Watt, a Scottish physicist, demonstrated a method that used radio waves to detect a British bomber at a significant distance, proving the feasibility of a system called “radio detection and ranging,” or radar. Daydreaming about the behavior of subatomic particles might be fascinating, but it did nothing for national security. Physicists turned from the theoretical to the practical and began to contribute to their nations’ readiness for war.
As the world descended into conflict, scientists continued to work on practical applications of the theoretical advances of the prior decades. Radar was improved through the collective work of Allied scientists and engineers, who delivered advances like the cavity magnetron, which generated higher-frequency microwaves that greatly improved detection compared with longer-wavelength radio waves. Wireless communication brought greater agility to the battlefield, but it also exposed a nation’s plans and strategies to a new vulnerability. Radio transmissions were routinely intercepted, so the warring nations began to increase the sophistication of their encryption methods through the use of complex mechanical devices like the German Enigma machine.
Even capturing an Enigma device, as the British Royal Navy did in 1941, was not enough to break the German code, as the elaborate scrambling scheme had so many variations based on the seed number chosen before encryption that the best mathematicians and puzzle solvers in the UK could not defeat it in a timely fashion. The UK launched two separate secret projects to overcome the German code, both at Bletchley Park. One effort sought to build machines using mechanical relays, called bombes, that would use the captured knowledge of the Enigma machine to accelerate the breaking of an encrypted message. Critical to the development of the bombes was a young mathematician named Alan Turing. While the bombe was not a computer per se, since it was only designed to carry out a narrow set of computations and was not programmable, the other project at Bletchley Park was. Known as “Colossus” and developed by engineer Tommy Flowers, it was the first electronic, programmable, digital computer, and it was used to crack an even more complex German code, the Lorenz cipher.
Flowers, son of a bricklayer, took evening classes at the University of London to earn a degree in electrical engineering, then joined the telecommunications branch of the Royal Post Office. When Turing asked his director in 1941 for help with the bombe, he was introduced to Flowers. The bombe used mechanical relays, and Turing was exploring how vacuum tubes—“valves,” as they were known in the UK—might improve his machine. Mechanical relays are electromagnetic switches, where an electrical current is used to physically open and close a circuit—a light switch with a magnet taking the place of a human finger to flick it up and down. Vacuum tubes had no moving parts; instead, a small current applied to one lead controlled a much larger current flowing between the other two leads. In effect, tubes moved the wall switch’s function inside the light bulb. Flowers had designed and built telephone networks entirely using electronic valves instead of electromechanical relays, and he provided some advice. Turing was impressed by Flowers and encouraged him to develop his ideas further. Using his own money to develop the prototype, Flowers eventually convinced the government his system could work. By December 1943, the Colossus Mark I, with its 1,800 vacuum tubes, was operational at Bletchley Park. A restored Colossus Mark I is seen in Figure 4-1.
The Colossus system was programmable using a variety of switches and patch cables, with data entered via paper tape; in combination, these provided the inputs the machine needed to decrypt a Lorenz cipher message. Most notably, the input on the tape consisted of a series of holes that represented the encrypted message in binary form. While it was not a general-purpose computer, the Colossus could be said to be a partial realization of Charles Babbage’s analytical engine. Conceived of in the 1830s, the engine (plans for which are seen in Figure 4-2) was a mechanical computer designed to take punched cards as input for computations performed by a central “mill,” with results delivered into a memory store. The analytical engine was never built, but in the sense that it was designed as a general-purpose computer, it could be said to have been more sophisticated than the Colossus. Babbage had initially designed and prototyped a simpler machine, the difference engine, which was more calculator than computer and not general purpose.
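The tape's hole patterns were simply binary: each row of holes across the tape encoded one character in a five-bit teleprinter code. A minimal sketch of that representation (the bit patterns below are arbitrary illustrations, not the actual teleprinter alphabet):

```python
def punch(codes, width=5):
    """Render each 5-bit code as one row of paper tape: 'o' = hole, '.' = no hole."""
    return ["".join("o" if (code >> (width - 1 - i)) & 1 else "." for i in range(width))
            for code in codes]

# Two example rows of tape for the (illustrative) codes 10110 and 00001.
print(punch([0b10110, 0b00001]))  # ['o.oo.', '....o']
```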
His collaborator from 1840 onward, Ada Lovelace, was the first to recognize the potential of a general-purpose computer for more than just math, writing that it
might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine...Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
Babbage and Lovelace form the canonical example of the inseparable dependence of computer engineers on computer scientists, and vice versa. The collaboration between experimentalists and theorists is sometimes called “codesign,” a term that attempts to capture the symbiotic and interdependent nature of the work done by both camps. In the middle of the 20th century, codesign was a vital ingredient in the astonishing pace of progress in computation. Before Alan Turing was helping design the bombes at Bletchley Park, he’d been a gifted mathematician, who in 1937 had published what has been called “easily the most influential math paper in history,” where he laid out the foundation for the entire field of computation.
Titled “On Computable Numbers, with an Application to the Entscheidungsproblem,” the paper was Turing’s attempt to address a problem posed by the mathematician David Hilbert: the “Entscheidungsproblem,” or “decision problem.” Hilbert had asked in 1928 whether there was a standard method or process that could be applied to any mathematical proposition to decide the truth or falsehood of that proposition. Hilbert was pursuing a question first raised by Gottfried Leibniz, who had built a calculating engine in the 17th century, 200 years before Babbage’s analytical engine. The machine had caused Leibniz to contemplate the notion of calculable and incalculable problems. Hilbert’s reformulation as the “decision problem,” which Turing would attack by way of what is now called the halting problem, set the stage for a revolution.
Turing approached the problem through the theoretical work of another mathematician and logician, Kurt Gödel, whose incompleteness theorems, published in 1931, proved that any consistent set of logical axioms will have limits to what it can prove. Extending that line of thinking to Hilbert’s question, Turing’s 1937 paper described a hypothetical machine that read instructions and data from a tape and carried out operations based on the input. Having described the idea of a digital computer as a culmination of a centuries-long conversation among theorists, Turing found himself just a few years later involved in the efforts to practically realize the dream of a universal computing machine.
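Turing's hypothetical machine is simple enough to sketch in a few lines: a read/write head moves along a tape, and a table of rules maps (state, symbol) pairs to actions. This is an illustrative modern model, not Turing's own notation; the `flip` program here inverts a string of bits and halts at the first blank:

```python
def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    """Execute a table-driven Turing machine.
    program maps (state, symbol) -> (write_symbol, move, next_state);
    move is -1 (left), 0 (stay), or +1 (right); state 'halt' stops the machine."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A three-rule program: flip every bit, halt on the first blank cell.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(flip, "1011"))  # 0100
```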
Before the war efforts, Turing’s studies brought him to Princeton University in the US, where in 1936 he began his PhD with Alonzo Church as his advisor. Church had also been working on the problem of computability, and in fact had published a paper very much along the lines of Turing’s work. Their parallel efforts are recognized today as the Church-Turing thesis.
Flowers and Turing were not the only ones working on programmable computers. In fact, under the Nazis, the Germans had not only developed the Enigma and the Lorenz code machines, limited-purpose computers in their own right, but had also built, in 1941, the world’s first automatic, digital, programmable computer. Built by Konrad Zuse, who, like Flowers, was an engineer, the Z3 system used a punched tape to load instructions and was able to perform floating point arithmetic. Meant to speed the calculations required for aircraft design, the system failed to capture the imagination of the Nazis, and when it was destroyed in allied bombing raids in 1943, thankfully, there was no effort to replace it.
Meanwhile, in the US, similar scientific and engineering efforts were afoot. At MIT, Norbert Wiener—an American-born child of Jewish immigrants, a child prodigy, a philosopher, and mathematician who had studied with David Hilbert—worked on systems to control antiaircraft guns automatically, using analog electronic means to track and predict the path of enemy planes. The US army set up the Signal Intelligence Service as a sort of counterpart to Bletchley Park, with a focus on breaking the codes used by the Japanese in the Pacific theater. And at Bell Labs, a researcher by the name of Claude Shannon was working on theoretical understanding of information.
Working on analog computing devices during his graduate studies in electrical engineering at MIT in the ’30s, Shannon had drawn on the work of George Boole to show that the machine’s electromechanical relay circuits could be analyzed and simplified using Boolean algebra. This, like the experience of Tommy Flowers, had applications in the design and operation of telephone networks. In fact, Shannon had, as a child, used a barbed wire fence as a telegraph wire to communicate with a friend. Shannon’s work after earning his PhD at MIT was more theoretical in nature. He joined Bell Labs in 1941 to work on fire control systems along the lines of Wiener’s cybernetics work, and on cryptographic systems.
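Shannon's insight was that relay networks obey Boolean algebra, so simplifying a circuit becomes an exercise in simplifying an expression. A toy demonstration of the idea, checking equivalence by exhaustive truth table (a modern sketch, not Shannon's notation):

```python
from itertools import product

def equivalent(f, g, nvars):
    """Two circuits are equivalent if they agree on every input combination."""
    return all(f(*bits) == g(*bits) for bits in product([False, True], repeat=nvars))

# (A AND B) OR (A AND NOT B) would need three relay contacts;
# Boolean algebra reduces it to a single contact, A.
original = lambda a, b: (a and b) or (a and not b)
simplified = lambda a, b: a

print(equivalent(original, simplified, 2))  # True
```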
His involvement in code breaking brought him in contact with Alan Turing, who in 1943 had been sent to the US to share techniques from Bletchley Park with the Navy. During a visit to Bell Labs, Turing and Shannon met in the cafeteria, and Turing walked Shannon through his paper “On Computable Numbers,” which had a huge impact on Shannon. Much of his work during the war at Bell Labs culminated in his 1948 paper “A Mathematical Theory of Communication,” where among his many contributions he coined the word “bit” to mean a binary digit with a value of 0 or 1. His theoretical work was to lay the foundation for information theory and much of digital communications networks for decades to come.
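The bit is not just a unit of storage but a unit of information: Shannon's 1948 paper defines the entropy H = -Σ p·log₂(p), the average number of bits per symbol from a source with symbol probabilities p. A minimal calculation of that formula:

```python
from math import log2

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)): average bits per symbol."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin flip carries exactly one bit of information;
# a biased coin carries less, because its outcome is more predictable.
print(entropy_bits([0.5, 0.5]))  # 1.0
print(entropy_bits([0.9, 0.1]))  # about 0.47
```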
The work of Tommy Flowers to build Colossus had its doppelgänger in the US as well. The US Army, wanting a faster way to calculate the large number of differential equations required for artillery firing tables (used to aim artillery barrages), funded an effort at the University of Pennsylvania to build a computer. John Mauchly and J. Presper Eckert began building ENIAC, or Electronic Numerical Integrator and Computer, starting in 1943. Inspired by the design of Vannevar Bush’s differential analyzer, for which Shannon had created his logical circuits, the ENIAC was, like Colossus, a programmable, general-purpose computer that used vacuum tubes for gates.
With the problems of breaking Nazi and Imperial Japanese codes largely solved, the champions of computing began to look to other challenges that their increasingly capable machines could tackle. Even before ENIAC was fully operational, the idea of using it to ease the calculations needed for the Manhattan Project was raised. The project to build the atomic bomb was enormously complex, requiring the work of a small army of human “computers,” mostly women, to carry out the mathematics. Frustrated by the slow pace of progress, Richard Feynman, then managing a group of computers at Los Alamos for the Manhattan Project, came up with an innovative approach to breaking down a large problem into smaller ones that could be solved in parallel, a technique that closely resembles modern approaches to parallel digital computing.
Another key figure at Los Alamos, John von Neumann, was well aware of both the ENIAC effort and Feynman’s and others’ efforts to make human computers more efficient. Von Neumann had been at Princeton’s Institute for Advanced Study (IAS) since shortly after its creation in the early ’30s. Von Neumann began lining up support from Princeton and secured funding from the Office of Naval Research and the Atomic Energy Commission before starting to build his machine in 1946.
With the war over, von Neumann’s efforts were helped by the availability of surplus components such as vacuum tubes, which the IAS machine, like ENIAC and Colossus, used for its logic gates. Memory was implemented using a novel approach in which cathode ray tubes were scanned continuously and what eventually would be called a pixel’s state was either read out or changed. The IAS team took advantage of the fact that the glow of an activated spot on the tube would persist between scans of the electron gun and that the gun itself could “sense” whether a spot was illuminated. Forty tubes were used to store an array of 1,024 “words,” each 40 bits long, for a total of 40,960 bits of memory. The IAS machine (see Figure 4-3) was also innovative in the way it loaded its program from punched tape into the same memory space as the data, creating an instance of Turing’s tape abstraction in those tubes.
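One way to picture that geometry (a sketch assuming the bit-parallel arrangement commonly described for the IAS machine, with each tube holding one bit position of every word, so a 40-bit word is read by sampling the same spot on all 40 tubes at once):

```python
# Assumed layout: tube i stores bit i of each of the 1,024 words.
TUBES, WORDS, WORD_BITS = 40, 1024, 40

def read_word(tubes, address):
    """Assemble a 40-bit word by reading spot `address` from every tube."""
    return [tubes[bit][address] for bit in range(WORD_BITS)]

tubes = [[0] * WORDS for _ in range(TUBES)]
tubes[3][17] = 1  # light one spot: bit 3 of word 17

print(read_word(tubes, 17)[3])        # 1
print(TUBES * WORDS, "bits total")    # 40960 bits total
```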
The team was challenged by the quality of the surplus components, especially the 6J6 vacuum tubes, about 2,000 of which the team used in the machine. Faulty tubes were easy to detect with startup diagnostics and could be replaced. The real issue was the tubes operating out of spec, introducing subtle errors over time to the machine’s operations. Ingenuity and engineered redundancy led to solutions to the team’s problems, all of which have left a mark on contemporary computers.
What was potentially more challenging for von Neumann was to maintain support from the sources of funding in the new postwar context. His involvement in the Manhattan Project allowed him to create solid arguments for the use of his computer to continue research into nuclear weapons, a topic that seemed germane given the newly simmering rivalry with Soviet Russia. For those who wanted a postwar focus on peace and prosperity, von Neumann could emphasize the role of computation in the development of nuclear reactors for energy generation, beckoning to the promise of the “atomic age.” More broadly, von Neumann seemed to have an instinct for the ability of digital computers to aid in scientific inquiry, especially through the use of simulation or modeling.
In an effort to demonstrate the potential for computers to simulate natural processes other than the compression and blast of a nuclear weapon, von Neumann brought a leading meteorologist, Jule Charney, to IAS, where he built the first computer models to simulate weather patterns. Another researcher invited to use the IAS machine was Nils Barricelli, an Italian-Norwegian scientist who pioneered the field of artificial life, his programs representing mutations and evolutionary pressure among life forms simulated in the vacuum and cathode ray tubes of the IAS machine.
The researchers and engineers working around the IAS machine created the Monte Carlo method, an immensely powerful statistical approach that creates an approximate model of a complex system using random sampling. Invented by Stan Ulam, the Monte Carlo method remains tremendously useful to this day in physics, engineering, finance, and artificial intelligence.
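The classic introductory Monte Carlo example estimates π: scatter random points in the unit square, and the fraction landing inside the quarter circle approximates π/4. This is a modern Python sketch of the method's core idea, not the Los Alamos calculations themselves:

```python
import random

def estimate_pi(samples=100_000, seed=42):
    """Monte Carlo estimate of pi by random sampling of the unit square."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(samples))
    return 4 * hits / samples

print(estimate_pi())  # roughly 3.14
```

The estimate's error shrinks as the number of samples grows, which is why the method pairs so naturally with fast digital computers.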
Though the IAS machine was decommissioned in 1958, just 13 years after the beginning of the project, its impact on the development of classical computing was profound. Von Neumann seemed to have a keen sense for the most powerful theoretical and experimental ideas from the scientists, mathematicians, and engineers contributing to the war effort, which he drew on to powerful results. Most importantly, he demonstrated a very open attitude toward the creations of the team at the IAS, happily running off copies of the machine’s specification to anyone who asked. As a result of being shared freely in this manner, there were 15 documented “clones” of the IAS machine built at other institutions around the world.
In contrast, Eckert and Mauchly, who led the ENIAC and UNIVAC projects at the University of Pennsylvania, were quite territorial about their work. They complained that von Neumann and others were “snooping around” their work, and in 1947, they formed the Eckert-Mauchly Computer Corporation and began filing patents to protect what they viewed as their intellectual property. The legal fight over these foundational patents took decades to play out, first being granted to Eckert and Mauchly in 1964 and finally being overturned in the early ’70s. In the meantime, the IAS clone devices, with derivative and humorous names like “MANIAC” at Los Alamos or “Johnniac” at the RAND Corporation, had a profound and lasting influence on computing. In what might be the first example of the power of an “open source” strategy in computing, von Neumann’s influence is apparent to this day in classical computing architectures in everything from a web server to the smartphone in your pocket, while the ENIAC architecture is lost to the ages.
However, if the computational demands that provided the motivation and funding for the Bletchley Park Bombes, Colossus, ENIAC, and the IAS system had remained their sole motivation, the momentum gained in the 1940s may have dissipated in the postwar era. Instead, it seems that every researcher who got their hands on the machines or even read about the work that was being carried out on them was able to imagine more uses for the technology, and dreams of a digital future continued to gain steam. In fact, immediately after World War II came one of the critical defining moments in the history of classical computing: the invention of the transistor.
The devices built during the war had transitioned from mechanical relays to vacuum tubes, which had brought huge gains in speed and scale. However, like a light bulb, tubes were prone to physical failure as their components heated and cooled repeatedly through their duty cycle. Thankfully, as interest in machines that used dynamic circuits to solve Boolean logical problems surged during the war, physicists were hard at work building out our working knowledge of the universe in ways that could improve upon vacuum tubes. In particular, John Bardeen and Walter Brattain, two researchers at Bell Labs in New Jersey, were exploring the characteristics of materials known as “semiconductors.” The term was invented in the World War II era to describe a category of materials first identified at the end of the 19th century, which had a mix of characteristics of conductors like copper and insulators like glass. Of particular interest to researchers like Bardeen and Brattain were the ways in which semiconductors’ behavior could be manipulated and controlled. Their experiments resulted in a demonstration of a germanium strip with two electrodes, which, when a small current was applied to one end, would produce a much-amplified current at the other end.
Others at Bell Labs, such as William Shockley, built upon this experiment and, in 1948, conceived what they called the junction transistor. Consisting of three layers of semiconducting material sandwiched together, a junction transistor was capable of powerfully amplifying electrical signals, which would lend its name to the new, cheap, and compact “transistor radio.” More importantly, a junction transistor could act as a digital switch: when current was applied to the middle section of the junction, current could flow from one end to the other. Turn off the current to the middle, and the current could not flow.
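That switching behavior maps directly onto Boolean logic, which is why the transistor could take over the vacuum tube's role in computers. A deliberately idealized sketch (the `transistor` function below is a logical abstraction, not device physics; real gates are built from transistor networks, not Python functions):

```python
def transistor(base, collector_in):
    """Idealized switch: current passes from collector to emitter
    only while the base is driven."""
    return collector_in and base

# Chaining two switches in series gives AND; NOT inverts the base signal.
def AND(a, b):
    return transistor(b, transistor(a, True))

def NOT(a):
    return not transistor(a, True)

def NAND(a, b):
    return NOT(AND(a, b))

print([NAND(a, b) for a in (0, 1) for b in (0, 1)])  # [True, True, True, False]
```

NAND is notable because every other Boolean function can be built from it alone.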
The typical vacuum tube used in computing devices could last a few thousand hours in operation before its filaments burnt, its vacuum failed, or some other mechanical flaw ended its useful existence. With no moving or delicate components, the “solid state” transistor far exceeded the lifespan of tubes, with some transistors from the ’50s and ’60s in operation to this day. A huge barrier to building complex, useful computational devices that didn’t require the continuous application of specialized skills to maintain had been removed.
Bell Labs’ breakthrough ushered in the electronic age due to the simplicity, low cost, and durability of the transistor. The earliest pioneers of the vacuum tube era of computing—such as Remington Rand, which had bought UNIVAC from Mauchly and Eckert; IBM, with its model 701 “Defense Calculator”; or Ferranti and English Electric in the UK—all began transitioning to solid state machines. However, there were still challenges to building robust universal computers capable of productive work at scale. Transistor-based machines had to be assembled by hand, with workers soldering each tiny component to circuit boards. Solder joints are fallible, and a single cracked joint could cause the machine to stop immediately. Even the most skilled workers with the best materials and the most controlled environment will have a nonzero probability of creating a joint that will fail. Multiply that probability by a sufficiently large number of components soldered to a board, and the overall chance of failure will rise. The largest transistor-based mainframes of the late ’50s, such as the IBM 7090, had 50,000 transistors.
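The arithmetic behind that multiplication is stark: if joints fail independently, the machine works only when every joint works, so the chance of at least one failure is 1 - (1 - p)^n. The per-joint failure rate below is an assumed figure chosen purely for illustration:

```python
def assembly_failure_probability(p_joint_failure, n_joints):
    """Probability that at least one of n independent joints fails:
    1 - (1 - p) ** n."""
    return 1 - (1 - p_joint_failure) ** n_joints

# Even an excellent 1-in-100,000 per-joint failure rate (assumed, for
# illustration) makes failure likely at IBM 7090 scale.
print(assembly_failure_probability(1e-5, 50_000))  # about 0.39
```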
This challenge was the dominant theme of the 1950s, with huge amounts of energy and R&D budget focused on potential solutions, none of which made much of a dent. Approaches like MERA, or Modular Electronics for Radar Applications, funded by the US Air Force, which tried to establish a LEGO-brick system of electronic components that snapped together, typified the attempts to solve the problem.
Then, at the very end of the decade, researchers at Texas Instruments and Fairchild Semiconductor realized at almost the same time that the solid state of transistors meant they could be “printed” on a monolithic sheet of semiconducting material. By 1962 the defense industry was using integrated circuits in the guidance system of the Minuteman missile, and in 1964 IBM launched the System/360, with a combination of printed resistors and soldered transistors. The electronic age gave way to the computer age, and integrated circuits paved the way to the microprocessor.
In the early, heady days of the computer age, it seemed as though these machines were capable of anything, with each generation yielding ever more powerful devices. Gordon Moore’s observation that the number of components on an integrated circuit was doubling every year was originally made in 1965, and though the doubling period had stretched to every two or so years by 2020, it has been an astonishingly constant rate of innovation.
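That doubling compounds dramatically. A quick calculation under stated assumptions (a uniform two-year doubling period held from 1965 to 2020, a simplification of the actual, shifting rate):

```python
def moore_growth(years, doubling_period=2.0):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# 55 years of two-year doublings: 2**27.5, a factor of roughly
# two hundred million.
print(f"{moore_growth(2020 - 1965):.3g}")
```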
These miracles of science and engineering, however, were imagined, designed, and built in the context of Shannon and Turing’s classical information theory. While extremely powerful, the reliance on Boolean logic creates a weakness, with certain types of problems demanding exponentially more computing power to solve than even our most powerful machines can deliver. This shortcoming may be solved by the introduction of quantum mechanical concepts into information theory, if we can invent the machines to do it.