Chapter 1. Quantum Quinceañera
It’s easy for quantum computing to sound like a fanciful theorist’s dream…But just extrapolating Moore’s Law…somewhere around the year 2020 or 2030 is when we hit one bit per atom…So if we really want to keep getting faster…Quantum weirdness is essentially the only resource we have that’s still untapped for computing.
Google’s blog post “What Our Quantum Computing Milestone Means” was written by its CEO, Sundar Pichai, so it was clearly meant to be a big deal. That week in October 2019, it was accompanied by several articles and an op-ed in the New York Times, a news article and a peer-reviewed paper in the prestigious journal Nature, and stories in several other tech outlets. Google had gone to great lengths to generate attention for its accomplishment. Pichai called it the “hello world” moment for the field of quantum computing, something Google had been working at for 13 years, and the feat had been achieved on a device named Sycamore, seen in Figure 1-1. Pichai wrote in the blog post that his researchers had achieved “quantum supremacy,” meaning they had “used a quantum computer to solve a problem that would take a classical computer an impractically long amount of time [to accomplish].” This, the reader was informed, was a “milestone in our effort to harness the principles of quantum mechanics to solve computational problems.” But to the general public, and even to the tech journalists covering the announcement, it was far from clear just what Google had accomplished.
Journalists struggled to get to grips with the implications of what was being announced. First things first: what was meant by a “classical” computer? The term is borrowed from physics, where behaviors at the subatomic scale are referred to as quantum, while behaviors above that scale are considered “classical.” So your laptop, your phone, even the largest supercomputer in the world—all are computing classically. That raised the next question: what did it mean to build computers using the “principles of quantum mechanics”? In the blog post, Pichai had offered the following as a starting point:
A bit in a classical computer can store information as a 0 or 1. A quantum bit—or qubit—can be both 0 and 1 at the same time, a property called superposition. So if you have two quantum bits, there are four possible states that you can put in superposition, and those grow exponentially. With 333 qubits there are 2^333, or 1.7x10^100—a Googol—computational states you can put in superposition, allowing a quantum computer to simultaneously explore a rich space of many possible solutions to a problem.
A “1” followed by 100 zeros is called a googol and is the source of the company’s name. The Easter egg, cute as it was, did little to build understanding. That is a common characteristic of attempts to explain quantum computing: they reach for analogies that seem straightforward but don’t hold up. Pichai’s statement that quantum computers can “simultaneously explore…many possible solutions,” for example, is simply inaccurate. And yet these false analogies persist to this day, as even well-informed and competent professionals such as Pichai struggle with the inadequacy of our language to explain what is actually going on.
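The one part of the quote that can be checked directly is the arithmetic. A minimal sketch in Python (the variable names are mine, not Google’s) confirms that the state space of 333 qubits is indeed on the order of a googol:

```python
# Back-of-the-envelope check of the numbers in Pichai's blog post.
# A register of n qubits has 2**n basis states; Pichai's example uses n = 333.
n_qubits = 333
states = 2 ** n_qubits

googol = 10 ** 100  # a "1" followed by 100 zeros

print(f"2^{n_qubits} has {len(str(states))} digits")     # 101 digits
print(f"2^{n_qubits} / googol = {states / googol:.2f}")  # roughly 1.75
```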
Cade Metz, a journalist writing in the New York Times that day, side-stepped these challenges, noting merely that the technology “relies on the mind-bending ways some objects act at the subatomic level or when exposed to extreme cold.” His colleague at the New York Times, David Yaffe-Bellany, also weighed in the same day, grappling with the topic a bit more deeply. Whereas classical computers represent information as “bits,” which can only take the values 1 or 0 and support computation built on that Boolean representation of two states, true and false, quantum computers use “qubits.” Qubits, a portmanteau of “quantum” and “bits,” Yaffe-Bellany wrote, can be 1 and 0 simultaneously, and therefore can hold an exponentially larger amount of information than an equal number of classical bits.
The definition of a qubit is accurate, but Yaffe-Bellany’s description of superposition is, like Pichai’s, the kind of explanation of a quantum mechanical concept that drives physicists crazy: its inaccuracies leave an entirely incorrect impression. Physicists will tell you that superposition doesn’t mean the qubit has both values at the same time but rather that it is a probabilistic combination of both, which doesn’t exactly shed light on the matter either. Popular writing has long struggled, largely unsuccessfully, to come to grips with the “spooky” behaviors at the quantum scale, and the press coverage of the Google event didn’t fare any better. In IEEE Spectrum, qubits were likened to rolling “marbles” with their hemispheres labeled “zero” and “one,” while bits were coins lying flat with only one face visible. A metaphor that physicists find particularly galling is that quantum computers evaluate “all possible answers at once” before settling on the correct one, something they decidedly do not do. Yet coverage of quantum computing continues to fall back on these flawed explanations.
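For readers who want the physicists’ version, that “probabilistic combination” can be stated compactly in standard Dirac notation, which none of the coverage used:

```latex
% A single qubit in superposition: a weighted combination of the basis states
% |0> and |1>, not "both values at once."
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measuring the qubit yields 0 with probability |alpha|^2 and 1 with
% probability |beta|^2; the superposition itself is never observed directly.
```

The amplitudes are complex numbers, and quantum algorithms work by interfering them, not by evaluating every possible answer in parallel.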
Having failed to find metaphors that brought clarity, the coverage of the Google supremacy experiment turned to a more comfortable framing: industry competition. Metz’s piece noted that “many of the tech industry’s biggest names, including Microsoft, Intel and IBM as well as Google, are jockeying for a position in quantum computing.” Indeed, the run-up to the supremacy announcement had featured some uncharacteristically aggressive competitive posturing from tweedy IBM Research in anticipation of the puffy-vested Google team’s big moment.
The announcement had been foreshadowed by some drama fueled by that competition. Someone at NASA, a collaborator on Google’s experiment, had accidentally posted the research paper to the NASA Technical Reports server weeks before Pichai’s blog post. It might have gone unnoticed had Google’s own search engine not ingested it and added it to Google Scholar, from which it was fed into the email alerts of anyone monitoring the topic of quantum computing. Members of IBM’s quantum computing effort used the advance peek to formulate a rebuttal aimed at undermining Google’s claims.
IBM had a secret weapon in its effort to undermine Google’s claims: Summit, a supercomputer at the US Department of Energy’s Oak Ridge National Laboratory. IBM had installed Summit in 2018, and its 150 petaFLOPS (150 quadrillion floating-point operations per second) ranked it as the most powerful computer in the world at the time. Leveraging its intimate knowledge of Summit’s architecture, IBM hypothesized that by using petabytes of hard drive storage as working memory, the machine could complete the same experiment in two and a half days rather than the 10,000 years Google had estimated. The rebuttal, posted to IBM’s own blog just ahead of Google’s announcement, did little beyond making the whole thing even harder to make sense of.
Regarding investment in the private sector, Yaffe-Bellany cited a Nature article by Elizabeth Gibney, who found that between 2012 and 2019, “venture capitalists have invested more than $450 million into start-ups exploring the technology.” On the global stage, quantum computing was characterized as a strategic race at the superpower scale, with “China…spending $400 million on a national quantum lab” and “the Trump administration…promising to spend $1.2 billion on quantum research, including computers.” The US investment sounded like an attempt to catch up, because China had “filed almost twice as many quantum patents as the United States in recent years.”
This familiar context of a race to be first to market with a new technology, along with the sheer scale of global investment, may have played a role in creating a sense that quantum computing was imminent. The idea that this was the “hello world” moment for quantum computing, a phrase used by Pichai and repeated in Gibney’s Nature article about the experiment, certainly reinforced that impression. The true nature of the experiment seemed as difficult to explain as the quantum behaviors that underpinned the workings of the machine. In fact, the experiment was designed to take advantage of that “quantumness” in a way that would be particularly challenging for a nonquantum classical computer to emulate.
The most cogent explanation of just what Google’s announcement meant came from Scott Aaronson, a theoretical computer scientist and director of the Quantum Information Center at the University of Texas at Austin. Aaronson had been involved in the design of the experiment carried out by Google’s research team and had a reputation as a science communicator and prolific blogger. In an op-ed in the New York Times a week after the initial announcement, Aaronson set the context appropriately, writing, “the calculation doesn’t need to be useful: much like the Wright Flyer in 1903, or Enrico Fermi’s nuclear chain reaction in 1942, it only needs to prove a point.” Drawing parallels to both the invention of flight and the dawn of the nuclear age, however apt, did little to clear up the confusion in the press.
The challenges faced by the journalists covering the event, and by their readers, contributed to a fundamental misapprehension about the “supremacy” moment: that the technology was on the cusp of emergence, that within some reasonable amount of time it could be expected to be useful for something, and that it was an evolutionary step forward for the information technologies with which we were already familiar. That is how information technology has worked since the late 20th century: a new chip, a new application, faster wireless networking, better functionality, or some other measure of progress through innovation makes its way into the media and into our consciousness, and a year or three later, there it is in the store, on the ecommerce site, or in the App Store. Generations of technology iteratively improve on their predecessors: Apple Newtons are supplanted by Palm Pilots and then RIM BlackBerrys, which are in turn kicked to the curb by the iPhone.
It’s a comfortable narrative because it’s familiar, but that doesn’t make it an appropriate frame for understanding what quantum computing was in 2019. There was, to be fair, enough backstory for the Google accomplishment to look like an arrival, a coming out. Time magazine had run a cover story in 2014, five years earlier, heralding the launch of D-Wave’s “Infinity Machine,” the first commercially available quantum computer. The company, based in British Columbia, Canada, claimed its machine would soon be capable of cracking all computer encryption schemes. The math that makes such a feat theoretically possible does exist (we will explore it in Chapter 7), but given the limitations of all known quantum devices, the article’s claims were seen as highly dubious by the quantum physics and quantum computing communities. Even at press time, D-Wave’s claims of operating on the quantum level were under fire from scientists, and by 2019, no one had managed to produce any kind of performance advantage using a D-Wave machine. Still, given the time that had elapsed between the Time story and the Google accomplishment, it seemed reasonable to believe the technology had matured considerably.
But no matter what impression the collective coverage of the experiment conveyed, the reality of quantum computing was far less familiar and far less mature. What the Google team, the IBM team, similar teams at various tech giants and startups, and researchers in academic labs around the world were working toward was a paradigm shift in the literal sense. The machines they were building departed from the foundational information theory that has governed every transistor on every chip, every instruction in every piece of software, every interaction with a computer since computers were invented in the middle of the 20th century. As Aaronson wrote, quantum computing was an attempt to rewrite the “rules that Charles Babbage understood in the 1830s and that Alan Turing codified in the 1930s,” rules adhered to by “every computer on the planet—from a 1960s mainframe to your iPhone.” If successful, these machines would yield solutions to classes of problems that are practically impossible to solve using classical computers, no matter how powerful we make them. IBM’s Dario Gil offered his own take on Pichai’s playful “googol” explanation in an interview with the New York Times that accompanied IBM’s rebuttal. He said that if one were to build a system with 100 perfect qubits, “you would need to devote every atom of planet Earth to store bits to describe that state of that quantum computer.”
In 2019, however, no one had even built a single perfect qubit, nor does one exist at the time of this writing. The Google experiment was contrived and, from the perspective of the useful things we expect computers to do, pointless. Tasked with anything we’d recognize as useful, the best quantum computer would lose in a head-to-head battle with a 10-year-old laptop. It wouldn’t even be close.
So why the excitement? Why all the investment? The context the media missed or glossed over was that the incredible progress delivered by classical computing had, over the preceding decade, been showing signs of coming to an end, and that struck fear into captains of tech and heads of state alike. Moore’s law, the massively powerful driving force behind decades of technological advancement and economic growth, was faltering. A growing sense of unease gripped the tech world, causing it to cast about, further and further afield, for a new engine of growth. Quantum computing, despite being nowhere near ready for a mainstream debut, was emerging as a potential savior.
In fact, Moore’s law is not a law at all. In 1965, Gordon Moore, the director of research and development at Fairchild Semiconductor, wrote an article observing that the number of components on a chip had doubled each year since 1958, roughly doubling what a chip could do along with it, and he expected the trend to continue for at least another 10 years. A decade later, Moore reassessed his prediction at a conference, saying he believed the trend would continue but revising the doubling period to every two years. Carver Mead, a famed professor at Caltech, coined the term “Moore’s law,” and it stuck in the language, largely because the entire chip industry kept demonstrating it for the next 30 years, as illustrated in Figure 1-2.
Moore’s law describes the progression from the Intel 8088 chip in 1980, with its 29,000 transistors, to Intel’s Itanium chip in 2000, which had 220 million. In many ways, global trade, economic growth, and geopolitics have all been calibrated in tight coordination with this reliable progress, progress so remarkable that it seems to defy probability: predictions of its failure have been made again and again, only for a new chip to deliver yet another leap forward. In the early 2000s, however, the concerns began to accumulate, and some attributed the slowing productivity of that era to a slowdown in microprocessor advancement.
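A back-of-the-envelope calculation, using only the two data points just cited, shows how closely that progression tracks Moore’s revised prediction:

```python
import math

# Two data points cited above: Intel 8088 (1980) and Itanium (2000).
transistors_1980 = 29_000
transistors_2000 = 220_000_000
years = 2000 - 1980

doublings = math.log2(transistors_2000 / transistors_1980)
print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling roughly every {years / doublings:.1f} years")
# ~12.9 doublings, i.e. a doubling about every 1.6 years
```

Roughly 13 doublings over 20 years works out to a doubling about every year and a half, a bit faster than Moore’s revised two-year cadence but squarely in its spirit.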
As these concerns grew, the search for alternative approaches to computation attracted more attention. Exotic ideas, like designing chips that operate more like our brains or computing with DNA or even fungi, have been able to mobilize more research funding than they otherwise might have. Quantum computing is especially compelling from this perspective, because its mathematics offers a way around a fundamental bottleneck of classical computing: certain types of problems become exponentially harder as they grow in scale.
In fact, the Google experiment was explicitly designed to demonstrate this core characteristic of quantum computing. Its chip, known as Sycamore, had 53 operational qubits, on which a random sequence of quantum operations, the equivalent of a very small computer program, was executed, and then the output was read out. (A collection of quantum operations is known as a “circuit” to those in the field.) Because of the way Sycamore exploited the characteristics of quantum mechanics, representing all the possible states of the circuit in order to calculate the same output classically would take 2^53 bits, or about 9 quadrillion bits. For comparison, the largest number of transistors on a general-purpose microprocessor as of 2023 was the 114 billion on Apple’s M1 Ultra. A quadrillion is a million billions.
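To make that gap concrete, the short sketch below simply computes 2^53 and compares it to the transistor count mentioned above; the comparison is only illustrative, since bits of state and transistors are not the same thing.

```python
# How many classical bits of state the 53-qubit circuit's full description
# implies, versus the transistor count of a large 2023-era chip.
qubits = 53
classical_bits = 2 ** qubits             # 9,007,199,254,740,992 ~ 9 quadrillion

m1_ultra_transistors = 114_000_000_000   # Apple M1 Ultra, for rough comparison

print(f"2^{qubits} = {classical_bits:,}")
print(f"ratio to transistor count: {classical_bits / m1_ultra_transistors:,.0f}x")
# roughly 79,000 times more bits of state than transistors on the chip
```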
This was the reasonable basis for Google’s claim that it had proven the “supremacy” of quantum computers over classical ones. Notwithstanding IBM’s hypothesized workaround using Summit’s 250 petabytes of storage, the fact remained that Sycamore completed the task in a matter of minutes, still about a thousand times faster than IBM’s best case for the classical supercomputer.
The Google experiment established a credible claim that supremacy, however regrettable the choice of word, was real and demonstrable. It’s worth noting that none of the coverage at the time seemed to register that the word carried prejudicial associations. Since then, many people have expressed their dislike of “supremacy” as a description of the milestone, and John Preskill, the term’s originator, has voiced his own regrets on numerous occasions. A preferred term, “quantum advantage,” has emerged, but since Google used “supremacy” originally, it’s less confusing to stick with that term when discussing the experiment.
This oversight regarding word choice aside, the news and analysis around the experiment left much to be desired. Overreliance on frames of reference from decades of emerging technologies, tech rivalries, venture capital investment trends, and global economic dynamics caused most pieces to miss the real news in the announcement. Appreciating the breakthrough requires a deeper dive into exactly what quantum computing is, what makes it so radically different from classical computing, what its relationship is to the origins of quantum mechanics, and how adding quantum to information theory is opening the door to an entirely new way of computing.
Quantum computation may provide a way to sustain the trajectory of Moore’s law, not by continuing the relentless doubling of transistors but by replacing classical bits with quantum ones. The hope is that this quantum gambit will let us sidestep the plateau with an entirely different technology, one that may break classical computation bottlenecks and open up remarkable new applications in bioscience, climate science, materials science, communications, and financial services, among other fields. The ability of these machines to simulate natural systems with far greater fidelity than any classical system could ultimately lead from the demonstration of a contrived “supremacy” to breakthroughs in our understanding of the universe itself. But first, the field would have to move past its Wright Flyer moment and prove there was a path to a productive, useful technology. That would require an unprecedented effort to bridge theoretical physics with engineering and technology development, an effort that had begun decades before Google’s results in 2019.