WTF?: What's the Future and Why It's Up to Us by Tim O'Reilly



These are two separate questions: whether the kind of cognitive work described in the previous chapter can ever replace the mass employment in the factories of the twentieth century; and whether it can be well paid enough for the flywheel of prosperity to continue.

In answer to the first question, let me simply say that it was inconceivable during the agricultural era that so many people could find employment in factories and in cities. Yet automation and far lower cost of production led to huge increases in demand for previously unavailable products and services. It is up to us once again to put people to work in fulfilling ways, creating new kinds of prosperity. The lessons of technology innovation remind us that progress always entails thinking the unthinkable, and then doing things that were previously impossible.

As to the second question, it is up to us to ensure that the fruits of productivity are shared. The first step is to prepare people for the future that awaits them.

From 2013 through 2015, I was part of the Markle Foundation Rework America task force, exploring the future of the US economy. The question before the task force was how to provide opportunities for Americans in the digital age. One of the moments that stuck with me was a remark from political scientist and author Robert Putnam, who said, “All of the great advances in our society have come when we have made investments in other people’s children.”

He’s right. Universal grade school education was one of the best investments of the nineteenth century, universal high school education of the twentieth. We forget that in 1910, only 9% of US children graduated from high school. By 1935, that number was up to 60%, and by 1970 nearing 80%. The GI Bill sent returning World War II veterans to college, enabling a smooth transition from wartime to peaceful employment.

In the face of today’s economic shifts, there were proposals in the 2016 presidential election for universal free community college. In January 2017, the city of San Francisco went beyond proposals and agreed to make the City College of San Francisco, its community college, free for all residents. This is a great step.

But we don’t just need “more” education, or free education. We need a radically different kind of education. “If the students we are training today are going to live to be 120 years old, and their careers are likely to span 90 years, but their training will only make them competitive for 10 years, then we have a problem,” notes Jeffrey Bleich, former US ambassador to Australia and now chair of the Fulbright scholarship board. Advances in healthcare and technology, together with the changing nature of employment, are making our current educational model obsolete: it viewed schooling as preparation for a lifetime of work at a single employer.

We need new mechanisms to support education and retraining throughout life, not just in its early stages. This is already true for professionals in every field, whether athletes or doctors, computer programmers or skilled manufacturing workers. For them, ongoing learning is an essential part of the job; access to training and educational resources is one of the most prized perks, used to attract top employees. And as “the job” is deconstructed, the need for education doesn’t go away. If anything, it is increased. But the nature of that education also needs to change. In a connected world where knowledge is available on demand, we need to rethink what people need to know and how they come to know it.


If you squint a little, you can see the Apple Store clerk as a cyborg, a hybrid of human and machine. Each store is flooded with smartphone-wielding salespeople who are able to help customers with everything from technical questions and support to purchase and checkout. There are no cash registers with lines of customers waiting with products pulled from the piles on the shelves. The store is a showroom of products to explore. When you know what you want, a salesperson fetches it from the back room. If you’re already an Apple customer with a credit card on file (and as of 2014, there were 800 million of us), all you need to provide is your email address to walk out the door with your chosen product. Rather than using technology to eliminate workers and cut costs, Apple has equipped them with new powers in order to create an amazing user experience. By so doing, they created the most productive retail stores in the world.

As a design pattern, this is remarkably similar to one of the key business model elements of Lyft and Uber, discussed in Chapter 3. The Apple Store has nothing to do with on-demand, the map that most people use to understand these new platforms, yet it has a great deal in common with them as a lesson plan for constructing a magical user experience made possible by networked, cognitively augmented workers connected to a data-rich platform that recognizes its customers and tailors its services to them.

The Apple Stores are also a testament to the truth that it is not technology itself that is transformative. It is its application to rethinking the way the world works, not inventing something new but applying newly available capabilities to do an old thing so much better as to change it utterly.

Even the very first advances in civilization had this cyborg quality. The marriage of humans with technology is what made us the masters of other species, giving us weapons and tools harder and sharper than the claws of any animal, projecting our strength at greater and greater distance until we could bring down even the greatest of beasts in the hunt, not to mention engineer new crops that produce far more food than their wild forebears, and domesticate animals to make us stronger and faster.

I remember once reading an account of the crossing of the land bridge between Siberia and Alaska that used a curious fact as part of its analysis of the possible date. It couldn’t have happened before the invention of sewing, the authors noted, which made possible the piecing together of close-fitting garments that allowed humans to live in cold climes. Sewing! Sewing with bone needles was once a WTF? technology, making possible something that had previously been unthinkable.

Every advance in our productivity, getting more output from an equivalent amount of labor, energy, and materials, has come from the pairing of human and machine. It is the acceleration and compounding of that productivity that has produced the riches of the modern world. For example, agricultural production doubled over the hundred years from 1820 to 1920, but it took only thirty years for the next doubling, fifteen for the doubling after that, and ten for the doubling after that.

The ultimate source of productivity increases is innovation. Abraham Lincoln, no economist, but an acute judge of the forces of human history, wrote:

Beavers build houses; but they build them in nowise differently, or better, now than they did five thousand years ago. . . . Man is not the only animal who labors; but he is the only one who improves his workmanship. These improvements he effects by Discoveries and Inventions.

A discovery or invention only improves the livelihood of all, though, when it is shared. Consider one of the world’s most heralded inventions. Can you imagine the first woman (I like to imagine that it was a woman) who built a controlled fire? How amazed her companions were. Perhaps afraid at first. But soon warmed and fed by her boldness. Even more important than fire itself, though, was her ability to tell others about it.

It was language that was our greatest invention, the ability to pass fire from mind to mind. In periods where knowledge is embraced and widely shared, society advances and becomes richer. When knowledge is hoarded or disregarded, society becomes poorer.

The adoption of movable type and the printed book in fifteenth-century Europe led to our modern economy, a remarkable flowering of both knowledge and freedom, as the discoverers of the new could pass the fire of knowledge to people not yet born and to those living thousands of miles away. Those inventions and discoveries took centuries to reach their full potential, as the value of literacy fed on itself, and a better-educated population further increased the rate of invention and the spread of new ideas, creating demand for even more learning, discovery, and consumption. The Internet was another great leap. But the web browser—words and pictures online—was only a halfway house. It was an increase in accessibility and the speed of dissemination of knowledge, but not a change in kind from the physical forms that preceded it.

The final step by which knowledge is shared is via embedding in tools. Consider maps and directions. The path from physical maps through GPS and Google Maps to self-driving cars illustrates what I call “the arc of knowledge.” Knowledge sharing goes from the spoken to the written word, to mass production, to electronic dissemination, to embedding knowledge into tools, services, and devices.

In the past, I could ask someone for directions. Or I could consult the stored knowledge in a paper map. The first online maps were merely facsimiles of paper maps. Now I can see exactly where I am and how to get where I want to go in real time. The next step is for me to forget about all that and just let the car take me to my destination. The step after that is to imagine what we might do differently when transportation is as reliable as running water.

This embedding of knowledge into tools isn’t something new. It has always been a critical enabler of the productivity gains that come from mastery over the physical world. And it inevitably leads to massive changes in society.

When Henry Maudslay built the first screw-cutting lathe in 1800, creating a machine that could reproduce exactly the same pattern every time—something impossible for even the most skilled human craftsman equipped only with hand tools—he made possible a world of mass production. From the first nuts and bolts with threads identical to within thousandths of an inch, first hundreds then thousands then millions of products descended, the children, grandchildren, and great-grandchildren of Maudslay’s mind.

So too, when Henry Bessemer invented the first process for cheaply mass-producing steel in 1856, he didn’t just remove carbon and impurities from iron: He added knowledge. Knowing how to make vast quantities of cheap steel made entirely different futures possible. Andrew Carnegie made his fortune and took over leadership of the worldwide steel industry from Britain by manufacturing the rails that tied together a country far vaster. Steel girders enabled skyscrapers; steel cables enabled elevators and vast suspension bridges. Each of these nineteenth-century WTF? technologies built on the others, much as today’s advances do.

The three-part process of creating new knowledge, sharing it, and then embedding it into tools so that it can be used by less skilled workers is illustrated neatly by the rise of big data technologies. Google had to develop entirely new techniques in order to deal with the growing scale of the web. One of the most important of these was called MapReduce, which splits massive amounts of data and computation into multiple chunks that can be farmed out to hundreds or thousands of computers working in parallel. MapReduce turned out to be relevant to a large class of problems, not just search.
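The split-and-combine pattern is easy to see in miniature. Here is a toy word count in the MapReduce style, written in Python; in a real deployment the map and reduce phases would run on hundreds or thousands of machines, but the shape of the computation is the same:

```python
# A toy word count in the MapReduce style: the "map" step emits
# (word, 1) pairs from each chunk of text, and the "reduce" step
# sums the counts for each word. On a cluster, chunks and keys are
# distributed across many machines; here everything runs in-process.
from collections import defaultdict

def map_phase(chunk):
    # Emit a (word, 1) pair for every word in this chunk.
    return [(word.lower(), 1) for word in chunk.split()]

def reduce_phase(pairs):
    # Group the pairs by key and sum the values for each key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

chunks = ["the web grew", "the web scaled", "search scaled"]
pairs = [p for chunk in chunks for p in map_phase(chunk)]
print(reduce_phase(pairs))
# {'the': 2, 'web': 2, 'grew': 1, 'scaled': 2, 'search': 1}
```

Because each map call touches only its own chunk, and each reduce touches only one key's pairs, both phases parallelize naturally; that independence is what let the technique scale to the web.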

Google published papers about MapReduce in 2003 and 2004, laying bare its secrets, but it didn’t take off more widely until Doug Cutting created an open source implementation of MapReduce called Hadoop in 2006. This allowed many other companies, at that time facing problems similar to those that Google had encountered years earlier, to more easily adopt the technique.

This process is key to the progress of software engineering. New problems beget new solutions, which are essentially handcrafted. Only later, when they are embodied into tools that make them more accessible, do these remarkable innovations become the workaday life of the next generation of developers. We are currently at the beginning of the transition from handcrafted machine learning models to tools that will make it possible for workaday developers to produce them. Once that happens, AI will infuse and change our entire society in the same way that mass manufacturing transformed the nineteenth and twentieth centuries.

The vastly improved productivity of agriculture provides a bit more nuance in understanding the mix of mind and matter in new tools. Agricultural productivity has come not just from the use of machines to do much of the work of planting and harvesting and from energy-intensive fertilizers (another industrial product), but through the development of more productive cultivars of the foods themselves. When Luther Burbank bred the Russet Burbank potato, now the most widely grown potato, he enhanced productivity with a very different balance of knowledge and material inputs than Hiram Moore did with the invention of the combine harvester.

In short, the two types of augmentation, physical and mental, are in a complex dance. One frontier of augmentation is the addition of sensors to the physical world, allowing data to be collected and analyzed at a previously unthinkable scale. That is the real key to understanding what is often called the “Internet of Things.” Things that once required guesswork are now knowable. (Insurance may well be the native business model of the “Internet of Things” in the same way that advertising became the native business model of the Internet, because of the data-driven elimination of uncertainty.) It isn’t simply a matter of smart, connected devices like the Nest thermostat or the Amazon Echo, the Fitbit and the Apple Watch, or even self-driving cars. It’s about the data these devices provide. The possibilities of the future cascade in unexpected ways.

When Monsanto bought Climate Corporation, the big data weather insurance company founded by former Google employees David Friedberg and Siraj Khaliq, and paired it with Precision Planting, the data-driven control system for seed placement and depth based on soil composition, they demonstrated that the new focus of productivity in agriculture is data and control. Less seed, less fertilizer, and less water are needed when an eye in the sky can tell the farmer with precision the state of his land and the progress of his crop, and automatically guide his equipment to act on that knowledge.

This is true in engineering and materials science as well. Remember Saul Griffith’s comment: “We replace materials with math.” One of Saul’s companies, Sunfolding, sells a sun-tracking system for large-scale solar farms that replaces steel, motors, and gears with a simple pneumatic system made from an industrial-grade version of the same material used for soft drink bottles, at a tiny fraction of the weight and cost. Another project replaces the giant carbon containment vessels for natural gas storage with an intestine of tiny plastic tubules, allowing natural gas tanks to fit any arbitrary shape as well as reducing the risk of catastrophic rupture. It turns out that when you properly understand the physics, you can indeed replace materials with math.

“In 1660, Robert Hooke described what is now known as Hooke’s Law,” Saul told me. (Hooke’s Law states that the force needed to compress or extend a spring, or to deform a material, is proportional to the distance times the stiffness of the material.) “This meant that we could model all materials as linear springs,” Saul continued. “This was important in pre-computer days because it made the math simple when designing trusses and structures to take loads. In the real world, no materials are perfectly linear, and especially not plastics and rubbers. Now we have so much computation available we can design entirely new types of machines and structures that we simply couldn’t do the math on before.”
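Saul's point can be illustrated in a few lines of Python. The linear function below is Hooke's Law exactly as stated above; the cubic correction term is a generic stand-in for the nonlinearity of real materials, not a model of any particular plastic or rubber:

```python
# Hooke's Law: the restoring force of an ideal linear spring is
# F = k * x, where k is the stiffness and x the displacement.
# Real materials deviate from this; a simple way to capture
# stiffening at large strain is to add a cubic term.
def linear_spring(k, x):
    # Ideal Hooke's Law spring: force proportional to displacement.
    return k * x

def nonlinear_spring(k, k3, x):
    # Hypothetical cubic stiffening term (illustrative only).
    return k * x + k3 * x ** 3

k = 100.0  # stiffness in N/m
print(linear_spring(k, 0.05))             # 5.0 N at 5 cm of stretch
print(nonlinear_spring(k, 2000.0, 0.05))  # 5.25 N: stiffer at large strain
```

With hand calculation, only the linear model is tractable at design time; with cheap computation, a designer can optimize against the nonlinear model directly, which is the freedom Saul describes.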

The new design capabilities go hand in hand with new manufacturing techniques like 3-D printing. 3-D printing doesn’t just provide low-cost prototyping and local manufacturing. It makes possible different kinds of geometries than traditional manufacturing can produce. That requires software that encourages human designers to explore possibilities far afield from the familiar. The future is not just one of “smart stuff,” tools and devices infused with sensors and intelligence, but of new kinds of “dumb stuff” made with smart tools and better processes for making that stuff.

Autodesk, the design software firm, is all over that concept. Their next-generation tool set supports what is called “generative design.” The engineer, architect, or product designer enters a set of design constraints (functionality, cost, materials); a cloud-based genetic algorithm (a primitive form of AI) returns hundreds or even thousands of possible options for achieving those goals. In an iterative process, man and machine together design new forms that humans have never seen and might not otherwise conceive.
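The generative loop itself is simple enough to sketch. Everything below is illustrative, not Autodesk's actual system: the "design" is just a list of numbers, and the fitness function stands in for the designer's constraints (hit a target budget while minimizing material):

```python
# Minimal genetic algorithm: evolve candidate "designs" against a
# fitness function encoding the constraints. Each generation keeps
# the fittest half, then breeds children by crossover and mutation.
import random

def fitness(design):
    # Toy constraints: values should sum close to 10 (a hypothetical
    # load target) while a small penalty discourages excess material.
    return -abs(sum(design) - 10) - 0.01 * sum(v * v for v in design)

def evolve(pop_size=50, genes=4, generations=100):
    random.seed(42)  # deterministic for the example
    pop = [[random.uniform(0, 10) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genes)      # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genes)        # gaussian mutation
            child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(round(sum(best), 2))  # close to the constraint target of 10
```

The designer's role shifts from drawing a form to specifying the fitness function; the algorithm then explores shapes no human would have drawn, which is exactly the iterative man-machine process described above.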

Most intriguing is the use of computation to help design radically new kinds of shapes and materials and processes. For example, Arup, the global architecture and engineering firm, showcases a structural part designed using the latest methods that is half the size and uses half the material, but can carry the same load. The ultimate machine design does not look like something that would be designed by a human.

The convergence of new design approaches, new materials, and new kinds of manufacturing will ultimately allow for the creation of new products as astonishing as the Eiffel Tower was to the world of 1889. Might we one day be able to build the fabled space elevator of science fiction, or Elon Musk’s Hyperloop transportation system?

The fusion of human with the latest technology doesn’t stop there. Already there are people trying to embed new senses—and make no mistake of it, GPS is already an addition to the human sensorium, albeit still in an external device—directly into our minds and bodies. Might we one day be able to fill the blood with nanobots—tiny machines—that repair our cells, relegating the organ and hip replacements of today, marvelous as they are, to a museum of antiquated technology? Or will we achieve that not through a perfection of the machinist’s art but through the next steps in the path trod by Luther Burbank? Amazing work is happening today in synthetic biology and gene engineering.

George Church and his colleagues at Harvard are beginning a controversial ten-year project to create from scratch a complete human genome. Ryan Phelan and Stewart Brand’s Revive and Restore project is working to use gene engineering to restore genetic diversity to endangered species, and perhaps one day to bring extinct species back to life. Technologies like CRISPR-Cas9 allow researchers to rewrite the DNA inside living organisms.

Neurotech—direct interfaces between machines and the brain and nervous system—is another frontier. There has been great progress in creating prosthetic limbs that provide sensory feedback and respond directly to the mind. On the further edges of innovation, Bryan Johnson, the founder of Braintree, an online payments company sold to PayPal for $800 million, has used the proceeds to found a company whose goal is to build a neural memory implant as a cure for Alzheimer’s disease. Bryan is convinced that it’s time for neuroscience to come out of the labs and fuel an entrepreneurial revolution, not merely repairing damaged brains but enhancing human intelligence.

Bryan is not the only high-profile neurotech entrepreneur. Thomas Reardon, the creator of Microsoft’s Internet Explorer web browser, retired from Microsoft to pursue a PhD in neuroscience and in 2016 cofounded a company called Cognescent to produce the first consumer brain-machine interface. As Reardon noted in an email to me, “Every digital experience can and should be controlled by the neurons which deliver the output of your thoughts, those neurons which directly innervate your muscles.” This is a brilliant combination of neuroscience and computer science. “The kernel of our work is held in the Machine Learning models which translate biophysical signals—yes, even at the level of individual neurons—to give you control over digital experiences.”

Elon Musk joined the parade in 2017 with a company called Neuralink, which is, according to Elon, “aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years.” But as Tim Urban, the author of the “Wait But Why” blog, who was given extensive access to the Neuralink team, explains, “[W]hen Elon builds a company, its core initial strategy is usually to create the match that will ignite the industry and get the Human Colossus working on the cause.” Proving that a profitable, self-sustaining business can be created in an untried area is a way to get everyone else piling on to the new opportunity. That is, like Bryan Johnson, Elon’s vision is not just to build a company, but to build a new industry.

In the case of Neuralink, that new industry is a generalized Brain-Machine interface that would allow humans and computers to interoperate far more efficiently. “You’re already digitally superhuman,” Elon notes, referring to the augmentation that our digital devices already give to us. But, he notes, our interfaces to those devices are painfully slow—typing on keyboards or even speaking aloud. “We should be able to improve that by many orders of magnitude with a direct neural interface.”

These technologies raise questions and fears as profound as any in the world of artificial intelligence. Like other tools of enormous power, they may come into common use through a tumultuous, violent adolescence. Yet I suspect that in the end, we will find ways to use them to make ourselves live longer, happier, more fulfilled lives.

When I was a kid, I read science fiction, a novel a day for years. And for so long, the future was a disappointment to me. We achieved so much less than I had hoped. Yet today, I am seeing progress toward many of my youthful dreams.

And that brings me back to AI. AI is not some kind of radical discontinuity. AI is not the machine from the future that is hostile to human values and will put us all out of work. AI is the next step in the spread and usefulness of knowledge, which is the true source of the wealth of nations. We should not fear it. We should put it to work, intentionally and thoughtfully, in ways that create more value for society than they disrupt. It is already being used to enhance, not replace, human intelligence.

“We’ve already seen chess evolve to a new kind of game where young champions like Magnus Carlsen have adopted styles of play that take advantage of AI chess engines,” notes Bryan Johnson. “With early examples of unenhanced humans and drones dancing together, it is already obvious that humans and AIs will be able to form a dizzying variety of combinations to create new kinds of art, science, wealth and meaning.” Like Elon Musk, Bryan Johnson is convinced that we must use neurotech to directly enhance human intelligence (HI) to make even more effective use of AI. “To truly realize the potential of HI+AI,” he says, “we need to increase the capacity of people to take in, process, and use information, by orders of magnitude.” But even without direct enhancement of human intelligence in the way that Bryan envisions, entrepreneurs are already building on the power of humans augmented by AI.

Paul English, the cofounder of Kayak, the travel search site that helped put many travel agents out of work, has a new startup called Lola, which pairs travel agents with an AI chatbot and a back-end machine learning environment, working to get the best out of both human and machine. Paul describes his goal with Lola by saying, “I want to make humans cool again.” He is betting that just as a human chess master paired with a chess computer can beat the smartest chess computer or the smartest human grandmaster, an AI-augmented travel consultant can handle more customers and make better recommendations than unaugmented travel agents—or travelers searching for deals and advice on their own using traditional search engines.

The arc between travel agents and Kayak and Lola, the embedding of what was once the specialized knowledge of a travel agent into ever-more-sophisticated tools, teaches us something important. Kayak used automation to replace travel agents with search-enabled self-service. Lola puts humans back into the loop for better service. And when we say “better service,” we usually mean “more human, less machinelike service.”

Sam Lessin, the founder and CEO of Fin, an AI-based personal assistant startup, makes the same point: “People in the technology community frequently ask me ‘how long will it take to replace the Fin operations team with pure AI?’” he wrote in an email. “At Fin, however, our mission is not automation for its own sake. Our guiding principle is providing the best experience for users of Fin. . . . Technology is clearly part of the equation. But people are also a critical part of the system that results in the best possible customer experience. And the role of technology at Fin is largely to empower our operations team to focus their time and effort on the work that requires decidedly human intelligence, creativity, and empathy.”

We are back to Clayton Christensen’s Law of Conservation of Attractive Profits. When something becomes a commodity, something else becomes valuable. As machines commodify certain types of human mental labor—the routine, mechanical parts—the truly human contributions will become more valuable.

Searching out the frontier for enhancing human value is the great challenge for the next generation of entrepreneurs, and for all of society.

In addition to enabling better, more human service, automation can expand access by making other jobs cheap enough to be worth doing. After receiving what he believed was an unfair parking ticket, Josh Browder, a young British programmer, took a few hours to write a program to protest the ticket. When the ticket was cleared, he realized he could turn this into a service. Since then, DoNotPay, which Josh calls “the Robot Lawyer,” has cleared more than 160,000 parking tickets. Josh has since moved on to building a chatbot in Facebook Messenger to automate the application for asylum in the United States, Canada, and the United Kingdom on behalf of refugees.

There are many jobs—like protesting unfair parking tickets—that don’t get done because they are too expensive, and making the job cheaper conflicts with the business model of existing companies. Tim Hwang, a programmer who is also trained as a lawyer, told me that when he worked at a law firm, he set out to make himself obsolete. “Every day, I’d get a set of tasks to do, and each night I’d go home and write programs to do them for me the next time I got asked to do them,” he said. “I got more and more efficient at doing the work more quickly, and this started to become a problem for the law firm because their business model depends on billable hours. I quit just ahead of getting fired.”


An Uber or Lyft driver demonstrates two different kinds of augmentation. The first is provided by Google Maps and similar services, which embed knowledge of the layout of a city into a tool, so that drivers no longer need to know the city like the back of their hand. Google does that for them. The other augmentation is provided by the Uber or Lyft app itself. This app provides access to opportunity, alerting the driver that there are passengers to be picked up, and just where to find them. A real innovation in on-demand applications is the lighter-weight, more flexible methods they provide for matching workers with those who need their services.

Seth Sternberg, the founder of Honor, which matches home care workers with patients, describes better matching as central to what his company does. Unlike Uber, Honor’s caregivers are employees of the company, but the need for their services comes and goes. Some caregivers settle into an ongoing relationship with a patient, while others are called on demand for short-term needs. Getting the right match of caregiver and patient is important, Seth told me. It isn’t just location that matters but also skills. Some patients might need someone strong enough to lift them; others might need specialized nursing. A platform that helps the workers know in advance what they are getting into creates better, longer-lasting relationships, happier customers, and a more efficient system.

More effective matching is also an essential part of Upwork, the platform for connecting companies with freelancers in categories such as programming, graphic design, writing, translation, search engine optimization, accounting, and customer service. Stephane Kasriel, the CEO of Upwork, pointed out that if you want to understand the dynamics of job marketplaces, there is no better place to do it than on Upwork, because the “velocity of jobs” is so high. A typical job lasts days or weeks rather than years. Stephane told me that there are three kinds of workers on Upwork, and that the job of the platform is different for each of them.

First, Stephane said, are those who already have marketable skills, and good reputations on the platform, and are getting all the work they need because they are “in the flow.” The platform doesn’t need to do much to help these people.

Second, there are workers who have marketable skills but have not yet built a reputation and are not getting enough work. A lot of the focus of Upwork’s internal data science team is to find these people and to point them to the right open jobs. The challenge is not just helping them find a perfect match with the work they have the skills for; often it is pointing them to new areas where there is not enough supply, where some study or retraining will let them get a foothold in the virtuous circle of reputation and recommendation. For example, Stephane pointed out that a few years ago, there were plenty of Java developers, but not enough Android developers, and the best way for people in this second group to get traction in the system (and better pay, since Android was paying more than Java) was to gain new skills. Today there aren’t enough workers with data science skills, and there’s a pay premium to be had there.

The third group consists of workers who don’t have the right skills for the jobs that they are applying for. Here the right thing to do is to discourage people from applying to the wrong jobs. “The time they spend applying for the wrong jobs is time they could spend working,” Stephane told me.

Upwork has developed its own skills assessment system; the company performs 100,000 hours of assessment a month. What’s so fascinating about Upwork’s assessments is that they are immediately verifiable, because someone either is able to do a job to the satisfaction of the customer, or they aren’t. This is in stark contrast to many of the assessment tools sold by education companies, which provide paper certifications but little evidence that workers with those certifications can actually do the job.

All of these points suggest that we may be reaching a tipping point where we escape the shackles of the current labor mindset, and instead rediscover how to use technology to empower and augment workers, finding their strengths and matching them with opportunity, building tools that make it easier and more effective to work together, and creating dynamic labor marketplaces in which on-demand, “high freedom,” and the “high velocity” of work go hand in hand.


One key to understanding the future is to realize that as prior knowledge is embedded into tools, a different kind of knowledge is required to use them, and yet another to take them further. Learning is an essential next step with each leap forward in augmentation.

I’ve observed this throughout my career educating programmers about the next stages in technology. When I wrote my first computer manual in 1978, for Digital Equipment Corporation’s “LPA 11K Laboratory Peripheral Accelerator,” it described how to transfer data from high-speed laboratory data acquisition devices using assembly language, the low-level language corresponding closely to the actual machine code that is still hidden deep inside our computers. The directions to the computer had to be very specific: move the data from this device port to that hardware memory register; perform this calculation on it; put the result into another memory register; write it out to permanent storage.

While some programmers still need to delve into assembly language, machine code is usually produced by compilers and interpreters as output from higher-level languages like C, C++, Java, C#, Python, JavaScript, Go, and Swift, which make it easier for programmers to issue broader, high-level instructions. Meanwhile, those programmers in turn are creating user interfaces that allow people who don’t even know how to program to invoke powerful capabilities that a few decades earlier were impossible without knowing the exact memory layout and instruction set of the computer.
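The compression described above is easy to see side by side. Below is a purely illustrative sketch, in Python, of the same acquire-transform-store sequence that the LPA 11K manual spelled out register by register; the readings, the scale factor, and the output file name are all invented for the example.

```python
def acquire_and_store(samples, out_path):
    """Scale raw device readings and append them to permanent storage.

    In assembly language, each of these steps was a hand-written sequence
    of moves between device ports and memory registers; here the compiler,
    interpreter, and runtime handle all of that.
    """
    calibrated = [raw * 0.01 for raw in samples]   # perform a calculation on the data
    with open(out_path, "a") as f:                 # write it out to permanent storage
        for value in calibrated:
            f.write(f"{value}\n")
    return calibrated

print(acquire_and_store([100, 250, 475], "samples.log"))  # [1.0, 2.5, 4.75]
```

A few lines of high-level code stand in for what was once pages of device-specific instructions; that is exactly the embedding of prior knowledge into tools.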

But even “modern” languages and interfaces are only an intermediate stage. Google, which employs tens of thousands of the most sought-after software engineers on the planet, is now realizing that it needs to retrain those engineers in the new disciplines of machine learning, which take a completely different approach to programming: training AI models rather than writing explicit code. It is doing so not by sending them back to school, but through an apprenticeship.

This highlights a point that I’ve observed again and again through my career: Technology moves far faster than the education system. When BASIC was the programming language of the early personal computer, programmers learned it from each other, from books, and by looking at the source code of programs shared via user groups. By the time the first classes teaching BASIC appeared in schools, the industry had moved far beyond it. By the time schools were teaching how to build websites with PHP, the bigger opportunity was in building smartphone apps or in mastering statistics and big data.

That lag was key to O’Reilly’s success over the past few decades as the publisher of record on emerging technologies. No one was teaching what people needed to know. We had to learn it from each other. All of our bestselling books were created by finding people who were at the edges of innovation and either getting them to write down what they knew, or pairing experts with writers who could extract their knowledge. This led us to document the cutting edge of Linux; the Internet; new programming languages like Java, Perl, Python, and JavaScript; the best practices of the world’s leading programmers; and more recently, big data, DevOps, and AI. When, in 2000, our ad on the cover of Publishers Weekly baldly stated, “The Internet Was Built on O’Reilly Books,” everyone accepted it as the simple truth.

As the pace of technology has increased, bringing people together at live events has become a more important part of our work. We also built a knowledge-sharing platform that allows anyone with unique technology or business skills to teach them to our customers. The platform, which we called Safari as an homage to the nineteenth-century woodcuts of animals that graced the covers of our books, now includes tens of thousands of ebooks from hundreds of different publishers, not just our own, plus thousands of hours of video training, learning paths, learning environments with integrated text, video, and executable code, and live online events with leading experts teaching cutting-edge techniques.

One of the big changes in our business is that technologies that were once the realm of adventurers at the edges of innovation have moved into the mainstream. Fortune 500 companies, not just individual programmers or small startups, have to learn at the pace at which technology itself evolves. What we do is in a period of profound transformation, but I know that whatever techniques and delivery methods we use for new knowledge, some things will remain constant:

People need a base—knowing enough to ask the right questions and to take in new knowledge.

People learn from each other.

People learn best by doing, solving real problems and pulling in the knowledge they need on demand.

People learn best when what they are doing is so compelling that they want to do it on their own time, not just because the job asks them to do it.


When we launched the first issue of Make: magazine in January 2005, the cover story featured Charles Benton, who’d built a rig so he could take aerial photos from a kite, before GoPro had shipped its first action camera and long before drone video was a gleam in anyone’s eye; another story described how to make a homebrew videocam stabilizer. Yet another explained how Natalie Jeremijenko had added sensors to Sony’s AIBO robotic dog so it could be used to sniff out toxic waste. A fourth story included plans for building a device that would let you see just what information was stored on the magnetic stripe of a credit card or hotel room key.

Dale Dougherty, who conceived Make:, had been struck by the fact that early issues of magazines such as Popular Mechanics were very different from their modern equivalents. The modern versions were whiz-bang tours of technology products you could buy. Forty years ago they were full of projects that you could do.

Go back to the days of the Wright Brothers and you’ll find how-to books like The Boy Mechanic. You couldn’t buy an airplane, but you could dream of building one.

This design pattern, that the future is built before it can be bought, is an important one to recognize. The future is created by people who can make and invent things and those who can tinker and improve and put inventions into practice. These are people who learn by doing.

In a later issue of Make:, Dale published an “Owner’s Manifesto,” which opened with the words, “If you can’t open it, you don’t own it.” The truth of that statement has been proven many times since, as companies have increasingly used “Digital Rights Management” software to drive up profit by locking in customers, denying them the right to repair or even resupply the devices they nominally own. Printers, coffee makers, and most recently high-tech tractors and other farm equipment have all become the locus of battles between companies and their customers over who really controls those products.

But it wasn’t just the power grab, represented by DRM and by sealed hardware that you can’t open without special tools or are forbidden to service under the terms of a shrink-wrap license agreement, that bothered Dale and the makers he represents. The deeper idea was that if we really want to have mastery over our tools, we have to be able to get inside them, understand how they work, and modify them.

When you get a smartphone, a tablet, or a computer today, you get a slick computer product that’s been designed to be easy to use, but is difficult to modify or repair. It wasn’t like that for those of us who began working with computing in the 1970s and 1980s (or even earlier). We started with something relatively primitive, a blank slate that we had to teach to do anything useful. That teaching is called programming. Only a small number of the billions of people who own a smartphone today know how to program; back then, with a few limited exceptions, a computer wasn’t very useful at all unless you learned to program it yourself.

We taught ourselves how to program by solving problems. Not random, artificial exercises to teach us programming. Real problems that we needed to solve. Since Dale and I were writers, this meant creating programs to help us write and publish—editing scripts to enforce consistency of terminology across a documentation set, to correct common grammar mistakes, to build an index, or to format and typeset a manuscript. We got so good at doing this that we wrote a book together called Unix Text Processing. And we put our newfound skills to work building a publishing company that could send a book to the printer within days after the author and editor finished working on it, rather than waiting months, as is still common with most traditional publishers.
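A minimal sketch of the kind of editing script described above, written here in Python rather than the Unix tools of the day; the house-style table and the sample text are hypothetical.

```python
import re

# Map discouraged variants to the preferred house-style term.
HOUSE_STYLE = {r"\bweb site\b": "website", r"\be-mail\b": "email"}

def check_terminology(text):
    """Return (line_number, preferred_term) pairs for each inconsistency."""
    issues = []
    for n, line in enumerate(text.splitlines(), start=1):
        for pattern, preferred in HOUSE_STYLE.items():
            if re.search(pattern, line, flags=re.IGNORECASE):
                issues.append((n, preferred))
    return issues

draft = "Visit our web site.\nSend us e-mail with questions."
print(check_terminology(draft))  # [(1, 'website'), (2, 'email')]
```

The point of such scripts was never elegance; it was that a writer with a real problem could solve it, and in solving it, learn to program.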

Unix was a creature of the odd transition between the proprietary hardware systems of the first age of computing and the commodity PC architectures of the second. It was designed to be a portable software layer across a wide range of computers with different hardware designs. And so, whenever we heard about an interesting new program, we couldn’t just download it and run it; we often had to “port” it (modify it so it would run on the type of computer we were using). And because every computer had a programming environment, we could easily add our own custom software. When we started publishing and selling books via mail order in 1985, I didn’t buy an order-entry and accounting system; I wrote my own.

When we discovered the web, building things got even more fun. Because the web had been designed to format online pages using a text markup language—HTML, the Hypertext Markup Language—it played right to our strengths. HTML meant that whenever you saw a neat new feature on a web page, you could pull down a menu, select View Source, and see how the trick was done.

The early web was very simple. Clever new “hacks” were introduced all the time, and we gleefully copied them from each other with abandon. Someone came up with a clever solution; it quickly became the common property of everyone else who had the same problem.

In our early days at O’Reilly, we’d written documentation for hire. But we soon realized that there was an enormous opportunity in following the explosive wavefront of innovation, documenting technologies that were just being invented, capturing the knowledge of the people who were learning by doing, because they were doing things that had never been done before.

Emulation is key to learning. In our early years, we used to describe our books as trying to re-create the experience of looking over the shoulder of someone who knew more than you did, and watching how they worked. This was an important attraction of open source software. Back in 2000, when the software industry was trying to come to grips with this new idea, Karim Lakhani, then at MIT’s Sloan School of Management, and Robert Wolf of the Boston Consulting Group did a study of motivations of people working on open source software projects. What they found was that along with adapting software to meet their own specialized needs, learning and the sheer joy of intellectual exploration were more important than traditional motivators like higher salaries or career success.

Dale recognized this pattern playing out again in the world of new kinds of hardware. Cheap sensors, 3-D printers, and lots of old, disposable hardware waiting for creative reuse meant that the physical world was starting to experience the same kind of malleability that we’d long associated with software. But in order to take advantage of that opportunity, people had to be able to take things apart and put them back together in new ways.

That is the essence of the Maker movement. Making for the joy of exploration. Making to learn.

There’s no joy in our current education system. It is full of canned solutions to be memorized when it needs to be a vast collection of problems to be solved. When you start with what you want to accomplish, knowledge becomes a tool. You seek it out, and when you get it, it is truly yours.

Stuart Firestein, in his book Ignorance, makes the case that science is not the collection of what we know. It is the practice of investigating what we don’t know. Ignorance, not knowledge, drives science.

There’s also an essential element of play in both science and learning. In his autobiography, physicist Richard Feynman described the origin of the breakthrough that led to his Nobel Prize. He was burned out and found himself unable to concentrate on work. Physics was no longer fun. But he remembered how it used to be. “When I was in high school, I’d see water running out of a faucet growing narrower, and wonder if I could figure out what determines that curve,” he wrote. “I didn’t have to do it; it wasn’t important for the future of science; somebody else had already done it. That didn’t make any difference. I’d invent things and play with things for my own entertainment.”

So Feynman resolved to go back to having fun, and stop being so goal-driven in his research. Within a couple of days, while watching someone in the Cornell University cafeteria spinning a plate in the air, he noticed that the wobbling rim of the plate was rotating faster than the university logo in its center. Just for fun, he began to calculate the equations for the rate of spin. Bit by bit he realized that there were lessons for the spin of electrons, and before long, he was deep into the work that eventually became known as quantum electrodynamics.

This is true in corporate learning as well. I remember a powerful conversation with David McLaughlin, director of developer relations at Google. We had both agreed to speak at a technology advisory meeting for a huge software firm. The company wanted to know how to get more developers for its platform. David asked a key question: “Do any of them play with it after work, on their own time?” The answer was no. David told them that until they fixed that problem, reaching out to external developers was wasted effort.

The importance of fun to learning was the source of Dale’s original subtitle for Make: magazine, “Technology on your own time.” That is, this is stuff you want to do even though no one is asking you to. In 2006, we followed up the magazine with Maker Faire, a vast “county fair with robots” that now draws hundreds of thousands of people each year. It is packed with kids eager to learn about the future, and parents rediscovering the wonder of learning.

We have far too little fun in most formal learning, and people are hungry for it. If you can’t inspire curiosity, chances are you are on the wrong path.


Once you have curiosity, the Internet has provided powerful new ways to feed it. In their book, The Power of Pull, John Hagel III, John Seely Brown, and Lang Davison outline a fundamental change in the nature of twenty-first century learning. The book opens with the story of a group of young surfers, on the brink of becoming professional competitors, who improved their surfing skills by creating, watching, and analyzing videos of themselves surfing, and by comparing themselves to surf footage of experts available online. They posted their own footage to YouTube, and as their skills grew, they were discovered by sponsors and invited to competitions.

This combination of learning by doing, social sharing, and on-demand expertise is central to how people—especially young people—learn today. Brit Morin, the founder and CEO of millennial lifestyle site Brit + Co, explains, “I’m beginning to feel like I’m no longer part of the popular crowd at school.” Marketers, she says, are now obsessed with what they call “Generation Z”—fourteen- to twenty-four-year-olds. This is a generation that doesn’t remember a time when you couldn’t use the Internet to look up anything you want. She notes that “sixty-nine percent of them say they go to YouTube to learn ‘just about everything’ and prefer it as a learning mechanism far beyond teachers or textbooks.”

That’s certainly my experience with my thirteen-year-old stepdaughter. Recently we were having some guests over to the house for a business dinner. “Can I make dessert?” she asked. We agreed, not quite sure what to expect. What we got was astonishing, worthy of a high-end restaurant. Ice cream with a scattering of berries in perfect, eggshell-thin dark chocolate cups.

“How did you do that?” I asked.

“I melted the chocolate, then formed it on balloons.” She’d learned how to do it on YouTube. She is not someone who has spent years learning to cook either. She got interested when a friend of hers competed on the Food Network kids-baking reality TV show. She started watching food videos and duplicating them in the kitchen. This was one of her first efforts.

The power of on-demand access to information is key to the next generation of learning. So too is the switch to short snippets of video as a preferred learning mechanism. Those concerned about technology and the future of work should take note: more than 100 million hours of how-to videos were watched on YouTube in North America during the first four months of 2015.

“Employers must recognize this change and begin valuing skills/competencies that are learned in nontraditional ways,” says Zoë Baird of the Markle Foundation, who led the Rework America initiative with Starbucks CEO Howard Schultz. “The key is embracing skills-based hiring and employment practices. Too many employers use a four-year degree as a proxy for hiring, even for jobs that don’t require one.” She pointed out to me that a majority of the jobs projected to have the largest growth rates in America through 2024 don’t require a bachelor’s degree. If that is the case, we surely need to transform our outdated labor market into one that values skills. The Markle Foundation, LinkedIn, the state of Colorado, Arizona State University, and others have been working to address this over the past year through Skillful, an effort to transform America’s outdated labor market to reflect the needs of the digital economy.

There’s another important point to add. Access to an unlimited world of information is a powerful augmentation of human capability, but it still has prerequisites. Before she could learn how to make an exquisite dessert by watching a YouTube video, my stepdaughter had to know how to use an iPad. She had to know how to search on YouTube. She had to know that a world of content was there for the taking. At O’Reilly, we call this structural literacy.

Users without structural literacy about how computers work struggle to use them. They learn by rote. Going from an iPhone to Android, or the reverse, or from PC to Mac, or even from one version of software to another, is difficult for them. They aren’t stupid. These same people have no trouble getting into a strange car and orienting themselves. “Where is that darned lever to open the gas cap?” they ask. They know it’s got to be there somewhere. Someone with structural literacy knows what to look for. They have a functional map of how things ought to work. Those lacking that map are helpless.

When I used to personally write and edit computer books, the first chapter was always designed to provide a kind of structural literacy about the topic. My goal was for readers of that first chapter to understand the topic well enough that they could drop in anywhere in the book, looking for a specific piece of information, and have enough context to find their way around and understand what they come across.

The level and type of structural literacy required differs with the type of work you do. Today’s startups, increasingly embedding software and services into devices, require foundational skills in electrical and mechanical engineering, and even “trade” skills such as soldering. An experienced software developer today probably needs to up his or her game with regard to tensor calculus in order to work with machine learning algorithms. Teachers are far more effective if they are broadly familiar with the culture and context of their students.

One of the problems with many online learning platforms for teaching new technology is that structural literacy is all they provide. They are good at taking beginners who know nothing about a topic to structural literacy—teaching them JavaScript as an introduction to programming, for example, or offering a course on digital marketing—but what people need next is just-in-time learning about very specific topics.

We had a telling experience with one of our Safari customers, a large international bank, when it came time to renew their annual subscription. “No need to pitch us,” they said. “We had a failure in one of our systems, and we found the documentation we needed in Safari, averting millions of dollars in losses.” Pat McGovern, the founder of technology media giant IDG, once told me that his working principle was that as technology advances, “the specific drives out the general.”

In the end, on-demand education is not that dissimilar from on-demand transportation. You need a rich marketplace of people who know things, and others who need to know them. The way that knowledge is delivered—book, video, face-to-face teaching—gets a lot of attention, but the bigger question is how to bootstrap a rich knowledge network.


If being able to search for instructions on YouTube or on a specialized platform like Safari is the heart of today’s on-demand learning, augmented reality is surely tomorrow’s. Aircraft mechanics at Boeing are engaged in a pilot project using Microsoft HoloLens to give them schematics and diagrams overlaid on the work they are doing, guiding them through complex tasks that otherwise would take years of experience to master. At various architectural firms, architects and their clients equipped with augmented or virtual reality are stepping into their own models, modifying them, and seeing what they wish to build before they actually create anything in the physical world.

Despite the much-publicized failure of Google Glass and the premature hype around virtual reality platforms such as Oculus Rift, there is plenty of evidence that augmented reality and virtual reality will have a powerful impact on on-demand learning. Smartphones and tablets alone are already being used effectively in areas like telehealth, shop-floor communication, and on-the-job training. With Microsoft’s investment in HoloLens, continued experiments like Snap’s Spectacles, rumored new products from Apple, and a next generation of Google Glass likely still under development, I’m confident that there will be plenty of news on this front.

Once you understand that a trend is happening, you can watch it unfold. Your mental map cues you to be alert to signs that it is gaining steam, and to explore ways that it can be applied.

You can start looking for and tracking interesting news, like the $200 head-mounted augmented reality display for infantry soldiers demonstrated at a DARPA event in 2015, or the deep commitment Microsoft has made to human augmentation of all kinds as a key part of its corporate strategy.


There’s a deeper economic story too, one that has been explored by James Bessen in his book Learning by Doing. He attempts to answer the question “Why does it take so long for the productivity advances from new technology to show up in people’s wages?” Looking at the history of the nineteenth-century cotton mills in Lowell, Massachusetts, as well as the introduction of modern digital technology, he comes to the conclusion that our traditional narrative about innovation is wrong. The bulk of the gains in productivity come over time, as innovations are implemented and put into practice.

Bessen describes how major innovations, such as the introduction of the steam mill, involve both de-skilling and up-skilling, the replacement of one set of skills with another. It is mythology, he notes, that automation replaced skilled crafters with unskilled workers. In fact, by measuring the productivity difference between beginners and fully competent crafters and doing the same for workers in the new factories, it is possible to determine that in the 1840s, it took a full-year investment in training for either to reach full productivity. Using training time as a proxy for skill, it is clear that they were equally but differently skilled.

The new skills, Bessen notes, were not the result of schooling: “They were mostly learned on the factory floor.” This continues today. “Economists’ common practice of defining ‘skilled workers’ as those with four years of college is particularly misleading,” he writes. “The skill needed to work with a new technology often has little to do with the knowledge acquired in college.”

That was certainly true of me. I studied Greek and Latin in college. Everything I learned about computers, I learned on the job. The knowledge I learned in college was useless to me. The habits of mind that I formed were what mattered, the foundational skills of study, and particularly the ability to recognize patterns. The struggle to parse complex Greek texts that were, quite frankly, beyond my skill in the language was great preparation when I took on the challenge of documenting programs written in programming languages that at first I barely understood. It is not just knowledge that we have to teach, it is the ability to learn. To learn constantly. Over the course of my career, learning itself has been the most important part of my ongoing work.

The struggle to find work, which affects far too many people in our economy, has many causes, but if there is one solution that anyone can take into his or her own hands, it is the power to learn. It is the one essential skill we must teach our children if they are to adapt to a constantly changing world. A broad general education and love of learning may be more important than specific skills that will soon be out of date.

During the industrial revolution, the new generation of workers was surprisingly well educated. Bessen notes that when Charles Dickens visited the mills in Lowell in 1842, he “reported several ‘surprising facts’ back to his English readers: the factory girls played pianos, they nearly all used circulating libraries, and they published quality periodicals.”

People entering the new workforce were typically less productive at first, and there was no pool of experienced workers to draw from. Turnover was high as people tried out the new style of work, and not all of them succeeded. The machine mills and looms didn’t become truly productive for decades after their introduction. Bessen explains that “what matters to a mill, an industry, and to society generally is not how long it takes to train an individual worker, but what it takes to create a stable, trained labor force.” This is also exactly what I’ve observed in my own career.

The skills needed to take advantage of new technology proliferate, developed through communities of practice that share expertise with each other. Over time, the new skills are routinized, and it becomes easier to train lots of people to exercise them. It is at that point that they begin to affect productivity and improve the wages and incomes of large numbers of people.

Part of the secret of Silicon Valley’s success, so difficult to replicate elsewhere, has been that there is a large pool of people who have the necessary skills to go to work at virtually any high-tech company and get productive fairly quickly. This concentrated labor force is not yet available everywhere. As the necessary knowledge penetrates society, though, we can expect the achievements of Silicon Valley to become both more replicable and less remarkable. The unicorn will fade into the ordinary.

Writing in Wired, Clive Thompson asks a provocative question: Is coding becoming a blue-collar job? “These sorts of coders won’t have the deep knowledge to craft wild new algorithms for flash trading or neural networks,” he writes. “But any blue-collar coder will be plenty qualified to sling JavaScript for their local bank.” As coding becomes routinized, the educational needs of those practicing it become less demanding. For many types of programming, people need the equivalent of vocational training rather than an advanced software engineering or math degree. And that’s exactly what we see with the rise of coding academies and boot camps.

But there’s more to it than that. The rise of the web didn’t just require (and reward) people with the skills of programming. As the technology matured, it also called into being entirely new jobs. An early “webmaster” was the jack-of-all-trades, from programming and system administration to web design. But before long, a successful website needed specialized designers, front-end developers whose skills combined programming and design, back-end developers with deeper experience in databases, experts in search engine optimization and social media, and much, much more. The expertise embodied in a successful media website of 2016, like BuzzFeed, is radically different from the expertise at Yahoo! in 1995. As technology penetrates every sector of our society, it will create many more specialized jobs.

Ryan Avent, the author of The Wealth of Humans, has a further insight: that the success of new technology depends on social capital, which he describes as “contextually dependent know-how, which is valuable when shared by a critical mass of people.” He distinguishes this from the concept of human capital, which includes skills and knowledge that are not especially context dependent, and can belong to a single person. (It is also distinct from the notion of social capital as originally defined by Glenn Loury and James Coleman and popularized by Robert Putnam. For Loury and Coleman, social capital is the networks we belong to—whom we know, and how we can draw on those connections as resources—rather than the know-how; for Putnam, it is how those networks are strengthened by civic engagement. But Avent’s usage overlaps profoundly: it is only when there is a substantial network of people with shared knowledge that a technology can really take hold in the economy.)

Anyone who has visited Google and seen the flyers in the bathroom stalls with names like “Testing on the Toilet” and “Learning on the Loo,” with many of the weekly updates focused on how to use Google’s internal systems, understands how even at a firm so rich in expertise there is a constant need to educate people about the specialized, context-specific knowledge of how Google itself operates.

This kind of social capital is key to the shared expertise that differentiates firms. Describing his work as a senior editor at the Economist, Avent notes: “The general sense of how things work lives in the heads of long-time employees. That knowledge is absorbed by newer employees over time, through long exposure to the old habits. What our firm is, is not so much a business that produces a weekly magazine, but a way of doing things consisting of an enormous set of processes. You run that programme, and you get a weekly magazine at the end of it.”

But, Avent continues, “[T]he same internal structures that make production of the print edition so magically efficient hinder our digital efforts.” And, he notes that “simply bringing in tech-savvy millennials isn’t enough to kick an organization into the digital present; the code must be rewritten.” (That is, of course, the central lesson of Amazon’s platform transformation as well.) One of the critical roles of the entrepreneur, Avent adds, is to create space for new ways of doing things. This is true within existing firms as well as at startups.

The process of integrating new technology into business and society is far from over. New skills are proliferating faster than they can be learned in any school. Meanwhile, the advantages accruing to firms from new technology are deeply wrapped up in their ability to train their workforce and change their workflows to accommodate it.

This retraining was central, for example, to former IBM CIO Jeff Smith’s attempt to transform IBM’s internal software development culture to one that mirrors the agile, user-centered, data-driven, and cross-functional approach that characterizes today’s Silicon Valley startups. Except that instead of doing it at a startup, he was doing it for a software development team of 20,000 people, in support of a company with more than 400,000 employees.

Laura Baldwin, president and COO at O’Reilly Media, tells our customers, “You have to go to war with the army you have.” Yes, it’s essential to bring in new talent with the latest skills, but retraining your existing team and building new ways for people to work together is also essential.

The presence of a stable, trained workforce is not something to be achieved and then taken for granted. The mill owners of Lowell invested in their workforce; the decisions in America over the past decades to ship manufacturing jobs overseas have effectively been a commitment to de-skilling without re-skilling. As new small-batch techniques now make manufacturing cost-effective in America again, the necessary skilled labor force is missing. According to a 2015 study by Deloitte and the Manufacturing Institute, more than two million manufacturing jobs will go unfilled over the next decade. Even if China’s costs rise to match those of the United States, the United States would not be competitive without a major investment in manufacturing skills development.

A lot of companies complain that they can’t hire enough people with the skills they need. This is lazy thinking. Graham Weston, the cofounder and chairman of managed hosting and cloud computing company Rackspace, based in San Antonio, Texas, proudly showed me Open Cloud Academy, the vocational school his company founded to create the workforce he needs to hire. He told me that Rackspace hires about half of the graduates; the rest go to work in other Internet businesses.

At the speed with which technology changes today, we can expect the traditional education establishment to provide a foundation, but it will be the job of every company that wants to succeed to invest in the unique and ever-changing skills of its workforce. Our education system must be rethought for a world of lifelong learning. If Bessen is right, it is not just technology innovation, but the diffusion of knowledge about how to use that technology through society that makes a difference in making us all richer. Accelerating that diffusion is one of the most important ways we can work to create a better future.
