
WTF?: What's the Future and Why It's Up to Us by Tim O'Reilly




At the outset of the Great Depression, John Maynard Keynes penned a remarkable economic prognostication: that despite the ominous storm then engulfing the world, mankind was in fact on the brink of solving “the economic problem”—that is, the quest for daily subsistence.

The world of his grandchildren—the world of those of us living today—would, “for the first time . . . be faced with [mankind’s] real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

It didn’t turn out as Keynes imagined. Sure enough, after a punishing depression and a great world war, the economy entered a period of unparalleled prosperity. But in recent decades, despite all the remarkable progress of business and technology, that prosperity has been very unevenly distributed. Around the world, the average standard of living has increased enormously, but in modern developed economies, the middle class has stagnated and for the first time in generations, our children may be worse off than we are. Once again, we face what Keynes called “the enormous anomaly of unemployment in a world full of wants,” with consequent political instability and uncertain business prospects.

But Keynes was right. The world he imagined, where “the economic problem” is solved, is in fact still before us. Global poverty has sunk to all-time lows, and if we play our cards right, we could still enter the world Keynes envisioned.

Technology and the spread of knowledge have greatly reduced poverty in the world, even as they have created economic challenges for workers in developed countries. As Max Roser, creator of Our World in Data, a remarkable collection of visualizations about how the world has been getting better over the last five hundred years, notes: “Even in 1981 more than 50% of the world population lived in absolute poverty—this is now down to about 14%. This is still a large number of people, but the change is happening incredibly fast. For our present world, the data tells us that poverty is now falling more quickly than ever before in world history.”

Much of Keynes’s essay, titled “Economic Possibilities for Our Grandchildren,” concerns the issue of what people might do with their time when productivity has increased to the point where the machines do all the work.

Is there really not enough work left for humans to do?

Keynes didn’t think so in 1930, and I don’t think so now. “We are suffering just now from a bad attack of economic pessimism,” he wrote. “It is common to hear people say that the epoch of enormous economic progress which characterised the nineteenth century is over; that the rapid improvement in the standard of life is now going to slow down; that a decline in prosperity is more likely than an improvement in the decade which lies ahead of us. I believe that this is a wildly mistaken interpretation of what is happening to us. We are suffering, not from the rheumatics of old age, but from the growing-pains of over-rapid changes, from the painfulness of readjustment between one economic period and another” (italics mine).

Sure enough, we are indeed once again hearing the chorus of pessimism and doubt. Automation is going to destroy white-collar jobs in the same way it once destroyed factory jobs. We have an economy that relies on growth, but the age of growth is over. And so on.

Keynes presciently gave a name to the heart of our current angst: technological unemployment. He defined it as our inability to find new uses for labor as quickly as we are finding ways to eliminate the need for it. He concluded, “But this is only a temporary phase of maladjustment.”

Like Keynes, I remain optimistic. There has already been enormous dislocation, with far more ahead, but if we make the right choices as a society, we will come through it in the end. The short-term pain is very real, and as we’ve discussed, we must rewrite the rules of our economy and strengthen our safety net to mitigate this pain. If we can manage through the transition without violent revolution, though, history provides plenty of reason for hope.

Back in 1811, weavers in Britain’s Nottinghamshire took up the banner of the mythical Ned Ludd (who had supposedly smashed mechanical knitting machines thirty years earlier) and staged a rebellion, wrecking the machine looms that were threatening their livelihood. They were right to be afraid. The decades ahead were grim. Machines did replace human labor, and it took time for society to adjust.

But those weavers couldn’t imagine that their descendants would have more clothing than the kings and queens of Europe, that ordinary people would eat the fruits of summer in the depths of winter. They couldn’t imagine that we’d tunnel through mountains and under the sea, that we’d fly through the air, crossing continents in hours, that we’d build cities in the desert with buildings a half mile high, that we’d stand on the moon and put spacecraft in orbit around distant planets, that we would eliminate so many scourges of disease. And they couldn’t imagine that their children would find meaningful work bringing all of these things to life.

What is possible with the aid of today’s technology that we can’t yet imagine?

Nick Hanauer once said to me, “Prosperity in human societies is best understood as the accumulation of solutions to human problems. We won’t run out of work until we run out of problems.”

Are we done yet?

I don’t think so. We have yet to deal with the enormous transitions to our energy infrastructure that will be required to respond to climate change; the public health challenges of new infectious diseases; the demographic inversion in which a growing class of elders will be supported by a smaller cohort of workers; rebuilding the physical infrastructure of our cities; providing clean water to the world; feeding, clothing, and entertaining nine billion people. How do we turn millions of displaced people into settlers in the cities of the future rather than refugees in squalid encampments? How do we reinvent education? How do we better care for each other?

History provides another story of jobs being taken by machines, more recent than the Luddites. Thanks to the makers of Hidden Figures, a moving film from 2016 about the female African American mathematicians who worked at Langley Research Center during the space race of the early 1960s, millions now know how Dorothy Vaughan reacted when she saw what amounted to the Luddites’ machine looms. Vaughan supervised a segregated group of “computers,” in this case all women and all African American, who did complex mathematical calculations by hand to power JFK’s space program. In the romanticized retelling of her story in the movie, when NASA bought an IBM 7090 computer (so big they had to break down walls to get it in), Vaughan saw the writing on the wall, and took it upon herself not only to learn FORTRAN, the programming language of the new machine, but to teach it to her staff. Instead of ending up unemployed, they ended up with jobs that had never existed before, making possible something that had never been done before.

Tomorrow, that new work might not come in the form of what we think of as a job. Note that Nick said “we won’t run out of work,” not “we won’t run out of jobs.” Part of the problem is that “the job” is an artificial construct, in which work is managed and parceled out by corporations and other institutions, to which individuals must apply to participate in doing the work. Financial markets are supposed to reward people and corporations for accomplishing work that needs doing. But as discussed in Chapter 11, there is a growing divergence today between what financial markets reward and what the economy really needs.

This is what Keynes meant by “the enormous anomaly of unemployment in a world full of wants.” Because corporations have different motivations and constraints than individuals, it is possible that a corporation is not able or willing to offer “jobs” even as “work” goes begging. Because of the structure of employment, in uncertain times companies are hesitant to take on workers until they are sure of customer demand. And because of pressure from financial markets, companies often find short-term advantage in cutting employment, since driving up the stock price gives owners a better return than actually employing people to get work done. Eventually “the market” sorts things out (in theory), and corporations are once again able to offer jobs to willing workers. But there is a great deal of unnecessary friction along the way, and there are consequential negative side effects—what economists call “externalities.”

We’ve seen how technology platforms are creating new mechanisms that make it easier to connect people and organizations to work that needs doing—a more efficient marketplace for work. You can argue that that is one of the key drivers at the heart of the on-demand revolution that includes companies like Uber and Lyft, DoorDash and Instacart, Upwork, Handy, TaskRabbit, and Thumbtack. The drawbacks of these platforms in providing consistent income and a social safety net shouldn’t blind us to what does work about them. We need to improve these platforms so that they truly serve the people who find work through them, not try to turn back the clock to the guaranteed employment structure of jobs in the 1950s.

There is also a leadership challenge: to correctly identify work that needs doing. Think of what Elon Musk has done to catalyze new industries with Tesla, SpaceX, and SolarCity.

Like Elon, I believe that climate change will be for our generation, and the next, what World War II was for our parents and grandparents: a challenge that we must rise to meet, or suffer dire consequences. But it is in rising to challenges that we can build a better future. It’s already clear that transforming our energy infrastructure will provide a great many well-paid human jobs, but it is also clear that technology will play an enormous role. Already in data centers, for example, AI is radically increasing power efficiency. How do we rethink and rebuild our electric grid to be decentralized and adaptive? How do we use autonomous vehicles to rethink the layout of our cities, making them greener, healthier, better places to live? How do we use AI to anticipate ever-more-unpredictable weather, protecting our agriculture, our cities, and our economy?

Mark Zuckerberg and Priscilla Chan’s announcement in 2016 that they are funding an initiative that aims to cure all disease within their children’s lifetimes is another example of a bold dream that leaps over the feeble imagination of the current market. It’s hard to imagine that AI and machine learning won’t play a major role in striving toward that ambitious goal, along with our growing control over human genetics and biology. Already AI is being used to analyze millions of radiology scans at a level of resolution and precision impossible for humans, as well as helping doctors to keep up with the flood of medical research at a level that can’t be accomplished by a human practitioner. It’s also hard to imagine that there isn’t plenty of work for humans in eliminating disease and disability for everyone.

Markets are not infallible. Government can play a role, as it did with the Internet, GPS, and the Human Genome Project. That role is not limited just to investments in basic research or to projects that require coordinated effort beyond the capability of even the largest commercial actors. Government must also deal with market failure. This can be the failure of the commons, outright malfeasance by commercial actors, or the misdirected fitness function of financial markets and the bad maps of economists, which are strangling the economy today.

But the change can and must begin with corporate “self-interest, properly regarded.” Jeff Immelt, Jack Welch’s successor as CEO at GE, has rejected the purely financial calculus of the old GE, and has recommitted the company to “solving the world’s hardest problems,” as he told me at my 2015 Next:Economy Summit. Jeff believes that it should be a paramount concern for all of us that there is a shortage of good jobs around the world. “We need to be investing in this next generation of who’s employable and what skills they need. And that’s the purpose of companies just like it is of schools.” That is, good jobs, not just profits or even great products, are among the key outputs of a great company. Executives can’t just complain about not being able to hire the right people. They have to take responsibility for training the people they need for the jobs of the future. “If there’s going to be a competitive workforce,” he continued, “we need to be at the leading edge of who is going to create that.”

The question is not whether there will be enough work to go around, but the best means by which to fairly distribute the proceeds of the productivity made possible by the WTF? technologies of what Erik Brynjolfsson and Andy McAfee call “the second machine age.”

Reducing working hours for the same amount of pay is one of the most fundamental ways that the benefits of rising productivity have traditionally been distributed more widely. In 1870, the average American (male) worked 62 hours per week; by 1960, that number was down to just over 40 hours, where it has roughly hovered since. Yet our material standard of living is far higher. Unpaid work in the home (mostly done by women) has declined even more sharply, from 58 hours in 1900 to 14 in 2011. One key question is why external paid labor hours have not fallen further in the past fifty years, matching the increase in productivity for domestic labor. The case can be made that the entry of women into the paid external workforce, then global access to workers in low-wage countries, and direct legislative action have reduced the bargaining power of labor, allowing companies to allocate the surplus to corporate profits rather than reducing working hours and paying higher hourly wages, as happened in the past.

Education is another way that we have effectively reduced working hours. Young children once went to work; in the nineteenth century, we sent them instead to school. In the first half of the twentieth century, the high school movement extended schooling by another six years; in the second half, college added two to four more. As we will discuss in Chapter 15, education will need to be extended again to meet the changing needs of the twenty-first century.

Something must be done to end this “temporary phase of maladjustment,” which has gone on far too long and created so much economic pain for too many!

It is deeply unfortunate how difficult it is for humans to practice foresight. In his wise and insightful book The Wealth of Humans, Ryan Avent, a senior editor at the Economist, traces the lessons that we could and should take from the centuries of economic and political struggle that led from the innovations of the industrial revolution to the successful economies of the second half of the twentieth century. Prosperity came when the fruits of productivity were widely shared; enmity, political turmoil, and even outright warfare were the harvest of rampant inequality. It is obvious that generosity is the robust strategy.


Universal basic income (UBI) is one proposed mechanism for achieving the transition between today’s system and a more human-centered future. This proposal, that every human being should be given an income sufficient to meet the basic needs of life, appeals to progressives as a basic human right, and to conservatives as a way of radically simplifying the complex rules of the present welfare state.

Fabled labor leader Andy Stern left his job as the head of the Service Employees International Union (SEIU) to write a book making the case for UBI; Y Combinator Research has begun a pilot program in Oakland, California; and peer-to-peer charity GiveDirectly is asking its users to fund a pilot in Kenya. The GiveDirectly experiment is fascinating on two fronts: It is crowdfunded by ordinary people, who already use the platform to provide aid in the form of direct cash transfers to the needy; and in a developing country, the costs are lower so the program can be more extensive, and thus allows for a true randomized control trial.

These experiments tell us how far the idea has come since it was proposed by Thomas Paine in 1795, and more recently by Milton Friedman in 1962 (and Paul Ryan in 2014). There are many arguments against UBI, most notably the cost of making it truly universal, and the concern that providing the income to people whether they need it or not will starve existing programs that provide targeted aid to those who actually need it. At the very least, though, UBI provides a compelling exercise in imagining a radically different way of building a social safety net, and, in thinking through how we might pay for it, a radically different way of dividing up the economic pie.

I asked MIT labor economist David Autor whether there were any natural experiments in universal basic income, and what they teach us. He cited the contrast between Saudi Arabia and Norway. Both countries have enormous oil wealth, he noted, but in Saudi Arabia, the bulk of the wealth goes to a small percentage of the population. Much of the work of everyday society is looked down on and is done by an underclass of low-paid “guest workers,” while an elite works at sinecure jobs or enjoys idle pursuits. In Norway, by contrast, Autor said, “All kinds of work are valued. Everybody works, they just work a little less.” The generous redistribution of oil profits and a strong social safety net, funded by wealth that is understood to belong to all, make Norway one of the happiest and wealthiest countries in the world.

For a technology perspective, I turned to Paul Buchheit, creator of Gmail and now a partner at Y Combinator, and Sam Altman, the head of Y Combinator. In a 2016 conversation, Paul said to me: “There may need to be two kinds of money: machine money, and human money. Machine money is what you use to buy things that are produced by machines. These things are always getting cheaper. Human money is what you use to buy things that only humans can produce.”

The idea that there should be different kinds of “money” is a provocative metaphor rather than a concrete proposal. Money is already a method for agreeing on the exchange rate between radically different kinds of goods and services. Why should we need different kinds of money? I’m not sure that Paul meant this literally. What he was pointing to is that at different times in history, the primary lever for the creation of money has changed. Ownership of land was once the key to great wealth. During the industrial era, we built mechanisms that were optimized for converting a regimented combination of human and machine labor into money. In the twenty-first century, we need to recognize and optimize for a different kind of value.

Paul’s argument is that the key thing that humans offer that machines do not is “authenticity.” You can buy a cheap table made by a machine, he said, or a handcrafted table made by a person. In the long term, the price of the former (in machine money) should decline, but the latter will always cost about the same in human money (some quantity roughly proportional to the number of hours required to make it).

Paul believes that the right name for what many are calling “universal basic income” should be “the citizen’s dividend,” the name given to it in Thomas Paine’s Agrarian Justice. Paine made the appeal for sharing the value of unimproved land with every citizen of the new United States; Buchheit suggests that all of mankind should have some claim on the fruits of technological progress. That is, we should use tax policy to capture some amount of the bounty from machine productivity, and provide that to all people as a stipend with which they can meet the needs of everyday existence. Similarly, in 2017, Bill Gates proposed a “robot tax,” with the proceeds being used to fund caring for children or the elderly, or for education.

Paul believes that the bounty from the next generation of machine productivity should be distributed widely enough that everyone can have sufficient “machine money” to meet their basic needs. Meanwhile, that productivity should also provide goods at ever-lower costs, increasing the value of the citizen’s dividend. This is the world of prosperity that Keynes envisioned for his grandchildren.

How might we pay for a universal basic income? The entire amount the United States federal government spends on social welfare programs—$668 billion in 2014—would amount to only $2,400 per person. Rutger Bregman, the author of Utopia for Realists, a book about basic income, divides the pie differently, pointing out that rather than providing an income to those who don’t need it, we could use a negative income tax to give cash only to those who actually need it. Writers Matt Bruenig and Elizabeth Stoker calculated that in 2013, bringing every American living below the poverty line up to at least that level would have cost only $175 billion.
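The negative income tax mentioned here, as Friedman proposed it, is a simple payout rule: households below an income threshold receive a fraction of the shortfall in cash. The threshold and phase-out rate in the sketch below are hypothetical, chosen only for illustration; Friedman left both as policy choices.

```python
def negative_income_tax(income, threshold=30_000, rate=0.5):
    """Pay out a fraction of the shortfall below the income threshold.

    Illustrative parameters only: a $30,000 threshold and a 50%
    phase-out rate are assumptions, not figures from the text.
    """
    shortfall = max(0, threshold - income)
    return rate * shortfall

print(negative_income_tax(0))       # 15000.0 (maximum payment)
print(negative_income_tax(20_000))  # 5000.0
print(negative_income_tax(40_000))  # 0.0 (above the threshold)
```

Because the payment tapers smoothly to zero as earnings rise, a negative income tax targets aid at those below the threshold without the all-or-nothing cliffs of many existing benefit programs.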

Sam Altman explained that those who argue about how we would pay for a universal basic income today miss the point. “I am confident that if we need it, we will be able to afford it,” he said in a 2016 discussion of UBI at venture capital firm Bloomberg Beta with Andy Stern and the Aspen Institute’s Natalie Foster. One major factor that isn’t being considered, as Sam expanded on it in our subsequent conversation, is that the possible productivity gains from technology are enormous, and these gains can be used to reduce the price of any goods produced by machines—a basket of goods and services sufficient to support basic needs that costs $35,000 today might cost $3,500 in a future where the machines have put so many people out of work that a universal basic income is required.

Hal Varian agrees. “In fact, it has to work that way,” he told me. “If people adopt a technology because it produces more output at a lower cost, then the size of the pie gets bigger. The real question is how that additional value is divided.”

Neither Paul nor Sam addressed the point that not all goods become evenly cheaper—in many cities, for instance, the price of housing has gone up far faster than the price of consumables has gone down. Nor do they address the political obstacles to dividing that bounty. Nonetheless, there is enough truth in this idea to support Paul’s metaphor that machine money could operate by different rules from human money. In a profound way, the value of machine money inflates not as a currency normally inflates, but because the lower costs provided by machine productivity constantly increase its purchasing power. Meanwhile, the declining cost of anything made by machines would argue that the work that humans alone can do should become more rather than less valuable.
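The purchasing-power argument can be made concrete with a few lines of arithmetic. The figures below echo Sam's hypothetical example (a tenfold machine-driven cost reduction); both the basket cost and the dividend amount are assumptions for illustration, not data.

```python
# Illustrative only: hypothetical figures, not a forecast.
basket_today = 35_000       # basic-needs basket at today's prices
productivity_gain = 10      # assumed machine-driven cost reduction
basket_future = basket_today / productivity_gain

# A fixed "machine money" dividend buys more as machine-made goods
# get cheaper: purchasing power rises even though the dividend is flat.
dividend = 10_000
baskets_today = round(dividend / basket_today, 2)
baskets_future = round(dividend / basket_future, 2)

print(basket_future)    # 3500.0
print(baskets_today)    # 0.29 baskets today
print(baskets_future)   # 2.86 baskets after the cost decline
```

The point of the sketch is the ratio, not the particular numbers: any fixed stipend denominated in "machine money" gains purchasing power at the same rate that machine productivity pushes costs down.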

The remainder of this chapter will discuss some ways in which that future is and is not unfolding.

The chorus of doubt about the jobless future sounds remarkably similar to the one that warned of the death of the software industry due to open source software. Clayton Christensen’s Law of Conservation of Attractive Profits holds true here too. When one thing becomes commoditized, something else becomes valuable. We must ask ourselves what will become valuable as today’s tasks become commoditized.


What might we do with our time, if there were a universal basic income sufficient to meet the necessities of life, or if paid working hours were reduced by the same amount as domestic labor, and wages increased? Keynes was right. The key question for mankind should be how to use our freedom from pressing economic cares, how to occupy our leisure, and how “to live wisely and agreeably and well.”

What might we do with our time, if we didn’t have to work for a living? The things that require a human touch, for starters. Caring for our parents and our friends. Reading aloud to a child. And things we do for love. Enjoying a meal with a loved one is not something that machines can make more efficient.

I love Paul’s distinction between two types of money, but I do wonder whether it is complete. His notion of human money encompasses two very different classes of goods and services: those that involve a human-to-human touch—parenting, teaching, caregiving of all kinds—and those that involve creativity.

Perhaps “human money” should be further subdivided into “caring money” and “creativity money.” Caring is a necessity of life, just as food and shelter are, and should not be denied to anyone in a just society. In an ideal world, caring is a natural outgrowth of family and community, as we care for those we love.

Time is a key currency of caring. And that brings us full circle, back to the on-demand economy as an alternative to traditional employment. For many people, an on-demand platform that allows a better blend of personal human time and machine money time may be a real step forward into a far better labor economy than an attempt to fit everyone back into the regimented industrial age world of jobs with regular forty-hour workweeks.

Anne-Marie Slaughter, the president of New America and author of Unfinished Business: Women Men Work Family, notes that the on-demand economy “will reshape not only ways of working but also patterns of consumption.” She hopes for a future where the choice to take off time to raise children or take care of parents will not be a career-ending move. “Care is unpredictable and work traditionally has been fixed. And that doesn’t work,” she told me in an onstage interview at my 2015 Next:Economy Summit in San Francisco. “So when you are able to schedule your own work, that is the solution to the care problem. But it is only a solution to the care problem if we can let people make a living and support the families they are caring for.”

However, economies thrive on exchange, and even in the world of caring, money is a substitute for time. And so there is also a caring economy of paid professionals, including teachers, doctors, nurses, eldercare assistants, babysitters, hairdressers, and massage therapists. Back in 1950, who would have guessed that in 2014 there would be nearly 300,000 “fitness trainers” in the United States?

If you look at the current shape of the economy, there are huge and growing numbers of service jobs of this kind. A study of UK census data by consulting firm Deloitte found that in 1871, caring economy jobs represented 1.1% of the total labor economy. By 2011, they were 12.2%. The report also noted that between 1992 and 2014, the number of nursing auxiliaries and assistants rose tenfold, and the number of teaching assistants rose nearly sevenfold.

In a society with an inverted demographic pyramid, in which there are far more of the elderly than young people to support them, as we will see in many developed countries by 2050, there may not be enough people to do the work of caring and machines may even be called on to fill the gap. This problem is not limited to developed countries; China’s rapidly growing middle class is an eager consumer of caring services.

On-demand technology has promise for growing the market even further. Honor, a service founded by Seth Sternberg to make it easier for older adults to remain in their own homes, pays its caregivers as full-time employees with benefits, but uses on-demand technology to make care more flexible and affordable for consumers. Seth told me that being able to purchase just the amount of care you need, when you need it, means that people who could never afford the service find themselves able to do so, and the market expands.

The economic problem is that caregiving is insufficiently valued in our society. If there were ever a case to be made for the Clothesline Paradox, this is it. Why is it that work that is so valuable to society is expected to be provided for free, or when paid, is paid so poorly?

If we are working from a new map, in which our objective is to value human effort, not to dispense with it, we surely must start by assigning an economic value to caregiving.

If you think about it, this is in fact what most countries (and progressive employers in the United States) are doing with extended paid parental leave for both women and men, or when countries provide public financing for eldercare services. (The United States is one of only two countries in the world that don’t mandate any paid maternity or paternity leave; the other is Papua New Guinea.)

Parental leave is just the beginning. Early childhood education could be revolutionized by an economic system that provided basic income and the flexibility for parents to spend time with their children. Hiring more teachers at better salaries and reducing class size in public schools to the level of the best private schools would be another pragmatic way to transition to the caring economy. It is slowly being recognized that the cost of insufficient care for children gets paid one way or another, if not up front, then in healthcare or prison costs later in life.

Even in the absence of changes in childcare, eldercare, or education spending, I suspect that if we successfully tackle the problem of creating a better income distribution across all levels of society by some other means, people will naturally allocate more of that income to caring, education, and similar activities. After all, we already know that given sufficient income, people routinely pay more for better, more personal service. The rich still live in a world where doctors make house calls and personal tutoring is the norm.

Might it not be the case that in a world where routine cognitive tasks are commoditized by artificial intelligence, it is the human touch that will become more valuable, the source of competitive advantage?

The issue remains whether a combination of market forces and political action can increase the earnings of those who do the work that is not going to be automated away. Even if we never run out of jobs, we must still ask what sort of lives those jobs will pay for. A world in which a small number of people enjoy productive, highly paid work and can indulge in expensive leisure activities and superb personal care while others are ground underfoot is not a world that any of us should aspire to.


As suggested in the first part of this chapter, the principal work of the twenty-first century should be to harness the power of today’s digital and cognitive technologies to achieve leaps of presently inconceivable progress analogous to those that our nineteenth- and twentieth-century forebears accomplished with their industrial tools. It may be that we will need fewer human hours to do that work, just as over the past centuries we have vastly reduced the amount of labor required to feed ever-larger numbers of people.

But the work of the nineteenth and twentieth centuries included innovations not just in food production, commerce, transportation, energy, sanitation, and public health but also in new ways for far more people to consume the vast variety of goods and services made possible by those innovations. So too, the cognitive era will bring forth new types of consumption. This is the realm of creativity money. Creativity is an indomitable wellspring within all of us. It is part of what makes us human, and in many ways, it is entirely independent of the monetary economy.

It is a mistake to think that “the creative economy” is limited to entertainment and the arts. Creativity is the focus of a competition for accumulation as intense as any that characterizes Paul Buchheit’s machine money. It is at the heart of industries like fashion, real estate, and luxury goods, all of which depend on the competition among people who are already rich to own more, to enjoy, or sometimes just to show off their wealth.

Creativity money is another way of saying that we pay a premium for the good things of life beyond the basics. Sports, music, art, storytelling, and poetry. The glass of wine with friends. The night out at the movies or the local music venue. The beautiful dress and the sharp suit. The combination of design, manufacturing, and marketing that goes into the latest LeBron James basketball shoe.

People at all levels of society pay that premium as a way of expressing and experiencing beauty, status, belonging, and identity. Creativity money is what someone pays for the difference between a Mercedes C-Class and a Ford Taurus, for a meal at a world-famous restaurant like the French Laundry rather than the local French bistro, or at that same bistro rather than at a McDonald’s. It is why those who can afford it pay three dollars for an individually crafted cappuccino rather than drinking Folgers coffee from a five-pound can, as our parents did. It is why people pay huge prices or wait years to see Hamilton, while tickets for the local dinner theater are available right now.

Dave Hickey, an art critic and MacArthur “genius” grant winner, describes how Harley Earl of General Motors, the first-ever head of design for a major American company, turned the post–World War II automobile into “an art market.” Hickey defines that as a market in which products are sold on the basis of what they mean, not just what they do. The annual turnover of new models was one way that Detroit soaked up the enormous postwar productive capacity of America’s factories.

Turning the computer into an “art market” is also a perfect explanation of what Steve Jobs accomplished when he returned to Apple in 1997. “Think Different” was a powerful statement that buying from Apple was a statement about who you are. Yes, the products were beautiful and useful, but just like the automobile when it was the ultimate object of consumer desire, the Mac, and later the iPhone, became a statement of identity. Design was not just a functional improvement, but a way of making a statement. In a world where personal computers had become a commodity, design became a unique source of added value. Once again, attractive profits were conserved.

In the late eighteenth century, in his short novel Rasselas, Samuel Johnson wrote that the Great Pyramid “seems to have been erected only in compliance with that hunger of imagination which preys incessantly upon life, and must be always appeased by some employment. Those who have already all that they can enjoy must enlarge their desires. He that has built for use till use is supplied must begin to build for vanity, and extend his plan to the utmost power of human performance that he may not be soon reduced to form another wish.” That is, even in a world where every need is met, there will still be “a world full of wants.”

Given an income sufficient to the necessities of life, some people will choose to step off the wheel—to spend more time with family and friends, in creative pursuits, or whatever they damn well please. But even if the machines do most of the essential work and everybody gets a stipend to cover the cost of basic living expenses, competition for additional creativity money will likely drive an economy in which some people just get by while others develop solid middle-class incomes and still others amass vast fortunes.

I’m fascinated by a comment that Hal Varian, Google’s chief economist, made to me over dinner one night: “If you want to understand the future, just look at what rich people do today.” It’s easy to think of this as a heartless libertarian sentiment. Our dinner companion, Hal’s former student and coauthor Carl Shapiro, fresh from his stint at Obama’s Council of Economic Advisers, seemed horrified. But when you think about it for a moment, it makes sense.

Dining out was once the province of the wealthy. Now far more people do it. In our most vibrant cities, a privileged class experiences a taste of a future that could be the future for everyone. Restaurants compete on the basis of creativity and service, “everyone’s private driver” whisks people around in comfort from experience to experience, and one-of-a-kind boutiques provide unique consumer goods. Rich people once took the European grand tour; now soccer hooligans do it. Cell phones, designer fashion, and entertainment have all been democratized. Mozart had the Holy Roman Emperor as his patron; Kickstarter, GoFundMe, and Patreon extend that opportunity to millions of ordinary people.

This rings of bubble talk from the privileged coasts. Yet it is true far more broadly. Cell phones are found even in the poorest parts of the world. The variety of clothing, food, and consumer goods available at a Walmart would astonish even wealthy people from fifty years ago.

Restaurants—and food in general—teach us something profound about the future of the economy. Everywhere, food is blended with ideas to make it more valuable. As Korzybski said, “People don’t just eat food, but also words.” This isn’t just ordinary coffee. It’s fair-trade, single-origin coffee. And look, we have six different kinds. You must try them all. These aren’t ordinary fruits and vegetables. They are organic, farm-to-table. This bread is gluten-free. Is that North Carolina barbecue or Texas barbecue? KFC or Church’s Fried Chicken?

At every price level, there is competition to provide a unique experience. Food is a commodity, yet, just as Christensen pointed out, when one thing becomes a commodity, something adjacent becomes valuable. In a flourishing city, there is a dizzying array of creative, multicultural dining options.

In 2016, I met with a staffer from the White House who wanted my advice on which Silicon Valley entrepreneur President Obama should sit down with onstage at the Global Entrepreneurship Summit. “We’re here in a wonderful restaurant in Oakland,” I said. “Boot and Shoe Service is one of three restaurants created by a man named Charlie Hallowell. They are part of why people now say Oakland is a great place to live. We need more Charlie Hallowells more than we need another Mark Zuckerberg.” After all, a great platform like Facebook is a rare thing, not easily duplicated. You can count the people who succeed like Zuck did on your fingers; not so the tens of thousands of Charlie Hallowells who characterize a truly rich and diverse economy.

New industries driven by the human touch are everywhere. In the United States, more than 4,200 craft breweries now make up more than 10% of the market, and command a price double that of a mass-produced beer. In the first quarter of 2016, 25 million customers purchased handcrafted and artisan goods on Etsy. These are small green shoots in an economy dominated by mass-produced products, but they teach us something important.

What is happening in entertainment may be another interesting harbinger of the future. While blockbusters still dominate in Hollywood and New York publishing, a larger and larger proportion of people’s entertainment time is spent on social media, consuming content created by their friends and peers. Anne-Marie Slaughter notes that “millennials’ definition of quality of life now involves more time and less stuff.” They want to spend their money on experiences, not things.

That profound shift in media consumption has most visibly enriched Facebook, Google, and the current generation of media platforms, but it has also created new opportunities for professional media creators. The New York Times or Fox News article shared on Facebook has something added to it that a copy picked up from a newsstand never did: the endorsement of someone you know. The art of sharing things that will spread virally often now involves remixing them in some way—combining a quote with an image, or framing the subject with a pithy observation of your own.

Social media is also increasingly creating paying jobs for a growing number of individual media creators. YouTube star and VidCon impresario Hank Green wrote, “I started paying my bills with YouTube money around the time I hit a million views a month.” Millions of teens use “Hank and John EXPLAIN!” videos to learn about current events, and they get a deeper dive in a five-minute video than they would in hours of mass-produced “news.” Millions more learn math, science, music, and philosophy from other YouTube channels like Khan Academy, or One-Minute Physics, or Hank’s own Crash Course. When my young niece learned that I knew Larry Page and Mark Zuckerberg and Bill Gates, she said, “Meh!” But when she heard I knew Hank and John Green, she was really impressed.

Keep in mind that “YouTube money,” as Hank names it, is only one of many new forms of creative money that are available via online platforms. There’s Facebook money, Etsy money, Kickstarter money, App Store money, and more. Who would have thought ten years ago that people could make six-figure earnings playing video games while millions of others follow along on YouTube or Twitch?

To those who worry that these small signs of a new economy could not possibly replace the jobs of today, I would once again cite Gibson’s observation that “the future is here. It just isn’t evenly distributed yet.” Every flourishing harvest begins with the smallest of shoots poking their heads through the soil.

Some of these marketplaces are further along than others in creating opportunities for individuals and small companies to convert attention (the raw material of creativity money) into cash. The next few years will see an explosion of startups that find new ways to convert more and more of the attention that is spent online into traditional money.

Jack Conte, half of the musical duo Pomplamoose and founder and CEO of crowdfunding patronage site Patreon, told me that he founded Patreon after “Nataly and I got 17 million views of our music videos, and it turned into $3,500 in ad revenue. Our fans value us more than that.” Tens of thousands of artists now receive enough patronage via the platform that they can concentrate on their work. As crowdfunding sites like Patreon (and, of course, Kickstarter, Indiegogo, and GoFundMe) show, there are increasingly new opportunities for ordinary people to compete for real currency, not just attention. These sites are still a relatively small part of the overall economy, but they have a lot to teach us about its possible future direction.

Perhaps the right answer, though, is not to monetize creativity in the old way, by converting it to machine money, but to build an entirely new kind of economy. In his 2003 novel, Down and Out in the Magic Kingdom, science fiction writer and activist for a better future Cory Doctorow wrote of a future economy where advanced technology has made it essentially free to meet any physical need. The economy instead is based on a reputation currency called “whuffie.” The economic competition is to get other people to approve of and support your creative projects. Kickstarter campaigns and Facebook likes may be early prototypes of that future currency.

Creativity can be the focus of an intense competition for status, so that “he who has built for use till use is supplied must begin to build for vanity,” but it can also be the key to a future human economy that would let all enjoy the fruits of leisure that are brought to us by machine productivity while also encouraging entirely new kinds of creative work and social consumption.

Work gives a sense of purpose, and it’s also worth considering how many things people work at that are currently unpaid or low paid are actually far more valuable to them than the things we have been mistakenly trained to pay for. Aspiring actors and musicians working as baristas to pay the rent consider the constant training and auditions in hope of future success to be their real work. It is not at all inconceivable to add “I’m working on my YouTube channel” or “I’m building my Facebook following” to the list of pursuits that deliver a higher proportion of purpose than of remuneration. Dave Hickey writes that his dad “thought money was something you turned into music, and music, ideally, was something you turned into money.” It was music, not money, that gave him purpose and made him happy.

Purpose and meaning are also essential to the caring economy. Jen Pahlka told me the story of a Lyft driver she met in Indianapolis, who leaves a couple of hours early every morning to pick up strangers because he doesn’t get enough human contact in his job as a highly paid engineer. He donates his earnings from Lyft to charity.

The volunteer at a homeless shelter may derive far deeper meaning from that unpaid care for other human beings than from the rushed busywork of even a fulfilling career. The amateur athlete may consider her or his training and competitions more important to happiness than earning big bucks at the investment bank. A father or mother who stays home to raise children is not “opting out.” He or she is opting in to something potentially far more meaningful and important.

This is the possibility that Keynes foresaw when he wrote: “The strenuous purposeful money-makers may carry all of us along with them into the lap of economic abundance. But it will be those peoples, who can keep alive, and cultivate into a fuller perfection, the art of life itself and do not sell themselves for the means of life, who will be able to enjoy the abundance when it comes.”

Research on what demographers Gianni Pes and Michel Poulain called “blue zones”—areas with the highest percentage of centenarians, so-called because they originally marked them with blue circles drawn on a map—identified the key characteristics that lead to longer, happier lives. There were a number of dietary factors (an approach that author Michael Pollan summarized as “Eat food. Not too much. Mostly plants”), moderate, regular alcohol intake, especially wine, and moderate, regular physical activity. But even more important were a sense of purpose, engagement in spirituality or religion, and engagement in family and social life.

We know what the good life looks like. We have the resources to provide it to everyone. Why have we constructed an economy that makes it so difficult to achieve?

When faced with questions of how we adapt society and the economy to the current wave of technological change, our goal should not be to have the future look like the past. We must make it new. Writing about the political challenges we face today, Jen Pahlka put her finger on what must always be a key principle for thinking about the future:

The status quo isn’t worth protecting. It’s so easy to be in reaction, on the defensive, fighting for the world we had yesterday. Fight for something better, something we haven’t seen yet, something we have to invent.
