5

NETWORKS AND THE NATURE OF THE FIRM

When Dale and I launched GNN in 1993, our model was shaped by our experience as publishers. We curated a catalog that highlighted “the best of” the Web, we took over the NCSA “What’s New” page to announce new sites, and we did other things that made sense in the publishing world we’d grown up in, one of whose key functions was curation.

Our eyes were opened as Yahoo! took on the far more ambitious goal of cataloging everything on the web. Along with the rest of the media world, we watched in awe (though many also watched in dismay) as Google (and later Facebook) became media titans by algorithmically curating what would once have been an enormous “slush pile,” making it valuable to their customers and advertisers.

Today, on-demand companies like Lyft and Uber in transportation and Airbnb in hospitality bring a similar model to the physical world.

Finnish management consultant Esko Kilpi beautifully describes the power of these new technology-enabled networks in an essay on Medium, “The Future of Firms.” Kilpi reflects on economist Ronald Coase’s theory of twentieth-century business organization, which explores the question of when it makes sense to hire employees rather than simply contracting the work out to an individual or small company with specialized expertise. Coase’s answer is that it makes sense to put people into one business organization because of the transaction costs of finding, vetting, bargaining with, and supervising the work of external suppliers.

But the Internet has changed that math, as Kilpi observes. “If the (transaction) costs of exchanging value in the society at large go down drastically as is happening today,” he writes, “the form and logic of economic and organizational entities necessarily need to change! The core firm should now be small and agile, with a large network.” He adds: “Apps can do now what managers used to do.”

As far back as 2002, Hal Varian predicted that the effect might be the opposite. “Maybe the Internet’s role is to provide the inexpensive communications that can support megacorporations,” he wrote. In a follow-up conversation, he said to me, “If transaction costs go down, coordination within firms becomes cheaper too. It’s not obvious what the outcome will be.”

Of course, networks have always been a part of business. An automaker is not made up of just its industrial workers and its managers, but also of its network of parts suppliers and auto dealerships and ad agencies. Similarly, large retailers are aggregation points for a network of suppliers, logistics companies, and other service providers. Fast-food vendors like McDonald’s and Subway aggregate a network of franchisees. The entire film and TV industry consists of a small core of full-time workers and a large network of temporary on-demand workers. This is also true of publishing and other media companies. My own company, O’Reilly Media, publishes books, puts on events, and delivers online learning with a full-time staff of four hundred and an extended network of tens of thousands of contributors—authors, conference presenters, technology advisers, and other partners.

But the Internet takes the networked firm to a new level. Google, the company that ended up as the prime gateway to the World Wide Web, provides access to a universe of content that it doesn’t own yet has become the largest media company in the world. In 2016, Facebook’s revenues also surpassed those of the largest traditional media companies. Americans 13 to 24 years old already watch more video on YouTube, much of it user-contributed, than they watch on television. And Amazon has surpassed Walmart as the world’s most valuable retailer by offering virtually unlimited selection, including marketplace items from ordinary individuals and small businesses. These are companies that, to rephrase Kilpi, “are large and agile, with a large network.”

But perhaps most important, these companies have gone beyond being just hubs in a network. They have become platforms providing services on which other companies build, central to the operation and control of the network. And as we shall come to see in later chapters, when marketplaces become digital, they become living systems, neither human nor machine, independent of their creators and less and less under anyone’s control.

THE EVOLUTION OF PLATFORMS

On-demand companies like Uber and Lyft are only the latest development in an ongoing transformation of business. Consider the evolution of the retail marketplace as exemplified first by chain stores, and then by Internet retailers like Amazon, which have largely replaced a network of small local businesses that delivered goods through retail storefronts. Cost efficiencies led to lower prices and greater selection, drawing more consumers, which in turn gave more purchasing power to larger retailers, allowing them to lower prices further and to crush rivals in a self-reinforcing cycle. National marketing of these advantages led to the rise of familiar chains. The Internet added even more leverage, reducing the need to invest in real estate, reaching customers regardless of their geographical location, and building in new habits of customer loyalty and instant gratification. With delivery now same-day in many locations, anything you need is only a few clicks away.

Internet retailers like Amazon were also able to offer even larger selections of products, not just aggregating offerings from a carefully chosen network of suppliers, but opening up self-service marketplaces in which virtually anyone can offer products. Years ago, Clay Shirky described the move from “filter, then publish” to “publish, then filter” as one of the key advantages brought by the Internet to publishing, but the lesson applies to almost every Internet marketplace. It is fundamentally an open-ended network in which filtering and curation (known in other contexts as “management”) happen largely after the fact.

But that’s not all. While large physical retailers cut costs by eliminating knowledgeable workers, offering lower prices and greater selection to make up for worse customer service (compare an old-time hardware store with a chain like Home Depot or Lowe’s), online retailers did not make these same trade-offs. Instead of just eliminating knowledgeable workers, they replaced and augmented them with software.

Even though there are several orders of magnitude more products on Amazon than in physical stores, you don’t need a salesperson to help you find the right product—a search engine helps you find it. You don’t need a salesperson to help you understand which product is the best—Amazon has built software that lets customers rate the products and write reviews to tell you which are best, and then feeds that reputation information into their search engine so that the best products naturally come out on top. You don’t need a cashier to help you check out—software lets you do that yourself.
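
The feedback loop is simple to sketch. Here, as a minimal illustration, is how a ranking might blend text relevance with reputation so that well-reviewed products rise to the top; the product data, weights, and scoring formula are my own inventions, not Amazon’s actual algorithm:

```python
import math

# A toy product search that folds customer ratings into the ranking.
# The data, weights, and scoring formula are all illustrative.
products = [
    {"name": "6-outlet surge protector", "rating": 4.7, "reviews": 8210},
    {"name": "surge protector tower", "rating": 3.9, "reviews": 312},
    {"name": "travel power strip", "rating": 4.4, "reviews": 1904},
]

def score(product, query):
    # Crude text relevance: fraction of query words found in the name.
    words = query.lower().split()
    relevance = sum(w in product["name"].lower() for w in words) / len(words)
    # Reputation: average rating, damped by review count so that a handful
    # of five-star reviews can't outrank thousands of four-star ones.
    reputation = product["rating"] * math.log10(1 + product["reviews"])
    return relevance * reputation

query = "surge protector"
for p in sorted(products, key=lambda p: score(p, query), reverse=True):
    print(f"{score(p, query):5.1f}  {p['name']}")
```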

Amazon’s use of automation goes far beyond the use of robots in its warehouses (though Amazon Robotics is one of the leaders in the field). Every function the company performs is infused with software, organizing its workers, its suppliers, and its customers into an integrated workflow. Of course, every corporation is a kind of hybrid of man and machine, created and operated by humans to augment their individual efforts. But even the highest-performance traditional company has an internal combustion engine; a digital company is a Tesla, with high-torque electric motors at each wheel.

The greater labor efficiency of the online model can be seen by comparing the revenue per employee of Amazon versus Walmart. Walmart, already the most efficient offline retailer, employs 2.2 million people to achieve its $483 billion in sales, or approximately $219,000 per employee. Amazon employs 341,000 people to achieve $136 billion in sales, or approximately $399,000 per employee. Were it not for Amazon’s continuing investments in expansion and R&D, that number would be far higher.
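
The arithmetic behind those figures is worth a quick back-of-the-envelope check:

```python
# Revenue per employee, using the figures cited above.
walmart = 483e9 / 2_200_000
amazon = 136e9 / 341_000
print(f"Walmart: ${walmart:,.0f} per employee")  # ≈ $219,545
print(f"Amazon:  ${amazon:,.0f} per employee")   # ≈ $398,827
```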

NETWORKED PLATFORMS FOR PHYSICAL WORLD SERVICES

One way to think about the new generation of on-demand companies such as Uber and Lyft is that they are networked platforms for physical world services, bringing a fragmented industry into the twenty-first century in the same way that e-commerce transformed retail. Technology is enabling a fundamental restructuring of the taxi and limousine industry from that of a network of small firms to a network of individuals, replacing many middlemen in the taxi business with software, using the freed-up resources to put more drivers on the road.

The coordination costs of the taxicab business have generally kept it local. According to the Taxicab, Limousine & Paratransit Association (TLPA), the US taxi industry consists of approximately 6,300 companies operating 171,000 taxicabs and other vehicles. More than 80% of these are small companies operating anywhere between 1 and 50 taxis. Only 6% of these companies have more than 100 taxicabs. Only in the largest of these companies do multiple drivers use the same taxicab, with regular shifts. And 88% of taxi and limousine drivers are independent contractors.

When you as a customer see a branded taxicab, you are seeing the brand not of the medallion owner (who may be a small business with as few as a single cab) but of the dispatch company. Depending on the size of the city, that brand may be sublicensed to dozens or even hundreds of smaller companies. This fragmented industry provides work not just for drivers but for managers, dispatchers, maintenance workers, and bookkeepers. The TLPA estimates that the industry employs a total of 350,000 people, which works out to approximately two jobs per taxicab. Since relatively few taxicabs are “double shifted” (these are often in the largest, densest locations, where it makes sense for the companies to own the cab and hire the driver as a full-time employee), that suggests that almost half of those employed in the industry are in secondary support roles. These are the jobs that are being replaced by the efficient new platforms. Functions like auto maintenance still have to be performed, so those jobs remain.
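
The same back-of-the-envelope arithmetic makes the split explicit (TLPA figures as above; the one-driver-job-per-cab assumption is my own simplification):

```python
# TLPA figures cited above; assuming roughly one driver job per cab.
# Double-shifting pushes the true driver count a little higher, and the
# support share a little lower: hence "almost half."
total_jobs = 350_000
taxicabs = 171_000
print(f"Jobs per cab: {total_jobs / taxicabs:.2f}")  # ≈ 2.05
drivers = taxicabs
support = total_jobs - drivers
print(f"Support roles: {support / total_jobs:.0%}")  # ≈ 51%
```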

The fact that Uber and Lyft use algorithms and smartphone apps to coordinate driver and passenger can lead us to overlook another fact—that, at bottom, Uber and Lyft provide dispatch and branding services much like existing taxi companies, only more efficiently. And like the existing taxi industry, they essentially subcontract the job of transport—except in this case, they subcontract to individuals rather than to smaller businesses, and take a percentage of the revenue rather than charging a daily rental fee for the use of a branded taxicab.

These firms thus use technology to eliminate the jobs of what used to be an enormous hierarchy of managers (or a hierarchy of individual firms acting as suppliers), replacing them with a relatively flat network managed by algorithms, network-based reputation systems, and marketplace dynamics. These firms also rely on their network of customers to police the quality of their service. Lyft even uses its network of top-rated drivers to onboard new drivers, outsourcing what once was a crucial function of management.

But focusing on the jobs that are lost is a mistake. Jobs are not lost so much as they are displaced and transformed. Uber and Lyft now deploy more drivers (albeit a majority of them part-time) than the entire prior taxi industry. (I have been told that Uber has about 1.5 million monthly active drivers worldwide. Lyft has 700,000.) They have also provided an additional source of customers for limousine drivers at the same time that they have provided punishing competition for traditional taxi companies.

There are other on-demand employers hiding in plain sight. I have been told that at current growth rates, Flex, Amazon’s network of on-demand delivery drivers, might well be larger than Lyft by 2018. Interestingly, Flex uses a model in which drivers sign up in advance for two-, four-, or six-hour shifts for a predetermined hourly rate. Amazon takes the risk of not having enough delivery volume to keep them busy. While drivers may earn slightly less than the most successful Uber or Lyft drivers, the greater predictability has made Flex highly desirable to drivers.

Even in a world of self-driving cars, it is possible to see how increases in the services being provided can lead to more employment, not less. If we play our cards right, jobs that are lost to automation can be equivalent to the kinds of “losses” that came to bank tellers and their managers with the introduction of the ATM. It turns out that there were fewer tellers per branch but a net increase in the total number of tellers, because automation made it cheaper to open new branches. The ATM also replaced boring, repetitive tasks with more interesting, higher-value tasks. Tellers who used to do mostly repetitive work became an important part of the “relationship banking team.”

We haven’t yet seen the equivalent of the “relationship banking team” in on-demand transportation (though there are signs of what that might be in Uber’s early experiments in making house calls to deliver flu shots and bringing elderly patients to doctors’ appointments). Uber and Lyft are on their way to becoming a generalized urban logistics system. It’s important to realize that we are still exploring the possibilities inherent in the new model.

This is not a zero-sum game. The number of things that people can do for each other once transportation is cheap and universally accessible also goes up. This is the same pattern that we’ve seen in the world of media, where network business models have vastly increased the number of content providers despite centralizing power at firms like Google and Facebook. It is also the opposite of what happens in old-style firms, where concentration of power often led to a smaller set of goods and services at higher prices.

Similarly, robots seem to have accelerated Amazon’s human hiring. From 2014 through 2016, the company went from having 1,400 robots in its warehouses to 45,000. During the same time frame, it added nearly 200,000 full-time employees. It added 110,000 employees in 2016 alone, most of them in its highly automated fulfillment centers. I have been told that, including temps and subcontractors, 480,000 people work in Amazon distribution and delivery services, with 250,000 more added at peak holiday times. They can’t hire fast enough. Robots allow Amazon to pack more products into the same warehouse footprint, and make human workers more productive. They aren’t replacing people; they are augmenting them.

BLINDED BY THE FAMILIAR

When the past is everything you know, it is hard to see the future. Often what keeps us from recognizing what lies before us is a kind of afterimage, superimposed on our vision even after the stimulus is gone. Afterimages occur when photoreceptors are overstimulated because you have looked too long at an object without the small eye movements (saccades) that refresh vision, weakening the signal to the brain. Or they may occur because your eyes are compensating for bright light, and then you suddenly move into darkness.

So too, if we wrap ourselves in the familiar without exposing our minds to fresh ideas, images are burned onto our brains, leaving shadows of the past overlaid on the present. Familiar companies, technologies, ideas, and social structures hide others with a vastly dissimilar structure, and we see only ghostly images until the new comes into focus. Once your eyes have adjusted to the new light, you see what was previously invisible to you.

Science fiction writer Kim Stanley Robinson captures this moment perfectly in his novel Green Mars, when one of the original settlers of Mars has a shock of insight: “He realized then that history is a wave that moves through time slightly faster than we do.” If we are honest with ourselves, each of us has many such moments, when we realize that the world has moved on and we are stuck in the past.

It is this mental hiccup that leads to many a failure of insight. Famously, Jaron Lanier (like many others) has made the comparison between Kodak, which at its height had 140,000 employees, and Instagram, which had only 13 when it was sold to Facebook for $1 billion in 2012. It’s easy to overlay the afterimage of Kodak, and say, as Lanier did, that the jobs have gone away. Yet for Instagram to exist and thrive, every phone had to include a digital camera and be connected to a communications network; that network had to be pervasive; and data centers had to provide hosting services that allow tiny startups to serve tens of millions of users. (Instagram had perhaps 40 million users when it was bought; it has 500 million today.) Add up the employees of Apple and Samsung, Cisco and Huawei, Verizon and AT&T, Amazon Web Services (where Instagram was originally hosted) and Facebook’s own data centers, and you see the size of the mountain range of employment of which Instagram itself is a boulder on one small peak.

But that’s not all. These digital communications and content creation technologies have made it possible for a new class of media company—Facebook, Instagram, YouTube, Twitter, Snap, WeChat, Tencent, and a host of others around the world—to turn ordinary people into “workers” producing content for their advertising business. We don’t see these people as workers, because they start out unpaid, but over time, an increasing number of them see economic opportunity on the platform for which they originally volunteered, and before long, the platform supports an ecosystem of small businesses.

Of course, there were networks of people who didn’t work for Kodak either—camera manufacturers, film processors, chemical suppliers, retailers. Not to mention news, portrait, and fashion photographers. But the number of people whose jobs and lives were impacted by film photography was tiny by comparison with digital. The Internet sector now represents more than 5% of GDP in developed countries. For consumers at least, digital photography is a major driver of online activity, central to how people communicate, share, buy, sell, and learn about the world. More than 1.5 trillion digital photographs are shared online each year, up from 80 billion in the days of Kodak.

The cascade of combinatorial effects continues. Without digital photography, would there be Amazon, eBay, Etsy, or Airbnb?

Digital photography certainly played a role in the success of e-commerce, not to mention a host of hotel, restaurant, and travel sites. Being able to see a picture of a product is the next-best thing to seeing it for yourself. But for Airbnb, we have a definitive answer. Photography played a key role in its success.

The company was founded in 2008 by two designers, Brian Chesky and Joe Gebbia, and an engineer, Nathan Blecharczyk. The original idea came to them in 2007, when, as Joe described it, “our rent went up for our San Francisco apartment and we had to figure out a way to bring in some extra income. There was a design conference coming to the city, but hotels were sold out. The size of our apartment could easily fit airbeds on the floor, so we decided to rent them out.”

They built a simple website of their own rather than listing the space on Craigslist, the venerable online classified site founded in 1995 by Craig Newmark. The experiment was so successful that they decided to build out a short-term room, apartment, and home rental service for the upcoming SXSW technology conference in Austin, Texas, because they knew that every hotel room in the city would be sold out. They followed that up by doing the same thing for the 2008 Democratic National Convention, held in Denver, Colorado.

In 2009, they were accepted into Y Combinator, the prestigious Silicon Valley startup incubator, and then received funding from one of Silicon Valley’s top venture firms, Sequoia Capital. But despite a promising start, they were still struggling to acquire users fast enough. The breakthrough came when they realized that hosts were taking lousy photographs of their properties, leading to lower trust and thus lower interest from possible renters. So in the spring of 2009, Brian and Joe rented a high-end digital camera, went to New York, Airbnb’s top city at the time, and took as many professional photos as they could. Listings on the website doubled, even tripled. They then invested in a program to hire professional photographers in top cities around the world and never looked back. The company now has more rooms available every night than the largest hotel chains in the world.

BUILDING A THICK MARKETPLACE

What made Airbnb’s achievement possible, of course, was not just digital photography, making it easy for hosts to show off their property, but the World Wide Web, online credit card payments, and the experience of other sites that had built reputation systems and ratings to help users build trust with strangers. Airbnb had to wrap these services into a new platform, which you can define as the set of digital services that enables its hosts to find and serve guests.

The primary platform service provided by Airbnb, though, is not to build a pretty web page showing off a property, to schedule rentals, or to take payments. Anyone with a modicum of web experience can do all those things in an afternoon. The essential job of an Internet service like Airbnb is to build what Alvin E. Roth, the economist whose work on labor marketplaces earned him the Nobel Prize, calls a “thick marketplace,” a critical mass of consumers and producers, readers and writers, or buyers and sellers. There is many a brilliant and beautiful site that for no obvious reason never attracts users, while others, seemingly inferior in design or features, flourish.

If you’re lucky, and your timing is just right, a thick marketplace can happen organically, seemingly without deliberate effort. The first website went live on August 6, 1991. It contained a simple description of Tim Berners-Lee’s hypertext project, complete with source code for a web server and a web browser. The site could be accessed by Telnet, a remote log-in program, and using that, you could download the source code for a web server and set up your own site. By the time Dale Dougherty and I had lunch with Tim in Boston a year later, there were perhaps a hundred websites. Yet by the time Google launched in September 1998, there were millions.

Because the World Wide Web had been put into the public domain, Tim Berners-Lee didn’t have to do all the work himself. The National Center for Supercomputing Applications (NCSA), located at the University of Illinois, built an improved web server and browser. Marc Andreessen, who wrote the browser while a student there, left to found Mosaic Communications Corporation (later renamed Netscape Communications). A group of users, abandoned by the original developers, took over the server project, pooling all their patches (shared improvements to the source code) to create the Apache server, which eventually became the world’s most widely used web server. (A pun: It was “a patchy server.”)

The web became a rich marketplace of writers and readers. And from there, entrepreneurs layered on marketplaces for buying and selling everything from books and music to travel, homes, and automobiles. And for advertising them.

There were other online hypertext systems competing with the web. Microsoft had launched a series of successful CD-ROM-based information products, starting in 1992 with Cinemania, an interactive movie guide, and Encarta, a full encyclopedia released the following year. Their multimedia hypertext experience was far ahead of the nascent World Wide Web.

Dale Dougherty had gone up to Microsoft in the fall of 1993 to show them GNN, and as he recalls, they were brutally dismissive. He had been invited to present GNN and the web to a team at Microsoft. A man to whom he was never introduced “arrived late, never sat down, but interrupted me as he paced the room, dismissing the web and saying it wasn’t important to Microsoft. I recall that the others in the room knew very little about the web and they seemed curious, but upon this fellow’s abrupt dismissal, they grew quiet and the conversation ended.”

Microsoft realized, though, that there was an online hypertext opportunity after all. The Microsoft Network (MSN), a proprietary network similar to AOL, was launched in the fall of 1995. In the spring of 1996, Nathan Myhrvold, then Microsoft’s chief technology officer, gave a talk about the Microsoft Network at Esther Dyson’s influential PC Forum. I remember him showing a graph with the number of documents on one axis and the number of readers on the other, and saying, “There are a few documents that are read by millions of people, and millions of documents that are read by one or two people. But there’s this huge space in the middle, and that’s what we’re serving with MSN.”

I stood up during the Q&A and said to Nathan, “I totally agree with your insight about the huge opportunity, but you’re talking about the World Wide Web.” I had been asked by Microsoft to publish content on their new network. “Pay us $50,000, and we’ll make you rich and famous” was essentially their pitch. But the alternative was far easier: Get on the Internet if you weren’t already on it, download and set up Apache, format your content with HTML, and you’re off to the races. No contract needed. The web was a permissionless network.

Microsoft had begun to experiment with the web as early as 1994, but their big bet was on MSN. “Microsoft developed MSN to compete with AOL, something they would control in terms of content and access,” Dale recalled. “The web as an open system undermined that control, and they did not want to imagine a world without them at the center, both from a technological and business point of view.”

Permissionless networks, like open source software projects or the World Wide Web, often grow faster and more organically than those that require approval, and the web soon left MSN and AOL far behind. The web grew to hundreds of millions of websites, hosting trillions of web pages.

This is a central pattern of the Internet age: More freedom leads to more growth.

Of course, on a permissionless network like the Web, anyone could bring content. That was a boon to anyone who had content to post online (including bad actors peddling porn, scams, and pirated content), who could now reach millions of people virtually for nothing. It was also a boon to users, who had access to vast amounts of free content.

Not all successful network platforms are permissionless and decentralized like the Web. Facebook owns and controls its centralized user network, but allows anyone to post on it, as long as they follow certain rules. You can be kicked off the platform, but content is not vetted beforehand. The iPhone App Store is both centralized and tightly controlled. Apps must be registered and approved before they are allowed into Apple’s App Store. The Android app store is far more open. But in either case—iPhone or Android—the underlying open and decentralized network of cell phone users is what first brought one side of the market to scale. With hundreds of millions of smartphone users and a clear economic opportunity for paid apps, there was plenty of incentive for app developers to join the marketplace.

Sometimes, once the network itself is at scale, a particular node on the network takes off and spawns a new network of its own. In 2007, Craig Newmark recalled the process by which Craigslist grew from a simple listing of arts and technology events in San Francisco to the world’s largest online classified network: “We built something, we get feedback, we try to figure out what makes sense out of the suggestions, and then we do something about it and then we listen some more.” That is a great description of how Internet software is typically built today, with what is now called a “build-measure-learn” cycle, in which the users of a minimally useful service teach its creators what they want from them. But even that is not really the secret to Craigslist’s success.

Classified advertising in newspapers was expensive and most Craigslist ads were free. If Craigslist hadn’t been a labor of love, Craig’s service to his community, it might not have come out as the winner. Would-be competitors, being venture funded, had a fatal flaw: They needed to charge money to pay back their investors. So they had fewer ads, and because they had fewer ads, they had fewer visitors. Despite being bare-bones, with a minimalist design, and having only nineteen employees, Craigslist was at one time the seventh-most-trafficked site on the web. (Today it is still No. 49.)

Later startups turned growth into a religion, seeking revenue only after massive user scale has been achieved. This is an incomplete map, which leads companies to get lots of users and then have to sell out to someone else. Networks often turn out to be two-sided marketplaces, in which one party pays for access to the other, trading money for attention. If you are unable to develop the matching side of the market, in the form of a network of advertisers, you are in trouble. This is why, for example, YouTube was sold to Google despite beating Google’s own video product in attracting viewers, and why Instagram and WhatsApp were sold to Facebook. It is why Twitter is still struggling. Ultimately, network businesses need to develop both sides of the market.

Uber, Lyft, and Airbnb didn’t have the luxury of user growth without revenue. Unlike advertising-based startups that could sell out to an existing giant in a well-developed industry segment, they have had to build both sides of a new marketplace. Uber and Lyft started out with organic growth, but later accelerated it by deploying huge amounts of capital to acquire new drivers and new customers.

Once a marketplace reaches critical mass, it tends to become self-sustaining, at least as long as the marketplace provider remembers that its primary job is to provide value for marketplace participants, not just for itself. Once marketplaces achieve scale, they often forget this essential point, and this is where decline begins to set in. I’d first noticed this with Microsoft’s abuse of its monopoly position in the personal computer industry. In the early years, there was a thriving ecosystem of application vendors built on top of Microsoft Windows; by the time Microsoft reached its zenith, it had taken over many of the most lucrative application categories, using its platform dominance to drive the former leaders out of business. Entrepreneurs naturally went elsewhere, finding opportunity in the green fields of the as-yet noncommercial Internet.

I’ve watched this same dynamic unfolding on the web. Google began its life as a kind of switchboard, solely directing people to content produced by others. But over time, more of the most frequently sought information is offered directly via Google. There’s a fine balance here. Google is trying to serve its users; embedding information directly into search results may be the right answer. But marketplace providers must tread carefully, because ultimately, the health of the entire ecosystem must be their concern.

A robust ecosystem is good not just for the participants but also for the marketplace platform owner. Internet entrepreneur and investor John Borthwick made a prescient comment to me when Twitter ended access to its data “firehose” for many of its third-party app providers in 2012. “It’s a big mistake,” he said, “for Twitter to shut down its ecosystem before someone in it invents their real business model.”

Amazon needs to be especially responsible because of its dominance in so many e-commerce markets. More than 63 million Americans (roughly half of all households) are now enrolled in Amazon Prime, the company’s free shipping service. Amazon has more than 200 million active credit card accounts; 55% of online shoppers now begin their search at Amazon, and 46% of all online shopping happens on the platform.

Yet Amazon too often competes with its own marketplace participants, creating private-label versions of bestselling products from its vendors, and using its control of the platform to remove the “Buy” button from the products of vendors who don’t go along with its demands. This is its privilege, just as it is the privilege of any store to stock or fail to stock any product. And Amazon is far from the first large retailer to create its own private-label products. But once a company reaches monopoly status, it is no longer a marketplace participant. It is the market. As Olivia LaVecchia and Stacy Mitchell write in their report “Amazon’s Stranglehold”: “In effect, Amazon is turning an open, public marketplace into a privately controlled one.”

Over time, as networks reach monopoly or near-monopoly status, they must wrestle with the issue of how to create more value than they capture—how much value to take out of the ecosystem, versus how much they must leave for other players in order for the marketplace to continue to thrive.

Google and Amazon are both fiercely committed to creating value for one side of the marketplace—users—and justify their actions to themselves on that basis. But as they replace more and more of the supplier side of the network with their own services, they risk weakening the marketplace as a whole. After all, someone else invented and invested in those products or services that they are copying. This is why antitrust law can’t use lower costs for consumers as its primary benchmark while ignoring the overall level of competition in the market. Lower costs are only one outcome of competition. Innovation withers when only one party can afford to innovate, or when there’s only one place to bring new products to market. The mental map used by regulators shapes their decisions, and thus the future.

There is also systemic risk to the economy when a pervasive marketplace begins to compete with its participants. In 2008, not long before the financial crash, I organized a conference called Money:Tech to explore what we could learn about the future of the Internet from the larger and older networked economy of finance. What I learned alarmed me.

During my research for the event in 2007, Bill Janeway, the former vice chairman of private equity firm Warburg Pincus and the author of Doing Capitalism in the Innovation Economy, who began his career on Wall Street, pointed out to me that Wall Street firms had moved from being brokers to being active players who “began to trade against their clients for their own account, such that now, the direct investment activities of a firm like Goldman Sachs dwarf their activities on behalf of outside customers.” The events that came to a head later that year told us just how far Wall Street firms had gone in trading against their clients, and even more alarmingly, that their trading involved the creation of complex instruments that far outstripped their creators’ ability to understand or control them. Our economy and our politics have not yet recovered from the damage.

CENTRALIZATION VS. DECENTRALIZATION

The tension between centralized and decentralized networks, and between closed and open platforms, first became clear to me when I was exploring the difference between the personal computer industry as dominated by Microsoft in the 1980s and 1990s and the emerging world of open source software and the Internet. At the heart of these two worlds were two competing architectures, two competing platforms. One of them, like Tolkien’s “one ring to rule them all,” was a tool of control. The other had what I call “an architecture of participation,” open and inclusive.

I was deeply influenced by the design of the Unix operating system, the system on which I’d cut my teeth early in my career and which had sparked in me an enduring love of computing. Instead of a tightly integrated operating system providing every possible feature in one big package, Unix had a small kernel (the core operating system code) surrounded by a large set of single-purpose tools that all followed the same rules and could be creatively recombined to perform complex functions. Perhaps because AT&T Bell Labs, the creator of Unix, was a communications company, the rules for interoperability between the programs were well established.

As described in the book The Unix Programming Environment, by Brian Kernighan and Rob Pike, two of the computer scientists who were key members of the early community that had built Unix: “Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can’t be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves.” The Internet also had a communications-oriented architecture, in which “small pieces loosely joined” (to use David Weinberger’s wonderful phrase) cooperate to become something much bigger.
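
That idea is easiest to feel in code. Here is a rough Python stand-in for a shell pipeline such as `grep error log.txt | sort | uniq -c`: three small, single-purpose functions whose power comes from sharing one interface (streams of lines), so that any one can feed any other. The functions and data are illustrative.

```python
from collections import Counter

def match(lines, pattern):    # like grep: keep only matching lines
    return (line for line in lines if pattern in line)

def sort_lines(lines):        # like sort: order the stream
    return iter(sorted(lines))

def count_unique(lines):      # like uniq -c: tally duplicates
    return Counter(lines).items()

log = ["error: disk full", "ok: saved", "error: disk full", "error: timeout"]
for line, n in count_unique(sort_lines(match(log, "error"))):
    print(n, line)
```

The power comes from the uniform interface—here, iterables of lines; in Unix, streams of text—which is exactly the “relationships among programs” Kernighan and Pike describe.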

In one of the early classics of systems engineering, Systemantics, John Gall wrote, “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true. A complex system designed from scratch never works and cannot be made to work. You have to start over beginning with a working simple system.”

Simple, decentralized systems work better at generating new possibilities than centralized, complex systems because they are able to evolve more quickly. Each decentralized component within the overall framework of simple rules is able to seek out its own fitness function. Those components that work better reproduce and spread; those that don’t die off.

“Fitness function” is a term from genetic programming, an artificial intelligence technique that tries to model the development of computer programs on evolutionary biology. An algorithm is designed to produce small programs optimized to perform a specific task. In a series of iterations, those programs that perform poorly are killed off, while new variations are “bred” from those that are most successful.
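
A minimal sketch of that loop, using a genetic algorithm over bitstrings as a stand-in for the evolved programs of true genetic programming (the population size and mutation rate are arbitrary choices):

```python
import random

TARGET = [1] * 20                # the "task": evolve an all-ones bitstring

def fitness(genome):             # the fitness function: count matching bits
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):   # flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]  # poor performers are "killed off"
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]  # variations are "bred"

print(generation, fitness(population[0]))
```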

Writing in 1975, John Gall wasn’t thinking in terms of fitness functions. Genetic programming wasn’t introduced until 1988. But add the idea of fitness functions and a fitness landscape to his insight that simple systems are able to evolve in ways that surprise their creators and you have a powerful tool for seeing and understanding how computer networks and marketplaces work.

The Internet itself proves the point.

In the 1960s, Paul Baran, Donald Davies, Leonard Kleinrock, and others had developed packet switching, a theoretical alternative to the circuit-switched networks that had characterized the telephone and telegraph. Rather than creating a physical circuit between the two endpoints for the duration of a communication, messages are broken up into small, standardized chunks, shipped by whatever route is most convenient for each packet, and reassembled at their destination.
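
The core idea fits in a few lines of Python: number the chunks, let them arrive in any order, and reassemble by sequence number at the destination (a toy model, not a real protocol):

```python
import random

def packetize(message, size=8):
    # Break the message into small, numbered chunks.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

packets = packetize("Mr. Watson, come here. I want to see you.")
random.shuffle(packets)   # each packet may travel by a different route
print("".join(chunk for seq, chunk in sorted(packets)))
```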

Networks such as NPL in the United Kingdom and ARPANET in the United States were the first packet-switched networks, but by the early 1970s there were dozens, if not hundreds, of incompatible networks, and it had become obvious that some method of interoperability was needed. (To be fair, J. C. R. Licklider, the legendary DARPA program manager, had called for interoperable networks a full decade earlier.)

In 1973, Bob Kahn and Vint Cerf realized that the right way to solve the interoperability problem was to take the intelligence out of the network and to make the network endpoints responsible for reassembling the packets and requesting retransmission if any packets had been lost. Seemingly paradoxically, they had figured out that the best way to make the network more reliable was to have it do less. Over the next five years, with the help of many others, they developed two protocols, TCP (Transmission Control Protocol) and IP (Internet Protocol), generally spoken of together as TCP/IP, which effectively bridged the differences between underlying networks. It wasn’t until 1983, though, that TCP/IP became the official protocol of the ARPANET, and from there became the basis of what was sometimes called “the network of networks,” and eventually the Internet we know today.
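
A sketch of that end-to-end principle: the network stays dumb, while the receiving endpoint notices gaps in the sequence numbers and asks for just those packets again (this illustrates the principle, not TCP’s actual mechanics):

```python
# Six numbered packets; suppose packet 2 was dropped along the way.
sent = [(0, "Mr. Wats"), (1, "on, come"), (2, " here. I"),
        (3, " want to"), (4, " see you"), (5, ".")]
received = {seq: chunk for seq, chunk in sent if seq != 2}

# The endpoint, not the network, detects the loss...
missing = [seq for seq in range(len(sent)) if seq not in received]
print("requesting retransmission of packets:", missing)

# ...and once the retransmitted packet arrives, reassembly completes.
received.update((seq, chunk) for seq, chunk in sent if seq in missing)
print("".join(received[seq] for seq in sorted(received)))
```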

Part of the genius of TCP/IP was how little it did. Rather than making the protocols more complex to handle additional needs, the Internet community simply defined additional protocols that sat on top of TCP/IP. The design was remarkably ad hoc. Any group that wanted to propose a new protocol or data format published a “Request for Comment” (RFC) describing the proposed technology. It would be examined and debated by a community of peers who, starting in January 1986, gathered under the name of the Internet Engineering Task Force (IETF). There were no formal membership requirements. In 1992, MIT computer science professor Dave Clark described the IETF’s guiding philosophy: “We reject: kings, presidents, and voting. We believe in: rough consensus and running code.”

And there was this naive, glorious statement by Jon Postel in RFC 761: “TCP implementation should follow a general principle of robustness. Be conservative in what you do. Be liberal in what you accept from others.” It sounds like something out of the Bible, the Golden Rule as applied to computers.
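
In code, Postel’s principle looks something like this hypothetical header parser (my own example, not drawn from any RFC): liberal about the sloppy variations it reads, conservative about the single canonical form it writes.

```python
def parse_header(line):
    # Liberal in what we accept: tolerate odd casing, stray whitespace,
    # and either ":" or "=" as the separator.
    for sep in (":", "="):
        if sep in line:
            name, _, value = line.partition(sep)
            return name.strip().lower(), value.strip()
    raise ValueError(f"unparseable header: {line!r}")

def emit_header(name, value):
    # Conservative in what we send: one canonical form only.
    return f"{name.lower()}: {value}"

for raw in ["Content-Type : text/html", "content-type=text/html "]:
    print(emit_header(*parse_header(raw)))   # both emit the same line
```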

In the 1980s, a separate, more traditionally constituted international standards committee had also gotten together to define the future of computer networking. The resulting Open Systems Interconnect (OSI) model was comprehensive and complete, and one of the industry pundits of the day wrote, in 1986: “Over the long haul, most vendors are going to migrate from TCP/IP to support Layer 4, the transport layer of the OSI model. For the short term, however, TCP/IP provides organizations with enough functionality to protect their existing equipment investment and over the long term, TCP/IP promises to allow for easy migration to OSI.”

It didn’t work out that way. It was the profoundly simple protocols of the Internet that grew richer and more complex, while the OSI protocol stack was relegated to the status of an academic reference model used to describe network architecture. The architecture of the World Wide Web, which echoed the radical design of the underlying Internet protocols, became the basis of the next generation of computer applications, and brought what was once an obscure networking technology to billions of people.

There’s a key lesson here for networks that wish to reach maximum scale. Open source software projects like Linux and open systems like the Internet and the World Wide Web work not because there’s a central board of approval giving permission for each new addition but because the original designers of the system laid down clear rules for cooperation and interoperability.

The coordination is all in the design of the system itself.

This principle is the key to understanding not only today’s Internet technology giants, but also what’s wrong with today’s WTF? economy.
