1

SEEING THE FUTURE IN THE PRESENT

In the media, I’m often pegged as a futurist. I don’t think of myself that way. I think of myself as a mapmaker. I draw a map of the present that makes it easier to see the possibilities of the future. Maps aren’t just representations of physical locations and routes. They are any system that helps us see where we are and where we are trying to go. One of my favorite quotes is from Edwin Schlossberg: “The skill of writing is to create a context in which other people can think.” This book is a map.

We use maps—simplified abstractions of an underlying reality, which they represent—not just in trying to get from one place to another but in every aspect of our lives. When we walk through our darkened home without the need to turn on the light, that is because we have internalized a mental map of the space, the layout of the rooms, the location of every chair and table. Similarly, when an entrepreneur or venture capitalist goes to work each day, he or she has a mental map of the technology and business landscape. We sort the world into categories: friend or acquaintance, ally or competitor, important or unimportant, urgent or trivial, future or past. For each category, we have a mental map.

But as we’re reminded by the sad stories of people who religiously follow their GPS off a no-longer-existent bridge, maps can be wrong. In business and in technology, we often fail to see clearly what is ahead because we are navigating using old maps and sometimes even bad maps—maps that leave out critical details about our environment or perhaps even actively misrepresent it.

Most often, in fast-moving fields like science and technology, maps are wrong simply because so much is unknown. Each entrepreneur, each inventor, is also an explorer, trying to make sense of what’s possible, what works and what doesn’t, and how to move forward.

Think of the entrepreneurs working to develop the US transcontinental railroad in the mid-nineteenth century. The idea was first proposed in 1832, but it wasn’t even clear that the project was feasible until the 1850s, when the US Congress provided the funding for an extensive series of surveys of the American West, a precursor to any actual construction. Three years of exploration from 1853 to 1855 resulted in the Pacific Railroad Surveys, a twelve-volume collection of data on 400,000 square miles of the American West.

But all that data did not make the path forward entirely clear. There was fierce debate about the best route, debate that was not just about the geophysical merits of northern versus southern routes but also about the contested extension of slavery. Even when the intended route was decided on and construction began in 1863, there were unexpected problems—a grade steeper than previously reported that was too difficult for a locomotive, weather conditions that made certain routes impassable during the winter. You couldn’t just draw lines on the map and expect everything to work perfectly. The map had to be refined and redrawn with more and more layers of essential data added until it was clear enough to act on. Explorers and surveyors went down many false paths before deciding on the final route.

Creating the right map is the first challenge we face in making sense of today’s WTF? technologies. Before we can understand how to deal with AI, on-demand applications, and the disappearance of middle-class jobs, and how these things can come together into a future we want to live in, we have to make sure we aren’t blinded by old ideas. We have to see patterns that cross old boundaries.

The map we follow into the future is like a picture puzzle with many of the pieces missing. You can see the rough outline of one pattern over here, and another there, but there are great gaps and you can’t quite make the connections. And then one day someone pours out another set of pieces on the table, and suddenly the pattern pops into focus. The difference between a map of an unknown territory and a picture puzzle is that no one knows the full picture in advance. It doesn’t exist until we see it—it’s a puzzle whose pattern we make up together as we go, invented as much as it is discovered.

Finding our way into the future is a collaborative act, with each explorer filling in critical pieces that allow others to go forward.

LISTENING FOR THE RHYMES

Mark Twain is reputed to have said, “History doesn’t repeat itself, but it often rhymes.” Study history and notice its patterns. This is the first lesson I learned in how to think about the future.

The story of how the term open source software came to be coined, refined, and adopted in early 1998—what it helped us to understand about the changing nature of software, how that new understanding changed the course of the industry, and what it predicted about the world to come—shows how the mental maps we use limit our thinking, and how revising the map can transform the choices we make.

Before I delve into what is now ancient history, I need you to roll back your mind to 1998.

Software was distributed in shrink-wrapped boxes, with new releases coming at best annually, often every two or three years. Only 42% of US households had a personal computer, versus the 80% who own a smartphone today. Only 20% of the US population had a mobile phone of any kind. The Internet was exciting investors—but it was still tiny, with only 147 million users worldwide, versus 3.4 billion today. More than half of all US Internet users had access through AOL. Amazon and eBay had been launched three years earlier, but Google was only just founded in September of that year.

Microsoft had made Bill Gates, its founder and CEO, the richest man in the world. It was the defining company of the technology industry, with a near-monopoly position in personal computer software that it had leveraged to destroy competitor after competitor. The US Justice Department filed an antitrust suit against the company in May of that year, just as it had done nearly thirty years earlier against IBM.

In contrast to the proprietary software that made Microsoft so successful, open source software is distributed under a license that allows anyone to freely study, modify, and build on it. Examples of open source software include the Linux and Android operating systems; web browsers like Chrome and Firefox; popular programming languages like Python, PHP, and JavaScript; modern big data tools like Hadoop and Spark; and cutting-edge artificial intelligence toolkits like Google’s TensorFlow, Facebook’s Torch, or Microsoft’s CNTK.

In the early days of computers, most software was open source, though not by that name. Some basic operating software came with a computer, but much of the code that actually made a computer useful was custom software written to solve specific problems. The software written by scientists and researchers in particular was often shared. During the late 1970s and 1980s, though, companies had realized that controlling access to software gave them commercial advantage and had begun to close off access using restrictive licenses. In 1985, Richard Stallman, a programmer at the Massachusetts Institute of Technology, published The GNU Manifesto, laying out the principles of what he called “free software”—not free as in price, but free as in freedom: the freedom to study, to redistribute, and to modify software without permission.

Stallman’s ambitious goal was to build a completely free version of AT&T’s Unix operating system, originally developed at Bell Labs, the research arm of AT&T.

At the time Unix was first developed and licensed, in the 1970s, AT&T was a legal monopoly with enormous profits from regulated telephone services. As a result, AT&T was not allowed to compete in the computer industry, then dominated by IBM, and in accord with its 1956 consent decree with the Justice Department had licensed Unix to computer science research groups on generous terms. Computer programmers at universities and companies all over the world had responded by contributing key elements to the operating system.

But after the decisive consent decree of 1982, in which AT&T agreed to spin off its local telephone operations into seven regional companies (“the Baby Bells”) in exchange for being allowed to compete in the computer market, AT&T tried to make Unix proprietary. They sued the University of California, Berkeley, which had built an alternate version of Unix (the Berkeley Software Distribution, or BSD), and effectively tried to shut down the collaborative barn raising that had helped to create the operating system in the first place.

While Berkeley Unix was stalled by AT&T’s legal attacks, Stallman’s GNU Project (named for the meaningless recursive acronym “GNU’s Not Unix”) had duplicated all of the key elements of Unix except the kernel, the central code that acts as a kind of traffic cop for all the other software. That kernel was supplied by a Finnish computer science student named Linus Torvalds, who in 1991 released a minimalist Unix-like kernel he had begun as a hobby; it was later ported to many different computer architectures and became the subject of his master’s thesis. He called this operating system Linux.

Over the next few years, there was a flurry of commercial activity as entrepreneurs seized on the possibilities of a completely free operating system combining Torvalds’s kernel with the Free Software Foundation’s re-creation of the rest of the Unix operating system. The target was no longer AT&T, but rather Microsoft.

In the early days of the PC industry, IBM and a growing number of personal computer “clone” vendors like Dell and Gateway provided the hardware, Microsoft provided the operating system, and a host of independent software companies provided the “killer apps”—word processing, spreadsheets, databases, and graphics programs—that drove adoption of the new platform. Microsoft’s DOS (Disk Operating System) was a key part of the ecosystem, but it was far from in control. That changed with the introduction of Microsoft Windows. Its extensive Application Programming Interfaces (APIs) made application development much easier but locked developers into Microsoft’s platform. Competing operating systems for the PC like IBM’s OS/2 were unable to break the stranglehold. And soon Microsoft used its dominance of the operating system to privilege its own applications—Microsoft Word, Excel, PowerPoint, Access, and, later, Internet Explorer, its web browser (since superseded by Microsoft Edge)—by making bundling deals with large buyers.

The independent software industry for the personal computer was slowly dying, as Microsoft took over one application category after another.

This is the rhyming pattern that I noticed: The personal computer industry had begun with an explosion of innovation that broke IBM’s monopoly on the first generation of computing, but had ended in another “winner takes all” monopoly. Look for repeating patterns and ask yourself what the next iteration might be.

Now everyone was asking whether a desktop version of Linux could change the game. Not only startups but also big companies like IBM, trying to claw their way back to the top of the heap, placed huge bets that they could.

But there was far more to the Linux story than just competing with Microsoft. It was rewriting the rules of the software industry in ways that no one expected. It had become the platform on which many of the world’s great websites—at the time, most notably Amazon and Google—were being built. But it was also reshaping the very way that software was being written.

In May 1997, at the Linux Kongress in Würzburg, Germany, hacker Eric Raymond delivered a paper called “The Cathedral and the Bazaar” that electrified the Linux community. It laid out a theory of software development drawn from reflections on Linux and on Eric’s own experiences with what later came to be called open source software development. Eric wrote:

Who would have thought even five years ago that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet? . . . [T]he Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who’d take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.

Eric laid out a series of principles that have, over the past decades, become part of the software development gospel: that software should be released early and often, in an unfinished state rather than waiting to be perfected; that users should be treated as “co-developers”; and that “given enough eyeballs, all bugs are shallow.”

Today, whether programmers develop open source software or proprietary software, they use tools and approaches that were pioneered by the open source community. But more important, anyone who uses today’s Internet software has experienced these principles at work. When you go to a site like Amazon, Facebook, or Google, you are a participant in the development process in a way that was unknown in the PC era. You are not a “co-developer” in the way that Eric Raymond imagined—you are not another hacker contributing feature suggestions and code. But you are a “beta tester”—someone who tries out continually evolving, unfinished software and gives feedback—at a scale never before imagined. Internet software developers constantly update their applications, testing new features on millions of users, measuring their impact, and learning as they go.

Eric saw that something was changing in the way software was being developed, but in 1997, when he first delivered “The Cathedral and the Bazaar,” it wasn’t yet clear that the principles he articulated would spread far beyond free software, beyond software development itself, shaping content sites like Wikipedia and eventually enabling a revolution in which consumers would become co-creators of services like on-demand transportation (Uber and Lyft) and lodging (Airbnb).

I was invited to give a talk at the same conference in Würzburg. My talk, titled “Hardware, Software, and Infoware,” was very different. I was fascinated not just with Linux, but with Amazon. Amazon had been built on top of various kinds of free software, including Linux, but it seemed to me to be fundamentally different in character from the kinds of software we’d seen in previous eras of computing.

Today it’s obvious to everyone that websites are applications and that the web has become a platform, but in 1997 most people thought of the web browser as the application. If they knew a little bit more about the architecture of the web, they might think of the web server and associated code and data as the application. The content was something managed by the browser, in the same way that Microsoft Word manages a document or that Excel lets you create a spreadsheet. By contrast, I was convinced that the content itself was an essential part of the application, and that the dynamic nature of that content was leading to an entirely new architectural design pattern for a next stage beyond software, which at the time I called “infoware.”

Where Eric was focused on the success of the Linux operating system, and saw it as an alternative to Microsoft Windows, I was particularly fascinated by the success of the Perl programming language in enabling this new paradigm on the web.

Perl was originally created by Larry Wall in 1987 and distributed for free over early computer networks. I had published Larry’s book, Programming Perl, in 1991, and was preparing to launch the Perl Conference in the summer of 1997. I had been inspired to start the Perl Conference by the chance conjunction of comments by two friends. Early in 1997, Carla Bayha, the computer book buyer at the Borders bookstore chain, had told me that the second edition of Programming Perl, published in 1996, was one of the top 100 books in any category at Borders that year. It struck me as curious that despite this fact, there was virtually nothing written about Perl in any of the computer trade papers. Because there was no company behind Perl, it was virtually invisible to the pundits who followed the industry.

And then Andrew Schulman, the author of a book called Unauthorized Windows 95, told me something I found equally curious. At that time, Microsoft was airing a series of television commercials about the way that their new technology called ActiveX would “activate the Internet.” The software demos in these ads were actually mostly done with Perl, according to Andrew. It was clear to me that Perl, not ActiveX, was actually at the heart of the way dynamic web content was being delivered.

I was outraged. I decided that I needed to make some noise about Perl. And so, early in 1997, I had announced my first conference as a publicity stunt, to get people to pay attention. And that’s also what I had come to the Linux Kongress in Würzburg to talk about.

In the essay I later based on that talk, I wrote: “Perl has been called ‘the duct tape of the Internet,’ and like duct tape, it is used in all kinds of unexpected ways. Like a movie set held together with duct tape, a website is often put up and torn down in a day, and needs lightweight tools and quick but effective solutions.”

I saw Perl’s duct tape approach as an essential enabler of the “infoware” paradigm, in which control over computers was through an information interface, not a software interface per se. A web link, as I described it at the time, was a way to embed commands to the computer into dynamic documents written in ordinary human language, rather than, say, a drop-down software menu, which embedded little bits of human language into a traditional software program.
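To make that concrete, here is a minimal sketch of the duct tape in action (written in Python rather than Perl, with a URL, script name, and parameter that are purely hypothetical): an ordinary sentence in a web page carries a link, and the link carries the command to a small CGI script that generates the page it points to.

```python
#!/usr/bin/env python3
# A minimal sketch of the "infoware" pattern. The script name, URL, and
# parameter below are hypothetical, chosen only for illustration.
#
# In an ordinary document, the "menu" is just prose with a link in it:
#   <p>Read <a href="/cgi-bin/reviews?isbn=0596004927">what readers say</a>
#   about this book.</p>
#
# The CGI script below answers that link by generating the page on the fly.

import os
from urllib.parse import parse_qs

def main():
    # CGI passes everything after the "?" in the URL via QUERY_STRING.
    params = parse_qs(os.environ.get("QUERY_STRING", ""))
    isbn = params.get("isbn", ["unknown"])[0]

    # The program's output is the document the reader sees: the application
    # and its content are inseparable.
    print("Content-Type: text/html")
    print()
    print(f"<html><body><h1>Reader reviews for ISBN {isbn}</h1></body></html>")

if __name__ == "__main__":
    main()
```

The inversion is the point: the command lives inside a human-readable document, rather than little bits of human language living inside a traditional program.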

The next part of the talk focused on a historical analogy that was to obsess me for the next few years. I was fascinated by the parallels between what open source software and the open protocols of the Internet were doing to Microsoft and the way that Microsoft and an independent software industry had previously displaced IBM.

When I had first entered the industry in 1978, it was just beginning to shake off IBM’s monopoly, a situation not dissimilar to the one Microsoft found itself in twenty years later. IBM’s control over the industry was based on integrated computer systems in which software and hardware were tightly coupled. Creating a new type of computer meant inventing both new hardware and a new operating system to control it. What few independent software companies existed had to choose whether to be a satellite to a single hardware vendor or to “port” their software to multiple hardware architectures, much as phone developers today need to create separate versions for iPhone and Android. Except the problem was much worse. In the mid-1980s, I remember talking with one of the customers of my documentation consulting business, the author of a mainframe graphics library called DISSPLA (Display Integrated Software System and Plotting Language). He told me that he had to maintain more than 200 different versions of his software.

The IBM Personal Computer, released in August 1981, changed all that. In 1980, realizing that they were missing out on the new microcomputer market, IBM set up a skunkworks project in Boca Raton, Florida, to develop the new machine. They made a critical decision: to cut costs and accelerate development they would develop an open architecture using industry-standard parts—including software licensed from third parties.

The PC, as it was soon called, was an immediate hit. IBM’s projections had called for sales of 250,000 units in the first five years. They were rumored to have sold 40,000 on the first day; within two years, more than a million were in customers’ hands.

However, the executives at IBM failed to understand the full consequences of their decisions. At the time, software was a small player in the computer industry, a necessary but minor part of an integrated computer, often bundled rather than sold separately. So when it came time to provide an operating system for the new machine, IBM decided to license it from Microsoft, giving them the right to resell the software to the segment of the market that IBM did not control.

The size of that segment was about to explode. Because IBM had published the specifications for the machine, its success was followed by the development of dozens, then hundreds of PC-compatible clones. The barriers to entry to the market were so low that Michael Dell built his eponymous company while still a student at the University of Texas, assembling and selling computers from his dorm room. The IBM personal computer architecture became the standard, over time displacing not only other personal computer designs, but, over the next two decades, minicomputers and mainframes.

As cloned personal computers were built by hundreds of manufacturers large and small, however, IBM lost its leadership in the new market. Software became the new sun that the industry revolved around; Microsoft became the most important company in the computer industry.

Intel also forged a privileged role through bold decision making. In order to ensure that no one supplier became a choke point, IBM had required that every component in the PC’s open hardware architecture be available from at least two suppliers. Intel had gone along with this mandate, licensing their 8086 and 80286 chips to rival AMD, but in 1985, with the release of the 80386 processor, they made the bold decision to stand up to IBM, placing the bet that the clone market was now big enough that IBM’s wishes would be overridden by the market. Former Intel CTO Pat Gelsinger told me the story. “We took a vote in the five-person management committee. It was three to two against. But Andy [Grove, Intel’s CEO] was one of the two, so we did it anyway.”

That’s another lesson about the future. It doesn’t just happen. People make it happen. Individual decisions matter.

By 1998, the story had largely repeated itself. Microsoft had used its position as the sole provider of the operating system for the PC to establish a monopoly on desktop software. Software applications had become increasingly complex, with Microsoft putting up deliberate barriers to entry against competitors. It was no longer possible for a single programmer or small company to make an impact in the PC software market.

Open source software and the open protocols of the Internet were now challenging that dominance. The barriers to entry into the software market were crashing down. History may not repeat itself, but yes, it does rhyme.

Users could try a new product for free—and even more than that, they could build their own custom version of it, also for free. Source code was available for massive independent peer review, and if someone didn’t like a feature, they could add to it, subtract from it, or reimplement it. If they gave their fix back to the community, it could be adopted widely very quickly.

What’s more, because developers (at least initially) weren’t trying to compete on the business end, but instead focused simply on solving real problems, there was room for experimentation. As has often been said, open source software “lets you scratch your own itch.” Because of the distributed development paradigm, with new features being added by users, open source programs “evolve” as much as they are designed. And as I wrote in my 1998 paper, “Hardware, Software, and Infoware,” “Evolution breeds not a single winner, but diversity.”

That diversity was the reason that the seeds of the future were found in free software and the Internet rather than in the now-establishment technologies offered by Microsoft.

It is almost always the case that if you want to see the future, you have to look not at the technologies offered by the mainstream but at those offered by the innovators out at the fringes.

Most of the people who launched the personal computer software industry four decades ago weren’t entrepreneurs; they were kids to whom the idea of owning their own computer was absurdly exciting. Programming was like a drug—no, it was better than a drug, or joining a rock band, and it was certainly better than any job they could imagine. So too Linux, the open source operating system now used by 90 million people as a PC operating system, relied on by billions as the foundation on which most large Internet sites run, and embedded as the underlying code in every Android phone. The title of Linus Torvalds’s book about how he developed Linux? Just for Fun.

The World Wide Web got its start the same way. At first, no one took it seriously as a place to make money. It was all about the joy of sharing our work, the rush of clicking on a link and connecting with another computer half the world away, and constructing similar destinations for our peers. We were all enthusiasts. Some of us were also entrepreneurs.

To be sure, it is those entrepreneurs—people like Bill Gates, Steve Jobs, and Michael Dell in the personal computer era; Jeff Bezos, Larry Page, Sergey Brin, and Mark Zuckerberg in the web era—who saw that this world driven by a passion for discovery and sharing could become the cradle of a new economy. They found financial backers, shaped the toy into a tool, and built the businesses that turned a movement into an industry.

The lesson is clear: Treat curiosity and wonder as a guide to the future. That sense of wonder may just mean that those crazy enthusiasts are seeing something that you don’t . . . yet.

The enormous diversity of software that had grown up around free software was reflected in the bestselling books that drove my publishing business. Perl wasn’t alone. Many of the most successful technology books of the 1990s, books I published with names only a programmer could love—Programming Perl, Learning the vi Editor, sed & awk, DNS and BIND, Running Linux, Programming Python—were all about software written by individuals and distributed freely over the Internet. The web itself had been put into the public domain.

I realized that many of the authors of these programs didn’t actually know each other. The free software community that had coalesced around Linux didn’t mix much with the Internet crowd. Because of my position as a technology publisher, I traveled in both circles. So I resolved to bring them together. They needed to see themselves as part of the same story.

In April 1998, I organized an event that I originally called “the Freeware Summit” to bring together the creators of many of the most important free software programs.

The timing was perfect. In January, Marc Andreessen’s high-profile web company, Netscape, built to commercialize the web browser, had decided to provide the source code to its browser as a free software project using the name Mozilla. Under competitive pressure from Microsoft, which had built a browser of its own and had given it away for free (but without source code) in order to “cut off Netscape’s air supply,” Netscape had no choice but to go back to the web’s free software roots.

At the meeting, which was held at the Stanford Court Hotel (now the Garden Court) in Palo Alto, I brought together Linus Torvalds, Brian Behlendorf (one of the founders of the Apache web server project), Larry Wall, Guido van Rossum (the creator of the Python programming language), Jamie Zawinski (the chief developer of the Mozilla project), Eric Raymond, Michael Tiemann (the founder and CEO of Cygnus Solutions, a company that was commercializing free software programming tools), Paul Vixie (the author and maintainer of BIND [Berkeley Internet Name Daemon], the software behind the Internet Domain Name System), and Eric Allman (the author of Sendmail, the software that routed a majority of the Internet’s email).

At the meeting, one of the topics that came up was the name free software. Richard Stallman’s free software movement had created many enemies with its seemingly radical proposition that all software source code must be given away freely—because it was immoral to do otherwise. Even worse, many people had taken free software to mean that its developers were hostile to commercial use. Linus Torvalds remarked, “I didn’t realize that free had two meanings in English: ‘libre’ and ‘gratis.’”

Linus wasn’t the only one who had different notions about what free meant. In a separate meeting, Kirk McKusick, the head of the Berkeley Unix project, which had developed many of the key Unix features and utilities that had been incorporated into Linux, had told me: “Richard Stallman likes to say that copyright is evil, so we need this new construct called copyleft. Here at Berkeley we use copycentral—that is, we tell people to go down to Copy Central [the local photocopy shop] and copy it.” The Berkeley Unix project, which had provided my own introduction to the operating system in 1983, was part of the long academic tradition of knowledge sharing. Source code was given away so people could build on it, and that included commercial use. The only requirement was attribution.

Bob Scheifler, director of MIT’s X Window System project, followed the same philosophy. The X Window System had been started in 1984, and by the time I encountered it in 1987, it was becoming the standard window system for Unix (and, later, Linux), adopted and adapted by virtually every vendor. My company developed a series of programming manuals for X that used the MIT specifications as a base, rewriting and expanding them, and then licensed them to companies shipping new Unix and X-based systems. Bob encouraged me. “That’s exactly what we want companies to do,” he said. “We’re laying a foundation, and we want everyone to build on it.”

Larry Wall, creator of Perl, was another of my mentors in how to think about free software. When I asked him why he had made Perl free software, he explained that he had gotten so much value from the work of others that he felt an obligation to give something back. Larry also quoted to me a variation of Stewart Brand’s classic observation, saying, “Information doesn’t want to be free. Information wants to be valuable.” Like many other free software authors, Larry had discovered that one way to make his information (that is, his software) more valuable was to give it away. He was able to increase its utility not only for himself (because others who took it up made changes and enhancements that he could use), but for everyone else who uses it, because as software becomes more ubiquitous it can be taken for granted as a foundation for further work.

Nonetheless, it was also clear to me that proprietary software creators, including those, such as Microsoft, who were regarded by most free software advocates as immoral, had found that they could make their information valuable by restricting access to it. Microsoft had created enormous value for itself and its shareholders, but it was also a key enabler of ubiquitous personal computing, a necessary precursor to the global computing networks of today. That was value for all of society.

I saw that Larry Wall and Bill Gates had a great deal in common. As the creators (albeit with a host of co-contributors) of a body of intellectual work, they had made strategic decisions about how best to maximize its value. History has proven that each of their strategies can work. The question for me became one of how to maximize value creation for society, rather than simply value capture by an individual or a company. What were the conditions under which giving software away was a better strategy than keeping it proprietary?

This question has recurred, ever more broadly, throughout my career: How can a business create more value for society than it captures for itself?

WHAT’S IN A NAME?

As we wrestled with the name free software, various alternatives were proposed. Michael Tiemann said that Cygnus had begun using the term sourceware. But Eric Raymond argued for open source, a new term that had been coined only six weeks earlier by Christine Peterson of the Foresight Institute, a nanotechnology think tank, at a meeting convened by Larry Augustin, the CEO of a Linux company called VA Linux Systems.

Eric and another software developer and free software activist, Bruce Perens, had been so excited about Christine’s new term that they had formed a nonprofit organization called the Open Source Initiative to reconcile the various free software licenses that were being used into a kind of metalicense. But as yet, the term was largely unknown.

Not everyone liked it. “Sounds too much like ‘open sores,’” one participant commented. But we all agreed that there were serious problems with the name free software and that wide adoption of a new name could be an important step forward. So we put it to a vote. Open source won handily over sourceware and we all agreed to use the new term going forward.

It was an important moment, because at the end of the day I’d arranged a press conference with reporters from the New York Times, Wall Street Journal, San Jose Mercury News (at the time the daily paper of Silicon Valley), Fortune, Forbes, and many other national publications. Because I’d earlier built relationships with many of these reporters during my time in the early 1990s promoting the commercialization of the Internet, they showed up even though they didn’t know what the news was going to be.

I lined up the participants in front of the assembled reporters and told a story that none of them had heard before. It went something like this:

When you hear the term free software, you think that it’s a rebel movement that is hostile to commercial software. I’m here to tell you that every big company—including your own—already uses free software every day. If your company has an Internet domain name—say nytimes.com or wsj.com or fortune.com—that name only works because of BIND, the software written by this man—Paul Vixie. The web server you use is probably Apache, created by a team co-founded by Brian Behlendorf, sitting here. That website also makes heavy use of programming languages like Perl and Python, written by Larry Wall, here, and Guido van Rossum, here. If you send email, it is routed to its destination by Sendmail, written by Eric Allman. And that’s before we even get to Linux, which you’ve all heard about, which was written by Linus Torvalds here.

And here’s the amazing thing: All of these guys have dominant market share in important categories of Internet software without any venture capitalist giving them money, without any company behind them, just on the strength of building great software and giving it away to anyone who wants to use it or to help them build it.

Because free software has some negative connotations as a name, we’ve all gotten together here today and decided to adopt a new name: open source software.

Over the next couple of weeks, I gave dozens of interviews in which I explained that all of the most critical pieces of the Internet infrastructure were “open source.” I still remember the disbelief and surprise in many of the initial interviews. After a few weeks, though, it was accepted wisdom, the new map. No one even remembers that the event was originally called the Freeware Summit. It was thereafter referred to as “The Open Source Summit.”

This is a key lesson in how to see the future: bring people together who are already living in it. Science fiction writer William Gibson famously observed, “The future is already here. It’s just not evenly distributed yet.” The early developers of Linux and the Internet were already living in a future that was on its way to the wider world. Bringing them together was the first step in redrawing the map.

ARE YOU LOOKING AT THE MAP OR THE ROAD?

There’s another lesson here too: Train yourself to recognize when you are looking at the map instead of at the road. Constantly compare the two and pay special attention to all the things you see that are missing from the map. That’s how I was able to notice that the narrative about free software put forward by Richard Stallman and Eric Raymond had ignored the most successful free software of all, the free software that underlies the Internet.

Your map should be an aid to seeing, not a replacement for it. If you know a turn is coming up, you can be on the lookout for it. If it doesn’t come when you expect, perhaps you are on the wrong road.

My own training in how to keep my eyes on the road began in 1969, when I was only fifteen years old. My brother Sean, who was seventeen, met a man named George Simon, who was to have a shaping role in my intellectual life. George was a troop leader in the Explorer Scouts, the teenage level of the Boy Scouts—no more than that and yet so much more. The focus of the troop, which Sean joined, was on nonverbal communication.

Later, George went on to teach workshops at the Esalen Institute, which was to the human potential movement of the 1970s what the Googleplex or Apple’s Infinite Loop is to the Silicon Valley of today. I taught at Esalen with George when I was barely out of high school, and his ideas have deeply influenced my thinking ever since.

George had this seemingly crazy idea that language itself was a kind of map. Language shapes what we are able to see and how we see it. George had studied the work of Alfred Korzybski, whose 1933 book, Science and Sanity, had come back into vogue in the 1960s, largely through the work of Korzybski’s student S. I. Hayakawa.

Korzybski believed that reality itself is fundamentally unknowable, since what is is always mediated by our nervous system. A dog perceives a very different world than a human being, and even individual humans have great variability in their experience of the world. But at least as importantly, our experience is shaped by the words we use.

I had a vivid experience of this years later when I moved to Sebastopol, a small town in Northern California, where I kept horses. Before that, I’d look out at a meadow and I’d see something that I called “grass.” But over time, I learned to distinguish between oats, rye, orchard grass, and alfalfa, as well as other types of forage such as vetch. Now, when I look at a meadow, I see them all, as well as other types whose names I don’t know. Having a language for grass helps me to see more deeply.

Language can also lead us astray. Korzybski was fond of showing people how words shaped their experience of the world. In one famous anecdote, he shared a tin of biscuits wrapped in brown paper with his class. As everyone munched on the biscuits, some taking seconds, he tore off the paper, showing that he’d passed out dog biscuits. Several students ran out of the class to throw up. Korzybski’s lesson: “I have just demonstrated that people don’t just eat food, but also words, and that the taste of the former is often outdone by the taste of the latter.”

Korzybski argued that many psychological and social aberrations can be seen as problems with language. Consider racism: It relies on terms that deny the fundamental humanity of the people it describes. Korzybski urged everyone to become viscerally aware of the process of abstraction, by which reality is transformed into a series of statements about reality—maps that can guide us but can also lead us astray.

This insight seems particularly important in the face of the fake news that bedeviled the 2016 US presidential election. It wasn’t just the most outrageous examples, like the child slavery ring supposedly being run by the Clinton campaign out of a Washington, DC, pizza joint, but the systematic and increasingly algorithmic selection of news to fit and amplify people’s preconceived views. Whole sectors of the population are now led by vastly divergent maps. How are we to solve the world’s most pressing problems when we aren’t even trying to create maps that reflect the actual road ahead, but instead drive toward political or business goals?

After working with George for a few years, I got a near-instinctive sense of when I was wrapped in the coils of the words we use about reality and when I was paying attention to what I was actually experiencing, or even more, reaching beyond what I was experiencing now to the thing itself. When faced with the unknown, a certain cultivated receptivity, an opening to that unknown, leads to better maps than simply trying to overlay prior maps on that which is new.

It is precisely this training in how to look at the world directly, not simply to reshuffle the maps, that is at the heart of original work in science—and as I try to make the case in this book, in business and technology.

As recounted in his autobiography, Surely You’re Joking, Mr. Feynman, fabled physicist Richard Feynman was appalled by how many students in a class he visited during his sabbatical in Brazil couldn’t apply what they had been taught. Immediately after a lecture about the polarization of light, with demonstrations using strips of polarizing film, he asked a question whose answer could be determined by looking through the film at the light reflected off the bay outside. Despite their ability to recite the relevant formula when asked directly (something called Brewster’s Angle), it never occurred to them that the formula provided a way to answer the question. They’d learned the symbols (the maps) but just couldn’t relate them back to the underlying reality sufficiently to use them in real life.
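The formula in question is a single line. Assuming the standard textbook statement of Brewster’s law, with n_1 the refractive index of air (about 1.0) and n_2 that of water (about 1.33), the polarizing angle is:

```latex
% Brewster's law: reflected light is fully polarized at the angle theta_B
\tan\theta_B = \frac{n_2}{n_1} \approx \frac{1.33}{1.00}
\quad\Longrightarrow\quad
\theta_B \approx 53^\circ
```

Light reflected off the water near that angle is strongly polarized parallel to the surface, so rotating a strip of polarizing film while looking at the glare from the bay answers the question directly. The students had the map; they had simply never connected it to the road.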

“I don’t know what’s the matter with people: they don’t learn by understanding; they learn by some other way—by rote, or something,” Feynman wrote. “Their knowledge is so fragile!”

Recognizing when you’re stuck in the words, looking at the map rather than looking at the road, is something that is surprisingly hard to learn. The key is to remember that this is an experiential practice. You can’t just read about it. You have to practice it. As we’ll see in the next chapter, that’s what I did in my continuing struggle to understand the import of open source software.
