Chapter 1. Introduction
Chris DiBona, Sam Ockman, and Mark Stone
Prologue
Linux creator Linus Torvalds reports that the name “Linus” was chosen for him because of his parents’ admiration for Nobel laureate Linus Pauling. Pauling was the rarest of men: a scientist who won the Nobel Prize not once, but twice. We find a cautionary tale for the Open Source community in the story of Pauling’s foundational work that made possible the discovery of the structure of DNA.
The actual discovery was made by Francis Crick and James Watson, and is famously chronicled in Watson’s book The Double Helix. Watson’s book is a remarkably frank account of the way science is actually done. He recounts not just the brilliance and insight, but the politics, the competition, and the luck. The quest for the secret of DNA became a fierce competition between, among others, Watson and Crick’s lab in Cambridge, and Pauling’s lab at Cal Tech.
Watson describes with obvious unease the way in which Pauling came to know that Watson and Crick had solved the mystery, and created a model of DNA’s helical structure. The story here centers on Max Delbruk, a mutual friend who traveled between Cambridge and Cal Tech. While sympathetic to Watson and Crick’s desire to keep the discovery secret until all results could be confirmed, Delbruk’s allegiance ultimately was to science itself. In this passage, Watson describes how he learned that Pauling had heard the news:
Linus Pauling first heard about the double helix from Max Delbruk. At the bottom of the letter that broke the news of the complementary chains, I had asked that he not tell Linus. I was still slightly afraid something would go wrong and did not want Pauling to think about hydrogen-bonded base pairs until we had a few more days to digest our position. My request, however, was ignored. Delbruk wanted to tell everyone in his lab and knew that within hours the gossip would travel from his lab in biology to their friends working under Linus. Also, Pauling made him promise to let him know the minute he heard from me. Then there was the even more important consideration that Delbruk hated any form of secrecy in scientific matters and did not want to keep Pauling in suspense any longer.
Clearly the need for secrecy made Watson uncomfortable. One of the poignant themes that runs throughout the book is Watson’s acknowledgment that competition kept parties from disclosing all they knew, and that the progress of science may have been delayed, if ever so slightly, by that secrecy.
Science, after all, is ultimately an Open Source enterprise. The scientific method rests on a process of discovery, and a process of justification. For scientific results to be justified, they must be replicable. Replication is not possible unless the source is shared: the hypothesis, the test conditions, and the results. The process of discovery can follow many paths, and at times scientific discoveries do occur in isolation. But ultimately the process of discovery must be served by sharing information: enabling other scientists to go forward where one cannot; pollinating the ideas of others so that something new may grow that otherwise would not have been born.
What Is Free Software and How Does It Relate to Open Source?
In 1984, Richard Stallman, a researcher at the MIT AI Lab, started the GNU project. The GNU project’s goal was, simply put, to make it so that no one would ever have to pay for software. Stallman launched the project because he felt that the knowledge that constitutes a running program—what the computer industry calls the source code—should be free. If it were not, Stallman reasoned, a very few, very powerful people would dominate computing.
Where proprietary commercial software vendors saw an industry guarding trade secrets that must be tightly protected, Stallman saw scientific knowledge that must be shared and distributed. The basic tenet of the GNU project and the Free Software Foundation (the umbrella organization for the GNU project) is that source code is fundamental to the furthering of computer science, and that freely available source code is truly necessary for innovation to continue.
Stallman worried how the world would react to free software. Scientific knowledge is often in the public domain; it is one function of academic publishing to put it there. With software, however, it was clear that just letting the source code go into the public domain would tempt businesses to co-opt the code for their own profitability. Stallman’s answer to this threat was the GNU General Public License, known as the GPL (see Appendix B).
The GPL basically says that you may copy and distribute the software licensed under the GPL at will, provided you do not inhibit others from doing the same, either by charging them for the software itself or by restricting them through further licensing. The GPL also requires works derived from work licensed under the GPL to be licensed under the GPL as well.
When Stallman and others in this book talk about free software, they are really talking about free speech. English handles the distinction here poorly, but it is the distinction between gratis and liberty, as in “Free as in speech, not as in beer.” This radical message (the freedom part, not the beer part) led many software companies to reject free software outright. After all, they are in the business of making money, not adding to our body of knowledge. For Stallman, this rift between the computer industry and computer science was acceptable, maybe even desirable.
What Is Open Source Software?
In the spring of 1997, a group of leaders in the free software community assembled in California. This group included Eric Raymond, Tim O’Reilly, and VA Research president Larry Augustin, among others. Their concern was to find a way to promote the ideas surrounding free software to people who had formerly shunned the concept. They were concerned that the Free Software Foundation’s anti-business message was keeping the world at large from really appreciating the power of free software.
At Eric Raymond’s insistence, the group agreed that what they lacked in large part was a marketing campaign, a campaign devised to win mind share, and not just market share. Out of this discussion came a new term to describe the software they were promoting: Open Source. A series of guidelines were crafted to describe software that qualified as Open Source.
Bruce Perens had laid much of the groundwork for the Open Source Definition. One of the GNU project’s stated goals was to create a freely available operating system that could serve as the platform for running GNU software. In a classic case of software bootstrapping, Linux had become that platform, and Linux had been created with the help of GNU tools. Perens had headed the Debian project, which managed a Linux distribution that included only software adhering to the spirit of GNU. Perens had laid this out explicitly in a document called the “Debian Social Contract.” The Open Source Definition is a direct descendant of the “Debian Social Contract,” and thus Open Source is very much in the spirit of GNU.
The Open Source Definition allows greater liberties with licensing than the GPL does. In particular, the Open Source Definition allows greater promiscuity when mixing proprietary and open-source software.
Consequently, an Open Source license could conceivably allow the use and redistribution of open-source software without compensation or even credit. As an example, you can take great swaths of the Netscape browser source code and distribute it with another, possibly proprietary, program without even notifying Netscape. Why would Netscape wish this? For a number of reasons, but the most compelling is that doing so wins greater market share for their client code, which works very well with their commercial offerings. In this way, giving away source code is a very good way to build a platform. This is also one of the reasons why the people at Netscape did not use the GPL.
This is not a small issue in the community. Late in 1998, there was an important dispute that threatened to fracture the Linux community. This fracture was caused by the advent of two software systems, GNOME and KDE, each of which aims to build an object-oriented desktop interface. On the one hand, KDE utilized Troll Technology’s Qt library, a piece of code that was proprietary, but quite stable and mature. On the other hand, the GNOME people decided to use the GTK+ library, which was a completely free library, though not as mature as Qt.
In the past, Troll Technology would have had to choose between using the GPL and maintaining their proprietary stance. The rift between GNOME and KDE would have continued. With the advent of Open Source, however, Troll was able to change their license to one that met the Open Source definition, while still giving Troll the control over the technology they wanted. The rift between two important parts of the Linux community appears to be closing.
The Dark Side of the Force
Though he may not have realized it at the time, Watson stood at the threshold of a new era in biological science. At the time of the discovery of the double helix, science in biology and chemistry was essentially a craft, a practical art. It was practiced by a few men working in small groups, primarily under the auspices of academic research. The seeds of change had already been planted, however. With the advent of several medical breakthroughs, notably the polio vaccine and the discovery of penicillin, biological science was about to become an industry.
Today organic chemistry, molecular biology, and basic medical research are not practiced as a craft by a small body of practitioners, but pursued as an industry. While research continues in academia, the vast majority of researchers, and the vast majority of research dollars, belong to the pharmaceutical industry. This alliance between science and industry is an uneasy one at best. While pharmaceutical companies can fund research at a level undreamed of in academic institutions, they also fund research with a vested interest. Consider: would a pharmaceutical company rather put major funding into research toward a cure that is therapy-based, or one that is medication-based?
Computer science, too, must exist in an uneasy alliance with industry. Once new ideas came primarily from academic computer scientists; now the computer industry drives innovation forward. While the rank and file of Open Source programmers are still the many computer science undergrads and graduate students around the world, more and more Open Source programmers are working in industry rather than academic settings.
Industry has produced some marvelous innovations: Ethernet, the mouse, and the Graphical User Interface (GUI) all came out of Xerox PARC. But there is an ominous side to the computer industry as well. No one outside of Redmond really thinks that it is a good idea for Microsoft to dictate, to the extent they do, what a computer desktop should look like or have on it.
Industry can have a negative impact on innovation as well. The GNU Image Manipulation Program (GIMP) languished incomplete for a year at beta release 0.9. Its creators, two students at Berkeley, had left school to take jobs in industry, and left their innovation behind.
Use the Source, Luke
Open Source was not an idea decreed from the top. The Open Source movement is a genuine grassroots revolution. While evangelists like Eric Raymond and Bruce Perens have had great success changing the language around free software, that change would have been impossible if the conditions were not right. We have reached the stage where an entire generation of students who learned computer science under the influence of GNU is now at work in industry, and its members have quietly been bringing free software in through the back doors of their companies for years. They do so not from altruistic motives, but to bring better code to their work.
The revolutionaries are in place. They are the network engineers, system administrators, and programmers who have thrived on open-source software throughout their education, and want to use open-source software to thrive professionally as well. Free software has become a vital part of many companies, often unwittingly, but in some cases quite deliberately. Open Source has come of age: there is such a thing as an Open Source business model.
Bob Young’s company, Red Hat Software, Inc., thrives on giving away its core product: Red Hat Linux. One good way to deliver free software is to package it as a full-featured distribution with a nice manual. Young is primarily selling convenience, as most users do not want to bother with downloading all the pieces that make up a full-featured Linux system.
But he is not the only one doing this. So why does Red Hat dominate the U.S. market? Why does SuSE Linux dominate Europe? Open-source software is a commodity market, and in any commodity market, customers value a brand they can trust. Red Hat’s strength comes from brand management: consistent marketing and community outreach that lead users to recommend Red Hat when friends ask which distribution to use. The same is true for SuSE, and the two companies own their respective markets mostly because they were first to take brand management seriously.
Supporting the community is essential. Red Hat, SuSE, and other companies in the Linux space understand that to just make money off of Linux without giving anything back would cause two problems. First, people would consider such a company a freeloader and would recommend a competitor instead. Second, a company must be able to differentiate itself from competitors. Companies like CheapBytes and Linux Central merely provide low-cost distribution, selling CDs for as little as a dollar. For Red Hat to be perceived as offering greater value than these budget distributors, Red Hat must give something back. In a wonderful irony of the Open Source model, Red Hat can afford to charge $49.95 for their distribution only because they support the development of new code and return that code to the community at large as Open Source.
This kind of brand management is new to Open Source, but an old-fashioned model of simply providing good service has been a part of the Open Source business model for a long time. Michael Tiemann helped found Cygnus on the idea that though the world’s best compiler, GCC, was freely available, companies would still be willing to pay for support of and enhancements to that compiler. Co-founder John Gilmore’s description of Cygnus is apt: “Making free software affordable.”
In fact, this model of giving away the product and selling the support is proliferating rapidly in the Open Source world. VA Research has been making and supporting high-quality Linux systems since late 1993. Penguin Computing offers similar products and services. LinuxCare does full, soup-to-nuts support for Linux in all of its flavors. Sendmail creator Eric Allman has now created Sendmail Inc. to provide service and enhancements for the mail server software that holds about 80% of the market share. Sendmail is an interesting case because the company takes a two-tiered approach to the market: it offers the proprietary Sendmail Pro and the free Sendmail, which trails Sendmail Pro’s development cycle by one year.
Along those same lines, Paul Vixie, the president of Vixie Enterprises and a contributor to this book, enjoys a practical monopoly through his program BIND. This unassuming program is used every time you send an email, visit a web site, or download a file via FTP. BIND is the program that handles the conversion of addresses like “www.dibona.com” to their actual IP address (in this case, 209.81.8.245). Vixie enjoys a thriving consultancy derived from his program’s ubiquity.
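The conversion BIND performs follows a simple, standardized wire format (RFC 1035). As a rough illustration of what a resolver actually sends to a BIND server, here is a minimal sketch in Python; the function name and default query ID are our own, not part of BIND itself:

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build a minimal DNS question packet asking for an A record,
    the kind of query a resolver sends to a server like BIND."""
    # Header: id, flags (recursion desired), 1 question, 0 answers,
    # 0 authority records, 0 additional records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each dot-separated label is length-prefixed,
    # and the whole name is terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record, an IPv4 address), QCLASS=1 (Internet)
    qtype_qclass = struct.pack(">HH", 1, 1)
    return header + qname + qtype_qclass

query = build_dns_query("www.dibona.com")
```

A real lookup would send these bytes over UDP to port 53 of a name server and parse the reply containing the IP address; the sketch stops at constructing the question.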
Innovation Through the Scientific Method
The most fascinating development in the Open Source movement today is not the success of companies like Red Hat or Sendmail Inc. What’s intriguing is to see major corporations within the computer industry, companies like IBM and Oracle, turn their attention to Open Source as a business opportunity. What are they looking for in Open Source?
Innovation.
Science is ultimately an Open Source enterprise. The scientific method rests on a process of discovery, and a process of justification. For scientific results to be justified, they must be replicable. Replication is not possible unless the source is shared: the hypothesis, the test conditions, and the results. The process of discovery can follow many paths, and at times scientific discoveries do occur in isolation. But ultimately the process of discovery must be served by sharing information: enabling other scientists to go forward where one cannot; pollinating the ideas of others so that something new may grow that otherwise would not have been born.
Where scientists talk of replication, Open Source programmers talk of debugging. Where scientists talk of discovering, Open Source programmers talk of creating. Ultimately, the Open Source movement is an extension of the scientific method, because at the heart of the computer industry lies computer science. Consider the words of Grace Hopper, inventor of the compiler, who said, in the early 60s:
To me programming is more than an important practical art. It is also a gigantic undertaking in the foundations of knowledge.
Computer science, though, differs fundamentally from all other sciences. Computer science has only one means of enabling peers to replicate results: share the source code. To demonstrate the validity of a program to someone, you must provide them with the means to compile and run the program.
Replication makes scientific results robust. One scientist cannot expect to account for all possible test conditions, nor necessarily have the test environment to fully test every aspect of a hypothesis. By sharing hypotheses and results with a community of peers, the scientist enables many eyes to see what one pair of eyes might miss. In the Open Source development model, this same principle is expressed as “Given enough eyeballs, all bugs are shallow.” By sharing source code, Open Source developers make software more robust. Programs get used and tested in a wider variety of contexts than one programmer could generate, and bugs get uncovered that otherwise would not be found. Because source code is provided, bugs can often be removed, not just discovered, by someone who otherwise would be outside the development process.
The open sharing of scientific results facilitates discovery. The scientific method minimizes duplication of effort because peers will know when they are working on similar projects. Progress does not stop simply because one scientist stops working on a project. If the results are worthy, other scientists will follow up. Similarly, in the Open Source development model, sharing source code facilitates creativity. Programmers working on complementary projects can each leverage the results of the other, or combine resources into a single project. One project may spark the inspiration for another project that would not have been conceived without it. And worthy projects need not be orphaned when a programmer moves on. With the source code available, others can step in and take over the direction of a project. The GIMP sat idle for a year, but ultimately development did continue, and today the GIMP is pointed to with pride when Open Source developers consider what they can do in an area that is new territory for them: end-user applications.
Fortune 500 companies want to leverage this powerful model for innovation. IBM will happily charge a tidy sum to set up and administer the integration of Apache into MIS departments. This is a net win for IBM: the company can install an exceptionally stable platform, which reduces the cost of supporting it, and deliver service that truly helps its customers. Just as important, IBM engineers share in the cross-pollination of ideas with the independent developers on the Apache team.
This is precisely the reasoning behind Netscape’s decision to make its browser Open Source. Part of the goal was to stabilize or increase market share. But more importantly, Netscape looked to the community of independent developers to drive innovation and help them build a superior product.
IBM was quick to see that tightly integrating software technologies like Apache into server platforms like the AS/400 and the RS/6000 line can only help in winning contracts and selling more IBM hardware. IBM is taking this to the next step by porting its popular DB2 database to the Linux operating system. While many took this as a response to Oracle releasing its Oracle 8 line on Linux, IBM has taken its role in the community seriously and has dedicated resources to the open software cause. By porting Apache to the AS/400, IBM has taken one of its most popular platforms and legitimized the open technologies in ways only IBM can.
It will be interesting to see what happens in the competitive bids that companies like Coleman (of Thermo-electron), SAIC, BDM, and IBM make to the federal government and to industry. Consider the software cost of installing 1,000 seats with NT or Solaris and compare it with installing 1,000 seats with Linux. When you can drop your bid price by over a quarter of a million dollars, you can compete much more effectively for these sorts of contracts. Companies like CSC, which have a reputation for forgoing a percent or two of profit margin to win more contracts, are probably already figuring out how to leverage technologies like Linux.
While companies like IBM, Oracle, and Netscape have begun to integrate their business model with Open Source, many traditional software companies continue to focus on purely proprietary solutions. They do so at their peril.
In the web server space, Microsoft’s complete denial of the Open Source phenomenon is almost amusing. The Apache web server has, at the time of writing, more than 50% of the web serving market according to the Netcraft survey (http://www.netcraft.com/survey). When you look at advertisements for Microsoft’s Internet Information Server (IIS), you see them tout that they own over half the market in web serving—over half the commercial server market, that is. Compared against competitors like Netscape and Lotus, Microsoft has a substantial edge, but that “edge” looks puny in the overall server market, where Microsoft’s 20% is dwarfed by Apache’s 53%.
The irony deepens, however. The fact is that 29% of the Web now runs on Linux-based servers, according to surveys conducted with QUESO and WTF. QUESO is a tool that can determine the operating system a machine is running by sending TCP/IP packets to it and analyzing how the server responds to them. When run in conjunction with the likes of the Netcraft query engine, which analyzes the identification tags that the server responds with, it can produce a telling picture of the Web by OS and server type.
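The banner-analysis half of such a survey is easy to picture. The toy classifier below is our own illustration, not the actual Netcraft or QUESO code; it sorts servers by the identification tag returned in the HTTP Server header, using made-up sample banners. (QUESO’s real OS fingerprinting works at the TCP/IP level and is not reproduced here.)

```python
def classify_server(server_header):
    """Roughly classify a web server from its HTTP 'Server'
    identification tag, in the spirit of a Netcraft-style survey."""
    tag = server_header.lower()
    if "apache" in tag:
        return "Apache"
    if "microsoft-iis" in tag:
        return "Microsoft IIS"
    if "netscape" in tag:
        return "Netscape"
    return "Other"

# Tally a (hypothetical) sample of banners the way a survey would
sample = [
    "Apache/1.3.3 (Unix) (Red Hat/Linux)",
    "Microsoft-IIS/4.0",
    "Apache/1.2.6",
    "Netscape-Enterprise/3.6",
]
counts = {}
for banner in sample:
    kind = classify_server(banner)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)
```

A survey engine would gather these tags by connecting to each site, then aggregate the tallies across millions of hosts to produce market-share figures like those cited above.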
In fact, proprietary software vendors have already suffered a number of quiet casualties. Linux and FreeBSD have all but eliminated the opportunity to sell a proprietary Unix on PC hardware. One such company, Coherent, has already foundered. The Santa Cruz Operation (SCO) has gone from a leading Unix vendor to an afterthought in the span of a couple of years. SCO, the company, will probably find a way to survive, but will its flagship product, SCO Unix, be another casualty of Open Source?
Sun Microsystems has in many ways supported open-source development over the years, whether through donations of hardware and resources to the SPARC port of Linux, or through support for the development of John Ousterhout’s Tcl scripting language. It is ironic, then, that the company that grew out of the joyous free software roots at Berkeley that Kirk McKusick describes should so often struggle to grasp the significance of the Open Source phenomenon.
Let’s take a moment to compare and contrast SCO and Sun.
Sun makes the majority of its money from supporting and servicing its OS and hardware. Sun’s product line ranges from a desktop workstation that is competitively priced with a PC to large enterprise-class server clusters that compete in the mainframe space. Its profits in the hardware realm come less from the low-end Ultra series than from the service, sales, and support of its highly specialized and customized E and A series servers. It is estimated that Sun derives fully 50% of its profits from support, training, and consulting services.
SCO, on the other hand, makes money by selling the SCO Unix operating system, programs like compilers and servers, and training and education on the use of SCO products. So while SCO is a nicely put together organization, it is vulnerable in the way a one-crop farm is vulnerable to a single blight destroying the harvest.
Sun sees the development of Linux as a concern mainly for the lower end of its product offering. Sun’s strategy is to make sure that Linux runs on Sun hardware, so that customers who want Linux can still choose Sun machines. This is interesting because Sun can then continue to offer hardware support for those machines. In the future, we would not be surprised to see Sun offer software support for Linux on its lower-end machines.
In many ways, this is a short step for Sun. In fact, if you were to call a Sun administrator right now and ask what the first thing they do with a new Sun box is, they would tell you: “Download the GNU tools and compilers and install my favorite shells.” Sun may finally get this message from its customer base and simply do this for people as an outreach measure. However, Sun will operate at a disadvantage until it sees the service its customer base can provide: innovation through the cross-pollination of ideas based on released source code.
SCO, on the other hand, has a less flexible model. SCO’s pricing model sells the OS first, with additional costs for tools that the Linux user takes for granted, such as compilers and text processing languages. This model simply can’t be sustained in the face of competition from a robust free OS. Unlike Sun, which has added value in its broad hardware line, SCO has no hardware to tie profits to. Their OS is essentially all they have, and in SCO’s case, that’s not good enough. What will SCO do?
Their response so far has not been enlightened. At the beginning of 1998, SCO sent a letter to its vast mailing list of users slamming open Unixes like Linux and FreeBSD as unstable and unprofessional, while offering a reduced price on the SCO base OS. The letter was widely scorned, and SCO had to do some serious backpedaling; it insulted a number of people by blatantly misrepresenting the credentials of Linux, and it gave SCO’s customers no credit for being smart enough to see through the FUD. SCO eventually published a retraction on its web site.
In late 1998, SCO sent out a press release talking about how SCO Unix now has a Linux compatibility layer, so that your favorite Linux programs can be run under SCO Unix. The response was underwhelming. Why spend money on an OS just to make it compatible with a competitive free offering?
SCO is in a unique position to benefit from the Open Source movement. SCO has some very valuable intellectual property that they can leverage into a real position of power in the Open Source future. They must, however, make a leap of faith. Instead of seeing Open Source as a threat that would erode SCO’s intellectual property, they need to see Open Source as an opportunity to bring innovation to that intellectual property.
Of course the maneuverings of a company like SCO or even Sun with respect to Open Source pale compared to the actions of Microsoft. So far Microsoft remains locked in its proprietary model, and seems determined to see that model through at least the release of Windows 2000.
Our guess is that Windows 2000 will ship in the latter part of 2000 or early 2001 to great fanfare; it will, after all, be the great merging of NT and 98. Somewhere around this event, or six months before, there will be an announcement of a new Microsoft Windows operating system. Microsoft has always coveted the “lucrative enterprise market,” the place where machines serve a company’s lifeblood of data. So far, however, there is no evidence that Microsoft can deliver a Windows NT or Windows 2000 system with the greater stability this market requires. So this new system will be decreed the coming solution.
Let’s call the product that represents this phantom change “Windows Enterprise,” or WEnt. Microsoft will look at NT and ask, essentially: how can we make it more reliable and stable? OS theory, as Linus Torvalds points out in his essay, really hasn’t changed much in the last 20 years, so Microsoft engineering will essentially come back and say that a tightly written kernel, without any pollution at the executive level of execution, is the best way to achieve reliability and speed. Thus, to fix the major errors of the Windows NT kernels, namely the inclusion of ill-tested or ill-chosen third-party drivers and the decision to make the GUI part of the kernel, Microsoft will have to either write a monstrously slow emulation layer or break a ton of old applications. Microsoft is certainly capable of pursuing either course. But open-source programs may well have reached the maturity where corporations buying software will ask themselves whether they trust Microsoft to deliver what is already available in Linux: namely, a stable kernel.
The answer will of course show itself with time. No one really knows if Microsoft can actually write solid stable software at this level. The “Halloween Documents” that Eric Raymond refers to suggest that even within Microsoft there are serious doubts.
Perils to Open Source
Most software ventures, like most scientific enterprises, fail. As Bob Young points out, making successful open-source software is not so very different from making successful proprietary software. In both cases real success is rare, and the best innovators are those who learn from mistakes.
The rampant creativity that leads to innovation in both science and software comes at a cost. Maintaining control of an active Open Source project can be difficult. This fear of losing control prevents some individuals and many companies from active participation. Specifically, one concern when embarking on or joining an open-source project is that a large competitor or group will come in and create what is called a fork in the code base. Much like a fork in the road, a code base can at times diverge into two separate, incompatible roads, and never the twain shall meet. This is not an idle problem; look, for instance, at the multiple forks that the BSD-based operating systems have taken, leading to NetBSD, OpenBSD, FreeBSD, and many others. What is to keep this from happening to Linux?
One thing that keeps it from happening is the open method used in developing the Linux kernel. Linus Torvalds, Alan Cox, and the rest run a tight ship and are the central authority for what goes into the kernel. The Linux kernel project has been called a benign dictatorship, with Linus as its dictator, and so far this model has produced a tight, nicely written kernel without much extraneous cruft in it.
What’s ironic is that while Linux has experienced little actual forking, there exist large patches that convert the Linux kernel into a hard real-time kernel, suitable for tight, critical device control, and there exist versions of Linux that run on dramatically different architectures. These patches could be considered forks, as they are based on a given kernel and grow outward from there, but because they occupy special niche areas for Linux, they do not have a fracturing effect on the Linux community as a whole.
Think, by way of analogy, of a scientific theory applied to special cases. Most of the world gets along just fine using Newton’s laws of motion for mechanical calculations. Only under special circumstances of large masses or high velocities must we have recourse to Einstein’s theory of relativity. Einstein’s theory could grow and extend without undermining the application of the older Newtonian theoretical base.
But competing software ventures often conflict, just as competing scientific theories often conflict. Look at the history of Lucid. Lucid was a company formed to exploit and develop a streamlined version of the popular programmer’s editor Emacs, written by Richard Stallman, and sell it to the development community as a replacement for the original. Lucid’s alternative was called Lucid Emacs and then XEmacs. When the Lucid team went to pitch the XEmacs solution to various companies, they found that they could not draw enough of a distinction between the results produced by XEmacs and those produced by Emacs. That difficulty, combined with the lackluster state of the computer market at the time, meant that Lucid was short-lived.
Interestingly, Lucid did GPL its XEmacs code before it went out of business. Even this failed enterprise is a testament to the longevity of open-source software. As long as people find a use for the software, they will maintain it to work with new systems and architectures. Even now, XEmacs enjoys terrific popularity, and you can spark an interesting debate among Emacs hackers by asking them which Emacs they prefer. XEmacs, with very few people working on the program, is still a vital, advancing product, changing, keeping up, and adapting to new times and architectures.
Motivating the Open Source Hacker
The Lucid experience shows that programmers often have loyalty to a project that goes beyond direct compensation for working on the project. Why do people write free software? Why do they give away freely what they could charge hundreds of dollars an hour for? What do they get out of it?
The motivation is not just altruism. The contributors here may not be loaded down with Microsoft stock options, but each has achieved a reputation that should assure him opportunities that pay the rent and feed the kids. From the outside this can seem like a paradox; you can’t eat free software, after all. The answer lies, in part, in thinking beyond conventional notions of work and compensation. We are witnessing a new economic model take shape, not just a new culture.
Eric Raymond has become a kind of self-appointed participant anthropologist to the Open Source community, and in his writings he touches on the reasons why people develop software only to give it away.
Keep in mind that these people have been, for the most part, coding for years, and don’t see programming itself as burdensome, or as work. A very complex project like Apache or the Linux kernel brings the satisfaction of the ultimate in intellectual exercise. Much like the rush a runner feels during a race, a true programmer feels a rush after writing a perfect routine or a tight piece of code. It is difficult to describe the joy felt after completing or debugging a hideously tricky piece of recursive code that has been a source of trouble for days.
The point is that many programmers code because it is what they love to do, and in fact it is how they define their intellect. Without coding, a programmer feels like less of a person, much like an athlete deprived of an opportunity to compete. Discipline can be a problem with programmers as much as with athletes; many programmers really don’t enjoy maintaining a piece of code after having mastered it.
Still other programmers don’t take this “macho” view of their craft, and take a more scholarly view instead. Many programmers consider themselves, rightly, to be scientists. Scientists aren’t supposed to hoard profits from their inventions; they are supposed to publish and share their inventions for all to benefit from. A scientist isn’t supposed to let profits come at the expense of the pursuit of knowledge.
What all these introspections on programming have in common is an emphasis on reputation. Programming is a gift culture: the value of a programmer’s work can only come from sharing it with others. That value is enhanced when the work is more widely shared, and enhanced further when it is more completely shared, by showing the source rather than just the results of a pre-compiled binary.
Programming is also about empowerment, what Eric Raymond calls “scratching an itch.” Most Open Source projects began with frustration: looking for a tool to do a job and finding none, or finding one that was broken or poorly maintained. Eric Raymond began fetchmail this way; Larry Wall began Perl this way; Linus Torvalds began Linux this way. Empowerment, in many ways, is the most important concept underlying Stallman’s motivation for starting the GNU project.
The Venture and Investment Future of Linux
A hacker’s motivations may be intellectual, but the result need not be a lifestyle of sacrifice. More and more, enterprises and individual Open Source programmers are coming together in a new spirit of pragmatism and opportunity.
In Silicon Valley, where we work and live, there is a history of investment and venture capital that powers the economy of the region. It can be traced back to the years when the transistor was first being exploited for commercial gain and the microprocessor became a force in the industry, replacing cumbersome logic boards that carried thousands of chips and individual transistors.
At any given time there is a hot technology on which venture capital concentrates. Venture capitalists don’t ignore other companies and opportunities, but they realize that to meet their benchmarks of economic performance, they don’t just need successful companies; they need hot companies that can make an Initial Public Offering (IPO) within three years of investment, or be sold for hundreds of millions of dollars to companies like Oracle or Cisco.
In 1998, the great wave of the Internet is ebbing; the rash of Internet IPOs that began with Netscape’s spectacular debut has begun to decline. In a fitting act of symbolism, America Online’s purchase of Netscape really does signal the end of an era. Internet stocks are now looked at more closely by the investment community, and generally Internet companies are held to the same standards as other companies: they must have some plausible expectation of profitability ahead.
So where will venture capital go? Our guess is that companies related to Linux and open-source software are, and will remain, the hot investment through the end of the millennium. With any luck, you will see a great rash of Linux and Open Source IPOs, starting with Red Hat Software in late 1999. The money out there to be invested is nothing less than staggering, and companies like Scriptics, Sendmail, and Vix.com are well poised to take advantage of the favorable market conditions and build their dream companies.
The question really is not whether venture capital funding will flow to Open Source, but why the flow has only begun to trickle in that direction. Keep in mind that free software is not new: Richard Stallman started the FSF in 1984, and he was building on a tradition dating back long before that. Why did it take so long to catch on?
A look at the computing landscape shows a situation in which a very large company with very deep pockets controls the lion’s share of the commercial market. In Silicon Valley, hopeful applications vendors looking for backing from the angel and venture capital community learn very quickly that if they position themselves against Microsoft, they will not get funded. Every startup either has to play Microsoft’s game or not play at all.
This creatively oppressive environment is clearly where the original impetus for the rise of free software began to take root. Any programmer who has had to create programs for Microsoft’s Windows operating systems will tell you that they present a daunting collection of cumbersome interfaces designed to make the program completely dependent on Microsoft libraries. The sheer number of interfaces Microsoft presents to the programmer serves the purpose of making any native Windows program very difficult to port to other operating systems.
The biggest arena that Microsoft has yet to dominate—the Internet—has no such restriction. As Scott Bradner describes, the Internet is built on a powerful collection of open standards maintained on the merit of individual participation, not the power of a corporate wallet. The Internet is, in many ways, the original Open Source venture. Keeping the Internet firmly based on open standards made it possible for a wide and diverse range of programmers to work on developing Internet applications. The Internet’s spectacular growth is a testament to the power of this open standards model.
The structures inherent in the Internet’s success are present in the Open Source movement as well. Linux distributors like Red Hat and SuSE compete, yes, but they compete based on open standards and shared code. Both use the Red Hat Package Manager (RPM) as their package management tool, for example, rather than trying to lock developers into individual package management systems. Debian uses a different package management tool, but because both Debian’s and Red Hat’s tools are open-source programs, compatibility between the two has been achieved.
So the infrastructure that made Internet technologies a tempting arena for venture capitalists is present in Open Source, and should make Open Source technologies equally tempting.
More importantly, though, the Internet has created a new infrastructure that Open Source can leverage. We are moving from the era of software enterprises to the era of infoware enterprises that Tim O’Reilly describes. To make this move, the barriers to entry and the costs of distribution had to be lowered dramatically. The Internet has lowered both.
Science and the New Renaissance
The Open Source development model of today has its roots in the academic computer science of a decade or more ago. What makes Open Source dramatically more successful today, however, is the rapid dissemination of information made possible by the Internet. When Watson and Crick discovered the double helix, they could reasonably expect the information to travel from Cambridge to Cal Tech in a matter of days, or weeks at most. Today the transmission of such information is effectively instantaneous. Open Source has been born into a digital renaissance made possible by the Internet, just as modern science was made possible during the Renaissance by the invention of the printing press.
The Middle Ages lacked an affordable information infrastructure. Written works had to be copied by hand at great expense, and hence the information had to have an immediate value attached to it. Trade records, banking transactions, diplomatic correspondence: this information was concise enough and carried enough immediate value to be transmitted. The speculative writings of alchemists, priests, and philosophers—the men who would later be called scientists—took a much lower priority, and hence that information was disseminated much more slowly. The printing press changed all this by dramatically lowering the barriers to entry in the information infrastructure. Scholars who had previously worked in isolation could, for the first time, establish a sense of community with other scholars all over Europe. But this exercise in community building required an absolute commitment to the open sharing of information.
What was born out of this community was the notion of academic freedom, and ultimately the process we now call the Scientific Method. None of this would have been possible without the need to form community, and the open sharing of information has, for centuries, been the cement that has held the scientific community together.
Imagine for a moment if Newton had withheld his laws of motion, and instead gone into business as a defense contractor to artillerists following the Thirty Years’ War. “No, I won’t tell you how I know about parabolic trajectories, but I’ll calibrate your guns for a fee.” The very idea, of course, sounds absurd. Not only did science not evolve this way, but it could not have evolved this way. If that had been the mindset of those scientifically inclined, their very secrecy would have kept science from developing and evolving at all.
The Internet is the printing press of the digital age. Once again, the barriers to entry for the information infrastructure have been dramatically lowered. No longer does source code need to be distributed on paper tape as with the original version of Unix, or floppy disks, as in the early days of DOS, or even on CD-ROM. Any ftp or web server can now serve as a distribution point that is cheap and effectively instantaneous.
While this renaissance holds great promise, we must not forget the centuries-old scientific heritage on which the Open Source development model is based. Computer science and the computer industry do exist in an uneasy alliance today. There is pressure from industry giants like Microsoft to keep new developments proprietary for the sake of short-term financial gain. But as more and more of the development work in computer science has its origins in industry rather than academia, industry must take care to nourish computer science through the open sharing of ideas—namely, the Open Source development model. The computer industry must do this not out of any altruistic motive to serve a greater cause, but for the most basic pragmatic reason: enlightened self-interest.
First of all, it would be shortsighted of those in the computer industry to believe that monetary reward is the primary concern of Open Source’s best programmers. To involve these people in industry, their priorities must be respected. These people are involved in a reputation game, and history has shown that scientific success outlives financial success. We remember a few of the great industrialists of the last hundred years: Carnegie, Rockefeller. We remember a great many more of the scientists and inventors from the last hundred years: Einstein, Edison . . . Pauling. When the history of this time is written a hundred years from now, people will perhaps remember the name of Bill Gates, but few other computer industrialists. They are much more likely to remember names like Richard Stallman and Linus Torvalds.
Second, and more important, industry needs the innovation science can provide. Open Source can develop and debug new software with the speed and creativity of science. The computer industry needs the next generation of ideas that will come from Open Source development.
Consider the example of Linux once again. Linux is a project that was conceived some five years after Microsoft began development of Windows NT. Microsoft has spent tens of thousands of man-hours and millions of dollars on the development of Windows NT. Yet today Linux is considered a competitive alternative to NT as a PC-based server system, an alternative to which Oracle, IBM, and other major providers of enterprise software are porting their middleware and backend products. The Open Source development model has produced a piece of software that would otherwise require the might and resources of a company like Microsoft to create.
To sustain the digital renaissance we need Open Source development. Open Source development drives progress not just in computer science, but in the computer industry as well.