Open Sources: Voices from the Open Source Revolution

The Linux Edge

Linus Torvalds

Linux today has millions of users, thousands of developers, and a growing market. It is used in embedded systems; it is used to control robotic devices; it has flown on the space shuttle. I'd like to say that I knew this would happen, that it's all part of the plan for world domination. But honestly this has all taken me a bit by surprise. I was much more aware of the transition from one Linux user to one hundred Linux users than the transition from one hundred to one million users.

Linux has succeeded not because the original goal was to make it widely portable and widely available, but because it was based on good design principles and a good development model. This strong foundation made portability and availability easier to achieve.

Contrast Linux for a moment with ventures that have had strong commercial backing, like Java or Windows NT. The excitement about Java has convinced many people that "write once, run anywhere" is a worthy goal. We're moving into a time when a wider and wider range of hardware is being used for computing, so indeed this is an important value. Sun didn't invent the idea of "write once, run anywhere," however. Portability has long been a holy grail of the computer industry. Microsoft, for example, originally hoped that Windows NT would be a portable operating system, one that could run on Intel machines, but also on RISC machines common in the workstation environment. Linux never had such an ambitious original goal. It's ironic, then, that Linux has become such a successful medium for cross-platform code.

Originally Linux was targeted at only one architecture: the Intel 386. Today Linux runs on everything from PalmPilots to Alpha workstations; it is the most widely ported operating system available for PCs. If you write a program to run on Linux, then, for a wide range of machines, that program can be "write once, run anywhere." It's interesting to look at the decisions that went into the design of Linux, and how the Linux development effort evolved, to see how Linux managed to become something that was not at all part of the original vision.

Amiga and the Motorola Port

Linux is a Unix-like operating system, but not a version of Unix. This gives Linux a different heritage than, for example, FreeBSD. What I mean is this: the creators of FreeBSD started with the source code to Berkeley Unix, and their kernel is directly descended from that source code. So FreeBSD is a version of Unix; it's in the Unix family tree. Linux, on the other hand, aims to provide an interface that is compatible with Unix, but the kernel was written from scratch, without reference to Unix source code. So Linux itself is not a port of Unix. It's a new operating system.

Porting this new operating system to other platforms was really not on my mind at the beginning. At first I just wanted something that would run on my 386.

A serious effort to make the Linux kernel code portable began with the effort to port Linux to DEC's Alpha machine. The Alpha port was not the first port, however.

The first port came from a team who ported the Linux kernel to the Motorola 68K series, which was the chip in the early Sun, Apple, and Amiga computers. The programmers behind the Motorola port really wanted to do something low-level, and in Europe there were a number of people in the Amiga community who were especially disenchanted with the idea of using DOS or Windows.

While the Amiga people did get a system running on the 68K, I don't really think of this as a successful port of Linux. They took the same kind of approach I had taken when writing Linux in the first place: writing code from scratch targeted to support a certain kind of interface. So that first 68K port could be considered a Linux-like operating system, and a fork off the original codebase.

In one sense this first 68K Linux was not helpful in creating a portable Linux, but in another sense it was. When I started thinking about the Alpha port I had to think about the 68K experience. If we took the same approach with Alpha, then I would have three different code bases to support in order to maintain Linux. Even if this had been feasible in terms of coding, it wasn't feasible in terms of management. I couldn't manage the development of Linux if it meant keeping track of an entirely new code base every time someone wanted Linux on a new architecture. Instead, I wanted to do a system where I have an Alpha-specific tree, a 68K-specific tree, and an x86-specific tree, but all in a common code base.

So the kernel underwent a major rewrite at this time. But that rewrite was motivated by how to work with a growing community of developers.

Microkernels

When I began to write the Linux kernel, there was an accepted school of thought about how to write a portable system. The conventional wisdom was that you had to use a microkernel-style architecture.

With a monolithic kernel such as the Linux kernel, memory is divided into user space and kernel space. Kernel space is where the actual kernel code is loaded, and where memory is allocated for kernel-level operations. Kernel operations include scheduling, process management, signaling, device I/O, paging, and swapping: the core operations that other programs rely on to be taken care of. Because the kernel code includes low-level interaction with the hardware, monolithic kernels appear to be specific to a particular architecture.

A microkernel performs a much smaller set of operations, and in more limited form: interprocess communication, limited process management and scheduling, and some low-level I/O. Microkernels appear to be less hardware-specific because many of the system specifics are pushed into user space. A microkernel architecture is basically a way of abstracting the details of process control, memory allocation, and resource allocation so that a port to another chipset would require minimal changes.

So at the time I started work on Linux in 1991, people assumed portability would come from a microkernel approach. You see, this was sort of the research darling at the time for computer scientists. However, I am a pragmatic person, and at the time I felt that microkernels (a) were experimental, (b) were obviously more complex than monolithic kernels, and (c) executed notably slower than monolithic kernels. Speed matters a lot in a real-world operating system, and so a lot of the research dollars at the time were spent on finding optimizations for microkernels so that they could run as fast as a normal kernel. The funny thing is that if you actually read those papers, you find that, while the researchers were applying their optimization tricks to a microkernel, those same tricks could just as easily be applied to traditional kernels to accelerate their execution.

In fact, this made me think that the microkernel approach was essentially a dishonest approach aimed at receiving more dollars for research. I don't necessarily think these researchers were knowingly dishonest. Perhaps they were simply stupid. Or deluded. I mean this in a very real sense. The dishonesty comes from the intense pressure in the research community at that time to pursue the microkernel topic. In a computer science research lab, you were studying microkernels or you weren't studying kernels at all. So everyone was pressured into this dishonesty, even the people designing Windows NT. While the NT team knew the final result wouldn't approach a microkernel, they knew they had to pay lip service to the idea.

Fortunately I never felt much pressure to pursue microkernels. The University of Helsinki had been doing operating system research from the late 60s on, and people there didn't see the operating system kernel as much of a research topic anymore. In a way they were right: the basics of operating systems, and by extension the Linux kernel, were well understood by the early 70s; anything after that has been to some degree an exercise in self-gratification.

If you want code to be portable, you shouldn't necessarily create an abstraction layer to achieve portability. Instead you should just program intelligently. Essentially, trying to make microkernels portable is a waste of time. It's like building an exceptionally fast car and putting square tires on it. The idea of abstracting away the one thing that must be blindingly fast--the kernel--is inherently counter-productive.

Of course there's a bit more to microkernel research than that. But a big part of the problem is a difference in goals. The aim of much of the microkernel research was to design for a theoretical ideal, to come up with a design that would be as portable as possible across any conceivable architecture. With Linux I didn't have to aim for such a lofty goal. I was interested in portability between real world systems, not theoretical systems.

From Alpha to Portability

The Alpha port started in 1993, and took about a year to complete. The port wasn't entirely done after a year, but the basics were there. While this first port was difficult, it established some design principles that Linux has followed since, and that have made other ports easier.

The Linux kernel isn't written to be portable to any architecture. I decided that if a target architecture is fundamentally sane enough and follows some basic rules, then Linux would fundamentally support that kind of model. For example, memory management can be very different from one machine to another. I read up on the 68K, the Sparc, the Alpha, and the PowerPC memory management documents, and found that while there are differences in the details, there was a lot in common in the use of paging, caching, and so on. The Linux kernel memory management could be written to a common denominator among these architectures, and then it would not be so hard to modify the memory management code for the details of a specific architecture.

A few assumptions simplify the porting problem a lot. For example, if you say that a CPU must have paging, then it must by extension have some kind of translation lookaside buffer (TLB), which tells the CPU how to map virtual memory. Of course, you can't be sure what form the TLB will take. But really, the only thing you need to know is how to fill it and how to flush it when you decide it has to go away. So in this sane architecture you know you need to have a few machine-specific parts in the kernel, but most of the code is based on the general mechanisms by which something like the TLB works.
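
To make that concrete, here is a minimal sketch, not actual kernel code, of the split just described: the generic memory-management logic is the same everywhere, and only the two TLB hooks (the names here are hypothetical) would be rewritten for each architecture.

    #include <stdint.h>
    #include <stdio.h>

    typedef uintptr_t vaddr_t;   /* virtual address */
    typedef uintptr_t paddr_t;   /* physical address */

    /* Architecture-specific part: each port supplies its own versions. */
    static void arch_tlb_fill(vaddr_t va, paddr_t pa)
    {
        printf("arch: map virtual %#lx -> physical %#lx\n",
               (unsigned long)va, (unsigned long)pa);
    }

    static void arch_tlb_flush_page(vaddr_t va)
    {
        printf("arch: drop translation for %#lx\n", (unsigned long)va);
    }

    /* Generic part: identical on every architecture. */
    static void generic_remap_page(vaddr_t va, paddr_t new_pa)
    {
        arch_tlb_flush_page(va);    /* the old translation has to go away */
        arch_tlb_fill(va, new_pa);  /* then the new one is installed */
    }

    int main(void)
    {
        generic_remap_page(0x400000, 0x1234000);
        return 0;
    }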

Another rule of thumb that I follow is that it is always better to use a compile-time constant rather than a variable, and often by following this rule the compiler will do a much better job of code optimization. This is obviously wise, because you can set up your code so that it is flexibly defined but still easily optimized.
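
As a small illustration (my own example, not code from the kernel), consider computing how many pages a buffer needs. With the page size as a compile-time constant the compiler can reduce the division to a shift and fold constants at build time; with a run-time variable it has to emit a genuine division.

    #include <stdio.h>

    /* Page size as a compile-time constant: the compiler can turn the
     * division below into a shift. */
    #define PAGE_SIZE 4096UL

    static unsigned long pages_needed_const(unsigned long bytes)
    {
        return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    }

    /* The same computation with a run-time variable: the compiler must
     * emit a real division, since it knows nothing about the value. */
    static unsigned long pages_needed_var(unsigned long bytes,
                                          unsigned long page_size)
    {
        return (bytes + page_size - 1) / page_size;
    }

    int main(void)
    {
        printf("%lu\n", pages_needed_const(10000));       /* prints 3 */
        printf("%lu\n", pages_needed_var(10000, 4096));   /* prints 3 */
        return 0;
    }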

What's interesting about this approach--the approach of trying to define a sane common architecture--is that by doing this you can present a better architecture to the OS than is really available on the actual hardware platform. This sounds counter-intuitive, but it's important. The generalizations you're looking for when surveying systems are frequently the same as the optimizations you'd like to make to improve the kernel's performance.

You see, when you do a large enough survey of things like page table implementation and you make a decision based on your observations--say, that the page tree should be only three deep--you find later that you could only have done it that way if you were truly interested in having high performance. In other words, if you had not been thinking about portability as a design goal, but had just been thinking about optimization of the kernel on a particular architecture, you would frequently reach the same conclusion--say, that the optimal depth for the kernel to represent the page tree is three deep.

This isn't just luck. Often when an architecture deviates from a sane general design in some of its details that's because it's a bad design. So the same principles that make you write around the design specifics to achieve portability also make you write around the bad design features and stick to a more optimized general design. Basically I have tried to reach middle ground by mixing the best of theory into the realistic facts of life on today's computer architectures.
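
As an illustration of that three-level page tree, here is a deliberately simplified, hypothetical sketch of the common model, written as ordinary user-space C rather than real kernel code; the real thing differs in many details (entry formats, locking, and so on).

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ENTRIES    1024    /* 10 bits of index per level */
    #define PAGE_SHIFT 12      /* 4 KB pages */

    struct table { void *slot[ENTRIES]; };   /* one level of the page tree */

    /* Walk directory -> middle directory -> page table and return the
     * leaf slot for a virtual address, or NULL if it is unmapped. */
    static void **walk(struct table *pgd, uint64_t va)
    {
        unsigned long top = (va >> (PAGE_SHIFT + 20)) & (ENTRIES - 1);
        unsigned long mid = (va >> (PAGE_SHIFT + 10)) & (ENTRIES - 1);
        unsigned long bot = (va >> PAGE_SHIFT) & (ENTRIES - 1);

        struct table *pmd = pgd->slot[top];
        if (!pmd)
            return NULL;
        struct table *pte = pmd->slot[mid];
        if (!pte)
            return NULL;
        return &pte->slot[bot];
    }

    int main(void)
    {
        struct table *pgd = calloc(1, sizeof(*pgd));
        printf("leaf slot: %p\n", (void *)walk(pgd, 0x12345000ULL));
        free(pgd);
        return 0;
    }

The generic code works in terms of this tree; on hardware whose page tables are shallower, the middle level can simply be folded away in the architecture-specific layer, which is roughly what the classic x86 port did.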

Kernel Space and User Space

With a monolithic kernel such as the Linux kernel, it's important to be very cautious about allowing new code and new features into the kernel. These decisions can affect a number of things later on in the development cycle beyond the core kernel work.

The first very basic rule is to avoid interfaces. If someone wants to add something that involves a new system interface you need to be exceptionally careful. Once you give an interface to users they will start coding to it and once somebody starts coding to it you are stuck with it. Do you want to support the exact same interface for the rest of your system's life?

Other code is not so problematic. If it doesn't add a new interface--say, a disk driver--there isn't much to think about; you can just add a new disk driver with little risk. If Linux didn't have that driver before, adding it doesn't hurt anyone already using Linux, and opens Linux to some new users.

When it comes to other things, you have to balance. Is this a good implementation? Is this really adding a feature that is good? Sometimes even when the feature is good, it turns out that either the interface is bad or the implementation of that feature kind of implies that you can never do something else, now or in the future.

For example--though this is sort of an interface issue--suppose somebody has some stupid implementation of a filesystem where names can be no longer than 14 characters. The thing you really want to avoid is having these limitations in an interface that is set in stone. Otherwise when you look to extend the filesystem, you are screwed because you have to find a way to fit within this lesser interface that was locked in before. Worse than that, every program that requests a filename may only have space in a variable for, say, 13 characters, so if you were to pass them a longer filename it would crash them.

Right now the only vendor that does such a stupid thing is Microsoft. Essentially, in order to read DOS/Windows files you have this ridiculous interface where all filenames had eleven characters, eight plus three. With NT, which allowed long filenames, they had to add a complete set of new routines to do the same things the other routines did, except that this set can also handle larger filenames. So this is an example of a bad interface polluting future work.
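
A sketch of the trap, using hypothetical structures of my own rather than any real filesystem's: once a fixed-size name field is part of the interface, every program compiled against it has the limit baked in, whereas an interface that takes the caller's buffer and its size can raise the limit later without breaking anything.

    #include <stdio.h>
    #include <string.h>

    /* The "set in stone" style: the name length is part of the interface,
     * so every compiled program has reserved exactly 14 bytes. */
    struct old_dirent {
        unsigned short inode;
        char           name[14];
    };

    /* The flexible style: the caller supplies the buffer and its size,
     * so the filesystem's limit can grow without breaking old programs. */
    static int get_name(const char *full_name, char *buf, size_t bufsize)
    {
        if (strlen(full_name) + 1 > bufsize)
            return -1;              /* caller's buffer is too small */
        strcpy(buf, full_name);
        return 0;
    }

    int main(void)
    {
        char buf[256];
        if (get_name("a-name-much-longer-than-fourteen-characters",
                     buf, sizeof(buf)) == 0)
            printf("%s\n", buf);
        return 0;
    }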

Another example of this happened in the Plan 9 operating system. They had this really cool system call to do a better process fork--a simple way for a program to split itself into two and continue processing along both forks. This new fork, which Plan 9 called R-Fork (and SGI later called S-Proc) essentially creates two separate process spaces that share an address space. This is helpful for threading especially.

Linux does this too with its clone system call, but it was implemented properly. However, with the SGI and Plan 9 routines they decided that programs with two branches can share the same address space but use separate stacks. Normally when you use the same address in both threads, you get the same memory location. But there is a stack segment that is specific to each one, so if you use a stack-based memory address you actually get two different memory locations; the two threads can share a stack pointer without overwriting each other's stacks.

While this is a clever feat, the downside is that the overhead in maintaining the stacks makes this in practice really stupid to do. They found out too late that the performance went to hell. Since they had programs which used the interface they could not fix it. Instead they had to introduce an additional properly-written interface so that they could do what was wise with the stack space.
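
For reference, here is a minimal user-space sketch of how Linux's clone call, mentioned above, is typically used: the CLONE_VM flag shares the parent's address space, while the caller hands the child its own stack. This assumes a Linux system with glibc and is only an illustration, not kernel code.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    #define STACK_SIZE (1024 * 1024)

    static int shared_counter = 0;   /* visible to both tasks via CLONE_VM */

    static int child(void *arg)
    {
        shared_counter++;            /* same address space as the parent */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (!stack)
            return 1;

        /* CLONE_VM shares the address space; the child still gets its own
         * stack, passed in by the caller (stacks grow downward). */
        pid_t pid = clone(child, stack + STACK_SIZE,
                          CLONE_VM | SIGCHLD, NULL);
        if (pid == -1) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        printf("counter after child ran: %d\n", shared_counter);
        free(stack);
        return 0;
    }

A real threading library passes more flags (CLONE_FS, CLONE_FILES, CLONE_SIGHAND, and so on), but the shape--shared address space, private stack--is the same.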

While a proprietary vendor can sometimes try to push the design flaw onto the architecture, in the case of Linux we do not have the latitude to do this.

This is another case where managing the development of Linux and making design decisions about Linux dictate the same approach. From a practical point of view, I couldn't manage lots of developers contributing interfaces to the kernel. I would not have been able to keep control over the kernel. But from a design point of view this is also the right thing to do: keep the kernel relatively small, and keep the number of interfaces and other constraints on future development to a minimum.

Of course Linux is not completely clean in this respect. Linux has inherited a number of terrible interfaces from previous implementations of Unix. So in some cases I would have been happier if I did not have to maintain the same interface as Unix. But Linux is about as clean as a system can be without starting completely from scratch. And if you want the benefit of being able to run Unix applications, then you get some of the Unix baggage as a consequence. Being able to run those applications has been vital to Linux's popularity, so the tradeoff is worth it.

GCC

Unix itself is a great success story in terms of portability. The Unix kernel, like many kernels, counts on the existence of C to give it the majority of the portability it needs. Likewise for Linux. For Unix the wide availability of C compilers on many architectures made it possible to port Unix to those architectures.

So Unix underscores how important compilers are. The importance of compilers was one reason I chose to license Linux under the GNU General Public License (GPL). The GPL was the license for the GCC compiler. I think that all the other projects from the GNU group are, for Linux, insignificant in comparison. GCC is the only one that I really care about. A number of them I hate with a passion; the Emacs editor is horrible, for example. While Linux is larger than Emacs, at least Linux has the excuse that it needs to be.

But basically compilers are really a fundamental need.

Now that the Linux kernel follows a generally portable design, at least for reasonably sane architectures, portability should be possible as long as a reasonably good compiler is available. For the upcoming chips I don't worry much about architectural portability when it comes to the kernel anymore; I worry about the compilers. Intel's 64-bit chip, the Merced, is an obvious example, because the Merced is a very different target for a compiler.

So the portability of Linux is very much tied to the fact that GCC is ported to major chip architectures.

Kernel Modules

With the Linux kernel it became clear very quickly that we wanted to have a system that was as modular as possible. The open-source development model really requires this, because otherwise you can't easily have people working in parallel. It's too painful when you have people working on the same part of the kernel and they clash.

Without modularity I would have to check every file that changed, which would be a lot, to make sure nothing was changed that would affect anything else. With modularity, when someone sends me patches to do a new filesystem and I don't necessarily trust the patches per se, I can still trust the fact that if nobody's using this filesystem, it's not going to impact anything else.

For example, Hans Reiser is working on a new filesystem, and he just got it working. I don't think it's worth trying to get it into the 2.2 kernel at this point. But because of the modularity of the kernel I could if I really wanted to, and it wouldn't be too difficult. The key is to keep people from stepping on each other's toes.

With the 2.0 kernel Linux really grew up a lot. This was the point that we added loadable kernel modules. This obviously improved modularity by making an explicit structure for writing modules. Programmers could work on different modules without risk of interference. I could keep control over what was written into the kernel proper. So once again managing people and managing code led to the same design decision. To keep the number of people working on Linux coordinated, we needed something like kernel modules. But from a design point of view, it was also the right thing to do.
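
As a rough sketch of what that explicit module structure looks like from a programmer's point of view, here is a minimal loadable module written in the modern style (current kernels; the details have changed since the 2.0 days, so treat it as illustrative rather than historical):

    /* hello.c -- minimal loadable kernel module; built out of tree with
     * the usual kbuild one-liner:  obj-m += hello.o  */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example of a loadable module");

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;               /* zero means the load succeeded */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

The point is the isolation: loading this with insmod or removing it with rmmod touches nothing else in the running kernel, which is exactly what keeps parallel development manageable.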

The other part of modularity is less obvious, and more problematic. This is the run-time loading part, which everyone agrees is a good thing, but leads to new problems. The first problem is technical, but technical problems are (almost) always the easiest to solve. The more important problem is the non-technical issues. For example, at which point is a module a derived work of Linux, and therefore under the GPL?

When the first module interface was done, there were people that had written drivers for SCO, and they weren't willing to release the source, as required by the GPL, but they were willing to recompile to provide binaries for Linux. At that point, for moral reasons, I decided I couldn't apply the GPL in this kind of situation.

The GPL requires that works "derived from" a work licensed under the GPL also be licensed under the GPL. Unfortunately what counts as a derived work can be a bit vague. As soon as you try to draw the line at derived works, the problem immediately becomes one of where do you draw the line?

We ended up deciding (or maybe I ended up decreeing) that system calls would not be considered to be linking against the kernel. That is, any program running on top of Linux would not be considered covered by the GPL. This decision was made very early on and I even added a special read-me file (see Appendix B) to make sure everyone knew about it. Because of this commercial vendors can write programs for Linux without having to worry about the GPL.

The result for module makers was that you could write a proprietary module if you only used the normal interface for loading. This is still a gray area of the kernel though. These gray areas leave holes for people to take advantage of things, perhaps, and it's partly because the GPL really isn't clear about things like module interface. If anyone were to abuse the guidelines by using the exported symbols in such a way that they are doing it just to circumvent the GPL, then I feel there would be a case for suing that person. But I don't think anyone wants to misuse the kernel; those who have shown commercial interest in the kernel have done so because they are interested in the benefits of the development model.

The power of Linux is as much about the community of cooperation behind it as the code itself. If Linux were hijacked--if someone attempted to make and distribute a proprietary version--the appeal of Linux, which is essentially the open-source development model, would be lost for that proprietary version.

Portability Today

Linux today has achieved many of the design goals that people originally assumed only a microkernel architecture could achieve.

By constructing a general kernel model drawn from elements common across typical architectures, the Linux kernel gets many of the portability benefits that otherwise require an abstraction layer, without paying the performance penalty paid by microkernels.

By allowing for kernel modules, hardware-specific code can often be confined to a module, keeping the core kernel highly portable. Device drivers are a good example of effective use of kernel modules to keep hardware specifics in the modules. This is a good middle ground between putting all the hardware specifics in the core kernel, which makes for a fast but unportable kernel, and putting all the hardware specifics in user space, which results in a system that is either slow, unstable, or both.

But Linux's approach to portability has been good for the development community surrounding Linux as well. The decisions that motivate portability also enable a large group to work simultaneously on parts of Linux without the kernel getting beyond my control. The architecture generalizations on which Linux is based give me a frame of reference to check kernel changes against, and provide enough abstraction that I don't have to keep completely separate forks of the code for separate architectures. So even though a large number of people work on Linux, the core kernel remains something I can keep track of. And the kernel modules provide an obvious way for programmers to work independently on parts of the system that really should be independent.

The Future of Linux

I'm sure we made the right decision with Linux to do as little as possible in the kernel space. At this point the honest truth is I don't envision major updates to the kernel. A successful software project should mature at some point, and then the pace of changes slows down. There aren't a lot of major new innovations in store for the kernel. It's more a question of supporting a wider range of systems than anything else: taking advantage of Linux's portability to bring it to new systems.

There will be new interfaces, but I think those will come partly from supporting the wider range of systems. For example, when you start doing clustering, suddenly you want to tell the scheduler to schedule certain groups of processes as a gang--gang scheduling--and things like that. But at the same time, I don't want everybody just focusing on clustering and supercomputing, because a lot of the future may be with laptops, or cards that you plug in wherever you go, or something similar, so I'd like Linux to go in that direction too.

And then there are the embedded systems where there is no user interface at all, really. You only access the system to upgrade the kernel perhaps, but otherwise they just sit there. So that's another direction for Linux. I don't think Java or Inferno (Lucent's embedded operating system) is going to succeed for embedded devices. They have missed the significance of Moore's Law. At first it sounds good to design an optimized system specific to a particular embedded device, but by the time you have a workable design, Moore's Law will have brought the price of more powerful hardware within range, undermining the value of designing for a specific device. Everything is getting so cheap that you might as well have the same system on your desktop as in your embedded device. It will make everyone's life easier.

Symmetric multiprocessing (SMP) is one area that will be developed. The 2.2 Linux kernel will handle four processors pretty well, and we'll develop it up to eight or sixteen processors. Support for more than four processors is already there, but only barely; if you have more than four processors now, it's like throwing money away. So that will certainly be improved.

But, if people want sixty-four processors they'll have to use a special version of the kernel, because to put that support in the regular kernel would cause performance decreases for the normal users.

Some particular application areas will continue to drive kernel development. Web serving has always been an interesting problem, because it's the one real application that is really kernel-intensive. In a way, web serving has been dangerous for me, because I get so much feedback from the community using Linux as a web-serving platform that I could easily end up optimizing only for web serving. I have to keep in mind that web serving is an important application but not everything.

Of course Linux isn't being used to its full potential even by today's web servers. Apache itself doesn't do the right thing with threads, for example.

This kind of optimization has been slowed down by the limits in network bandwidth. At present, you saturate ten-megabit networks so easily that there's no reason to optimize more. The only way to not saturate ten-megabit networks is to have lots and lots of heavy-duty CGIs. But that's not what the kernel can help with. What the kernel could potentially do is directly answer requests for static pages, and pass the more complicated requests to Apache. Once faster networking is more commonplace, this will be more intriguing. But right now we don't have the critical mass of hardware to test and develop it.

The lesson from all these possible future directions is that I want Linux to be on the cutting edge, and even a bit past the edge, because what's past the edge today is what's on your desktop tomorrow.

But the most exciting developments for Linux will happen in user space, not kernel space. The changes in the kernel will seem small compared to what's happening further out in the system. From this perspective, where the Linux kernel will be isn't as interesting a question as what features will be in Red Hat 17.5 or where Wine (the Windows emulator) is going to be in a few years.

In fifteen years, I expect somebody else to come along and say, hey, I can do everything that Linux can do but I can be lean and mean about it because my system won't have twenty years of baggage holding it back. They'll say Linux was designed for the 386 and the new CPUs are doing the really interesting things differently. Let's drop this old Linux stuff. This is essentially what I did when creating Linux. And in the future, they'll be able to look at our code, and use our interfaces, and provide binary compatibility, and if all that happens I'll be happy.

