Essential System Administration, 3rd Edition by Æleen Frisch


About the Unix Boot Process

Bootstrapping is the full name for the process of bringing a computer system to life and making it ready for use. The name comes from the fact that a computer needs its operating system to be able to do anything, but it must also get the operating system started all on its own, without having any of the services normally provided by the operating system to do so. Hence, it must "pull itself up by its own bootstraps." Booting is short for bootstrapping, and this is the term we'll use.[1]

The basic boot process is very similar for all Unix systems, although the mechanisms used to accomplish it vary quite a bit from system to system. These mechanisms depend on both the physical hardware and the operating system type (System V or BSD). The boot process can be initiated automatically or manually, and it can begin when the computer is powered on (a cold boot) or as a result of a reboot command from a running system (a warm boot or restart).

The normal Unix boot process has these main phases:

  • Basic hardware detection (memory, disk, keyboard, mouse, and the like).

  • Executing the firmware system initialization program (happens automatically).

  • Locating and running the initial boot program (by the firmware boot program), usually from a predetermined location on disk. This program may perform additional hardware checks prior to loading the kernel.

  • Locating and starting the Unix kernel (by the first-stage boot program). The kernel image file to execute may be determined automatically or via input to the boot program.

  • The kernel initializes itself and then performs final, high-level hardware checks, loading device drivers and/or kernel modules as required.

  • The kernel starts the init process, which in turn starts system processes (daemons) and initializes all active subsystems. When everything is ready, the system begins accepting user logins.

We will consider each of these items in subsequent sections of this chapter.

From Power On to Loading the Kernel

As we've noted, the boot process begins when the instructions stored in the computer's permanent, nonvolatile memory (referred to colloquially as the BIOS, ROM, NVRAM, and so on) are executed. This storage location for the initial boot instructions is generically referred to as firmware (in contrast to "software," but reflecting the fact that the instructions constitute a program[2]).

These instructions are executed automatically when the power is turned on or the system is reset, although the exact sequence of events may vary according to the values of stored parameters.[3] The firmware instructions may also begin executing in response to a command entered on the system console (as we'll see in a bit). However they are initiated, these instructions are used to locate and start up the system's boot program, which in turn starts the Unix operating system.

The boot program is stored in a standard location on a bootable device. For a normal boot from disk, for example, the boot program might be located in block 0 of the root disk or, less commonly, in a special partition on the root disk. In the same way, the boot program may be the second file on a bootable tape or in a designated location on a remote file server in the case of a network boot of a diskless workstation.

There is usually more than one bootable device on a system. The firmware program may include logic for selecting the device to boot from, often in the form of a list of potential devices to examine. In the absence of other instructions, the first bootable device that is found is usually the one that is used. Some systems allow for several variations on this theme. For example, the RS/6000 NVRAM contains separate default device search lists for normal and service boots; it also allows the system administrator to add customized search lists for either or both boot types using the bootlist command.

The boot program is responsible for loading the Unix kernel into memory and passing control of the system to it. Some systems have two or more levels of intermediate boot programs between the firmware instructions and the independently-executing Unix kernel. Other systems use different boot programs depending on the type of boot.

Even PC systems follow this same basic procedure. When the power comes on or the system is reset, the BIOS starts the master boot program, located in the first 512 bytes of the system disk. This program then typically loads the boot program located in the first 512 bytes of the active partition on that disk, which then loads the kernel. Sometimes, the master boot program loads the kernel itself. The boot process from other media is similar.
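
The BIOS recognizes that first sector as bootable by a two-byte signature, 0x55 0xAA, stored in its final two bytes. Here is a small sketch of how that signature might be written and checked, using a scratch file (/tmp/mbr.img, an invented path) rather than a real disk:

```shell
# Build a mock 512-byte "MBR" in a scratch file (not a real disk) and give it
# the 0x55 0xAA boot signature at offset 510, as the BIOS expects to find it.
dd if=/dev/zero of=/tmp/mbr.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=/tmp/mbr.img bs=1 seek=510 conv=notrunc 2>/dev/null

# Read back the last two bytes; "55aa" marks the sector as bootable.
sig=$(od -An -tx1 -j 510 -N 2 /tmp/mbr.img | tr -d ' \n')
echo "$sig"     # 55aa
```

Needless to say, never point dd at a real disk device unless you are certain of what you are doing.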

The firmware program is basically just smart enough to figure out if the hardware devices it needs are accessible (e.g., can it find the system disk or the network) and to load and initiate the boot program. This first-stage boot program often performs additional hardware status verification, checking for the presence of expected system memory and major peripheral devices. Some systems do much more elaborate hardware checks, verifying the status of virtually every device and detecting new ones added since the last boot.

The kernel is the part of the Unix operating system that remains running at all times when the system is up. The kernel executable image is conventionally named unix (System V-based systems), vmunix (BSD-based systems), or something similar, and it is traditionally stored in or linked to the root directory. Here are typical kernel names and directory locations for the various operating systems we are considering:


AIX        /unix (actually a link to a file in /usr/lib/boot)
FreeBSD    /kernel
HP-UX      /stand/vmunix
Linux      /boot/vmlinuz
Solaris    /kernel/genunix
Tru64      /vmunix
Once control passes to the kernel, it prepares itself to run the system by initializing its internal tables, creating the in-memory data structures at sizes appropriate to current system resources and kernel parameter values. The kernel may also complete the hardware diagnostics that are part of the boot process, as well as installing loadable drivers for the various hardware devices present on the system.
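
On a Linux system (the /proc path used here is Linux-specific, and serves only as an illustration), you can see the amount of physical memory the kernel detected and sized its tables against:

```shell
# Show the physical memory the kernel detected at boot (Linux /proc assumed).
grep MemTotal /proc/meminfo
```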

When these preparatory activities have been completed, the kernel creates another process that will run the init program as the process with PID 1.[4]

Booting to Multiuser Mode

As we've seen, init is the ancestor of all subsequent Unix processes and the direct parent of user login shells. During the remainder of the boot process, init does the work needed to prepare the system for users.

One of init's first activities is to verify the integrity of the local filesystems, beginning with the root filesystem and other essential filesystems, such as /usr. Since the kernel and the init program itself reside in the root filesystem (or sometimes the /usr filesystem in the case of init), you might wonder how either one can be running before the corresponding filesystem has been checked. There are several ways around this chicken-and-egg problem. Sometimes, there is a copy of the kernel in the boot partition of the root disk as well as in the root filesystem. Alternatively, if the executable from the root filesystem successfully begins executing, it is probably safe to assume that the file is OK.

In the case of init, there are several possibilities. Under System V, the root filesystem is mounted read-only until after it has been checked, and init remounts it read-write. Alternatively, in the traditional BSD approach, the kernel handles checking and mounting the root filesystem itself.
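
On a live Linux system (the /proc/mounts path is a Linux-specific assumption), you can observe the result of this two-step approach: the root filesystem's option field begins with ro early in the boot and with rw once init has remounted it:

```shell
# Print the mount options for the root filesystem; once init has remounted
# it read-write, the options begin with "rw" (Linux /proc/mounts assumed).
awk '$2 == "/" { print $4; exit }' /proc/mounts
```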

Still another method, used when booting from tape or CD-ROM (for example, during an operating system installation or upgrade), and on some systems for normal boots, involves the use of an in-memory (RAM) filesystem containing just the limited set of commands needed to access the system and its disks, including a version of init. Once control passes from the RAM filesystem to the disk-based filesystem, the init process exits and restarts, this time from the "real" executable on disk, a result that somewhat resembles a magician's sleight-of-hand trick.

Other activities performed by init include the following:

  • Checking the integrity of the filesystems, traditionally using the fsck utility

  • Mounting local disks

  • Designating and initializing paging areas

  • Performing filesystem cleanup activities: checking disk quotas, preserving editor recovery files, and deleting temporary files in /tmp and elsewhere

  • Starting system server processes (daemons) for subsystems like printing, electronic mail, accounting, error logging, and cron

  • Starting networking daemons and mounting remote disks

  • Enabling user logins, usually by starting getty processes and/or the graphical login interface on the system console (e.g., xdm), and removing the file /etc/nologin, if present

These activities are specified and carried out by means of the system initialization scripts, shell programs traditionally stored in /etc or /sbin or their subdirectories and executed by init at boot time. These files are organized very differently under System V and BSD, but they accomplish the same purposes. They are described in detail later in this chapter.
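
Two of the activities above, clearing /tmp and removing /etc/nologin, can be sketched as a miniature script fragment. To make it safe to experiment with, this version operates on a scratch directory (/tmp/rc-demo, an invented path) rather than the real filesystem:

```shell
# Simulate two boot-time cleanup steps in a sandbox instead of the real system.
SANDBOX=/tmp/rc-demo
mkdir -p $SANDBOX/tmp $SANDBOX/etc
touch $SANDBOX/tmp/ed.hup $SANDBOX/tmp/scratch $SANDBOX/etc/nologin

rm -f $SANDBOX/tmp/*            # "Clearing /tmp."
rm -f $SANDBOX/etc/nologin      # "Enabling user logins."

ls $SANDBOX/tmp | wc -l         # 0: the temporary files are gone
```

A real initialization script does the same thing against /tmp and /etc themselves, running as root.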

Once these activities are complete, users may log in to the system. At this point, the boot process is complete, and the system is said to be in multiuser mode.

Booting to Single-User Mode

Once init takes control of the booting process, it can place the system in single-user mode instead of completing all the initialization tasks required for multiuser mode. Single-user mode is a system state designed for administrative and maintenance activities, which require complete and unshared control of the system. This system state is selected by a special boot command parameter or option; on some systems, the administrator may select it by pressing a designated key at a specific point in the boot process.

To initiate single-user mode, init forks to create a new process, which then executes the default shell (usually /bin/sh) as user root. The prompt in single-user mode is the number sign (#), the same as for the superuser account, reflecting the root privileges inherent in it. Single-user mode is occasionally called maintenance mode.

Another situation in which the system might enter single-user mode automatically occurs if there are any problems in the boot process that the system cannot handle on its own. Examples of such circumstances include filesystem problems that fsck cannot fix in its default mode and errors in one of the system initialization files. The system administrator must then take whatever steps are necessary to resolve the problem. Once this is done, booting may continue to multiuser mode by entering CTRL-D, terminating the single-user mode shell:

# ^D                             Continue boot process to multiuser mode.
Tue Jul 14 14:47:14 EDT 1987     Boot messages from the initialization files.
                . . .

Alternatively, rather than picking up the boot process where it left off, the system may be rebooted from the beginning by entering a command such as reboot (AIX and FreeBSD) or telinit 6. HP-UX supports both commands.

Single-user mode represents a minimal system startup. Although you have root access to the system, many of the normal system services are not available at all or are not set up. On a mundane level, the search path and terminal type are often not set correctly. Less trivially, no daemons are running, so many Unix facilities are shut down (e.g., printing). In general, the system is not connected to the network. The available filesystems may be mounted read-only, so modifying files is initially disabled (we'll see how to overcome this in a bit). Finally, since only some of the filesystems are mounted, only commands that physically reside on these filesystems are available initially.

This limitation is especially noticeable if /usr was created on a separate disk partition from the root filesystem and is not mounted automatically under single-user mode. In this case, even commands stored in the root filesystem (in /bin, for example) will not work if they use shared libraries stored under /usr. Thus, if there is some problem with the /usr filesystem, you will have to make do with the tools that are available. For such situations, however rare and unlikely, you should know how to use the ed editor in case vi is not available, and you should find out which tools remain available to you before you have to use them.
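
Since ed may be the only editor standing between you and a repair in that state, it is worth rehearsing a minimal session in advance. This example scripts one via a heredoc, using a throwaway file; interactively, you would type the same commands at ed's prompt:

```shell
# Rehearse a minimal ed session: replace a word in a file and save it.
# -s suppresses ed's byte-count chatter; the heredoc supplies the commands.
printf 'search path is wrong\n' > /tmp/ed-demo.txt
ed -s /tmp/ed-demo.txt <<'EOF'
s/wrong/fixed/
w
q
EOF
cat /tmp/ed-demo.txt     # search path is fixed
```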

On a few systems, vendors have exacerbated this problem by making /bin a symbolic link to /usr/bin, thereby rendering the system virtually unusable if there is a problem with a separate /usr filesystem.
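
You can check in advance whether your own system is arranged this way:

```shell
# Determine whether /bin is a real directory or a symbolic link into /usr.
if [ -L /bin ]; then
    echo "/bin is a symlink to $(readlink /bin)"
else
    echo "/bin is a real directory"
fi
```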

Password protection for single-user mode

On older Unix systems, single-user mode does not require that a password be entered to gain access. Obviously, this can be a significant security problem. If someone gained physical access to the system console, he could crash it (by hitting the reset button, for example) and then boot to single-user mode via the console and be automatically logged in as root without having to know the root password.

Modern systems provide various safeguards. Most systems now require that the root password be entered before granting system access in single-user mode. On some System V-based systems, this is accomplished via the sulogin program that is invoked automatically by init once the system reaches single-user mode. On these systems, if the correct root password is not entered within some specified time period, the system is automatically rebooted.[5]

Here is a summary of single-user mode password protection by operating system:




FreeBSD
Required if the console is listed in /etc/ttys with the insecure option:

console none unknown off insecure

Linux
Required if /etc/inittab (discussed later in this chapter) contains a sulogin entry for single-user mode. Current Linux distributions include the sulogin utility but do not always activate it (this is true of Red Hat Linux as of this writing), leaving single-user mode unprotected by default.

Tru64
Required if the SECURE_CONSOLE entry in /etc/rc.config is set to ON.

Solaris
Required if the PASSREQ setting in /etc/default/sulogin is set to YES.

Firmware passwords

Some systems also allow you to assign a separate password to the firmware initialization program, preventing unauthorized persons from starting a manual boot. For example, on SPARC systems, the eeprom command may be used to require a password and set its value (via the security-mode and security-password parameters, respectively).

On some systems (e.g., Compaq Alphas), you must use commands within the firmware program itself to perform this operation (set password and set secure in the case of the Alpha SRM). Similarly, on PC-based systems, the BIOS monitor program must generally be used to set such a password. It is accessed by pressing a designated key (often F1 or F8) shortly after the system powers on or is reset.

On Linux systems, commonly used boot-loader programs have configuration settings that accomplish the same purpose. Here are some configuration file entries for lilo and grub:

password = something           /etc/lilo.conf
password -md5 xxxxxxxxxxxx     /boot/grub/grub.conf

The grub package provides the grub-md5-crypt utility for generating the MD5 encoding for a password. Linux boot loaders are discussed in detail in Chapter 16.
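
If grub-md5-crypt is not at hand, openssl can generate the same MD5-based crypt format ($1$salt$hash). The fixed salt below serves only to make the example reproducible; omit -salt to have a random one chosen for a real password:

```shell
# Produce an MD5-crypt password hash of the sort grub's "password -md5"
# entry expects. The salt "ab" is fixed here only for reproducibility.
openssl passwd -1 -salt ab secret
```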

Starting a Manual Boot

Virtually all modern computers can be configured to boot automatically when power comes on or after a crash. When autobooting is not enabled, booting is initiated by entering a simple command in response to a prompt: sometimes just a carriage return, sometimes a b, sometimes the word boot. When a command is required, you often can tell the system to boot to single-user mode by adding a -s or similar option to the boot command, as in these examples from a Solaris and a Linux system:

ok boot -s             Solaris
boot: linux single     Linux

In the remainder of this section, we will look briefly at the low-level boot commands for our supported operating systems. We will look at some more complex manual-boot examples in Chapter 16 and also consider boot menu configuration in detail.


AIX

AIX provides little in the way of administrator intervention options during the boot process.[6] However, the administrator does have the ability to preconfigure the boot process in two ways.

The first is to use the bootlist command to specify the list and ordering of boot devices for either normal boot mode or service mode. For example, this command makes the CD-ROM drive the first boot device for the normal boot mode:

# bootlist -m normal cd1 hdisk0 hdisk1 rmt0

If there is no bootable CD in the drive, the system next checks the first two hard disks and finally the first tape drive.

The second configuration option is to use the diag utility to specify various boot process options, including whether or not the system should boot automatically in various circumstances. These items are accessed via the Task Selection submenu.


FreeBSD

FreeBSD (on Intel systems) presents a minimal boot menu:

F1  FreeBSD
F2  FreeBSD
F5  Drive 1     Appears if there is a second disk with a bootable partition.

This menu is produced by the FreeBSD boot loader (installed automatically if selected during the operating system installation, or installed manually later with the boot0cfg command). It simply identifies the partitions on the disk and lets you select the one from which to boot. Be aware, however, that it does not check whether each partition has a valid operating system on it (see Chapter 16 for ways of customizing what is listed).

The final option in the boot menu allows you to specify a different disk (the second IDE hard drive in this example). If you choose that option, you get a second, similar menu allowing you to select a partition on that disk:

F1  FreeBSD
F5  Drive 0

In this case, the second disk has only one partition.

Shortly after selecting a boot option, the following message appears:[7]

Hit [Enter] to boot immediately, or any other key for the command prompt

If you strike a key, a command prompt appears, from which you can manually boot, as in these examples:

disk1s1a:> boot -s             Boot to single-user mode

disk1s1a:> unload              Boot an alternate kernel
disk1s1a:> load kernel-new
disk1s1a:> boot

If you do not specify a full pathname, the alternate kernel must be located in the root directory on the disk partition corresponding to your boot menu selection.

FreeBSD can also be booted by the grub open source boot loader, which is discussed—along with a few other boot loaders—in the Linux section below.


HP-UX

HP-UX boot commands vary by hardware type. These examples are from an HP 9000/800 system. When power comes on initially, the greater-than-sign prompt (>)[8] is given when any key is pressed before the autoboot timeout period expires. You can enter a variety of commands here. For our present discussion, the most useful are search (to search for bootable devices) and co (to enter the configuration menu). The latter command takes you to a menu where you can specify the standard and alternate boot paths and options. When you have finished with configuration tasks, return to the main menu (ma) and give the reset command.

Alternatively, you can boot immediately by using the bo command, specifying one of the devices that search found by its two-character path number (given in the first column of the output). For example, the following command might be used to boot from CD-ROM:

> bo P1

The next boot phase involves loading and running the initial system loader (ISL). When it starts, it asks whether you want to enter commands with this prompt:

Interact with ISL? y

If you answer yes, you will receive the ISL> prompt, at which you can enter various commands to modify the usual boot process, as in these examples:

ISL> hpux -is                   Boot to single-user mode
ISL> hpux /stand/vmunix-new     Boot an alternate kernel
ISL> hpux ll /stand             List available kernels


Linux

When using lilo, the traditional Linux boot loader, the kernels available for booting are predefined. When you get lilo's prompt, you can press the TAB key to list the available choices. If you want to boot one of them into single-user mode, simply add the option single (or -s) to its name. For example:

boot: linux single

You can specify kernel parameters generally by appending them to the boot selection command.

If you are using the newer grub boot loader, you can enter boot commands manually instead of selecting one of the predefined menu choices, by pressing the c key. Here is an example sequence of commands:

grub> root (hd0,0)                              Location of /boot
grub> kernel /vmlinuz-new ro root=/dev/hda2
grub> initrd /initrd.img
grub> boot

The root option on the kernel command locates the partition where the root directory is located (we are using separate / and /boot partitions here).

If you wanted to boot to single-user mode, you would add single to the end of the kernel command.

In a similar way, you can boot one of the existing grub menu selections in single-user mode by doing the following:

  1. Selecting it from the menu

  2. Pressing the e key to edit it

  3. Selecting and editing the kernel command, placing single at the end of the line

  4. Moving the cursor to the first command and then pressing b for boot

The grub facility is discussed in detail in Chapter 16.

On non-Intel hardware, the boot commands are very different. For example, some Alpha Linux systems use a boot loader named aboot.[9] The initial power-on prompt is a greater-than sign (>). Enter the b command to reach aboot's prompt.

Here are the commands to boot a Compaq Alpha Linux system preconfigured with appropriate boot parameters:

aboot> p 2     Select the second partition to boot from.
aboot> 0       Boot predefined configuration 0.

The following command can be used to boot Linux from the second hard disk partition:

aboot> 2/vmlinux.gz root=/dev/hda2

You could add single to the end of this line to boot to single-user mode.

Other Alpha-based systems use quite different boot mechanisms. Consult the manufacturer's documentation for your hardware to determine the proper commands for your system.


Tru64

When power is applied, a Tru64 system generally displays a console prompt that is a triple greater-than sign (>>>). You can enter commands to control the boot process, as in these examples:

>>> boot -fl s               Boot to single-user mode

>>> boot dkb0                Boot an alternate device or kernel
>>> boot -file vmunix-new
The -fl option specifies boot flags; here, we select single-user mode. The second set of commands illustrate the method for booting from an alternate device or kernel (the two commands may be combined).

Note that there are several other ways to perform these same tasks, but these methods seem the most intuitive.


Solaris

At power-on, Solaris systems may display the ok console prompt. If not, it is because the system is set to boot automatically, but you can generate one with the Stop-a or L1-a key sequence. From there, the boot command may be used to initiate a boot, as in this example:

ok boot -s        Boot to single-user mode
ok boot cdrom     Boot from installation media

The second command boots from an alternate device, in this case the CD-ROM drive holding the installation media. You can determine the available devices and how to refer to them by running the devalias command at the ok prompt.

Booting from alternate media

Booting from alternate media, such as CD-ROM or tape, is no different from booting any other non-default kernel. On systems where this is possible, you can specify the device and directory path to the kernel to select it. Otherwise, you must change the device boot order to place the desired alternate device before the standard disk location in the list.

Boot Activities in Detail

We now turn to a detailed consideration of the boot process from the point of kernel initialization onward.

Boot messages

The following example illustrates a generic Unix startup sequence. The messages included here are a composite of those from several systems, although the output is labeled as for a mythical computer named the Urizen, a late-1990s system running a vaguely BSD-style operating system. While this message sequence does not correspond exactly to any existing system, it does illustrate the usual elements of booting on Unix systems, under both System V and BSD.

We've annotated the boot process output throughout:

> b                                             Initiate boot to multiuser mode.
Urizen Ur-Unix boot in progress...
testing memory                                  Output from boot program.
checking devices                                Preliminary hardware tests.
loading vmunix                                  Read in the kernel executable.

Urizen Ur-Unix Version 17.4.2: Fri Apr 24 20:32:54 GMT 1998
Copyright (c) 1998 Blakewill Computer, Ltd.     Copyright for OS. 
Copyright (c) 1986 Sun Microsystems, Inc.       Subsystem copyrights.
Copyright (c) 1989-1998 Open Software Foundation, Inc. 
Copyright (c) 1991 Massachusetts Institute of Technology 
All rights reserved.                            Unix kernel is running now.
physical memory = 2.00 GB                       Amount of real memory. 

Searching SCSI bus for devices:                 Peripherals are checked next. 
rdisk0 bus 0 target 0 lun 0 
rdisk1 bus 0 target 1 lun 0 
rdisk2 bus 0 target 2 lun 0 
rmt0 bus 0 target 4 lun 0 
cdrom0 bus 0 target 6 lun 0
Ethernet address=8:0:20:7:58:jk                 Ethernet address of network adapter.

Root on /dev/disk0a                             Indicates disk partitions used as /, . . . 
Activating all paging spaces                     . . . as paging spaces and . . . 
swapon: swap device /dev/disk0b activated. 
Using /dev/disk0b as dump device                 . . . as the crash dump location.
                                                Single-user mode could be entered here, . . . 
INIT: New run level: 3                           . . . but this system is booting to run level 3. 
                                                Messages produced by startup scripts follow.
The system is coming up. Please wait.           Means "Be patient."
Tue Jul 14 14:45:28 EDT 1998    
Checking TCB databases                          Verify integrity of the security databases. 
Checking file systems:                          Check and mount remaining local filesystems. 
fsstat: /dev/rdisk1c (/home) umounted cleanly;  Skipping check. 
fsstat: /dev/rdisk2c (/chem) dirty              This filesystem needs checking.
Running fsck: 
/dev/rdisk2c: 1764 files, 290620 used, 110315 free 
Mounting local file systems.

Checking disk quotas: done.                     Daemons for major subsystems start first, . . . 
cron subsystem started, pid = 3387 
System message logger started. 
Accounting services started.
                                                 . . . followed by network servers, . . . 
Network daemons started: portmap inetd routed named rhwod timed. 
NFS started: biod(4) nfsd(6) rpc.mountd rpc.statd rpc.lockd. 
Mounting remote file systems. 
Print subsystem started.                         . . . and network-dependent local daemons.
sendmail started.

Preserving editor files.                        Save interrupted editor sessions. 
Clearing /tmp.                                  Remove files from /tmp.
Enabling user logins.                           Remove the /etc/nologin file. 
Tue Jul 14 14:47:45 EDT 1998                    Display the date again.
Urizen Ur-Unix 9.1 on hamlet                    The hostname is hamlet.
login:                                          Unix is running in multiuser mode.

There are some things that are deliberately anachronistic about this example boot sequence—running fsck and clearing /tmp, for instance—but we've retained them for nostalgia's sake. We'll consider the scripts and commands that make all of these actions happen in the course of this section.

Saved boot log files

Most Unix versions automatically save some or all of the boot messages from the kernel initialization phase to a log file. The system message facility, controlled by the syslogd daemon, and the related System V dmesg utility are often used to capture messages from the kernel during a boot (syslog is discussed in detail in Chapter 3). In the latter case, you must execute the dmesg command to view the messages from the most recent boot. On FreeBSD systems, you can also view them in the /var/run/dmesg.boot file.

It is common for syslogd to maintain only a single message log file, so boot messages may be interspersed with system messages of other sorts. The conventional message file is /var/log/messages.

The syslog facility under HP-UX may also be configured to produce a messages file, but it is not always set up at installation to do so automatically. HP-UX also provides the /etc/rc.log file, which stores boot output from the multiuser phase.

Under AIX, /var/adm/ras/bootlog is maintained by the alog facility. Like the kernel buffers that are its source, this file is a circular log that is maintained at a predefined fixed size; new information is written at the beginning of the file once the file is full, replacing the older data. You can use a command like this one to view the contents of this file:

# alog -f /var/adm/ras/bootlog -o

General considerations

In general, init controls the multiuser mode boot process. init runs whatever initialization scripts it has been designed to run, and the structure of the init program determines the fundamental design of the set of initialization scripts for that Unix version: what the scripts are named, where they are located in the filesystem, the sequence in which they are run, the constraints placed upon the scripts' programmers, the assumptions under which they operate, and so on. Ultimately, it is the differences in the System V and BSD versions of init that determine the differences in the boot process for the two types of systems.

Although we'll consider those differences in detail later, in this section, we'll begin by looking at the activities that are part of every normal Unix boot process, regardless of the type of system. In the process, we'll examine sections of initialization scripts from a variety of different computer systems.


Preliminaries

System initialization scripts usually perform a few preliminary actions before getting down to the work of booting the system. These include defining any functions and local variables that may be used in the script and setting up the script's execution environment, often beginning by defining HOME and PATH environment variables:

HOME=/; export HOME 
PATH=/bin:/usr/bin:/sbin:/usr/sbin; export PATH

The path is deliberately set to be as short as possible; generally, only system directories appear in it to ensure that only authorized, unmodified versions of commands get executed (we'll consider this issue in more detail in Section 7.4).

Alternatively, other scripts are careful always to use full pathnames for every command that they use. However, since this may make commands excessively long and scripts correspondingly harder to read, some scripts take a third approach and define, at the beginning of the script, a local variable for each command that will be needed:

rm=/usr/bin/rm 
mount=/sbin/mount 
fsck=/sbin/fsck

The commands would then be invoked in this way:

${rm} -f /tmp/*

This practice ensures that the proper version of the command is run while still leaving the individual command lines very readable.

Whenever full pathnames are not used, we will assume that the appropriate PATH has previously been set up in the script excerpts we'll consider.

Preparing filesystems

Preparing the filesystem for use is the first and most important aspect of the multiuser boot process. It naturally separates into two phases: mounting the root filesystem and other vital system filesystems (such as /usr), and handling the remainder of the local filesystems.

Filesystem checking is one of the key parts of preparing the filesystem. This task is the responsibility of the fsck [10] utility.


Most of the following discussion applies only to traditional, non-journaled Unix filesystems. Modern filesystem types use journaling techniques adapted from transaction processing to record and, if necessary, replay filesystem changes. In this way, they avoid the need for a traditional fsck command and its agonizingly slow verification and repair procedures (although a command of this name is usually still provided).

For traditional Unix filesystem types (such as ufs under FreeBSD and ext2 under Linux), fsck's job is to ensure that the data structures in the disk partition's superblock and inode tables are consistent with the filesystem's directory entries and actual disk block consumption. It is designed to detect and correct inconsistencies between them, such as disk blocks marked as in use that are not claimed by any file, and files existing on disk that are not contained in any directory. fsck deals with filesystem structure, but not with the internal structure or contents of any particular file. In this way, it ensures filesystem-level integrity, not data-level integrity.

In most cases, the inconsistencies that arise are minor and completely benign, and fsck can repair them automatically at boot time. Occasionally, however, fsck finds more serious problems, requiring administrator intervention.

System V and BSD have very different philosophies of filesystem verification. Under traditional BSD, the normal practice is to check all filesystems on every boot. In contrast, System V-style filesystems are not checked if they were unmounted normally when the system last went down. The BSD approach is more conservative, taking into account the fact that filesystem inconsistencies do on occasion crop up at times other than system crashes. On the other hand, the System V approach results in much faster boots.[11]

If the system is rebooting after a crash, it is quite normal to see many messages indicating minor filesystem discrepancies that have been repaired. By default, fsck fixes problems only if the repair cannot possibly result in data loss. If fsck discovers a more serious problem with the filesystem, it prints a message describing the problem and leaves the system in single-user mode; you must then run fsck manually to repair the damaged filesystem. For example (from a BSD-style system):

RUN fsck MANUALLY                        Message from fsck.
Automatic reboot failed . . . help!      Message from /etc/rc script.
Enter root password:                     Single-user mode.
# /sbin/fsck -p /dev/disk2e              Run fsck manually with -p.
...                                      Many messages from fsck.
BAD/DUP FILE=2216 OWNER=190 M=120777     Mode => file is a symbolic link,
S=16 MTIME=Sep 16 14:27 1997             so deleting it is safe.
# ^D                                     Resume booting.
Mounting local file systems.             Normal boot messages.

In this example, fsck found a file whose inode address list contained duplicate entries or addresses of known bad spots on the disk. In this case, the troublesome file was a symbolic link (indicated by the mode), so it could be safely removed (although the user who owned it will need to be informed). This example is intended merely to introduce you to fsck; the mechanics of running fsck are described in detail in Section 10.2.

Checking and mounting the root filesystem

The root filesystem is the first filesystem that the boot process accesses as it prepares the system for use. On a System V system, commands like these might be used to check the root filesystem, if necessary:

/sbin/fsstat ${rootfs} >/dev/null 2>&1 
if [ $? -eq 1 ] ; then 
    echo "Running fsck on the root file system." 
    /sbin/fsck -p ${rootfs} 
fi

The shell variable rootfs has been defined previously as the appropriate special file for the root filesystem. The fsstat command determines whether a filesystem is clean (under HP-UX, fsclean does the same job). If it returns an exit value of 1, the filesystem needs checking, and fsck is run with its -p option, which says to correct automatically all benign errors that are found.

On many systems, the root filesystem is mounted read-only until after it is known to be in a viable state as a result of running fsstat and fsck as needed. At that point, it is remounted read-write by the following command:

# mount -o rw,remount /

On FreeBSD systems, the corresponding command is:

# mount -u -o rw /

Preparing other local filesystems

The traditional BSD approach to checking the filesystems is to check all of them via a single invocation of fsck (although the separate filesystems are not all checked simultaneously), and some System V systems have adopted this method as well. The initialization scripts on such systems include a fairly lengthy case statement, which handles the various possible outcomes of the fsck command:

/sbin/fsck -p 
retval=$?
case $retval in                                  Check fsck exit code.
0)                                               No remaining problems,
   ;;                                               so just continue the boot process. 
4)                                               fsck fixed problems on root disk. 
   echo "Root file system was modified." 
   echo "Rebooting system automatically." 
   exec /sbin/reboot -n 
   ;;
8)                                               fsck failed to fix filesystem. 
   echo "fsck -p could not fix file system."
   echo "Run fsck manually."    
   ${single}                                     Single-user mode.
   ;;
12)                                              fsck exited before finishing.
   echo "fsck interrupted ... run manually." 
   ${single}
   ;;
*)                                               All other fsck errors.
   echo "Unknown error in fsck." 
   ${single}
   ;;
esac

This script executes the command fsck -p to check the filesystem's consistency. The -p option stands for preen and says that any needed repairs that will cause no loss of data should be made automatically. Since virtually all repairs are of this type, this is a very efficient way to invoke fsck. However, if a more serious error is found, preen mode does not attempt the repair; fsck instead exits with a status code describing the problem, which the surrounding script must handle. Note that the options given to fsck may be different on your system.

Next, the case statement checks the status code returned by fsck (stored in the local variable retval) and performs the appropriate action based on its value.

If fsck cannot fix a disk on its own, you need to run it manually when it dumps you into single-user mode. Fortunately, this is rare. That's not just talk, either. I've had to run fsck manually only a handful of times over the many hundreds of times I've rebooted Unix systems, and those times occurred almost exclusively after crashes due to electrical storms or other power loss problems. Generally, the most vulnerable disks are those with continuous disk activity. For such systems, a UPS device is often a good protection strategy.

Once all the local filesystems have been checked (or it has been determined that they don't need to be), they can be mounted with the mount command, as in this example from a BSD system:

mount -a -t ufs

mount's -a option says to mount all filesystems listed in the system's filesystem configuration file, and the -t option restricts the command to filesystems of the type specified as its argument. In the preceding example, all ufs filesystems will be mounted. Some versions of mount also support a nonfs type, which specifies all filesystems other than those accessed over the network with NFS.

Saving a crash dump

When a system crashes due to an operating system-level problem, most Unix versions automatically write the current contents of kernel memory—known as a crash dump —to a designated location, usually the primary swap partition. AIX lets you specify the dump location with the sysdumpdev command, and FreeBSD sets it via the dumpdev parameter in /etc/rc.conf. Basically, a crash dump is just a core dump of the Unix kernel, and like any core dump, it can be analyzed to figure out what caused the kernel program—and therefore the system—to crash.

Since the swap partition will be overwritten when the system is booted and paging is restarted, some provision needs to be made to save its contents after a crash. The savecore command copies the contents of the crash dump location to a file within the filesystem. savecore exits without doing anything if there is no crash dump present. The HP-UX version of this command is called savecrash.

savecore is usually executed automatically as part of the boot process, prior to the point at which paging is initiated:

savecore /var/adm/crash

savecore's argument is the directory location to which the crash dump should be written; /var/adm/crash is a traditional location. On Solaris systems, you can specify the default directory location with the dumpadm command.

The crash dumps themselves are conventionally a pair of files named something like vmcore.n (the memory dump) and kernel.n, unix.n, or vmunix.n (the running kernel), where the extension is an integer that is increased each time a crash dump is made (so that multiple files may exist in the directory simultaneously). Sometimes, additional files holding other system status information are created as well.

HP-UX creates a separate subdirectory of /var/adm/crash for each successive crash dump, using names of the form crash.n. Each subdirectory holds the corresponding crash data and several related files.

The savecore command is often disabled in the delivered versions of system initialization files since crash dumps are not needed by most sites. You should check the files on your system if you decide to use savecore to save crash dumps.

Starting paging

Once the filesystem is ready and any crash dump has been saved, paging can be started. This normally happens before the major subsystems are initialized since they might need to page, but the ordering of the remaining multiuser mode boot activities varies tremendously.

Paging is started by the swapon -a command, which activates all the paging areas listed in the filesystem configuration file.
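For example, a swap partition appears in the filesystem configuration file alongside the regular filesystems. A representative /etc/fstab entry might look like this (the device name is illustrative, and the exact fields vary from one Unix version to another):

```
/dev/sda2    none    swap    sw    0 0
```

swapon -a then activates every entry of type swap.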

Security-related activities

Another important aspect of preparing the system for users is ensuring that available security measures are in place and operational. Systems offering enhanced security levels over the defaults provided by vanilla Unix generally include utilities to verify the integrity of system files and executables themselves. Like their filesystem-checking counterpart fsck, these utilities are run at boot time and must complete successfully before users are allowed access to the system.

In a related activity, initialization scripts on many systems often try to ensure that there is a valid password file (containing the system's user accounts). These Unix versions provide the vipw utility for editing the password file. vipw makes sure that only one person edits the password file at a time. It works by editing a copy of the password file; vipw installs it as the real file after editing is finished. If the system crashes while someone is running vipw, however, there is a slight possibility that the system will be left with an empty or nonexistent password file, which significantly compromises system security by allowing anyone access without a password.

Commands such as these are designed to detect and correct such situations:

if [ -s /etc/ptmp ]; then                             Someone was editing /etc/passwd.
   if [ -s /etc/passwd ]; then                        If passwd is non-empty, use it . . . 
      ls -l /etc/passwd /etc/ptmp >/dev/console 
      rm -f /etc/ptmp                                  . . . and remove the temporary file. 
   else                                               Otherwise, install the temporary file. 
      echo 'passwd file recovered from /etc/ptmp' 
      mv /etc/ptmp /etc/passwd 
   fi
elif [ -r /etc/ptmp ]; then                           Delete any empty temporary file. 
    echo 'removing passwd lock file' 
    rm -f /etc/ptmp 
fi

The password temporary editing file, /etc/ptmp in this example, also functions as a lock file. If it exists and is not empty (-s checks for a file of greater than zero length), someone was editing /etc/passwd when the system crashed or was shut down. If /etc/passwd exists and is not empty, the script assumes that it hasn't been damaged, prints a long directory listing of both files on the system console, and removes the password lock file. If /etc/passwd is empty or does not exist, the script restores /etc/ptmp as a backup version of /etc/passwd and prints the message "passwd file recovered from /etc/ptmp" on the console.

The elif clause handles the case where /etc/ptmp exists but is empty. The script deletes it (because its presence would otherwise prevent you from using vipw) and prints the message "removing passwd lock file" on the console. Note that if no /etc/ptmp exists at all, this entire block of commands is skipped.
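You can watch this recovery logic work without going near the real files by pointing it at scratch copies. In this sketch, the paths merely stand in for /etc/passwd and /etc/ptmp:

```shell
# Exercise the password-file recovery logic against scratch files.
# The paths stand in for /etc/passwd and /etc/ptmp; do not point this
# at the real files.
dir=$(mktemp -d)
passwd=$dir/passwd
ptmp=$dir/ptmp

# Simulate a crash during vipw: non-empty temporary file, empty passwd.
echo "root:x:0:0::/:/bin/sh" > $ptmp
: > $passwd

if [ -s $ptmp ]; then
   if [ -s $passwd ]; then
      rm -f $ptmp                      # passwd intact; just drop the lock file
   else
      echo "passwd file recovered from ptmp"
      mv $ptmp $passwd                 # install the edited copy
   fi
elif [ -r $ptmp ]; then
   echo "removing passwd lock file"
   rm -f $ptmp
fi
```

Since the simulated passwd file is empty, the else branch fires and the edited copy is installed.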

Checking disk quotas

Most Unix systems offer an optional disk quota facility, which allows the available disk space to be apportioned among users as desired. It, too, depends on database files that need to be checked and possibly updated at boot time, via commands like these:

echo "Checking quotas: \c" 
quotacheck -a 
echo "done." 
quotaon -a

The script uses the quotacheck utility to check the internal structure of all disk quota databases, and then it enables disk quotas with quotaon. The script displays the string "Checking quotas:" on the console when the quotacheck utility begins (suppressing the customary carriage return at the end of the displayed line) and completes the line with "done." after it has finished (although many current systems use fancier, more aesthetically pleasing status messages). Disk quotas are discussed in Section 15.6.
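The "\c" escape is honored by the System V version of echo but not by all shells; scripts that must run elsewhere often get the same effect with printf, which appends no newline of its own. A sketch (the quota commands themselves are elided, since they require root privileges):

```shell
# Portable equivalent of the System V  echo "... \c"  idiom: printf does
# not append a newline, so the status line is completed in two pieces.
printf "Checking quotas: "
# quotacheck -a and quotaon -a would run here (root privileges required).
printf "done.\n"
```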

Starting servers and initializing local subsystems

Once all the prerequisite system devices are ready, important subsystems such as electronic mail, printing, and accounting can be started. Most of them rely on daemons (server processes). These processes are started automatically by one of the boot scripts. On most systems, purely local subsystems that do not depend on the network are usually started before networking is initialized, and subsystems that do need network facilities are started afterwards.

For example, a script like this one (from a Solaris system) could be used to initialize the cron subsystem, a facility to execute commands according to a preset schedule (cron is discussed in Chapter 3):

if [ -p /etc/cron.d/FIFO ]; then
  if /usr/bin/pgrep -x -u 0 -P 1 cron >/dev/null 2>&1; then
     echo "$0: cron is already running"
     exit 0
  fi
elif [ -x /usr/sbin/cron ]; then
   /usr/bin/rm -f /etc/cron.d/FIFO
   /usr/sbin/cron &
fi

The script first checks for the existence of the cron lock file (a named pipe called FIFO, whose location varies). If it is present, the script next checks for a running cron process (via the pgrep command). If the latter is found, the script exits because cron is already running. Otherwise, the script checks for the existence of the cron executable file. If it finds the file, the script removes the cron lock file and then starts the cron server.

The precautionary check to see whether cron is already running isn't made on all systems. Lots of system initialization files simply (foolishly) assume that they will be run only at boot time, when cron obviously won't already be running. Others use a different, more general mechanism to determine the conditions under which they were run. We'll examine that shortly.
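The guarded startup can be generalized to any daemon. Here is a minimal sketch modeled on the cron excerpt above (minus the lock-file handling); the daemon name and path are placeholders supplied by the caller, and the pgrep options accepted vary by system:

```shell
# Generic "start only if not already running" guard. The name and path
# arguments are placeholders supplied by the caller.
start_daemon() {
    name=$1        # process name as pgrep will match it
    path=$2        # full pathname of the executable
    if pgrep -x "$name" >/dev/null 2>&1; then
        echo "$name is already running"
    elif [ -x "$path" ]; then
        "$path" >/dev/null 2>&1 &
        echo "started $name"
    else
        echo "$name: no executable found"
    fi
}
```

A boot script would invoke it as, say, start_daemon cron /usr/sbin/cron.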

Other local subsystems started in a similar manner include:


update

A process that periodically forces all filesystem buffers (accumulated changes to inodes and data blocks) to disk. It does so by running the sync command, ensuring that the disks are fairly up to date should the system crash. The name of this daemon varies somewhat: bdflush is a common variant, AIX calls its version syncd, the HP-UX version is syncer, and it is named fsflush on Solaris systems. Linux runs both update and bdflush. Whatever its name, don't disable this daemon or you will seriously compromise filesystem integrity.


syslogd

The system message handling facility that routes informational and error messages to log files, specific users, electronic mail, and other destinations according to the specifications in its configuration file (see Chapter 3).


Accounting

This subsystem is started using the accton command. If accounting is not enabled, the relevant commands may be commented out.

System status monitor daemons

Some systems provide daemons that monitor the system's physical conditions (e.g., power level, temperature, and humidity) and trigger the appropriate action when a problem occurs. For example, the HP-UX ups_mond daemon watches for a power failure, switching to an uninterruptible power supply (UPS) to allow an orderly system shutdown, if necessary.

Subsystems that are typically started after networking (discussed in the next section) include:

  • Electronic mail: the most popular electronic mail server is sendmail, which can route mail locally and via the network as needed. Postfix is a common alternative (its server process is also called sendmail).

  • Printing: the spooling subsystem also may be entirely local or used for printing to remote systems in addition to (or instead of) locally connected ones. BSD-type printing subsystems rely on the lpd daemon, and System V systems use lpsched. The AIX printing server is qdaemon.

There may be other subsystems on your system with their own associated daemon processes; some may be vendor enhancements to standard Unix. We'll consider some of these when we look at the specific initialization files used by the various Unix versions later in this chapter.

The AIX System Resource Controller

On AIX systems, system daemons are controlled by the System Resource Controller (SRC). This facility starts daemons associated with the various subsystems and monitors their status on an ongoing basis. If a system daemon dies, the SRC automatically restarts it.

The srcmstr command is the executable corresponding to the SRC. The lssrc and chssys commands may be used to list services controlled by the SRC and change their configuration settings, respectively. We'll see examples of these commands at various points in this book.

Connecting to the network

Network initialization begins by setting the system's network hostname, if necessary, and configuring the network interfaces (adapter devices), enabling it to communicate on the network. The script that starts networking at boot time contains commands like these:

ifconfig lo0 127.0.0.1 
ifconfig ent0 inet address netmask mask

The specific ifconfig commands vary quite a bit. The first parameter to ifconfig, which designates the network interface, may be different on your system. In this case, lo0 is the loopback interface, and ent0 is the Ethernet interface. Other common names for Ethernet interfaces include eri0, dnet0, and hme0 (Solaris); eth0 (Linux); tu0 (Tru64); xl0 (FreeBSD); lan0 (HP-UX); en0 (AIX); and ef0 and et0 (some System V). Interfaces for other network media will have different names altogether. Static routes may also be defined at this point using the route command. Networking is discussed in detail in Chapter 5.

Networking services also rely on a number of daemon processes. They are usually started with commands of this general form:

if [ -x server-pathname ]; then 
  preparatory commands 
  echo Starting server-name 
  server-pathname 
fi

When the server program file exists and is executable, the script performs any necessary preparatory activities and then starts the server process. Note that some servers go into background execution automatically, while others must be explicitly started in the background. The most important network daemons are listed in Table 4-1.

Table 4-1. Common network daemons




inetd

Networking master server responsible for responding to many types of network requests via a large number of subordinate daemons, which it controls and to which it delegates tasks.

named , routed , gated

The name server and routing daemons, which provide dynamic remote hostname and routing data for TCP/IP. At most, one of routed or gated is used.

ntpd , xntpd , timed

Time-synchronization daemons. The older timed daemon has been mostly replaced by xntpd and its successor, ntpd.

portmap , rpc.statd , rpc.lockd

Remote Procedure Call (RPC) daemons. RPC is the primary network interprocess communication mechanism used on Unix systems. portmap connects RPC program numbers to TCP/IP port numbers, and many network services depend on it. rpc.lockd provides locking services to NFS in conjunction with rpc.statd, the status monitor. The names of the latter two daemons may vary.

nfsd , biod , mountd

NFS daemons, which service file access and filesystem mounting requests from remote systems. The first two take an integer parameter indicating how many copies of the daemon are created. The system boot scripts also typically execute the exportfs -a command, which makes local filesystems available to remote systems via NFS.


automount , automountd

NFS automounter, responsible for mounting remote filesystems on demand. This daemon has other names on some systems.

smbd , nmbd

SAMBA daemons that handle SMB/CIFS-based remote file access requests from Windows (and other) systems.

Once basic networking is running, other services and subsystems that depend on it can be started. In particular, remote filesystems can be mounted with a command like this one, which mounts all remote filesystems listed in the system's filesystem configuration file:

mount -a -t nfs     On some systems, -F replaces -t.

Housekeeping activities

Traditionally, multiuser-mode boots also include a number of cleanup activities such as the following:

  • Preserving editor files from vi and other ex-based editors, which enable users to recover some unsaved edits in the event of a crash. These editors automatically place checkpoint files in /tmp or /var/tmp during editing sessions. The expreserve utility is normally run at boot time to recover such files. On Linux systems, the elvis vi-clone is commonly available, and elvprsv performs the same function as expreserve for its files.

  • Clearing the /tmp directory and possibly other temporary directories. The commands to accomplish this can be minimalist:

    rm -f /tmp/*
    utilitarian:

    cd /tmp; find . ! -name . ! -name .. ! -name lost+found \    
                           ! -name quota\* -exec rm -fr {} \;

    or rococo:

    # If no /tmp exists, create one (we assume /tmp is not
    # a separate file system).
    if [ ! -d /tmp -a ! -L /tmp ]; then
       rm -f /tmp
       mkdir /tmp
    fi
    for dir in /tmp /var/tmp /usr/local/tmp ; do
       if [ -d $dir ] ; then
          cd $dir
          find . \( \( -type f \( -name a.out -o            \
                   -name \*.bak -o -name core -o -name \*~ -o   \
                   -name .\*~ -o -name \#\*\# -o -name \#.\*\# -o \
                   -name \*.o -o \( -atime +1 -mtime +3 \) \) \)  \
                   -exec rm -f {} \; -o -type d -name \*        \
                   -prune -exec rm -fr {} \; \)
       fi
       cd /
    done

    The first form simply removes from /tmp all files other than those whose names begin with a period. The second form might be used when /tmp is located on a separate filesystem from the root filesystem to avoid removing important files and subdirectories. The third script excerpt makes sure that the /tmp directory exists and then removes a variety of junk files and any subdirectory trees (with names not beginning with a period) from a series of temporary directories.
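The -prune idiom in scripts like the third excerpt is easy to get wrong, so it is worth verifying the behavior in a sandbox directory before trusting it on /tmp. The following simplified sketch matches on names only (no timestamp tests):

```shell
# Try the junk-removing find in a sandbox: named junk files are removed,
# non-dot subdirectory trees are removed wholesale, and dot-files survive.
dir=$(mktemp -d)
cd $dir
touch core a.out keep.txt .profile
mkdir scratch
touch scratch/junk

find . \( -type f \( -name core -o -name a.out \) -exec rm -f {} \; \
       -o -type d -name '[!.]*' -prune -exec rm -fr {} \; \)
```

Afterward, core and a.out are gone and the scratch subdirectory has been removed in one piece, while keep.txt and .profile survive — mirroring the behavior of the longer excerpt.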

On some systems, these activities are not part of the boot process but are handled in other ways (see Chapter 15 for details).

Allowing users onto the system

The final boot-time activities complete the process of making the system available to users. Doing so involves both preparing resources users need to log in and removing barriers that prevent them from doing so. The former consists of creating the getty processes that handle each terminal line and starting a graphical login manager like xdm—or a vendor-customized equivalent facility—for X stations and the system console, if appropriate. On Solaris systems, it also includes initializing the Service Access Facility daemons sac and ttymon. These topics are discussed in detail in Chapter 12.

On most systems, the file /etc/nologin may be created automatically when the system is shut down normally. Removing it is often one of the very last tasks of the boot scripts. FreeBSD uses /var/run/nologin.

/etc/nologin may also be created as needed by the system administrator. If this file is not empty, its contents are displayed to users when they attempt to log in. Creating the file has no effect on users who are already logged in, and the root user can always log in. HP-UX versions prior to 11i do not use this file.
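The mechanics are simple enough to demonstrate with a scratch file standing in for /etc/nologin:

```shell
# Sketch of the nologin convention, using a scratch file instead of the
# real /etc/nologin (/var/run/nologin on FreeBSD).
nologin=$(mktemp)

# At shutdown, or by hand before maintenance, the file gets a message:
echo "System unavailable until 6 pm for disk maintenance." > $nologin

# At login time, a non-empty file blocks the attempt and shows its contents:
if [ -s $nologin ]; then
    cat $nologin
fi

# Removing the file (one of the boot scripts' last acts) re-enables logins:
rm -f $nologin
```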

[1] IBM has traditionally referred to the bootstrapping process as the IPL (initial program load). This term still shows up occasionally in AIX documentation.

[2] At least that's my interpretation of the name. Other explanations abound.

[3] Or the current position of the computer's key switch. On systems using a physical key switch, one of its positions usually initiates an automatic boot process when power is applied (often labeled "Normal" or "On"), and another position (e.g., "Service") prevents autobooting and puts the system into a completely manual mode suitable for system maintenance and repair.

[4] Process 0, if it exists, is really part of the kernel itself. Process 0 is often the scheduler (controls which processes execute at what time under BSD) or the swapper (moves process memory pages to and from swap space under System V). However, some systems assign PID 0 to a different process, and others do not have a process 0 at all.

[5] The front panel key position also influences the boot process, and the various settings provide for some types of security protection. There is usually a setting that disables booting to single-user mode; it is often labeled "Secure" (versus "Normal") or "Standard" (versus "Maintenance" or "Service"). Such security features are usually described on the init or boot manual pages and in the vendor's hardware or system operations manuals.

[6] Some AIX systems respond to a specific keystroke at a precise moment during the boot process and place you in the System Management Services facility, where the boot device list can also be specified.

[7] We're ignoring the second-stage boot loader here.

[8] Preceded by various verbiage.

[9] This description will also apply to Alpha hardware running other operating systems.

[10] Variously pronounced as "fisk" (like the baseball player Carlton, rhyming with "disk"), "ef-es-see-kay," "ef-es-check," and in less genteel ways.

[11] FreeBSD Version 4.4 and higher also checks only dirty filesystems at boot time.
