Chapter 4. Executing Commands

The main purpose of bash (or of any shell) is to allow you to interact with the computer’s operating system so that you can accomplish whatever you need to do. Usually that involves launching programs, so the shell takes the commands you type, determines from that input what programs need to be run, and launches them for you.

Let’s take a look at the basic mechanism for launching jobs and explore some of the features bash offers for launching programs in the foreground or the background, sequentially or in parallel, indicating whether they succeeded, and more.

4.1 Running Any Executable

Problem

You need to run a command on a Linux or Unix system.

Solution

Use bash and type the name of the command at the prompt:

$ someprog

Discussion

This seems rather simple, and in a way it is, but a lot goes on behind the scenes that you never see. What’s important to understand about bash is that its basic operation is to load and execute programs. All the rest is just window dressing to get ready to run programs. Sure, there are shell variables and control statements for looping and if/then/else branching, and there are ways to control input and output, but they are all icing on the cake of program execution.

So where does it get the program to run?

bash uses a shell variable called $PATH to locate your executable. The $PATH variable is a list of directories. The directories are separated by colons (:). bash searches in each of those directories for a file with the name that you specified. The order of the directories is important—bash looks at the order in which the directories are listed in the variable, and takes the first executable found:

$ echo $PATH
/bin:/usr/bin:/usr/local/bin:.
$

In the $PATH variable shown here, four directories are included. The last directory in the list is just a single dot (called the dot directory, or just dot), which represents the current directory on a Linux or Unix filesystem—wherever you are, that’s the directory to which dot refers. For example, when you copy a file from someplace to dot (i.e., cp /other/place/file . ), you are copying the file into the current directory. Listing the dot directory in your path tells bash to look for commands not just in those other directories, but also in the current directory (.).
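
If you’re ever unsure which file bash will actually run for a given name, you can ask with the type builtin (the output here is illustrative; yours will reflect your own $PATH):

$ type -a ls
ls is /bin/ls
$

The -a option lists every match, in $PATH order, so it will also reveal a rogue ls like the one in the example that follows.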

Many people feel that putting dot in the $PATH is too great a security risk—someone could trick you and get you to run their own malicious version of a command (say, ls) in place of one that you were expecting. If dot were listed first, then someone else’s version of ls would supersede the normal ls command, and you might unwittingly run that command. Don’t believe us? Try this:

$ bash
$ cd
$ touch ls
$ chmod 755 ls
$ PATH=".:$PATH"
$ ls
$

Suddenly, the ls command appears not to work in your home directory. You get no output. When you cd to some other location (e.g., cd /tmp), then ls will work, but not in your home directory. Why? Because in that directory there is an empty file called ls that is run (and does nothing—it’s empty) instead of the normal ls command located at /bin/ls. Since we started this example by running a new copy of bash, you can exit from this mess by exiting this subshell—but you might want to remove the bogus ls command first:

$ cd
$ rm ls
$ exit
$

Can you see the potential danger of wandering into a strange directory with your $PATH set to search the dot directory before anywhere else?

If you put dot as the last directory in your $PATH variable, at least you won’t be tricked that easily. Of course, if you leave it off altogether it is arguably even safer, and you can still run commands in your local directory by typing a leading dot and slash character, as in:

./myscript

The choice is yours.

Warning

Never allow dot or writable directories in root’s $PATH. For more on this topic, see Recipe 14.9 and Recipe 14.10.

Don’t forget to set execute permissions on the file before you invoke your script:

chmod +x myscript

You only need to set the permissions once. Thereafter, you can invoke the script as a command.

A common practice among some bash users is to create a personal bin directory, analogous to the system directories /bin and /usr/bin where executables are kept. In your personal bin (if you create it in your home directory, its path is ~/bin) you can put copies of your favorite shell scripts and other customized or private commands. Then add that directory to your $PATH, even to the front (PATH=~/bin:$PATH). That way, you can still have your own customized favorites without the security risk of running commands from strangers.
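
Setting that up might look like the following sketch, where myscript stands in for any script of your own:

$ mkdir -p ~/bin              # create it (-p: no complaint if it exists)
$ cp myscript ~/bin           # install your script there
$ chmod +x ~/bin/myscript     # make sure it is executable
$ PATH=~/bin:$PATH            # put your bin first in the search path
$ myscript                    # now it runs from anywhere

To make the $PATH change permanent, add that assignment to a startup file such as ~/.bashrc.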

4.2 Running Several Commands in Sequence

Problem

You need to run several commands, but some take a while and you don’t want to wait for each one to finish before issuing the next command.

Solution

There are three solutions to this problem, although the first is rather trivial: just keep typing. A Linux or Unix system is advanced enough to be able to let you type while it works on your previous commands, so you can simply keep typing one command after another.

Another rather simple solution is to type those commands into a file and then tell bash to execute the commands in the file—i.e., a simple shell script. For example, assume that we want to run three commands, long, medium, and short, each named for how long it takes to run. We need to run them in that order, but we don’t want to wait around for the long script to finish before typing the other commands. We could use a shell script (a.k.a. batch file). Here’s a primitive way to do that:

$ cat > simple.script
long
medium
short
^D                      # Ctrl-D, not visible
$ bash ./simple.script

The third, and arguably best, solution is to run each command in sequence. If you want to run each program regardless of whether the preceding ones fail, separate them with semicolons:

long ; medium ; short

If you only want to run the next program if the preceding program worked, and all the programs correctly set exit codes, separate them with double ampersands:

long && medium && short

Discussion

The cat example was just a very primitive way to enter text into a file: we redirected the output from the command into the file named simple.script (for more on redirecting output, see Chapter 2). Better you should use a real editor, but such things are harder to show in examples like this. From now on, when we want to show a script, we’ll either show the text by itself, not on a command line, or start the example with a command like cat filename to dump the contents of the file to the screen (rather than redirecting our typing into the file).

The main point of this simple solution is to demonstrate that more than one command can be put on the bash command line. In the first case the second command isn’t run until the first command exits, the third doesn’t execute until the second exits, and so on, for as many commands as you have on the line. In the second case the second command isn’t run unless the first command succeeds, the third doesn’t execute unless the second succeeds, and so on, for as many commands as you have on the line.
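
A quick way to see the difference is to make the first command fail on purpose; /nonexistent here is just a directory that does not exist:

$ cd /nonexistent ; echo "ran anyway"
bash: cd: /nonexistent: No such file or directory
ran anyway
$ cd /nonexistent && echo "never runs"
bash: cd: /nonexistent: No such file or directory
$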

4.3 Running Several Commands All at Once

Problem

You need to run three commands, but they are independent of each other and don’t need to wait for the previous ones to complete.

Solution

You can run a command in the background by putting an ampersand (&) at the end of the command line. Thus, you could fire off all three commands in rapid succession as follows:

$ long &
[1] 4592
$ medium &
[2] 4593
$ short
$

Or better yet, you can do it all on one command line:

$ long & medium & short
[1] 4592
[2] 4593
$

Discussion

When we run a command “in the background” (there really is no such place in Linux), all that really means is that we disconnect keyboard input from the command and the shell doesn’t wait for the command to complete before it gives another prompt and accepts more command input. Output from the command (unless we take explicit action to change this behavior) will still come to the screen, so in this example all three commands will be interspersing output to the screen.
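
If that interleaving is a nuisance, you can give each job its own output file (the filenames here are just illustrations); see Chapter 2 for more on redirection:

$ long > long.out 2>&1 &
[1] 4592
$ medium > medium.out 2>&1 &
[2] 4593
$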

The odd bits of numerical output are the job number in square brackets, followed by the process ID of the command that we just started in the background. In our example, job 1 (process 4592) is the long command, and job 2 (process 4593) is medium.

We didn’t put short into the background since we didn’t put an ampersand at the end of the line, so bash will wait for it to complete before giving us the shell prompt (the $).

The job number or process ID can be used to provide limited control over a job. For example, we could kill the long job with kill %1 (since its job number was 1), or we could specify the process number (i.e., kill 4592) with the same deadly results.

You can also use the job number to reconnect to a background job. For instance, we could connect the long job back to the foreground with fg %1. If you only have one job running in the background, you don’t even need the job number; just use fg by itself.
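
The jobs builtin shows what you have running in the background, along with the job numbers to use with fg and kill (output is illustrative):

$ jobs
[1]-  Running                 long &
[2]+  Running                 medium &
$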

Tip

If you run a command and then realize it will take longer to complete than you thought, you can pause it using Ctrl-Z, which will return you to a prompt. You can then type bg to unpause the job and continue running it in the background. This is basically adding a trailing & after the fact.
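
An illustrative session, again using long as a stand-in for any slow command:

$ long
^Z
[1]+  Stopped                 long
$ bg
[1]+ long &
$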

4.4 Telling Whether a Command Succeeded or Not

Problem

You need to know whether the command you ran succeeded.

Solution

The shell variable $? is set with a nonzero value if the command fails—provided that the programmer who wrote that command or shell script followed the established convention:

$ somecommand
# it works...
$ echo $?
0
$ badcommand
# it fails...
$ echo $?
1
$

Discussion

The exit status of a command is kept in the shell variable referenced with $?. Its value can range from 0 to 255. When you write a shell script, it’s a good idea to have your script exit with zero if all is well and a nonzero value if you encounter an error condition. We recommend using only 0 to 127 because the shell uses 128+N to denote killed by signal N. Also, if you use a number greater than 255 or less than 0, the numbers will wrap around. You return an exit status with the exit statement (e.g., exit 1 or exit 0). But be aware that you only get one shot at reading a command’s exit status:

$ badcommand
# it fails...
$ echo $?
1
$ echo $?
0
$

Why does the second echo give us 0 as a result? It’s actually reporting on the status of the immediately preceding echo command. The first time we typed echo $? it returned a 1, which was the return value of badcommand. But the echo command itself succeeds, and therefore the new, most recent status is success (i.e., a 0 value). Because you only get one chance to check the exit status, many shell scripts will immediately assign the status to another shell variable, as in:

$ badcommand
# it fails...
$ STAT=$?
$ echo $STAT
1
$ echo $STAT
1
$

We can keep the value around in the variable $STAT and check its value later on.

Although we’re showing this in command-line examples, the real use of variables like $? comes in writing scripts. You can usually see whether a command worked or not if you are watching it run on your screen. But in a script, the commands may be running unattended.

One of the great features of bash is that the scripting language is identical to commands as you type them at a prompt in a terminal window. This makes it much easier to check out syntax and logic as you write your scripts.

The exit status is more often used in scripts, and often in if statements, to take different actions depending on the success or failure of a command. Here’s a simple example for now, but we will revisit this topic in future recipes:

somecommand
...
if (( $? )) ; then echo failed ; else echo OK; fi

(( )) evaluates an arithmetic expression; see Recipes 6.1 and 6.2.
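
As an aside, here is the 128+N convention mentioned earlier in action. SIGTERM is signal number 15 on typical systems, and the exact messages may vary:

$ bash -c 'kill -TERM $$'      # the child shell terminates itself
Terminated
$ echo $?
143
$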

We also do not recommend using negative numbers. The shell will accept them without an error, but it won’t do what you expect:

$ bash -c 'exit -2' ; echo $?
254

$ bash -c 'exit -200' ; echo $?
56

4.5 Running a Command Only if Another Command Succeeded

Problem

You need to run some commands, but you only want to run certain commands if certain other ones succeed. For example, you’d like to change directories (using the cd command) into a temporary directory and remove all the files. However, you don’t want to remove any files if the cd fails (e.g., if permissions don’t allow you into the directory, or if you spell the directory name wrong).

Solution

You can use the exit status ($?) of the cd command in combination with an if statement to do the rm only if the cd was successful:

cd mytmp
if (( $? == 0 )); then rm * ; fi

Tip

A better way to write this is the following, but we think the longer form above makes the mechanism clearer to show and explain:

if cd mytmp; then rm * ; fi

Discussion

Obviously, you wouldn’t need to do this if you were typing the commands by hand. You would see any error messages from the cd command, and thus you wouldn’t type the rm command. But scripting is another matter, and this test is well worth doing in a script like our example to make sure that you don’t accidentally erase all the files in the directory where you are running it.

Let’s say you ran that script from the wrong directory, one that didn’t have a subdirectory named mytmp. The cd would fail, so the current directory would remain unchanged. Without the if check (for the cd having failed) the script would just continue on to the next statement. Running the rm * would remove all the files in your current directory. Ouch. The if is worth it.

So how does $? get its value? It is the exit code of the command (see Recipe 4.4). C language programmers will recognize this as the value of the argument supplied to the exit() function; e.g., exit(4); would return a 4. For the shell, an exit code of zero is considered success and a nonzero value means failure.

If you’re writing bash scripts, you’ll want to be sure to explicitly set return values, so that $? is set properly from your script. If you don’t, the value set will be the value of the last command run, which you may not want as your result.
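
For instance, here is a minimal sketch of a script that sets its exit status explicitly, reusing the mytmp example from above:

#!/usr/bin/env bash
# empty_mytmp: remove the files in mytmp, or fail with a nonzero status
cd mytmp || exit 1    # a failing status if we can't get there
rm -f -- *            # -f: an already-empty directory is not an error
exit 0                # explicit success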

4.6 Using Fewer if Statements

Problem

As a conscientious programmer, you took to heart what we described in the previous recipe. You applied the concept to your latest shell script, but now you find that the shell script is unreadable, with all those if statements checking the return code of every command. Isn’t there an alternative?

Solution

Use the double-ampersand operator in bash to provide conditional execution:

cd mytmp && rm *

Discussion

Separating two commands by the double ampersands tells bash to run the first command and then to run the second command only if the first command succeeds (i.e., its exit status is 0). This is very much like using an if statement to check the exit status of the first command in order to protect the running of the second command:

cd mytmp
if (( $? == 0 )); then rm * ; fi

The double-ampersand syntax is meant to be reminiscent of the logical AND operator in the C language. If you know your logic (and your C) then you’ll recall that if you are evaluating the logical expression A AND B, the entire expression can only be true if both (sub)expression A and (sub)expression B evaluate to true. If either one is false, the whole expression is false. The C language makes use of this fact, and when you code an expression like if (A && B) { ... }, it will evaluate expression A first. If it is false, it won’t even bother to evaluate B since the overall outcome (false) has already been determined (by A being false).

So what does this have to do with bash? Well, if the exit status of the first command (the one to the left of the &&) is nonzero (i.e., failed), then it won’t bother to evaluate the second expression—it won’t run the other command at all.

If you want to be thorough about your error checking, but don’t want if statements all over the place, you can have bash exit any time any command in your script fails (i.e., returns a nonzero exit status) by setting the -e flag. Commands in while loops and if statements are exempt, since the shell is already capturing and using their exit status there:

set -e
cd mytmp
rm *

Setting the -e flag will cause the shell to exit when a command fails. If the cd in this example fails, the script will exit and never even try to execute the rm * command. We don’t recommend doing this on an interactive shell, however, because when the shell exits it will make your shell window go away.

4.7 Running Long Jobs Unattended

Problem

You ran a job in the background, then exited the shell and went for coffee. When you came back to check, the job was no longer running and it hadn’t completed. In fact, your job hadn’t progressed very far at all. It seems to have quit as soon as you exited the shell.

Solution

If you want to run a job in the background and expect to exit the shell before the job completes, then you need to nohup the job:

$ nohup long &
nohup: appending output to `nohup.out'
$

Discussion

When you put a job in the background (via the &, as described in Recipe 4.3), it is still a child process of the bash shell. When you exit an instance of the shell, bash sends a hangup (hup) signal to all of its child processes. That’s why your job didn’t run for very long. As soon as you exited bash, it killed your background job. (Hey, you were leaving; how was it supposed to know?)

The nohup command simply sets up the child process to ignore hangup signals. You can still kill the job with the kill command, because kill sends a SIGTERM signal, not a SIGHUP signal. But with nohup, bash won’t inadvertently kill your job when you exit.

The message that nohup gives about appending your output is just nohup trying to be helpful. Since you are likely to exit the shell after issuing a nohup command, your output destination will likely go away—i.e., the bash session in your terminal will no longer be active, so the job won’t be able to write to STDOUT. More importantly, writing to a nonexistent destination would cause a failure. So nohup redirects the output for you, appending it (not overwriting, but adding at the end) to a file named nohup.out in the current directory. You can explicitly redirect the output elsewhere on the command line, and nohup is smart enough to detect that this has happened and not use nohup.out for your output.
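
For example, you might pick the destination yourself (the filename is just an illustration), capturing standard error along with standard output:

$ nohup long > long.log 2>&1 &
[1] 4716
$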

4.8 Displaying Error Messages When Failures Occur

Problem

You need your shell script to be verbose about failures. You want to see error messages when commands don’t work, but if statements tend to distract from the visual flow of statements.

Solution

A common idiom among some shell programmers is to use the || with commands to spit out debug or error messages. Here’s an example:

cmd || printf "%b" "cmd failed. You're on your own\n"

Discussion

Similar to how the && in Recipe 4.6 tells bash not to bother to evaluate the second expression if the first one is false, the || tells the shell not to bother to evaluate the second expression if the first one is true (i.e., succeeds). As with &&, the || syntax harkens back to logic and the C language, where the outcome is determined (as true) if the first expression in A OR B evaluates to true—so there’s no need to evaluate the second expression. In bash, if the first expression returns 0 (i.e., succeeds) then it just continues on. Only if the first expression returns a nonzero value (i.e., if the exit value of the command indicates failure) must it evaluate the second part, and thus run the other command.

Warning—don’t be fooled by this:

cmd || printf "%b" "FAILED.\n" ; exit 1

The exit will be executed in either case! The OR is only between the first two commands. If we want to have the exit happen only on error, we need to group it with the printf so that both are considered as a unit. The desired syntax would be:

cmd || { printf "%b" "FAILED.\n" ; exit 1 ; }

Note that the semicolon after the last command and just before the } is required, and that the closing brace must be separated by whitespace from the surrounding text. See Recipe 2.14 for a discussion.

4.9 Running Commands from a Variable

Problem

You want to run different commands in your script depending on circumstances. How can you vary which commands run?

Solution

There are many solutions to this problem—it’s what scripting is all about. In coming chapters we’ll discuss various programming constructs that can be used to solve this problem, such as if/then/else, case statements, and more. But here’s a slightly different approach that reveals something about bash. We can use the contents of a variable (more on those in Chapter 5) not just for parameters, but also for the command itself:

FN=/tmp/x.x    # a filename to work with
PROG=echo      # pick a command...
$PROG $FN      # ...and run it: echo /tmp/x.x
PROG=cat       # now pick a different command
$PROG $FN      # this time it runs: cat /tmp/x.x

Discussion

We can assign the program name to a variable (here we use $PROG), and then when we refer to that variable in the place where a command name would be expected, bash uses the value of that variable ($PROG) as the command to run. It parses the command line, substitutes the values of its variables, and takes the result of all the substitutions and treats that as the command line, as if it had been typed that way verbatim.
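
One caution: if the variable must carry arguments along with the command name, a plain string can run into word-splitting and quoting surprises. A quick sketch using a bash array, which keeps the command and each argument as separate words:

CMD=(ls -l /tmp)    # element 0 is the command; the rest are its arguments
"${CMD[@]}"         # runs: ls -l /tmp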

Warning

Be careful about the variable names you use. Some programs, such as InfoZip, use environment variables such as $ZIP and $UNZIP to pass settings to the program itself, so if you do something like ZIP=/usr/bin/zip you can spend days pulling your hair out wondering why it works fine from the command line, but not in your script. Trust us. We learned this one the hard way. Also, RTFM.

4.10 Running All Scripts in a Directory

Problem

You want to run a series of scripts, but the list keeps changing; you’re always adding new scripts, but you don’t want to continually modify a master list.

Solution

Put the scripts you want to run in a directory, and let bash run everything that it finds. Instead of keeping a master list, simply use the contents of that directory as your master list. Here’s a script that will run everything it finds in a particular directory:

for SCRIPT in /path/to/scripts/dir/*
do
    if [ -f "$SCRIPT" ] && [ -x "$SCRIPT" ]
    then
        "$SCRIPT"
    fi
done

Discussion

We discuss the for loop and the if statement in greater detail in Chapter 6, but this gives you a taste. The variable $SCRIPT will take on successive values for each file that matches the wildcard pattern *, which matches everything in the named directory (except invisible dot files, whose names begin with a period). If it is a file (the -f test) and has execute permissions set (the -x test), the shell will then try to run that script.

In this simple example, we have provided no way to specify any arguments to the scripts as they are executed. This simple script may work well for your personal needs, but wouldn’t be considered robust; some might consider it downright dangerous. But we hope it gives you an idea of what lies ahead: some programming language–style scripting capabilities.
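
If you do want to hand arguments to the scripts, one illustrative variation is to forward whatever arguments the outer script itself received:

for SCRIPT in /path/to/scripts/dir/*
do
    if [ -f "$SCRIPT" ] && [ -x "$SCRIPT" ]
    then
        "$SCRIPT" "$@"    # pass our own arguments along to each script
    fi
done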

See Also

  • Chapter 6 for more about for loops and if statements
