All of us who use a Unix desktop system—engineers, educators, scientists, and business people—have second careers as Unix system administrators. Networking these computers gives us new tasks as network administrators.
Network administration and system administration are two different jobs. System administration tasks such as adding users and doing backups are isolated to one independent computer system. Not so with network administration. Once you place your computer on a network, it interacts with many other systems. The way you do network administration tasks has effects, good and bad, not only on your system but on other systems on the network. A sound understanding of basic network administration benefits everyone.
Networking your computers dramatically enhances their ability to communicate—and most computers are used more for communication than computation. Many mainframes and supercomputers are busy crunching the numbers for business and science, but the number of these systems in use pales in comparison to the millions of systems busy moving mail to a remote colleague or retrieving information from a remote repository. Further, when you think of the hundreds of millions of desktop systems that are used primarily for preparing documents to communicate ideas from one person to another, it is easy to see why most computers can be viewed as communications devices.
The positive impact of computer communications increases with the number and type of computers that participate in the network. One of the great benefits of TCP/IP is that it provides interoperable communications between all types of hardware and all kinds of operating systems.
The name “TCP/IP” refers to an entire suite of data communications protocols. The suite gets its name from two of the protocols that belong to it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP). TCP/IP is the traditional name for this protocol suite and it is the name used in this book. The TCP/IP protocol suite is also called the Internet Protocol Suite (IPS). Both names are acceptable.
This book is a practical, step-by-step guide to configuring and managing TCP/IP networking software on Unix computer systems. TCP/IP is the leading communications software for local area networks and enterprise intranets, and it is the foundation of the worldwide Internet. TCP/IP is the most important networking software available to a Unix network administrator.
The first part of this book discusses the basics of TCP/IP and how it moves data across a network. The second part explains how to configure and run TCP/IP on a Unix system. Let’s start with a little history.
In 1969 the Advanced Research Projects Agency (ARPA) funded a research and development project to create an experimental packet-switching network. This network, called the ARPAnet, was built to study techniques for providing robust, reliable, vendor-independent data communications. Many techniques of modern data communications were developed in the ARPAnet.
The experimental network was so successful that many of the organizations attached to it began to use it for daily data communications. In 1975 the ARPAnet was converted from an experimental network to an operational network, and the responsibility for administering the network was given to the Defense Communications Agency (DCA). However, development of the ARPAnet did not stop just because it was being used as an operational network; the basic TCP/IP protocols were developed after the network was operational.
The TCP/IP protocols were adopted as Military Standards (MIL STD) in 1983, and all hosts connected to the network were required to convert to the new protocols. To ease this conversion, DARPA funded Bolt, Beranek, and Newman (BBN) to implement TCP/IP in Berkeley (BSD) Unix. Thus began the marriage of Unix and TCP/IP.
About the time that TCP/IP was adopted as a standard, the term Internet came into common usage. In 1983 the old ARPAnet was divided into MILNET, the unclassified part of the Defense Data Network (DDN), and a new, smaller ARPAnet. “Internet” was used to refer to the entire network: MILNET plus ARPAnet.
In 1985 the National Science Foundation (NSF) created NSFNet and connected it to the then-existing Internet. The original NSFNet linked together the five NSF supercomputer centers. It was smaller than the ARPAnet and no faster: 56Kbps. Still, the creation of the NSFNet was a significant event in the history of the Internet because NSF brought with it a new vision of the use of the Internet. NSF wanted to extend the network to every scientist and engineer in the United States. To accomplish this, in 1987 NSF created a new, faster backbone and a three-tiered network topology that included the backbone, regional networks, and local networks. In 1990 the ARPAnet formally passed out of existence, and in 1995 the NSFNet ceased its role as a primary Internet backbone network.
Today the Internet is larger than ever and encompasses hundreds of thousands of networks worldwide. It is no longer dependent on a core (or backbone) network or on governmental support. Today’s Internet is built by commercial providers. National network providers, called tier-one providers, and regional network providers create the infrastructure. Internet Service Providers (ISPs) provide local access and user services. This network of networks is linked together in the United States at several major interconnection points called Network Access Points (NAPs).
The Internet has grown far beyond its original scope. The original networks and agencies that built the Internet no longer play an essential role for the current network. The Internet has evolved from a simple backbone network, through a three-tiered hierarchical structure, to a huge network of interconnected, distributed network hubs. It has grown exponentially since 1983—doubling in size every year. Through all of this incredible change one thing has remained constant: the Internet is built on the TCP/IP protocol suite.
A sign of the network’s success is the confusion that surrounds the term internet. Originally it was used only as the name of the network built upon IP. Now internet is a generic term used to refer to an entire class of networks. An internet (lowercase “i”) is any collection of separate physical networks, interconnected by a common protocol, to form a single logical network. The Internet (uppercase “I”) is the worldwide collection of interconnected networks, which grew out of the original ARPAnet, that uses IP to link the various physical networks into a single logical network. In this book, both “internet” and “Internet” refer to networks that are interconnected by TCP/IP.
Because TCP/IP is required for Internet connection, the growth of the Internet spurred interest in TCP/IP. As more organizations became familiar with TCP/IP, they saw that its power can be applied in other network applications as well. The Internet protocols are often used for local area networking even when the local network is not connected to the Internet. TCP/IP is also widely used to build enterprise networks. TCP/IP-based enterprise networks that use Internet techniques and web tools to disseminate internal corporate information are called intranets. TCP/IP is the foundation of all of these varied networks.
The popularity of the TCP/IP protocols did not grow rapidly just because the protocols were there, or because connecting to the Internet mandated their use. They met an important need (worldwide data communication) at the right time, and they had several important features that allowed them to meet this need. These features are:
Open protocol standards, freely available and developed independently from any specific computer hardware or operating system. Because it is so widely supported, TCP/IP is ideal for uniting different hardware and software components, even if you don’t communicate over the Internet.
Independence from specific physical network hardware. This allows TCP/IP to integrate many different kinds of networks. TCP/IP can be run over an Ethernet, a DSL connection, a dial-up line, an optical network, and virtually any other kind of physical transmission medium.
Standardized high-level protocols for consistent, widely available user services.
Protocols are formal rules of behavior. In international relations, protocols minimize the problems caused by cultural differences when various nations work together. By agreeing to a common set of rules that are widely known and independent of any nation’s customs, diplomatic protocols minimize misunderstandings; everyone knows how to act and how to interpret the actions of others. Similarly, when computers communicate, it is necessary to define a set of rules to govern their communications.
In data communications, these sets of rules are also called protocols. In homogeneous networks, a single computer vendor specifies a set of communications rules designed to use the strengths of the vendor’s operating system and hardware architecture. But homogeneous networks are like the culture of a single country—only the natives are truly at home in it. TCP/IP creates a heterogeneous network with open protocols that are independent of operating system and architectural differences. TCP/IP protocols are available to everyone and are developed and changed by consensus, not by the fiat of one manufacturer. Everyone is free to develop products to meet these open protocol specifications.
The open nature of TCP/IP protocols requires an open standards development process and publicly available standards documents. Internet standards are developed by the Internet Engineering Task Force (IETF) in open, public meetings. The protocols developed in this process are published as Requests for Comments (RFCs). As the title “Request for Comments” implies, the style and content of these documents are much less rigid than in most standards documents. RFCs contain a wide range of interesting and useful information, and are not limited to the formal specification of data communications protocols. There are three basic types of RFCs: standards (STD), best current practices (BCP), and informational (FYI).
RFCs that define official protocol standards are STDs and are given an STD number in addition to an RFC number. Creating an official Internet standard is a rigorous process. Standards track RFCs pass through three maturity levels before becoming standards:
- Proposed Standard
This is a protocol specification that is important enough and has received enough Internet community support to be considered for a standard. The specification is stable and well understood, but it is not yet a standard and may be withdrawn from consideration to be a standard.
- Draft Standard
This is a protocol specification for which at least two independent, interoperable implementations exist. A draft standard is a final specification undergoing widespread testing. It will change only if the testing forces a change.
- Internet Standard
A specification is declared an Internet Standard only after extensive testing, and only if the protocol it defines is considered to be of significant benefit to the Internet community. There are two categories of standards. A Technical Specification (TS) defines a protocol. An Applicability Statement (AS) defines when the protocol is to be used. There are three requirement levels that define the applicability of a standard:

- Required
The protocol must be implemented by all systems; it is essential to the proper functioning of the Internet.
- Recommended
The protocol should be implemented by all systems; it provides widely useful functionality.
- Elective
The protocol may be implemented as needed; its use depends on local requirements.
Two other requirement levels (limited use and not recommended) apply to RFCs that are not part of the standards track. A "limited use" protocol is used only in special circumstances, such as during an experiment. A protocol is "not recommended" when it has limited functionality or is outdated. There are three types of non-standards track RFCs:

- Experimental
An experimental RFC is limited to use in research and development.
- Informational
An informational RFC provides information of general interest; it does not define a protocol standard.
- Historic
A historic RFC is outdated and no longer recommended for use.
A subset of the informational RFCs is called the FYI (For Your Information) notes. An FYI document is given an FYI number in addition to an RFC number. FYI documents provide introductory and background material about the Internet and TCP/IP networks. FYI documents are not mentioned in RFC 2026 and are not included in the Internet standards process. But there are several interesting FYI documents available.
Another group of RFCs that go beyond documenting protocols are the Best Current Practices (BCP) RFCs. BCPs formally document techniques and procedures. Some of these document the way that the IETF conducts itself; RFC 2026 is an example of this type of BCP. Others provide guidelines for the operation of a network or service; RFC 1918, Address Allocation for Private Internets, is an example of this type of BCP. BCPs that provide operational guidelines are often of great interest to network administrators.
There are now more than 3,000 RFCs. As a network system administrator, you will no doubt read several. It is as important to know which ones to read as it is to understand them when you do read them. Use the RFC categories and the requirements levels to help you determine which RFCs are applicable to your situation. (A good starting point is to focus on those RFCs that also have an STD number.) To understand what you read, you need to understand the language of data communications. RFCs contain protocol implementation specifications defined in terminology that is unique to data communications.
To discuss computer networking, it is necessary to use terms that have special meaning. Even other computer professionals may not be familiar with all the terms in the networking alphabet soup. As is always the case, English and computer-speak are not equivalent (or even necessarily compatible) languages. Although descriptions and examples should make the meaning of the networking jargon more apparent, sometimes terms are ambiguous. A common frame of reference is necessary for understanding data communications terminology.
An architectural model developed by the International Organization for Standardization (ISO) is frequently used to describe the structure and function of data communications protocols. This architectural model, which is called the Open Systems Interconnection (OSI) Reference Model, provides a common reference for discussing communications. The terms defined by this model are well understood and widely used in the data communications community—so widely used, in fact, that it is difficult to discuss data communications without using OSI’s terminology.
The OSI Reference Model contains seven layers that define the functions of data communications protocols. Each layer of the OSI model represents a function performed when data is transferred between cooperating applications across an intervening network. Figure 1-1 identifies each layer by name and provides a short functional description for it. In this figure, the protocols look like a pile of building blocks stacked one upon another. Because of this appearance, the structure is often called a stack or protocol stack.
A layer does not define a single protocol—it defines a data communications function that may be performed by any number of protocols. Therefore, each layer may contain multiple protocols, each providing a service suitable to the function of that layer. For example, a file transfer protocol and an electronic mail protocol both provide user services, and both are part of the Application Layer.
Every protocol communicates with its peers. A peer is an implementation of the same protocol in the equivalent layer on a remote system; i.e., the local file transfer protocol is the peer of a remote file transfer protocol. Peer-level communications must be standardized for successful communications to take place. In the abstract, each protocol is concerned only with communicating to its peers; it does not care about the layers above or below it.
However, there must also be agreement on how to pass data between the layers on a single computer, because every layer is involved in sending data from a local application to an equivalent remote application. The upper layers rely on the lower layers to transfer the data over the underlying network. Data is passed down the stack from one layer to the next until it is transmitted over the network by the Physical Layer protocols. At the remote end, the data is passed up the stack to the receiving application. The individual layers do not need to know how the layers above and below them function; they need to know only how to pass data to them. Isolating network communications functions in different layers minimizes the impact of technological change on the entire protocol suite. New applications can be added without changing the physical network, and new network hardware can be installed without rewriting the application software.
Although the OSI model is useful, the TCP/IP protocols don’t match its structure exactly. Therefore, in our discussions of TCP/IP, we use the layers of the OSI model in the following way:
- Application Layer
The Application Layer is the level of the protocol hierarchy where user-accessed network processes reside. In this text, a TCP/IP application is any network process that occurs above the Transport Layer. This includes all of the processes that users directly interact with as well as other processes at this level that users are not necessarily aware of.
- Presentation Layer
For cooperating applications to exchange data, they must agree about how data is represented. In OSI, the Presentation Layer provides standard data presentation routines. This function is frequently handled within the applications in TCP/IP, though TCP/IP protocols such as XDR and MIME also perform this function.
- Session Layer
As with the Presentation Layer, the Session Layer is not identifiable as a separate layer in the TCP/IP protocol hierarchy. The OSI Session Layer manages the sessions (connections) between cooperating applications. In TCP/IP, this function largely occurs in the Transport Layer, and the term “session” is not used; instead, the terms “socket” and “port” are used to describe the path over which cooperating applications communicate.
- Transport Layer
Much of our discussion of TCP/IP is directed to the protocols that occur in the Transport Layer. The Transport Layer in the OSI reference model guarantees that the receiver gets the data exactly as it was sent. In TCP/IP, this function is performed by the Transmission Control Protocol (TCP). However, TCP/IP offers a second Transport Layer service, User Datagram Protocol (UDP), that does not perform the end-to-end reliability checks.
- Network Layer
The Network Layer manages connections across the network and isolates the upper-layer protocols from the details of the underlying network. The Internet Protocol (IP), which handles the addressing and delivery of data, is usually described as TCP/IP’s Network Layer.
- Data Link Layer
The reliable delivery of data across the underlying physical network is handled by the Data Link Layer. TCP/IP rarely creates protocols in the Data Link Layer. Most RFCs that relate to the Data Link Layer discuss how IP can make use of existing data link protocols.
- Physical Layer
The Physical Layer defines the characteristics of the hardware needed to carry the data transmission signal. Features such as voltage levels and the number and location of interface pins are defined in this layer. Examples of standards at the Physical Layer are interface connectors such as RS232C and V.35, and standards for local area network wiring such as IEEE 802.3. TCP/IP does not define physical standards—it makes use of existing standards.
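The Transport Layer contrast described above (TCP’s end-to-end connection versus UDP’s connectionless datagrams) can be sketched with the standard Berkeley sockets API on the loopback interface. This is an illustrative sketch, not part of the text’s examples; the port numbers are chosen by the kernel and the payloads are arbitrary:

```python
import socket

def udp_roundtrip(payload: bytes) -> bytes:
    """Send one UDP datagram to ourselves; no connection is established."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))          # kernel picks a free port
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(payload, receiver.getsockname())
    data, _addr = receiver.recvfrom(4096)    # one datagram, one read
    sender.close()
    receiver.close()
    return data

def tcp_roundtrip(payload: bytes) -> bytes:
    """Connect first (the TCP handshake), then send a reliable byte stream."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(listener.getsockname())   # end-to-end connection setup
    server_side, _addr = listener.accept()
    client.sendall(payload)
    client.shutdown(socket.SHUT_WR)          # signal end of stream
    chunks = []
    while True:
        chunk = server_side.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    for s in (client, server_side, listener):
        s.close()
    return b"".join(chunks)
```

Note that the UDP exchange involves no connection setup at all, while the TCP exchange cannot transfer any data until `connect()` and `accept()` have completed the handshake.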
The terminology of the OSI reference model helps us describe TCP/IP, but to fully understand it, we must use an architectural model that more closely matches the structure of TCP/IP. The next section introduces the protocol model we’ll use to describe TCP/IP.
While there is no universal agreement about how to describe TCP/IP with a layered model, TCP/IP is generally viewed as being composed of fewer layers than the seven used in the OSI model. Most descriptions of TCP/IP define three to five functional levels in the protocol architecture. The four-level model illustrated in Figure 1-2 is based on the three layers (Application, Host-to-Host, and Network Access) shown in the DOD Protocol Model in the DDN Protocol Handbook Volume 1, with the addition of a separate Internet layer. This model provides a reasonable pictorial representation of the layers in the TCP/IP protocol hierarchy.
As in the OSI model, data is passed down the stack when it is being sent to the network, and up the stack when it is being received from the network. The four-layered structure of TCP/IP is seen in the way data is handled as it passes down the protocol stack from the Application Layer to the underlying physical network. Each layer in the stack adds control information to ensure proper delivery. This control information is called a header because it is placed in front of the data to be transmitted. Each layer treats all the information it receives from the layer above as data, and places its own header in front of that information. The addition of delivery information at every layer is called encapsulation. (See Figure 1-3 for an illustration of this.) When data is received, the opposite happens. Each layer strips off its header before passing the data on to the layer above. As information flows back up the stack, information received from a lower layer is interpreted as both a header and data.
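The encapsulation described above can be sketched with a toy model; the bracketed labels below are illustrative stand-ins, not real header formats:

```python
# Each layer treats everything from the layer above as opaque data and
# prepends its own header; the receiver strips headers in reverse order.
LAYERS = ["APP", "TCP", "IP", "ETH"]   # hypothetical labels, outermost last

def encapsulate(data: str) -> str:
    for layer in LAYERS[1:]:           # going down the stack
        data = f"[{layer}]" + data
    return data

def decapsulate(frame: str) -> str:
    for layer in reversed(LAYERS[1:]): # coming back up the stack
        header = f"[{layer}]"
        assert frame.startswith(header), "unexpected header"
        frame = frame[len(header):]    # strip this layer's header
    return frame
```

Running `encapsulate("data")` yields `"[ETH][IP][TCP]data"`: the last header added is the first one the remote stack removes.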
Each layer has its own independent data structures. Conceptually, a layer is unaware of the data structures used by the layers above and below it. In reality, the data structures of a layer are designed to be compatible with the structures used by the surrounding layers for the sake of more efficient data transmission. Still, each layer has its own data structure and its own terminology to describe that structure.
Figure 1-4 shows the terms used by different layers of TCP/IP to refer to the data being transmitted. Applications using TCP refer to data as a stream, while applications using UDP refer to data as a message. TCP calls data a segment, and UDP calls its data a packet. The Internet layer views all data as blocks called datagrams. TCP/IP uses many different types of underlying networks, each of which may have a different terminology for the data it transmits. Most networks refer to transmitted data as packets or frames. Figure 1-4 shows a network that transmits pieces of data it calls frames.
The Network Access Layer is the lowest layer of the TCP/IP protocol hierarchy. The protocols in this layer provide the means for the system to deliver data to the other devices on a directly attached network. This layer defines how to use the network to transmit an IP datagram. Unlike higher-level protocols, Network Access Layer protocols must know the details of the underlying network (its packet structure, addressing, etc.) to correctly format the data being transmitted to comply with the network constraints. The TCP/IP Network Access Layer can encompass the functions of all three lower layers of the OSI Reference Model (Network, Data Link, and Physical).
The Network Access Layer is often ignored by users. The design of TCP/IP hides the function of the lower layers, and the better-known protocols (IP, TCP, UDP, etc.) are all higher-level protocols. As new hardware technologies appear, new Network Access protocols must be developed so that TCP/IP networks can use the new hardware. Consequently, there are many access protocols—one for each physical network standard.
Functions performed at this level include encapsulation of IP datagrams into the frames transmitted by the network, and mapping of IP addresses to the physical addresses used by the network. One of TCP/IP’s strengths is its universal addressing scheme. The IP address must be converted into an address that is appropriate for the physical network over which the datagram is transmitted.
As implemented in Unix, protocols in this layer often appear as a combination of device drivers and related programs. The modules that are identified with network device names usually encapsulate and deliver the data to the network, while separate programs perform related functions such as address mapping.
The layer above the Network Access Layer in the protocol hierarchy is the Internet Layer. The Internet Protocol (IP) is the most important protocol in this layer. The release of IP used in the current Internet is IP version 4 (IPv4), which is defined in RFC 791. There are more recent versions of IP. IP version 5 is an experimental Stream Transport (ST) protocol used for real-time data delivery. IPv5 never came into operational use. IPv6 is an IP standard that provides greatly expanded addressing capacity. Because IPv6 uses a completely different address structure, it is not interoperable with IPv4. While IPv6 is a standard version of IP, it is not yet widely used in operational, commercial networks. Since our focus is on practical, operational networks, we do not cover IPv6 in detail. In this chapter and throughout the main body of the text, “IP” refers to IPv4. IPv4 is the protocol you will configure on your system when you want to exchange data with remote systems, and it is the focus of this text.
The Internet Protocol is the heart of TCP/IP. IP provides the basic packet delivery service on which TCP/IP networks are built. All protocols, in the layers above and below IP, use the Internet Protocol to deliver data. All incoming and outgoing TCP/IP data flows through IP, regardless of its final destination.
IP performs the following functions:

- Defining the datagram, which is the basic unit of transmission in the Internet
- Defining the Internet addressing scheme
- Moving data between the Network Access Layer and the Transport Layer
- Routing datagrams to remote hosts
- Performing fragmentation and re-assembly of datagrams
Before describing these functions in more detail, let’s look at some of IP’s characteristics. First, IP is a connectionless protocol. This means that it does not exchange control information (called a “handshake”) to establish an end-to-end connection before transmitting data. In contrast, a connection-oriented protocol exchanges control information with the remote system to verify that it is ready to receive data before any data is sent. When the handshaking is successful, the systems are said to have established a connection. The Internet Protocol relies on protocols in other layers to establish the connection if they require connection-oriented service.
IP also relies on protocols in the other layers to provide error detection and error recovery. The Internet Protocol is sometimes called an unreliable protocol because it contains no error detection and recovery code. This is not to say that the protocol cannot be relied on—quite the contrary. IP can be relied upon to accurately deliver your data to the connected network, but it doesn’t check whether that data was correctly received. Protocols in other layers of the TCP/IP architecture provide this checking when it is required.
The TCP/IP protocols were built to transmit data over the ARPAnet, which was a packet-switching network. A packet is a block of data that carries with it the information necessary to deliver it, similar to a postal letter, which has an address written on its envelope. A packet-switching network uses the addressing information in the packets to switch packets from one physical network to another, moving them toward their final destination. Each packet travels the network independently of any other packet.
The datagram is the packet format defined by the Internet Protocol. Figure 1-5 is a pictorial representation of an IP datagram. The first five or six 32-bit words of the datagram are control information called the header. By default, the header is five words long; the sixth word is optional. Because the header’s length is variable, it includes a field called Internet Header Length (IHL) that indicates the header’s length in words. The header contains all the information necessary to deliver the packet.
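As a small illustration (a sketch, not a full header parser), the IHL field occupies the low four bits of the header’s first byte, with the version number in the high four bits; converting it to bytes means multiplying by the 32-bit word size:

```python
# Read the Internet Header Length from the first byte of an IPv4 header.
def header_length_bytes(first_byte: int) -> int:
    ihl_words = first_byte & 0x0F   # low nibble = IHL, in 32-bit words
    return ihl_words * 4            # four bytes per word

# A minimal header with no options: version 4, IHL 5 -> 20 bytes.
minimal = (4 << 4) | 5
# A header carrying one word of options: version 4, IHL 6 -> 24 bytes.
with_options = (4 << 4) | 6
```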
The Internet Protocol delivers the datagram by checking the Destination Address in word 5 of the header. The Destination Address is a standard 32-bit IP address that identifies the destination network and the specific host on that network. (The format of IP addresses is explained in Chapter 2.) If the Destination Address is the address of a host on the local network, the packet is delivered directly to the destination. If the Destination Address is not on the local network, the packet is passed to a gateway for delivery. Gateways are devices that switch packets between the different physical networks. Deciding which gateway to use is called routing. IP makes the routing decision for each individual packet.
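This per-packet decision can be sketched as follows; the addresses and the gateway here are illustrative, not taken from the figures:

```python
import ipaddress

# Choose the next hop for a datagram: deliver directly if the
# destination is on the attached network, otherwise use the gateway.
def next_hop(dest: str, local_net: str, gateway: str) -> str:
    if ipaddress.ip_address(dest) in ipaddress.ip_network(local_net):
        return dest       # direct delivery on the local network
    return gateway        # hand the packet to the gateway
```

For example, with a local network of `172.16.12.0/24` and a gateway at `172.16.12.1`, a datagram for `172.16.12.2` goes straight to that host, while one for `10.0.0.5` goes to the gateway.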
Internet gateways are commonly (and perhaps more accurately) referred to as IP routers because they use Internet Protocol to route packets between networks. In traditional TCP/IP jargon, there are only two types of network devices—gateways and hosts. Gateways forward packets between networks, and hosts don’t. However, if a host is connected to more than one network (called a multi-homed host), it can forward packets between the networks. When a multi-homed host forwards packets, it acts just like any other gateway and is in fact considered to be a gateway. Current data communications terminology makes a distinction between gateways and routers, but we’ll use the terms gateway and IP router interchangeably.
Figure 1-6 shows the use of gateways to forward packets. The hosts (or end systems) process packets through all four protocol layers, while the gateways (or intermediate systems) process the packets only up to the Internet Layer where the routing decisions are made.
Systems can deliver packets only to other devices attached to the same physical network. Packets from A1 destined for host C1 are forwarded through gateways G1 and G2. Host A1 first delivers the packet to gateway G1, with which it shares network A. Gateway G1 delivers the packet to G2 over network B. Gateway G2 then delivers the packet directly to host C1 because they are both attached to network C. Host A1 has no knowledge of any gateways beyond gateway G1. It sends packets destined for both networks C and B to that local gateway and then relies on that gateway to properly forward the packets along the path to their destinations. Likewise, host C1 sends its packets to G2 to reach a host on network A, as well as any host on network B.
Figure 1-7 shows another view of routing. This figure emphasizes that the underlying physical networks a datagram travels through may be different and even incompatible. Host A1 on the token ring network routes the datagram through gateway G1 to reach host C1 on the Ethernet. Gateway G1 forwards the data through the X.25 network to gateway G2 for delivery to C1. The datagram traverses three physically different networks, but eventually arrives intact at C1.
As a datagram is routed through different networks, it may be necessary for the IP module in a gateway to divide the datagram into smaller pieces. A datagram received from one network may be too large to be transmitted in a single packet on a different network. This condition occurs only when a gateway interconnects dissimilar physical networks.
Each type of network has a maximum transmission unit (MTU), which is the largest packet that it can transfer. If the datagram received from one network is longer than the other network’s MTU, the datagram must be divided into smaller fragments for transmission. This process is called fragmentation. Think of a train delivering a load of steel. Each railway car can carry more steel than the trucks that will take it along the highway, so each railway car’s load is unloaded onto many different trucks. In the same way that a railroad is physically different from a highway, an Ethernet is physically different from an X.25 network; IP must break an Ethernet’s relatively large packets into smaller packets before it can transmit them over an X.25 network.
The format of each fragment is the same as the format of any normal datagram. Header word 2 contains information that identifies each datagram fragment and provides information about how to re-assemble the fragments back into the original datagram. The Identification field identifies what datagram the fragment belongs to, and the Fragmentation Offset field tells what piece of the datagram this fragment is. The Flags field has a “More Fragments” bit that tells IP if it has assembled all of the datagram fragments.
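A simplified sketch of fragmentation and reassembly follows. Real fragments are complete datagrams carrying a full copy of the IP header; here each fragment is reduced to a small record holding just the word-2 fields described above:

```python
# Split a payload into MTU-sized fragments, tagging each with an
# identification, a fragment offset (in 8-byte units, as in IP),
# and a More Fragments flag that is clear only on the last piece.
def fragment(payload: bytes, ident: int, mtu: int):
    assert mtu % 8 == 0            # offsets are counted in 8-byte units
    frags = []
    for start in range(0, len(payload), mtu):
        frags.append({
            "id": ident,
            "offset": start // 8,
            "more_fragments": (start + mtu) < len(payload),
            "data": payload[start:start + mtu],
        })
    return frags

def reassemble(frags):
    frags = sorted(frags, key=lambda f: f["offset"])
    assert not frags[-1]["more_fragments"]   # final fragment present
    return b"".join(f["data"] for f in frags)
```

A 40-byte payload crossing a network with a 16-byte MTU becomes three fragments with offsets 0, 2, and 4 (in 8-byte units); the receiver sorts by offset and concatenates.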
When IP receives a datagram that is addressed to the local host, it must pass the data portion of the datagram to the correct Transport Layer protocol. This is done by using the protocol number from word 3 of the datagram header. Each Transport Layer protocol has a unique protocol number that identifies it to IP. Protocol numbers are discussed in Chapter 2.
You can see from this short overview that IP performs many important functions. Don’t expect to fully understand datagrams, gateways, routing, IP addresses, and all the other things that IP does from this short description; each chapter will add more details about these topics. So let’s continue on with the other protocol in the TCP/IP Internet Layer.
An integral part of IP is the Internet Control Message Protocol (ICMP) defined in RFC 792. This protocol is part of the Internet Layer and uses the IP datagram delivery facility to send its messages. ICMP sends messages that perform the following control, error reporting, and informational functions for TCP/IP:
- Flow control
When datagrams arrive too fast for processing, the destination host or an intermediate gateway sends an ICMP Source Quench Message back to the sender. This tells the source to stop sending datagrams temporarily.
- Detecting unreachable destinations
When a destination is unreachable, the system detecting the problem sends a Destination Unreachable Message to the datagram’s source. If the unreachable destination is a network or host, the message is sent by an intermediate gateway. But if the destination is an unreachable port, the destination host sends the message. (We discuss ports in Chapter 2.)
- Redirecting routes
A gateway sends the ICMP Redirect Message to tell a host to use another gateway, presumably because the other gateway is a better choice. This message can be used only when the source host is on the same network as both gateways. To better understand this, refer to Figure 1-7. If a host on the X.25 network sent a datagram to G1, it would be possible for G1 to redirect that host to G2 because the host, G1, and G2 are all attached to the same network. On the other hand, if a host on the token ring network sent a datagram to G1, the host could not be redirected to use G2. This is because G2 is not attached to the token ring.
- Checking remote hosts
A host can send the ICMP Echo Message to see if a remote system’s Internet Protocol is up and operational. When a system receives an echo message, it replies and sends the data from the packet back to the source host. The ping command uses this message.
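Actually transmitting an echo message requires a raw socket, which on most Unix systems means root privileges, so the sketch below only builds the message bytes. It assembles an ICMP Echo Request (type 8, code 0, as defined in RFC 792) and computes the Internet checksum; the function names are our own.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """Build an ICMP Echo Request (type 8, code 0) as raw bytes."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field = 0
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = echo_request(ident=1, seq=1, payload=b"ping")
print(inet_checksum(msg))   # 0 — a correctly checksummed message sums to zero
```

The final print illustrates how a receiver validates the message: running the checksum over the whole message, checksum field included, yields zero if nothing was damaged in transit.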
The protocol layer just above the Internet Layer is the Host-to-Host Transport Layer, usually shortened to Transport Layer. The two most important protocols in the Transport Layer are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). TCP provides reliable data delivery service with end-to-end error detection and correction. UDP provides low-overhead, connectionless datagram delivery service. Both protocols deliver data between the Application Layer and the Internet Layer. Applications programmers can choose whichever service is more appropriate for their specific applications.
The User Datagram Protocol gives application programs direct access to a datagram delivery service, like the delivery service that IP provides. This allows applications to exchange messages over the network with a minimum of protocol overhead.
UDP is an unreliable, connectionless datagram protocol. As noted, “unreliable” merely means that there are no techniques in the protocol for verifying that the data reached the other end of the network correctly. Within your computer, UDP will deliver data correctly. UDP uses 16-bit Source Port and Destination Port numbers in word 1 of the message header to deliver data to the correct applications process. Figure 1-8 shows the UDP message format.
Why do applications programmers choose UDP as a data transport service? There are a number of good reasons. If the amount of data being transmitted is small, the overhead of creating connections and ensuring reliable delivery may be greater than the work of re-transmitting the entire data set. In this case, UDP is the most efficient choice for a Transport Layer protocol. Applications that fit a query-response model are also excellent candidates for using UDP. The response can be used as a positive acknowledgment to the query. If a response isn’t received within a certain time period, the application just sends another query. Still other applications provide their own techniques for reliable data delivery and don’t require that service from the Transport Layer protocol. Imposing another layer of acknowledgment on any of these types of applications is inefficient.
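The query-response pattern described above can be demonstrated with ordinary UDP sockets. In this self-contained sketch a trivial server echoes each query back on the loopback interface; the client treats the reply as its positive acknowledgment and simply re-sends the query if no reply arrives before the timeout. The server logic and message contents are invented for illustration.

```python
import socket
import threading

def serve_once(sock):
    """A trivial query-response server: echo each query back as the reply."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(b"reply:" + data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                  # let the OS pick a free port
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(0.5)                         # how long to wait for a reply

# The reply doubles as the acknowledgment; on timeout, just ask again.
reply = None
for attempt in range(3):
    client.sendto(b"query", server.getsockname())
    try:
        reply, _ = client.recvfrom(1024)
        print(reply)                           # b'reply:query'
        break
    except socket.timeout:
        continue                               # re-send the query

client.close()
server.close()
```

Note that no connection is ever established: each sendto() stands alone, which is exactly the low-overhead behavior that makes UDP attractive for this class of application.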
Applications that require the transport protocol to provide reliable data delivery use TCP because it verifies that data is delivered across the network accurately and in the proper sequence. TCP is a reliable, connection-oriented, byte-stream protocol. Let’s look at each of these characteristics in more detail.
TCP provides reliability with a mechanism called Positive Acknowledgment with Re-transmission (PAR). Simply stated, a system using PAR sends the data again unless it hears from the remote system that the data arrived OK. The unit of data exchanged between cooperating TCP modules is called a segment (see Figure 1-9). Each segment contains a checksum that the recipient uses to verify that the data is undamaged. If the data segment is received undamaged, the receiver sends a positive acknowledgment back to the sender. If the data segment is damaged, the receiver discards it. After an appropriate timeout period, the sending TCP module re-transmits any segment for which no positive acknowledgment has been received.
TCP is connection-oriented. It establishes a logical end-to-end connection between the two communicating hosts. Control information, called a handshake, is exchanged between the two endpoints to establish a dialogue before data is transmitted. TCP indicates the control function of a segment by setting the appropriate bit in the Flags field in word 4 of the segment header.
The type of handshake used by TCP is called a three-way handshake because three segments are exchanged. Figure 1-10 shows the simplest form of the three-way handshake. Host A begins the connection by sending host B a segment with the “Synchronize sequence numbers” (SYN) bit set. This segment tells host B that A wishes to set up a connection, and it tells B what sequence number host A will use as a starting number for its segments. (Sequence numbers are used to keep data in the proper order.) Host B responds to A with a segment that has the “Acknowledgment” (ACK) and SYN bits set. B’s segment acknowledges the receipt of A’s segment, and informs A which sequence number host B will start with. Finally, host A sends a segment that acknowledges receipt of B’s segment, and transfers the first actual data.
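Application code never sees the three segments; the kernel's TCP exchanges them inside the connect() and accept() calls. The loopback sketch below (server logic invented for illustration) shows where the handshake happens from the application's point of view.

```python
import socket
import threading

# The kernel's TCP performs the SYN / SYN-ACK / ACK exchange inside
# connect() and accept(); the application only sees a finished connection.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

def accept_one():
    conn, _ = listener.accept()   # completes the server side of the handshake
    conn.sendall(b"hello")
    conn.close()                  # FIN exchange tears the connection down

threading.Thread(target=accept_one, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # blocks until the handshake completes
data = client.recv(5)
print(data)                              # b'hello'
client.close()
listener.close()
```

When connect() returns, host A already has the positive evidence described below: the SYN, SYN-ACK, and ACK segments have all been exchanged, and data transfer can begin at once.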
After this exchange, host A’s TCP has positive evidence that the remote TCP is alive and ready to receive data. As soon as the connection is established, data can be transferred. When the cooperating modules have concluded the data transfers, they will exchange a three-way handshake with segments containing the “No more data from sender” bit (called the FIN bit) to close the connection. It is the end-to-end exchange of data that provides the logical connection between the two systems.
TCP views the data it sends as a continuous stream of bytes, not as independent packets. Therefore, TCP takes care to maintain the sequence in which bytes are sent and received. The Sequence Number and Acknowledgment Number fields in the TCP segment header keep track of the bytes.
The TCP standard does not require that each system start numbering bytes with any specific number; each system chooses the number it will use as a starting point. To keep track of the data stream correctly, each end of the connection must know the other end’s initial number. The two ends of the connection synchronize byte-numbering systems by exchanging SYN segments during the handshake. The Sequence Number field in the SYN segment contains the Initial Sequence Number (ISN), which is the starting point for the byte-numbering system. For security reasons the ISN should be a random number.
Each byte of data is numbered sequentially from the ISN, so the first real byte of data sent has a Sequence Number of ISN+1. The Sequence Number in the header of a data segment identifies the sequential position in the data stream of the first data byte in the segment. For example, if the first byte in the data stream was sequence number 1 (ISN=0) and 4000 bytes of data have already been transferred, then the first byte of data in the current segment is byte 4001, and the Sequence Number would be 4001.
The Acknowledgment Segment (ACK) performs two functions: positive acknowledgment and flow control. The acknowledgment tells the sender how much data has been received and how much more the receiver can accept. The Acknowledgment Number is the sequence number of the next byte the receiver expects to receive. The standard does not require an individual acknowledgment for every packet. The acknowledgment number is a positive acknowledgment of all bytes up to that number. For example, if the first byte sent was numbered 1 and 2000 bytes have been successfully received, the Acknowledgment Number would be 2001.
The Window field contains the window, or the number of bytes the remote end is able to accept. If the receiver is capable of accepting 6000 more bytes, the window would be 6000. The window indicates to the sender that it can continue sending segments as long as the total number of bytes that it sends is smaller than the window of bytes that the receiver can accept. The receiver controls the flow of bytes from the sender by changing the size of the window. A zero window tells the sender to cease transmission until it receives a non-zero window value.
Figure 1-11 shows a TCP data stream that starts with an Initial Sequence Number of 0. The receiving system has received and acknowledged 2000 bytes, so the current Acknowledgment Number is 2001. The receiver also has enough buffer space for another 6000 bytes, so it has advertised a window of 6000. The sender is currently sending a segment of 1000 bytes starting with Sequence Number 4001. The sender has received no acknowledgment for the bytes from 2001 on, but continues sending data as long as it is within the window. If the sender fills the window and receives no acknowledgment of the data previously sent, it will, after an appropriate timeout, send the data again starting from the first unacknowledged byte.
In Figure 1-11 re-transmission would start from byte 2001 if no further acknowledgments are received. This procedure ensures that data is reliably received at the far end of the network.
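The window accounting in Figure 1-11 reduces to simple arithmetic, sketched below with the figure's numbers (ISN 0, Acknowledgment Number 2001, window 6000, 1000-byte segments). The function name is our own; the rule it implements is the one stated above: the sender may keep transmitting as long as its unacknowledged bytes plus the next segment stay within the advertised window.

```python
def can_send(next_seq: int, ack: int, window: int, seg_len: int) -> bool:
    """May the sender transmit seg_len more bytes without
    exceeding the receiver's advertised window?"""
    unacked = next_seq - ack        # bytes sent but not yet acknowledged
    return unacked + seg_len <= window

ack, window = 2001, 6000            # receiver has acknowledged bytes 1..2000
next_seq = 4001                     # sequence number of the next segment

sent = []
while can_send(next_seq, ack, window, 1000):
    sent.append(next_seq)           # transmit a 1000-byte segment
    next_seq += 1000

print(sent)                         # [4001, 5001, 6001, 7001]
# The window is now full: bytes 2001..8000 (6000 bytes) are unacknowledged.
# If no ACK arrives before the timeout, re-transmission starts at byte 2001.
```

Each arriving ACK slides the left edge of the window forward (raising ack) and may re-advertise a new window size, letting the sender resume.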
TCP is also responsible for delivering data received from IP to the correct application. The application that the data is bound for is identified by a 16-bit number called the port number. The Source Port and Destination Port are contained in the first word of the segment header. Correctly passing data to and from the Application Layer is an important part of what the Transport Layer services do.
At the top of the TCP/IP protocol architecture is the Application Layer. This layer includes all processes that use the Transport Layer protocols to deliver data. There are many applications protocols. Most provide user services, and new services are always being added to this layer.
The most widely known and implemented applications protocols are:
- Telnet, the Network Terminal Protocol, which provides remote login over the network
- File Transfer Protocol (FTP), which is used for interactive file transfer
- Simple Mail Transfer Protocol (SMTP), which delivers electronic mail
- Hypertext Transfer Protocol (HTTP), which delivers web pages over the network
While HTTP, FTP, SMTP, and Telnet are the most widely implemented TCP/IP applications, you will work with many others as both a user and a system administrator. Some other commonly used TCP/IP applications are:
- Domain Name System (DNS)
- Open Shortest Path First (OSPF)
- Network File System (NFS)
Some protocols, such as Telnet and FTP, can be used only if the user has some knowledge of the network. Other protocols, like OSPF, run without the user even knowing that they exist. As the system administrator, you are aware of all these applications and all the protocols in the other TCP/IP layers. And you’re responsible for configuring them!
In this chapter we discussed the structure of TCP/IP, the protocol suite upon which the Internet is built. We have seen that TCP/IP is a hierarchy of four layers: Applications, Transport, Internet, and Network Access. We have examined the function of each of these layers. In the next chapter we look at how the IP datagram moves through a network when data is delivered between hosts.
 DCA has since changed its name to Defense Information Systems Agency (DISA).
 During the 1980s, ARPA, which is part of the U.S. Department of Defense, became Defense Advanced Research Projects Agency (DARPA). Whether it is known as ARPA or DARPA, the agency and its mission of funding advanced research have remained the same.
 Interested in finding out how Internet standards are created? Read RFC 2026, The Internet Standards Process.
 To find out more about FYI documents, read RFC 1150, FYI on FYI: An Introduction to the FYI Notes.
 In current terminology, a gateway moves data between different protocols, and a router moves data between different networks. So a system that moves mail between TCP/IP and X.400 is a gateway, but a traditional IP gateway is a router.