Chapter 4. Planning
When in trouble or in doubt,
Run in circles, scream and shout.
At this stage, we’ve already looked at a good deal of the theory behind IPv6. It’s now a good time to start thinking about the issues around deploying IPv6 in a wider environment, such as your company, college, or ISP network. In this chapter, we provide recommendations for what to think about and what to do when planning an IPv6 deployment; how to introduce IPv6 to your network, how to interoperate with IPv4, and planning for the growth of IPv6, all with an eye to maintaining stability and manageability on your network. We provide worked examples of IPv6 deployment for networks which are hopefully quite similar to yours, and also highlight under exactly what circumstances our recommendations are applicable. By the end of this chapter you should hopefully have a toolbox of techniques for implementing IPv6, and also the right mental framework for using that toolbox.
Since we will be talking in some detail about the planning process, it’s incumbent upon us to outline the important building-blocks and techniques of IPv6 network design before we talk about how we actually put them together. So, ahead of outlining step-by-step plans, we need to tell you about getting connectivity, getting address space, and the intricacies of selecting transition mechanisms, amongst other things. With those under your belt, you’ll be in a position to get the most out of the worked examples.
Note that a significant portion of this chapter is about network planning, and planning for larger networks at that. If you are more of a Systems Administrator than a Network Administrator, you may still want to skim this chapter before moving on to the later chapters. For those of you staying on, let's get stuck right into the detail.
Transition mechanisms are so called because they are ostensibly ways you can move your network to IPv6. In reality, IPv4 and IPv6 are likely to be co-operating on most networks for a long while, so they might better be called inter-operating techniques or IPv6 introduction mechanisms. In any event, there are quite a few of them, and they have a wide variety of capabilities. Some allow you to connect to the IPv6 Internet, even if intervening equipment only speaks IPv4 (tunnels, 6to4, Teredo). Some are suitable for providing internal IPv6 connectivity until your infrastructure supports IPv6 (tunnels, 6to4, ISATAP). Others help IPv4-only hosts communicate with IPv6-only hosts (NAT-PT, TRT, proxies). There are even some to help IPv4-only applications (Bump in the Stack/API).
While there is a plethora of mechanisms available, you will in all probability only need to understand and use a small fraction of them. (We provide a table that gives an overview later.) At a minimum, you’ll want to know about dual-stack, configured tunnels and proxies. You may want to browse through the others to see if they’ll be useful in your network.
The dual-stack transition mechanism is perhaps not as elegant as the others we will discuss, but it is common and useful, and many of the other mechanisms we'll talk about require at least one dual-stacked host. We expect that dual-stacking a network will be the way most people choose to deploy IPv6, unless they have unusual requirements.
As the name implies, dual stacking involves installing both an IPv4 and an IPv6 stack on a host. This means the host can make decisions about when connections should be made using IPv4 or IPv6; generally this is done based on the availability of IPv6 connectivity and DNS records. The IPv4 and IPv6 stacks can be, and often are, completely independent: logical interfaces may be numbered separately, brought up and down separately, and essentially treated as being separate machines.
One problem with the dual-stack method is that the shortage of IPv4 addresses means that you may not have enough to give to every host. There is a proposal called DSTM (Dual Stack Transition Mechanism) that allows for the temporary assignment of IPv4 addresses to nodes while they need them, so a large group of dual-stacked hosts can share a small number of IPv4 addresses, akin to dialup hosts sharing addresses out of a pool.
The fact that these dual-stacked hosts can originate and receive IPv6 and IPv4 packets is extremely powerful, allowing them to form a connection between IPv4 and IPv6 networks. We’ll look at ways in which this is possible next.
The principle behind tunnelling is the encapsulation of IPv6 packets in IPv4 packets. If you haven’t encountered this notion before, it might sound rather peculiar at first—wrap packets in other packets? But it’s actually a very powerful technique.
The central idea to understand is that just like Ethernet headers surround IP packets, which surround TCP and UDP headers, which surround protocols such as SMTP, you can just as easily insert another packet where a TCP packet would go and rely on the routing system to get it to the right place. As long as the receiving and transmitting ends have an agreed convention for how to treat these packets, everything can be decoded correctly and life is easy.
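To make the wrap-packets-in-packets idea concrete, here is a toy sketch of encapsulating an IPv6 packet in an IPv4 packet using protocol 41. It is purely illustrative: a real stack computes the IPv4 header checksum (left zero here for brevity) and handles fragmentation and options.

```python
import struct

IPPROTO_IPV6 = 41  # "IPv6 over IPv4", the protocol number used by tunnels

def encapsulate(ipv6_packet: bytes, src_v4: bytes, dst_v4: bytes) -> bytes:
    """Prepend a minimal 20-byte IPv4 header so the IPv6 packet can
    ride the ordinary IPv4 routing system to the far tunnel endpoint."""
    total_len = 20 + len(ipv6_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,          # version 4, header length 5 words
        0,             # type of service
        total_len,     # total length
        0, 0,          # identification, flags/fragment offset
        64,            # TTL
        IPPROTO_IPV6,  # payload is an IPv6 packet
        0,             # header checksum (not computed in this sketch)
        src_v4, dst_v4)
    return header + ipv6_packet

def decapsulate(ipv4_packet: bytes) -> bytes:
    """Strip the IPv4 header, recovering the inner IPv6 packet intact."""
    header_len = (ipv4_packet[0] & 0x0F) * 4
    assert ipv4_packet[9] == IPPROTO_IPV6
    return ipv4_packet[header_len:]
```

The important property is visible in the code: the inner IPv6 packet comes back out byte-for-byte unchanged, which is why the IPv4 leg of the journey is transparent to the IPv6 stacks at either end.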
Static tunnelling is meant to link isolated islands of IPv6 connectivity, where the networks are well-known and unlikely to change without notice. One example would obviously be branch offices—the Galway division of X Corp. has a dial-on-demand link to the Dublin branch with both IPv6 and IPv4 connectivity, say. The way it works is as follows: the egress points of the linked networks are configured to encapsulate IPv6 packets to specified IPv6 destinations through statically configured IPv4 gateways. The packets proceed over the normal IPv4 routing system and are decapsulated at the other end, with the IPv6 packet then being forwarded to the correct host by the IPv6 routing system. If a packet is lost or dropped in the IPv4 part of the forwarding system, the usual TCP or application retransmission mechanisms come into play, just as if the packet had been lost due to, e.g., an Ethernet glitch. The intention is that the IPv4 section of the journey happens in as transparent a fashion as possible to the IPv6 stacks and applications.
It’s important to note that this IPv4 forwarding is not happening over any kind of TCP or UDP “port”—it’s another protocol commonly referred to as IPv6 over IPv4.
So, where are you likely to see configured tunnels in practice? There seem to be three common situations, all used to work around pieces of IPv4-only infrastructure.
- ISP to customer

Here the link between you and your ISP, or some equipment along the path, only carries IPv4, so the ISP delivers your IPv6 connectivity through a configured tunnel over the existing IPv4 connection.
- Tunnel broker
Here your ISP may not be providing IPv6 support, and instead you get an IPv6 connection via a third party, known as a tunnel broker. There are many people who provide tunnels as a public service, such as http://www.freenet6.net/ and http://www.sixxs.net/.
- Linking internal sites
In some cases, sites within an organization may be joined by sections of network that aren’t IPv6 capable, and until they are upgraded tunnels are necessary to join up the sites. In these cases you have the option of putting the tunnel endpoints on either side of the IPv4-only blockage, or bringing all the tunnels back to a central point. Deciding which is appropriate probably depends on if you have centralized or autonomous IT management.
Example 4-1 shows how a configured tunnel is set up on a Cisco router. We'll leave the ins and outs of this until Section 4.1.2 later in this chapter, but you can see that it isn't a complex configuration and only involves specifying the IPv4 and IPv6 addresses of the tunnel end points.
!
interface Loopback0
 ip address 192.0.2.1 255.255.255.255
!
interface Tunnel1
 description Tunnel for customer BIGCUST
 no ip address
 ipv6 address 2001:db8:8:6::1/64
 tunnel source Loopback0
 tunnel destination 192.168.200.2
 tunnel mode ipv6ip
!
ipv6 route 2001:db8:70::/48 Tunnel1
!
RFC 2893 describes the encapsulation used for IPv6-in-IPv4 tunnels, and the notion of configured tunnels. It also describes the notion of automatic tunnelling. Here, the prefix ::/96 is set aside for IPv4-compatible addresses, where the rightmost 32 bits of the IPv6 address are considered to be an IPv4 address. IPv6 packets addressed to these addresses can be automatically encapsulated in an IPv4 packet addressed to the corresponding IPv4 address and tunnelled to their destination.
This means that two hosts that both speak IPv4 and IPv6 could talk IPv6 to one another, even if neither had a connection to the IPv6 Internet. While initially this might seem useful, the real question is why wouldn’t they just speak IPv4? In fact, automatic tunnelling has some security implications; for example, a host that replies to a compatible address may generate IPv4 packets, which may not be expected on the network. As a result compatible addresses are not usually assigned to interfaces, but are used as a way of indicating that IPv6 should be tunnelled. For example, setting the default IPv6 route to the IPv4 compatible address of a dual-stacked router would result in packets being tunnelled to that router.
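As a quick illustration of the address format (not the tunnelling itself): Python's `ipaddress` module accepts the ::a.b.c.d notation directly, and the structure described above is easy to verify.

```python
import ipaddress

# An IPv4-compatible address is the IPv4 address in the low 32 bits,
# under the ::/96 prefix
compat = ipaddress.IPv6Address("::192.0.2.1")

# The upper 96 bits are zero...
assert int(compat) >> 32 == 0
# ...and the low 32 bits are exactly the embedded IPv4 address
assert int(compat) & 0xFFFFFFFF == int(ipaddress.IPv4Address("192.0.2.1"))

print(compat)  # compressed form: ::c000:201
```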
In general, automatic tunnelling isn’t something that you will need to consider at a planning stage as anything more than a configuration device. Its close relative, 6to4, is something considerably more relevant, as we will see.
6to4 is a mechanism that lets you connect to the IPv6 Internet without any of the following:

- An upstream ISP supporting IPv6
- Applying for IPv6 address space
- Arranging a "tunnel" with another IPv6 user
The only thing a 6to4 user needs is a global IPv4 address, reachable on protocol 41. Again note that this is a protocol number, not a port number.
Here’s an example of how it works. Suppose that a 6to4 machine
is using IPv4 address
from a public allocation. By virtue of the fact that the machine has
this IPv4 address, by definition, it can also use the entire IPv6
get this address by taking the 6to4 prefix
2002::/16 and replacing bits 17 to 49 with
the 32 bits of the IPv4 address. Usually, the machine configures a
`’6to4” pseudo-interface which has a selected address from the 6to4
range of its IPv4 address. Other machines within the organization
can then be assigned addresses from the 6to4 range, and outgoing
packets should be routed to the host with the 6to4
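The prefix derivation is mechanical enough to sketch in a few lines of Python; the `sixto4_prefix` helper is ours, and the example address is from the documentation range used above.

```python
import ipaddress

def sixto4_prefix(public_v4: str) -> ipaddress.IPv6Network:
    """Derive the /48 a host earns from its public IPv4 address:
    the 6to4 prefix 2002::/16 followed by the 32 bits of the
    address, then 80 zero bits."""
    v4 = int(ipaddress.IPv4Address(public_v4))
    return ipaddress.IPv6Network((0x2002 << 112 | v4 << 80, 48))

print(sixto4_prefix("192.0.2.4"))  # 2002:c000:204::/48
```

A /48 gives you 65,536 subnets of your own, which is why a single public IPv4 address is enough to number an entire site.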
So, 6to4 automatically assigns you a range of addresses, but how can we get packets to and from the IPv6 Internet and your network?
Packets from the IPv6 Internet sent to an address in the range 2002:c000:0204::/48 will be routed to the nearest 6to4 relay router. Relay routers are routers that advertise routes to 2002::/16 into the local or global routing table, and they're connected to both the IPv4 and IPv6 Internet. The relay router looks at the 6to4 address, extracts the embedded IPv4 address, and encapsulates the IPv6 packet in an IPv4 packet addressed to 192.0.2.4. When the packet arrives at 192.0.2.4 it will be decapsulated and routed as a normal IPv6 packet according to the normal IPv6 routing rules within your organization. (The whole strategy might remind you of the tunnelling mechanism described in Section 4.1.3 earlier in this chapter.)
To get packets back to the IPv6 Internet from your 6to4 network, we need a relay router for the opposite direction. An IPv4 anycast address, 192.88.99.1, has been assigned for this job, so the default IPv6 route on your 6to4 router should be set to point to 2002:c058:6301::. This means that packets going to the IPv6 Internet will be encapsulated and sent to 192.88.99.1, which will be routed by the normal IPv4 routing system to the nearest 6to4 relay router with this anycast address. The relay router, which is again connected to both the IPv4 and IPv6 Internet, will forward the packet to the IPv6 Internet, and the packet will then make its way to its destination.
Figure 4-1 and Figure 4-2 illustrate how packets get from a 6to4 network to the IPv6 Internet and back again. 6to4 also allows a short-cut for packets between 6to4 networks, where they can be sent directly to the appropriate IPv4 address.
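The relay router's first step, recovering the tunnel endpoint from the destination address, is the inverse of the prefix derivation. A small illustrative helper (the function name is ours):

```python
import ipaddress

def embedded_v4(sixto4_addr: str) -> ipaddress.IPv4Address:
    """Extract the tunnel endpoint a 6to4 relay would use: bits 17
    to 48 of a 2002::/16 address are the embedded IPv4 address."""
    addr = int(ipaddress.IPv6Address(sixto4_addr))
    if addr >> 112 != 0x2002:
        raise ValueError("not a 6to4 address")
    return ipaddress.IPv4Address((addr >> 80) & 0xFFFFFFFF)

print(embedded_v4("2002:c000:204::1"))  # 192.0.2.4
```

This same extraction is what makes the short-cut between 6to4 networks possible: any 6to4 router can compute the peer's IPv4 address directly from the destination, with no relay involved.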
The details of 6to4 are explained in RFC 3056, but it was written before the allocation of the IPv4 anycast address, so RFC 3068 covers the allocation and use of the anycast address. We’ll cover the configuration of 6to4 in Section 5.5.2 in Chapter 5.
So, when is it a good idea to use 6to4? Well, 6to4 has advantages over configured tunnels for people who don’t have a fixed IP address. Specifically, your tunnel broker or ISP needs to know your IPv4 address if they are to route packets for a fixed IPv6 address space to you. If your IPv4 address keeps changing, then you need to keep updating their configuration. With 6to4, when your IPv4 address changes, so do your IPv6 addresses, and they implicitly have your new IPv4 address embedded in them. This makes them good for most kinds of dial-up and certain kinds of DSL user.
6to4 could also be used by an organization with fixed IPv4 addresses in the absence of an IPv6-capable ISP or nearby tunnel broker. Unfortunately, there are two disadvantages to using the technique here. First, you don’t know where the nearest relay router will be, and second, you may find it tricky to get reverse DNS for your 6to4 prefix. However, it does mean you don’t have to depend on a single tunnel broker.
An organization with a large IPv4 infrastructure might consider deploying 6to4 prefixes internally and using them to provide islands of IPv6 connectivity. They could also provide their own relay router to control the egress of IPv6 from the organization. See Section 6.6.1 in Chapter 6 for some advice on running a 6to4 relay router.
One peculiarity of IPv6 is that it is neither forward nor backward-compatible. In other words, IPv4-only hosts cannot communicate with IPv6-only hosts, and vice versa. Even on a globally reachable IPv4 host with a working IPv6 stack, the machine still cannot communicate with IPv6-only hosts unless you configure one of the transition mechanisms or provide native connectivity.
Various people are eager to fix this problem, and 6to4 and Teredo go a long way towards providing IPv6 client hosts with automatic connectivity to the IPv6 Internet. Dan Bernstein suggested a mechanism, dubbed AutoIPv6, to try to extend this to servers. The idea is that each IPv4 server with an IPv6 stack automatically configures a well-known 6to4 address, say 2002:WWXX:YYZZ::c0de. Then when an IPv6-only client tries to connect to a server that has only an IPv4 DNS record, it can generate the corresponding well-known 6to4 address and try to connect to that.
Dan’s argument was that as people gradually upgraded the software on their servers to a version including AutoIPv6, more of the IPv4 Internet would become available over IPv6 without any further effort being expended. To make AutoIPv6 happen would require a tweak in the DNS libraries on IPv6-only hosts and for vendors to arrange automatic configuration of 6to4 and the well-known address: a simple matter of tweaking boot-up scripts.
AutoIPv6 hasn’t been taken further than the idea stage yet. Some consideration probably needs to be given to how it would interact with firewalls, load balancers and other complex network hardware, as well as how it would impact native IPv6 deployment. However, it would seem that it could only improve the situation for IPv6-only hosts. We mention AutoIPv6 here mainly to highlight the problem of how to connect IPv6-only and IPv4-only hosts. We’ll see other possible solutions to this problem later in this section when we consider mechanisms like SIIT.
We know that there are many hosts that are stuck behind NAT devices, which can usually only deal with TCP, UDP and limited kinds of ICMP. As we have noted, configured tunnels and 6to4 make use of IPv4’s protocol 41, which is neither TCP nor UDP. This means that it may not be possible for NATed hosts to use tunnels, 6to4 or indeed any other mechanisms using odd protocol numbers.
Teredo is a mechanism that tunnels IPv6 through UDP in a way that should allow it to pass through most NAT devices. It is a remarkably cunning design, intended as a “last-ditch” attempt to allow IPv6 connectivity from within an organization where end hosts may not have any other suitable networking available.
The operation of Teredo is somewhat similar to 6to4, as it requires a certain amount of infrastructure, namely Teredo servers and Teredo relays. Servers are stateless and are not usually required to forward data packets. The main function of Teredo servers is to facilitate the addressing of, and communication between, Teredo clients and Teredo relays, so they must be on the public IPv4 Internet. They also occasionally have to send packets to the IPv6 Internet, and so need to be connected to it.
Relays are the gateways between the IPv6 Internet and the Teredo clients. They forward the data packets, contacting the Teredo servers if necessary. They must be on the IPv4 and the IPv6 Internet.
Much of the complication of Teredo involves sending packets to create state on the NAT device. These packets are given the name Teredo bubbles. Clients initially contact the Teredo server, allowing two-way conversation with it. The client forms an address that is a combination of the server's IPv4 address and the IPv4 address and port number allocated to the NAT device by this initial communication.
From then on, if a Teredo relay wants to forward packets to a Teredo client, it can contact the server to ask it to ask the client to send a packet to the relay. This packet will establish the necessary state on the NAT device to allow direct communication between the relay and the client.
Provision is also made for direct client-to-client operation and other optimizations, depending on the specifics of the NAT device you are behind. (There is a process a Teredo client can go through to determine what kind of NAT it is behind.)
Teredo uses the prefix 3FFE:831F::/32 and UDP port 3544. Since the IPv6 address assigned to a client depends on the server's address and the NAT's address, there is a possibility that it will change frequently, especially if the NAT's IPv4 address is dynamically assigned.
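The address layout just described can be unpacked mechanically. The following sketch assumes the field layout from the Teredo draft — server IPv4 address, 16 bits of flags, then the NAT's external port and address, each stored bit-inverted ("obfuscated") so that NATs won't rewrite them in passing — under the experimental prefix mentioned above. The function name is ours.

```python
import ipaddress
import struct

TEREDO_PREFIX = 0x3FFE831F  # the experimental /32 in use at the time

def parse_teredo(addr_str):
    """Unpack a Teredo address into (server IPv4, flags, NAT port,
    NAT IPv4). The port and client address are stored bit-inverted
    in the low 48 bits, so we undo that here."""
    raw = ipaddress.IPv6Address(addr_str).packed
    prefix, server, flags, obf_port, obf_client = struct.unpack("!IIHHI", raw)
    if prefix != TEREDO_PREFIX:
        raise ValueError("not a Teredo address")
    return (ipaddress.IPv4Address(server),
            flags,
            obf_port ^ 0xFFFF,
            ipaddress.IPv4Address(obf_client ^ 0xFFFFFFFF))
```

Reading the NAT's external port and address straight out of the IPv6 address is what lets a relay reach a client without keeping a lookup table.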
Christian Huitema from Microsoft is an important driving force behind Teredo. His draft describing the current state of Teredo's development is available at http://www.ietf.org/internet-drafts/draft-huitema-v6ops-teredo-03.txt. Microsoft is very interested in technology like Teredo because many Windows machines are stuck behind NAT devices, and Microsoft would like to be able to offer new technology and services to these machines and their users. Teredo is available as part of the peer-to-peer update for Windows XP, and though other vendors have not yet implemented it, it looks likely to become widely used. You can also get access to a preview of the server-relay technology component of Teredo—email email@example.com for more details. (Although Teredo is currently at a somewhat experimental stage, some code is already shipping.)
While Teredo is likely to become widely used in unmanaged networks as a way for a computer to connect itself to the IPv6 network, Teredo is the sort of technology that you don’t want to include in a deployment plan. Teredo is intended to be a last resort, used before any IPv6 infrastructure is available and when you have no access to a public IPv4 address. Your deployment should put infrastructure in place that eliminates the need for Teredo. However, if you are just trying to deploy IPv6 on your desktop and you’re stuck behind a NAT, then Teredo may be your only choice.
In the same way as you can have “IPv6 over Ethernet” or “IPv6 over token ring,” there is a mechanism to run an IPv6 network using IPv4 as the layer 2 transport, and this mechanism is called 6over4. This is different from tunnels and 6to4, because it aims to allow full neighbor discovery with the IPv4 network acting as a LAN. Remember, IPv6 makes use of layer 2 multicast, so 6over4 achieves this by using IPv4 multicast.
In the same way that Ethernet uses EUI-64 interface IDs, 6over4 needs a way to form interface IDs, so it uses the IPv4 address: a node with address 10.0.0.1 will end up with the link-local address fe80::a00:1. Similarly, there is a mapping between IPv6 multicast addresses and IPv4 multicast addresses: an IPv6 multicast group maps to an IPv4 multicast address of the form 239.192.Y.Z, where Y and Z are the last two bytes of the IPv6 group address. All this is explained in detail in RFC 2529.
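Both mappings are simple enough to sketch. The helper names below are ours; the rules are the RFC 2529 ones just described.

```python
import ipaddress

def sixover4_link_local(v4: str) -> ipaddress.IPv6Address:
    """6over4 interface IDs are simply the IPv4 address, so the
    link-local address is fe80:: with the address in the low 32 bits."""
    return ipaddress.IPv6Address(
        (0xFE80 << 112) | int(ipaddress.IPv4Address(v4)))

def sixover4_multicast(v6_group: str) -> ipaddress.IPv4Address:
    """An IPv6 multicast group maps to 239.192.Y.Z, where Y.Z are
    the last two bytes of the group address."""
    last16 = int(ipaddress.IPv6Address(v6_group)) & 0xFFFF
    return ipaddress.IPv4Address((239 << 24) | (192 << 16) | last16)

print(sixover4_link_local("10.0.0.1"))  # fe80::a00:1
print(sixover4_multicast("ff02::1"))    # 239.192.0.1
```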
In a way, 6over4 is a little like carrying IPv6 over MPLS, in that MPLS encapsulates IPv6 such that the internal details of the routing become invisible to the IPv6 layer 3 devices.
Since 6over4 is just another medium type that you can run IPv6 over, it doesn’t have any special prefix associated with it. (If you want to use 6over4 you have to get your address space and external connectivity from some other source.)
6over4 doesn’t really seem to have a lot of momentum, probably as a result of it requiring working IPv4 multicast infrastructure and the work on ISATAP, which provides many of the features 6over4 would have provided. It is also not widely implemented, so you probably do not need to consider it while planning your use of IPv6.
ISATAP is a rather funky acronym standing for Intra-Site Automatic Tunnel Addressing Protocol. The idea is very similar to 6over4, in that it aims to use an IPv4 network as a virtual link layer for IPv6. Probably the most important difference is that it avoids the use of IPv4 multicast.
To get this to work, ISATAP needs to specify a way to avoid the link-local multicast used by neighbor solicitation and router solicitation. To avoid the need for neighbor solicitation, ISATAP uses addresses with an interface ID of ::0:5EFE:a.b.c.d, which are assumed to correspond to an IPv4 "link-layer" address of a.b.c.d. Thus link-layer addresses on ISATAP interfaces are calculated as opposed to solicited.
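The calculation is trivial; a sketch (the helper name is ours):

```python
import ipaddress

def isatap_link_local(v4: str) -> ipaddress.IPv6Address:
    """Build the fe80::5efe:a.b.c.d link-local address an ISATAP
    interface derives from its IPv4 'link-layer' address."""
    iface_id = (0x5EFE << 32) | int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address((0xFE80 << 112) | iface_id)

print(isatap_link_local("192.0.2.7"))  # fe80::5efe:c000:207
```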
Avoiding multicast for router solicitations requires some sort of jump-start process that provides you with the IPv4 addresses of potential routers. It is suggested that these might be obtained from a DHCPv4 request or by looking up a hostname like isatap.example.com using IPv4 connectivity. Once the node has the IPv4 addresses of potential ISATAP routers, it can send router solicitations to each, encapsulated in an IPv4 packet. The routers can reply, and the nodes can configure addresses based on the advertised prefixes and their ISATAP interface IDs.
So, what does ISATAP buy us? Without an ISATAP router, it acts like automatic tunnelling, but using link-local ISATAP addresses of the form fe80::5EFE:a.b.c.d rather than IPv4-compatible addresses like ::a.b.c.d, allowing communication between a group of hosts that can speak IPv4 protocol 41 to one another. This could be a group of hosts behind a NAT, or a group of hosts on the public Internet.
With an ISATAP router you can assign a prefix to this group of hosts and the router can provide connectivity to the IPv6 Internet via some other means (either a native connection, a tunnel, 6to4 or whatever). So, the thing that defines which group of hosts are on the same virtual subnet is the ISATAP router they have been configured to use.
ISATAP has some nice features, especially if you are doing a sparse IPv6 deployment in a large IPv4 network. You may not want to manually configure tunnels or deploy IPv6 routers for each subnet that you are deploying an IPv6 node on. ISATAP lets you deploy a number of centrally located ISATAP routers which can then be accessed from anywhere in the IPv4 network without further configuration.
Note that you can achieve something quite similar to this with 6to4, where the anycast relay address 192.88.99.1 takes the place of the ISATAP router. However, with 6to4 the prefixes you use are derived from the IPv4 addresses, so if you are stuck behind a NAT you get bad 6to4 addresses. With ISATAP the interface ID is derived from the IPv4 address and the prefix comes from the ISATAP router, so you can give out real IPv6 addresses inside the NATed network.
The draft describing ISATAP can be found at http://www.ietf.org/internet-drafts/draft-ietf-ngtrans-isatap-22.txt. Unfortunately, ISATAP implementations are a bit thin on the ground at the moment: Windows XP supports ISATAP, and KAME and USAGI snapshots used to include ISATAP support, but its development is being hindered by intellectual property concerns.
The previous techniques we have discussed allow us to use IPv4 infrastructure to enable IPv6 hosts to talk to one another, or to the IPv6 Internet at large. SIIT is the first technique we’ll mention that’s intended to allow IPv4-only hosts to talk to IPv6-only hosts.
SIIT is Stateless IP/ICMP Translation. The idea is that it allows you to take an IPv4 packet and rewrite the headers to form an IPv6 packet and vice versa. The IP level translations are relatively simple: TTL is copied to Hop Limit, ToS bits to traffic class, payload lengths are recalculated and fragmentation fields can be copied to a fragmentation header if needed.
Since TCP and UDP haven’t really changed, they can be passed through relatively unscathed. However the differences between ICMPv4 and ICMPv6 are more significant, so SIIT specifies how to do these translations too.
There is one other tricky issue, which is how to translate addresses between IPv4 and IPv6. Getting an IPv4 address into an IPv6 address is straightforward, just embed it in the low 32 bits. Since IPv6 addresses are much larger, there’s not a lot of point trying to encode them in an IPv4 address, so some mapping must be done. NAT-PT and NAPT-PT are ways of doing this, which we’ll discuss in a moment.
Note that while SIIT involves copying lots of header fields around it doesn’t actually require any state to be kept on the translating box, other than the rule to map IPv4 address back to IPv6 addresses.
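The field-for-field flavour of the translation can be sketched as a pure function. The header dictionaries and the address-mapping callback below are illustrative conveniences, not an API from the RFC:

```python
import ipaddress

def siit_v4_to_v6(v4_header: dict, map_addr) -> dict:
    """Sketch of SIIT's stateless IPv4 -> IPv6 header translation:
    TTL becomes Hop Limit, ToS becomes Traffic Class, the payload
    length is recomputed, and addresses go through a mapping
    function supplied by the operator. No per-connection state
    is needed anywhere."""
    return {
        "version": 6,
        "traffic_class": v4_header["tos"],                 # ToS -> traffic class
        "payload_length": v4_header["total_length"] - 20,  # drop the v4 header
        "next_header": v4_header["protocol"],              # e.g. TCP stays 6
        "hop_limit": v4_header["ttl"],                     # TTL -> hop limit
        "src": map_addr(v4_header["src"]),
        "dst": map_addr(v4_header["dst"]),
    }

# Embedding an IPv4 address in the low 32 bits is the easy direction;
# the reverse direction is where NAT-PT and friends come in
embed = lambda v4: ipaddress.IPv6Address(int(ipaddress.IPv4Address(v4)))
```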
If you want to know the details of SIIT, see RFC 2765. Remember that SIIT is very definitely a translation technology: it takes IPv6 packets, removes all the IPv6 headers and replaces them with IPv4 headers. This is very different from tunnels, Teredo or ISATAP, which encapsulate IPv6 packets, retaining all their IPv6 headers. As we have all seen with IPv4 NAT, translation can cause problems with applications like FTP that transmit addresses internally. Also remember that if remote addresses of connections are logged by applications, then they will be translated addresses.
NAT-PT is an application of SIIT that allows the mapping of a group of IPv6 hosts to a group of IPv4 addresses, in much the same way that IPv4 NAT allows a group of IPv4 hosts using private addresses to use a group of public addresses. The extra PT in NAT-PT stands for protocol translation.
RFC 2766, which describes NAT-PT, also describes a "DNSALG" system for IPv6. DNSALG is a way of rewriting DNS requests and responses as they pass through the NAT system. This, in principle, means that a DNS query for an IPv6 host inside the NATed network can be translated to one of the IPv4 addresses in use on the NAT automatically.
We have to admit that we haven’t seen any NAT-PT devices in action, though there are both commercial and free implementations available.
TRT, Transport Relay Translation, is described in RFC 3142. It is similar in idea to SIIT, but rather than translating between IPv4 and IPv6 at the IP and ICMP levels, it translates at the transport level, i.e., TCP and UDP. A machine doing TRT will have some range of IPv6 addresses that it translates to a range of IPv4 addresses. When a TCP connection is made to one of these addresses, the TRT machine makes a TCP connection to the corresponding IPv4 address on the same port. Then, as TCP data packets are received, the data is forwarded on, and similarly for UDP.
TRT has the disadvantages of translation, mentioned in the previous section; however, it avoids certain issues related to fragmentation. It does require the storage of state associated with the ongoing TCP and UDP sessions, which SIIT does not. It also tends to be deployed on an application-specific basis; in other words, it doesn't try to translate every possible protocol. This may be an advantage or a disadvantage, depending on your setup!
Bump in the Stack/API
Bump in the stack (BIS) is basically another SIIT variant, but the motivation is slightly different. Suppose you have some piece of software that you want to use over IPv6, but you can’t get an IPv6-capable version of it. Even if you have great IPv6 connectivity, this software is pretty useless to you. BIS is a trick to make software like this usable.
Say the software tries to make a connection to www.example.com, which has the IPv6 address 2001:db8::abcd. When the software looks up www.example.com, the address mapper component of BIS picks an IPv4 address from a pool configured for BIS, say 192.168.1.1, to represent this host, and returns this IPv4 address to the software. The software then uses this address normally.
Meanwhile, BIS intercepts packets coming out of the IPv4 stack that are destined to 192.168.1.1 and uses SIIT to rewrite them as IPv6 packets destined to 2001:db8::abcd. Packets going in the opposite direction are similarly translated.
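The address mapper at the heart of this scheme can be sketched as a small bookkeeping class. The class, method names, and the pool of private addresses below are invented for illustration; the RFC does not mandate any particular pool.

```python
import ipaddress

class AddressMapper:
    """Toy version of the BIS address mapper: hand out IPv4
    addresses from a private pool to represent IPv6 destinations,
    and remember the mapping so the translation layer can rewrite
    packets in both directions."""

    def __init__(self, pool="192.168.1.0/24"):
        self._free = ipaddress.ip_network(pool).hosts()
        self._v6_to_v4 = {}
        self._v4_to_v6 = {}

    def fake_v4_for(self, v6_addr: str) -> str:
        """Called when a name lookup returns only an AAAA record."""
        if v6_addr not in self._v6_to_v4:
            v4 = str(next(self._free))  # grab the next free pool address
            self._v6_to_v4[v6_addr] = v4
            self._v4_to_v6[v4] = v6_addr
        return self._v6_to_v4[v6_addr]

    def real_v6_for(self, v4_addr: str) -> str:
        """Called when translating an outgoing IPv4 packet."""
        return self._v4_to_v6[v4_addr]
```

Note that the pool addresses never appear on the wire; they exist only between the application and the translation layer on the same host, which is why a private range is fine.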
There is a variant of BIS called Bump in the API (BIA). It operates in a similar way: an address mapper intercepts name lookup calls and returns a fake IPv4 address for IPv6 hosts. The application uses the address as usual. However, library functions such as getpeername know about these fake addresses and actually translate them to/from IPv6 addresses before proceeding as normal.
Bump in the stack is described in RFC 2767 and Bump in the API is described in RFC 3338.
Both BIS and BIA have the usual drawbacks associated with translation: embedded addresses cause problems and logging of addresses may be inaccurate. They do have some advantages over NAT and TRT though because they distribute the translation task to the end hosts and consequently may scale better.
Proxies are another way to connect IPv6-only networks to IPv4-only networks. Many people are already familiar with web proxies, where a web browser can be configured to make all requests to the proxy rather than directly to the appropriate web server. The web proxy then fetches the web page on behalf of the browser.
A web proxy running on a dual-stacked host can potentially accept requests over both IPv4 and IPv6 from web browsers and then fetch pages from both IPv4 and IPv6 servers, as required.
Proxying is not limited to HTTP either. A dual-stacked recursive DNS server behaves very similarly, accepting requests over IPv4 and IPv6 and answering those requests by making a sequence of requests to other DNS servers as necessary. Likewise, a dual-stacked SMTP server can receive mail for an IPv6-only domain and forward it as needed.
The main advantage of proxying is that it is a technology that is relatively familiar and it does not require any complex translation. The down side is that it can require some application support. Proxies are likely to be an important bridge between IPv4 and IPv6 for the foreseeable future.
We cover HTTP proxying in some detail in Chapter 7 (Section 7.3.4), and the issue of dual-stack DNS servers in Section 6.1.3 of Chapter 6. We also give an example of port forwarding, a form of proxying that can be used to get IPv4-only applications to talk over IPv6, in Section 7.12 of Chapter 7.
Summary of Transition Mechanisms
Since there are such a large number of transition mechanisms that have been identified as being useful for IPv6 deployment, we offer you Table 4-1. It provides a one sentence description of each. Table 4-2 gives a one-sentence “serving suggestion” for each of the mechanisms.
Table 4-1. Overview of transition mechanisms

Dual-stack: Run IPv4 and IPv6 on nodes.
DSTM: Dual-stack, but dynamically allocate IPv4 addresses as needed.
Configured tunnel: Virtual point-to-point IPv6 link between two IPv4 addresses.
Automatic tunnelling: Automatic encapsulation of IPv6 packets using "compatible addresses."
6to4: Automatic assignment of an IPv6 prefix derived from your IPv4 address, with relay routers linking 6to4 networks to the IPv6 Internet.
Teredo: IPv6 in UDP through a NAT.
6over4: Using IPv4 as a link layer for IPv6, using IPv4 multicast.
ISATAP: Using IPv4 as a link layer for IPv6, using a known router.
SIIT: Rules for translating IPv6 packets straight into IPv4.
NAT-PT: Using SIIT to do NAT with IPv4 on one side and IPv6 on the other.
TRT: Translating IPv6 to IPv4 at the UDP/TCP layer.
BIS: Using SIIT to make IPv4 applications speak IPv6.
BIA: Using a special library to make IPv4 applications speak IPv6.
Proxies: Using application-level trickery to join IPv4 to IPv6 networks.
Table 4-2. Transition mechanism serving suggestions

- Dual stack: Dual stack everything, if you have enough IPv4 addresses. Otherwise dual stack a few border devices.
- DSTM: Can be used instead of dual stacking border devices. Not that widely available.
- Configured tunnel: Use to hop over IPv4-only equipment.
- Automatic tunnels: Only used as a configuration device.
- 6to4: Good for isolated IPv6 networks (e.g., home/departmental networks).
- Teredo: A last resort for people stuck behind NAT.
- 6over4: Not widely deployed because of IPv4 multicast requirement.
- ISATAP: Useful for sparse IPv6 deployments within IPv4 networks.
- SIIT: Not deployed by itself.
- NAT-PT/TRT: Proxies are probably a cleaner solution, where available.
- Bump in the Stack/API: Getting software that supports IPv6 would be better.
- Proxies: Dual-stack proxies for SMTP, HTTP, and DNS will be important for some time.
Obtaining IPv6 Address Space and Connectivity
Getting IPv6 connectivity is in theory extremely easy. If you already have an existing IPv4 service, some of the tunnelling transition mechanisms discussed previously will suffice in the short term to get you connected to the greater IPv6 Internet. If you have no existing connection, or are looking to get an “IPv6-native” connection, you will have to talk to the ISPs serving your area. We will discuss the options here in greater detail later. Suffice it to say for the moment that getting IPv6 connectivity is approximately as hard as getting IPv4 connectivity.
Obtaining address space in IPv6 is also, in theory, extremely easy for the vast majority of organizations that might want it. The hard and fast rule is: go to your upstream provider and they will provide you with address space. This address space will come from the provider's own allocation, and is known as PA (Provider Aggregatable) space. Your upstream provider is determined by who you get your IPv6 connectivity from, so this may be your ISP, a tunnel provider elsewhere in the Internet, or even the 6to4 mechanism.
If your upstream provider is your ISP or a tunnel broker, they should tell you which prefixes to use. In the case of an ISP, you'll probably have to ask them to allocate you a prefix; in the case of a tunnel broker, you'll probably be allocated a prefix when the tunnel is initially configured.
If you have no upstream providers, you are either the kind of organization that should be looking at getting an allocation by talking to the RIRs directly, or the kind of organization that will never be using globally routable address space—a small office with specialist needs, perhaps, or an organization for which security is paramount. (Having said that, it's difficult to imagine an organization that would not want to connect to the Internet these days.)
Another source of addresses is the 6bone, the original IPv6 test network, though as 6bone addressing is being phased out, we could only recommend it in an emergency.
Finally, you can receive address space via a tunnel, which is a special case of simply getting it from an upstream provider, or a tunnel broker, a kind of a middleman for providing automatically generated tunnels. (We’ll talk more about that later.)
Let’s have a look at each of these mechanisms for getting addresses now.
Of course, the place of first resort for most organizations will be their upstream provider. Usually these providers will have some kind of form for you to fill in; this may even greatly resemble your RIR’s documentation, or reference it, so it might be useful for you to look at the RIR information in Section 4.2.5 later in this chapter.
If you have multiple upstream providers, and are worried about which you should pick, well, just get a prefix from each of them! IPv6 is designed for this.
Congratulations! You already have an IPv6 address space of your very own, by virtue of having addresses in the IPv4 Internet. We explained the mechanics of 6to4 in Section 4.2.2 earlier in this chapter and look at how to configure it in Section 5.5.2 of Chapter 5, but all you really need to know here is that if you have a public IP address 192.0.2.4, then the prefix 2002:c000:0204::/48 is yours, because 192.0.2.4 in hexadecimal is c0000204. For something small and quick, like making a particular web site reachable over IPv6 in a hurry, 6to4 can't be beat.
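The 2002: arithmetic above is easy to check mechanically. This short Python sketch (the function name is our own) derives the 6to4 /48 from any public IPv4 address using the standard library:

```python
# Derive a 6to4 prefix: 2002: followed by the 32 bits of the IPv4
# address, giving a /48. The function name is our own invention.
from ipaddress import IPv4Address, IPv6Network

def sixtofour_prefix(ipv4):
    v4 = int(IPv4Address(ipv4))            # e.g. 192.0.2.4 -> 0xc0000204
    prefix = (0x2002 << 112) | (v4 << 80)  # place those 32 bits after 2002:
    return IPv6Network((prefix, 48))

print(sixtofour_prefix("192.0.2.4"))       # the /48 from the example above
```

Note that Python prints the compressed form, so 2002:c000:0204::/48 appears as 2002:c000:204::/48; the two are the same prefix.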
The main downside of 6to4 from an operations point of view is that the procedures for delegating reverse DNS for 6to4 addresses aren't well defined yet. This means that if you choose 6to4 addressing, people won't be able to translate your IPv6 addresses into hostnames easily. Furthermore, the routing required to make these addresses work is not entirely within your control. These factors combine to make 6to4 unsuitable for serious production use.
6bone addresses are in the range 3FFE::/16 and were the original blocks of addresses assigned for testing IPv6 in the real world. These addresses are not so relevant these days, given the availability of "real" addresses from the RIRs. The current plan for 6bone addresses is that no new addresses will be assigned by the 6bone testbed after 1 January 2004, but existing addresses will remain valid until 6 June 2006 (full details of the phaseout are in RFC 3701). After this date it is anticipated that 3FFE::/16 addresses will no longer be routed in the Internet at large. (Note the long transition period—we can take from this that renumbering is not quite as easy as we would all like it to be.)
If you are starting from scratch, we cannot recommend using 6bone addresses today. If you already have 6bone addresses you are safe enough for now, but you will probably want to start thinking about obtaining addresses from your upstream provider or your local RIR.
Details of the 6bone are available at http://www.6bone.net/.
Only Intermittently Connected
What do you do if you have a sizable internal network, but you are only occasionally connected to the Internet, possibly using different upstream providers each time? You might be in this situation if you were in a country where Internet access was very expensive, or if you were running a wireless community network. One of the fundamental questions is: how do you number your machines internally in order to maintain internal connectivity when your externally allocated prefix goes away? Until recently, the answer was to use IPv6 site-local addressing, perhaps in combination with an internal dynamically updated DNS. Unfortunately, since the deprecation of site-local addressing by RFC 3879, you are probably on your own if you want to use this method.
Your realistic options for addressing occasionally connected networks at this point are the same as for the always-connected case: going to your RIR for an allocation, or going to a nominated upstream provider. If you are thinking of using address space without explicitly informing either an RIR or an ISP that you are doing this, don’t. This kind of behavior in IPv4 caused lots of trouble, and we’d like to forestall you even considering it.
In the case where going to your RIR isn't really practical (for one thing, it can cost significant amounts of money to obtain address space from an RIR) and going to an ISP isn't viable, then you are basically stuck. It is for these "corner cases" that we feel some kind of site-local scheme will be created. We speculate about what might happen in Section 9.1.1 in Chapter 9.
The acronym RIR stands for Regional Internet Registry, and currently there are only a handful of them in the world. They are the bodies collectively responsible for the administration and allocation of IP addresses to ISPs, enterprises, and end users of the Internet. Their jurisdiction is roughly geographical, with RIPE serving Europe, ARIN North America, LACNIC Latin America and the Caribbean, and APNIC the Asia-Pacific region, although there are overlaps and occasional inconsistencies that should be corrected as more RIRs are created. ARIN has traditionally absorbed the greater part of the issuance of addresses, not only in North America but also internationally in the regions not covered by other RIRs, because of North America's role in creating the early Internet.
Politically speaking, RIRs are bottom-up organizations—the policies and plans flow from the members of the organization, and these policies are debated in as fair and as open a manner as one could hope for. In theory, this gives any member the ability to create or influence policy, provided their arguments are lucid and well-phrased. In practice it is of course more difficult, but no superior mechanism has been developed, and there are some quarters in which the idea of democratic policy-making is viewed with dread; so bear this in mind while attempting to make sense of the paragraphs below.
Relevance to IPv6
As you may have guessed, since the RIRs have jurisdiction over IPv4 Internet address space, they have both de facto and de jure jurisdiction over IPv6 address space.
Throughout the lifetime of IPv6, the RIRs have evolved in their attitude towards it. Initially each RIR had a different and inconsistent policy; for example, ARIN used to charge for IPv6 address space as well as IPv4, a hurdle that has since been removed. Some commentators have remarked that a great abundance of address space, allocated in the main by ISPs rather than the RIRs, gives the RIRs much less to do, and effectively puts them out of a job. This, in combination with the traditional conservatism of network operators, may or may not go some way towards explaining the nature of IPv6 policies in the past. Thankfully, due to the efforts of various concerned people, more consistent IPv6 allocation policies have been approved and passed by the membership of the main RIRs. At the moment, that consistency seems to have been a useful intermediate stage rather than something the RIR communities were really insistent upon, since the RIRs are currently diverging in policy again. For this book, however, we are going to look at the current RIPE policy as it stands. Bear in mind that this may, and probably will, change over time—check your RIR's site for details!
RIR operations background
First, you are only going to be talking to RIRs if you are the person responsible for RIR dealings within your organization. End users do not have to talk to the RIRs in IPv6—they just go to their upstream ISP. In all likelihood, if you are in that position you already have an existing relationship with an RIR. You may, however, need to fill out an application for some IPv6 space, so we will deal with some of that detail here.
We deal with RIPE as a representative example of how to obtain IPv6 address space, since the policies are roughly harmonized. (As we said above, this is subject to change, but the direction of change appears to be towards more liberal policies rather than less.)
With respect to getting IPv6 address space in the region covered by RIPE (generally “Europe,” for large values of Europe) there are a number of documents to read and digest. The first is RIPE-261, accessible via the URL http://www.ripe.net/ripe/docs/ipv6-sparse.html.
This presents a nice overview of the address space allocation algorithm that RIPE uses to maximize aggregation, and better aggregation is one of the stated goals of IPv6. It is useful to have this out in the open, because it tells ISPs what their next allocation of addresses is likely to be. This makes planning easier for ISPs, even if other aspects of the policy change.
The current IPv6 policy in force is RIPE-267, which can be found at http://www.ripe.net/ripe/docs/ipv6policy.html. The policy states the conditions under which addresses are allocated, and also indicates which forms must be filled in with which information in order to actually apply. Since these things change very quickly we are not going to examine the specifics of these forms here.
- Be an LIR
This is reasonably self-explanatory. Your organization must be a Local Internet Registry, and be a member of RIPE already.
- Don’t be an end site
This is also self-explanatory. You must not be an end site—in other words, singly homed, providing no connectivity to anyone else; solely a leaf node.
- Provide IPv6 connectivity by advertising aggregated prefix
The requirement here is to plan to provide IPv6 connectivity to organizations to which you will assign /48s, by advertising that connectivity through your single aggregated address allocation. This is where it starts to get complicated. Disentangling this sentence gives us three main components: you must plan to provide the connectivity (if someone asks, you can't refuse them out of hand); you must assign /48s (which is to say, subnettable address space); and you must advertise these via the supernet you will get, and not a separate route for each /48.
- Plan to assign 200 /48s in two years
You must have a plan for making at least 200 /48 assignments to other organizations within two years. This is perhaps the most controversial element of the current policy. The number 200 is intended as a line in the sand—a semi-arbitrary demarcation point to designate some applications worthwhile and others not, because the new philosophy of the routing table requires being strict about who is allowed a top-level allocation and who is not. The point about /48s is that the organization in question can't just be an end user who could fit everything in a /64—there has to be some detail to the network, some subnetting. However, it's no news to anyone that if 200 customers requiring /48s have to be found, they will be, so it's not entirely clear what benefit accrues from requiring that specific number. It's probably best not to think of this number as necessarily a problem; rather, think of it as a motivation for finding something or somethings in your network to which 200 /48 assignments could be made, or will eventually have to be made.
The RIRs operate policy fora where elements of particular policies can be debated and, hopefully, changed. If you are looking to change anything you feel is unreasonable, you are positively invited to take part in these. One such forum is the IPv6 working group in RIPE, which can be found at http://www.ripe.net/ripe/wg/ipv6/. The address policy working group is also important; you can find it at http://www.ripe.net/ripe/wg/address-policy/.
- How do we address things?
- How do we route things?
- How do we name things?
The topic of primary importance is obviously addressing, but we will also talk about intra-site communication, multihoming, and VLANs. DNS we talk about primarily in Chapter 6. (For the moment, suffice it to say that you can put IPv6 addresses in the DNS just as well as IPv4 ones.)
Planning the addressing of networks in IPv6 is simpler than in IPv4. The algorithm is to first identify which networks under your control require distinct prefixes. You might assign different prefixes in order to apply different security or QoS properties to groups of addresses. When you've decided on your subnets, you then need to decide between automatic and manual addressing. In the automatic configuration scenario envisaged by RFC 2462, the addressing within a prefix is taken care of by the usual EUI-64 procedure. Conversely, in a manually configured situation, the same procedures for address allocation within a prefix will have to be followed as in IPv4: recording which machines have which addresses, and so on.
As in IPv4, you can of course still manually assign addresses. However, manual address assignment is unnecessary for many common pieces of network equipment. For example, assigning static addresses to desktops may be pointless if all desktop machines reside on one subnet and so can be identified by a single prefix.
For other portions of the network, such as firewalls, routers and some servers, manual address assignment may make sense. In this case, your organization's usual address management techniques should be followed. Of course, if you are using software to manage your address space, the software may have to be updated to understand IPv6. If you're looking for a free address management product that can handle IPv6, you might like to look at FreeIPdb, available from http://www.freeipdb.org/. Sadly, spreadsheets, which are in widespread use as IP-address registration tools, usually do not have a uniqueness constraint applicable to rows, making them next to useless for the purpose.
Why subnet? It’s commonly done when you are growing your network, either by having existing customers/users come along with more servers to number, or occasionally when merging networks or starting up. When more address space is available, it is often used to group machines by function, for example putting finance and engineering on different subnets. Being able to do this is a function of having enough address space, having planned correctly for growth, and being able to manipulate the netmask.
The netmask, or subnet mask to give it its family name, is always paired with the address of a host, and indicates the size of the network that it is directly connected to. It’s specified in terms of the number of bits in your prefix that are common to every machine on that network.
For example, in the network starting at 192.0.2.0 and with a subnet mask of /24, the first three octets—that's 24 bits—are shared. In IPv4, the very first and very last addresses are reserved, so you may assign addresses from 192.0.2.1 all the way to 192.0.2.254.
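If you'd like to check this kind of arithmetic mechanically, Python's ipaddress module knows the IPv4 rules; a quick sketch:

```python
# The /24 example above, verified with the standard library: hosts()
# skips the reserved first (network) and last (broadcast) addresses.
from ipaddress import ip_network

net = ip_network("192.0.2.0/24")
hosts = list(net.hosts())
print(hosts[0], hosts[-1], len(hosts))  # first, last, and count of usable addresses
```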
In IPv4, you need to size your subnet masks just right. If you assign too much address space to a LAN, the space is wasted and you might have to renumber. If you assign too little, the LAN will outgrow it and you will have to renumber.
Another example in IPv4: if you have a /16 in the above-mentioned CIDR format, you also have 256 /24s, and 65,536 /32s. If you were faced with a couple of server farms, a dialup network or two, and some hosting customers, the most appropriate way to subnet might be to divide the /16 into chunks depending on the current and anticipated future size of the subnetworks you need to number. So the servers might get /23s, the hosting customers /29s, and so on. The biggest mistake you can make is to arrive at a situation where you have underestimated growth, since that generally requires a non-contiguous allocation to be made from somewhere else in your address space, which adds another routing table entry to your internal routing protocol table, creates another address space disconnected from the first one with the same security requirements, and is generally regarded as Not a Good Thing. Similarly, over-estimating growth leads to inefficient allocation, wasted address space, problems with your RIRs, and so on. The optimal choice of subnetting effectively hedges bets of future growth against covering existing infrastructure efficiently.
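As a sketch of the kind of carving just described (the /16 here is an illustrative private block, not a real allocation):

```python
# Divide an illustrative /16 into differently sized subnets: /23s for
# server farms, /29s for hosting customers.
from ipaddress import ip_network

block = ip_network("172.16.0.0/16")            # a private /16 as a stand-in
all_23s = list(block.subnets(new_prefix=23))   # 128 possible /23s
server_farm = all_23s[0]                       # first /23 for the servers
# carve one /24 from the far end of the block into /29s for customers
last_24 = list(block.subnets(new_prefix=24))[-1]
customer_29s = list(last_24.subnets(new_prefix=29))
print(server_farm, len(all_23s), len(customer_29s))
```

Keeping the customer /29s at the far end of the block is one simple way of leaving contiguous room for the server farms to grow.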
However, in IPv6 the same problems do not occur. The RFC 3177 recommendation that a /48 be assigned to end sites in the general case means that pretty much everyone has 16 bits to work with when subnetting. This gives you much more flexibility to create subnets freely than in IPv4, where you were limited to just enough IP addresses to cover what you could justify two years in advance.
How come 16 bits? Because just as every site can have a /48, every subnet in a site can have a /64. That's another IETF recommendation. It's appropriate, nay encouraged, to assign a /64 to every subnet in your network, regardless of size. This is pretty shocking to those of us coming from the IPv4 CIDR world—we're so used to rationing addresses among networks that it's almost absurd to imagine "wasting" address space like this.
On the other hand, this is where the advantages of IPv6 really start to shine: by assigning a /64 to each network, you assign more address space than any network could possibly ever need, and therefore have much more confidence in the stability of your addressing plan. By allowing for 64 bits in the host part of the address, it's safe to use stateless autoconfiguration to hand out persistent addresses to servers and clients alike. Even for such minimal subnets as point-to-point links, where one would assign a /30 in IPv4, it's best to use a /64 to ensure that you don't encounter problems in the future with some incorrect assumptions about subnet size being made by your equipment. You have 65,536 of them to assign—feel free to use them.
Your addressing plan does not necessarily need to be complex. It is perfectly valid to split your /48 allocation into a bunch of /64s and start assigning them in sequence as need arises. (There are certainly worse ways to use address space.) Then again, you may wish to impose a certain amount of structure and aggregation on your plan. If you have four sites, you may split the /48 into four /50s, as in Table 4-3. Then, if you like, you could simplify routing between your four sites by advertising each /50 as an aggregate instead of the individual /64s. Of course, you would still only advertise the aggregate /48 to your upstream ISP.
That said, scalability is a key advantage of IPv6, and it would be unwise to carve up all of your address space without leaving room to manoeuvre. One way around this would be to assign /52s or smaller to each of the four sites, which should still leave more than enough room to assign /64s to each LAN, but will leave space in your allocation to allow you to grow your network further, or change your addressing plan completely without overlapping with already-used space (which should avoid conflicts during any transition period).
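Both plans are easy to sketch with Python's ipaddress module (the prefix here uses the 2001:db8::/32 documentation space as a stand-in for a real allocation):

```python
# Splitting a site /48 four ways: the snug plan (/50s) versus the
# roomier plan (/52s, leaving most of the block spare for growth).
from ipaddress import ip_network

alloc = ip_network("2001:db8::/48")         # documentation prefix as a stand-in
snug = list(alloc.subnets(new_prefix=50))   # exactly four /50s
roomy = list(alloc.subnets(new_prefix=52))  # sixteen /52s; use four, keep twelve
per_site_lans = 2 ** (64 - 52)              # /64 subnets available per /52 site
print([str(s) for s in snug])
print(len(roomy), per_site_lans)
```

Even the cautious /52-per-site plan leaves each site thousands of /64 LANs, which is the point of the argument above: you can afford to keep most of the /48 in reserve.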
Very much the same approach can be taken by a high-end provider that has been assigned a /32 by their RIR. Again we have 16 bits of address space to carve up, and in this instance aggregation between PoPs may be even more important. It's a matter of striking a balance: making sure that each PoP has more space than it will ever need, but that you leave room to assign further PoPs or regions in the event of unexpected growth.
There are a couple of resources that are worth investigating if you wish to look deeper into this topic: RFC 3531 on managing the assignment of bits of an IPv6 address block, and “Sipcalc,” an IP subnet calculator at http://www.routemeister.net/projects/sipcalc/.
DHCP is a prerequisite in sufficiently large IPv4 networks because of two very important features: its ability to automatically assign an address to any machine requesting one and to keep track of these assignments, which is stateful address assignment; and its ability to supply other network-related configuration information (such as DNS servers).
The position in IPv6 networks is slightly different. DHCPv6 is not an absolute necessity in IPv6 networks, particularly small ones, because the address assignment problem is taken care of by autoconfiguration, which as we remember is stateless address assignment, à la RFC 2462. However, for larger networks, and for cases where there is no other way to usefully configure certain kinds of information, DHCPv6 is a useful addition to the network manager's toolbelt.
The main point to consider is under what circumstances one would use DHCPv6. At the moment, router advertisements can give you prefix (that is to say, routing) information, and address autoconfiguration can (obviously) give you addresses. For most networks the key remaining piece is DNS information: nameservers to use, and default search domains. There are efforts underway to make DNS configuration information easier to obtain (for example, by creating specially scoped addresses for DNS servers within a site). You can read more about these in Chapter 9, but these ideas have not yet solidified. It's our expectation that you will have to keep using DHCPv6 in your network for DNS configuration information in the short term at least, although you can have autoconfiguration running in parallel for address generation with no problems.
Changes to DHCP for IPv6
After a long gestation period, DHCPv6 was finally born in RFC 3315. There are several changes from DHCPv4 worthy of note. Since broadcasts no longer exist in IPv6, the server receives messages on a well-known link-scoped multicast address instead, FF02::1:2, and uses new port numbers: UDP 546 and 547 instead of the old 67 and 68 "bootp" ports. The client also uses its link-local address to send queries initially, which illustrates a major conceptual difference: IPv6 nodes have addresses, valid, working addresses, by virtue of having a link. They can communicate on-link without DHCP, unlike IPv4 hosts. Furthermore, since it is necessary to support prefix deprecation, clients must continue to listen for server-originated reconfiguration messages, which can be used not only for prefix deprecation, but for changing anything there's a DHCP option for. These communications can be secured by a variety of means; RFC 3315 defines an MD5 authentication scheme between server and client, while IPsec is possible between relays and servers. Finally, if you want your clients to obtain their addressing information via DHCPv6, you must configure the RAs in your network to set the Managed Autoconfiguration flag. You would do this on IOS by setting ipv6 nd managed-config-flag on the relevant interface.
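For instance, a minimal IOS sketch might look like the following (the interface name is illustrative; check the syntax for your platform and IOS version):

```
interface FastEthernet0/0
 ipv6 nd managed-config-flag
 ! RAs on this LAN now set the M bit, telling hosts to use DHCPv6
```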
Some interesting developments are on the horizon, including the notion of securing DHCP transactions between client and server—not just relay and server—over IPsec, outlined in RFC 3118, and the notion of “local” DHCP options, which could be defined on the client to mean more or less anything the administrator wants. We advise you to keep track of the IETF DHC working group if you are interested in learning more.
Multihoming can, in fact, be done in IPv6 exactly the same way as it is done in IPv4, with network prefixes being advertised from multiple upstream providers, ensuring independent reachability in the event of link failure. There is nothing inherently “IPv4-esque” about multihoming, just as there is nothing inherent in IPv6 that makes that approach more or less difficult, apart from the increased size of addresses. In other words, this style of multihoming should be protocol-independent.
However, just because it can be done the way it was in IPv4 does not mean that it should. The designers of IPv6 have gone to some lengths to engineer the capability to move away from this model of multihoming, because although we know it works for a size of Internet up to the current one, it will certainly not scale greatly above that, and therefore something new is required. The concepts of multiple addresses and address selection introduced by IPv6 mean that new styles of multihoming are possible, and we examine them in more detail below. These new styles of multihoming fulfil the same set of goals as IPv4 multihoming, but they do require some extra effort to understand.
Unfortunately, we are not yet at a stage where these new methods of multihoming can be deployed on a production basis. In fact, there is a lot of momentum for rethinking the whole multihoming paradigm, and that kind of reworking is probably on the order of years before it is ready for implementation. We discuss contenders for the multihoming crown in Chapter 9, but we’ll talk a little bit about how multihoming works in general below, since it may have to be taken into account in your network design.
Multiple upstream providers, no BGP
In IPv4, a host with a single physical network interface generally has only one address. In IPv6, of course, any interface may have multiple addresses, perhaps provided by some combination of static configuration and router prefix advertisement. This allows a form of host-based multihoming, where the host decides which network to originate requests from, rather than an egress router making decisions based on information provided to it via BGP. On the plus side, there is obviously less overhead and complexity at the network level, since you do not have to maintain a routing table via BGP—and this can translate into savings in router hardware. On the minus side, in-progress connections are no longer independent of link failure, and a host encounters problems when trying to ensure optimal connection origination: either the host participates in routing and has the best possible information about making connections, in which case the load that was centralized is now multiplied all over your server farm; or the host makes decisions on incomplete information, and the optimality of the routing cannot be assured (indeed, it may be far from optimal).
Nevertheless, it is a viable option for certain circumstances, particularly those where money is at a premium, and incoming connections can be managed carefully to make link failure unimportant. For server farms that are dominated by traffic where connections are created and torn down quickly, such as web servers primarily using HTTP, it might even be termed suitable.
Furthermore, if you have your server farm management outsourced or hosted elsewhere, and you are not in control of network configuration, but your servers are configured to “hear” router prefix advertisements from your hosting provider, you may find yourself effectively availing of this service with little effort required on your part.
Decisions governing source address selection are covered in the “Address selection” section in Chapter 3, but to reiterate, the authoritative document is RFC 3484.
If you are the kind of web farm that is a content provider, then in this model, your responsibility is to advertise as many AAAA records for your web sites as possible, thus ensuring as much reachability as possible. See the address selection section for more details.
Multiple Upstream Providers, BGP
If you have multiple upstream providers, and have your own /35 or, these days, /32 to advertise (i.e., you're probably an ISP), then you can continue to speak BGP to your peers and upstream providers. If this is the case, operationally things are quite similar to IPv4.
Multiattaching is a term for connecting to the same ISP multiple times, and may be done with or without BGP. Multiattaching doesn't have a great reputation from the end-organization point of view, primarily because failure modes that take out your ISP still end up taking out your Internet connectivity, despite your having spent the money for multiple connections. However, it has some benefits—primary amongst them being that the Internet at large does not suffer from the extra AS and path bloat required when doing multiple-provider multihoming. For IPv6, it can also allow you to take different chunks of PA space from your upstream, meaning that a small degree of address independence is possible. Multiattaching is only useful under limited circumstances, however.
Managing IPv4 and IPv6 Coexistence
IPv4 and IPv6 will no doubt continue to coexist in your network for some years. Taking on this additional management burden successfully involves considering some entirely new questions, but many problems turn out to have answers that are simple extensions of the IPv4 answer. For the others, we outline what the best current practice consensus is, inasmuch as that is known!
- Bandwidth planning
Bandwidth planning is probably the least important of the considerations, but worth having a strategy for nonetheless. It's our expectation that since traffic is essentially driven by user needs—whether those needs are fulfilled over IPv4 or IPv6—there's probably going to be little enough variation in the bandwidth used. However, there is the chance that a wildly popular IPv6 application, say peer-to-peer networking à la Microsoft's Three Degrees, might appear, creating new demand for bandwidth. Also, if your IPv6 infrastructure is physically separate, you will obviously have to dimension it accordingly. If it is not separate, there may be the potential for IPv4 traffic to suffer at the hands of IPv6, or vice versa, if there is congestion.
- Network management
Incorporating network management is, despite implementation difficulties, relatively easy from a decision-making point of view. Either your commercial software package supports IPv6, or it doesn’t, in which case you’ll have to build a separate IPv6 management infrastructure (ouch) or get a new package. And if you have a home-grown set of tools, perhaps based on MRTG, Nagios, or the like, we’re pleased to inform you that IPv6 support is already in many open source management tools, and will be incorporated into more as time goes on. For example, one of the most widely deployed, Nagios, has IPv6 support in the 1.4.x series, currently in beta, but can easily have IPv6 support “retrofitted” by simply changing the ping command Nagios executes to monitor hosts into a ping6 command. Similar techniques can be used elsewhere if necessary.
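To make the retrofit concrete, it can be as simple as defining an alternative host-check command in the Nagios object configuration. The sketch below is an assumption-laden illustration: the file path, command name, and ping6 flags follow common conventions but should be checked against your own installation.

```shell
# Hypothetical sketch: define a Nagios host-alive check that uses ping6.
# The path /tmp/... is for illustration only; a real install would put
# this in the Nagios object configuration directory.
cat > /tmp/check-host-alive-v6.cfg <<'EOF'
define command{
        command_name    check-host-alive-v6
        command_line    /bin/ping6 -c 1 -w 5 $HOSTADDRESS$
        }
EOF

# Sanity-check that the generated definition really invokes ping6:
grep -q 'ping6' /tmp/check-host-alive-v6.cfg && echo "command uses ping6"
```

Hosts monitored with this command would then need IPv6 addresses configured in their Nagios host definitions.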
- Security considerations
Security considerations arise when there are two different ways to talk to your network devices, routers, etc. Unfortunately this part of managing the coexistence of these two protocols is often either ignored or worried about too much. Fortunately, some useful work has already been done on this, and you will find some of these issues discussed in Section 6.4 of Chapter 6, as well as later on in this chapter.
Fudging Native Connectivity with Ethernet
Frequently during our deployment planning we might run into equipment that does not support IPv6, and cannot be upgraded quickly. The IETF-supplied transition mechanisms, using various types of tunnel, are good ways around this problem, but they are not the only solution.
Since there is nothing intrinsically wrong with having separate routers on a LAN for IPv4 and IPv6, there are a variety of creative design hacks one can use to provide native connectivity around a difficult router, awkward firewall, or unhelpful layer 3 switch. If you treat the IPv4 and IPv6 networks as separate layouts sharing a single infrastructure, then a variety of options open up for providing IPv6 connectivity alongside IPv4, rather than fastidiously coupled to it.
This is a long-winded way of suggesting that you deploy a dedicated IPv6 router (or, as appropriate, IPv6 firewall) alongside your troublesome IPv4-only kit. We discuss this in greater detail in Section 6.6.3 of Chapter 6.
At this stage, we have looked at a good deal of the background information and deployment techniques relevant to IPv6. With this knowledge under our belt, it’s time to start thinking about applying it to your own situation. As with any process, deciding what to do is half the battle—and executing on those decisions is the other.
The first question to consider during the planning process is the motivation for the change. You’re thinking about enabling IPv6 in your network—why? Perhaps you have been handed a business requirement to support it by a certain date. Perhaps the standards for your specific network mandate it. Is there a technology trial planned? Or maybe you are an ISP who needs to deliver native IPv6 routing services to its edge networks. Indeed, perhaps customers are even asking for it!
Whatever the motivation is, it will help to establish what the important parts of the implementation are by identifying which areas of the network need attention. (For example, if you are converting your desktop network, the important parts are the desktop network itself, its path to the outside world, and its path to internal services.) You may be required to be more or less formal, depending on your organizational environment, but we would strongly recommend the production of some kind of document listing the existing network elements, describing their ability to support IPv6, and identifying which parts will need to run IPv6 in the future. This allows you to prioritize your rollout correctly. You will come back to this document many times during the deployment, so keep it safe.
With your network document in hand, you can then begin to construct a deployment schedule, keeping in mind your original motivation for the change. A deployment schedule is, at its simplest, a list of things to change and a time to change them. For organizations with change request procedures, the schedule should probably be submitted as one request, since while there may be many distinct changes in the plan, the motivation behind them all is the same. The change request system should hopefully take care of communicating what is being done and why within your organization.
For example, perhaps you have a requirement to IPv6-ify your desktop network. It might be that your desktop network is highly segregated—perhaps on its own VLAN. Modulo operating system support, the more segregated the network, the easier it is to turn on IPv6 for that specific piece of it. Conversely, for large flat networks, enabling IPv6 is a much larger job, purely because it’s much more of an all-or-nothing proposition, and incremental deploy-then-test methods are not applicable. An example deployment schedule for such a segregated VLAN might be as simple as “Switch over desktops to IPv6 capable stack on evening of 21st; allow one week to settle. Switch over main router to dual stack on evening of 28th; test outgoing and incoming IPv4 and IPv6 connectivity.” It should also have a section for fall-back, or reverting to the previous state of affairs if there is some kind of catastrophic failure.
The above raises some important general points. Any sufficiently large organization will have more than one person affected by what you are going to do. It is your responsibility to communicate about these changes, either through the change management process where that is appropriate, or directly to the stakeholders where necessary. Communication is a key element of any deployment plan, and IPv6 is no different. Tell everyone you can about what you’re doing, why you’re doing it, and when you expect it to be finished. Furthermore, a deployment plan for any new service, not just IPv6, should also have an operational component to it. How does this new service interact with what the help desk does already? To whom should calls or emails about it be directed? And so on. This final component of the generic IPv6 rollout we call an Operational plan, and it should list who will have to look after what you’ve done, and support it. The deployer should try to plan for the indefinite period of IPv4-IPv6 coexistence!
So, to reiterate: Decide why you are doing an IPv6 deployment. Identify what you need to change, and make sure everyone who cares knows what you’re doing, and when. Schedule, perform, and test those changes. Tell the operations folks what’s been done, and if you have network development and security folks, they need to know too. If caution dictates, do this as an incremental process so you can fully absorb the impact on your network. Always have a reversion plan in the highly unlikely event something goes very wrong. Finally, note that all of the above implies an already existing organization and an already existing network. For “green-field” setups, things are slightly different—we talk about those later.
Of course, this is a sadly generic deployment plan; you could use it for almost any big change. That doesn’t make it any less valid as a framework, but it is the details of each network, and of actually configuring a particular desktop or router to do IPv6, that would most readily cause a deployment to fail. We describe how to do the most common IPv6-relevant operations in Chapters 5, 6, and 7, which will hopefully be useful input into your plans. (For those looking for more concrete details of individual configurations at this stage, we recommend skipping ahead to Section 4.7 later in this chapter, where we present three important model deployments.)
Inputs to Deployment Plans
Now, however, we need to drill down into more specific analysis. Below we consider various influences on a deployment plan. We consider the most important case, existing IPv4 infrastructure, first, then talk about considerations around converting hosts and routers.
Existing IPv4 Infrastructure
This will be by far the most common starting point for IPv6 deployment, and will continue to be for years. The good thing is that IPv6 is, as designed, able to run in parallel on almost any kind of layer 2 media: Ethernet, ATM, and so on. This means that you can start with as minimal a deployment as you want, by connecting IPv6 capable hosts to your existing layer 2 infrastructure. Adding to or changing the IPv6 deployment is very easy, and as time goes on, the amount of administrator effort required for getting IPv6 up and running on new equipment will go down. The tricky element, obviously, is managing the two simultaneously.
As noted above, there are various transition mechanisms that can help with deploying IPv6. One of the most useful for low-overhead connectivity is the dual-stack approach, where the OS can communicate using each protocol (IPv4 and IPv6) separately. We find that in situations where performance is not absolutely paramount, having a dual stack means that experimenting with IPv6 becomes very easy, as we illustrate below. For situations where dual stack is not feasible, there are other mechanisms to deal with IPv6-only hosts, and we look at those too.
In summary, existing IPv4 infrastructure is in general no problem for a deployment plan. One very useful transition mechanism is running dual-stack, and we find it does not introduce interoperability problems.
Converting a host at a time: dual stack
At some point, you will want all of your equipment, where feasible, to be running IPv6. This is really just a matter of setting up the dual-stack system on each host. Obviously that’s a certain amount of work per machine, and while ad-hoc deployments may be suitable for small networks, for large networks being systematic is necessary.
One sensible way to proceed for converting hosts is to create a standard patch distribution for such old machines and operating systems as require it. Apply the patches via your standard systems maintenance or scheduled outage interface and then evaluate the change. (You may prefer to do this with a sacrificial machine or two first, if you run unusual applications or have a particularly unusual OS configuration.) Usually vendors will have extensively stress-tested their stacks before letting the public see them, but occasionally your situation may trigger an obscure problem, so it is wise to evaluate patches before rolling out. Having done that, you can, at your leisure, convert the rest of the hosts on the network. Another option is to allow IPv6 to be deployed as part of your normal upgrade cycle—once the operating system versions you install support IPv6, you can just deploy it with IPv6 enabled (again after appropriate testing).
The great benefit of this rolling dual-stacked deployment is that there is no flag day: in other words, a day where everything changes. Experienced network managers know that changes on massive scales quickly expose hidden dependencies that can make life highly exciting for hours or even days. Apart from standard scheduled outage management, the overhead of the gradual roll-out is really quite small. Obviously the more equipment converted in a single session, the more you can amortize the cost (in both time and money).
At the end of this process, you can have systems that can pick up addresses via IPv6 router solicitation and behave as if they were solid citizens of both the IPv4 and IPv6 Internet. This is an important stepping-stone on the way to implementing almost any deployment plan.
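One quick way to verify that a converted host really is a solid citizen of both Internets is to inspect the kernel's IPv6 state directly. The sketch below is Linux-specific (it reads /proc); other systems expose the same information through ifconfig -a or similar tools.

```shell
# Dual-stack sanity check on a freshly patched Linux host.
# /proc/net/if_inet6 lists every configured IPv6 address, including
# any acquired via router advertisement.
if [ -r /proc/net/if_inet6 ]; then
    echo "kernel IPv6 stack: present"
    # Fields: address (hex), ifindex, prefixlen, scope, flags, ifname.
    awk '{print $6, $1}' /proc/net/if_inet6
else
    echo "kernel IPv6 stack: missing"
fi
```

A host that reports the stack present but shows only the loopback address (::1) has IPv6 enabled but no on-link router answering solicitations, which is exactly the situation the Sun draft below warns about.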
Roy, Durand, and Paugh have a draft, http://www.ietf.org/internet-drafts/draft-ietf-v6ops-v6onbydefault-03.txt, about their experiences of turning dual-stack on by default within Sun. One key element they found was that dual-stack machines, numbered privately in IPv4, would experience problems when attempting to make IPv6 connections in networks with no on-link IPv6 routers.
From a network manager’s perspective, if you are rolling out dual-stack throughout a network, or if dual-stack is mandated for you, deploying as much on-link IPv6 infrastructure as possible is one obvious way to short-circuit many classes of performance or reliability problems experienced by these machines. There are other transition mechanisms that may also help, some relying on existing IPv4 infrastructure; you will find them discussed in Section 4.1 earlier in this chapter.
In summary, we feel the rolling dual-stack method to be quite well understood. Deployment plans that involve converting networks of desktop machines could use it with relatively small risk.
Connectivity and routers
One thing that you’ll want to consider before doing an organized large scale roll out of IPv6 is how to provide connectivity. Deploying one or two test hosts with their own tunnels or 6to4 connectivity is relatively easy and sensible. However, it is probably not a good idea to deploy a LAN of many hosts all with their own individual tunnels to the IPv6 Internet! As we saw in Chapter 3, IPv6 routers play an important part in the IPv6 configuration process, so if you are deploying more than a couple of hosts, consider configuring a router and using a tunnel or 6to4 on the router. If you don’t have dedicated router hardware, that’s fine: most operating systems that support IPv6 can be configured as a router.
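To make that concrete, here is a minimal sketch of turning a Linux box into the LAN's IPv6 router using radvd. The interface name and the documentation-range prefix are assumptions, radvd must be installed separately, and the sysctl requires root.

```shell
# Turn a Linux host into an IPv6 router: enable forwarding, then
# advertise a prefix with radvd. 2001:db8:1:1::/64 is a documentation
# prefix; substitute your own allocation, and eth0 your LAN interface.
sysctl -w net.ipv6.conf.all.forwarding=1 || echo "need root to enable forwarding"

cat > /tmp/radvd.conf <<'EOF'
interface eth0
{
        AdvSendAdvert on;
        prefix 2001:db8:1:1::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
};
EOF
# radvd -C /tmp/radvd.conf    # then start the advertisement daemon
grep -q 'AdvSendAdvert on' /tmp/radvd.conf && echo "radvd config written"
```

Hosts on the LAN will then autoconfigure addresses out of the advertised /64; the router itself still needs onward connectivity, whether native, tunnelled, or 6to4.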
Dedicated routers themselves raise different questions. Depending on your manufacturer, you may have to buy an OS upgrade in order to have an IPv6 capable machine. Before that upgrade is bought or borrowed you’ll need to do some planning. There are two points to be aware of when doing this planning. First, IPv6 dual stack obviously uses more CPU and memory resources than a single stack and sometimes routers don’t have much of either to spare. Check your vendor’s specifications to make sure the upgrade will fit on the router in question. (Router memory upgrades in particular can have unpleasant step functions in the financial resources required.)
The second issue that applies particularly to dedicated routers is that, because IPv6 is a younger protocol, the IPv6 path in a router can be less well optimized than the IPv4 path. For core networks that expect to process millions of packets per second, this can be catastrophic, and if your router has this issue, we advise you to look at other options such as using a separate router for IPv6 or using 6PE. Chapter 5 goes into more detail on the level of support in Cisco and Juniper routers. We’ll talk more later about network topologies and how to ship traffic around.
With all this in mind, the question arises as to whether to use one’s existing IPv4 router(s) for IPv6 traffic. As always, this decision comes down to a balance of tradeoffs. The two main cases to consider are WAN links and ingress/egress routing, and the issue with both is whether the safety and resilience of a separate infrastructure justifies the management and cost overhead of supporting that infrastructure, even if that infrastructure is a single PC with a tunnel.
If you use a separate router on your LAN for IPv6, then you can gain experience without fear of impacting your production IPv4 systems. As time goes on, however, you may find that this flexibility actually works against providing a reliable IPv6 service. It duplicates the administration and maintenance work involved and can create an easily-ignored support “ghetto” that busy staff without spare time will, despite best intentions, find themselves unable to gain experience with. Face it, who has spare time in this day and age?
Most importantly, it will also prevent you from taking full advantage of any IPv6 support that might become available on your WAN link. Workarounds like IPv6-in-IPv4 tunnels, while great to start with, don’t scale very well and are prone to failure in ways that a native network is not. Maintaining it, of course, invites all the same problems of support ghettos. Once you have confidence that you understand IPv6 and its potential impacts on your network, the ideal way to leverage your existing experience and its similarity with IPv4 is to deploy it exactly in parallel, and use all the same troubleshooting and monitoring mechanisms to maintain it.
As a result of this, our observation is that sites that experiment with IPv6 will typically start with an entirely separate infrastructure and move toward integration as time goes on and experience grows. On the other hand, sites looking to save money and have a deployment period with a fixed timescale, generally re-use existing infrastructure where possible.
Converting a host at a time: single stack
As above, this is a conversion process allowing your systems to run IPv6. However, in this case, you turn off the IPv4 stack when you have completed IPv6 configuration. This is a scenario that you would probably only contemplate when one part of a network that is already converted to IPv6 is working well or if you need to deploy a large number of hosts but don’t have the IPv4 address space available. The most important thing to remember is that routers and infrastructure service systems need to be in place first. IPv6-only machines that do not receive RAs are limited to purely local communication, so you need a working IPv6 router to communicate with the outside world. Even if you do have fully functional IPv6 connectivity, you may need to think about how you will reach IPv4-only sites (including most of the web and DNS servers currently on the Internet). Your conversion plan will therefore need to address these dependencies very carefully.
You will very probably encounter problems in the act of performing the conversion. You could expect the issues to broadly fall into the following categories:
- The IPv4 stack that wouldn’t die
In some cases, particularly with the older commercial operating systems, removing IPv4 is actually not yet possible. More accurately, removing it while retaining IPv6 can be problematic. However, with popular, more modern operating systems, we’re glad to say it is in general possible—for example, Windows allows you to bind and unbind protocols from an interface, and there was some work done on modularization of IPv4 in Linux. If you can’t actually remove IPv4 you can always choose not to configure any IPv4 addresses.
- Too simple
There may be devices within your network (one classic example being network-enabled printers) that only speak IPv4 and will only ever speak IPv4. In this case certain servers will need to retain their IPv4 addresses to front-end these devices.
Another possibility in this category is software that only supports IPv4 and an IPv6 version will not be available in the near future. In some cases it is possible to work around these issues; have a look at Chapter 7.
- Low service availability
The service that you thought was available over IPv6 turns out to be available in the approximately twenty minutes that it stays up without crashing. In this case it may be possible to isolate the users of the service such that they continue to use dual-stack hosts while the rest of the network moves toward IPv6 alone. Sometimes the crashing problem may be easy to fix: a programming or configuration error. Sometimes there is another daemon that effectively achieves the same thing: Samba instead of NFS for file sharing, for example.
We have to say that most of the IPv6 services we have deployed have a similar level of reliability to their IPv4 counterparts, which is not surprising given that the transport level is essentially the only thing which is changing.
Your system management process here involves the same test and rollout phase as before, only the dangers of removing IPv4 are significant—you are not only adding extra capabilities, you are removing old capabilities, and any users that were using the machine via IPv4, or any services that the machine needed to talk to over IPv4, had better be running on IPv6 also or things will get messy. For that reason alone it is probably best to run such infrastructure servers as are necessary (DHCP, DNS, and so on) on dual-stack until everything is running safely on IPv6.
In summary, if your deployment plan has an IPv6-only network in it, and it must communicate with an existing IPv4-only network, proxies or other front-ending should be deployed and tested first. If the IPv6-only network is “green-field” and does not need to communicate with IPv4 services, life is easier. We highly recommend dual-stacking infrastructure servers that provide DNS and DHCP. Additional single-stacked IPv6 servers performing the above functions are acceptable if the management and money overheads are acceptable.
No Existing IPv4 Infrastructure
At the moment, and probably for quite some time to come, this is the least likely scenario unless you are setting up a research lab. In many ways, since you have one less transport protocol to worry about, your life becomes much easier: there’s no need to have separate firewalling rules, separate routing or anything like that. However, until the time when significant parts of the Internet can be reached via IPv6, you are likely to want to communicate with IPv4 entities somewhere. There are a variety of ways to do this, some of which are covered in this chapter, Chapter 6, and Chapter 7. The most relevant question for this scenario is whether or not you can get IPv4 addresses on the edge of your network. If you can, then you have the option of using various dual-stacked proxy techniques or using a router to do some form of NAT or gatewaying. Otherwise, you may have to rely on an upstream proxy server or some other mechanism to gain access to the IPv4 Internet.
Generally, your choice will be whether to modify topology at layer 2 rather than layer 3. If things are routed in your existing network, there is generally a good reason for it (WAN links, security), and those reasons will be invariant under the application of IPv6. Of course, routers are a particularly crucial aspect of networking under both IPv4 and IPv6, which means it may not be possible to change them as easily as we might like. Layer 2 topology is relevant to intra-site communication, and may require one of the transition mechanisms to enable it properly. In the base case, IPv6 communication can flow naturally over normal switches, and as long as multicast is supported, everything should “just work.” If you want to separate IPv4 and IPv6 communication, choices begin to appear. You can do it at a VLAN level, in which case your hosts must support the 802.1q VLAN tagging protocol; rare, but not impossible. Examples of how you might do this may be found in Section 6.6.3 of Chapter 6.
Edge to core or core to edge
Historically speaking, it was envisaged that IPv6 would appear in networks in an edge-to-core direction: given that one of the main benefits of IPv6 was to number large networks natively, it was expected to be enabled where the maximum benefit accrued. In fact, our experience is that it is going mostly in the opposite direction: the core is only slowly being dual-stacked or otherwise enabled for IPv6, and the edges, which previously had to make do with tunnels, are switching over to native connections. The realization that most managers are somewhat scared of switching over a well-functioning core has prompted a move toward entirely separate IPv4 and IPv6 infrastructure. If existing IPv4 infrastructure and applications absolutely must not be disturbed, this is a good approach. In practice it is very rarely the case that you can have entirely separate infrastructure, especially when the expense of purchasing additional hardware is made clear. (There are of course still cases where it makes sense to buy additional network cards for a limited set of desktops or servers, and create a separate switch VLAN for them.)
Conversely, with an edge-to-core implementation, the key question is building support inwards. In the case of ISPs, for example, CPE can often be less flexible and upgrading it to support IPv6 may be problematic. DSL routers are perhaps the canonical example of this, but old equipment is a problem for everyone, not just ISPs. Allowing IPv6 to transit your core until it is natively enabled is a matter for transition mechanisms discussed elsewhere.
Router placement and advertisement
There are three basic arrangements to consider:
- Same IPv4/IPv6 router, with the same exit route (i.e., native onward connectivity).
- Same IPv4/IPv6 router, with a different exit route (e.g., via a tunnel).
- Separate IPv4 and IPv6 routers (e.g., Figure 6-1).
These differences are important when considering your onward connectivity, but they will be transparent to the end host. In a flat (broadcast) network, such as a single LAN, your router’s announcements will ensure that every IPv6-capable host receives an address and connectivity. If you happen to have more than one router on your LAN, both will announce themselves; if they are advertising different prefixes then your hosts will receive separate addresses from each.
Also be aware that if your prefix changes from time to time—for example, if you use 6to4 with a dynamic IPv4 address as the endpoint—then the addresses of all your hosts will change as well. This should happen fairly transparently, but you will need to set the lifetime of the advertised prefixes just right; long enough to overcome network instabilities, but short enough to time out when they are no longer valid.
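In radvd, for instance, the knobs in question are the per-prefix lifetime options. Here is a sketch with deliberately short values; the 6to4 prefix shown corresponds to the example IPv4 address 192.0.2.4 and is purely illustrative, as are the interface name and file path.

```shell
# Shorten advertised prefix lifetimes so that stale addresses (after a
# renumbering, or a mispatched router) age out quickly.
cat > /tmp/radvd-lifetimes.conf <<'EOF'
interface eth0
{
        AdvSendAdvert on;
        prefix 2002:c000:0204:1::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
                AdvValidLifetime 1800;     # seconds; radvd's default is 86400
                AdvPreferredLifetime 900;  # seconds; radvd's default is 14400
        };
};
EOF
grep -q 'AdvValidLifetime' /tmp/radvd-lifetimes.conf && echo "lifetimes set"
```

The tradeoff is exactly the one described above: too short and a brief router outage deprecates every address on the LAN, too long and bogus addresses linger after a mistake.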
While we like to insist that IPv6 is just like IPv4 in all the best ways, there are some interesting consequences to router advertisement that can catch you out if you use VLANs extensively. When a router turns up on a network, it will typically announce itself and start assigning addresses. If the router is not on the network it is supposed to be on—for example, by being plugged into a switchport on the wrong VLAN—it will start handing out addresses that will, briefly, work (for small values of “work”—they’re not likely to be in the DNS and might not match any access control lists you or others have defined).
When the operator notices the error and pulls out the patch cable, the addresses will suddenly stop working, but they will hang around until they time out, and chances are that the machines that have them will continue to try to use them. Since mistakes happen, you might want to consider configuring reasonably short timeouts for router advertisement; after all, if the router does go away for a bit, its addresses aren’t going to be much use anyway. Note also that, even when correctly configured, leakage of packets across VLAN boundaries is a well-documented feature of network equipment.
It all gets even more interesting if your router or switch runs a trunking protocol such as VTP. Rather than simply not working when plugged into a non-VTP port, it’s likely that traffic to the default VLAN will still get through, and you’ll start getting addresses from somewhere, and they’ll almost certainly be the wrong ones.
In summary, we have shown a number of the possible influences on a deployment plan. You will need to consider at a minimum addressing, routing and naming in your deployment plan, as well as organizational concerns such as who will pay for it, and who will support it.
In this section we present an overview of the deployment of IPv6 in some representative networks. We look at both the technical and organizational aspects of same. The first example we look at is that of an enterprise-class IPv4-connected network, the second a transit ISP, and the third—a special case—an Internet Exchange Point.
Enterprise-class IPv4-connected network
- Step 1
XYZ Corp, a company owning its own network, decides to implement a pilot IPv6 program to provoke a thorough audit of their in-house applications, which recently demonstrated fragility in the face of network instability. The pilot IPv6 programme will establish the minimum necessary IPv6 connectivity to test the applications on the internal desktop and server networks. External IPv6 connectivity is not absolutely required but will be delivered if possible.
The development team are instructed that when they are going through the code-base for the company applications, they should alter the code to be address independent and to be more resilient to failures. The implementation team have to deliver a working IPv6 platform not for the development team, who are anticipated to take quite some time when reworking the code, but for the testing team, so there is ample time for the deployment to take place.
- Step 2
The deployment team begin the communication process by running an internal IT staff course in IPv6; they might use this book, vendor materials, and so on. They set up a machine for the IT department which has a tunnel via a tunnel broker, enabling them to become familiar with addressing, routing, and new features like router solicitation in an environment where it doesn’t particularly matter whether connectivity is up or down. (Attempting to deploy a new protocol where a sizable proportion of staff have never executed any IPv6 related command is not recommended.)
They begin the network analysis process, and arrive at the conclusion that three things need to change: desktop network, server network (which are both separately addressed and routed networks in IPv4, and should remain so in IPv6) and egress routing. The company has decided that fiddling with their single egress router is not something they want to do, and therefore elects to get external IPv6 connectivity via some spare commodity kit they have lying around. Neither do they want to dual-stack all the internal routers between the egress router and the desktop network in question, so they decide on tunnelling as a “quick fix.”
- Step 3
The network design process results in an addressing architecture and subnetting architecture that looks very similar to the existing IPv4 network, except that where an existing RFC 1918 /16 was used for the internal network, the company’s upstream ISP agrees to supply them with a tunnel and a /48 from their PA space. From an addressing point of view, they assign a single /64 to each WAN link for their remote offices, who are not yet IPv6 enabled, and reserve /64s for their server and desktop networks. Any tunnels between routers will also be numbered out of consecutive /64s. While it may not be optimal, it should work. The formal deployment plan now consists of commissioning a tunnel-capable router, dual-stacking the internal router between the desktop and server networks, dual-stacking the desktop network, and then dual-stacking the server network, with approximately a week’s worth of testing between each step. Internal IT staff are reluctant to push an IPv6 stack into the standard patching methodology, so a supervised manual install and reboot of approximately 300 workstations is done by ten volunteers, which goes slowly but without incident. Simultaneously with this, a spare Cisco 3600 series is found, and connected to the DMZ which hangs off the existing router. Tunnels are brought up to the outside world, and to the router of the internal desktop network, for which delicate holes are punched in the firewalls. Both are found to be working.
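The subnetting arithmetic in a plan like this is pleasantly mechanical: consecutive /64s carved out of a /48 differ only in the fourth 16-bit group. A sketch, with a documentation prefix standing in for the ISP-supplied /48:

```shell
# Enumerate the first few /64 subnets of a delegated /48.
# 2001:db8:100::/48 is an invented stand-in for the ISP's PA block.
prefix48="2001:db8:100"
for i in 0 1 2 3; do
    printf '%s:%x::/64\n' "$prefix48" "$i"
done
# Prints:
#   2001:db8:100:0::/64
#   2001:db8:100:1::/64
#   2001:db8:100:2::/64
#   2001:db8:100:3::/64
```

With 16 bits of subnet ID to play with, there are 65,536 such /64s in the /48, so reserving blocks for WAN links, tunnels, servers, and desktops costs essentially nothing.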
- Step 4
Internal IT staff balk at the notion of a full conversion of the existing server farm, so only four servers are converted: the two on which the server side of the application runs, and the DNS/DHCP servers. IPv6 addresses are kept in AAAA records in the same internal zone on the same internal DNS servers—no IPv6 is exposed to the outside world. The server upgrade exposes a bug in one of the applications being rewritten: if it makes a AAAA DNS request and does not get an answer, it returns a strange error to the user instead of falling back to A requests.
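The correct client behavior that the buggy application missed (query AAAA first, then fall back to A) can be sketched with getent on a glibc system; the hostname below is illustrative.

```shell
# Resolve a name preferring IPv6, falling back to IPv4 if no AAAA
# answer comes back -- the behavior the buggy application lacked.
host=localhost
addr=$(getent ahostsv6 "$host" 2>/dev/null | awk 'NR==1 {print $1}')
if [ -z "$addr" ]; then
    addr=$(getent ahostsv4 "$host" 2>/dev/null | awk 'NR==1 {print $1}')
fi
echo "using address: $addr"
```

Real applications would typically get the same effect by iterating over the full getaddrinfo() result list rather than shelling out, but the ordering principle is identical.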
- Step 5
The development team now has enough IPv6 service to test their reworked application, and solicits feedback. The project is declared closed until it is re-opened by a later management fiat.
Transit-providing medium-size ISP
- Step 1
Management in the company decides that it is time to gain experience with IPv6. While there hasn’t been much direct customer demand to date, there are a couple of large influential clients that have it on their long-term radar, and there is a need to gain understanding now so as to avoid buying new equipment that might impede IPv6 deployment over its lifetime in the network.
A single individual is tasked with the job of gaining familiarity with IPv6, setting up a small test network (one router, one server and connectivity to the IPv6 Internet) and beginning the process of educating the rest of the operations staff.
- Step 2
The “IPv6 expert” procures a UNIX-based server and a spare Cisco 7200 router with Ethernet and ATM connectivity. An IPv6-in-IPv4 tunnel is configured to one of the ISP’s peers who have already set up IPv6, and address space is obtained from them. At the same time, the ISP begins the process of requesting IPv6 address space from RIPE, which involves preparing a deployment plan.
After connectivity is successfully set up, a variety of IPv6-capable services are configured on the server, including a web server (Apache 2), an SMTP daemon (Exim), an IMAP server (Courier IMAP) and a DNS server (Bind 9). The server is placed in the domain ipv6.ISPNAME.net, and acts as the primary DNS server for that domain. The ISP asks its (IPv4-only) DNS secondaries to carry the forward and reverse DNS zones, thereby checking on an isolated subdomain whether the addition of IPv6 records causes any unexpected problems.
To begin the very first stages of integration, IPv6 connectivity is enabled on the local office LAN of the operations centre by means of VLAN trunking on the Cisco 7200. IPv6 is then enabled manually one machine at a time on the LAN, and any problems are noted and dealt with.
A policy is instituted that any new network equipment bought must either be IPv6 capable, or have a roadmap for native IPv6 connectivity in a short timeframe.
- Step 3
Having gained experience with the initial deployment, it is time to begin expanding the network and taking the first steps to integration. Expertise begins to grow throughout the company.
The ISP receives its own address space from RIPE and, while the deployment is still small, renumbering begins. This involves developing an addressing plan that will scale into the future. The organization has been granted a /32 prefix from RIPE. In the addressing plan, one-quarter of this (a /34) is assigned for the deployment project, with the rest reserved for future use. This space is then divided into four chunks of size /36 each, one for each region in which the network operates.
In line with the rules of their Regional Internet Registry, the plan then allocates one /48 to each PoP in the network. Note that these are not configured yet and may not be for quite some time; they are reserved in the addressing plan for when that time comes. As infrastructure in any one PoP is dual-stacked, addresses are assigned from the appropriate block for that PoP. Customers in each region will be given allocations from the corresponding /36, which will allow the routing protocol to aggregate announcements between PoPs.
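The /32, /34, /36, and /48 levels of this plan can be checked with a short Python sketch, using the 2001:db8::/32 documentation prefix as a stand-in for the actual RIPE allocation.

```python
import ipaddress

# The documentation prefix stands in for the RIPE allocation.
allocation = ipaddress.ip_network("2001:db8::/32")

# One-quarter of the /32 is a /34; the first quarter goes to the project.
quarters = list(allocation.subnets(new_prefix=34))
project, reserved = quarters[0], quarters[1:]

# The project /34 splits into four regional /36s.
regions = list(project.subnets(new_prefix=36))

# Each /36 holds 4096 /48s, one per current or future PoP in that region.
pop_blocks = regions[0].subnets(new_prefix=48)
first_pop = next(pop_blocks)

print(project)     # 2001:db8::/34
print(regions[1])  # 2001:db8:1000::/36
print(first_pop)   # 2001:db8::/48
```

Because each region’s customer and PoP allocations come from a single /36, a PoP need only announce its regional aggregate to the rest of the network.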
With renumbering complete, the way is now open for the ISP to run BGP and arrange peering and transit in the usual manner from other networks.
The ISP has an infrastructure based on, among other technologies, ATM and wide-area Ethernet. An additional router is procured and a dedicated IPv6 wide-area link is set up over ATM to another PoP. Private peering is arranged with a willing ISP that is also located at the same data center. This is still separate from the existing IPv4 routed network, but shares some of the switch infrastructure that, as it was used in the previous step, has been shown to be agnostic of IPv6 traffic.
Meantime, a policy is instituted that any service upgrades and new services should be IPv6-capable. Managed services staff can use the experience gained from the IPv6 server set up in the previous phase, and can carry out further experiments there before deploying IPv6-capable services in production.
Now that the prerequisites for deploying an IPv6 routing infrastructure are understood, the ISP surveys its existing network with a view to supporting dual-stacked operation. The policy of purchasing IPv6-capable equipment initiated in step 1 begins to pay dividends as the impact of IPv4-only equipment is minimized.
Early-adopter customers who are willing to participate in the IPv6 rollout can now be facilitated by means of IPv6-in-IPv4 tunnels or dedicated virtual circuits or VLANs on their wide-area links.
- Step 4
The time has come to integrate IPv6 support with the existing network, upgrading or deploying workarounds where necessary. Training is provided for all operations staff, conducted by those who have gained experience in previous phases. A deployment plan is drawn up by the IPv6 team and, after an initial run-through on a single router, is handed over to the operations team to implement (with support from the IPv6 team) so that they are happy that they have the expertise to deploy and support IPv6 on their infrastructure.
In the meantime, the remaining IPv4-only managed services are undergoing upgrades drawing on the experience of dual-stacking services in the previous step. IPv6 is now provided in the routing infrastructure and on managed services as a matter of course.
As necessary for a production deployment, the monitoring infrastructure is adjusted and upgraded to ensure that IPv6-specific faults are detected and dealt with.
Customers, who are dealing with internal requests for IPv6 connectivity, can be facilitated by means of transition mechanisms and, as the rollout proceeds, with native connectivity as and when they are ready to take advantage of it.
- Step 5
IPv6 is now rolled out and supported network-wide. Ongoing upgrades maintain existing IPv6 connectivity, removing workarounds where they were necessary and improving performance where only software-based forwarding was available on hardware-based routers. The IPv6 routing policy is brought into line with IPv4 and native peering is preferred over tunnels with existing peers and transit providers.
Special case: Internet Exchange Point
An Internet Exchange Point (IXP) is a facility that provides a place for multiple Internet Service Providers to meet and exchange traffic. Its aim is to save money for the ISPs and improve connectivity for their customers. Think of it as a switch into which multiple customers connect over WAN links; it’s a way to get direct peer-to-peer connectivity in a scalable fashion.
There are two basic scenarios for how IPv6 might be used within the context of an IXP. First, an exchange itself might like to enable IPv6 services to offer to its members, and second, a member might like to participate in IPv6 peering across an exchange.
- Step 1
The members of the IXP decide to implement IPv6 as fully as possible within the exchange as part of the goals for the next financial year. As part of the usual schedule of rolling switch upgrades they specify that vendors will be unable to respond to tenders without including details on their level of support for IPv6.
- Step 2
IXP operations decides to do the easy bit first, and applies for special IXP address space from their nearest RIR. They examine the RIR Comparative Policy Overview, which specifies that to qualify for this space, “the IXP must have a clear and open policy for others to join and must have at least three members.” The IXP qualifies, so they continue with their application. The exchange point mesh is itself “neutral” and should not be seen to receive transit from any particular member.
The address space that is received is for the peering mesh only. While it’s assumed that the direct peers of an IXP will route this /48, it’s likely that other, more remote networks will reject advertisements of such a small network. The operations team therefore assumes that this address space, while unique, is not globally routable and so can’t be reached from all places on the Internet. Services such as looking glasses and NTP servers that need to be globally reachable must still get their address space from one (or more) transit providers. Thankfully, in this case one of the members is already providing IPv4 address space for the services LAN, and can be persuaded to provide IPv6 address space for it too. There is little danger in this particular case of the members falling out and withdrawing address space, so it is viewed as an acceptable risk.
- Step 3
Fully-capable IPv6 switches and operating system versions are obtained, and a scheduled upgrade is performed. This upgrade also dual-stacks the existing server in the services LAN, as well as its associated services. Testing reveals no problems.
Members now have the choice of presenting at the exchange with a second, IPv6-only router, or simply dual-stacking their existing IXP router. Policies are rewritten to ensure that members turn off RAs on the routers they present at the exchange, and peering is negotiated between members as usual. The operations team extends its monitoring system to include member IPv6 addresses, implemented via a database. Successful peering happens within weeks of the upgrade, and the project is declared a success.
We have brought up some of the issues which you may have to consider when planning your IPv6 experience, including obtaining address space, obtaining connectivity, the possible transition mechanisms and managing the indefinite coexistence of IPv4 and IPv6, as well as detailing some clever (and not-so-clever) techniques to help you work around awkward equipment.
 Actually, a tunnel broker is someone who finds a tunnel for you and the tunnel may in turn be provided by a fourth party!
 More explicitly, if the router had address 10.0.0.1, this might be achieved by running a command such as route add -inet6 default ::10.0.0.1. Not all operating systems support this, but you can see examples of this in Table 5-13.
 The protocol number for encapsulated IPv6.
 In case you are wondering where 5EFE comes from, it is the Organizationally-Unique Identifier (OUI) assigned by IANA that can be used for forming EUI-64 addresses.
 This simple phrase means the ISP at the other end of your leased line(s), DSL connection(s), or wireless link(s).
 We’ll say what the RIRs are shortly, but for now you just need to know that they are the people who allocate addresses to ISPs.
 Not everything is “perfect” yet, in other words.
 Historically, the most likely of the two to have happened.
 Which is almost everything, believe us.
 Entities that forward DHCP messages.
 Together with defining an IPv6 service to monitor.
 For advice on how to manage and plan maintenance properly see The Practice of System and Network Administration by Limoncelli and Hogan (Addison-Wesley), but be prepared to feel embarrassed at how disorganized you are.
 Possible examples include processing firewall rules or fast hardware forwarding.
 This is a way of tunnelling IPv6 over MPLS.
 A euphemism for “break it then learn how to fix it.”
 Probably removing the very stack that allowed you to install IPv6 in the first place!
 Although they may be able to communicate with a proxy on the same link, and hence the outside world.
 You may need to configure 127.0.0.1, as some software becomes distressed if you don’t have a loopback address configured.
 If an application is re-engineered entirely to support IPv6 there is of course the danger of introducing bugs, security problems, etc.