at the other end of the link. For example, on the link from the DS6000 to a switch/director, the FICON
adapter can negotiate up to 2 Gbps if the switch/director also supports 2 Gbps. The link from the
switch/director to the host can then negotiate at 1 Gbps.
There are two types of host adapter cards you can select: longwave and shortwave. With
longwave laser, you can connect nodes at distances of up to 10 km (without repeaters). With
shortwave laser, you can connect nodes at distances of up to 300 m.
Each Fibre Channel/FICON host adapter provides one port with an LC connector type. There
are cable options that can be ordered with the DS6000 to enable connection of the adapter
port to an existing cable infrastructure.
Topologies
When configured with the FICON attachment, the DS6000 can participate in point-to-point
and switched topologies. The supported switches/directors for FICON connectivity are listed
at:
http://www-03.ibm.com/servers/storage/disk/ds6000/interop.html
For more information about host attachment see Chapter 5, “Host attachment” on page 143.
2.10.4 Preferred Path
The DS6000 has an architecture called Preferred Path. On the DS8000 and ESS, each port of a host
adapter is accessible to both servers, so when the host has multiple paths to the host
adapters, I/O can be load-balanced or distributed round-robin across them. Unlike the DS8000 or ESS,
each host port of the DS6000 belongs to only one of the servers. So even if the host has multiple paths
to the host ports, I/O might not be load-balanced or round-robin, depending on the configuration. This
concept is almost the same as on the DS4000 (formerly FAStT).
Figure 2-27 on page 50 to Figure 2-29 on page 51 show the preferred path I/O activity of the
DS6000. As illustrated in Figure 2-27 on page 50, when the host has multiple paths, but only
one path to each server, I/O is always active/standby for a LUN (Extent Pool). The alternate
path is used only if the server or the path fails. When a host has multiple paths to each server,
I/O can be load-balanced or round-robin across the paths connected to one side of the
servers. If a path fails, the remaining paths are used for I/O and no failover occurs; failover
occurs only when the server fails. For a large-capacity, high-performance configuration,
especially with sequential workloads, multiple paths to each server can be effective. For a
small configuration or a random workload, such a multipath configuration may not be needed,
because the DDMs become saturated before the host ports do.
Depending on the operating system and the multipath driver, users have to configure the OS or
driver settings to determine the I/O behavior (load balancing, round-robin, or active/standby).
For example, the default path-selection policy of the Subsystem Device Driver (SDD) is load
balancing; if you want a different behavior, you have to use the datapath set command.
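As an illustration only, assume that SDD is installed and that the LUN appears as SDD device 0
(a hypothetical device number). The following commands display the paths of that device and
then change its path-selection policy from the default to round-robin:

   datapath query device 0
   datapath set device 0 policy rr

Other policy values, such as fo (failover only) and lb (load balancing), can be set in the same
way. Refer to the SDD documentation for your operating system for the exact syntax and the
supported policies.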
If a host has only a single path to one side of the servers, the host can still access a LUN that
belongs to the other server, as shown in Figure 2-29 on page 51. In this case, I/O goes through
the interconnect bridge between the servers. However, from a performance and RAS point of view,
we strongly recommend that you avoid this type of configuration.
Note: When configuring multipathing, the host should have the same number of Host Bus Adapters
(HBAs) as the number of host ports to get the most effective performance. If there are fewer
HBAs than host ports, the HBAs can become a bottleneck.