High-Speed Interconnect

In Figure 2.2 through Figure 2.4, we’ve referred to the connection networks used in cluster, MPP, and NUMA architectures as high-speed interconnects. The performance of this interconnect is an important consideration in parallel architectures, and it is measured in two dimensions: bandwidth and latency. Bandwidth is the rate at which data can be moved between nodes, measured in megabytes (MB) per second. Latency is the time spent setting up access to a remote node so that communication can occur. Interconnects should have low latency in order to maximize the number of messages that can be set up and placed on the interconnect in a given period of time. As the number of nodes in a configuration increases, more data and messages pass between nodes, so the interconnect must have a high enough bandwidth and a low enough latency to support this message traffic.
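
The interplay between these two measures can be sketched as a simple cost model: the time to deliver a message is the fixed setup cost (latency) plus the time to stream the payload at the interconnect's bandwidth. The short Python sketch below illustrates this; the latency and bandwidth figures are purely illustrative assumptions, not measurements of any particular interconnect.

# A minimal sketch of the bandwidth/latency cost model described above.
# The latency and bandwidth values are illustrative assumptions only.

def transfer_time(message_bytes, latency_s, bandwidth_mb_per_s):
    """Estimate the time to move one message across the interconnect:
    a fixed setup cost (latency) plus the time to stream the payload."""
    bandwidth_bytes_per_s = bandwidth_mb_per_s * 1_000_000
    return latency_s + message_bytes / bandwidth_bytes_per_s

# For small messages, latency dominates the cost; for large messages,
# bandwidth does.
for size in (512, 64 * 1024, 8 * 1024 * 1024):
    t = transfer_time(size, latency_s=50e-6, bandwidth_mb_per_s=100)
    print(f"{size:>10} bytes -> {t * 1000:.3f} ms")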

Cluster interconnects are often implemented using standard LAN-based technology such as Ethernet or Fiber Distributed Data Interface (FDDI). When a cluster is configured with a large number of nodes, the limitations of the network can degrade performance. The bandwidth and latency of the cluster interconnect are key to improving the scalability of a cluster.
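
As a rough illustration of this scaling limit, the sketch below compares aggregate inter-node traffic against a fixed shared-LAN bandwidth. The per-node traffic figure and the 100 Mb/s Ethernet capacity are hypothetical assumptions chosen only to show the trend as nodes are added.

# A rough sketch of why a fixed-bandwidth LAN interconnect limits
# cluster scalability. Both figures below are illustrative assumptions.

LAN_BANDWIDTH_MB_S = 12.5      # roughly 100 Mb/s shared Ethernet
TRAFFIC_PER_NODE_MB_S = 2.0    # hypothetical inter-node traffic per node

for nodes in (2, 4, 8, 16):
    demand = nodes * TRAFFIC_PER_NODE_MB_S
    utilization = demand / LAN_BANDWIDTH_MB_S
    print(f"{nodes:>2} nodes: demand {demand:5.1f} MB/s "
          f"({utilization:.0%} of interconnect bandwidth)")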

Figure 2.5 shows a sample network configuration in a two-node cluster. Nodes in the cluster are connected to the network with both primary and standby interface cards. When the primary interface fails, network traffic fails over to the standby interface so that the node remains connected to the cluster.
