Chapter 3. IBM System x3850 X5 and x3950 X5 71
򐂰 Two x3850 X5 servers connected to form a single-image, eight-socket server. This configuration is sometimes referred to as a 2-node server.
3.6.1 Memory scalability with MAX5
The MAX5 memory expansion unit adds 32 DDR3 DIMM sockets to the x3850 X5.
Connecting the single-node x3850 X5 to the MAX5 memory expansion unit uses four QPI cables, part number 59Y6267, as listed in Table 3-7. Figure 3-13 shows the connectivity.

Figure 3-13 Connecting the MAX5 to a single-node x3850 X5 (rack rear view)

Tip: As shown in Figure 3-12 on page 70, performance is maximized with four processors installed, because all four QPI links to the MAX5 are then active. However, configurations of two and three processors are still supported.

Connecting the MAX5 to a single-node x3850 X5 requires one IBM MAX5 to x3850 X5 Cable Kit, which consists of four QPI cables. See Table 3-7.

Table 3-7 Ordering information for the IBM MAX5 to x3850 X5 Cable Kit

Part number   Feature code   Description
59Y6267       4192           IBM MAX5 to x3850 X5 Cable Kit (quantity 4 cables)

3.6.2 Two-node scalability
The 2-node configuration also uses native Intel QPI scaling to create an eight-socket configuration. The two servers are physically connected to each other with a set of external QPI cables. The cables are connected to each server through the QPI bays, which are shown in Figure 3-7 on page 66. Figure 3-14 on page 72 shows the cable routing.

MAX5: A configuration of two nodes with MAX5 is not supported.
Figure 3-14 Cabling diagram for a two-node x3850 X5 (rack rear view)

Connecting the two x3850 X5 servers to form a 2-node system requires one IBM x3850 X5 and x3950 X5 QPI Scalability Kit, which consists of four QPI cables. See Table 3-8.

Table 3-8 Ordering information for the IBM x3850 X5 and x3950 X5 QPI Scalability Kit

Part number   Feature code   Description
46M0072       5103           IBM x3850 X5 and x3950 X5 QPI Scalability Kit (quantity 4 cables)

No QPI ports are visible on the rear of the server. The QPI scalability cables have long, rigid connectors that are inserted into the QPI bay until they reach the QPI ports, which are located a few inches inside on the planar. No other option is required to complete the QPI scaling of two x3850 X5 servers into a 2-node complex.

Figure 3-15 on page 73 shows the QPI links that connect the two x3850 X5 servers to each other. Both nodes must have four processors, and all processors must be identical.
Intel E7520 and E7530: The Intel E7520 and E7530 processors cannot be used to scale
to an 8-way 2-node complex. They support a maximum of four processors. At the time of
this writing, the following models use those processors:
򐂰 7145-ARx
򐂰 7145-1Rx
򐂰 7145-2Rx
򐂰 7145-2Sx
Figure 3-15 QPI links for a 2-node x3850 X5
QPI-based scaling is managed primarily through the Unified Extensible Firmware Interface
(UEFI) firmware of the x3850 X5.
In a 2-node x3850 X5 scaled through the QPI ports, the two nodes act as one system from the moment the cables are connected until the cables are physically disconnected.
Firmware levels: It is important to ensure that both x3850 X5 servers are at identical UEFI, integrated management module (IMM), and field-programmable gate array (FPGA) firmware levels before scaling. If the levels do not match, unexpected issues can occur and the server might not boot. See 9.10, “Firmware update tools and methods” on page 509 for ways to check and update the firmware.
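The firmware check above can be sketched as a small comparison script. The node names and version strings below are hypothetical examples; in practice, you read the actual UEFI, IMM, and FPGA levels from each node (for example, through the IMM web interface or IBM's update tools described in 9.10) before scaling.

```python
# Minimal sketch: verify that two nodes report identical firmware levels
# before QPI scaling. All version strings here are hypothetical examples;
# obtain the real levels from each node's IMM before cabling the complex.

REQUIRED_COMPONENTS = ("UEFI", "IMM", "FPGA")

def firmware_mismatches(node_a: dict, node_b: dict) -> list:
    """Return the components whose firmware levels differ between two nodes."""
    return [c for c in REQUIRED_COMPONENTS if node_a.get(c) != node_b.get(c)]

# Hypothetical inventory of each node's firmware levels
node1 = {"UEFI": "G0E178A", "IMM": "YUOO78C", "FPGA": "G0UD40A"}
node2 = {"UEFI": "G0E178A", "IMM": "YUOO78C", "FPGA": "G0UD38A"}

mismatched = firmware_mismatches(node1, node2)
if mismatched:
    print("Do not scale: mismatched levels for", ", ".join(mismatched))
else:
    print("Firmware levels match; nodes are ready to scale.")
```

With the sample data, the script reports a mismatch on the FPGA level, which is exactly the situation the note warns about: update both nodes to the same levels before connecting the QPI cables.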
Partitioning: The x3850 X5 currently does not support partitioning.