offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 49454.62 Kbytes/sec, thread utilization 0.969
[root@nigeria perf]# ./gpfsperf create seq /gpfs-ib/gogo -r 256k -n 1024000000 -th 4
./gpfsperf create seq /gpfs-ib/gogo
recSize 256K nBytes 999936K fileSize 999936K
nProcesses 1 nThreadsPerProcess 4
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 50919.39 Kbytes/sec, thread utilization 0.958
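The create tests above exercise sequential writes only. The same tool can also measure sequential read throughput against the file just created; the following is a minimal sketch using the same record size and thread count (gpfsperf ships as source in /usr/lpp/mmfs/samples/perf and is built with make):
# Re-read the test file sequentially with four threads and 256 KB records
./gpfsperf read seq /gpfs-ib/gogo -r 256k -th 4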
3.4.3 InfiniBand GPFS cluster
This section uses the previous cluster (from 3.4.2, “Ethernet GPFS cluster” on page 91) and
modifies it to use the InfiniBand adapters.
Configure the InfiniBand network
Our InfiniBand network uses a QLogic 9024 switch and one port from each adapter. We use
the Subnet Manager built into the switch itself.
For more information about setting up the InfiniBand infrastructure, see HPC Clusters Using
InfiniBand on IBM Power Systems Servers, SG24-7767.
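Before moving on, you can verify that a node sees the switch-embedded Subnet Manager by using the sminfo utility from the infiniband-diags (OFED) package; a quick check, assuming the diagnostic tools are installed:
# Query the master Subnet Manager; the reported SM lid should match
# the "SM lid" value shown by ibstat (1 in our environment)
sminfo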
To set up GPFS to use InfiniBand, perform the following steps:
1. Create a symbolic link (symlink) for the libibverbs library by using the following command:
ln -s /usr/lib64/libibverbs.so.1.0.0 /usr/lib64/libibverbs.so
2. Identify the adapter name and port that will be used by GPFS, as shown in Example 3-42,
where ehca0 is the adapter name. Also be sure that the port state is Active and the
physical state is LinkUp; a scripted version of this check follows the example.
Example 3-42 Collecting InfiniBand information
[root@spain work]# ibstat
CA 'ehca0'
CA type:
Number of ports: 1
Firmware version:
Hardware version:
Node GUID: 0x00025500403bd600
System image GUID: 0x0000000000000000
Port 1:
State: Active
Physical state: LinkUp
Rate: 20
Base lid: 7
LMC: 0
SM lid: 1
Capability mask: 0x02010068
Port GUID: 0x00025500403bd602
Note: At the time of writing, the symlink is required because the InfiniBand libraries
might not work otherwise. Check whether the symlink is still needed with the latest
InfiniBand drivers and OpenFabrics Enterprise Distribution (OFED) libraries.
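If you manage several nodes, the same check can be scripted; a minimal sketch that queries only the adapter and port intended for GPFS (ehca0 port 1 from Example 3-42):
# Print just the state lines for ehca0 port 1; expect Active and LinkUp
ibstat ehca0 1 | grep -E 'State:|Physical state:'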
The current configuration of the GPFS cluster is shown in Example 3-43.
Example 3-43 Current cluster configuration
[root@spain work]# mmlsconfig
Configuration data for cluster GPFS-InfiniBand.spain-gpfs:
----------------------------------------------------------
clusterName GPFS-InfiniBand.spain-gpfs
clusterId 723685802921743777
autoload no
minReleaseLevel 3.4.0.0
dmapiFileHandleSize 32
adminMode central
3. Add the InfiniBand settings as shown in Example 3-44. The verbsPorts parameter requires
the adapter name and the active port that is used for GPFS communication.
Example 3-44 Configuring the InfiniBand settings
[root@spain work]# mmchconfig verbsPorts="ehca0/1"
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@spain work]# mmchconfig verbsRdma=enable
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
[root@spain work]# mmlsconfig
Configuration data for cluster GPFS-InfiniBand.spain-gpfs:
----------------------------------------------------------
clusterName GPFS-InfiniBand.spain-gpfs
clusterId 723685802921743777
autoload no
minReleaseLevel 3.4.0.0
dmapiFileHandleSize 32
verbsPorts ehca0/1
verbsRdma enable
adminMode central
File systems in cluster GPFS-InfiniBand.spain-gpfs:
---------------------------------------------------
/dev/gpfs-ib
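The verbsPorts and verbsRdma settings take effect only when the GPFS daemon is restarted. After the restart, check the GPFS log to confirm that RDMA started; a sketch of the sequence (the exact log wording varies by GPFS release):
# Restart GPFS on all nodes so that the InfiniBand settings take effect
mmshutdown -a
mmstartup -a
# Look for the VERBS RDMA startup messages in the GPFS log
grep -i verbs /var/adm/ras/mmfs.log.latest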
