Chapter 4. Management and maintenance 203
By using the same commands, add the node named england. Four nodes are now in the
cluster, as shown in Example 4-17.
Example 4-17 Four cluster nodes up and running
[root@england ~]# mmgetstate -aL

 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state  Remarks
------------------------------------------------------------------------------------
       1      usa          1*       4          4         active     quorum node
       2      germany      1*       4          4         active     quorum node
       3      england      1*       4          4         active     quorum node
       4      slovenia     1*       4          4         active     quorum node
4.2.2 Adding a disk to a file system
Adding a disk to a file system can be a regular occurrence if your file systems are growing
rapidly and you need more free disk space. GPFS gives you the option to add disks to your
file system online, while the cluster is active, and the new empty space can be used as soon
as the command completes. When you add a new disk by using the mmadddisk command, you
can specify the -r (rebalance) flag if you want all existing files in the file system to be restriped
across the new free space. Rebalancing is an I/O-intensive and time-consuming operation,
and it is important only for file systems with large files that are mostly invariant. Consider that
during the rebalance, GPFS operations might be slower than usual, so be sure to rebalance
when there is not high I/O activity on the file system. You can also add a new disk without
rebalancing, and do a complete rebalance later with the mmrestripefs command.
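As a sketch of the two approaches (the file system name gpfs0 and NSD name nsd5 are hypothetical examples, not from this cluster), adding a disk with an immediate rebalance versus deferring the rebalance might look like this:

```shell
# Add an already-created NSD to file system gpfs0 and rebalance
# existing files across the new space immediately (-r)
mmadddisk gpfs0 nsd5 -r

# Alternatively: add the disk now without rebalancing ...
mmadddisk gpfs0 nsd5

# ... and run a complete rebalance later, during a low-I/O window
# (-b rebalances all files in the file system)
mmrestripefs gpfs0 -b
```

Deferring the rebalance with mmrestripefs lets you take the new capacity online right away and pay the I/O cost of restriping at a quieter time.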
Perform the following steps to add a new disk to a GPFS file system:
1. Determine whether any already-configured NSDs are available (not yet associated with a
   file system) by using the mmlsnsd -F command.
2. If no available NSDs are already configured in the cluster, you must create a new NSD on
   a disk device. To create an NSD, you must know the following information:
   - The name of the disk device on the node from which you create the NSD (other nodes
     in the cluster are automatically discovered)
   - The type of disk usage that this NSD will handle (dataAndMetadata, dataOnly,
     metadataOnly, or descOnly)
   - Whether one or more NSD servers must be assigned to this new NSD (or whether it is
     only directly attached to the nodes)
   - The failure group in which to configure it (so that GPFS can manage replicas)
   - Whether the new NSD must be in a storage pool separate from the system pool
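The information above can be gathered into a disk stanza file and passed to mmcrnsd. The following is a minimal sketch; the device path, NSD name, server names, failure group, and pool name are hypothetical examples:

```shell
# Hypothetical stanza file describing one new NSD:
# device, NSD name, server list, usage, failure group, and storage pool
cat > /tmp/newdisk.stanza <<'EOF'
%nsd:
  device=/dev/sdc
  nsd=nsd5
  servers=usa,germany
  usage=dataOnly
  failureGroup=2
  pool=datapool
EOF

# Create the NSD from the stanza file
mmcrnsd -F /tmp/newdisk.stanza
```

After mmcrnsd completes, the new NSD appears as a free disk in the mmlsnsd -F output and can be given to mmadddisk.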
   Usually you have to define one or more (up to eight) NSD servers if the data that is
   contained in this NSD must also be accessible by nodes that do not have direct access to
   the disk. When a node must read or write data on a disk to which it does not have direct
   access, the node sends the request, through the network, to a node that is in the server
   list for that NSD; all of the I/O is then done through the network. If no NSD servers are
   specified, the disk is seen as a directly attached disk, and other nodes that are configured
   in the cluster and need to access data on that NSD cannot use it.
3. Certain items, such as the NSD server list, can be changed later with the mmchnsd
   command. Other items, such as the NSD name or the storage pool, cannot be changed.
   To change the NSD usage or the failure group, use the mmchdisk command instead.
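As a sketch of these later changes (the NSD name nsd5, server names, and file system name gpfs0 are hypothetical), updating the server list and then the disk usage and failure group might look like this:

```shell
# Change the NSD server list for nsd5 with mmchnsd
# (the NSD must not be actively in use while it is changed)
mmchnsd "nsd5:germany,england"

# Change the disk usage and failure group of nsd5 in file
# system gpfs0 with mmchdisk (descriptor fields for name
# and servers are left empty because they are not changed here)
mmchdisk gpfs0 change -d "nsd5:::dataOnly:3"
```

Note that mmchdisk operates on a disk that already belongs to a file system, whereas mmchnsd operates on the NSD definition itself.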