When you set up a production environment, a good approach to processor allocation is to define the LPAR as uncapped and to adjust the entitled (desired) processor capacity according to the utilization that you monitor with the lparstat command.
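For example, to verify that the entitlement matches the actual consumption, sample the partition statistics over time and watch the physc (physical processors consumed) and %entc (percentage of entitlement consumed) columns. The following invocation is only a sketch; the interval and count values are arbitrary:
# lparstat 5 3
A %entc value that stays well above 100 on an uncapped LPAR means that the partition consistently runs beyond its entitlement, which suggests that the entitled capacity should be raised.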
7.1.3 VSCSI and NPIV
VIOS provides two methods for delivering virtualized storage to the virtual machines (LPARs):
Virtual SCSI target adapters (VSCSI)
Virtual Fibre Channel adapters (NPIV)
7.1.4 Virtual SCSI target adapters (VSCSI)
This implementation is the most common and has been in use since the early POWER5 servers. With VSCSI, the VIOS becomes a storage provider: all disks are assigned to the VIOS, which exports them to its clients by acting as the SCSI target, while the client adapters act as the initiators. Figure 7-3 illustrates how this works when more than one VIOS is defined in the environment.
Figure 7-3 VSCSI connection flow (client partitions connect through VSCSI client adapters and the POWER Hypervisor to VSCSI server adapters on the Virtual I/O Server, which owns the physical HBAs and storage)
VSCSI allows the construction of high-density environments by concentrating disk management in the VIOS. To manage device allocation to the VSCSI clients, the VIOS creates a virtual SCSI target adapter in its device tree; this device is called vhost, as shown in Example 7-1.
Example 7-1 VSCSI server adapter
$ lsdev -dev vhost2
name status description
vhost2 Available Virtual SCSI Server Adapter
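For reference, a physical disk is exported to a client by mapping it to the vhost adapter with the mkvdev command, and the resulting mapping can be inspected with lsmap. The following is only a sketch: hdisk2 and the vtscsi0 target device name are illustrative values.
$ mkvdev -vdev hdisk2 -vadapter vhost2 -dev vtscsi0
$ lsmap -vadapter vhost2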
On the client partition, a virtual SCSI initiator device is created. The device is identified as an ibmvscsi adapter on Linux, as shown in Example 7-2, and as a virtual SCSI client adapter on AIX, as shown in Example 7-3.
On Linux, the device identification can be done in several ways. The most common ways are checking the scsi_host class for the host adapters and checking the interrupts table. Both methods are demonstrated in Example 7-2.
Example 7-2 VSCSI Client adapter on Linux
[root@slovenia ~]# cat /sys/class/scsi_host/host{0,1}/proc_name
ibmvscsi
ibmvscsi
[root@slovenia ~]# grep ibmvscsi /proc/interrupts
18: 408803 2485 1515 1527 XICS Level ibmvscsi
21: 355 336 331 280 XICS Level ibmvscsi
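If the lsscsi utility is installed, the VSCSI disks themselves can also be recognized directly; they usually report AIX as the vendor string and VDASD as the model string:
[root@slovenia ~]# lsscsi | grep VDASD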
On AIX, the device identification is easier to accomplish because the VSCSI devices appear in the standard device tree (lsdev). A more direct way to identify the VSCSI adapters is to list only the VSCSI branch of the tree, as shown in Example 7-3.
Example 7-3 VSCSI client adapter on AIX
usa:/#lsparent -C -k vscsi
vscsi0 Available Virtual SCSI Client Adapter
vscsi1 Available Virtual SCSI Client Adapter
usa:/#
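To check which disks are served through a particular adapter, list the child devices of that adapter (the adapter and disk names depend on the environment):
usa:/#lsdev -p vscsi0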
The devices themselves are created through the system Hardware Management Console (HMC) or the Integrated Virtualization Manager (IVM) when the LPAR is created, by defining virtual slots and assigning them to the LPARs.
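Virtual slots can also be added to a running partition from the HMC command line with the chhwres command. The following sketch adds a server adapter in slot 5 of a VIOS partition, paired with slot 3 of a client LPAR; the managed system, partition, and slot values are illustrative, so verify the exact attribute names against the HMC documentation:
chhwres -m SYSTEM -r virtualio --rsubtype scsi -o a -p vios1 -s 5 \
-a "adapter_type=server,remote_lpar_name=client1,remote_slot_num=3"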
GPFS can place a heavy load on the storage subsystem, especially if the node is an NSD server. Consider the following information:
The number of processing units that are used to process I/O requests might be twice as many as are used with directly attached standard disks.
If the system has multiple Virtual I/O Servers, a single LUN cannot be accessed through multiple adapters at the same time.
The virtual SCSI multipath implementation does not provide load balancing, only failover capabilities; a quick way to verify the paths is shown after this list.
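To confirm on an AIX client that a disk has one path through each VIOS, and to enable periodic path health checking so that a failed path is detected and reclaimed automatically, commands similar to the following can be used (hdisk4 is an illustrative device name; the -P flag records the change in the ODM so that it takes effect after the next restart):
usa:/#lspath -l hdisk4
usa:/#chdev -l hdisk4 -a hcheck_interval=60 -P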
Note: The steps to create the VIOS slots are in the management console documentation. A good place to start is by reviewing the virtual I/O planning documentation. The VIOS documentation is on the IBM Systems Hardware information website:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/iphat/iphblconfigurelparp6.htm