Expansion canister and LED status
As shown in Figure 13-2, there are two 6 Gbps SAS ports side by side on the canister. They
are numbered 1 on the left and 2 on the right. Each port connects four PHYs; each PHY is
associated with an LED. These LEDs are green and are next to the ports.
Figure 13-2 Canister status LEDs
In Table 13-5, we describe the LED statuses of the expansion canister.
Table 13-5 Expansion canister LED statuses

Position  Color  Name    State     Meaning
Top       Green  Status  On        The canister is active.
                         Flashing  The canister has a VPD error.
                         Off       The canister is not active.
Bottom    Amber  Fault   On        The canister hardware is faulty.
                         Flashing  The canister is being identified.
                         Off       No fault; the canister is not being identified.
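The same status information can be captured in a small lookup for use in monitoring scripts. The sketch below is purely illustrative; the names CANISTER_LEDS and describe_led are our own and are not part of the product. It simply encodes Table 13-5 so that an observed LED combination can be translated into its meaning.

```python
# Illustrative only: encode Table 13-5 (expansion canister LEDs) as a lookup.
# CANISTER_LEDS and describe_led are our own names, not product APIs.
CANISTER_LEDS = {
    ("status", "on"):       "The canister is active.",
    ("status", "flashing"): "The canister has a VPD error.",
    ("status", "off"):      "The canister is not active.",
    ("fault", "on"):        "The canister hardware is faulty.",
    ("fault", "flashing"):  "The canister is being identified.",
    ("fault", "off"):       "No fault; the canister is not being identified.",
}

def describe_led(name: str, state: str) -> str:
    """Translate an observed canister LED (name, state) pair into its meaning."""
    return CANISTER_LEDS.get((name.lower(), state.lower()), "Unknown LED combination")

print(describe_led("Fault", "flashing"))  # -> The canister is being identified.
```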
13.1.2 Disk subsystem
The IBM Storwize V7000 system is made up of enclosures. There are two types of
enclosures: a 2U12 that takes 12 3.5-inch drives, and a 2U24 that takes 24 2.5-inch drives.
The drives fit into the front of the enclosure. The rear of each enclosure is identical and
has slots for two canisters and two power supplies. Enclosures are used as either control
enclosures or expansion enclosures, and are differentiated by the type of canister and
power supply they contain.
An array is a type of MDisk made up of disk drives that are in the enclosures. These drives
are referred to as
members of the array. Each array has a RAID level. RAID levels provide
different degrees of redundancy and performance, and have different restrictions on the
number of members in the array. An IBM Storwize V7000 system supports hot spare drives.
When an array member drive fails, the system automatically replaces the failed member with
a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives
can be manually exchanged with array members.
Each array has a set of goals that describe the wanted location and performance of each
array member. A sequence of drive failures and hot spare takeovers can leave an array
unbalanced, that is, with members that do not match these goals. The system automatically
rebalances such arrays when appropriate drives are available.
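The following sketch is a simplified, illustrative model of the behavior just described: when a member drive fails, a hot spare takes its place, and the array no longer matches its goals until a suitable drive becomes available again. It is not the system's actual logic, and all names (Drive, Array, fail_member, unbalanced_slots) are ours.

```python
# Illustrative model of hot-spare takeover and array goals (not the product's code).
from dataclasses import dataclass

@dataclass
class Drive:
    drive_id: int
    tech: str              # for example "10K SAS" or "SSD"
    use: str = "member"    # "member", "spare", or "candidate"

@dataclass
class Array:
    members: list          # drives currently in the array
    goals: list            # technology goal for each member slot

    def fail_member(self, slot: int, spares: list):
        """Replace a failed member with the first available hot spare and rebuild."""
        failed = self.members[slot]
        spare = next((d for d in spares if d.use == "spare"), None)
        if spare is None:
            raise RuntimeError("array is degraded: no hot spare available")
        spare.use = "member"
        self.members[slot] = spare
        print(f"drive {failed.drive_id} failed; rebuilding onto spare {spare.drive_id}")

    def unbalanced_slots(self):
        """Slots where the current member does not match the array's goals."""
        return [i for i, (d, goal) in enumerate(zip(self.members, self.goals))
                if d.tech != goal]

# Example: a two-member RAID 1 array whose goal is two 10K SAS drives.
arr = Array(members=[Drive(1, "10K SAS"), Drive(2, "10K SAS")],
            goals=["10K SAS", "10K SAS"])
spares = [Drive(9, "SSD", use="spare")]
arr.fail_member(0, spares)
print(arr.unbalanced_slots())   # -> [0]: the spare does not match the slot's goal
```

If the spare that took over differs from the goal for that slot, unbalanced_slots() reports it, which mirrors the idea that the system rebalances such arrays when appropriate drives become available.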
An IBM Storwize V7000 system supports the RAID levels shown in Table 13-6.
Table 13-6 RAID levels supported by an IBM Storwize V7000 system

RAID level  Where data is striped                                      Minimum to maximum members
0           Data is striped on one or more drives.                     1 - 8
1           Data is mirrored between two drives.                       2
5           Data is striped across several drives with one parity.     3 - 16
6           Data is striped across several drives with two parities.   5 - 16
10          Data is striped across pairs of mirrored drives.           2 - 16
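As a rough illustration of what the mirroring and parity overhead in Table 13-6 means for usable capacity, the sketch below applies standard RAID arithmetic (one member's worth of capacity for RAID 5 parity, two for RAID 6, half for mirroring). The function is ours, not a product tool, and it ignores spares, extent rounding, and other real-world details.

```python
# Rough usable-capacity estimate per RAID level, using standard RAID arithmetic.
# Illustrative only; ignores spares, extents, and other implementation details.
def usable_capacity(raid_level: int, members: int, drive_size_gb: float) -> float:
    if raid_level == 0:                      # striping only, no redundancy
        return members * drive_size_gb
    if raid_level == 1:                      # two-drive mirror
        return drive_size_gb
    if raid_level == 5:                      # one member's worth of parity
        return (members - 1) * drive_size_gb
    if raid_level == 6:                      # two members' worth of parity
        return (members - 2) * drive_size_gb
    if raid_level == 10:                     # mirrored pairs
        return (members // 2) * drive_size_gb
    raise ValueError("unsupported RAID level")

# For example, eight 600 GB drives in RAID 5 give roughly 4200 GB of usable space.
print(usable_capacity(5, 8, 600))  # -> 4200.0
```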
Disk scrubbing
The scrub process runs when an array has no other background processes running. The
process checks that the drive LBAs are readable and that the array parity is in synchronization.
Arrays are scrubbed independently, and each array is entirely scrubbed every seven days.
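To make the parity check concrete, the sketch below shows the idea behind scrubbing one RAID 5 stripe: every data block must be readable, and the XOR of the data blocks must equal the stored parity. This is a conceptual illustration only, not the system's scrub implementation.

```python
# Conceptual RAID 5 scrub check for one stripe: parity must match the XOR of the data.
# Illustrative only; not the actual scrub process.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equally sized byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def scrub_stripe(data_blocks, parity_block):
    """Return True if the stripe's parity is in synchronization with its data."""
    return xor_blocks(data_blocks) == parity_block

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_blocks(data)             # what the array would have written
print(scrub_stripe(data, parity))     # -> True: parity is in synchronization
```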
Solid-state drives
Solid-state drives (SSDs) are treated no differently by an IBM Storwize V7000 system than
HDDs with respect to RAID arrays or MDisks. The individual SSDs in the storage
managed by the IBM Storwize V7000 system are combined into an array, usually in RAID 10
or RAID 5 format. RAID 6 SSD arrays are unlikely to be used because of the double parity
impact, with two SSD logical drives used for parity only.
A LUN is created on the array, which is then presented to the IBM Storwize V7000 system as
a normal managed disk (MDisk). As is the case for HDDs, the SSD RAID array format helps
protect against individual SSD failures. Depending on your requirements, additional high
availability protection, above the RAID level, can be achieved by using volume mirroring.
