IBM Spectrum Virtualize NEW! Distributed RAID

With Traditional RAID (TRAID), a rebuild reads from one or more surviving drives but writes to a single spare drive, so the rebuild time is extended by that spare drive’s performance. In addition, the spares sit idle when not being used, wasting resources.

Double parity in traditional RAID6 improves data availability by protecting against a single or double drive failure in an array, but spare drives are idle and cannot contribute to performance. This is particularly an issue with flash drives, where the rebuild is limited by the throughput of a single drive, and with larger drives, whose longer rebuild times potentially expose data to the risk of a dual failure.

In traditional RAID6, each stripe is made up of data strips (represented by D1, D2 and D3 in this example) and two parity strips (P and Q). A strip is either 128K or 256K, with 256K being the default. Two parity strips mean the array can cope with two simultaneous drive failures. Extent size is irrelevant.
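
As a minimal sketch of that layout, the snippet below prints which strip each of the five member drives holds for a few stripes. The labels and the simple rotation are illustrative assumptions, not the controller's internal format.

```python
# Illustrative sketch of the traditional RAID6 stripe described above.
STRIP_KB = 256                      # default strip size; 128K is also supported
WIDTH = 5                           # D1, D2, D3 + P + Q

def stripe_layout(stripe_no):
    """Strip held by each of the 5 member drives for one stripe."""
    labels = ["D1", "D2", "D3", "P", "Q"]
    shift = stripe_no % WIDTH       # rotate so no drive holds only parity
    return labels[shift:] + labels[:shift]

print(f"strip size: {STRIP_KB}K")
for n in range(3):
    print(f"stripe {n}: {stripe_layout(n)}")
# With P and Q in every stripe, any two simultaneous drive failures can be
# tolerated; with TRAID, however, the rebuild still targets a single spare drive.
```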

With the NEW Distributed RAID (DRAID), more drives participate in the rebuild, so the bottleneck of one drive is removed and the rebuild completes faster. There are also no “idle” spare drives, because all drives contribute to performance (a rough sketch of the rebuild-load difference follows the list below).

With the new DRAID implementation you can achieve:
– Faster drive rebuild improves availability and enables use of lower-cost, larger drives with confidence
– All drives are active, which improves performance, especially with flash drives
– Spare capacity, not spare drives
  – Rotating spare capacity position distributes rebuild load across all drives
– More drives participate in rebuild
  – Bottleneck of one drive is removed
  – More drives means faster rebuild
  – 5-10x faster than traditional RAID
  – Especially important when using large drives
– No “idle” spare drives
  – All drives contribute to performance
  – Especially important when using flash drives
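
As a rough illustration of the "bottleneck of one drive is removed" point, the sketch below compares how much data each surviving drive has to absorb during a rebuild. The drive size and the even-spread model are assumptions; real rebuild times also depend on controller and back-end limits.

```python
# Rough sketch (assumed numbers) of how the rebuild write load is spread:
# TRAID funnels every reconstructed strip onto one spare drive, while DRAID
# writes into spare capacity spread over the remaining drives.
FAILED_DRIVE_TB = 4.0

def per_drive_rebuild_writes_tb(array_width, distributed=True):
    """Data each target drive must absorb to restore redundancy."""
    if not distributed:
        return FAILED_DRIVE_TB                       # one spare takes it all
    return FAILED_DRIVE_TB / (array_width - 1)       # shared across survivors

print(f"TRAID spare drive      : {per_drive_rebuild_writes_tb(10, False):.2f} TB")
print(f"DRAID, 10-drive array  : {per_drive_rebuild_writes_tb(10):.2f} TB/drive")
print(f"DRAID, 128-drive array : {per_drive_rebuild_writes_tb(128):.2f} TB/drive")
```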

In this instance, where we distribute 3+P+Q over 10 drives with 2 distributed spares, these 5 rows make up a pack. We allocate the spare space depending on the pack number. The number of rows in a pack depends on the number of strips in a stripe, which means the pack size is constant for an array. Extent size is irrelevant.
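
The snippet below sketches one such pack for the configuration described above (10 drives, 3+P+Q, 2 distributed spares, 5 rows per pack). The exact placement and rotation used by Spectrum Virtualize is internal to the product, so this layout is only an assumption for illustration.

```python
# Simplified, illustrative layout of one DRAID pack: 3+P+Q stripes with
# 2 distributed spares over 10 drives, 5 rows per pack.
DRIVES, WIDTH, SPARES = 10, 5, 2          # 3 data strips + P + Q = width 5
ROWS_PER_PACK = WIDTH                     # pack size follows the stripe width

def pack_layout(pack_no):
    """Return a ROWS_PER_PACK x DRIVES grid of strip labels."""
    grid = [["?"] * DRIVES for _ in range(ROWS_PER_PACK)]
    # Reserve spare capacity: its position rotates with the pack number,
    # so every drive takes a turn holding spare space ("S").
    for row in range(ROWS_PER_PACK):
        for s in range(SPARES):
            grid[row][(pack_no + row + s * (DRIVES // SPARES)) % DRIVES] = "S"
    # Fill the remaining strips with stripes: D1 D2 D3 P Q, D1 D2 D3 P Q, ...
    labels = ["D1", "D2", "D3", "P", "Q"]
    free = [(r, d) for r in range(ROWS_PER_PACK) for d in range(DRIVES)
            if grid[r][d] == "?"]
    for i, (r, d) in enumerate(free):
        grid[r][d] = f"{labels[i % WIDTH]}-{i // WIDTH + 1}"   # e.g. D2-3
    return grid

for row in pack_layout(pack_no=0):
    print(" ".join(f"{cell:>5}" for cell in row))
# If any one drive fails, its strips are scattered across many stripes and are
# rebuilt into spare strips held by all of the remaining drives.
```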

DRAID Performance Goals
A 4TB drive can be rebuilt within 90 minutes for an array width of 128 drives with no host I/O. With host I/O, if the drives are being utilized at 50%, the rebuild runs at roughly half speed, taking approximately 3 hours, which is still much faster than the TRAID time of 24 hours for a 4TB drive. The main goal of DRAID is to significantly lower the probability of a second drive failing during the rebuild process compared to traditional RAID.
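
A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

```python
# Figures taken from the text: 90-minute idle rebuild for a 4TB drive in a
# 128-drive DRAID array, 50% host utilization, 24-hour TRAID baseline.
draid_idle_rebuild_h = 1.5            # 90 minutes with no host I/O
host_utilisation = 0.50               # drives already 50% busy with host I/O
# If the rebuild only gets the remaining bandwidth, it runs at half speed:
draid_busy_rebuild_h = draid_idle_rebuild_h / (1 - host_utilisation)
traid_rebuild_h = 24.0                # quoted TRAID rebuild time for a 4TB drive

print(f"DRAID rebuild with host I/O : {draid_busy_rebuild_h:.1f} h")    # ~3 h
print(f"Speed-up vs. TRAID          : "
      f"{traid_rebuild_h / draid_busy_rebuild_h:.0f}x")                 # ~8x
```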

IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware implementation

IBM® Spectrum Virtualize Software Version 7.6 provides software-defined storage capabilities across various platforms, including IBM SAN Volume Controller, IBM Storwize® V7000, Storwize V7000 (Unified), Storwize V5000, Storwize V3700, and Storwize V3500.
HyperSwap is the high availability solution for IBM storage technologies such as IBM Spectrum Virtualize™, IBM Storwize V7000, IBM Storwize V5000 and IBM FlashSystem™ V9000 that provides continuous data availability in case of hardware failure, power failure, connectivity failure, or disasters.

The five most important topics you need to know about IBM® Spectrum Virtualize Software and its HyperSwap configuration are:

  1. This solution relies on a combination of storage system and application or operating system capabilities, and these usually delegate the management of storage loss events to the host.
  2. A new Metro Mirror capability, active-active Metro Mirror, is used to maintain two fully independent copies of the data, one at each site.
  3. The HyperSwap function automatically optimizes itself to minimize the data transmitted between sites and to minimize host read and write latency (a conceptual sketch follows this list).
  4. The HyperSwap function leverages the remote copy Consistency Groups to provide data consistency in case of a critical event.
  5. HyperSwap configuration has become easier with Spectrum Virtualize Software Version 7.6, thanks to the integration of all configuration steps into the already easy and efficient Graphical User Interface.
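
As a purely conceptual sketch of topic 3, the toy model below routes reads to the copy at the host's own site and applies writes to both copies. All class and site names are invented for illustration; this is not Spectrum Virtualize code or its CLI.

```python
# Toy model of site-aware routing with an active-active volume.
from dataclasses import dataclass

@dataclass
class VolumeCopy:
    site: str
    up_to_date: bool = True

class HyperSwapVolume:
    """Active-active volume with one fully independent copy per site."""
    def __init__(self):
        self.copies = {"site1": VolumeCopy("site1"),
                       "site2": VolumeCopy("site2")}

    def read(self, host_site):
        # Prefer the copy at the host's own site to avoid inter-site traffic.
        local = self.copies.get(host_site)
        if local and local.up_to_date:
            return f"read served from {host_site}"
        # Otherwise fall back to the surviving, consistent copy.
        remote = next(c for c in self.copies.values() if c.up_to_date)
        return f"read served from {remote.site}"

    def write(self, data):
        # Active-active replication keeps both copies in sync, so the write
        # is applied at each site.
        return [f"write '{data}' applied at {c.site}"
                for c in self.copies.values() if c.up_to_date]

vol = HyperSwapVolume()
print(vol.read("site1"))
print(vol.write("block-42"))
vol.copies["site1"].up_to_date = False      # simulate a site-1 failure
print(vol.read("site1"))                    # transparently served from site 2
```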

For further detailed information about HyperSwap in a VMware environment, you can read the specific Redbooks® publication available at:
IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware implementation