Really interesting point of view.
With Traditional RAID (TRAID), a rebuild reads from one or more drives and writes to a single spare drive, so the rebuild time is limited by that spare drive's performance. In addition, spare drives sit idle when not in use, wasting resources.
Double parity in traditional RAID6 improves data availability by protecting against single or double drive failure in an array, but the spare drives sit idle and cannot contribute to performance.
This is particularly an issue with flash drives, where the rebuild is limited by the throughput of a single drive, and with larger drives, where longer rebuild times potentially expose data to the risk of a dual failure.
In traditional RAID6, each stripe is made up of data strips (represented by D1, D2, and D3 in this example) and two parity strips (P and Q). A strip is either 128K or 256K, with 256K being the default. Two parity strips mean the array can cope with two simultaneous drive failures. Extent size is irrelevant.
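The standard RAID6 scheme computes P as the XOR of the data strips and Q as a Reed-Solomon checksum over GF(2^8). A minimal sketch of that arithmetic for a single byte position across the three data strips (real controllers apply this over whole strips at once; the generator `g = 2` and the 0x11D polynomial are the conventional choices, not something stated in this document):

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reducing by the 0x11D polynomial."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return result

def gf_pow(base: int, exp: int) -> int:
    """Repeated GF(2^8) multiplication."""
    out = 1
    for _ in range(exp):
        out = gf_mul(out, base)
    return out

def raid6_parity(data_strips):
    """Return the (P, Q) bytes for one byte position across the data strips."""
    p = 0
    q = 0
    for i, d in enumerate(data_strips):
        p ^= d                          # P strip: plain XOR parity
        q ^= gf_mul(gf_pow(2, i), d)    # Q strip: Reed-Solomon parity
    return p, q

# One byte from each of D1, D2, D3:
print(raid6_parity([0x11, 0x22, 0x33]))
```

Because P and Q are computed independently, any two missing strips (data or parity) can be solved for, which is what makes two simultaneous drive failures survivable.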
With NEW Distributed RAID (DRAID), more drives participate in the rebuild, which removes the single-drive bottleneck: more drives means a faster rebuild, and there are no "idle" drives because all drives contribute to performance.
With the new DRAID implementation you can achieve:
– Faster drive rebuild improves availability and enables use of lower-cost, larger drives with confidence
– All drives are active, which improves performance, especially with flash drives
– Spare capacity, not spare drives
– Rotating spare capacity positions distributes the rebuild load across all drives
– More drives participate in the rebuild:
  – The bottleneck of one drive is removed
  – More drives means a faster rebuild: 5-10x faster than traditional RAID
  – Especially important when using large drives
– No "idle" spare drives:
  – All drives contribute to performance
  – Especially important when using flash drives
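The rebuild speedup falls out of simple bandwidth arithmetic. A back-of-the-envelope model (the ~50 MB/s sustained rebuild bandwidth per drive and the drive counts below are illustrative assumptions, not IBM's published figures): traditional RAID funnels all rebuild writes into one spare drive, while DRAID spreads them across every drive's spare capacity.

```python
def rebuild_hours(drive_tb: float, per_drive_mb_s: float, writers: int) -> float:
    """Time to rewrite one drive's worth of data when `writers` drives
    absorb the rebuild writes in parallel."""
    total_mb = drive_tb * 1_000_000
    seconds = total_mb / (per_drive_mb_s * writers)
    return seconds / 3600

# 4 TB drive, assuming ~50 MB/s of rebuild bandwidth per drive:
traid = rebuild_hours(4, 50, writers=1)    # one dedicated spare drive
draid = rebuild_hours(4, 50, writers=15)   # many drives share the writes
print(f"TRAID ~{traid:.0f} h, DRAID ~{draid:.1f} h")
```

Under these assumptions the single-spare rebuild lands in the ~22-hour range while the distributed rebuild finishes in well under two hours, which is the same order of magnitude as the 5-10x claim above.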
In this instance, where we distribute 3+P+Q over 10 drives with 2 distributed spares, these 5 rows make up a pack. We allocate the spare space depending on the pack number. The number of rows in a pack depends on the number of strips in a stripe, which means the pack size is constant for an array. Extent size is irrelevant.
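A toy layout generator can make the geometry concrete. This is a deliberately simplified rotation, not the exact Spectrum Virtualize placement algorithm: each row reserves 2 of the 10 drive positions as spare capacity ("--") and rotates them, and a strip counter runs across rows so 5-strip stripes wrap from one row into the next.

```python
DRIVES = 10
SPARES = 2
STRIPE = ["D1", "D2", "D3", "P", "Q"]     # 3 data strips + 2 parity strips

def pack_rows(rows: int):
    """Lay out `rows` rows; '--' marks distributed spare capacity."""
    layout = []
    strip = 0                              # runs across rows so stripes wrap
    for row in range(rows):
        slots = ["--"] * DRIVES
        # rotate the two spare positions one drive to the right each row
        spare_at = {(row + s * (DRIVES // SPARES)) % DRIVES for s in range(SPARES)}
        for d in range(DRIVES):
            if d in spare_at:
                continue
            slots[d] = STRIPE[strip % len(STRIPE)]
            strip += 1
        layout.append(slots)
    return layout

for r in pack_rows(5):                     # 5 rows make up one pack here
    print(" ".join(f"{s:>2}" for s in r))
```

With 2 spares per row, 8 strips land on each row; after 5 rows that is 40 strips, a whole number of 5-strip stripes, so the pattern can repeat from the next pack. That is why the pack size follows from the stripe width and is constant for the array.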
DRAID Performance Goals
A 4TB drive can be rebuilt within 90 minutes for an array width of 128 drives with no host I/O. With host I/O, if the drives are utilized at up to 50%, only half the bandwidth remains for the rebuild, roughly doubling the rebuild time to approximately 3 hours. That is still much faster than the TRAID time of 24 hours for a 4TB drive. The main goal of DRAID is to significantly lower the probability of a second drive failing during the rebuild process, compared to traditional RAID.
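The utilization math above can be sketched in one line: if host I/O consumes a fraction of the drive bandwidth, only the remainder is left for the rebuild, so rebuild time scales as base time divided by (1 − utilization).

```python
def rebuild_with_host_io(base_minutes: float, utilization: float) -> float:
    """Rebuild time when host I/O takes `utilization` of drive bandwidth."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return base_minutes / (1 - utilization)

print(rebuild_with_host_io(90, 0.50))   # 180.0 minutes, i.e. ~3 hours
```

The same formula shows why the TRAID comparison is so stark: starting from a 24-hour baseline, any host load pushes the exposure window out even further.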
IBM® Spectrum Virtualize Software Version 7.6 provides software-defined storage capabilities across various platforms, including IBM SAN Volume Controller, IBM Storwize® V7000, Storwize V7000 (Unified), Storwize V5000, Storwize V3700, and Storwize V3500.
HyperSwap is the high availability solution for IBM storage technologies such as IBM Spectrum Virtualize™, IBM Storwize V7000, IBM Storwize V5000, and IBM FlashSystem™ V9000, providing continuous data availability in the event of hardware failure, power failure, connectivity failure, or disaster.
The five most important topics you need to know about IBM® Spectrum Virtualize Software and its HyperSwap configuration are:
For further detailed information about HyperSwap in a VMware environment, see the specific Redbooks® publication available at:
IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware implementation