Non-standard RAID levels

Although all RAID implementations differ from the specification to some extent, some companies and open-source projects have developed non-standard RAID implementations that differ substantially from the standard. Additionally, there are non-RAID drive architectures, providing configurations of multiple hard drives not referred to by RAID acronyms.

Double parity

Diagram of a RAID DP (double parity) setup

One form of double parity, now subsumed by RAID 6, is row-diagonal parity.[1] Like traditional RAID 6, it features two sets of parity checks; unlike RAID 6, however, the second set is not another set of points in the over-defined polynomial that characterizes the data. Rather, double parity calculates the extra parity against a different group of blocks. For example, RAID 5 and RAID 6 both consider all A-labeled blocks to produce one or more parity blocks. However, it is fairly easy to calculate parity against multiple groups of blocks: one can calculate parity over all A blocks as well as over a permuted group of blocks.[2]

This is more easily illustrated by comparing traditional RAID 4, Twin Syndrome RAID 4 (RAID 6 with a RAID 4 layout), and double parity RAID 4 (A1, B1, etc. each represent one data block; each column represents one disk):

  Traditional        Twin Syndrome       Double parity
    RAID 4              RAID 4               RAID 4
A1  A2  A3  Ap    A1  A2  A3  Ap  Aq    A1  A2  A3  Ap  1n
B1  B2  B3  Bp    B1  B2  B3  Bp  Bq    B1  B2  B3  Bp  2n
C1  C2  C3  Cp    C1  C2  C3  Cp  Cq    C1  C2  C3  Cp  3n
D1  D2  D3  Dp    D1  D2  D3  Dp  Dq    D1  D2  D3  Dp  4n

The n blocks are the double parity blocks. Block 1n is calculated as A1 xor B2 xor C3, block 2n as A2 xor B3 xor Cp, and block 3n as A3 xor Bp xor C1. Because the double parity blocks are correctly distributed, it is possible to reconstruct two lost disks through iterative recovery. For example, suppose the first two disks are lost. A2 can be recovered without using any blocks from those disks, as B3 xor Cp xor 2n = A2; then A1 can be recovered by calculating A2 xor A3 xor Ap; and finally B2 = A1 xor C3 xor 1n.
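
This iterative recovery can be sketched in a few lines of Python. The block names follow the diagram above; single-byte values stand in for whole blocks, and the xor() helper is purely illustrative rather than part of any RAID implementation:

from functools import reduce

def xor(*blocks):
    """XOR together any number of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Example data blocks (single bytes for brevity; real blocks are sectors).
A1, A2, A3 = b'\x11', b'\x22', b'\x33'
B1, B2, B3 = b'\x44', b'\x55', b'\x66'
C1, C2, C3 = b'\x77', b'\x88', b'\x99'

Ap = xor(A1, A2, A3)   # row parity, RAID 4 style
Bp = xor(B1, B2, B3)
Cp = xor(C1, C2, C3)
n1 = xor(A1, B2, C3)   # diagonal block "1n" from the diagram
n2 = xor(A2, B3, Cp)   # diagonal block "2n"

# Iterative recovery after losing the first two disks (columns 1 and 2):
A2_rec = xor(B3, Cp, n2)        # 2n yields A2 without touching disks 1-2
A1_rec = xor(A2_rec, A3, Ap)    # row parity then yields A1
B2_rec = xor(A1_rec, C3, n1)    # 1n finally yields B2
assert (A1_rec, A2_rec, B2_rec) == (A1, A2, B2)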

RAID-DP

NetApp's RAID-DP[3] implements double parity and is classified as RAID 6.[4] The performance penalty of RAID-DP is typically under 2% when compared to a similar RAID 4 configuration.[5]

RAID 5E, RAID 5EE, and RAID 6E

RAID 5E, RAID 5EE, and RAID 6E (with the added E standing for Enhanced) generally refer to variants of RAID 5 or 6 with an integrated hot-spare drive, where the spare drive is an active part of the block rotation scheme. This spreads I/O across all drives, including the spare, thus reducing the load on each drive and increasing performance. It does, however, prevent sharing the spare drive among multiple arrays, which is occasionally desirable.[6]

Intel Matrix RAID

Diagram of an Intel Matrix RAID setup

Intel Matrix RAID (a feature of Intel Rapid Storage Technology) is not a RAID level but a feature of the ICH6R and subsequent Southbridge chipsets from Intel, accessible via the RAID BIOS. Matrix RAID supports as few as two physical disks or as many as the controller supports. Its distinguishing feature is that it allows any assortment of RAID 0, 1, 5, or 10 volumes in the array, to each of which a controllable (and identical) portion of each disk is allocated.

As such, a Matrix RAID array can improve both performance and data integrity. A practical instance of this would use a small RAID 0 (stripe) volume for the operating system, programs, and paging files, while a second, larger RAID 1 (mirror) volume would store critical data. Linux MD RAID is also capable of this.
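
The arithmetic behind such a split can be sketched as follows; the disk sizes and per-disk allocations here are hypothetical, chosen purely for illustration:

# Two 500 GB disks: a fast RAID 0 system volume plus a RAID 1 data volume,
# each built from an identical slice of every disk.
disks_gb = [500, 500]
raid0_slice, raid1_slice = 100, 400            # per-disk allocation (sums to 500)

raid0_capacity = raid0_slice * len(disks_gb)   # striping: slice capacities add up
raid1_capacity = raid1_slice                   # mirroring: one copy's worth

print(raid0_capacity, raid1_capacity)          # 200 GB and 400 GB volumes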

Linux MD RAID 10

The Linux kernel's software RAID driver (called "md", for "multiple devices") can be used to build a classic RAID 1+0 array, but it also supports RAID 10 as a single-level layout,[7] with some additional features.[8]

The standard "near" layout, where each chunk is repeated n times in a k-way stripe array, is equivalent to the standard RAID 10 arrangement, but it does not require that n evenly divides k. For example, an n2 layout on 2, 3, and 4 drives would look like:

2 drives         3 drives          4 drives
--------         ----------        --------------
A1  A1           A1  A1  A2        A1  A1  A2  A2
A2  A2           A2  A3  A3        A3  A3  A4  A4
A3  A3           A4  A4  A5        A5  A5  A6  A6
A4  A4           A5  A6  A6        A7  A7  A8  A8
..  ..           ..  ..  ..        ..  ..  ..  ..

The four-drive example is identical to a standard RAID 1+0 array, while the three-drive example is a software implementation of RAID 1E. The two-drive example is equivalent to RAID 1.
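
The mapping of the "near" layout can be sketched in a few lines of Python. This is inferred from the diagrams above rather than taken from the md driver's source, and the chunk names follow the diagrams:

def near_layout(k, n, chunks):
    """Rows of drive contents for an n-copy "near" layout on k drives."""
    rows, row = [], []
    for i in range(chunks):
        for _ in range(n):              # each chunk is written n times...
            row.append(f"A{i + 1}")
            if len(row) == k:           # ...consecutively across the k drives
                rows.append(row)
                row = []
    if row:                             # pad a trailing partial row
        rows.append(row + ["--"] * (k - len(row)))
    return rows

for r in near_layout(k=3, n=2, chunks=6):   # reproduces the 3-drive diagram
    print("  ".join(r))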

The driver also supports a "far" layout, in which all drives are divided into f sections. All chunks are repeated in each section but are switched in groups (for example, pairs). For example, f2 layouts on 2-, 3-, and 4-drive arrays would look like this:

2 drives             3 drives             4 drives
--------             --------------       --------------------
A1  A2               A1   A2   A3         A1   A2   A3   A4
A3  A4               A4   A5   A6         A5   A6   A7   A8
A5  A6               A7   A8   A9         A9   A10  A11  A12
..  ..               ..   ..   ..         ..   ..   ..   ..
A2  A1               A3   A1   A2         A2   A1   A4   A3
A4  A3               A6   A4   A5         A6   A5   A8   A7
A6  A5               A9   A7   A8         A10  A9   A12  A11
..  ..               ..   ..   ..         ..   ..   ..   ..

"Far" layout is designed for offering striping performance on a mirrored array; sequential reads can be striped, similar to as in RAID 0 configurations.[9] Random reads are somewhat faster, while sequential and random writes offer about equal speed to other mirrored RAID configurations. "Far" layout performs well for systems where reads are more frequent than writes, which is a common case. For a comparison, regular RAID 1 as provided by Linux software RAID, does not stripe reads, but can perform reads in parallel.[10]

The "near" and "far" options can be used together; in that case chunks in each section are offset by n devices. For example, an n2 f2 layout stores 2×2 = 4 copies of each sector, thus requiring at least four drives:

4 drives              5 drives
--------------        ------------------
A1  A1  A2  A2        A1  A1  A2  A2  A3
A3  A3  A4  A4        A3  A4  A4  A5  A5
A5  A5  A6  A6        A6  A6  A7  A7  A8
A7  A7  A8  A8        A8  A9  A9  A10 A10
..  ..  ..  ..        ..  ..  ..  ..  ..
A2  A2  A1  A1        A2  A3  A1  A1  A2
A4  A4  A3  A3        A5  A5  A3  A4  A4
A6  A6  A5  A5        A7  A8  A6  A6  A7
A8  A8  A7  A7        A10 A10 A8  A9  A9
..  ..  ..  ..        ..  ..  ..  ..  ..

The md driver also supports an "offset" layout, in which each stripe is repeated o times, with each copy offset by one device. For example, o2 layouts on 2-, 3-, and 4-drive arrays are laid out as:

2 drives       3 drives           4 drives
--------       ------------       -----------------
A1  A2         A1  A2  A3         A1  A2  A3  A4
A2  A1         A3  A1  A2         A4  A1  A2  A3
A3  A4         A4  A5  A6         A5  A6  A7  A8
A4  A3         A6  A4  A5         A8  A5  A6  A7
A5  A6         A7  A8  A9         A9  A10 A11 A12
A6  A5         A9  A7  A8         A12 A9  A10 A11
..  ..         ..  ..  ..         ..  ..  ..  ..
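
Again as a sketch inferred from the diagrams above (not the driver's source), the "offset" layout can be generated as:

def offset_layout(k, o, stripes):
    """Rows of drive contents for an o-copy "offset" layout on k drives."""
    rows = []
    for s in range(stripes):
        stripe = [f"A{s * k + d + 1}" for d in range(k)]
        for c in range(o):                  # copy c is rotated right c times
            rows.append(stripe[k - c:] + stripe[:k - c])
    return rows

for row in offset_layout(k=4, o=2, stripes=2):  # matches the 4-drive diagram
    print("  ".join(row))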

In the examples above, k is the number of drives, while n#, f#, and o# are parameters to mdadm's --layout option. Linux software RAID (the Linux kernel's md driver) also supports the creation of standard RAID 0, 1, 4, 5, and 6 configurations.

RAID 1E

Diagram of a RAID 1E setup

RAID 1E uses two-way mirroring on two or more drives.[11][12]

RAID-Z

RAID-Z is not actually a kind of RAID, but a higher-level software technology that implements an integrated redundancy scheme, similar to RAID 5, within the ZFS file system. It is a data-protection technology featured by ZFS that reduces the block overhead inherent in mirroring.[13]

RAID-Z avoids the RAID 5 "write hole" using copy-on-write: rather than overwriting data in place, it writes the new data to a new location and then atomically updates the pointer to reference it.[14] It avoids the need for read-modify-write operations on small writes by only ever performing full-stripe writes. Small blocks are mirrored instead of parity-protected, which is possible because the file system is aware of the underlying storage structure and can allocate extra space if necessary. RAID-Z2 doubles the parity structure to achieve results similar to RAID 6: the ability to sustain up to two drive failures without losing data.[15] In July 2009, triple-parity RAID-Z3 was added to provide increased redundancy in view of the extended rebuild times of multi-terabyte disks.[16]
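
The write-hole avoidance can be illustrated with a minimal conceptual sketch; this is not ZFS code, and the dictionary pool, single pointer, and XOR parity are stand-ins chosen for brevity:

# "storage" stands in for the pool; nothing in it is ever overwritten.
storage = {}
root = {"stripe": None}     # the live pointer, updated in one atomic step

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cow_full_stripe_write(addr, d1, d2):
    # 1. Write the complete new stripe (data plus parity) to free space.
    storage[addr], storage[addr + 1] = d1, d2
    storage[addr + 2] = xor(d1, d2)     # parity over the full stripe
    # 2. Only then redirect the pointer: a crash before this line leaves
    #    the old stripe intact, a crash after it leaves the new one intact.
    root["stripe"] = addr

cow_full_stripe_write(100, b'\x01', b'\x02')
cow_full_stripe_write(200, b'\x03', b'\x04')   # old blocks 100-102 untouched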

Drive Extender

Windows Home Server Drive Extender is a specialized case of JBOD RAID 1 implemented at the file system level.[17]

Microsoft announced in 2011 that Drive Extender would no longer be included as part of Windows Home Server Version 2, Windows Home Server 2011 (codenamed Vail).[18] As a result, third-party vendors have moved to fill the void left by Drive Extender; competitors include Drive Bender from Division M, DriveHarmony from DataCore, and DrivePool from StableBit.[19]

BeyondRAID

BeyondRAID is not a true RAID extension, but it consolidates up to 10 SATA hard drives into one pool of storage.[20] It has the advantage of supporting multiple disk sizes at once, much like JBOD, while providing redundancy for all disks and allowing a hot-swap upgrade at any time. Internally it uses a mix of techniques similar to RAID 1 and RAID 5. Depending on the fraction of data in relation to capacity, it can survive up to three drive failures, provided the "array" can be restored onto the remaining good disks before another drive fails. The amount of usable storage can be approximated by summing the capacities of the disks and subtracting the capacity of the largest disk. For example, if drives of 500, 400, 200, and 100 GB were installed, the approximate usable capacity would be 500 + 400 + 200 + 100 − 500 = 700 GB. Internally, the data would be distributed in two RAID 5-like arrays and two RAID 1-like sets:

           Drives
 | 100 GB | 200 GB | 400 GB | 500 GB |

                            ----------
                            |   x    | unusable space (100 GB)
                            ----------
                   -------------------
                   |   A1   |   A1   | RAID 1 set (2× 100 GB)
                   -------------------
                   -------------------
                   |   B1   |   B1   | RAID 1 set (2× 100 GB)
                   -------------------
          ----------------------------
          |   C1   |   C2   |   Cp   | RAID 5 array (3× 100 GB)
          ----------------------------
 -------------------------------------
 |   D1   |   D2   |   D3   |   Dp   | RAID 5 array (4× 100 GB)
 -------------------------------------
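
This capacity approximation can be expressed in a couple of lines (the function name is hypothetical, chosen purely for illustration):

def beyondraid_usable_gb(capacities):
    """Approximate usable space: total capacity minus the largest drive."""
    return sum(capacities) - max(capacities)

print(beyondraid_usable_gb([500, 400, 200, 100]))   # 700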

BeyondRAID offers a RAID 6-like feature and can perform hash-based compression using 160-bit SHA-1 hashes to maximize storage efficiency.[21]

unRAID

unRAID is a Linux-based operating system optimized for media file storage.[citation needed]

Disadvantages include slower write performance than a single disk and a bottleneck when multiple drives are written to concurrently. However, unRAID supports a cache drive, which dramatically speeds up write performance. Data on the cache drive is temporarily unprotected until unRAID moves it to the array on a schedule set within the software. The parity drive must be at least as large as the largest data drive in order to provide protection.[22]

CRYPTO softraid

In OpenBSD, CRYPTO is an encrypting discipline for the softraid subsystem. It encrypts data on a single chunk to provide for data confidentiality. CRYPTO does not provide redundancy.[23]

References

  1. Peter Corbett, Bob English, Atul Goel, Tomislav Grcanac, Steven Kleiman, James Leong, and Sunitha Sankar (2004). "Row-Diagonal Parity for Double Disk Failure Correction". USENIX Association. Archived from the original on 2013-11-22. Retrieved 2013-11-22. 
  2. Patrick Schmid (2007-08-07). "RAID 6: Stripe Set With Double Redundancy - RAID Scaling Charts, Part 2". Tomshardware.com. Retrieved 2014-01-15. 
  3. NetApp RAID-DP enables disk firmware updates to occur in real-time without any outage.
  4. "R | Storage Networking Industry Association". Snia.org. Retrieved 2014-01-15. 
  5. NetApp RAID 4
  6. "Non-standard RAID levels". raidrecoverylabs.com. Retrieved 2013-12-15. 
  7. "RAID 10 Driver". 
  8. Main Page - Linux-raid
  9. Jon Nelson (2008-07-10). "RAID5,6 and 10 Benchmarks on 2.6.25.5". jamponi.net. Retrieved 2014-01-01. 
  10. "Performance, Tools & General Bone-Headed Questions". tldp.org. Retrieved 2014-01-01. 
  11. http://publib.boulder.ibm.com/infocenter/eserver/v1r2/index.jsp?topic=%2Fdiricinfo%2Ffqy0_craid1e.html
  12. Many LSI RAID cards include RAID1E functionality, sometimes called Integrated Mirroring Enhanced.
  13. user13278091 (31 May 2006). "When to (and Not to) Use RAID-Z". Roch blog. Oracle. Retrieved 29 August 2013. 
  14. Jeff Bonwick (17 November 2005). "RAID-Z". Blog. Oracle. Retrieved 29 August 2013. 
  15. Adam Leventhal's Weblog - Double-Parity RAID-Z
  16. Adam Leventhal's Weblog - Triple-Parity RAID-Z
  17. Separate from Windows' Logical Disk Manager
  18. http://www.theregister.co.uk/2010/11/25/vail_drive_extender_ditched/
  19. "Drive Bender Public Release Arriving This Week". We Got Served. Retrieved 2014-01-15. 
  20. Data Robotics, Inc. implements BeyondRaid in their Drobostorage device.
  21. Detailed technical information about BeyondRAID, including how it handles adding and removing drives, is in US patent application US20070266037.
  22. "What is unRAID?". Lime-Technology.com. Lime Technology. 2013-10-17. Retrieved 2014-01-15. 
  23. "Manual Pages: softraid(4)". Openbsd.org. 2013-10-31. Retrieved 2014-01-15. 