
RAID-5 RAID-5 addresses the asymmetry issue of RAID-4: parity blocks are spread over all of the N+1 disks, with no single disk having a particular role.

Read and write performance are identical to RAID-4. Here again, the system stays functional with up to one failed disk (of the N+1), but no more.
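The redundancy behind RAID-4 and RAID-5 is plain XOR parity: the parity block is the XOR of the N data blocks, so any single missing block can be rebuilt by XOR-ing the survivors with the parity. A minimal sketch of the idea (the `parity` and `reconstruct` helpers and the block contents are illustrative, not part of mdadm):

```python
def parity(blocks):
    """XOR a list of equal-sized byte blocks into one parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

def reconstruct(surviving_blocks, parity_block):
    """Rebuild the single lost data block from the survivors and the parity."""
    return parity(surviving_blocks + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]   # N = 3 data blocks
p = parity(data)

# Lose the second block; rebuild it from the other two plus the parity.
rebuilt = reconstruct([data[0], data[2]], p)
assert rebuilt == b"BBBB"
```

This is also why two simultaneous failures are fatal at this level: with two blocks gone, the single XOR equation no longer has a unique solution, which is what RAID-6's second redundancy block fixes.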

RAID-6 RAID-6 can be considered an extension of RAID-5, where each series of N blocks involves two redundancy blocks, and each such series of N+2 blocks is spread over N+2 disks.

This RAID level is slightly more expensive than the previous two, but it brings some extra safety since up to two drives (of the N+2) can fail without compromising data availability. The counterpart is that write operations now involve writing one data block and two redundancy blocks, which makes them even slower.

RAID-1+0 This isn't, strictly speaking, a RAID level, but a stacking of two RAID groupings. Starting from 2×N disks, one first sets them up by pairs into N RAID-1 volumes; these N volumes are then aggregated into one, either by "linear RAID" or (increasingly) by LVM. This last case goes farther than pure RAID, but there's no problem with that.

RAID-1+0 can survive multiple disk failures: up to N in the 2×N array described above, provided that at least one disk keeps working in each of the RAID-1 pairs.
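The survival condition above is simple to state as code: the aggregate stays available as long as no mirror pair has lost both of its members. A toy model (the `survives` helper and the disk numbering are ours; real failure handling is done by the md layer):

```python
def survives(n_pairs, failed_disks):
    """Model a 2×N RAID-1+0 array: mirror pair k owns disks 2k and 2k+1.

    failed_disks is a set of disk indices in 0 .. 2*n_pairs - 1. The
    array survives unless some pair has lost both of its disks.
    """
    for k in range(n_pairs):
        if 2 * k in failed_disks and 2 * k + 1 in failed_disks:
            return False   # both halves of mirror k are gone
    return True

# 3 pairs (6 disks): losing one disk from each pair is survivable...
assert survives(3, {0, 3, 5})
# ...but losing both disks of the same pair is not.
assert not survives(3, {2, 3})
```

This is why "up to N failures" is a best case, not a guarantee: two unlucky failures landing on the same pair are already fatal.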

GOING FURTHER

RAID-10

RAID-10 is generally considered a synonym of RAID-1+0, but a Linux specificity makes it actually a generalization. This setup allows a system where each block is stored on two different disks, even with an odd number of disks, the copies being spread out along a configurable model.

Performance will vary depending on the chosen distribution model and redundancy level, and on the workload of the logical volume.

Obviously, the RAID level will be chosen according to the constraints and requirements of each application. Note that a single computer can have several distinct RAID arrays with different configurations.

12.1.1.2. Setting up RAID

Setting up RAID volumes requires the mdadm package; it provides the mdadm command, which allows creating and manipulating RAID arrays, as well as scripts and tools integrating it into the rest of the system, including the monitoring system.

Our example will be a server with a number of disks, some of which are already used, the rest being available to set up RAID. We initially have the following disks and partitions:

• the sda disk, 4 GB, is entirely available;
• the sde disk, 4 GB, is also entirely available;
• on the sdg disk, only partition sdg2 (about 4 GB) is available.
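With those elements in hand, creating an array could look like the following sketch. The mdadm options shown (`--create`, `--level`, `--raid-devices`, `--query --detail`) are standard, but the chosen level and the device names are only an assumption matching the example layout above; these commands are destructive, require root, and must be adapted to your own disks:

```shell
# Build a RAID-5 array from the three ~4 GB elements listed above
# (two whole disks and one partition; mdadm accepts both as members).
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda /dev/sde /dev/sdg2

# Inspect the state of the new array.
mdadm --query --detail /dev/md0
cat /proc/mdstat
```

Once created, /dev/md0 behaves like any other block device and can be formatted and mounted as usual.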

Chapter 12 — Advanced Administration<br />

