RAID

RAID (Redundant Array of Inexpensive Disks) is a mechanism for using multiple disk drives to provide redundant file storage.

Quick Overview

Performance Expectations

There are many 'opinions' on what RAID level is best for performance, since it can vary greatly depending on many factors, but all things being equal, here are the facts about the most common RAID levels:

  • RAID 5 is the slowest*
  • RAID 1 is in the middle (speed is equivalent to not using RAID)
  • RAID 0 (and RAID 0+1, 10, 1+0, etc) is the fastest

And here's why:

RAID 5

RAID 5 (striping with parity) is the slowest because your computer has to calculate parity for every write operation and then write that parity to disk. This can be considerable overhead with software RAID or budget RAID controllers. Parity is the distributed data that allows you to lose any one hard drive without losing data. Read speeds are very good, better than RAID 1. *You can significantly increase RAID 5 performance, sometimes even beyond RAID 1 performance, if you do the following:

  • Use a hardware RAID solution
  • Use a Battery-Backed Write-back Cache, or BBWBC (only available on high-end RAID controllers). These only help data bursts, up to the size of the cache (typically 16-256 MB), not extended write operations.
  • Add more disks

In general, unless you have server-class hardware and SCSI disks, you can expect RAID 5 writes to be slower than RAID 1 or RAID 0. Reads are very fast and can be similar to RAID 0.

RAID 1

RAID 1 (mirroring) is typically neither faster nor slower than a single disk for writing. For reading, it can use all the disks in parallel and thus improve performance.

RAID 0

RAID 0 (striping) is the fastest at both reading and writing because your computer reads and writes different data to two disks at the same time, theoretically doubling the performance of RAID 1. (Note: RAID 0 is not the same as disk spanning, or extending, a volume. Spanning is not RAID and provides no performance change or redundancy.) Software RAID 0 can be nearly as fast as hardware RAID 0. You can significantly increase the speed of RAID 0 by:

  • Adding more disks

RAID 0+1 (or 1+0, 01, 10)

RAID 0+1 (mirroring a striped set) is as fast as RAID 0. Like RAID 0, you can make it even faster by adding more disks. Note: there is some argument over the difference in performance between RAID 10 and RAID 01; in other words, should you stripe a mirror, or mirror a stripe? You can safely ignore those who argue this, as it really doesn't matter for performance. Most hardware vendors stripe first (0+1). On the subject of redundancy, however, RAID 10 has a higher chance than RAID 0+1 of surviving certain hardware failures.
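
If you want to sanity-check these performance claims on your own hardware, a rough sequential test (assuming your array is /dev/md0 and is mounted at /mnt/raid, a hypothetical mount point) is,

# hdparm -t /dev/md0
# dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024 && sync

The first command measures raw read speed from the array; the second gives a rough sequential write speed (delete the test file afterwards).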

Capacity

Assuming all disks are of equal size (if this isn't the case, use the size of your smallest disk), where N = number of disks and C = disk capacity:

  • RAID 0 = N x C (Total capacity of all disks, or 100% efficient)
  • RAID 1 = 1 x C (Capacity of one disk, efficiency varies, no more than 50% efficient, decreasing as drives are added)
  • RAID 5 = (N x C) - C (Total capacity minus one disk, efficiency varies, but no less than 66% and increases as drives are added)
  • RAID 0+1 = (N x C) / 2 (Half the total capacity of all disks, or 50% efficient)
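
For example, with four 500 GB disks (N = 4, C = 500 GB):

  • RAID 0 = 4 x 500 GB = 2000 GB
  • RAID 1 = 1 x 500 GB = 500 GB
  • RAID 5 = (4 x 500 GB) - 500 GB = 1500 GB
  • RAID 0+1 = (4 x 500 GB) / 2 = 1000 GB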

Redundancy

  • RAID 0 = No redundancy. Losing any disk results in total data loss
  • RAID 1 = Lose all but 1 disk without any data loss.
  • RAID 5 = Lose 1 disk without any data loss.
  • RAID 0+1 = Lose up to half your disks (if they are on the same stripe set) without any data loss. If one disk from each stripe set is lost, all data is lost.

RAID 0+1 and RAID 10 may look very similar, but there is a distinct difference when it comes to redundancy. In a 4-disk configuration, both can survive losing 1 disk without any data loss. And both configurations will fail if you lose 3 disks. However, the probabilities of losing the RAID array are different if you lose 2 disks.

The number of combinations of 2-drive losses is C(4,2) = 4!/(2! * 2!) = 6. This means that if 2 drives fail, there are only 6 different ways this can happen.

To determine the probabilities, you must count the number of configurations that would actually cause the entire RAID volume to go down.

RAID 0+1 (4 disks)

      [----RAID 1------]
  ____|____        ____|____
 | RAID 0  |      | RAID 0  |
 |____|____|      |____|____|

If one (out of two) drives from the first stripe fails AND one (out of two) drives from the second stripe fails, then the entire RAID 0+1 array will go down.

  • x|ok --- x|ok
  • x|ok --- ok|x
  • ok|x --- x|ok
  • ok|x --- ok|x

Since there are 4 ways this can happen, Pfailure(0+1) = 4/6 ≈ 0.67, or 67%.

RAID 10 (4 disks)

      [----RAID 0------]
  ____|____        ____|____
 | RAID 1  |      | RAID 1  |
 |____|____|      |____|____|

If both drives on the first mirror fail OR both drives on the second mirror fail, then the entire RAID 10 array will go down.

  • x|x --- ok|ok
  • ok|ok --- x|x

Since there are 2 ways this can happen, Pfailure(10) = 2/6 ≈ 0.33, or 33%.

This means that RAID 10 is twice as likely as RAID 0+1 to survive the loss of two drives.

SCSI vs IDE

In general, SCSI outperforms IDE in RAID arrays because it is much better at handling multiple simultaneous reads and writes. If you must use IDE, use the fastest controllers available (SATA) and the fastest disks available. Also, put each disk on its own channel; avoid placing two array disks on the same channel of any IDE controller.

Notes

  • At a given platter RPM, it's not the drive that performs better, but the subsystem. SCSI has the advantage of greater availability of higher-RPM (10K and 15K) drives, though such drives can also be found with SATA interfaces.
  • SATA also provides command queueing, the method used by SCSI to better handle multiple access requests.
  • SATA uses an individual cable per drive, which means it handles cable failure better than SCSI.

Which one do I choose?

If you don't care about your data, go with RAID 0 for speed. If you don't want to lose your data, use RAID 1 for 2 disks. If you have 3 or more disks, it's really a toss-up between RAID 5 (minimum 3 disks) and RAID 10 (minimum 4 disks, in multiples of two thereafter). It comes down to speed and $. If you don't need speed and don't have $, go with RAID 5. If you need speed, get the $ and go with RAID 10. In regards to Myth, several people have tried RAID 5 and the results are mixed. If you only have 1 or 2 SD tuners, RAID 5 should be fine. Once you get multiple HD tuners or multiple frontends, RAID 5 often can't keep up. Your results may vary.

RAID For Recordings Drive

A few options exist for using RAID for the recording drives, depending on your goal for your recordings: speed, redundancy, or both. RAID 0 will give you the most speed from your drives, RAID 0+1 (or RAID 01) will give you speed and 1:1 redundancy, and RAID 5 gives you the most capacity for your dollar, but its write speeds can be pretty bad.

RAID For Archives Drive

Having an independent drive array for archiving shows you wish to keep lets you set up one RAID for speed (the recordings drive) and another for backup (the archive drive). This way, once a show has been recorded, commercial-flagged, and possibly transcoded to another format or for permanent commercial removal, it can be moved to the archive. In such a case, RAID 5 and RAID 10 make the most sense. If you plan on a large amount of access to the archive, RAID 10 may make more sense, as it will most easily keep up with the transfer rate requirement while still providing redundancy, but at the cost of the extra drives required. RAID 5 will have a slight speed advantage over just having numerous drives (JBOD, Just a Bunch Of Disks, in hardware RAID; linear in mdadm), and it gives you the most archival bang for your buck while still maintaining parity in case of a lost drive.

Setup (Software RAID)

For setting up hardware RAID, see your RAID controller's documentation; the array will then appear as a single disk within your OS. For software RAID, creating a RAID array with mdadm is quite easy. The Software-RAID HOWTO Performance section will help here, as different RAID types have different optimal chunk and block sizes. Since we will be dealing only with large files (recorded MPEGs, music files, etc.), it is recommended to choose the largest chunk and block values that combine for the highest performance.

Partitioning

Before a RAID array can be created on a disk, the disk must be partitioned; you can use cfdisk for this. The easiest way is to create a single partition covering the full drive.

Note: You must, however, also set the partition type to "fd" ("Linux raid autodetect")!
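
If you prefer a non-interactive tool, a minimal sketch using sfdisk (assuming /dev/sda is one of your array disks; repeat for each disk) is,

# echo ',,fd' | sfdisk /dev/sda

This creates a single partition spanning the whole disk and sets its type to fd.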

RAID 5

The following line will create a RAID array with the following characteristics:

  • RAID 5 on /dev/md0
  • 3 drives, /dev/sda1, /dev/sdb1, and /dev/sdc1
  • chunk size = 32K
  • no spare
  • verbose level of output
# mdadm -v --create /dev/md0 --force --chunk=32 --level=raid5 \
      --spare-devices=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
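
You can verify the parameters of the new array at any time with,

# mdadm --detail /dev/md0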

RAID 10

RAID 10 is really the creation of 2 or more arrays. First, create the number of RAID 1 (mirrored) arrays you wish to have,

# mdadm -v --create /dev/md0 --chunk=32 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm -v --create /dev/md1 --chunk=32 --level=raid1 --raid-devices=2 /dev/sdc1 /dev/sdd1

and so on, until you have the number of mirrors you wish to stripe together into a RAID 0.

Once these have completed building (see below), you can create the RAID 0, striped, array,

# mdadm -v --create /dev/md2 --chunk=32 --level=raid0 --raid-devices=2 /dev/md0 /dev/md1
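
Alternatively, reasonably recent versions of mdadm (with the raid10 personality in the kernel) can build the equivalent array in a single step; a minimal sketch, assuming the same four drives,

# mdadm -v --create /dev/md0 --chunk=32 --level=raid10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1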

RAID Creation Confirmation

You will be prompted with the RAID parameters, and asked to continue,

mdadm: layout defaults to left-symmetric
mdadm: /dev/sdc1 appears to contain a reiserfs file system
    size = -192K
mdadm: size set to 293049600K
Continue creating array?

Status of RAID Creation

Upon confirmation you will only see

mdadm: array /dev/md0 started.

Once you run the command to create the RAID array, if you want to see the progress, run,

# cat /proc/mdstat

and you will see something along the lines of,

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
      586099200 blocks level 5, 32k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  5.8% (17266560/293049600) finish=69.8min speed=65760K/sec

unused devices: <none>
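
If you want the progress display to refresh automatically, you can run,

# watch cat /proc/mdstat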

Generate Config File

Now we need to set up '/etc/mdadm.conf'; this can be done by copying the output of

# mdadm --detail --scan

to '/etc/mdadm.conf', which should end up looking similar to,

ARRAY /dev/md0 level=raid5 num-devices=3 UUID=2d918524:a32c7867:11db7af5:0053440d
devices=/dev/sda1,/dev/sdb1,/dev/sdc1
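
One way to do this in a single step (review the file afterwards and remove any duplicate ARRAY lines) is,

# mdadm --detail --scan >> /etc/mdadm.conf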

RAID Filesystem Creation

Once your RAID array is created, you can place a filesystem on it. JFS and XFS are the two recommended filesystems for large-file arrays, especially for the recordings drive in Myth. Again, we will use the Software-RAID HOWTO Performance section and go with a 4K (4096 byte) block size.

For XFS (replace md0 with your final RAID array, e.g. md2 above, if you built a nested array such as RAID 10),

mkfs.xfs -f -b size=4096 -L Recordings /dev/md0

or for JFS,

mkfs.jfs -f -c -L Recordings /dev/md0

(It is not recommended to use a JFS partition for your boot drive when using GRUB)

Mounting

That's it, now you're ready to mount the filesystem! You can add a line to your /etc/fstab similar to,

/dev/md0       /MythTV/tv             xfs     defaults        0       0

or

/dev/md0       /MythTV/tv             jfs     defaults        0       0

which will mount the filesystem upon boot and allow mount's -a (mount all) option to work, so go ahead and mount the filesystem,

# mount -a
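
You can confirm the array is mounted with,

# df -h /MythTV/tv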

Monitoring

Most distributions ship an init.d daemon that monitors your mdadm arrays and notifies you when anything of note occurs.
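
If your distribution doesn't provide one, you can start the monitor yourself; a minimal sketch (assuming you want failure mail sent to root) is,

# mdadm --monitor --scan --mail=root --daemonise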

Links

A great page with information on the different hardware and software RAID chipsets and their current Linux support

Wikipedia Entry for RAID

mdadm MAN page (via man-wiki)

Software-RAID HOWTO