Okay, with properly run hardware RAID, RAID 1 is the slowest (data is written to both drives at the speed of the slowest drive), RAID 5 is next fastest (as the data spans the drives, it's 1/3 faster than a single drive or RAID 1), and RAID 0 is fastest (data is striped across both drives simultaneously). If you have drives to spare (4 drives), and you don't mind "losing" half the capacity for the backup, then RAID 0+1 is the best combination, giving both high speed AND data redundancy (50% capacity, 200% performance). If you don't have excess drives (3 drives), or you wish to get more capacity from the drives you have, then RAID 5 is best, as it gives you 66% of the capacity of the drives and 133% of the performance. If you have only 2 drives, you can opt to use them as RAID 1 (which will give you 50% capacity and 100% performance) or RAID 0 (which gives you 100% capacity (but 0% redundancy) and 200% performance). This all assumes you're using hardware RAID, which doesn't rely on CPU overhead or anything like that. Software RAID changes this, as it uses the CPU and the regular IDE controllers, but can't (generally) write to multiple drives simultaneously. -- Robert "Anaerin" Johnston
RAID For Recordings Drive
A few options exist for using RAID for the recording drives, depending on the goal you have for your recordings: speed, redundancy or both. RAID 0 will allow you to gain the most speed from your drives, RAID 01 (or RAID 0+1) will give you speed and 1:1 redundancy, and RAID 5 sits in between, giving you a slight speed increase over RAID 1 but with parity to recreate the data on a failed drive if needed.
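As a rough illustration of the capacity trade-offs between these levels, the arithmetic below uses a hypothetical set of four 500 GB drives (substitute your own drive count and size):

```shell
# Usable capacity for n drives of s GB each under the common RAID levels.
# n=4 and s=500 are hypothetical values; substitute your own.
n=4
s=500
echo "RAID 0:  $((n * s)) GB (full capacity, no redundancy)"
echo "RAID 1:  $s GB (one drive's worth, fully mirrored)"
echo "RAID 5:  $(((n - 1) * s)) GB (one drive's worth lost to parity)"
echo "RAID 10: $((n * s / 2)) GB (half the capacity lost to mirroring)"
```

For four 500 GB drives this works out to 2000, 500, 1500 and 1000 GB respectively, which is why RAID 5 is the usual pick when capacity matters and RAID 10 when speed does.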
With HD becoming the prevalent standard...
RAID For Archives Drive
Having an independent drive array for archiving the shows one wishes to keep allows the user to set up one RAID for speed (the recordings drive) and another for backup (the archival drive). This way, once a show has been recorded, commercial-flagged and possibly even transcoded to another format or for permanent commercial removal, it can be moved to the archive. In such a case, RAID 5 and RAID 10 make the most sense. If you plan on a large amount of access to the archive, RAID 10 may make more sense, as it will most easily keep up with the transfer-rate requirement while still allowing for redundancy, but at the cost of obtaining the required number of drives. RAID 5 will have a slight speed advantage over just having numerous drives (JBOD, "Just a Bunch Of Disks", in hardware RAID; linear in mdadm), but also has the advantage of getting the most archival bang for your buck while still maintaining parity in case of a lost drive.
Creating a RAID array with mdadm is quite easy. The Software-RAID HOWTO Performance section will help here, as different RAID types have different best values for chunk and block sizes. Since we will be dealing only with large files (recorded MPEGs, music files, etc.), it is recommended to choose the largest chunk and block values that combine for the highest performance.
Before a RAID array can be created on a disk, the disk must be partitioned; again, you can use cfdisk. The easiest way is to create a single partition spanning the full drive.
Note: You must, however, also set the partition type to "fd" ("Linux raid autodetect")!
The following line will create a RAID array with the following characteristics:
- RAID 5 on /dev/md0
- 3 drives: /dev/sda1, /dev/sdb1, and /dev/sdc1
- chunk size = 32K
- no spare
- verbose level of output
# mdadm -v --create /dev/md0 --force --chunk=32 --level=raid5 \
    --spare-devices=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
RAID 10 is really the creation of 2 or more arrays. First, create the number of RAID 1 (mirrored) arrays you wish to have,
# mdadm -v --create /dev/md0 --chunk=32 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm -v --create /dev/md1 --chunk=32 --level=raid1 --raid-devices=2 /dev/sdc1 /dev/sdd1
and so on until you have the number of drives you wish to concatenate into a RAID 0.
Once these have completed building (see below), you can create the RAID 0, striped, array,
# mdadm -v --create /dev/md2 --chunk=32 --level=raid0 --raid-devices=2 /dev/md0 /dev/md1
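Note that reasonably recent versions of mdadm also support a native raid10 level, which builds the equivalent array in a single step instead of nesting md devices. A sketch assuming the same four example drives:

```shell
# Single-step alternative to the nested RAID 1 + RAID 0 construction above.
# Requires a recent mdadm; the drive names are examples -- adjust to your system.
mdadm -v --create /dev/md0 --chunk=32 --level=raid10 \
    --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```

The resulting single /dev/md0 is then used directly for the filesystem steps below, with no /dev/md1 or /dev/md2 involved.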
RAID Creation Confirmation
You will be prompted with the RAID parameters, and asked to continue,
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdc1 appears to contain a reiserfs file system
    size = -192K
mdadm: size set to 293049600K
Continue creating array?
Status of RAID Creation
Upon confirmation you will only see
mdadm: array /dev/md0 started.
Once you run the command to create the RAID array, if you want to see the progress, run
# cat /proc/mdstat
and you will see something along the lines of,
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid5 sdc1 sdb1 sda1
      586099200 blocks level 5, 32k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  5.8% (17266560/293049600) finish=69.8min speed=65760K/sec
unused devices: <none>
Generate Config File
Now we need to set up '/etc/mdadm.conf'. This can be done by copying the output of
# mdadm --detail --scan
to '/etc/mdadm.conf', which should end up looking similar to,
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=2d918524:a32c7867:11db7af5:0053440d devices=/dev/sda1,/dev/sdb1,/dev/sdc1
RAID Filesystem Creation
Once your RAID array is created you can place a filesystem on it. JFS and XFS are the two recommended filesystems for large-file arrays, especially for the recordings drive in Myth. Again, we will follow the Software-RAID HOWTO Performance section and go with a 4K (4096) block size.
For XFS (replace md0 with your final RAID array device if using a nested array),
mkfs.xfs -f -b size=4096 -L Recordings /dev/md0
or for JFS (mkfs.jfs has no -f flag; -q suppresses the confirmation prompt),
mkfs.jfs -q -c -L Recordings /dev/md0
(It is not recommended to use a JFS partition for your boot drive when using GRUB)
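XFS can also be told the array's stripe geometry so allocations align with it. A sketch assuming the three-drive, 32K-chunk RAID 5 created earlier, where su is the chunk size and sw is the number of data-bearing drives (drives minus one for RAID 5):

```shell
# su = stripe unit (the 32K chunk size used at array creation).
# sw = stripe width in data disks (3 drives - 1 parity = 2 for this RAID 5).
# Adjust both if your chunk size or drive count differs.
mkfs.xfs -f -b size=4096 -d su=32k,sw=2 -L Recordings /dev/md0
```

Recent mkfs.xfs versions can often detect md geometry automatically, so treat the explicit values as a fallback for when autodetection does not apply.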
That's it, now you're ready to mount the filesystem! You can add a line to your /etc/fstab similar to one of the following, depending on your chosen filesystem,
/dev/md0 /MythTV/tv xfs defaults 0 0
/dev/md0 /MythTV/tv jfs defaults 0 0
which will mount the filesystem upon boot and allow it to be picked up by mount's -a (mount all) option, so go ahead and mount the filesystem,
# mount -a
Most distributions have an init.d script set up to run mdadm in monitor mode, which will watch your arrays and notify you when anything of note occurs.
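For those notifications to actually reach you, the monitor needs a destination address in '/etc/mdadm.conf'. A minimal fragment (the address here is a placeholder; use your own):

```
# /etc/mdadm.conf
MAILADDR root@localhost
```

With this in place, the distribution's monitor daemon (or a manual `mdadm --monitor --scan`) will mail you on events such as a degraded array.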
A great page with information on the different hardware and software RAID chipsets and their current Linux support
Wikipedia Entry for RAID
mdadm MAN page (via man-wiki)
--Steveadeff 16:25, 11 January 2006 (UTC)