RAID
Contents
- 1 Quick Overview
- 2 Setup (Software RAID)
- 2.1 Partitioning
- 2.2 RAID 5
- 2.3 RAID 1+0
- 2.4 RAID10,F2
- 2.5 RAID Creation Confirmation
- 2.6 Status of RAID Creation
- 2.7 Generate Config File
- 2.8 RAID Filesystem Creation
- 2.9 Mounting
- 2.10 Monitoring
- 2.11 Software Raid Online Capacity Expansion (OCE) (for raid 5 with XFS)
- 2.12 Spinning down hard drives
- 3 Links
Quick Overview
Common Modes
Level | Description | Minimum Disks | Space Efficiency | Fault Tolerance | Read Performance | Write Performance |
---|---|---|---|---|---|---|
RAID 0 | Block-level striping: provides improved performance and additional storage, but no redundancy. Reliability is worse than a single disk. | 2 | N | 0 | Nx | Nx |
RAID 1 | Direct mirroring: provides improved read performance and redundancy, but no additional storage. | 2 | 1 | N-1 | Nx | 1x |
RAID 1E | Data duplication: pools disk space and ensures data is stored on at least two disks. | 3 | N/2 | 1 (worst case) | Nx | N/2x |
RAID 0+1 | Mirror of stripes: a nested array where a RAID 1 array is made using RAID 0 arrays. | 4 | N/2 | 1 (worst case) | Nx | N/2x |
RAID 10 | Stripe of mirrors: a nested array where a RAID 0 array is made using RAID 1 arrays. | 4 | N/2 | 1 (worst case) | Nx | N/2x |
RAID 5 | Block-level striping with parity: one block on each stripe is reserved for parity, which can be used to calculate missing data in the event of a drive failure. | 3 | N-1 | 1 | (N-1)x | (N-1)x (best case), 1/2x (worst case) |
RAID 6 | Block-level striping with double parity: two blocks on each stripe are reserved for parity, which can be used to calculate missing data in the event of a drive failure. | 4 | N-2 | 2 | (N-2)x | (N-2)x (best case), 1/2x (worst case) |
Which one do I choose?
The choice of RAID level depends on the application it will be used for. The performance values in the table above assume optimal conditions; actual results will differ depending on whether you want to optimize for throughput or for operations. For raw throughput on bulk files, striping will be the fastest, and larger block sizes will reduce the load on the controller, further improving performance up to a point. For small operations, independent disks are better, making mirroring, or striping with block sizes larger than that of the file, ideal. Parity in RAID 5/6 causes additional problems for small writes, as anything smaller than the stripe size requires the entire stripe to be read and the parity recomputed.
For the system disk, reliability and IOPS are going to be favored over raw throughput, so RAID 1 or RAID 10 is best suited. RAID 1E is another, less standard, alternative that can be achieved using Linux MD raid10 in 'f2' mode.
For recording disks, MythTV will be storing bulk files. Normally RAID 5 or RAID 6 would be ideal for such a scenario, however MythTV may be recording multiple files simultaneously in small chunks, and the write behavior of parity sets will result in very poor performance in any storage system not using a non-volatile cache. The recommended method would actually be to not use RAID at all, and instead define the drives independently using Storage Groups.
For bulk storage of non-recorded media, such as music, pictures, and videos, the usage will be nearly all read only. RAID 5 or 6 would be a good trade off between redundancy and space efficiency. RAID 5 can only handle a single drive failure before data loss, so for larger arrays (6-drives or larger), RAID 6 would be a better option.
Setup (Software RAID)
For setting up hardware RAID, see your RAID controller's documentation; the array will then appear as a single disk within your OS. For software RAID, creating a RAID array with mdadm is quite easy. The Performance sections of the Linux RAID HOWTO and the Software-RAID HOWTO will help here, as different RAID types have different best values for chunk and block sizes. Since we will be dealing with only large files (recorded MPEGs, music files, etc.) it is recommended to choose the largest chunk and block values that combine for the highest performance.
Partitioning
Before a RAID array can be created on a disk, the disk must be partitioned; you can use cfdisk, fdisk, sfdisk or parted. The easiest way is to create a single partition spanning the full drive, as in the sketch below.
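A minimal sketch using parted, assuming a GPT partition table and the example device /dev/sda used later on this page (mklabel destroys any existing data on the drive):
parted -s /dev/sda mklabel gpt             # write a new GPT partition table (destroys existing data)
parted -s /dev/sda mkpart primary 0% 100%  # one partition spanning the whole drive
parted -s /dev/sda set 1 raid on           # flag partition 1 as a RAID member
Repeat for each drive that will be part of the array (e.g. /dev/sdb and /dev/sdc).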
RAID 5
The following line will create a RAID array with the following characteristics:
- RAID 5 on /dev/md0
- 3 drives, /dev/sda1, /dev/sdb1, and /dev/sdc1
- chunk size = 32K
- no spare
- verbose level of output
# mdadm -v --create /dev/md0 --force --chunk=32 --level=raid5 \
    --spare-devices=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
RAID 1+0
RAID 1+0 is really the creation of 2 or more arrays. First you create the number of RAID 1, mirrored, arrays you wish to have,
# mdadm -v --create /dev/md0 --chunk=32 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm -v --create /dev/md1 --chunk=32 --level=raid1 --raid-devices=2 /dev/sdc1 /dev/sdd1
and so on until you have the number of drives you wish to concatenate into a RAID 0.
Once these have completed building (see below), you can create the RAID 0, striped, array,
# mdadm -v --create /dev/md2 --chunk=32 --level=raid0 --raid-devices=2 /dev/md0 /dev/md1
RAID10,F2
The Linux MD raid10 has another way to be created - it uses only one mdadm command. For an array of 4 drives use:
# mdadm -C /dev/md0 --chunk=256 -n 4 -l 10 -p f2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
Note that this can even be done with only 2 drives (-n 2), as in the sketch below. For newer drives (as of 2008), the people on the linux-raid kernel mailing list recommend chunk sizes between 256 KiB and 1 MiB.
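As a sketch of the two-drive case, assuming member partitions /dev/sda1 and /dev/sdb1 and a 512 KiB chunk from the recommended range:
# mdadm -C /dev/md0 --chunk=512 -n 2 -l 10 -p f2 /dev/sda1 /dev/sdb1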
RAID Creation Confirmation
You will be prompted with the RAID parameters, and asked to continue,
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdc1 appears to contain a reiserfs file system
       size = -192K
mdadm: size set to 293049600K
Continue creating array?
Status of RAID Creation
Upon confirmation you will only see
mdadm: array /dev/md0 started.
Once you run the command to create the RAID array, if you want to see the progress run,
# cat /proc/mdstat
and you will see something along the lines of,
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
      586099200 blocks level 5, 32k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  5.8% (17266560/293049600) finish=69.8min speed=65760K/sec
unused devices: <none>
Tip: To watch this progress over time, use:
# watch cat /proc/mdstat
Generate Config File
Now we need to set up /etc/mdadm.conf (on Ubuntu 9.10, /etc/mdadm/mdadm.conf); this can be done by copying the output of
# mdadm --detail --scan
to '/etc/mdadm.conf', which should end up looking similar to,
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=2d918524:a32c7867:11db7af5:0053440d devices=/dev/sda1,/dev/sdb1,/dev/sdc1
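A simple way to do the copy is to append the scan output directly (the Ubuntu-style path is shown here; adjust to /etc/mdadm.conf for your distribution). On Debian/Ubuntu you may also want to refresh the initramfs afterwards so the array is assembled at boot:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u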
Note: the device names used by MD RAID commands might not be identical to those seen by other Linux commands (e.g. cfdisk). mdadm assumes that all of your partition names are sequential with no gaps. If you use 2 primary partitions and a logical partition, the kernel will name them /dev/sda1, /dev/sda2, and /dev/sda5, but mdadm will report these partitions as sda1, sda2, and sda3.
RAID Filesystem Creation
Once your RAID array is created you can place a filesystem on it. JFS and XFS are the two recommended filesystems for large-file arrays, especially for the recordings drive in Myth. Again, we will use the Software-RAID HOWTO [http://www.tldp.org/HOWTO/Software-RAID-HOWTO-9.html Performance] section and go with a 4K (4096) block size.
For XFS (replace md0 with your final RAID array if using a mixed mode array),
mkfs.xfs -f -l size=64m -d agcount=4 -i attr=2,maxpct=5 -L Recordings /dev/md0
These performance parameters, as well as the mount settings in /etc/fstab are also documented on the XFS Filesystem page.
or for JFS,
mkfs.jfs -c -L Recordings /dev/md0
It is not recommended to use a JFS partition for your boot drive (/boot, or if no separate /boot the / (root) partition) when using GRUB.
Mounting
That's it, now you are ready to mount the filesystem! You can add a line to your /etc/fstab similar to,
/dev/md0 /MythTV/tv xfs defaults,allocsize=512m 0 0
for xfs (above), or for jfs (below):
/dev/md0 /MythTV/tv jfs defaults 0 0
which will mount the filesystem upon boot and allow mount's -a (mount all) option to pick it up, so go ahead and mount the filesystem,
# mount -a
Monitoring
Most distributions ship an init.d service for mdadm that will monitor your arrays and notify you when anything of note occurs.
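If your distribution does not start one automatically, a minimal sketch of running mdadm's monitor mode by hand (the mail address is only an example; adjust to taste):
mdadm --monitor --scan --daemonise --mail=root          # watch all arrays in the background, mail alerts to root
mdadm --monitor --scan --oneshot --test --mail=root     # send a one-off test alert per array to verify mail delivery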
Software Raid Online Capacity Expansion (OCE) (for raid 5 with XFS)
Online Capacity Expansion (OCE) allows you to add another hard drive to an existing RAID array, for example adding a fifth drive to a 4-drive RAID 5 array. OCE reshapes the data so it spans all 5 drives and then allows you to use a filesystem grow command to make use of the new space. This is all done while the RAID system is active, and you can even continue to use your drives while the new drive is being added. Previously this feature was available only on high-end hardware RAID cards.
I was able to do a RAID 5 disk expansion in mdadm software RAID with no ill effects, following the page below as a guide, and it worked perfectly. It took about 6 hours to reshape the array from 300 GB x 4 RAID 5 to 300 GB x 5 RAID 5. No LVM, just an md0 mdadm device, and I was still able to make 2 simultaneous HD recordings and watch an HD recording while this was going on.
Page used as a guide:
http://scotgate.org/?p=107
For me, the process was:
mdadm --add /dev/md0 /dev/sde1
This will differ for each user based on the name of the RAID device and the drive you want to add to it.
then:
mdadm --grow /dev/md0 --raid-devices=5
To check the status of the reshape, use:
cat /proc/mdstat
To speed up reshaping, use the following (fill in whatever speed you want where the 100000 is; the default is 10000):
echo -n 100000 > /proc/sys/dev/raid/speed_limit_max
The speed_limit_max entry controls how fast the RAID array rebuilds (how much of the array's bandwidth is available to the rebuild process). Make that bandwidth number higher and the reshape goes faster, but it uses more of the throughput of the hard drives (leaving less available to, say, MythTV recording on the degraded array). The array will be in a degraded state until reshaping is finished.
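To check the current limits before changing them (speed_limit_min is the companion minimum-rate setting):
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max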
Now it's time to grow your XFS filesystem (or substitute the grow command for your filesystem):
xfs_growfs (path to mounted raid filesystem)
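For example, using the mount point from the fstab line above (substitute your own mount point):
xfs_growfs /MythTV/tv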
Spinning down hard drives
This area is outdated. See relevant information at:
- Power saving - native spindown using hdparm
- Spindown drives - 3rd party spindown daemon
Links
Great page with information on the different hardware and software RAID chipsets and their current Linux support
Wikipedia Entry for RAID
mdadm MAN page (via man-wiki)
Software-RAID HOWTO
Linux MD RAID HOWTO