[mythtv-users] Linux software raid question

Carl L. Gilbert clg-social at rigidsoftware.com
Wed Jun 4 18:49:11 UTC 2008


On Tue, 2008-06-03 at 21:44 -0700, Yan-Fa Li wrote:
> On Tue, Jun 3, 2008 at 7:42 PM, Carl L. Gilbert
> <clg-social at rigidsoftware.com> wrote:
> > On Tue, 2008-06-03 at 17:41 +0200, Boleslaw Ciesielski wrote:
> >> Matt Nelson wrote:
> >> > On one of my mythtv backend servers I host my fileserver that stores all
> >> > of my media on two raid5 sets totaling 12 drives that are all attached
> >> > via sata.  These are all attached externally, and that is where my
> >> > question lies, if I connect these drives incorrectly will my raid sets
> >> > die, or does it not matter where they are connected since there is some
> >> > metadata that tells the linux software raid that it belongs to a certain
> >> > raid set?
> >>
> >> Yes, Linux software RAID will work regardless of the drive order. Your
> >> only order concern is the boot drive.
> >>
> >
> > No, this is not guaranteed.  It depends on the distro and how the drives
> > are referenced.  I think more recent distros should work, but I know
> > for a fact that older ones would not, like the old Red Hat versions
> > where I first cut my teeth on RAID.
> >
> > Anyway, just step up to hardware RAID and don't even think about it
> > anymore.
> >
> 
> mdadm-based RAID is extremely reliable if you're using UUIDs.
> Hardware RAID is not only expensive, it also doesn't scale as well as
> software RAID.  Here's a great blog entry by Jeff Bonwick, one of the
> architects of Sun's ZFS.  He has a lot of great points and confirms a
> lot of my own thoughts about what to use all those cores on new CPUs
> for.  http://blogs.sun.com/bonwick/entry/the_general_purpose_storage_revolution.
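
For reference, the UUID point is what makes drive order irrelevant: each
member disk carries an md superblock holding the array's UUID.  A minimal
sketch (device names and the UUID below are placeholders):

  # print the superblock, including the array UUID, stored on a member disk
  mdadm --examine /dev/sdb
  # assemble arrays by scanning devices for matching UUIDs, in any order
  mdadm --assemble --scan
  # or pin the array to its UUID in /etc/mdadm.conf
  ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371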

Hardware RAID does not have to be expensive.
> 
> There are two large drawbacks of hardware RAID.  One is that you can't
> transport RAID disks between different manufacturers' RAID cards.
> You're locked in to a specific vendor.  With Linux soft RAID, I just
> move the disks over to a new machine and off I go.  The other is the
> inability to run smartmontools and get at least a 60% chance of
> spotting a bad disk.  Most hardware RAID implementations hide the low
> level block devices away from the operating system and do not give you
> access to the individual SMART information per drive unless you run a
> vendor-specific tool.
> 
This is an incredibly biased opinion.  "Large drawbacks"?  Come on.  You
can transfer hardware RAID arrays between computers by moving the card
along with the disks.  Very easy, just as easy as with software.

My 3ware RAID runs smartmontools checks on it daily; it is completely
SMART-monitorable.  I also get an email if there is a problem.
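
For example, smartctl can talk to the drives behind the card directly (the
controller device and port numbers here match my config further down):

  # read all SMART data from the disk on port 0 of the 3ware card
  smartctl -a -d 3ware,0 /dev/twe0
  # start a short self-test on the disk on port 1
  smartctl -t short -d 3ware,1 /dev/twe0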


> When one adds in the cost of hardware RAID, and any potential upgrades
> like battery-backed RAM caches, I would much rather put that money
> toward a server-quality motherboard with ECC RAM, a largish UPS unit,
> and set up RAID drives with write-intent bitmaps[3] and do weekly data
> check sweeps[1] via cron combined with smartmontools short and long disk
> checks[2] on a daily and weekly basis.  Most of the features of the
> high-end hardware RAID cards are now part of the feature set of Linux
> soft RAID; it really is that good.
> 

I don't know why you are down on hardware RAID, but it's not as you
describe it, and I don't see any advantage to software RAID.  It takes
more jumping through hoops, especially to boot from RAID.  Maybe things
are better now.
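
The hoops for booting, at least back then, were things like putting /boot
on a small RAID1 and installing the bootloader on every member so either
disk can still boot.  A rough sketch with placeholder disk names:

  # /boot is md0, a RAID1 of sda1 and sdb1; put GRUB on both disks' MBRs
  grub-install /dev/sda
  grub-install /dev/sdb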

I think HW is just simpler.



> Yan
> 
> [1] e.g. echo "check" >> /sys/block/md0/md/sync_action
> [2] e.g. smartctl -t short|long /dev/sda
> [3] e.g. mdadm /dev/md0 -Gb internal
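
For concreteness, those periodic checks could be wired into root's crontab
roughly like this (md0, sda, and the schedules are just placeholders):

  # weekly consistency check of the array, Sundays at 02:00
  0 2 * * 0  echo check > /sys/block/md0/md/sync_action
  # daily short and weekly long SMART self-tests
  0 3 * * *  smartctl -t short /dev/sda
  0 4 * * 6  smartctl -t long /dev/sda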

These two smartd.conf entries, one per port on the 3ware card, run
smartmontools with no problem:

/dev/twe0 -d 3ware,0 -a -s (S/../.././02|L/../../6/04) -m carl at erasmus.sargent.crib
/dev/twe0 -d 3ware,1 -a -s (S/../.././03|L/../../6/05) -m carl at erasmus.sargent.crib

Plus there is a web interface to manage the drives from within the OS.




