[mythtv-users] question about RAID

Brian Wood beww at beww.org
Mon Jan 9 21:26:02 UTC 2006


Granted that this must be taken with a grain of salt:

hdparm -tT /dev/hda
Timing cached reads:  2820 MB in 2.00 seconds = 1409.91 MB/sec
Timing buffered disk reads: 200 MB in 3.02 seconds = 66.31 MB/sec

hdparm -tT /dev/md0
Timing cached reads: 2804 MB in 2.00 seconds = 1401.91 MB/sec
Timing buffered disk reads: 346 MB in 3.01 seconds = 115.02 MB/sec

This is with /dev/md0 being a RAID 0 array of two 250GB SATA drives;
/dev/hda is a 250GB PATA drive alone on its controller channel.
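For anyone wanting to reproduce a test like this, the setup is roughly as
follows (the /dev/sdX partition names below are placeholders, not the actual
layout of this box, and hdparm numbers vary from run to run, so run it a few
times and average):

# build a two-disk RAID 0 (striped) md array and put a filesystem on it
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0

# compare the single PATA drive against the striped array
hdparm -tT /dev/hda /dev/md0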


Brian Wood
beww at beww.org



On Jan 9, 2006, at 1:36 PM, Johnathon Meichtry wrote:

> James,
>
> Thanks, I was putting it in layman's terms, but you're right, I should
> have been more precise.
>
> If one is going to be technically correct, it isn't even that
> straightforward. On most systems you will probably see three random-access
> reads for every sequential write, so with RAID0 you may get an alternating
> disk write (some would say simultaneous, but there is really no such thing,
> since there is only one bus), but between those writes you will probably
> have three reads, two of which are likely on the same disk. So RAID0 gives
> you only a marginal performance improvement, without even taking into
> account disk 1 and disk 2 being on the same controller (i.e. IDE PATA
> master and slave) and the resultant I/O wait, not to mention the bottleneck
> on the PCI bus.
>
> To me, striping is one of those things that looks great in theory, but not
> until you have three or more disks across multiple controllers do you get a
> real-world, noticeable performance improvement. Not that I am knocking
> RAID0, but I believe all it gives you is aggregation of partitions and not
> much else, so you might as well have at least three disks/slices and go for
> RAID5: if you lose one you don't lose the lot, and you have a better chance
> of getting a performance improvement.
>
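For what it's worth, building the three-disk RAID5 he describes with md would
look something like this (device names are placeholders; usable capacity is
the sum of the members minus one, and the array survives a single drive
failure):

# three-member software RAID5 array
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# watch the initial parity resync before benchmarking
cat /proc/mdstat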

