[mythtv-users] The Bigger... Disk contest, Fall 2007 edition

Brian Wood beww at beww.org
Thu Oct 18 21:36:42 UTC 2007


f-myth-users at media.mit.edu wrote:
>     > Date: Thu, 18 Oct 2007 13:35:31 -0600
>     > From: Brian Wood <beww at beww.org>
> 
>     > I can only speak from my experience over many years. Whenever I have a
>     > drive fail I put the dead one on my window sill. I have two piles, ATA
>     > and SCSI. The ATA pile is about a foot high, the SCSI pile has one drive.
> 
> [You never return them for the warranty?]

It's just not worth the trouble IMHO. Stupid perhaps.

> 
> Three massive problems with this approach, all related to sample bias
> (and otherwise known as "the plural of anecdote is NOT data"):
> 
> (a) How many of each drive are in service?  (That's "drives whose
>     failures would cause them to wind up on your windowsill".)  After
>     all, if you've got 50 ATA drives and 3 SCSI drives, well...
> (b) Assuming that the answer to (a) is "half of each", then when's
>     the last time you added to the ATA pile?  [If it's "a long time
>     ago", then maybe ATA reliability has increased (alternatively,
>     perhaps SCSI reliability is decreasing... :)]
> (c) Are both types of drives subject to -exactly- the same sorts
>     of service?  Or are the SCSI drives in temperature-controlled
>     machine-room racks and are never powered off, whereas the desktop
>     drives get powered off every day and are in desktop machines that
>     get moved around, dropped, kicked, or knocked over, and in which
>     inquisitive little non-properly-ESD-protected hands occasionally
>     reach in to reconfigure things?  (Remember that a -lot- of what
>     kills drives is powercycling and thermal cycling; remember also
>     that desktops occasionally get tossed in a trunk, driven
>     somewhere, and then plugged right back in even though they were
>     brought to freezing and then not allowed to have all the
>     condensation evaporate off and the disk platters rewarm back to
>     nominal dimensions.  Sure, that's bad, but I've seen it.  Also, the
>     senior technical engineer of a company I know that makes massive
>     disk-based stores for, e.g., banks told me that they routinely
>     lose at least one disk in each huge array every time the array
>     must be powercycled---despite using the most reliable disks they
>     can get their hands on.)
> 
> And yes, consult the CMU study; that's the one I was talking about
> when this came up here a few weeks/months ago.

All of your points are valid. The number of drives in service is about
equal.
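
To make the comparison meaningful you'd really want failures per
drive-year rather than pile height. A quick back-of-the-envelope sketch
in Python (all the counts below are invented for illustration, not my
actual fleet):

    # Back-of-the-envelope annualized failure rate (AFR).
    # NOTE: every number here is a made-up example, not real data.
    fleets = {
        "ATA":  {"drives": 12, "years_each": 5, "failures": 9},
        "SCSI": {"drives": 12, "years_each": 5, "failures": 1},
    }

    for name, f in fleets.items():
        drive_years = f["drives"] * f["years_each"]
        afr = f["failures"] / drive_years  # failures per drive-year
        print(f"{name}: {afr:.1%} AFR over {drive_years} drive-years")

With equal fleet sizes and equal service time the drive-years cancel
out, but as soon as the fleets differ in size or age, raw pile heights
stop meaning anything.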

The last failure was a SATA drive, but it was in an iMac with very
sub-optimal cooling.

But, as you might expect, the SCSI drives are in commercial-type servers
with good (and loud) cooling; they are almost never powered off, and the
AC power supplied to them is well filtered and very clean. I also
suspect that commercial servers have power supplies that provide cleaner
DC output than most home PCs.

The PATA/ATA drives are in consumer-type cases, with less-than-optimal
cooling. They are powered off and on more often than the servers (though
still not very often) and are powered by consumer-type UPSs or straight
off the AC mains.

Plus, some of the consumer-type drives were purchased as refurbs,
though I have not noticed any difference between those and ones purchased new.

So it's not a fair test, I realize that.

For all I know, the only difference between a modern SCSI drive and a
SATA one is the interface board. I know SCSI drives used to have better
bearings, but I've heard that SATA drives now use the same ones.

I did find it interesting that Google's report indicated no major impact
from drive temperature; that goes against all the conventional wisdom
about electronic devices.

beww
