[mythtv-users] A warning about Samsung HDDs

Dewey Smolka dsmolka at gmail.com
Mon Oct 31 00:47:45 EST 2005


On 10/30/05, Brady <liquidgecka at gmail.com> wrote:
> Let me chime in on this..
>
> I worked for a large fortune 500 company a few years back. We did
> product qualification on virtually every computer component known to
> man. Mainboards, Hard Drives, PDAs, etc. The Hard Drive testing was
> slick. We would hammer a hard drive until it died in order to make
> sure that our products wouldn't be the failure points.

Wow. I'm really good at breaking stuff. Is your former employer hiring?

<snip>
> Anyways, on with my point.[...] Run the voltage a little high and then run the
> clock a little slow and some drives would fail in 30 hours of constant
> testing. Seagates on the other hand defied most failure reproduction.
> While they didn't usually last longer than other drives they typically
> failed in boring ways. Things like cache failure or just plain old
> power down failure. Most times we could recover the drives.

What can you recommend for someone to test these things (someone who
doesn't have access to a corporate R&D lab)? I make a habit of putting
hardware through some paces immediately after I buy it, but most of my
tests are software-based.

I don't really mind it if I find a failure before I put a piece into a
production environment (that's why I run the tests), but I'm a one-man
shop with limited resources. I generally assume (probably wrongly)
that if I've tested a given model to my satisfaction, an
identical model can zip through the testing.
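Without a lab, the closest software-side analogue to a burn-in rig is a destructive surface scan; on Linux the usual tools are `badblocks -w` plus a long SMART self-test (`smartctl -t long`). A minimal sketch of the write/verify idea follows — the function name is hypothetical, and it writes to a scratch file rather than the raw device, so the kernel's page cache can mask a marginal drive (real burn-in tools use direct I/O and cover the whole disk for hours):

```python
import hashlib
import os

def surface_test(path, passes=2, chunk_mb=4, total_mb=64, seed=b"burn-in"):
    """Write a pseudorandom pattern to `path`, read it back, and verify.

    A crude software analogue of `badblocks -w`: point it at a scratch
    file on the drive under test. Sizes here are tiny for the sketch.
    """
    chunk = chunk_mb * 1024 * 1024
    chunks = total_mb // chunk_mb
    for p in range(passes):
        # Derive a repeatable pattern per (pass, chunk) so the verify
        # step can regenerate exactly what was written.
        with open(path, "wb") as f:
            for i in range(chunks):
                block = hashlib.sha256(seed + bytes([p, i])).digest()
                f.write(block * (chunk // len(block)))
            f.flush()
            os.fsync(f.fileno())  # push the data past the page cache
        with open(path, "rb") as f:
            for i in range(chunks):
                block = hashlib.sha256(seed + bytes([p, i])).digest()
                if f.read(chunk) != block * (chunk // len(block)):
                    return False  # mismatch: suspect the drive (or cabling)
    return True
```

Running several passes while the machine is also busy (compiles, recordings) gets closer to the "hammer it until it dies" style of testing described above.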

I also assume (probably also wrongly) that PSUs will do what they
claim and that a 5V plug will deliver 5V. Is this something I need to
test with every single unit, and can the power output of a PSU change
over time? Do I need to use a multimeter on every output of the PSU,
and do I need to retest monthly? Weekly? What kind of fluctuation does
it take to fry an HDD?
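For reference, the ATX12V design guide specifies tolerances of roughly ±5% on the positive rails and ±10% on -12V, so "a 5V plug delivers 5V" really means 4.75-5.25V. A small sketch of that check (the tolerance table is my reading of the spec, and the readings would come from a multimeter or lm-sensors — the sample values are hypothetical):

```python
# Nominal voltage and tolerance per rail, per the ATX12V design guide
# (assumed here: +/-5% on positive rails, +/-10% on -12V).
ATX_RAILS = {
    "+3.3V": (3.3, 0.05),
    "+5V":   (5.0, 0.05),
    "+12V":  (12.0, 0.05),
    "-12V":  (-12.0, 0.10),
}

def check_rails(readings):
    """Return (rail, measured) pairs that fall outside tolerance."""
    bad = []
    for rail, measured in readings.items():
        nominal, tol = ATX_RAILS[rail]
        if abs(measured - nominal) > abs(nominal) * tol:
            bad.append((rail, measured))
    return bad
```

By that rule a 5V plug reading 4.6V is already out of spec, while 11.5V on the 12V rail is still (just) within it.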

<snip>
> Why do I even mention this you ask? Well, I felt that Seagate drives
> were just well built enough to withstand the tests we put them
> through and tended to only fail due to normal "wear and tear." There
> was no easy way to kill a Seagate. They seemed a little more bullet
> proof than other drives.

My two Maxtors (200GB and 250GB) are approaching 2 years and I've had
no problem with either of them. These volumes are storage only and
contain more or less all the music, video, and still images that I've
bought, acquired, and/or produced over the last five years (10 years
for music). I've got probably 75% backed up on optical media, but I'm
not looking forward to the day when I have to restore 40-some DVD-Rs
worth of data.

As someone who has professionally destroyed hard drives (most of us
here are only practiced amateurs), what kind of recommendations/best
practices can you offer to ensure maximum HDD life? What are some
warning signs we should look for, other than 'my drive has come to a
literal screeching halt'? SMART is nifty, but in my experience has
only warned me of problems after I knew there were problems. What can
I do to detect a drive failure before it occurs, and to get my data
off before the drive becomes unreadable?
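One thing that helps with SMART is watching individual attribute raw values rather than the overall PASSED/FAILED verdict: counters like Reallocated_Sector_Ct and Current_Pending_Sector often start creeping up well before the drive reports a failure. A sketch of that idea, parsing the attribute table that `smartctl -A` (from smartmontools) prints — the column layout is assumed from typical smartctl output:

```python
# Attributes whose raw value creeping above zero is an early warning,
# even while overall SMART health still reports PASSED.
WATCH = {"Reallocated_Sector_Ct", "Reallocated_Event_Count",
         "Current_Pending_Sector", "Offline_Uncorrectable"}

def warning_attributes(smartctl_text):
    """Scan `smartctl -A` output; return (name, raw_value) pairs for
    watched attributes with a nonzero raw value."""
    warnings = []
    for line in smartctl_text.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID; the raw value is the
        # last column. Skip composite raw formats we can't parse.
        if len(fields) >= 10 and fields[0].isdigit() and fields[1] in WATCH:
            try:
                raw = int(fields[-1])
            except ValueError:
                continue
            if raw > 0:
                warnings.append((fields[1], raw))
    return warnings
```

Run from cron against each drive and mail yourself when the list is non-empty; a nonzero reallocated-sector count is the classic cue to start copying data off while the drive still reads.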

> Anyways.. It's late (I think, I just moved across a time zone boundary
> and then daylight saving time happened.. I have no clue what time it is
> anymore) and I am starting my new job tomorrow so I should stop
> ranting.

Thanks for your input. I made the fatal mistake once of buying a
Samsung drive instead of a similar Seagate/Western Digital to save $30
or so. I won't do it again.
