Olympus-OM

Subject: Re: [OM] Seeking Hard Drive Advice
From: Ken Norton <ken@xxxxxxxxxxx>
Date: Sat, 3 Jul 2010 18:34:03 -0500
Since we're on the subject of hard drives and reliability...

I've got a Barracuda 7200.8 250GB drive that has failed. In doing a
little research, I've found that this particular model has a
larger-than-average failure rate. That wouldn't bother me too much, as
there is a warranty, except there are files on it I need. It just
happens that my CD-R backups of several important files have gone bad.
So I'm having to send the unit to a recovery place and pay the freight
to get the files moved onto a new drive. Of course, the bad part is
that the warranty will be null and void the moment they crack the case
open. I've budgeted a few hundred bucks for this, but will gladly pay
it, as the recovered files are worth a few times that.

Back in the dark ages, when 9GB drives were the fattest cows in the
barn, the company I worked for spec'd a particular model for inclusion
in the digital hard-drive systems we were selling. These Micropolis
drives experienced a 100% failure rate within a year if mounted
vertically, and a 100% failure rate within two years if mounted
horizontally. Spindle failure. Of course, we had sold hundreds and
hundreds of them....

In a parallel universe, here in the phone industry, we deal with
mega-millions of dollars in telephone, data and optical equipment.
What we've learned is that no matter how well engineered and built
something is, when stuff goes bad, it's pretty universal across all of
the installed units. It's pretty crazy: within two weeks I had five
120km CWDM SFPs fail or drift out of spec. Not a single one was in the
same location, on the same lambda, or carrying the same type of
payload. The only thing that explains it is that they were all put
into service within a couple of weeks of each other. What is really
creepy is when totally passive devices like patch panels or
fiber-optic jumpers decide to fail. Every few months we have a "say
what?" moment. Recently, we installed a major fiber-optic DWDM system
where literally 3/4 of two different types of cards failed. These were
out-of-box or near-out-of-box failures. The problem? A fiber
connector. Bewildering.

Very specific to the hard-drive industry, the use of MTBF is a little
different than you'd expect. An MTBF of 200,000 hours (typical) means
that, across a population of 200,000 freshly shipped units, one unit
will fail every hour of operation. Past the first 1000 hours of
operation, the rate of failure goes up.
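
To put numbers on that, here's a minimal Python sketch of the
fleet-level arithmetic, assuming the constant-failure-rate model that
vendors quote; the function name and fleet sizes are illustrative,
not from any datasheet:

  def expected_failures(mtbf_hours, fleet_size, hours):
      # With a constant hazard rate, each unit fails at roughly
      # 1/mtbf_hours per hour, so a fleet of N units sees about
      # N * hours / mtbf_hours failures over the window.
      return fleet_size * hours / mtbf_hours

  # 200,000 drives at a 200,000-hour MTBF: about one failure per
  # hour across the fleet, matching the claim above.
  print(expected_failures(200_000, 200_000, 1))   # ~1.0

  # One drive run 24/7 for a year (8,760 hours): about 0.044
  # expected failures, i.e. roughly a 4% chance of dying that year,
  # and that's before the post-1000-hour wear-out kicks in.
  print(expected_failures(200_000, 1, 8_760))     # ~0.0438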

AG
-- 
_________________________________________________________________
Options: http://lists.thomasclausen.net/mailman/listinfo/olympus
Archives: http://lists.thomasclausen.net/mailman/private/olympus/
Themed Olympus Photo Exhibition: http://www.tope.nl/
