[Techtalk] Faster badblock scans?

Julie txjulie at austin.rr.com
Wed Aug 27 08:18:41 EST 2003


Maria Blackmore wrote:
> On Tue, 26 Aug 2003, Julie wrote:
> 
> 
>>>I'm sorry to say this, but don't bother.  Don't just delete this post as
>>>being unhelpful, please read it and take note, it really is for your own
>>>good.
>>
>>While I appreciate the rant, and could quite easily have written
>>it myself, I'd still like an answer to my question.
> 
> 
> You cannot scan a drive any faster than it can read the data, your only
> solution would be the application of hdparm to speed up read access to the
> drive.  Also, your read speed will be hampered by the fact that data is
> being read a block at a time, which will increase overheads.

"hdparm" had already been applied.  It turns out that whatever values
were set at boot time had since been reset, probably during error
recovery (which hdparm warns about).  It would seem that the "-k" flag
I'd set back when I first set up the system didn't get saved.  I reset
the parameters and added "-k -K" to "EXTRA_PARAMS" in the appropriate
/etc/sysconfig files.
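
For anyone following along, the invocation was along these lines
(device name and tuning flags are illustrative -- the point is the
-k/-K part, and note the set forms take a 1):

  # re-apply the tuning (-d1 DMA, -c1 32-bit I/O), then ask the
  # driver to keep the settings (-k1) and the drive features (-K1)
  # across the resets that error recovery triggers
  hdparm -d1 -c1 -k1 -K1 /dev/hdb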

"badblocks" then ran a =lot= faster -- a 3GB partition in about 100
seconds for a read-only test.
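
For reference, that was a plain read-only pass, something like
(partition name hypothetical):

  # read-only scan; -s shows progress, -v reports totals
  badblocks -sv /dev/hdb3

3GB in 100 seconds works out to roughly 30MB/s, which is about what
a sustained sequential read should manage on this drive.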

>>For what it's worth, the drives are still in excellent condition,
> 
> 
> I wouldn't say that any drive with bad blocks showing up is in
> excellent condition.

In 22 years of using small computer drives, my experience is that
disks slowly develop more and more bad blocks.  Even when I used
large computer drives, such as the old multi-platter disk packs,
those drives developed bad blocks as well.  One of my college
projects was to write a program which did what "e2fsck -c" does on
old DEC RL-02 drives under System III UNIX.
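
The job itself is simple enough to sketch in a few lines of shell
(device and size are made up; the original was rather more involved):

  #!/bin/sh
  # read the device one 1K block at a time; any block that fails
  # to read gets appended to the bad block list -- the same basic
  # job "badblocks" and "e2fsck -c" perform
  DEV=/dev/hdb3        # hypothetical device
  BLOCKS=3145728       # 3GB worth of 1K blocks
  b=0
  while [ $b -lt $BLOCKS ]; do
      dd if=$DEV of=/dev/null bs=1024 skip=$b count=1 2>/dev/null \
          || echo $b >> badblock.list
      b=$((b + 1))
  done

One request per block is exactly the overhead Maria mentioned;
"badblocks" reads dozens of blocks per request (see its -c option),
which is a large part of why it is so much faster than a naive loop.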

If you have the money to replace every drive that winds up with an
unrecoverable error -- great.  If you think performing a low-level
format on a drive is a bad idea -- great.  I don't.

>>Mostly the problem is that of the 60,000,000 blocks of IDE drive on
>>this machine, 40 or 50 of them seem to have gone bad over the past 12
>>to 18 months.
> 
> 
> ... is proof, to my mind.  It's not their presence that's the key, it's
> the fact that their numbers are increasing.

My experience is that what matters isn't that they are increasing,
it's how they increase and, back when geometries were more readily
discernible, where they increased.  I've had drives where one surface
going to hell in a hand basket and it was obvious that the drive just
had to go.  I've had other drives last 10 years with a slowly growing
list of bad blocks.  In the olden days drives came with a "Media error
list" printed on a sheet of paper.  When the sheet of paper was full
it was time to replace the disk.
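
The modern equivalent of that sheet of paper is a saved scan and a
diff.  A sketch, assuming you keep the previous list around (file
and device names made up):

  # bad block numbers go to stdout; any line in the new list
  # that isn't in the old one is a newly-failed block
  badblocks /dev/hdb3 > badblocks.new
  diff badblocks.old badblocks.new && echo "no new bad blocks"
  mv badblocks.new badblocks.old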

>>And while rants are often fun,
> 
> 
> I didn't write it for fun, I wrote it because I don't like to see people
> losing data.

And sometimes people want basic answers to simple questions.

> (Listening to the sound of large numbers of 10k RPM SCSI drives :)
> (and is a firm believer in getting what you pay for)

When I can afford 1TB of 10k RPM SCSI drives, I'll buy them.  I don't
need the performance, and as much of the data is relatively static
(meaning, some of it hasn't changed since Reagan was president ...),
good backups suffice for my reliability concerns.  I put up with the
"you really need this super-high-performance disk" from the sales
droids.  When the fastest the data can leave this machine is 100Mb/s,
I just don't see needing a drive that is going to read data a whole
lot faster.
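
To put numbers on that: 100Mb/s is at most 12.5MB/s of payload, and
the read-only scan above already pulled roughly 3GB / 100s = 30MB/s
off the platters.  The network gives out long before the disk does.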

Really -- I appreciate your concern and all.  I don't appreciate the
lecture on how I should manage my disk storage.
-- 
Julianne Frances Haugh             Life is either a daring adventure
txjulie at austin.rr.com                  or nothing at all.
					    -- Helen Keller


