[Techtalk] Building boxen, testing setup, SCSI RAID

Brian Sweeney bsweeney at physics.ucsb.edu
Tue Nov 13 10:00:57 EST 2001


Michelle Murrain wrote:

> Hi folks,

Hi Michelle =)


> 
> I had already decided, a bit ago, to build a box. I get this urge about 
> once a year, and although I usually learn a lot in the process, I also 
> swear off doing it each time after I'm done :-) I'm actually thinking 
> about building two this time.
> 


I get that urge about that often too.  But I end up doing it so much at 
work, it's usually sated pretty quickly.  Now you've got me thinking 
about it again...=)


> But, this will be about the 4th time I've done this, so maybe by now 
> I've learned a bunch of stuff, and this will go somewhat more smoothly.
> 
> I'm thinking about a few things:
> 
> 1) putting together a testing setup - where I can easily drop components 
> in and out, and not have to muss with a tower case, etc. Has anyone done 
> this, and what kind of case did you use? I think that a desktop case 
> would be the best candidate for this, but other ideas are welcome. I did 
> just set up a KVM switch in my office, so that I now have 4 computers I 
> can control at the same time.


I do this pretty often, actually.  A friend of mine (who's a lurker on 
this list...hi Vic ;-)) got me hooked on InWin cases.  We used to use 
the mid-towers, I believe, and they worked pretty well; well designed and 
pretty space-efficient.  Obviously, if you've got the room and want LOTS 
of play space, get a full tower.  But they're big and heavy.

I'd also suggest getting some of those swappable hard drive chassis.  Not 
hot-swappable, per se, but you can have different OSes on different 
drives and swap them around without opening up the case; you just have 
to power the machine off first.  I know, it sounds obvious, but I tried 
it once without powering off, to test a software RAID (IDE, and not the 
proper chassis for it).  It, um, went poorly ;-).


> 
> 2) AMD vs Intel.  I'm pretty sold on using AMD right now for these. Any 
> reasons anyone has for me to think I should go with Intel? I've heard 
> horrible things about the new chipsets that allow Intel processors to 
> use non RAMBUS ram. I am truthfully, a bit fuzzy on the present RAM 
> situation.


I'd go AMD.  I spent hours trying to compile and run a Linux kernel 
about a week and a half ago, only to find there was a bug relating to 
some screwy way Intel designed the P4s (fixed in 2.4.14).  And there has 
been lots of ugliness around the RAM issue (though I don't understand it 
either).  And I like AMD.  Just 'cause. =)


> 
> 3) SCSI RAID - I think it's worth knowing how to do SCSI RAID, and 
> although it's pretty expensive to get a SCSI and RAID controllers, I 
> think it might in the end be worth the investment. But, what's better - 
> Hardware RAID or Software RAID? If I do software RAID, I don't have to 
> get a separate hardware controller, which is cheaper. But software RAID 
> takes CPU cycles and memory. Suggestions? I've been reading some RAID 
> how tos and docs - is it still true that you need to configure the 
> kernel to be able to see the RAID drives, or at this point do most 
> distros have these kernel modules out of the box?


You've pretty much hit the nail on the head, I think, with software vs. 
hardware RAID.  If you buy a RAID card, it should be faster (though 
some of the cheaper cards cheat and use the machine's CPU in a similar 
fashion to software RAID).  And I think hardware RAIDs are easier to 
manage, in general.

That's the thing, though; there really isn't much to most hardware RAID 
configurations, IMHO.  With SCSI you'd have to use an initrd to boot from 
it, but you'd have to do that with SCSI drives whether it was RAID or 
not.  And now that I think about it, you'd probably have to do that 
with IDE RAID too (lots of IDE RAID controllers appear as SCSI 
controllers to the OS).  If the driver's in the kernel, it's pretty 
easy.  The software RAID would require more work, so you might feel like 
you've learned more.  OTOH, software RAID installs on a fresh machine 
(at least with RedHat) are a lot easier than they used to be, and you 
can software RAID across different types/sizes/etc. of drives (I've heard 
of people RAIDing IDE and SCSI drives together, though I've 
never tried it).  The software RAID setup's in the install GUI in 
RedHat, and it's really straightforward.  I think the tough part is 
recovering from a drive failure later on, but I haven't done much 
software RAID work recently, so take that with a grain of salt.
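For reference, a software RAID setup with the raidtools that ship with 
RedHat boils down to an /etc/raidtab entry; something roughly like this 
for a two-drive mirror (device names here are just examples, adjust to 
your disks):

```
# /etc/raidtab -- minimal RAID-1 (mirror) across two example partitions
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
```

Then `mkraid /dev/md0` initializes the array, and after a drive failure 
something like `raidhotadd /dev/md0 /dev/sdc1` is roughly how you'd fold 
a replacement partition back in.  But again, I haven't done this 
recently, so double-check the HOWTO before trusting data to it.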

If you're going to go hardware RAID, I'd use one of the lower-end 
Adaptec cards.  Most of them already have drivers in the kernel 
(at least the 2.4's do; I don't remember if the 2.2's do or not), they're 
pretty decently priced, and I've found Adaptec stuff to be very 
reliable.  I've also tested some of the Arco RAID0/1 cards, and have 
been rather unhappy with them (two machines have reported RAID failures 
within a month, and the "failed" hard drive from one of them has since 
proved clean.  Don't know what's up there.)  The nice thing about the 
Arco cards, and cards like them, is that you "jump" the on-board IDE to 
them, and there are no drivers involved.  The card presents the BIOS with 
a single drive, and the BIOS presents it to the OS.  No muss, no fuss. 
But again, not much of a learning experience, and those are IDE drives 
anyway.
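About that initrd business: on a RedHat-ish system, booting off a SCSI 
(or RAID-appearing-as-SCSI) controller is roughly a matter of building 
an initrd that contains the controller's module and pointing LILO at 
it.  Something like the following, where the module name and kernel 
version are just stand-ins for whatever your card and kernel actually 
are:

```
# Build an initial ramdisk containing the SCSI driver
# (aic7xxx here is only an example module name)
/sbin/mkinitrd --with=aic7xxx /boot/initrd-2.4.14.img 2.4.14

# Then the image stanza in /etc/lilo.conf grows an initrd line:
#   image=/boot/vmlinuz-2.4.14
#       label=linux
#       root=/dev/sda1
#       initrd=/boot/initrd-2.4.14.img
# ...and re-run /sbin/lilo to actually install it.
```

Check your distro's mkinitrd man page first, though; the exact flags 
vary.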

Hope that rambling helps; 'twas quite a bit longer than I'd intended. 
And as always, in the words of Dennis Miller:

"But that's just my opinion; I could be wrong."

Good luck,
Brian


> 
> (Note: this is *not* for a production machine)
> 
> Thanks all!
> 
> .Michelle
> 
> ---------------------------------------
> Michelle Murrain, Ph.D.
> tech at murrain.net
> AIM:pearlbear0
> http://www.murrain.net/public_key.html for pgp public key
> 
> 
> 
> _______________________________________________
> Techtalk mailing list
> Techtalk at linuxchix.org
> http://www.linuxchix.org/mailman/listinfo/techtalk
