January 6th, 2009 01:00

PERC 4/SC write cache settings

Hi -

We have a PowerEdge with a PERC 4/SC in a RAID 5 configuration. It's been set up with ext3 (which may have been a bad choice, but it's too late for that now). We're getting abysmal performance from it and I'm trying to figure out what I can do to rectify it. I'd like to enable the write cache (the server is on a UPS, so I'm willing to take the risk of a power failure losing cache data), but I can't figure out whether I can do this "live" through the dellmgr program without trashing the RAID set. I've also read that changing the DirectIO / CachedIO setting to CachedIO can help - is this something that can be done live as well?

Thanks

Marcus

January 6th, 2009 08:00

The sort of write performance we're seeing (from dd, admittedly, so this can probably be taken with a pinch of salt):

# dd if=/dev/zero of=testfile bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes transferred in 597.142945 seconds (3596264 bytes/sec)
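For what it's worth, a plain dd like the one above can be skewed by the Linux page cache rather than measuring the array itself. A variant along these lines (GNU dd; the smaller size here is just to keep the illustration quick) forces the data to disk before the rate is reported:

```shell
# conv=fdatasync makes GNU dd flush the file's data to disk before it
# reports its throughput, so the figure reflects the array rather than
# the page cache. 64 MiB is just to keep the example quick; use a size
# larger than the machine's RAM for a more honest number.
dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync
rm -f testfile
```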

January 6th, 2009 13:00

IF you have OMSA installed, and IF you installed the storage piece as an optional install, you should be able to change the settings on the fly. I've got a machine with a PERC 4/DC and was able to do it. Here's a screenshot --> http://www.delltechcenter.com/page/PERC+4+Cache+OMSA

If you don't have OMSA installed, you can install it and start the service manually; that might work as well.
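If the OMSA CLI pieces are installed too, the same change can be made from a shell. The commands below are a sketch of the omconfig storage syntax as I remember it - the controller/vdisk IDs and the exact writepolicy keyword are assumptions, so verify against `omconfig storage vdisk -?` on your version before running anything:

```shell
# List virtual disks and their current read/write cache policies.
# (controller ID 0 is an assumption - confirm IDs with
# "omreport storage controller" first.)
omreport storage vdisk controller=0

# Switch virtual disk 0 to write-back caching; the "wb" keyword can
# vary by controller generation, so treat this as a sketch, not gospel.
omconfig storage vdisk action=changepolicy controller=0 vdisk=0 writepolicy=wb
```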

How many drives are in the RAID 5 set? Have you updated all the firmware and drivers?

Is "abysmal" performance just through the dd test? That's not a really indicative test of a server's performance (single-threaded, single-user); IOzone would be a better choice for testing - http://www.iozone.org/
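An invocation along these lines would cover sequential write and read; the flags are standard IOzone options, but the sizes are placeholders - pick a file size bigger than the machine's RAM so the cache can't hide the disks:

```shell
# -i 0: write/rewrite test    -i 1: read/reread test
# -s: test file size          -r: record (I/O block) size
# IOzone creates and removes its own scratch file in the current
# directory, so it doesn't touch existing data.
iozone -i 0 -i 1 -s 512m -r 64k
```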

January 7th, 2009 02:00

"How many drives are in the RAID 5 set? Have you updated all the firmware and drivers?

Is "abysmal" performance just through the dd test? That's not a really indicative test of a server's performance (single-threaded, single-user); IOzone would be a better choice for testing - http://www.iozone.org/"
There are 4*250GB SCSI drives (10k?) in the bays (it's a 1950 with hotswap). I'm running Debian sarge currently (so OMSA isn't easy :). I can probably get IOzone results out of it overnight - is IOzone safe to run on a live machine (i.e. is it non-destructive)?

I'll update the kernel to 2.6 and see if that helps (it's possible that will help with getting OMSA to run as well). I've been avoiding the firmware/driver update, but I could probably do it one weekend if that's going to help.

Thanks

Marcus

January 7th, 2009 02:00

"There are 4*250GB SCSI drives (10k?) in the bays (it's a 1950 with hotswap). I'm running Debian sarge currently (so OMSA isn't easy :). I can probably get IOzone results out of it overnight - is IOzone safe to run on a live machine (i.e. is it non-destructive)?"
Sorry, just double-checked - it's an 1800 with 4*250GB SCSI drives in hotswap.

January 7th, 2009 08:00

I haven't used IOzone much - once, a long time ago - so double/triple-check before you run it; I'm not sure whether it's destructive.

If you google "debian OMSA" there are some good threads to help out.

This page is a good read as well - http://www.delltechcenter.com/page/PERC6+with+MD1000+and+MD1120+Performance+Analysis+Report - not completely relevant (different RAID controller, number of disks, and setup), but there's tons of data across different workloads, with analysis of write-back and write-through cache results.

Firmware, disk, and driver updates are usually a good thing. Not always the case, but as a general rule the product gets out the door first, and performance tweaking in the firmware and drivers is sometimes done after the fact because it's easier to deliver software updates. That was definitely the case at the other big hardware manufacturer I worked for in a previous life. Just the nature of the beast.