I am struggling with a drive (RAID5) speed issue on a Dell PE T610 and am wondering if any of you have seen such a problem. Here's a brief overview of the issue:
1. Drive Setup: PERC 6i, 1 RAID1 (C, 500GB), 1 RAID5 with 2 VDs (E, F, 500GB Each) and 1 RAID5 with 2 VDs (G, H, 250GB, 1750GB).
2. Copying anything from the network or C to H is capped at 4.5-5 MB/s. The initial copying speed can reach 40 MB/s until around 2GB of data has been copied, then the speed plunges.
3. Copying anything out of H is fast. This was shown by a disk-monitoring program. I don't have the exact numbers, but the read speed is roughly 10 to 20 times the write speed.
All drives are Seagate CS models except C, which uses Dell drives. Googling suggests that the issue may be either the "Write Back Policy" or RAID5 itself.
If you've seen this issue, please share your experience. Greatly appreciated!
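For anyone landing here later, the read/write asymmetry described above is consistent with the classic RAID5 small-write penalty. Here's a rough back-of-envelope sketch (my illustration, not from the thread; the disk count and per-spindle IOPS are assumptions, not measurements):

```python
# Rough sketch of the RAID5 small-write penalty. Each partial-stripe
# write costs 4 disk I/Os: read old data, read old parity, write new
# data, write new parity. Reads pay no such penalty.
# The figures below (4 spindles, ~75 IOPS each) are assumptions.

def raid5_small_write_iops(disks: int, per_disk_iops: float) -> float:
    """Effective small-write IOPS of a RAID5 set with the 4-I/O penalty."""
    return disks * per_disk_iops / 4

def raid5_read_iops(disks: int, per_disk_iops: float) -> float:
    """Reads carry no parity penalty; every spindle can serve them."""
    return disks * per_disk_iops

disks, per_disk = 4, 75.0
writes = raid5_small_write_iops(disks, per_disk)
reads = raid5_read_iops(disks, per_disk)
print(f"reads: {reads:.0f} IOPS, small writes: {writes:.0f} IOPS "
      f"({reads / writes:.0f}x gap)")
```

The raw penalty alone gives a 4x read/write gap; a disabled or exhausted controller write cache widens it further, which would fit the 10-20x gap reported above.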
Is the caching policy on the virtual disks set to write back? Is there a speed issue copying from E to H or between any of the other arrays? Are the controller firmware and driver up to date?
PERC 6 RAID controller driver 188.8.131.52 for Server 2003 32bit, released 6/30/09, recommended, urgent if yours is less than 1.21, *DON'T reboot, do firmware next: ftp.us.dell.com/.../RAID_DRVR_WIN_R211422.EXE
o For Server 2003 64bit you must use this one, v.64: ftp.us.dell.com/.../RAID_DRVR_WIN_R211424.EXE
o For Server 2008 you must use this one: ftp.us.dell.com/.../RAID_DRVR_WIN_R210509.EXE
o For Server 2008 64bit you must use this one, v.64: ftp.us.dell.com/.../RAID_DRVR_WIN_R210510.EXE
PERC 6/i Integrated RAID controller firmware 6.3.1-0001, A14, Released 11/1/11, urgent if yours is less than 6.1.1,
Windows update package: ftp.us.dell.com/.../RAID_FRMW_WIN_R313336.EXE
Can you run our online diagnostics tool on those hard drives to test them?
Thanks for replying. I may not be able to upgrade the firmware, as my management is strictly against it since this server hosts important data. Just to update you on what I was told: RAID5, especially 'partitioned' RAID5, supposedly has reduced performance. To my surprise, this seems to be true after some googling. HD Tune Pro shows roughly half the write speed when comparing RAID5 to RAID1, but twice the read speed. In any case, I double-checked the RAID settings, and these are the numbers:
Write Policy: Write Through
Stripe Element Size: 64KB
Disk Cache Policy: Disabled
I guess my follow-up question is: have you had such a setup and seen a similar issue? Would changing the Write Policy change the performance?
I am in Dell support, so I do not have any deployed servers. Changing the write policy should help, but the extent of the change depends on the server's role and how busy the drives are. If you look in Windows performance monitoring at the physical disks, what does it show for the average disk queue length and % disk time? It is possible that the drives are simply too busy to handle the data transfer at normal speeds.
Here is a Microsoft article that shows some performance counters to look at.
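Those counters can also be sampled from the command line with typeperf, which ships with Server 2003 and 2008. A sketch (the _Total instance is just one choice; run it while a slow copy to H: is in progress):

```shell
:: List the available PhysicalDisk counter instances first,
:: then sample queue length and % disk time every 5 s, 12 samples.
typeperf -q PhysicalDisk
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\% Disk Time" -si 5 -sc 12
```

A sustained average queue length well above the number of spindles in the array is the "drives too busy" signature described above.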
"Write Policy: Write Through"....
If you're not setting the policy to "write back" (WB), you would be better off without a RAID card in your disk subsystem at all, as "write through" (WT) has pitiful performance. If you have battery-backed cache on the controller, you are safe to change the policy to WB. I've been using RAID for over 20 years; changing "write through" to "write back" is non-destructive in every way. I have toggled WT to WB and back thousands of times.
If your higher-ups are making decisions without knowledge just because they have the power to, that is incompetence; they should not be calling the shots.
There is nothing slow about RAID5. It's not as fast as RAID10, but it's no slacker as long as WB is enabled. WB is the most important parameter you can enable on RAID adapters affordable by the common man.
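To put rough numbers on the WT-vs-WB gap, here's a toy calculation (mine, not the poster's data; the disk count, per-spindle IOPS, and write size are all assumptions):

```python
# Toy model of why write-back matters on RAID5: the controller cache
# acknowledges writes immediately and coalesces small sequential
# writes into full-stripe writes, which skip the read-modify-write
# cycle. Assumptions: 4-disk RAID5, 64 KB stripe element, ~75 IOPS
# per spindle, 4 KB application writes.
DISKS = 4
PER_DISK_IOPS = 75.0
IO_KB = 4                       # small application write size
STRIPE_KB = 64 * (DISKS - 1)    # user data per full stripe (parity excluded)

# Write-through: every small write pays the 4-I/O penalty, so the
# array sustains (total IOPS / 4) writes per second.
wt_mb_s = (DISKS * PER_DISK_IOPS / 4) * IO_KB / 1024

# Write-back: a full-stripe write costs one write per disk (data +
# parity), so the array completes PER_DISK_IOPS stripes per second.
wb_mb_s = PER_DISK_IOPS * STRIPE_KB / 1024

print(f"write-through: {wt_mb_s:.1f} MB/s, write-back: {wb_mb_s:.1f} MB/s")
```

The model is deliberately crude, but it shows an order-of-magnitude difference from the cache policy alone, in the same ballpark as the ~5 MB/s write ceiling reported earlier in the thread.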
I've been digging around our servers and forums, and it indeed looks like the fix may be as simple as changing the Write Policy from Write Through to Write Back. I will rebuild one of the more trivial VDs with the Write Back policy and verify the disk performance. Will let you all know. Thanks,
You can change the cache policy on the fly, you do not need to rebuild to change it. You can either change in server administrator under the virtual disk properties or in the controller by pressing F2 on that virtual disk.
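For the CLI route, OpenManage Server Administrator's omconfig/omreport tools can make the same on-the-fly change. A sketch only; the controller and vdisk IDs below are assumptions, so list yours first:

```shell
# Find the virtual disk ID and confirm the BBU is healthy before
# switching to write-back (IDs here are placeholders).
omreport storage vdisk controller=0
omreport storage battery controller=0

# Change the write policy on vdisk 1 to write-back, no rebuild needed.
omconfig storage vdisk action=changepolicy controller=0 vdisk=1 writepolicy=wb
```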
I finally had time to get back to this issue. I attempted to change the cache policy to "Write Back", but nothing changed after I hit Apply; the policy still stayed on "Write Through". I then rebooted the system, removed a virtual group, and rebuilt another RAID5 "Write Back" array, this time with only one VD. After the initialization was done, the setting jumped back to "Write Through" again.
I feel like I am missing some important piece of the puzzle; can you point it out for me? Am I missing an 'external battery'?
If you go to OMSA -> Storage -> Batteries, what does it show for the state of the battery? It does sound like a battery issue is causing the controller to stay in write-through mode.
Pulled from OpenManage SA:
Name: Battery 0
Predicted Capacity Status: Ready
Learn State: Idle
Next Learn Time: 71 days 17 hours
Maximum Learn Delay: 7 days 0 hours
Learn Mode: Auto