Unsolved
Disk performance problems in R720 server
We bought a Dell R720 server and installed CentOS 6.3 on it. We have another, older Dell server that also runs CentOS 6.3. When we ran a simple disk benchmark, the older server was 10 times faster than the new one. The benchmark writes something to disk and flushes it in a loop. We want to track down why the new server is so slow. It has two disks, which we configured as RAID-0. df -h output from both:
[Older server]
[xxx@xxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 97G 28G 64G 31% /
tmpfs 1.9G 11M 1.9G 1% /dev/shm
/dev/sda2 193G 103G 80G 57% /home
[New server]
[xxxx@xxxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_snap-lv_root 50G 4.6G 43G 10% /
tmpfs 12G 312K 12G 1% /dev/shm
/dev/sda1 485M 37M 423M 9% /boot
/dev/mapper/vg_snap-lv_home 488G 220M 463G 1% /home
How can we figure out what's making the newer server 10 times slower, and how can we fix it? Thanks.
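A write-and-flush loop like the one described can be reproduced with dd. This is a hypothetical sketch, not the original benchmark; the file name, block size, and count are arbitrary:

```shell
# Write 1000 x 4 KiB blocks, forcing each write through to disk (dsync).
# On hardware where flushes are slow (e.g. drive cache disabled), the
# MB/s figure dd reports drops sharply.
dd if=/dev/zero of=./flushtest bs=4k count=1000 oflag=dsync
rm -f ./flushtest
```

Running this on both servers should make the flush-bound difference visible directly, without the rest of the application in the way.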
Duminda
DELL-Daniel My
Moderator
6.2K Posts
February 12th, 2014 10:00
Hello Duminda
To compare the two systems I'll need more information about the hardware in each:
What RAID controller?
What model hard disk drives?
Thanks
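On CentOS, the controller and drive models Daniel is asking about can usually be read from the command line. A sketch (assuming smartmontools is installed; device names will vary):

```shell
lspci | grep -i raid       # RAID controller model, e.g. "PERC H710"
cat /proc/partitions       # block devices the kernel sees
smartctl -i /dev/sda       # drive vendor, model, and rotation rate
```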
dumrat
4 Posts
February 12th, 2014 20:00
Hi Daniel,
lshw output:
DELL-Daniel My
Moderator
6.2K Posts
February 15th, 2014 09:00
It looks like this:
Old server:
PERC 6/i
500GB SATA 7200RPM
New server:
PERC H710
300GB SAS 15000RPM - I'm assuming the 5K RPM is a typo. If it is not a typo then that would be part of the performance problem.
Based on that information, the new system should outperform the old one by a substantial margin. The PERC 6 is a really nice controller, so the H710 is not going to be a LOT better, but it should be better. Your big performance gain on the new server should come from the 15K drives versus the 7200 RPM SATA drives. The old server should definitely not be 10x faster.
There is one setting on the controller that is disabled by default and can have a big performance impact. The controller has cache memory, and typically the hard drives also have their own cache memory. By default the controller does not use the hard drives' cache memory because it is not battery backed; in the event of a sudden power loss, you can have data loss or corruption if non-battery-backed cache is in use. If this setting is enabled on the old server and disabled on the new one, it could explain the benchmark results.
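On a PERC controller running Linux, this per-drive disk cache policy is typically changed with the LSI MegaCli tool rather than in the system BIOS. A hedged sketch; verify the flags against your MegaCli version, and note the power-loss risk described above before enabling it:

```shell
# Show the current disk cache policy for all logical drives.
MegaCli -LDGetProp -DskCache -LAll -aAll

# Enable the drives' own write cache (risk: data loss on power failure,
# since the drive cache is not battery backed).
MegaCli -LDSetProp -EnDskCache -LAll -aAll

# The controller's battery-backed write-back cache is a separate policy:
MegaCli -LDSetProp WB -LAll -aAll
```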
Thanks
dumrat
4 Posts
February 24th, 2014 18:00
Hi Daniel,
We still haven't been able to figure out the issue. About your last paragraph: where do we change these settings? We cannot see them in the BIOS. Thanks.
Duminda
HeMan321
2 Posts
March 3rd, 2023 07:00
I know this is ancient, but did you ever get to the bottom of this? We just put a few SAS SSDs into an old R620 with an H710 and found that all disks (SAS SSDs and SATA HDDs) seem to max out at 130 MB/s!
Any idea what might be limiting them?