jeffmeek
1 Rookie
•
8 Posts
0
March 9th, 2012 09:00
Thanks for your input.
Both servers have a PERC 6/i with 6Gbps drives. I wouldn't think it would matter, since the issue is purely memory with no disk I/O. We only have a couple of Dell servers, so my experience with them is limited.
Dell support told me, "Downgrading the BIOS may cause the newer system to fail POST."
The reason I suggested a BIOS change was to match the new-slow system with the old-fast system. I won't upgrade the old system, so downgrading the new one was the other option.
There just isn't much difference between the two servers.
theflash1932
9 Legend
•
16.3K Posts
0
March 9th, 2012 09:00
Sorry ... I saw "transfer" and assumed you were doing disk transfers, failing to see how it was a memory configuration issue :) Also, just an FYI ... the PERC 6 is a 3Gbps controller, so if you move on to disk benchmarks, go in with the right expectations. My bad for not paying attention. I'm not sure what else to suggest short of swapping components to narrow it down ... however, it is clear that your configs should not have such a disparity in performance. Have you tried reinstalling, in case something goofy happened during the install?
"Dell support told me downgrading the BIOS may cause the newer system to fail POST."
You have the same chance of causing the system to fail POST when upgrading the BIOS as you do downgrading. It is under warranty, so even if it does cause an issue, they will replace your board.
theflash1932
9 Legend
•
16.3K Posts
0
March 9th, 2012 09:00
" you can't downgrade the firmware if 'old' turns out slow like 'new'"
Sure you can. You simply run the older package, saying yes I know that this is older than the current version. However, I fail to see how that would help your situation :)
One thing you didn't mention is WHICH PERC they had. I might assume they have the same PERC controller, but it can make a huge difference when comparing H200 to H700/512 to H700/1GB to PERC 6/i, etc., so I'd hate to make assumptions on such an important detail. I might also assume that all drives are 6Gbps?
theflash1932
9 Legend
•
16.3K Posts
0
March 9th, 2012 10:00
It is on the download page of 6.0.7 under Other Versions:
www.dell.com/.../DriverFileFormats
jeffmeek
1 Rookie
•
8 Posts
0
March 9th, 2012 10:00
I'm not just a benchmark snob :) If the new system will perform like its twin brother with the same hardware, I'll be happy.
I forgot to mention that I actually did swap the RAM between the two systems to eliminate that as a possibility. The slow memory performance stuck with the server, not the RAM.
I originally had CentOS 5.7 installed when the slow performance first came to light. I reinstalled RHEL 5.3 just to make it identical with the fast system. I don't *think* it's a Linux config issue of some kind. Both systems have straight-off-the-DVD installs and kernels.
I also tested both with the Phoronix Test Suite (PTS), a boot-DVD benchmark, to eliminate the OS from the picture. On those memory tests, the new system was only 17% slower. Still a big difference for two servers with almost identical configs. That's what makes me wonder if the problem is some kind of BIOS/Linux interaction.
Where would I find old BIOS files? Under the "downloads" for my service tag, it just shows the two most recent BIOS versions, 6.1.0 and 6.0.7. I'm on 6.0.7; I'll go ahead and upgrade just to try it.
jeffmeek
1 Rookie
•
8 Posts
0
March 9th, 2012 10:00
Thanks for the direct link. There isn't an Other Versions link on any of the BIOS pages I see. If I don't get any other ideas before Monday, I'll give the old BIOS a try. If it fouls up the system, so be it. I can't use it as is :)
Upgrading from 6.0.7 to 6.1.0 didn't have any effect.
theflash1932
9 Legend
•
16.3K Posts
0
March 9th, 2012 10:00
The Other Versions link isn't on the download page for the latest version ... just the 6.0.7 page (a "bug", I take it, that just started recently on some of these downloads).
Hamboney
79 Posts
0
March 9th, 2012 11:00
We have a few of the R710s here, both new and old 11th Gens. I have downloaded the benchmark and will try it in the lab on Monday.
jeffmeek
1 Rookie
•
8 Posts
0
March 10th, 2012 08:00
Thank you very much, Hamboney. That would be most helpful. I am going to swap the CPUs between the two boxes and retest. Dell is sticking to the proc speed as the reason. I can't believe that going from 2.53GHz to 2.40GHz procs causes the system to run at half the speed. Hopefully the CPU swap plus your numbers will bring some more data to the table.
theflash1932
9 Legend
•
16.3K Posts
0
March 10th, 2012 09:00
Theoretically, the 2.40 (both are the same specs otherwise) should only be about 5% slower than the 2.53, so even if you take your 17% variance with the bootable version, I don't think the difference is justified.
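Back-of-the-envelope, the clock ratio alone accounts for roughly:

```shell
# Expected slowdown from clock speed alone: the ratio 2.53/2.40 minus one,
# expressed as a percentage.
awk 'BEGIN { printf "%.1f%%\n", (2.53/2.40 - 1) * 100 }'
# prints 5.4%
```

Nowhere near enough to explain a 2x gap, let alone 17%.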
Hamboney
79 Posts
0
March 12th, 2012 07:00
Does this tool have to be compiled / installed? Ideally, a portable application or ISO-based tool would be best.
I will review the docs and see what is required to make it run...
Hamboney
79 Posts
0
March 12th, 2012 08:00
Using MemTest86+ v4.20, we have the following speeds reported: R710 - 12 GB RAM (6 x 2 GB UDIMMs) - 6,879 MB/s
RAM settings: 532 MHz (DDR3-1064) / CAS 7-7-7-20, Triple Channel
I have run it against a newer (integrated H700 RAID) and an older (integrated PERC 6/i RAID) 11th Gen R710 with the same results.
jeffmeek
1 Rookie
•
8 Posts
0
March 12th, 2012 09:00
sysbench does need to be compiled. On any flavor of Linux, it is an easy install: "./configure --without-mysql", then "make" and "make install".
I booted memtest and it reports 7321 MB/s, in line with yours. I did some previous benchmarks with a boot DVD of the Phoronix Test Suite. That reported a difference of about 17% between the old-fast system and the new-slow system. 17% still seems high, but it could at least be in line with one system having faster CPUs than the other. The real problem shows up when Linux is running; then the difference between them goes to 200%.
That's why it is important to get some benchmark numbers using sysbench under Linux. Is this terrible performance related to some Linux/BIOS combination, to CPU speed, or is it just my individual server? With only my two systems to test with, I can't get enough data to make a guess. And my older system is a production server, so my testing options with it are much more limited.
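For anyone following along, the build-and-run steps look something like this. This is a sketch: the tarball directory name and the 0.4-era option names are my assumptions, not details from the posts above.

```shell
# Build sysbench from source, skipping the MySQL OLTP driver
# (the directory name is an example for a 2012-era 0.4.x tarball).
if [ -d sysbench-0.4.12 ]; then
    (cd sysbench-0.4.12 && ./configure --without-mysql && make && make install)
fi

# Run the memory throughput test (0.4.x command-line syntax).
if command -v sysbench >/dev/null 2>&1; then
    sysbench --test=memory --memory-block-size=1M --memory-total-size=10G run
else
    echo "sysbench not installed; build it first"
fi
```

The memory test reports an aggregate MB/s figure, which is the number worth comparing between the two boxes.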
Hamboney
79 Posts
0
March 12th, 2012 10:00
SysBench doesn't seem like a viable option here. Different UNIX/Linux install and compile options might skew the results. This is why I wanted a common test tool with the same boot code. My Linux won't be your Linux...
I suspect it isn't a hardware issue but more of a software configuration one?
Any way to isolate the Linux differences?
jeffmeek
1 Rookie
•
8 Posts
0
March 12th, 2012 11:00
I would agree that the issue is more likely to be a Linux-BIOS interaction problem, not a physical hardware problem.
I also agree that differences in Linux flavor or kernel can have an impact on performance.
But I'm not talking about a 5 or 10% performance difference. My old server running RHEL 5.3 with the 2.6.18 kernel moves memory at TWICE the speed of the almost-identical new server running the exact same OS and kernel. My brand new R710 moves memory slower than a six-year-old PE 1750 running the same kernel! I also tested the new server with CentOS 5.7, and it ran slow with that as well.
So while your Linux/kernel won't be the same as mine, anything you have running on an R710 should perform a lot better than on a PE 1750. If it doesn't, then we both have something to dig into :)
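One distro-neutral sanity check I can suggest (not a tool mentioned earlier in the thread): a plain dd copy from /dev/zero to /dev/null never touches disk, so its reported rate is a crude memory/CPU-bound number you can compare across boxes without compiling anything.

```shell
# Crude memory-copy throughput check: both endpoints are pseudo-devices,
# so no disk I/O is involved; dd prints its transfer rate on stderr.
dd_out=$(dd if=/dev/zero of=/dev/null bs=1M count=1024 2>&1)
echo "$dd_out"
```

It is only a rough proxy (a single-threaded copy, not a real bandwidth benchmark), but a 2x gap between the two servers should still show up in the reported rate.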