May 21st, 2019 11:00

Poor single thread performance on R620

I recently purchased some new-to-me R620 servers for a cluster.  Mostly they will be doing heavy database transactions, but in general they will run Hyper-V VMs doing a variety of work.  It was during the database work that I started realizing the servers were performing much worse than my old R710.  Since then I've swapped out controllers, NICs, and drives in search of performance comparable to CrystalDiskMark results posted online for similar systems.  My random single-threaded performance in particular seems horrible.  Changing the BIOS system profile to Performance helped a lot, but I'm still running slow.  Enabling/disabling read, write, and disk cache changes behavior, but doesn't alter performance radically either way.  Every update is applied, and the tests below use no read ahead / write back / disk cache enabled (best results).  Am I missing something, could my CPU really be that much of a single-thread bottleneck, or are my results normal?  Thanks for any advice!

System:
R620
Dual E5-2650v2
128GB (16x8GB PC3L-12800R)
H710p mini mono
5x Intel D3-S4610 960GB SSDs in Raid 5
Intel X540 NIC

Using CrystalDiskMark 3 (9 passes / 4 GB):
My system
Read / Write
Seq: 1018 / 1637
512K: 743 / 1158
4K: 19 / 23
4K QD32: 204 / 75

Comparison system - https://www.brentozar.com/archive/2013/08/load-testing-solid-state-drives-raid/
Read / Write
Seq: 1855 / 1912
512K: 1480 / 1419
4K: 34 / 51
4K QD32: 651 / 88

Using CrystalDiskMark 6 (2 passes / 100 MB):
My system
Read / Write
Seq Q32T1: 3022 / 3461
4K Q8T8: 335 / 290
4K Q32T1: 210 / 195
4K Q1T1: 32 / 30

Comparison system - https://www.youtube.com/watch?v=i-eCmE5itzM
Read / Write
Seq Q32T1: 554 / 264
4K Q8T8: 314 / 259
4K Q32T1: 316 / 261
4K Q1T1: 33 / 115

Using CrystalDiskMark 6 (5 passes / 1 GB):
My system
Read / Write
Seq Q32T1: 2619 / 1957
4K Q8T8: 306 / 132
4K Q32T1: 212 / 116
4K Q1T1: 25 / 27

Comparison system - R610, dual X5670, 128 GB 1600 MHz RAM, 4x Samsung 860 Pro 1TB RAID 5, H700
Read / Write
Seq Q32T1: 754 / 685
4K Q8T8: 305 / 69
4K Q32T1: 262 / 69
4K Q1T1: 32 / 38
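For anyone wanting to cross-check the 4K Q1T1 numbers above without CrystalDiskMark, here is a rough single-threaded sketch in Python.  Assumptions: a POSIX system (`os.pread` is not available on Windows), a made-up scratch-file name, and no direct I/O, so the OS cache will inflate results compared with CrystalDiskMark.

```python
# Rough single-threaded 4K random-read microbenchmark, similar in spirit
# to CrystalDiskMark's 4K Q1T1 test.  Scratch-file name is hypothetical;
# without direct I/O the OS page cache can inflate the numbers.
import os
import random
import time

BLOCK = 4096
FILE_SIZE = 64 * 1024 * 1024   # 64 MB scratch file
PATH = "cdm_scratch.bin"       # hypothetical scratch file name

def make_scratch(path: str, size: int) -> None:
    """Write a scratch file filled with random bytes."""
    with open(path, "wb") as f:
        f.write(os.urandom(size))

def random_read_mbps(path: str, duration: float = 1.0) -> float:
    """Issue 4K random reads from one thread at queue depth 1, return MB/s."""
    blocks = FILE_SIZE // BLOCK
    done = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        while time.perf_counter() - start < duration:
            offset = random.randrange(blocks) * BLOCK
            os.pread(fd, BLOCK, offset)
            done += 1
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return done * BLOCK / elapsed / 1e6

if __name__ == "__main__":
    make_scratch(PATH, FILE_SIZE)
    print(f"4K random read, QD1 / 1 thread: {random_read_mbps(PATH):.1f} MB/s")
    os.remove(PATH)
```

Running it on both the R620 and the R710 on the same volume would at least show whether the single-threaded gap survives outside the benchmark tool.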


May 24th, 2019 10:00

Well, it looks like the problem was Windows Server 2019.  No idea where the problem lies, whether it's an OS or driver issue, but clearly there is a huge problem between this generation of Dell servers and 2019.  I installed 2016 and it's flying now.  Thanks for firing your QA team and turning me into your beta tester, Microsoft.

Moderator

May 22nd, 2019 06:00

D3mo,

Normally we don't work performance issues, due to the vast number of variables involved.  What I would look into is whether the BIOS System Profile is set to disable C-states.  Try changing it to Custom, disable C-states, and see if that has any effect.

Let me know what you see.
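As a side note, it's worth confirming from the OS that C-states really are off after the BIOS change.  A minimal sketch, assuming a Linux environment with the sysfs cpuidle interface (on the poster's Windows setup, `powercfg` and the BIOS profile play the equivalent role):

```python
# List the C-states the cpuidle driver exposes for one CPU and whether each
# is disabled.  Assumes Linux sysfs; returns an empty list when cpuidle is
# not exposed (e.g. C-states disabled in firmware, or inside a VM).
import glob
import pathlib

def cpuidle_states(cpu: int = 0) -> list[tuple[str, bool]]:
    """Return (state_name, disabled) pairs for each exposed C-state."""
    base = pathlib.Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle")
    states = []
    for state_dir in sorted(glob.glob(str(base / "state*"))):
        p = pathlib.Path(state_dir)
        name = (p / "name").read_text().strip()
        disabled = (p / "disable").read_text().strip() == "1"
        states.append((name, disabled))
    return states

if __name__ == "__main__":
    states = cpuidle_states()
    if not states:
        print("no cpuidle info (C-states off in firmware, or not exposed)")
    for name, disabled in states:
        print(f"{name}: {'disabled' if disabled else 'enabled'}")
```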


May 22nd, 2019 10:00

I totally understand, and I've now spent at least three weeks swapping out parts and tweaking every setting I can find to try to fix it, so at this point I'm pretty much desperate.  My Dell R610s and 1950s have had no issues: despite having slower hardware in every way, they can transfer a folder with thousands of small files from VM to host in 23 seconds, while both of my identical R620s take 2 min 40 sec for the same VM-to-host transfer.  Large files are no problem for the R620s, though, which led me to compare my servers to similar setups and see that my CrystalDiskMark speeds for random single-threaded I/O are definitely slower.
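The small-file transfer test described above is easy to make repeatable.  A minimal sketch, assuming local paths (the poster's copy went VM-to-host, so absolute times will not match, but the relative gap between machines should still show up); file count and size are made-up stand-ins for "thousands of small files":

```python
# Create a tree of many small files and time copying it, to reproduce the
# "thousands of small files" test in a controlled way.  Counts/sizes are
# illustrative; adjust to taste.
import pathlib
import shutil
import tempfile
import time

def make_tree(root: pathlib.Path, count: int = 2000, size: int = 1024) -> None:
    """Populate root with `count` files of `size` bytes each."""
    root.mkdir(parents=True, exist_ok=True)
    payload = b"x" * size
    for i in range(count):
        (root / f"file_{i:05d}.dat").write_bytes(payload)

def timed_copy(src: pathlib.Path, dst: pathlib.Path) -> float:
    """Copy the tree and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    shutil.copytree(src, dst)
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "src"
        dst = pathlib.Path(tmp) / "dst"
        make_tree(src)
        print(f"copied 2000 x 1 KB files in {timed_copy(src, dst):.2f} s")
```

Running the same script on the R610 and R620 side by side would turn the anecdotal 23 s vs 2:40 gap into a number you can re-measure after each settings change.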

I have tried just about every BIOS setting, and currently have the system profile set to Performance with all power-saving options (C-states, etc.) turned off.  I even tried the Dell Controlled Turbo setting, but it didn't make a huge difference.  I can say that changing from the default Dell power management profile to Performance was the single biggest improvement I've made to date: it doubled my performance numbers and got rid of noticeable OS lag.  Unfortunately, I still have huge issues.  Thanks for the recommendation though.  I'm seriously hoping there is something simple like that, but I just can't find it.
