Poor performance - MD3000i as a file server
Greetings folks,
I have an MD3000i acting as storage for a PE2950; here are my observations:
PE2950 file copy, local drive to MD: OK throughput, up to 70% of GbE, and roughly the same copying from the MD back to the local drive.
Client on the public LAN copying from a share on the PE2950's local drive: OK throughput as well.
However, when the client maps a share that actually lives on the MD, I see roughly equal utilisation on both the public-LAN and iSCSI-LAN NICs (both onboard server NICs), but performance is very bad, around 7% of GbE.
My first thought was that the PE2950 couldn't drive both NICs at the same time, since client-to-local-disk and local-disk-to-MD each tested fine on their own; but the PE2950 isn't ancient hardware either (4 GB RAM, 2.x GHz), so it can't be that bad, can it?
I also thought it might be interrupt sharing; it's been ages since I had to tweak that on a server, but I've tried tweaking it anyway and it doesn't seem to help.
Does anyone have any leads on where to look? Firmware is the latest on the MD3000i (15 x 600 GB 15K SAS), and I've also updated the SAS disk firmware.
Thanks!
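For reference, those GbE percentages translate into the following MB/s figures (a rough sketch; it assumes a 1 Gb/s line rate and ignores protocol overhead):

```python
# Rough throughput arithmetic for the figures quoted above.
# Assumes a 1 Gb/s link and ignores TCP/IP/iSCSI framing overhead.
GBE_BITS = 1_000_000_000  # GbE line rate, bits/s

def mb_per_s(fraction):
    """MB/s achieved at a given fraction of GbE line rate."""
    return fraction * GBE_BITS / 8 / 1_000_000

print(mb_per_s(0.70))  # ~87.5 MB/s: the "ok" local-to-MD copies
print(mb_per_s(0.07))  # ~8.75 MB/s: the client-via-server-to-MD case
```

So the "bad" case is roughly a tenfold drop, well below what either NIC or the disk group should sustain on its own.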
mrokkam1 - May 4th, 2010 07:00
Any chance of looking at a Wireshark trace?
gaujt - May 4th, 2010 08:00
The NICs are on different subnets (public, and the iSCSI network as per the quick-setup poster); no other hosts are connected on the iSCSI side (ProCurve 2910al-24G switch).
I haven't had a chance to look at a trace yet, but anything in particular to look out for? SYN/ACK retransmits or something? I haven't used Wireshark much, so I'm quite green there :D
CPU and memory usage looked OK, based on Task Manager's performance monitor.
JOHNADCO - May 6th, 2010 10:00
Are you using the OS's MPIO or Dell's MPIO? Whenever I'm on with a tech and I have the OS MPIO loaded, Dell rips it out and installs their own.
Myshtigo - May 6th, 2010 10:00
W2k8 is a different deal, but I'd bet it's a network port setting (full/half duplex, 100 vs 1000, etc.). If you've ruled that out, I'd look at the iSCSI MPIO setup next.
gaujt - May 7th, 2010 03:00
The OS is Win2K3 R2 x86, and come to think of it, I did have the MS iSCSI initiator installed before getting the MD3000i; I can't recall now whether MPIO was MS's or Dell's, so I'll do a clean install.
On jumbo frames: I set 9000 on the MD3000i, and I recall trying different values on the NIC (4000+ included) without much difference in results. I'll try with all parties set to the default 1500.
Event logs are clean; nothing in particular about disk or LAN errors.
So the plan is:
- check MPIO
- turn off jumbo frames along the entire path; this makes some sense since the client accessing the file server may not even be using jumbo frames
- take a Wireshark trace
- blow 2003 away and replace it with 2008 :)
Thanks again all for the help; I've got next week's work all planned out now. Have a good weekend :)
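On the jumbo-frames item above, the theoretical bandwidth gain is small; this sketch (which assumes plain Ethernet + IPv4 + TCP headers with no options, and ignores iSCSI PDU framing) shows the per-frame payload efficiency at each MTU:

```python
# Per-frame payload efficiency at standard vs jumbo MTU.
# Assumes a 14-byte Ethernet header + 4-byte FCS, 20-byte IPv4
# header, and 20-byte TCP header (no options).
ETH_OVERHEAD = 14 + 4   # Ethernet header + frame check sequence
IP_TCP = 20 + 20        # IPv4 + TCP headers

def efficiency(mtu):
    """Fraction of each on-wire frame that is application payload."""
    payload = mtu - IP_TCP
    return payload / (mtu + ETH_OVERHEAD)

print(f"MTU 1500: {efficiency(1500):.1%}")  # ~96.2%
print(f"MTU 9000: {efficiency(9000):.1%}")  # ~99.4%
```

In other words, jumbo frames mainly save per-packet CPU and interrupt load rather than raw bandwidth, while a mismatched MTU anywhere on the path can cause fragmentation or dropped frames that hurt far more than the ~3% efficiency gain helps; testing with everything at 1500 is a sensible elimination step.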
JOHNADCO - May 7th, 2010 08:00
We have two MD3000i's with attached VMware hosts and two with Windows Server attached hosts, those being a mix of 2003 and 2008. Now I will caution you some on running throughput testing: sustained throughput isn't always impressive, but the dang IOPS we are getting is darn impressive considering the low cost of these iSCSI SANs. IOPS will be the overwhelming performance factor in business server application use, i.e. Exchange, SQL, file and print, etc.
One note on the jumbo frames: we use pretty cheap GigE switches on our iSCSI SAN networks, but they do support large frames up to the 9K-ish limits. Other "better" switches may perform better with them.
With most all business server applications on SANs, you would never want to trade a better sustained throughput number for lower overall IOPS.
ecolas - June 17th, 2010 09:00
I'm experiencing the same poor performance. I have two MD3000i's, each with 8 SATA 7.2k 1 TB disks configured as RAID 6. Both are connected to a Windows 2003 x64 server (R710). I use the Broadcom internal NICs and the BACS software. I've enabled jumbo frames (MTU 9000) on both the Broadcom iSCSI side and the MD3000i. I use the Microsoft initiator, but configured with the Broadcom iSCSI adapter (which needs specific configuration).
I really don't understand what's going on. When I just copy a file from an attached disk to the MD3000i it seems fast, but when I run a defrag on a drive on the MD3000i it's a nightmare.
If anyone has a solution or some advice...
Thanks.
ecolas - June 18th, 2010 00:00
The fact is that I'm experiencing the same poor performance as a lot of people here. I've configured the iSCSI links with the dedicated (Broadcom) tools and I really don't see any difference.
Maybe someone knows how to increase performance by tuning the OS?
JOHNADCO - June 18th, 2010 08:00
Even though I don't think this is the whole issue: 8 drives, slow cruddy SATA drives, and RAID 6 may not yield results good enough for many situations.
We have a couple of Windows x86 servers connected by a single NIC to single-controller MD3000i's with 15 slow cruddy SATA drives in the disk group. This should be as slow as slow can be. I would be willing to try some operations of your choosing on it to compare numbers?
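For context on why 8 SATA drives in RAID 6 can disappoint: each small random write in RAID 6 costs roughly six disk operations (read the data block and both parity blocks, then write all three back). A back-of-envelope sketch, assuming ~75 random IOPS per 7.2k SATA spindle (an assumed typical figure, not measured on these arrays):

```python
# Back-of-envelope random small-write ceiling for a RAID 6 disk group.
DRIVE_IOPS = 75          # assumed random IOPS per 7.2k SATA spindle
RAID6_WRITE_PENALTY = 6  # read data + P + Q, then write data + P + Q

def random_write_iops(drives):
    """Rough random small-write IOPS for a RAID 6 group of `drives`."""
    return drives * DRIVE_IOPS / RAID6_WRITE_PENALTY

print(random_write_iops(8))  # ~100 IOPS for ecolas's 8-drive group
```

Controller write cache hides much of this penalty for short bursts, which is why a single large file copy can feel fast while a defrag (sustained small random I/O that overwhelms the cache) becomes a nightmare.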
ecolas - June 21st, 2010 09:00
Some news about my problem. I tested with IOmeter and got some bad results: one MD3000i gives me poor performance writing 32K blocks, and the other gives me poor performance reading 32K blocks. I checked all my configuration and found differing driver versions; I made all the drivers the same on both servers and tested again, without success. I then unchecked and re-checked the write cache in the drive properties (iSCSI drive), and that now gives good performance on the MD3000i that had been really bad at writing. So now only one of my MD3000i's is OK; on the other I still can't get good performance reading 32K blocks.
Here are some results.
First MD3000i, 32K read performance:
Total I/Os per second = 32.86
Total MB per second = 1.03
Average I/O response time (ms) = 30.4
Maximum I/O response time (ms) = 50.5
% CPU utilisation (total) = 0.43
Total error count = 0
Second MD3000i, 32K read performance:
Total I/Os per second = 1090
Total MB per second = 34
Average I/O response time (ms) = 0.9
Maximum I/O response time (ms) = 75.3
% CPU utilisation (total) = 1.54
Total error count = 0
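As a sanity check, the MB/s lines are consistent with IOPS multiplied by the 32 KB block size, so the numbers are internally coherent and the first array really is about 30x slower at reads:

```python
# Verify the IOmeter results above: MB/s should equal IOPS * block size.
BLOCK_KB = 32

def mb_per_s(iops):
    """Throughput implied by an IOPS figure at a 32 KB I/O size."""
    return iops * BLOCK_KB / 1024

print(mb_per_s(32.86))  # ~1.03 MB/s: matches the first MD3000i
print(mb_per_s(1090))   # ~34.1 MB/s: matches the second
```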
For writing there's a little difference between the two, but it's nearly the same.
Any idea?