
November 10th, 2012 21:00

MD3620i Slow speeds?

Hi All,

So I just set up a new MD3620i directly attached to a VMware host using an Intel X540-T2 10GbE adapter and Cat6a cabling. I set up a disk group with 12x 900GB 10K RPM disks in RAID 10, created two virtual disks of equal size, and assigned each to its own controller. I then set up a VMFS5 datastore spanning both virtual disks and put a Windows Server test VM on it.

I ran some benchmarks from within the VM and was kind of disappointed with the numbers. Sequential reads were about 400mbps and writes were 225mbps; random 4K reads were 2mbps and writes 9mbps. This was done using CrystalDiskMark with three passes of 1000MB each. I deleted the datastore and created a 12-disk RAID 0 array, and the numbers were pretty much unchanged. A little disheartening, as an internal RAID 5 array on the same host made up of 6 identical disks blows those numbers out of the water. ATTO shows 18/33mbps write/read speeds at 4K and 400/700mbps at the larger transfer sizes.

The connection is set to round robin and jumbo frames are off. Any ideas? Dell's literature shows this unit getting 1400mbps in some tests. Thank you.
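For reference, this is roughly how the path policy is set on the host (the naa device ID below is just a placeholder for my actual LUN):

    esxcli storage nmp device list
    esxcli storage nmp device set --device naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR

I understand the round-robin IOPS limit can also be lowered from its default of 1000 so paths alternate more often, e.g.:

    esxcli storage nmp psp roundrobin deviceconfig set --device naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --type iops --iops 1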

685 Posts

November 12th, 2012 11:00

There are a number of things that can cause performance issues like this. First, I am including a white paper for you that lists some best practices for IP SANs. I know it mentions the MD32XX, but the same applies to the MD36XX that you are running.

www.dell.com/.../ip-san-best-practices-en.pdf

If, after looking through that, everything is configured properly, then we may need to pull a support bundle so I can look through it and make sure there is nothing wrong with the array itself that could be causing the performance hit. I would also recommend downloading and running IOMeter. You can find info on installing and running it at:

http://www.iometer.org/

Once we have all of that, it will help us determine where the bottleneck is and what we can do to resolve the issue. Please let me know if you have any questions or concerns, as I would be happy to assist.
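If you want to repeat runs consistently, IOMeter can also be launched from the command line against a saved configuration file (the file names here are just examples):

    IOmeter.exe /c 4k-random.icf /r results.csv

That makes it easy to rerun the exact same access specification against both the internal array and the SAN for a fair comparison.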

2 Posts

November 19th, 2012 19:00

Kenny, thank you for your response. I looked at the best practices guide, but since our unit is directly attached, not much applied to our situation. I've been using IOMeter to benchmark our 12-disk RAID 10 versus an internal 6-disk RAID 5 on an H710P, using identical 10K 900GB disks on R720s, and the internal storage outperforms the MD3620i in every test. In the 4K read and write tests we're seeing 30-35MBps on the internal storage versus 20MBps read and 12MBps write on the SAN. On the 32K read we see 210MBps on the internal storage and only 70MBps on the SAN. As you can imagine, we are scratching our heads here, as our team has spent about two weeks trying to figure this issue out. Any help would be greatly appreciated.

685 Posts

November 20th, 2012 09:00

I looked into the setup you are running and the speeds that you mentioned. The first thing that jumps out is your internal array outperforming the external array. That is normal, as an internal array will almost always be faster than moving data back and forth to an external array.

One thing that would help speed up your external array is to add a switch, because of MPIO. With a switch, MPIO allows you to use multiple paths going to the array, which in turn moves data at a faster rate. It is recommended to have at least two NICs configured, going to a switch and then to the array. When you have multiple NICs configured, you need to make sure they are NOT teamed, as teaming iSCSI NICs is a very bad idea.

Enabling jumbo frames end to end in your network would also increase performance; see the sketch below. I hope this helps. Please let me know if you have any other questions.
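As a rough sketch on the ESXi side (the adapter, vmkernel ports, vSwitch name, and target IP below are placeholders for your environment), binding an additional vmkernel port to the software iSCSI adapter and raising the MTU would look something like:

    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
    esxcli network ip interface set --interface-name vmk1 --mtu 9000
    vmkping -d -s 8972 192.168.130.101

The vmkping with the don't-fragment flag and an 8972-byte payload verifies that jumbo frames actually pass end to end; the array ports and any switch in the path must also be set to a 9000 MTU, or performance can get worse rather than better.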

4 Posts

January 29th, 2013 13:00

The original poster appears to have the terminology incorrect, assuming he meant 400MB/s and not 400mbps.

This is the sort of speed I have also seen on an MD3620i, and it appears the Dell benchmark results (propaganda) are completely misleading (lies).
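For anyone comparing the numbers: 1 MB/s = 8 Mbps, so 400 MB/s works out to about 3.2 Gbps, while a single 10GbE link tops out at 10 Gbps, or roughly 1250 MB/s before protocol overhead. If Dell's quoted 1400 figure is also MB/s, it would require more than one 10GbE path to the array.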

1 Message

July 9th, 2013 06:00

Hi, did you solve your problem?

We have the same problem: very slow MD3620i storage over a 4x10G network.

We are running ESXi 5.1 from Dell.

I think it is a problem with the Dell hardware.
