March 4th, 2011 11:00

MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

I have two R710 servers with similar configurations. One in my office has an MD1220 attached; the other, in my hosting vendor's datacenter, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3 GB/s - dd

Seq. writes on MD3200 at my vendor's: 240 MB/s - bonnie++, 310 MB/s - dd
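
A sketch of how such tests are typically run (not my exact command lines; the mount point is an example, and file sizes are chosen at roughly 2x RAM so the page cache doesn't inflate the numbers):

    # large sequential write with dd, bypassing the page cache (192GB at 1MB blocks)
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=196608 oflag=direct
    # bonnie++ sized in MB at ~2x RAM, skipping per-char and file-creation tests
    bonnie++ -d /mnt/array -s 196608 -n 0 -f -u root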

Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the poorly performing one. I would expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with DAS performance can look at this and tell me if I'm missing something here or if my expectations are too high.

More details about the configurations:

1. A good one in my office:

Dell R710, 2x X5650 CPUs @ 2.67GHz (12 cores), 96GB DDR3. OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64

20x300GB 2.5" SAS 10K drives in a single RAID10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host

2. Not so good one at my vendor's:

Dell R710, 2x L5520 CPUs @ 2.27GHz (8 cores), 144GB DDR3. OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64

20x146GB 2.5" SAS 15K drives in a single RAID10, 512KB chunk size, on Dell MD3200 with 2 I/O controllers in the array, 1GB cache each

Additional information.

I've also run the same tests on the same vendor's host, but with different storage: two RAID10 arrays of 14x146GB 15K RPM drives each on MD3000+MD1000, striped together at the OS level. The performance was about 25% worse than on the MD3200, despite having more drives.
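
(The OS-level striping was done with LVM; a minimal sketch with illustrative device names, not my exact commands:)

    # each device below is one hardware RAID10 LUN presented by the arrays
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_data /dev/sdb /dev/sdc
    # stripe the logical volume across both LUNs with a 512KB stripe size
    lvcreate -i 2 -I 512 -l 100%FREE -n lv_data vg_data
    mkfs.ext3 /dev/vg_data/lv_data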

When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1 on a PERC 6/i), I got about 128MB/s sequential writes. Just two internal drives gave me about half the throughput of the 20 drives on the MD3200.

The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput.

Thank you for looking into it.

Regards

Igor

7 Posts

March 4th, 2011 13:00

To summarize, the question here is: is it reasonable to expect about 100MB/s of sequential write throughput per mirrored pair of drives in RAID10 on the MD3200?
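
The back-of-the-envelope math behind that expectation (the per-drive figure is a rough assumption, not a measurement):

    # RAID10 of 20 drives = 10 mirrored pairs striped together.
    # Each pair sustains roughly one drive's sequential write rate,
    # since both members write the same data.
    # Assuming ~100-130MB/s sustained sequential writes per SAS drive:
    #   10 pairs x ~110MB/s = ~1.1GB/s theoretical ceiling
    # MD1220+H800 measured: 1.1-1.3GB/s (close to that ceiling)
    # MD3200 measured:      240-310MB/s (controller-bound, not drive-bound)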

Is there any trick to enable such performance on the MD3200 with dual controllers, as opposed to the simple MD1220 with a single H800 adapter?

57 Posts

March 9th, 2011 15:00

Not sure if this will be helpful at all. I have an MD3200 and MD1200 here running as shared SAS in a VMware environment. I did some performance testing with IOMeter a while back (just before Christmas), but didn't do a sequential write performance test (I was interested only in mixed workloads). Anyway, here's a link to my results:

http://communities.vmware.com/thread/287838

I have only limited experience with the gear and have only ever used it as a back-end for VMware ESXi 4.1, never directly with Windows, so I'm not sure whether I can shed any light on your issue.

57 Posts

March 9th, 2011 15:00

A couple of observations, though:

* you mentioned that your MD3200 at your vendor's site has 20 drives? ... MD3200 only has 12 drive bays ... did you mean MD3220 or ... ?

* is the gear at the vendor's site dedicated to your server, or is it being shared with other servers?

* have you looked into issues such as partition alignment? (shouldn't be an issue with Server 2008 onwards, but might be if you're on Server 2003 or earlier)

1 Message

March 17th, 2011 02:00

I have the same problem here with the MD3200 and MD1200.

The MD1200 with H800 performs 4 times better on sequential writes (2MB block size) than the MD3200 with H200. Both were configured identically: 12x450GB SAS 15K, 6x RAID1.

The tests were run by a Dell tech on site, using a Dell standard IOMeter test.

After the case was escalated (it has been going on for almost 4 weeks now), an L3 tech from Ireland said: this is normal. The redundancy features of the MD3200 lead to this performance.

My local representative said, more or less: if this is not fast enough for you, you have bought the wrong storage. This storage is designed to serve only 2-4 hosts with workloads like database clusters.

I used it to connect to a DataCore host which serves 5 VMware servers. With a simple sdelete on one of the Windows VMs, I was able to get the MD3200 into so much trouble that I saw 6000-7000ms I/O latency, leading to disk failures on VMware.

Dell has now offered to swap the MD3200 for an MD1200 "for no additional cost". They cannot change anything on the MD3200 itself, as it is an LSI 2600 OEM unit, which is also used by IBM, FSC, and some others. All of them will have the same "feature": redundancy dearly bought at the cost of performance.

7 Posts

March 17th, 2011 13:00

FROSTYATCBM, thank you for your comment and for noticing the typo. 

Yes, I meant the MD3220 with 20 drives. I made this typo because earlier I had tested MD3000+MD1000 with 28 drives total and got the same results.

Regards

Igor Polishchuk

7 Posts

March 17th, 2011 13:00

VMADMIN,

Thank you so much for posting your answer. You confirmed all my conclusions about the performance of the MD3200 (and MD3220) array controllers.

I've tested all three: MD3000+MD1000, MD3220, and MD1220+H800. The only thing I could not get my vendor to do was escalate the issue to Dell and confirm it. Your post adds the missing link and brings me closure.

Now I can go with my known configuration of MD1220 + H800 without worrying that I was just missing some magic "slow" button on the MD3200/MD3220.

Regards

Igor Polishchuk 

57 Posts

March 17th, 2011 15:00

Thanks for that additional info.  I'm just about to go into production with my MD3200, so I hope this issue doesn't end up biting me in the backside.  Most of our traffic is mixed workloads (various servers, Exchange, a little bit of SQL).  But since I will be using the MD3200+MD1200 storage for keeping my VM backups, this might become an issue for me, as backups would possibly involve a lot of sequential writes.

2 Posts

March 28th, 2011 01:00

Hey Frosty, I noticed you're in AU also :) Probably the only two of us with an MD3200, as info is sooo hard to find.

I have a similar setup, but with only the MD3200.

Do you have your controllers in simplex or duplex? And are you using 1 or 2 HBAs on your R710s?

My setup is:

1x Dell MD3200: 10x 600GB 15K SAS drives in RAID10, simplex config (i.e. single controller).

1x Dell R710 (1x Dell 6Gb SAS HBA)
2x HP ML350 G5 (1x Dell 6Gb SAS HBA)

All tests were done with the R710; the HPs are not even connected at present.

I'm getting around 450-550MB/s sequential read and around 350-450MB/s sequential write. I was expecting 600MB/s read/write to max out the controller.

I'm trying to speak to tech support to see what kind of increase a second controller would make, and whether you need a second HBA to take advantage of it.

The Dell HBAs have 2 ports on them; I was under the impression that if you connect both cables, one to each controller on the SAN, you would get 12Gb instead of 6Gb.

Another thing I noticed is that when running a simplex configuration, if your controller dies you're in a world of hurt. Basically, if your replacement card comes with a different firmware than your current one, the array manager locks the card out from accessing the array, preventing you from even doing a firmware upgrade. Apparently you need to remove the array from the MDSM config and take out all the HDDs to be able to upgrade the firmware.

One more thing: this is for a VMware HA deployment, so DC/MX/file server/web server will be running on the SAN. Have you tried, say, giving 4 drives just to the DC/MX in RAID10 and the remaining 6 drives to all the others in RAID10? I'm wondering if it's worth separating the VMs with small random read/write I/O from the VMs with larger sequential reads/writes.

57 Posts

March 30th, 2011 20:00

Some quick responses to your questions:

* our MD3200 has 2 x controllers for redundancy

* each Dell R710 server has 2 x HBAs (each HBA has 2 x ports incidentally, but I got the extra HBAs for added redundancy)

* the cables for the MD3200 have 4 x 6Gb/sec channels in them ... if one channel is full it will go to the 2nd channel, if channels 1 & 2 are in use it will go to the 3rd, etc. ... so theoretically up to 4 x 6Gb/sec if all 4 channels in the cable were simultaneously maxed out (extremely unlikely in practice, I would think); rough math below
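
Rough numbers behind that last point, using standard SAS 2.0 figures rather than anything vendor-confirmed:

    # 1 lane  = 6 Gb/s ~= 600 MB/s usable after 8b/10b encoding
    # 1 cable = 4 lanes (wide port) = 24 Gb/s ~= 2.4 GB/s aggregate
    # so a single cable shouldn't be the bottleneck at the 240-450 MB/s
    # figures reported in this thread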

58 Posts

April 15th, 2011 21:00

If it matters any: after switching to the ext4 filesystem, I saw an increase of about 18% using dd on an 80GB file.
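
For anyone wanting to try the same: if I remember right, ext4 on RHEL 5.x is a tech preview shipped in the e4fsprogs package; the device name and mount point below are just examples:

    yum install e4fsprogs
    mkfs.ext4 /dev/mapper/mpath0p1
    mount -t ext4 /dev/mapper/mpath0p1 /data
    # rerun the same test: an 80GB sequential write, bypassing the page cache
    dd if=/dev/zero of=/data/testfile bs=1M count=81920 oflag=direct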

Also, one thing we found that Dell has not been able to resolve: with the multipath driver, if you use a benchmark utility like bonnie++ or Oracle's Orion in combination with a dd test, we can usually get the filesystem to drop into read-only mode within minutes to a few hours. The superblock becomes corrupt and you essentially have to reformat.
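
If you want to check whether you're hitting the same thing, these are the usual places to look (standard tools on RHEL; the device name is just an example):

    multipath -ll                                    # verify all paths are active
    dmesg | grep -i "read-only"                      # catch the kernel remounting the fs read-only
    tune2fs -l /dev/mapper/mpath0p1 | grep -i state  # superblock's filesystem state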

We just replaced the MD3200 with an R510, and the performance is about 1.2GB/s compared to 320MB/s on the MD3200.

1 Message

October 3rd, 2011 09:00

My guess is that you need to pay to enable the "high performance tier":

www.dell.com/.../powervault-md3200-high-performance-tier-implementation.pdf
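
If that's what it is, the key gets applied through the MDSM command-line interface; from memory the syntax is roughly this (array name and key file path are placeholders):

    # apply a premium feature key file to the array
    SMcli -n MyArray -c "enable storageArray feature file=\"/path/to/hpt.key\";"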
