ora4dba
March 4th, 2011 13:00
To summarize, the question here is: is it reasonable to expect about 100MB/s of sequential write throughput per mirrored pair of drives in RAID 10 on the MD3200?
Is there any trick to enable such performance on an MD3200 with dual controllers, as opposed to a simple MD1220 with a single H800 adapter?
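For context, here's the rough arithmetic behind that expectation, as a sketch (the ~100MB/s per spindle is an assumed ballpark for a streaming 15k SAS drive, not a measured figure):

```shell
#!/bin/sh
# Back-of-envelope: sequential write throughput of a RAID 10 set.
# RAID 10 mirrors pairs of drives, so only half the spindles add
# write bandwidth (every write lands on both halves of a mirror).
DRIVES=10                 # spindles in the RAID 10 set
PER_DRIVE_MBS=100         # assumed streaming rate of one 15k SAS drive
PAIRS=$(( DRIVES / 2 ))
EXPECTED=$(( PAIRS * PER_DRIVE_MBS ))
echo "Expected sequential write: ~${EXPECTED} MB/s across ${PAIRS} mirror pairs"
```

So a 10-drive RAID 10 "should" stream somewhere near 500MB/s of writes if nothing in the controller gets in the way, which is exactly what the MD3200 numbers below fail to reach.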
FrostyAtCBM
March 9th, 2011 15:00
Not sure whether this will be helpful at all. I have an MD3200 and MD1200 here running as shared SAS in a VMware environment. I did some performance testing with IOmeter a while back (just before Christmas) but didn't do a sequential write test (I was interested only in mixed workloads) ... anyway, here's a link to my results:
http://communities.vmware.com/thread/287838
I have only limited experience with the gear and have only ever used it as a back-end for VMware ESXi 4.1, never directly with Windows, so I'm not sure whether I can shed any light on your issue.
FrostyAtCBM
March 9th, 2011 15:00
A couple of observations, though:
* you mentioned that the MD3200 at your vendor's site has 20 drives? The MD3200 only has 12 drive bays ... did you mean an MD3220, or ... ?
* is the gear at the vendor's site dedicated to your server, or is it shared with other servers?
* have you looked into issues such as partition alignment? (It shouldn't be an issue from Server 2008 onwards, but might be if you're on Server 2003 or earlier.)
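On the alignment point, a minimal sketch of the check (the offsets below are the classic examples: older Windows started the first partition at 63 sectors, Server 2008+ at 1 MiB; read your real offset with e.g. `wmic partition get Name,StartingOffset` on Windows):

```shell
#!/bin/sh
# Minimal alignment check: older Windows (XP/2003) started the first
# partition at offset 32256 (63 * 512 bytes), which straddles RAID
# stripe boundaries; Server 2008+ uses 1048576 (1 MiB), which does not.
check_alignment() {
  offset=$1   # partition starting offset, in bytes
  stripe=$2   # array stripe (segment) size, in bytes
  if [ $(( offset % stripe )) -eq 0 ]; then
    echo "aligned"
  else
    echo "misaligned"
  fi
}
check_alignment 32256 65536     # legacy 2003-style offset vs 64KiB stripe
check_alignment 1048576 65536   # 2008-style 1 MiB offset vs 64KiB stripe
```

A misaligned partition turns some single-stripe writes into two-stripe writes, which hurts sequential throughput noticeably on parity or mirrored arrays.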
vmadmin
March 17th, 2011 02:00
I have the same problem here with an MD3200 and MD1200.
The MD1200 with an H800 performs 4 times better on sequential writes (2MB block size) than the MD3200 with an H200. Both were configured identically: 12x 450GB 15k SAS drives, 6x RAID 1.
The tests were run by a Dell tech on site, using Dell's standard IOmeter test.
After the case was escalated (it has been going for almost 4 weeks now), an L3 tech from Ireland said this is normal: the redundancy features of the MD3200 lead to this performance.
My local representative said, more or less: if this is not fast enough for you, you have bought the wrong storage. The array is designed to serve only 2-4 hosts with workloads like database clusters.
I used it to connect to a DataCore host which serves 5 VMware servers. With a simple sdelete inside one of the Windows VMs, I was able to get the MD3200 into so much trouble that I saw 6000-7000ms I/O latency, leading to disk failures in VMware.
Dell has now offered to swap the MD3200 for an MD1200 "at no additional cost". They cannot change anything on the MD3200 itself, as it is an LSI 2600 OEM unit, which is also rebadged by IBM, FSC and others... All of them will have the same "feature": redundancy dearly bought at the cost of performance.
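For anyone wanting to reproduce that kind of load outside a Windows guest: sdelete's zero-fill is essentially one big sequential write. A rough dd equivalent, as a sketch (the target path and size here are placeholders; point it at a file on the array under test and scale the size up well past the controller cache):

```shell
#!/bin/sh
# Approximate the sequential-write load of "sdelete -z" by streaming
# zeros at a large block size. TARGET should live on the array under
# test; 64 x 2MiB = 128MiB here, far too small for a real benchmark.
TARGET=${TARGET:-/tmp/seqwrite.bin}
dd if=/dev/zero of="$TARGET" bs=2M count=64 conv=fsync
SIZE=$(wc -c < "$TARGET")
echo "wrote ${SIZE} bytes sequentially to ${TARGET}"
rm -f "$TARGET"
```

The dd stats line on stderr gives the throughput; watching array latency while this runs is what exposes the behaviour described above.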
ora4dba
March 17th, 2011 13:00
FROSTYATCBM, thank you for your comment and for noticing the typo.
Yes, I meant an MD3220 with 20 drives. However, I made this typo because I had previously tested an MD3000+MD1000 with 28 drives total and had the same results.
Regards
Igor Polishchuk
ora4dba
March 17th, 2011 13:00
VMADMIN,
Thank you so much for posting your answer. You confirmed all my conclusions about the performance of the MD3200 (and MD3220) array controllers.
I've tested all three: MD3000+MD1000, MD3220, and MD1220+H800. The only thing I could not get my vendor to do was escalate it to Dell and confirm the issue. Your post adds the missing link and brings me closure.
Now I can go with my known configuration of MD1220 + H800 without worrying that I was just missing some magic "slow" button on the MD3200/MD3220.
Regards
Igor Polishchuk
FrostyAtCBM
March 17th, 2011 15:00
Thanks for that additional info. I'm just about to go into production with my MD3200, so I hope this issue doesn't end up biting me in the backside. Most of our traffic is mixed workloads (various servers, Exchange, a little bit of SQL). But since I will be using the MD3200+MD1200 storage for keeping my VM backups, this might become an issue for me, as backups would possibly involve a lot of sequential writes.
Pingu87
March 28th, 2011 01:00
Hey Frosty, I noticed you're in AU also. Probably the only two of us with an MD3200, as info is sooo hard to find.
I have a similar setup, but with just the MD3200.
Do you have your controllers in simplex or duplex? And are you using 1 or 2 HBAs in your R710s?
My setup is:
1x Dell MD3200: 10x 600GB 15k SAS drives in RAID 10, simplex config (i.e. a single controller).
1x Dell R710 (1x Dell 6Gb SAS HBA)
2x HP ML350 G5 (1x Dell 6Gb SAS HBA each)
All tests were done from the R710; the HPs are not even connected at present.
I'm getting around 450-550MB/s sequential read and around 350-450MB/s sequential write. I was expecting 600MB/s read/write to max out the controller.
I'm trying to speak to tech support to find out what kind of increase a second controller would make, and whether you need a second HBA to take advantage of it.
The Dell HBAs have 2 ports on them; I was under the impression that if you connect both cables, one to each controller on the SAN, you would get 12Gb instead of 6Gb.
Another thing I noticed: when running a simplex configuration, if your controller dies you're in a world of hurt. If the replacement card comes with different firmware to your current one, the array manager locks the card out from accessing the array, preventing you from even doing a firmware upgrade. Apparently you need to remove the array from the MDSM config and take out all the HDDs to be able to upgrade the firmware.
One more thing: this is for a VMware HA deployment, so the DC/MX/file server/web server will all be running on the SAN. Have you tried giving, say, 4 drives just to the DC/MX in RAID 10, and the remaining 6 drives to everything else in RAID 10? I'm wondering if it's worth separating the small random read/write I/O VMs from the larger sequential read/write VMs.
FrostyAtCBM
March 30th, 2011 20:00
Some quick responses to your questions:
* our MD3200 has 2 x controllers for redundancy
* each Dell R710 server has 2 x HBAs (each HBA has 2 x ports, incidentally, but I got the extra HBAs for added redundancy)
* the cables for the MD3200 have 4 x 6Gb/sec channels in them ... if one channel is full, traffic goes to the 2nd channel; if both 1 & 2 are in use, it goes to the 3rd, etc. So theoretically up to 4 x 6Gb/sec if all 4 channels in the cable were simultaneously maxed out (extremely unlikely in practice, I would think)
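The arithmetic on those 4 channels, as a rough sketch (assuming the usual rule of thumb that a 6Gb/sec SAS lane yields about 600MB/sec of payload after 8b/10b encoding; protocol overhead is ignored):

```shell
#!/bin/sh
# Rough usable bandwidth of a 4-lane wide-port SAS cable.
# 6 Gb/s line rate with 8b/10b encoding leaves ~600 MB/s of payload
# per lane; the MD3200 host cable carries 4 such lanes.
LANES=4
PER_LANE_MBS=600
TOTAL=$(( LANES * PER_LANE_MBS ))
echo "One lane:   ~${PER_LANE_MBS} MB/s"
echo "Full cable: ~${TOTAL} MB/s (all ${LANES} lanes saturated)"
```

That ~600MB/s single-lane figure also lines up with the expectation earlier in the thread of maxing out around 600MB/s through one path.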
npolite
April 15th, 2011 21:00
If it matters any: after switching to the ext4 filesystem, I saw an increase of about 18% using dd on an 80GB file.
Also, one thing we found that Dell has not been able to resolve: with the multipath driver, if you use a benchmarking utility like bonnie or Oracle's Orion in combination with a dd test, we can usually get the filesystem to drop into read-only mode within minutes to a few hours. The superblock becomes corrupt and you essentially have to reformat.
We just replaced the MD3200 with an R510, and performance is about 1.2GB/sec compared to 320MB/sec on the MD3200.
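On the read-only drop: a quick way to spot when a filesystem has been kicked into read-only mode during a test run, as a sketch (the mount point is a placeholder and the `mount_state` helper is just for illustration; it parses the options column of /proc/mounts):

```shell
#!/bin/sh
# Report whether a mounted filesystem is currently read-only, the
# failure mode described above. Wrap this in a watch loop during a
# benchmark to catch the moment the kernel remounts it ro.
mount_state() {
  # $1 is a comma-separated mount-option string, e.g. "rw,relatime"
  case ",$1," in
    *,ro,*) echo "read-only" ;;
    *)      echo "read-write" ;;
  esac
}
MOUNTPOINT=${MOUNTPOINT:-/}
OPTS=$(awk -v m="$MOUNTPOINT" '$2 == m { print $4; exit }' /proc/mounts 2>/dev/null)
echo "$MOUNTPOINT is mounted $(mount_state "$OPTS")"
```

Catching the transition early at least lets you stop the run before the benchmark piles more I/O errors onto an already-corrupt superblock.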
jjones_accela
October 3rd, 2011 09:00
My guess is that you need to pay to enable the "high performance tier":
www.dell.com/.../powervault-md3200-high-performance-tier-implementation.pdf