Unsolved
jebersole
9 Posts
0
November 17th, 2011 06:00
MD3220i with VMWare - Sequential Read Performance and IOMeter
Hello,
I am trying to benchmark my configuration before going into production. I am using an MD3220i with multipathing under VMware, and before I provide a lot of configuration specifics, let me ask whether anyone else has used IOMeter to test MD3220i performance. Comparing my results against others shared in a VMware forum thread (using the same test profile), there seems to be an issue with MD3220i sequential READ performance:
| Test | IOps | MBps | Avg. Response Time (ms) |
| Max Throughput-100%Read | 370 | 11.58 | 286.19 |
| RealLife-60%Rand-65%Read | 3625 | 28.32 | 23.786 |
| Max Throughput-50%Read | 6057 | 189.30 | 16.406 |
| Random-8k-70%Read | 3437 | 26.85 | 24.999 |
The other tests seem in line with results from storage arrays of a similar configuration, except for Max Throughput-100%Read, so I don't suspect anything glaringly wrong with my configuration. RAID levels of the disk groups and spindle counts within each group will affect the numbers, but the Max Throughput-100%Read result seems way off. The numbers for sequential read should be higher than anything else, right?
Thanks, Jason


JOHNADCO
2 Intern
847 Posts
0
November 17th, 2011 07:00
Check this out?
communities.vmware.com/.../1780079
Cut and pasted from post 305 at the above link:
My company has spent a significant amount of time on iSCSI / VMWare benchmarks over the last few months.
Using a Dell R710 connected to an MD3220i with 500GB 7.2K drives in the first shelf and 600GB 15K drives in the second shelf. We originally only had the 7.2K drives and configured them with RAID6 and RAID10 (equal number of disks). Max Throughput-100%Read and Random-8k-70%Read tests were the same - approximately 128 MB/sec and 135 MB/sec throughput respectively, even with round robin configured.
RAID6 on these drives gave RealLife-60%Rand-65%Read and Random-8k-70%Read throughput of about 8.7 and 8.8 MB/sec; RAID10 was 17 and 15 MB/sec. RAID10 on the 15K drives was basically double the 7.2K at 31 and 33 MB/sec (we did briefly see 37 and 42 but were unable to repeat it).
The most interesting thing we discovered was that the IOPS setting needs to be optimized for this array when using round robin. The command is esxcli nmp roundrobin setconfig --type "iops" --iops=3 --device (your LUN ID). Once this command was run against our LUN, the Max Throughput-100%Read and Random-8k-70%Read tests hit the limit of the NICs; with 3 x 1Gbit NICs we get over 300 and 315 MB/sec.
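For anyone wanting to try the same tuning, here is a rough command-line sketch. The device ID (naa.xxxxxxxx) is a placeholder for your own LUN, and the esxcli storage nmp forms are what I understand to be the equivalent syntax on ESXi 5.x (the command quoted above is the 4.x form):

# Rough sketch only; naa.xxxxxxxx is a placeholder for your LUN's device ID.
# ESX/ESXi 4.x syntax (as quoted above):
esxcli nmp roundrobin setconfig --type "iops" --iops=3 --device naa.xxxxxxxx
# ESXi 5.x equivalents: make sure the device uses round robin, then lower the IOPS limit:
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxx --type iops --iops 3
# Verify the device's current path selection policy and IOPS setting:
esxcli storage nmp device list --device naa.xxxxxxxx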
jebersole
9 Posts
0
November 17th, 2011 08:00
Yes. I've seen this, and I did have IOPS Policy set to 3 for my testing.
jebersole
9 Posts
1
November 18th, 2011 05:00
Update! DO NOT use hardware iSCSI initiators on Broadcom 5709 NICs, at least when also using the Dell MD3220i. I'm not sure how the poor benchmark numbers from before would have affected real-world traffic, but after switching to VMware's software iSCSI HBA, here are the results:
VM TYPE: VM Server 2003 x86, 2 vCPU, 2GB RAM
HOST TYPE: ESXi 5.0, HP DL380 G7, 48 GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell MD3220i / 10 x 10k 600GB SAS / RAID10
STORAGE NETWORK / PROTOCOL: iSCSI w/ 2 active ports into storage
Round Robin and IOPS Policy set to 3
LUN = 500 GB / vmdk = 20GB
Two workers, both pointing to the same drive.
| Test | IOps | MBps | Avg. Response Time (ms) | % CPU Utilization |
| Max Throughput-100%Read | 6801.31 | 212.57 | 15.93 | 27.91 |
| RealLife-60%Rand-65%Read | 4047.01 | 31.61 | 21.49 | 35.99 |
| Max Throughput-50%Read | 6887.24 | 215.23 | 15.55 | 27.58 |
| Random-8k-70%Read | 3482.28 | 27.205 | 23.34 | 38.82 |
That's better I think...
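For anyone making the same switch from the command line instead of the GUI, the software initiator and port binding steps look roughly like this. This is only a sketch: the adapter name (vmhba33), vmkernel ports (vmk1/vmk2), and the array portal IP are example values, not my exact setup.

# Sketch only -- vmhba33, vmk1/vmk2, and the portal IP below are example names/addresses.
# Enable the VMware software iSCSI initiator:
esxcli iscsi software set --enabled=true
# Bind the iSCSI vmkernel ports to the software adapter (the GUI "port binding" step):
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
# Add the MD3220i as a dynamic discovery target and rescan:
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 192.168.130.101
esxcli storage core adapter rescan --adapter vmhba33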
JOHNADCO
2 Intern
847 Posts
0
November 18th, 2011 14:00
Ain't that the truth... The Broadcom TOE plain stinks indeed. I should ask, but I seem to always assume software initiators are being used.
jebersole
9 Posts
0
November 19th, 2011 08:00
It was just so easy to set up in vSphere 5 (you can bind NICs to iSCSI in the GUI now), and the only thing I had read about using HW initiators was not to use jumbo frames with Broadcom. Everything seemed to be working just fine, except the skewed benchmark numbers kept me from proceeding. I'm glad I dug deeper and found the problem. Note: the install guide for the MD3220i does include specifics on configuring SW initiators, not HW. I wasted two days, but lesson learned...
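One extra sanity check if you do end up running jumbo frames on the software initiator: confirm the end-to-end MTU from the host before trusting any benchmark numbers. The array IP below is just an example address, not from my setup.

# Example only -- substitute your own iSCSI vmkernel ports and array portal IP.
esxcfg-vmknic -l                      # confirm the iSCSI vmkernel ports report MTU 9000
vmkping -d -s 8972 192.168.130.101    # -d = do not fragment; 8972 = 9000 minus IP/ICMP headers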