
1 Rookie • 63 Posts • May 4th, 2011 07:00

IOMeter Performance Stats?

Has anyone done performance stats on the VNXe3100 yet? 

I am doing an initial config and would like to know what kind of performance you are seeing.

Best Regards.

2 Posts • July 20th, 2011 15:00

I ran through the IOMeter tests from the vmktree ISO; below are the results from an NFS share.  The first set of results is with the NFS share cached, which is the wrong behavior.  The second set is after the MR1 install with the cache turned off.  On the Max Throughput-50% Read test it is now saturating the 1 Gb connection.  My numbers are not overly scientific, given that 20 VMs were running on the 3100 while I ran the test, but the performance difference is substantial for those of you fighting with NFS performance.

Test name                  Avg latency (ms)  Avg IOPS  Avg MBps  CPU load

Before MR1 (NFS share cached):

Max Throughput-100%Read          16.07          3694       115       3%
RealLife-60%Rand-65%Read        128.02           435         3      10%
Max Throughput-50%Read           25.20          2352        73       2%
Random-8k-70%Read               150.09           362         2      10%

After MR1 (cache off):

Max Throughput-100%Read          16.47          3596       112       4%
RealLife-60%Rand-65%Read         29.68          1882        14      18%
Max Throughput-50%Read           13.16          4505       140       2%
Random-8k-70%Read                30.74          1783        13      28%
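
To put a number on "substantial", here is a quick sketch in plain Python (all figures copied from the table above) comparing the two runs:

```python
# Ratio of the post-MR1 run (cache off) to the pre-MR1 run (share
# cached) for each test pattern; figures copied from the table above.
tests = {
    # name: (iops_before, iops_after, latency_before_ms, latency_after_ms)
    "Max Throughput-100%Read":  (3694, 3596, 16.07, 16.47),
    "RealLife-60%Rand-65%Read": (435, 1882, 128.02, 29.68),
    "Max Throughput-50%Read":   (2352, 4505, 25.20, 13.16),
    "Random-8k-70%Read":        (362, 1783, 150.09, 30.74),
}

for name, (iops_before, iops_after, lat_before, lat_after) in tests.items():
    print(f"{name}: {iops_after / iops_before:.1f}x IOPS, "
          f"latency {lat_before:.0f} ms -> {lat_after:.0f} ms")
```

The two random-heavy patterns come out at roughly 4-5x the IOPS with latency down from ~130-150 ms to ~30 ms.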

11 Legend • 20.4K Posts • 87.4K Points • May 7th, 2011 05:00

Hmm... response times are not great.

1 Rookie • 106 Posts • May 7th, 2011 05:00

Hi,

these are my IOMeter test results.

VNXe config:

VNXe 3100 (2 SPs) + I/O module with 4 x 1 Gb/s Ethernet ports

RAID 5: 4+1 SAS 15K 600 GB

RAID 6: 4+2 NL-SAS 1 TB

Hot spare: 1 SAS 15K 600 GB

Networking:

1 LACP group: eth2 + eth3

1 LACP group: eth10 + eth11 + eth12 + eth13

Tested on a VMFS volume:

1 iSCSI 500 GB VMFS datastore on the 600 GB 15K group

IOMeter run in a Windows 2003 32-bit VM with 2 GB RAM, on an X5660 CPU

Test: 100% read, 100% sequential, 32 KB

Result:

IOPS: 3450

Response: 19 ms

Throughput: 107 MB/s

Test: 65% read, 40% sequential, 8 KB

Result:

IOPS: 1100

Response: 50 ms

Throughput: 9 MB/s
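
For context on numbers like these, a back-of-envelope sketch of what a 4+1 RAID-5 group of 15K drives can sustain on a mixed workload. The per-drive IOPS figure and the write penalty below are rules of thumb, not measured values:

```python
# Back-of-envelope IOPS estimate for a 4+1 RAID-5 group of 15K SAS
# drives on a 65% read workload. Both constants are assumed rules
# of thumb, not measured values.
drives = 5               # 4+1 RAID 5
iops_per_drive = 180     # typical figure quoted for a 15K SAS drive
read_fraction = 0.65
write_penalty = 4        # RAID-5 small-write penalty (4 back-end ops per write)

raw_iops = drives * iops_per_drive
effective_iops = raw_iops / (read_fraction + (1 - read_fraction) * write_penalty)
print(f"raw: {raw_iops} IOPS, effective: ~{effective_iops:.0f} IOPS")
# raw: 900 IOPS, effective: ~439 IOPS -- controller cache and the 40%
# sequential component explain results landing above this floor.
```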

1 Rookie • 63 Posts • May 8th, 2011 21:00

Thanks Matteo.

Here is what I got.

I am using a 500 GB NFS share on a RAID 5 (4+1) group of 300 GB SAS drives.  The VM is Windows Server 2003 R2 with 4 GB of memory.

32 KB, 100% read, 100% sequential

     3174 IOPS

     100 MBps throughput

     Avg I/O resp: 5.05 ms

     Max I/O resp: 169 ms

8 KB, 65% read, 40% sequential

     382 IOPS

     3 MBps throughput

     Avg I/O resp: 42 ms

     Max I/O resp: 331 ms

Anyone else like to share?

Best Regards.

5 Posts • May 16th, 2011 14:00

Am I confused, or is that throughput really low? (3 MBps?)

What should I be expecting? I'm getting pretty bad IOPS myself and can't figure out the issue.

I'm using VMware, and even if I use a client inside VMware it's pretty bad.
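
Worth noting: the 3 MBps figure is low, but it is internally consistent with the small block size. Throughput is just IOPS times block size, as this quick check against the numbers above shows:

```python
# MBps = IOPS * block size. At 8 KB per I/O, a few hundred IOPS can
# only ever produce a few MBps; the number to chase is the IOPS
# (or equivalently the latency), not the throughput.
def throughput_mbps(iops, block_kb):
    return iops * block_kb / 1024

print(throughput_mbps(382, 8))    # ~3 MBps  (the 8K NFS result above)
print(throughput_mbps(3174, 32))  # ~99 MBps (the 32K sequential result)
```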

1 Rookie • 63 Posts • May 16th, 2011 21:00

odge,

I thought it was just me.

I was planning on rolling out the VNXe and my new hosts this month, but until I can get some decent numbers out of this thing or figure out what I am doing wrong, it's just an expensive space heater.  I am working with EMC tech support, but so far I have not seen any performance improvement.  Even simple file copies within the VM seem slow.  For example, I went to a web share on my network that has a bunch of installation files on it.  I tried to copy a file of about 8 MB from the network share to the desktop of my VM; the transfer rate peaked at 200 KBps before settling at 150 KBps.  It took 46 seconds to copy this one file from a physical machine to the virtual desktop.
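
If you want to reproduce that copy test in a repeatable way, here is a minimal sketch; both paths are placeholders for your own share and VM:

```python
# Minimal sketch to time a file copy inside the VM and report the
# effective rate. Both paths are hypothetical placeholders.
import os
import shutil
import time

src = r"\\fileserver\share\installer.bin"  # hypothetical network share path
dst = r"C:\temp\installer.bin"             # hypothetical local destination

start = time.monotonic()
shutil.copyfile(src, dst)
elapsed = time.monotonic() - start

size_kb = os.path.getsize(dst) / 1024
print(f"{size_kb:.0f} KB in {elapsed:.1f} s = {size_kb / elapsed:.0f} KB/s")
# For reference, 8 MB in 46 s works out to ~178 KB/s, which matches
# the 150-200 KBps observed above.
```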

I am no storage expert, which is why I purchased the VNXe in the first place, so I'm sure I am doing something wrong, but I have not figured out what.  If I find out anything from tech support, I will post it here.

Did you run your own IOmeter stats?  If so, what is your setup and what were your results?

Best Regards.

1 Rookie • 106 Posts • May 17th, 2011 00:00

Hi all.

My VNXe implementation runs without "glory".  I've implemented it for 15 VMs (40 users).  All services are virtualized (Exchange 2007, DCs, MS CRM, ...).  During the daily workload everything runs without problems, but when I use a massive robocopy, Storage vMotion, or anything else that is disk-intensive, the VNXe performance falls off.  The read and write latencies (seen in vCenter) are really bad, around 600-700 ms.

I've just opened an SR, but they told me that the config is all OK and there is no problem.  I'm quite unsatisfied with the box.  I think this type of storage MUST be configured with many disks... the best approach is to buy it with 12 SAS 15K disks, to fill the first DAE.

Bye

Matteo

2 Intern • 727 Posts • May 17th, 2011 06:00

Matteo,

Can you share the SR number that you filed for this performance issue?

1 Rookie • 106 Posts • May 17th, 2011 06:00

SR 40598094

bye

Matteo

2 Intern • 727 Posts • May 17th, 2011 06:00

pdkilian - do you have an SR number that you can share with us?

1 Rookie • 63 Posts • May 17th, 2011 07:00

Mine is 40754994.

Best Regards.

1 Rookie • 63 Posts • May 19th, 2011 10:00

I thought it would be interesting to check the IOMeter performance of a datastore on the local drives.  I have four 73 GB 2.5-inch 10K SAS drives in RAID 5 in a Dell PE 1950 II, which is also my ESXi 4.1 host.  The server has a quad-core E5335 @ 2 GHz with 16 GB of memory.  I am running IOMeter on a Windows 2000 Server VM.

Results:

8 KB transfer size, 40% sequential, 65% read, 16 outstanding I/Os

     530 IOPS

     4.14 MBps throughput

     Avg I/O resp: 30.2 ms

     Max I/O resp: 1503 ms

32 KB transfer size, 100% sequential, 100% read, 16 outstanding I/Os

     2306 IOPS

     72.9 MBps throughput

     Avg I/O resp: 7 ms

     Max I/O resp: 63 ms

I'm not sure what to make of this.  In the max-throughput test the VNXe shows better performance than the local DS, but in the real-world test the local DS, with slower and fewer drives, performs 30% to 40% better.
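
Putting exact numbers on that comparison, using the real-world-pattern results here and the NFS results earlier in the thread:

```python
# Local PE 1950 datastore vs. the VNXe NFS results earlier in the
# thread, on the 8K 65%-read pattern. Figures copied from the posts.
local_iops, vnxe_iops = 530, 382
local_lat_ms, vnxe_lat_ms = 30.2, 42.0

print(f"IOPS: local is {100 * (local_iops / vnxe_iops - 1):.0f}% higher")
print(f"Latency: local is {100 * (1 - local_lat_ms / vnxe_lat_ms):.0f}% lower")
# -> ~39% higher IOPS, ~28% lower latency
```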

2 Intern • 727 Posts • May 23rd, 2011 09:00

Your EMC rep should have access to VNXe data released by our performance guys.  You can contact them for details.


1 Rookie • 63 Posts • June 6th, 2011 17:00

OK, I've given up on NFS.  No matter what I tried, I kept getting dismal results.

I created a test iSCSI datastore and moved my VM from the NFS datastore to the iSCSI datastore.  I reran the exact same test and the performance improved dramatically: about a five-fold increase in IOPS and MBps, with a corresponding decrease in the average I/O response time.  This is without any tuning or link aggregation.

65% read, 40% sequential, 16 outstanding I/Os, 8 KB size

Total IOPS: 1961

Total MBps: 15.32

Avg I/O resp: 8.1 ms

Max I/O resp: 188 ms
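
As a sanity check that these results are self-consistent: with a fixed number of outstanding I/Os, Little's Law says IOPS is roughly the outstanding I/Os divided by the average latency, so the five-fold IOPS gain is exactly the five-fold latency drop. A quick sketch against the numbers in this thread:

```python
# Little's Law: IOPS ~= outstanding I/Os / average service time.
# Both test runs used 16 outstanding I/Os.
outstanding = 16

def expected_iops(avg_latency_ms):
    return outstanding / (avg_latency_ms / 1000.0)

print(expected_iops(8.1))   # ~1975 vs 1961 measured on iSCSI here
print(expected_iops(42.0))  # ~381  vs 382 measured on NFS earlier
```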

I was hoping to use NFS for its flexibility and simplicity, but as things stand there is no way to get the performance out of it that I was expecting.  I have sent a service request to EMC to let them try to resolve the NFS issue, and if I hear anything back I will post it here.

Best Regards.
