sakacc

Are you a customer using DART to host an NFS datastore?

This could be on an EMC Celerra, EMC Unified, or EMC VNX platform.   We think we've developed something that might be an awesome performance improvement, but the truest test would be customer feedback.

This is very "hot off the presses" from engineering, and is before the formal beta phase. 

This is EXPERIMENTAL CODE - and not supported in production!!!   Do not deploy on production systems.

We would love informal feedback (but as formal as possible - including your before/after experiences).    To encourage people to do this (but again, only on non-production environments!!), I will kick start another contest.

For the first 30 people who provide data that shows the "before/after" comparison of this new code, I will provide a 16GB Wi-Fi iPad.

So – how do you get it?

  1. For customers who would like to try the fix, here is the process (this is the preferred process, as it formalizes feedback):
    1. The e-patch is available to the tech support group, which is the usual method by which we release code.
    2. The e-patch is called 6.0.40.805.  If you wish to obtain and test with it, please open an SR with your local service representative and have them work with tech support to get the 6.0.40.805 e-patch so that you can schedule your upgrades.
    3. Please provide any feedback based on the experience that results from this patch.  Negative or positive, we'd like to hear it.
  2. For non-customers (EMC Partners/employees) who would like to try the fix, the experimental DART e-patch is here, with the MD5 here, and the release notes here.

Please use the EMC/VMware PowerShell tools from this earlier post to measure the effect: https://community.emc.com/thread/117654?start=0&tstart=0, but ALSO please capture your IOmeter results (see below).

Understanding this a little more:

  • The optimization is for the NAS write path – it has shown very large performance improvements in some tests with very small, random IO workloads on NFS datastores.
  • Multi-VM testing is a must if you want to provide data. We know it's good with a single VM (4x better latency), but the way it's better is by reducing the serialization of writes through the NAS stack to the back-end disk devices (which produces a lot less latency). The main questions are: a) how well it holds up with random, multi-VM workloads; b) whether performance regresses with other workloads (large-IO sequential guest workloads).
  • I can't say it enough - it is experimental.  This means: Don't use it in production – PERIOD.   If you have a non-production Celerra/VNX use case, give it a shot.  We've been playing with it for a while, so it seems solid, but never use non-production code in production environments.

Since this will have before/after data - being a little more prescriptive is useful.   Note - the test harness below is not a prerequisite to winning an iPad.   Any before/after data (even if the effect observed is negative) will register you to win an iPad.

Ideally, tests should share this configuration:

  • Capture your IOmeter workload results - the ultimate effect of this should be better guest-level latency and maximum IOps.   Do this in ADDITION to the vSCSIstats, ESXtop, and array stats noted earlier (one way to capture the ESX-side stats is sketched just after this list).
  • Windows VM, one vCPU, 512MB RAM, Iometer installed, a second unformatted VMDK sized to 10GB attached to the VM
    • The tests will run against the second VMDK.
    • Each test should run for at least four minutes.
  • 32 outstanding IOs
  • The NFS volume should be configured with as many disks as are available, in RAID0.   The point here is to eliminate back-end bottlenecks - we're trying to stress the NAS stack, not the block stack.
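
For the vSCSIstats and ESXtop portion, something along these lines from the ESX console works as a capture harness (a rough sketch - the worldGroupID comes from the -l listing, the output filenames are just examples, and the sample interval/count should be sized to cover your run):

  # list VM worldGroupIDs and their virtual disks
  vscsiStats -l

  # start histogram collection for the test VM (substitute its worldGroupID)
  vscsiStats -s -w <worldGroupID>

  # ...run the Iometer test...

  # dump latency and IO-length histograms in comma-separated form, then stop collection
  vscsiStats -p latency -c -w <worldGroupID> > vm_latency_prepatch.csv
  vscsiStats -p ioLength -c -w <worldGroupID> > vm_iolength_prepatch.csv
  vscsiStats -x -w <worldGroupID>

  # capture esxtop in batch mode alongside the run (5-second samples x 60 = 5 minutes)
  esxtop -b -d 5 -n 60 > esxtop_prepatch.csv

Repeat the same capture after the patch so the before/after comparison lines up.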

Then I propose we run the following tests against the entire unformatted second VMDK.  For each of these we should collect pre- and post-patch results.

  • 4k IO, 100% random, 0% read
  • 4k IO, 100% random, 50% read
  • 4k IO, 66% random, 0% read
  • 4k IO, 66% random, 50% read
  • 4k IO, 33% random, 0% read
  • 4k IO, 33% random, 50% read
  • 4k IO, 0% random, 0% read
  • 4k IO, 0% random, 50% read
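
Each line above corresponds to a single Iometer access specification. As a cross-check, in a saved Iometer config file (.icf) the first workload above would show up as an access-spec line roughly like this (the comment line is the column key Iometeritself writes into its .icf files; treat this as a sketch and verify against a config saved from your own GUI):

  'size,% of size,% reads,% random,delay,burst,align,reply
  4096,100,0,100,0,1,0,0

i.e. 4096-byte transfers, 0% reads (100% writes), 100% random.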

Pause and analyze your results.  What would be awesome would be to find the workload with the most dramatic change (MOST) and the one with the least change (LEAST).  Based on those, we should then add the following tests for the pre- and post-patch configurations:

  • 8k IO, MOST
  • 8K IO, LEAST
  • 16k IO, MOST
  • 16k IO, LEAST
  • 32k IO, MOST
  • 32k IO, LEAST
  • 4 VM configuration: 4xMOST
  • 4 VM configuration: 4xLEAST
  • 8 VM configuration: 8xMOST
  • 8 VM configuration: 8xLEAST
  • 16 VM configuration: 16xMOST
  • 16 VM configuration: 16xLEAST

Post your results to this thread.

Thanks - any and all data is welcome!   Remember - DO NOT RUN THIS IN PRODUCTION!

swevm

Re: Are you a customer using DART to host an NFS datastore?

Did some tests today with DART version 6.40.500 and later compared some of the results with 6.40.805. More results with different block sizes will be published tomorrow.

Wow! What a difference. This is just one indication out of several that there's a lot of performance gain that can be achieved by utilizing clever algorithms in software.

Unfortunately I cannot run any PowerShell scripts here, so I had to take another approach. All tests ran for at least 4 minutes and were monitored, and the performance statistics for the virtual disk have been exported to Excel spreadsheets that I will clean up and publish later.

A lot more tests still have to be done, but the results below speak for themselves in terms of the performance gain between the current and the experimental DART code.

Another thing I noticed was reboot time. A DM node seems to reboot faster than with the current released code. I haven't had time to measure the difference yet, and probably won't have time to do that either.

The first tests were done with 2x W2k8R2 VMs. Why W2k8R2? The simple answer is that it performs better than 2k3.

On the downside, it needs more memory, so I had to run the tests with 2GB of RAM instead of 512MB.

Attached to each VM is a 10GB unformatted VMDK used with IOmeter.

IOmeter is configured with 6 workers, each with 32 outstanding IOs.

Note: the back-end storage is not optimally configured, with only 4x 4+1 RAID groups backing the NFS export.

Block Size | Randomness | Read/Write Ratio | Ops/s [6.40.500] | Ops/s [6.40.805] | Improvement

Reference:
4k | 0% | 100% read, 0% write | 69336 [1x VM] | 35918+35960 | 1.04x

4k blocks:
4k | 100% | 0% read, 100% write | 1040+1270 | 5303+5284 | 4.58x
4k | 100% | 50% read, 50% write | 2221+2305 | 7550+7526 | 3.33x
4k | 66% | 0% read, 100% write | 1377+1378 | 4227+4219 | 3.07x
4k | 66% | 50% read, 50% write | 2326+2330 | 7046+7024 | 3.02x
4k | 33% | 0% read, 100% write | 1507+1511 | 3333+3243 | 2.18x
4k | 33% | 50% read, 50% write | 2296+2303 | 2859+2881 | 1.24x
4k | 0% | 0% read, 100% write | 1473+1490 | 3074+3274 | 2.14x
4k | 0% | 50% read, 50% write | 2024+2045 | 2933+3051 | 1.47x

8k blocks:
8k | 100% | 0% read, 100% write | 1229+1230 | 3929+3850 | 3.16x
8k | 100% | 50% read, 50% write | 1741+1624 | 5870+4672 | 3.13x
8k | 66% | 0% read, 100% write | 1427+1190 | 3447+3522 | 2.66x
8k | 66% | 50% read, 50% write | 1819+1817 | 6082+6197 | 3.38x
8k | 33% | 0% read, 100% write | 1614+1540 | 3718+3463 | 2.28x
8k | 33% | 50% read, 50% write | 1742+1791 | 5738+5296 | 3.12x
8k | 0% | 0% read, 100% write | 2351+1460 | 4517+4731 | 2.43x
8k | 0% | 50% read, 50% write | 2313+2303 | 11302+11770 | 4.99x (strange)

32k blocks:
32k | 100% | 0% read, 100% write | - | - | -
32k | 100% | 50% read, 50% write | - | - | -
32k | 66% | 0% read, 100% write | - | - | - (entered the wrong randomness; see the 50% row below)
32k | 50% | 50% read, 50% write | 1807+1514 | 3187+3375 | 1.98x (still an interesting result for bigger blocks)
32k | 33% | 0% read, 100% write | - | - | -
32k | 33% | 50% read, 50% write | - | - | -
32k | 0% | 0% read, 100% write | - | - | -
32k | 0% | 50% read, 50% write | - | - | -

Added results for the rest of the 4k tests, and the 8k testing is done. Also added a test for 32k blocks, but unfortunately in the hurry I entered the wrong randomness, 50% instead of 66%. Still an interesting result though, as the patch seems to improve performance for bigger blocks too.

nikolayp

Re: Are you a customer using DART to host an NFS datastore?

Single ESX 4.1.0 build 260247 on a Dell server with 8 quad-core CPUs, 128GB memory and a 10Gb network
Celerra: NS-G8 with CX4-960
Network: 10Gb all the way between ESX and the DM, 1500 MTU
Celerra doesn't work with RAID0, so the FS was built on 14 FC disks (7 LUNs) using RAID10 with a 32KB stripe across them
VMs were running WinXP SP3 with 512MB RAM and a second 10GB unformatted drive against which IOmeter was used.
IOmeter on the VMs was configured with 1 worker having 32 outstanding IOs

The biggest difference between the base 6.0.40.8 version and the 6.0.40.805 patch I noticed when using a single VM (with 32 outstanding IOs): up to 3.5x for almost any IO size with 100% writes (both sequential and random).

Workload | 6.0.40.805 (start time, IOPS, MB/s, response time ms) | 6.0.40.8 (start time, IOPS, MB/s, response time ms) | Improvement (IOPS / MB/s / response time)
4k IO, 100% random, 0% read | 7:33, 13659.55, 53.36, 2.342 | 9:31, 4560.25, 17.81, 7.0167 | 3.00 / 3.00 / 3.00
4k IO, 100% random, 50% read | 6:46, 6078.9, 23.75, 5.2627 | 9:37, 3857.68, 15.07, 8.2942 | 1.58 / 1.58 / 1.58
4k IO, 66% random, 0% read | 6:55, 13927.31, 54.4, 2.2968 | 9:42, 4901.22, 19.15, 6.5282 | 2.84 / 2.84 / 2.84
4k IO, 66% random, 50% read | 7:00, 5127.75, 20.03, 6.2394 | 9:48, 4095.92, 16, 7.8116 | 1.25 / 1.25 / 1.25
4k IO, 33% random, 0% read | 7:06, 12742.6, 49.78, 2.5104 | 9:52, 4892.34, 19.11, 6.5401 | 2.60 / 2.60 / 2.61
4k IO, 33% random, 50% read | 7:11, 5155.74, 20.14, 6.2054 | 9:58, 4243.46, 16.58, 7.54 | 1.21 / 1.21 / 1.22
4k IO, 0% random, 0% read | 7:17, 13753.42, 53.72, 2.326 | 10:03, 5080.17, 19.84, 6.298 | 2.71 / 2.71 / 2.71
4k IO, 0% random, 50% read | 7:22, 12320.83, 48.13, 2.5953 | 10:19, 7129.63, 27.85, 4.4874 | 1.73 / 1.73 / 1.73

8k IO, 100% random, 0% read | 12:36, 13646.88, 106.62, 2.3445 | 11:27, 4616.11, 36.06, 6.9314 | 2.96 / 2.96 / 2.96
8k IO, 33% random, 50% read | 12:42, 4486.31, 35.05, 7.1313 | 11:20, 3534.25, 27.61, 9.0533 | 1.27 / 1.27 / 1.27
16k IO, 100% random, 0% read | 12:49, 11908.82, 186.08, 2.6863 | 11:37, 3440.26, 53.75, 9.3006 | 3.46 / 3.46 / 3.46
16k IO, 33% random, 50% read | 12:56, 4390.22, 68.6, 7.2878 | 11:43, 3213.38, 50.21, 9.9575 | 1.37 / 1.37 / 1.37
32k IO, 100% random, 0% read | 13:09, 8266.88, 258.34, 3.8702 | 11:56, 3023.74, 94.49, 10.5818 | 2.73 / 2.73 / 2.73
32k IO, 33% random, 50% read | 13:14, 3830.09, 119.69, 8.3537 | 12:02, 2842.07, 88.81, 11.2575 | 1.35 / 1.35 / 1.35

I also noticed that there seems to be some sort of limitation on the ESX side preventing more than 64 simultaneous outstanding IOs. So really, 2 VMs with 32 outstanding IOs each were doing great, but adding more VMs of the same type caused delays in serving writes/reads. Is there anything that can be tuned on ESX?

Just for comparison, another test was performed on an FS built on 10 4+1 RAID5 FC disks, using 4 VMs with 4 workers each and 1 outstanding IO (32K in size). The table below represents the numbers for 1 VM:

Version | Throughput (MB/s) | Response time (ms)
6.0.40.8 | 36 | 3.5
6.0.40.805 | 57 | 2.2

Total throughput for the 4 VMs was:

6.0.40.805 patch:

32K seq writes - ~235MB/s

32K random writes - ~235MB/s

6.0.40.8 image:

32K seq writes - ~157MB/s

32K random writes - ~156MB/s

235MB/s was close to the max throughput you can get out of that FS, so that was impressive!

So I think that by changing the IOmeter configuration it's possible to get much better numbers than the ones I got with 1 worker and 32 outstanding IOs.

sile1

Re: Are you a customer using DART to host an NFS datastore?

2x ESX 4.1.0 on Dell servers with 8x CPUs, 32GB memory and a 1Gb network
Celerra: NS480, FLARE 30.511
Network: single 1Gb connection, 1500 MTU
Storage: 4+1 400GB EFDs, 16x LUNs with a Celerra stripe created across all 16 LUNs (256KB stripe size), and then a single 200GB file system presented.

             The file system was mounted with prefetch disabled and Direct Writes disabled
VMs:   Windows 2003 with 512MB RAM and a second 10GB unformatted drive against which IOmeter was used.  I inflated the VMDKs on the NFS volume before I started testing.
IOmeter on the VMs was configured with 1 worker having 32 outstanding IOs, default alignment used (sector boundaries)

I saw the biggest improvement with 0% random 0% Read, and actually the biggest drop in performance with 0% random 50% Read.

Overall I am not seeing a difference between the two. My best theory is that by striping across 16 LUNs on the EFDs I was able to get very good concurrency on the GA code, so the patch isn't giving much benefit.  Only a few of the higher VM counts and larger block sizes were hitting my ~100MB/s limit on the network ports.  I have five more EFDs in another array.  I could potentially pull those to give 10x EFDs in RAID10, and then connect up some more network ports and re-test.

6.0.40.8:

Workload | 1xVM | 2xVM | 4xVM | 8xVM | 16xVM
4KB 0% random 0% Read | 4665.7 | 7141.1 | 10124.3 | 13744.4 | 15064.4
4KB 0% random 50% Read | 3906.7 | 4643.5 | 6060.3 | 8455.1 | 10344.4
8KB 0% random 0% Read | 5808.6 | 8729.0 | 11526.8 | 12579.6 | 12654.0
8KB 0% random 50% Read | 3322.7 | 3358.9 | 4677.4 | 6360.9 | 8037.8
16KB 0% random 0% Read | 3526.0 | 5492.4 | 6465.4 | 6650.0 | 6574.7
16KB 0% random 50% Read | 2241.3 | 2635.2 | 3503.3 | 4212.8 | 5240.5
32KB 0% random 0% Read | 2366.9 | 3156.0 | 3340.4 | 3413.6 | 3398.4
32KB 0% random 50% Read | 1438.1 | 1766.9 | 2354.2 | 2656.5 | 3216.9

6.0.40.805:

Workload | 1xVM | 2xVM | 4xVM | 8xVM | 16xVM
4KB 0% random 0% Read | 4687.8 | 7091.7 | 10335.1 | 11041.2 | 11486.5
4KB 0% random 50% Read | 4109.6 | 4613.7 | 6355.6 | 8796.5 | 10546.5
8KB 0% random 0% Read | 5389.4 | 8689.1 | 11390.0 | 11941.3 | 11865.1
8KB 0% random 50% Read | 3152.6 | 3173.4 | 4533.4 | 5858.9 | 7904.7
16KB 0% random 0% Read | 3439.6 | 5369.5 | 6484.3 | 6628.6 | 6547.1
16KB 0% random 50% Read | 2011.1 | 2542.1 | 3423.3 | 4038.9 | 4903.4
32KB 0% random 0% Read | 2279.3 | 3077.3 | 3402.4 | 3433.1 | 3377.5
32KB 0% random 50% Read | 1399.4 | 1662.1 | 2255.0 | 2638.3 | 2922.2

Relative (6.0.40.805 / 6.0.40.8):

Workload | 1xVM | 2xVM | 4xVM | 8xVM | 16xVM
4KB 0% random 0% Read | 1.00 | 0.99 | 1.02 | 0.80 | 0.76
4KB 0% random 50% Read | 1.05 | 0.99 | 1.05 | 1.04 | 1.02
8KB 0% random 0% Read | 0.93 | 1.00 | 0.99 | 0.95 | 0.94
8KB 0% random 50% Read | 0.95 | 0.94 | 0.97 | 0.92 | 0.98
16KB 0% random 0% Read | 0.98 | 0.98 | 1.00 | 1.00 | 1.00
16KB 0% random 50% Read | 0.90 | 0.96 | 0.98 | 0.96 | 0.94
32KB 0% random 0% Read | 0.96 | 0.98 | 1.02 | 1.01 | 0.99
32KB 0% random 50% Read | 0.97 | 0.94 | 0.96 | 0.99 | 0.91
RafaelNovo

Re: Are you a customer using DART to host an NFS datastore?

1x ESX 4.1
- 2x Intel Nehalem (4 cores) 2.4 GHz
- 32GB memory
- 1Gb Ethernet


NS-120
2x 73GB EFDs


In the attached file:
- an Excel sheet with an overview of the IOmeter results
- all IOmeter CSV output files
- complete performance grabs from ESX (vscsiStats & esxtop) and from the Celerra using the PowerCLI scripts

Two main findings:
- Huge performance improvements with this new patch, especially in the write-intensive workloads (up to 3x)
- An amazing IOPS number from a single pair of EFD drives (more than 8,000)


You guys did a great job! Can't wait to have this patch GA!

nikolayp

Re: Are you a customer using DART to host an NFS datastore?

Hi Sile,

Could you clarify what you mean by "direct writes disabled"? Or could you show your server_mount output for the FS used in testing? In order to get the benefit of the patch, it HAS to be mounted with the "uncached" option.
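
For reference, checking this from the Control Station looks something like the following (a sketch - substitute your own Data Mover and file system names, shown here as server_2 and fs_nfs_ds):

  # show the current mount options for the file system
  server_mount server_2 | grep fs_nfs_ds

  # remount with the uncached option (what the GUI calls Direct Writes / Enable Direct Writes)
  server_umount server_2 /fs_nfs_ds
  server_mount server_2 -option rw,uncached fs_nfs_ds /fs_nfs_ds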

Regards,

Nick


Re: Are you a customer using DART to host an NFS datastore?

Chad... thank you and my BETA patch results are posted on my blog here: http://www.boche.net/blog/index.php/2011/03/21/emc-celerra-beta-patch-pumps-up-the-nfs-volume/

I'll perform additional "postpatch" testing in accordance with the guidelines outlined above.  I wasn't aware of the specific test parameters a month ago when I originally embarked on the patch upgrade mission.

sile1

Re: Are you a customer using DART to host an NFS datastore?

Yes, uncached mode (the GUI calls it Enable Direct Writes).  That was it.

Here are my new results.

Environment:

2x ESX 4.1.0 on Dell servers with 8x CPUs, 32GB memory and a 1Gb NIC for NFS
Celerra: NS480, FLARE 30.511
Network: 4x 1Gb connections using LACP, 1500 MTU (only 2 ports actually used, because I had two hosts, each using a single 1Gb adapter for NFS)
Storage: 2x 4+1 400GB EFD RAID groups, 2x LUNs per RG using Celerra AVM, and then a single 200GB file system presented.
             The file system was mounted with Direct Writes enabled
VMs:   Windows 2003 with 512MB RAM and a second 10GB unformatted drive which IOmeter used.  I inflated the VMDKs on the NFS volume before I started testing.
IOmeter on the VMs was configured with 1 worker having 32 outstanding IOs, default alignment used (sector boundaries)


Re: Are you a customer using DART to host an NFS datastore?

Sile,

Thanks for the comprehensive set of results.  See the attached images below for a summary of your vscsistats results based on T1/T2 (1st phase) and T3/T4 for the deep dive into IO sizes.  For anyone curious, the thread below shows you how to create this kind of data.

https://community.emc.com/thread/118723

kec3

Re: Are you a customer using DART to host an NFS datastore?

Great results, thanks for posting!

What did your VM look like?   Did you do I/O to a 10GB unformatted VMDK?

And ... to verify ... this is with a single 1GbE connection?
