1 Rookie • 358 Posts • April 1st, 2014 10:00

How many LUNs in a Storage Pool?

I created a storage pool per our spec'd recommendations: a FAST Cache enabled, two-tier pool consisting of SAS and NL-SAS RAID groups.

Now that the storage pool is created, I believe I have to right-click on it ("Pool 0") and create LUNs. How many LUNs should I create, and should they have the thin and/or deduplicated boxes checked?

The end goal is to create a few filesystems on here, mostly for NFS export to VMware ESXi 5 hosts over a pair of Brocade 10 Gbps switches into the 10 GbE Data Mover I/O modules.

In Pool 0 I have two RAID 10 groups (16 drives total) of 536.808 GB SAS drives and two RAID 6 groups (16 drives total) of 1834.354 GB NL-SAS drives. Total capacity is 26257.184 GB.
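For what it's worth, here is a rough sketch (just using Python as a calculator) of how those tier capacities add up, assuming each RAID 10 group is 4+4 and each RAID 6 group is 6+2; the raw math lands slightly above the 26257.184 GB the pool reports, which I take to be pool overhead:

# Rough tier math; the 4+4 RAID 10 and 6+2 RAID 6 layouts are my assumption.
sas_gb, nlsas_gb = 536.808, 1834.354

raid10_usable = 2 * 4 * sas_gb       # 2 groups x 4 data drives (mirrored pairs)
raid6_usable = 2 * 6 * nlsas_gb      # 2 groups x 6 data drives (+2 parity)

print(raid10_usable + raid6_usable)  # ~26306.7 GB raw; the pool shows 26257.184 GB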

Am I on the right track here or not?

1 Rookie • 20.4K Posts • April 1st, 2014 11:00

So you are going to present them through the Data Movers? Take a look at page 18.

1 Attachment

1 Rookie • 358 Posts • April 1st, 2014 11:00

Perfect. I had that document at one time but misplaced it. Thank you, that's exactly what I was looking for.

1 Rookie • 20.4K Posts • April 1st, 2014 11:00

I am curious to find out how much performance you will be able to squeeze out of this config. Do you have plans to run some kind of benchmark before you go into production (I/O Analyzer or IOmeter)?

1 Rookie • 358 Posts • April 1st, 2014 11:00

I can run IOmeter on my current NX4 Celerra (NFS presented to VMware over 1 Gbps) and also run it on the new VNX5200 and see which one comes out on top.

The EMC engineer suggested this config based on our NX4 load that he ran through the analyzer.

I will put a test VM on this, do some benchmarking, and post the results. I goofed, though: I created the 10 LUNs at "MAX" and forgot to subtract 5% from the pool "for relocations and recovery operations," so I am waiting for the LUNs to delete so I can recreate them.
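For reference, a minimal sketch of the sizing math I should have done up front, assuming an even split across the 10 LUNs and using the pool total I quoted above (in practice the input should be the usable capacity Unisphere reports for the pool):

# Leave ~5% of the pool free and split the rest across 10 LUNs.
pool_gb = 26257.184          # raw total quoted above; use Unisphere's usable figure in practice
num_luns = 10

per_lun_gb = pool_gb * 0.95 / num_luns
print(round(per_lun_gb, 2))  # ~2494.43 GB per LUN instead of "MAX"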

1 Rookie • 20.4K Posts • April 1st, 2014 12:00

So you are changing ESXi hosts in addition to storage?

1 Rookie • 358 Posts • April 1st, 2014 12:00

Same 5 ESXi hosts, but their storage will be traversing a 10 GbE network and separate switches. They will have simultaneous access to the existing production datastores on the NX4 via their dedicated gigabit networking.

The plan is to Storage vMotion piece by piece, but before that I want to do some tests on the new VM datastores.

1 Rookie • 358 Posts • April 1st, 2014 12:00

Yes, I do. The new dedicated QLogic 10 Gb NICs in each of the ESXi servers and the VNX 10 GbE I/O modules are connected to their own separate Brocade TurboIron 24X 10 Gbps switches.

The existing NICs in the ESXi servers (there are multiple) go to Cisco switches, where storage traffic is on its own subnet and VLAN, along with various virtual machine VLANs like DMZ, server LAN, voice, etc.

So the only thing the 10 GbE NICs are doing right now is vMotion, and that was simply for testing purposes. I may leave vMotion on the 10 gig but throttle it to 4 Gbps because 1) we don't vMotion often, 2) vMotion is pretty quick, and 3) during vMotion that still leaves 6 Gbps, which is the SAS bus speed of the DAEs anyway.

I am always open to opinions and suggestions too, as this VNX is VERY flexible, as I'm finding out. But right now I am just going by the recommendations of the engineer who ran our NX4 analysis, took in our requirements, and spec'd out this solution for us.

The reason we're doing NFS is that it's dead simple; we don't have Fibre Channel switches or HBAs, and I just think it's easier.

We have about 130 employees across what will soon be 5 locations (up from 4).

1 Rookie • 20.4K Posts • April 1st, 2014 12:00

OK, so the ESXi hosts will continue to use 1G interfaces. Do you have the flexibility to put the ESXi hosts and VNX Data Movers on the same subnet so you are not jumping through any routers?

1 Rookie • 20.4K Posts • April 1st, 2014 12:00

Although we are predominantly a block-only shop, I really like NFS for its simplicity. You have probably seen this paper; in case you have not, these are some excellent TechBooks:

https://www.emc.com/collateral/hardware/technical-documentation/h8229-vnx-vmware-tb.pdf

1 Rookie • 358 Posts • April 2nd, 2014 07:00

Sorry, I think the system automatically added the .xls extension. Those are CSV files; if you open/import them into Excel as CSV, they should format into proper columns.

1 Rookie • 358 Posts • April 2nd, 2014 07:00

OK, here are some preliminary IOmeter benchmarks and some shots of the network bandwidth during the tests.

I know I didn't reach much over 1 Gbps on the 10 GbE link, but it's just 1 VM on 1 host. I figure I have at least 50 VMs and 5 hosts; if everything is busy and hitting the array at once, I am still happy we have 10 Gbps into the array.

The existing NX4-SAS results are lower than I expected, but there are 19 VMs powered on and running on that filesystem, 7 of which are on the same host as the VM that ran these IOmeter tests.

The existing NX4-SATA results are pretty low, and I expect low for SATA, but only 3 powered-on VMs are on it, 2 on the same host that ran IOmeter (including the IOmeter VM).

Note, I haven't done ANY tweaking yet. I did not venture into network tweaking, queue depths, or any of the other VMware advanced options, nor do I have the VAAI plugin installed (not that it should make any difference in these tests).

At any rate, the numbers off the VNX test are far higher than what we are getting on the NX4.

The test VM is a Windows 2008 R2 machine, fully patched, with 1 CPU, 4 GB RAM, and a 10 GB hard drive on each filesystem that was tested. Each test ran one at a time, and the "real life" test consisted of a 65% read / 35% write mix. All tests used a 100% random workload and a 256 KB transfer request size. Each test ran for 5 minutes after a 30-second ramp-up quiet time.
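For anyone who wants to repeat this, here is a small sketch that captures those IOmeter settings and converts an IOPS number into throughput at the 256 KB request size; the 4000 IOPS value is made up purely to show the conversion, not a measured result:

# IOmeter settings from this run; the IOPS value below is a made-up example.
spec = {
    "transfer_request_kib": 256,   # KiB per I/O
    "read_pct": 65,
    "write_pct": 35,
    "random_pct": 100,
    "ramp_up_sec": 30,
    "run_time_min": 5,
}

def throughput_mib_s(iops, request_kib):
    # IOPS x request size (KiB) / 1024 = MiB/s on the wire
    return iops * request_kib / 1024

print(throughput_mib_s(4000, spec["transfer_request_kib"]))  # 1000 MiB/s at a hypothetical 4000 IOPS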

5 Attachments

1 Rookie • 358 Posts • April 2nd, 2014 08:00

Oh, and I just realized my scale is different between the switches. The 10 GbE switches are graphing in bytes per second, and the 1 Gbps switches are graphing in bits per second. Let me unify that.

That puts me at around 8.24 gigabits per second, given a maximum of 1.03 gigabytes per second. So that is a benefit right there.
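In case it helps anyone else reading the graphs, the unit conversion is just:

# 10 GbE switches graph in bytes/sec, the 1 Gbps switches in bits/sec.
peak_gigabytes_per_sec = 1.03        # peak from the 10 GbE switch graph
print(peak_gigabytes_per_sec * 8)    # 8.24 gigabits per second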

1 Rookie • 20.4K Posts • April 2nd, 2014 12:00

Are you using IOmeter or something else? Can you try these tests with an 8 KB or 16 KB size?

1 Rookie • 358 Posts • April 2nd, 2014 13:00

Yes, IOmeter in a Windows Server 2008 R2 VM, fully patched, with the latest VMware Tools for 5.0 U3 plus the January VMware patches on top of that.

I found a blog that said to use a 256 KB size, so that's what I used. But yes, I can try 8 KB and 16 KB.

I am redoing it, though, because my pool was 97% subscribed and VAAI was not enabled for my first filesystem. So I blew it all away, and I'm making the 10 LUNs 2445 GB each, putting me at 95.158% subscribed. That is closer to leaving the 5% free space in the pool described in the Best Practices document.
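Quick check of the new layout; the ~25.7 TB usable figure is just what the 95.158% number implies, and is a bit less than the raw tier total from earlier in the thread:

# 10 thin LUNs at 2445 GB against the pool's usable capacity.
num_luns, lun_gb = 10, 2445
pool_usable_gb = 25694               # assumed: roughly what 95.158% subscribed implies

subscribed_pct = num_luns * lun_gb / pool_usable_gb * 100
print(round(subscribed_pct, 3))      # ~95.158% subscribed, leaving ~5% headroom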

1 Rookie • 358 Posts • April 2nd, 2014 15:00

OK, here are tests with 8 KB and 16 KB. The first CSV file is the VNX5200; the second is the NX4 (as seen in the file name).

Each test ran for 5 minutes.

In each CSV there are four tests, in this order:

100% 8 KB reads

100% 16 KB reads

8 KB "real world" (65% reads / 35% writes)

16 KB "real world" (65% reads / 35% writes)

And the clear winner is the VNX by a long shot.

2 Attachments
