I created a storage pool per the spec'd recommendations, consisting of two tiers (SAS and NL-SAS) of RAID groups with FAST Cache enabled.
Now that the storage pool is created, I believe I have to right-click on it ("Pool 0") and create LUNs. How many LUNs should I create, and should they have the thin and/or deduplication boxes checked?
The end goal is to create a few filesystems here, mostly for NFS export to VMware ESXi 5 hosts over a pair of Brocade 10 Gbps switches to the 10 GbE data mover I/O modules.
In Pool 0 I have 2 RAID 10 groups (16 drives total) of 536.808 GB SAS drives and 2 RAID 6 groups (16 drives total) of 1834.354 GB NL-SAS drives. Total capacity is 26257.184 GB.
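For a sanity check, that total lines up with a quick back-of-envelope calculation. A minimal sketch in Python, assuming 8+8 RAID 10 groups and 6+2 RAID 6 groups (the exact layouts aren't stated above, and the small gap versus the reported total would be pool overhead):

```python
# Back-of-envelope usable-capacity check for Pool 0.
# Assumptions (not stated in the thread): each RAID 10 group is 8+8
# (half the drives are mirrors) and each RAID 6 group is 6+2.
sas_drives, sas_size_gb = 16, 536.808        # 2 x RAID 10
nlsas_drives, nlsas_size_gb = 16, 1834.354   # 2 x RAID 6

sas_usable = (sas_drives / 2) * sas_size_gb          # RAID 10: 50% usable
nlsas_usable = (nlsas_drives - 4) * nlsas_size_gb    # RAID 6: 2 parity drives per group

print(f"SAS tier:    {sas_usable:.3f} GB")
print(f"NL-SAS tier: {nlsas_usable:.3f} GB")
print(f"Total:       {sas_usable + nlsas_usable:.3f} GB")  # ~26306 GB vs 26257.184 reported
```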
Am I on the right track here or not?
I am curious to find out how much performance you will be able to squeeze out of this config. Do you have plans to run some kind of benchmark before you go into production (I/O Analyzer or Iometer)?
I can run Iometer on my current NX4 Celerra (NFS presented to VMware over 1 Gbps) and also run it on this new VNX5200 and see which one comes out on top.
The EMC engineer suggested this config based on our NX4 load, which he ran through the analyzer.
I will put a test VM on this, do some benchmarking, and post the results. I goofed, though: I created the 10 LUNs at "MAX" and forgot to subtract 5% from the pool "for relocations and recovery operations," so I am waiting for the LUNs to delete now so I can recreate them.
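In case it helps anyone else doing the same thing, the per-LUN size after holding back that 5% is a quick calculation. A minimal sketch using the pool figure above (10 equal LUNs is my layout, not a requirement):

```python
# Sizing 10 equal LUNs while leaving 5% of the pool free for
# relocations and recovery operations.
# Assumes the 26257.184 GB pool capacity reported earlier; adjust to taste.
pool_gb = 26257.184
reserve = 0.05      # 5% held back
lun_count = 10

usable_gb = pool_gb * (1 - reserve)
per_lun_gb = usable_gb / lun_count
print(f"Provision each LUN at ~{per_lun_gb:.0f} GB "
      f"({usable_gb:.0f} GB total, {pool_gb - usable_gb:.0f} GB held back)")
```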
Same 5 ESXi hosts, but their storage will be traversing a 10 GbE network on separate switches. They will have simultaneous access to the existing production datastores on the NX4 via their dedicated gigabit networking.
The plan is to Storage vMotion piece by piece, but before that I want to do some tests on the new VM datastores.
OK, so the ESXi hosts will continue to use 1G interfaces. Do you have the flexibility to put the ESXi hosts and VNX data movers on the same subnet so you are not jumping through any routers?
Yes, I do. The new dedicated QLogic 10 Gb NICs in each of the ESXi servers are connected to their own separate Brocade TurboIron 24X 10 Gbps switches, along with the VNX's 10 GbE I/O modules.
The existing NICs in the ESXi servers (there are multiple) go to Cisco switches, where storage traffic is on its own subnet and VLAN, alongside various virtual machine VLANs like DMZ, Server LAN, Voice, etc.
So the only thing the 10 GbE NICs are doing right now is vMotion, and that was simply for testing purposes. I may leave vMotion on the 10 gig but throttle it to 4 Gbps because: 1) we don't vMotion often; 2) vMotion is pretty quick; and 3) during a vMotion that still leaves 6 Gbps, which is the SAS bus speed of the DAEs anyway.
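Point 3 is just link arithmetic; a back-of-envelope sketch, taking the 6 Gbps SAS bus figure at face value:

```python
# Headroom check: cap vMotion so NFS storage traffic keeps at least
# as much bandwidth as the 6 Gbps SAS back end can deliver.
link_gbps = 10.0      # dedicated 10 GbE storage/vMotion link
sas_bus_gbps = 6.0    # DAE SAS bus speed cited above

vmotion_cap_gbps = link_gbps - sas_bus_gbps
print(f"Throttle vMotion to {vmotion_cap_gbps:.0f} Gbps; "
      f"{sas_bus_gbps:.0f} Gbps remains for NFS storage traffic")
```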
I am always open to opinions and suggestions too, as this VNX is VERY flexible, I'm finding out. But right now I am just going by the recommendations of the engineer who ran our NX4 analysis, took in our requirements, and spec'd out this solution for us.
The reason we're doing NFS is that it's dead simple: we don't have Fibre Channel switches or HBAs, and I just think it's easier.
We have about 130 employees across what will soon be 5 locations (up from 4).
Although we are predominantly a block-only shop, I really like NFS for its simplicity. You have probably seen this paper; in case you have not, these are some excellent TechBooks.