We have two Celerra NX4s replicating over a WAN.
In our HQ, we want to upgrade one NX4 to a new VNX early next year. The issue is that, due to cost, we're not going to upgrade both production and DR in the same year.
The complication is that we present storage to VMware using NFS, because we were told that replicating NFS with Celerra Replicator was much more efficient than iSCSI.
Now, looking at the VNX, I almost want to re-architect the whole VMware host-to-storage connection. Instead of 2 Gbps trunks from each VMware host to a gigabit switch, I'm thinking: what if we go 10 GbE, or 16 Gb Fibre Channel if possible? That would require new NICs or HBAs in the servers, fine... but how would this work with a VNX?
If we presented LUNs to the VMware hosts, would we get better performance than just presenting NFS shares? And if we did that, does the VNX have any way to share/translate those LUNs into NFS file systems for replication to our DR NX4?
I'm really looking for the best possible performance for five VMware hosts talking to a new VNX, not only for today's investment but for one that lasts a few years of growth. We have three VMware hosts today, are moving to five for some projects, and I can foresee adding one new VM host per year for the next three years.
NFS is easy. Dead easy. But I don't want to choke things as we add more storage, and investing in a new shared storage system is a good time to think about what's best. I think upgrading to a Cisco Nexus 5548UP switch, using 10 GbE, and keeping NFS would be a super simple way to do it... but I don't want to do that if it's not the BEST way to do it.
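For what it's worth, the "dead easy" part holds up on the ESXi side too. A minimal sketch of mounting an NFS export as a datastore from the ESXi shell, assuming a 5.x-era `esxcli` (the Data Mover hostname, export path, and datastore name below are placeholders, not real names from this setup):

```shell
# Mount an NFS export from the VNX/Celerra Data Mover as a VMware datastore.
# "vnx-dm2", "/prod_vmfs01", and "prod_ds01" are illustrative placeholders.
esxcli storage nfs add --host=vnx-dm2 --share=/prod_vmfs01 --volume-name=prod_ds01

# Confirm the datastore is mounted on this host.
esxcli storage nfs list
```

The same two commands repeated per host (or the vSphere client equivalent) are essentially the whole provisioning workflow, which is part of why NFS stays attractive next to zoning and LUN masking on the FC side.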
If you want to be able to replicate between the NX4 and the VNX, then staying with NFS is the simplest way.
Whether to use SAN or NAS is a different discussion – each one has its pros and cons.