I'm gathering information to research, plan, and get budgetary info for an early 2014 approval for our infrastructure improvements.
Currently we run VMware ESX 4.1 U3 across 3 Dell R710 hosts. They are running about 45 active VMs, and each host is at about 80% of its 64 GB of memory. There are 4 NFS datastores housed on a Celerra NX4 using SAS drives. The communication between the VMware hosts and the NX4 is on its own VLAN, connected via a 1 Gbps Cisco 3750G switch.
At a remote site we have another NX4, but with SATA drives. We run Celerra Replicator from our main HQ to the remote site. There is one VMware host there running VMware SRM, and again a Cisco 3750G switch with a similar configuration.
I'm looking to ADD 2 new VM hosts at HQ, as well as a VNX to migrate to. After migrating to the VNX, we are debating keeping the NX4 and using it for backups, while using the new VNX for production. Not sure if we will sell the NX4 back to EMC (though they did offer us a credit if we do). Here are my questions for y'all...
Can the VNX replicate to our existing NX4 at our DR site? Would the replication have to start fresh? It's a 50 Mbps fiber link between HQ and the DR site, but we have a schedule in place so Celerra Replicator uses 20 Mbps during business hours, then is unlimited outside business hours.
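For rough planning, that throttle schedule caps how much changed data can cross the link per day. A minimal sketch, assuming a 10-hour business day, full link utilization, and no protocol overhead (all three assumptions are mine, not from the thread):

```python
# Rough daily replication capacity under the WAN throttle schedule.
# Assumptions (hypothetical): 10-hour business day, full utilization,
# zero protocol overhead -- real throughput will be lower.
MBPS = 1_000_000  # one megabit per second, decimal

business_hours = 10
off_hours = 24 - business_hours

business_rate = 20 * MBPS  # throttled to 20 Mbps during the day
off_rate = 50 * MBPS       # unthrottled = the full 50 Mbps link at night

bits_per_day = (business_hours * 3600 * business_rate
                + off_hours * 3600 * off_rate)
gb_per_day = bits_per_day / 8 / 1e9
print(f"~{gb_per_day:.0f} GB of changed data per day")  # ~405 GB
```

If the daily change rate on the production datastores stays well under that figure, the throttled link should keep the DR copy current.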
Can we get a new VNX with 10GbE interfaces for both data movers? What would be a good switch infrastructure? A Cisco Nexus 2000 series? We're familiar with Cisco, as all our networking gear from core to WAN to wireless is Cisco. What type of cabling (copper, fiber, etc.)?
I'm thinking of adding a 10GbE switch, adding 10GbE PCI cards to the current hosts, and ordering new hosts with 10GbE cards as well. The new switch and all 10GbE gear would be entirely separate, with jumbo frames turned on. We currently do not use jumbo frames, and I don't plan on altering that portion of the network because I don't know what would happen if not everything supported it (from clients to routers to wireless to other ancillary switches). It doesn't seem like you can turn on jumbo frames on a per-VLAN basis... it's all or nothing.
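On the jumbo-frames point: the frame size that actually works end to end is the minimum MTU of any hop on the path, which is why an isolated 10GbE storage network is the clean place to enable it. A toy illustration (device names and MTU values are hypothetical, not from an actual config):

```python
# The usable MTU on a path is the minimum MTU of every hop, so jumbo
# frames only help when every device on the storage VLAN supports them.
storage_path = {
    "esxi_vmkernel": 9000,
    "10gbe_switch": 9216,    # many datacenter switches allow 9216
    "vnx_data_mover": 9000,
}
mixed_path = dict(storage_path, legacy_uplink=1500)  # one standard-MTU hop

print(min(storage_path.values()))  # 9000: jumbo works on the isolated net
print(min(mixed_path.values()))    # 1500: a single hop drags it back down
```

Keeping the jumbo-frame domain confined to the dedicated storage switch avoids exactly the mixed-path case.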
So I'm looking at an entirely separate 10GbE backend for storage between all VM hosts and the new VNX, then keeping the existing Cisco 3750G for the VMware hosts' connectivity to our core, which carries various VLANs for voice, data, DMZ, vMotion, etc. The current hosts have 4 onboard 1 Gb ports and a card with 4 more 1 Gb ports (not all used today). I'm just thinking to keep the new VNX completely physically separate (except for a management interface and a replication interface). I also plan on upgrading VMware 4.1 to 5.1, or 5.5 if it's available at that time.
Any recommendations on connectivity, whether the VNX can replicate to NX4s, etc., would be great.
Reasons for shopping:
1. Celerra NX4 support more than doubled because it's now considered older gear: nearly $12,000 a year for NBD support vs. $5,000 previously.
2. Starting to hit a performance ceiling. Seeing some datastores spike from 40 to 300 ms latency from the VM host to the NFS datastore, though the normal average is under 16 ms. These spikes depend on what's going on in the environment and which datastore we're looking at. Also interested in the VNX flash cache, possibly paired with new 10GbE connectivity to the VMware hosts.
3. Current VM hosts are at 80% capacity. We would like to continue to grow, and while adding VMware hosts would be the next step, we also want to make sure the backend storage is ready for the added connections. This also has me thinking of upgrading the storage connectivity to 10GbE rather than 1 Gb. We realize that VMware's NFS implementation can't aggregate bandwidth across LACP links unless we do some creative multi-subnet, multi-interface, multi-VLAN configuration. Going to a straight 10GbE interface across 2 standalone switches (like the Nexus line) for failover might be a better solution with less complexity.
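To put numbers on point 3: ESXi mounts each NFS datastore over a single TCP session, and a LACP hash pins any one session to a single physical link, so a 4×1GbE bundle still caps each datastore at roughly one link's worth of bandwidth. A back-of-the-envelope comparison (line-rate ceilings only, ignoring TCP/NFS overhead):

```python
# Why a single 10GbE link beats 1GbE LACP for NFS datastores:
# one TCP session per datastore rides one physical link.
def line_rate_mb_per_s(gbps: float) -> float:
    """Decimal MB/s ceiling for a given link speed in Gbps."""
    return gbps * 1000 / 8

single_1gbe = line_rate_mb_per_s(1)      # ~125 MB/s per datastore
lacp_4x1gbe = line_rate_mb_per_s(1)      # still ~125 MB/s per datastore
single_10gbe = line_rate_mb_per_s(10)    # ~1250 MB/s ceiling

print(single_1gbe, lacp_4x1gbe, single_10gbe)  # 125.0 125.0 1250.0
```

The multi-subnet trick spreads *different* datastores across links, but no single datastore ever exceeds one link's rate, which is what makes the straight 10GbE design simpler.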
Yes, you can use Replicator between an NX4 and a VNX.
You can avoid a full refresh if you use common base checkpoints – available since 22.214.171.124.
With that feature you create a user checkpoint on each of the three systems, synchronize them, and then restart the replication without a full transfer.
See the Replicator manual for the latest VNX release and search for “common base” and “cascading replication”.
Yes 10GBit interfaces are available
What are the 10Gbit interfaces? Are they LC fiber connectors? Copper?
I'm trying to piece together the proper SFP+ modules for Cisco switch gear that this will connect up to.
My backup plan, if we don't have the budget for Cisco Nexus, is putting C3KX-NM-10G modules into two existing Cisco switches. They have 2 SFP ports and 2 10G SFP+ ports. Figuring out what modules I need to connect to the VNX is currently where I'm stuck. If fiber, what kind? Multi-mode LC?
I come from the NX4, so I take it each data mover would have a 10G connection in case it has to fail over, correct? Is the VNX architected the same way (2 data movers with connections to your network)?
Thanks for that,
Looks like it's our choice. We are presenting NFS storage to VMware, so we're looking at these options:
Two-Port 10 GbE Opt IP Module - IP module with two 10 Gb/s Ethernet ports and the choice of SFP+ optical connection or active twinax copper connection to the Ethernet switch.
Two-Port 10GBase-T IP Module - IP module with two 10GBase-T Ethernet ports with copper connection to the Ethernet switch.
I can find all kinds of SFP+ optical modules for Cisco so just match what's on either end I guess. I'll reach out to our rep for configuration details when we spec it out.
Thanks for the link to the spec sheet.
It is pretty simple – the only optical 10Gbit we support is the short-wave multi-mode one.
That is the most popular option for hosts and optical connections.
Ah OK, I see that Cisco part SFP-H10GB-CU1M, a 10G SFP+ twinax 3.3 ft (1 m) cable, is compatible with the expansion module I would put into a 3560X.
Though to get active twinax, it seems I have to go to a 7 m or longer cable. Since it's all in the same rack, I really don't need a cable that long.
I ran into an old discussion that said only EMC-branded active twinax was supported. Is this still the case on the VNX, even if you don't go over the 5 m length limitation?