
1 Rookie • 358 Posts


August 31st, 2013 12:00

Want to plan an upgrade from Celerra NX4 to VNX for early 2014, some questions.

I'm gathering information to research, plan, and put together budgetary numbers for an early 2014 approval of our infrastructure improvements.

Currently we run VMware ESX 4.1 U3 across 3 Dell R710 hosts.  They are running about 45 active VMs, and each host is at about 80% of its 64 GB of memory.  There are 4 NFS datastores housed on a Celerra NX4 using SAS drives.  The communication between the VMware hosts and the NX4 is on its own VLAN, connected via a 1 Gbps Cisco 3750G switch.

At a remote site we have another NX4, but with SATA drives.  We run Celerra Replicator from our main HQ to the remote site.  There is one VMware host there running VMware SRM, and again a Cisco 3750G switch in a similar configuration.

I'm looking to ADD 2 new VM hosts at HQ, in addition to adding a VNX to migrate to.  After migration to the VNX, we are debating keeping the NX4 and using it for backups, while using the new VNX for production.  Not sure if we will sell it back to EMC (though they did offer us a credit if we do).  Here are my questions for y'all...

Can the VNX replicate to our existing NX4 at our DR site?  Would the replication have to start fresh?  It's a 50 Mbps fiber link between HQ and the DR site, but we have a schedule in place so Celerra Replicator uses 20 Mbps during business hours and is unlimited outside business hours.
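For my own planning I've been doing some back-of-the-envelope math on how long a nightly replication delta would take over that link.  The delta sizes in this little sketch are made-up placeholders (we haven't measured our daily change rate yet); the 20/50 Mbps figures are just our schedule from above:

LINK_MBPS_BUSINESS = 20    # Replicator throttle during business hours
LINK_MBPS_OFF_HOURS = 50   # full fiber link the rest of the day

def hours_to_transfer(delta_gb, mbps):
    # GB -> megabits (x8 bits, x1024 MB per GB), ignoring protocol overhead
    megabits = delta_gb * 8 * 1024
    return megabits / mbps / 3600

for delta_gb in (50, 100, 250):   # hypothetical nightly change rates, not measurements
    print(f"{delta_gb} GB delta: {hours_to_transfer(delta_gb, LINK_MBPS_OFF_HOURS):.1f} h off-hours, "
          f"{hours_to_transfer(delta_gb, LINK_MBPS_BUSINESS):.1f} h at the 20 Mbps cap")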

Can we get a new VNX with 10 GbE interfaces for both Data Movers?  What would be a good switch infrastructure?  A Cisco Nexus 2000 series?  We're familiar with Cisco, as all our networking gear from core to WAN to wireless is Cisco.  What type of cabling (copper, fiber, etc.)?

I'm thinking of adding a 10 GbE switch, adding 10 GbE PCI cards to the current hosts, and ordering new hosts with 10 GbE cards as well.  The new switch and all the 10 GbE gear would be entirely separate, with jumbo frames turned on.  We currently do not use jumbo frames, and I don't plan on altering that portion of the network because I don't know what would happen if not everything supported them (from clients to routers to wireless to other ancillary switches).  It doesn't seem like you can turn on jumbo frames on a per-VLAN basis... it's all or nothing.
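The rough math behind wanting jumbo frames on the separate storage network is just frame-count arithmetic.  This assumes plain IP + TCP headers (40 bytes per frame) and an example 64 KB NFS read; nothing here is measured from our environment:

import math

IP_TCP_OVERHEAD = 40   # bytes of IP + TCP headers per frame

def frames_per_io(io_bytes, mtu):
    # how many frames a single NFS I/O gets chopped into at a given MTU
    return math.ceil(io_bytes / (mtu - IP_TCP_OVERHEAD))

for mtu in (1500, 9000):
    print(f"MTU {mtu}: a 64 KB NFS read takes ~{frames_per_io(64 * 1024, mtu)} frames")

Roughly 45 frames vs. 8 for the same I/O, which is the whole appeal of keeping jumbo frames confined to the dedicated storage switch.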

So I'm looking at an entirely separate 10 GbE backend for storage between all the VM hosts and the new VNX, and then keeping the existing Cisco 3750G for the VMware hosts' connectivity to our core, which carries various VLANs for voice, data, DMZ, vMotion, etc.  The current hosts have 4 onboard 1 Gb ports and a card with 4 more 1 Gb ports (not all used today).  I'm just thinking to keep the new VNX completely physically separate (except for a management interface and a replication interface).  I also plan on upgrading VMware from 4.1 to 5.1, or 5.5 if it's available at that time.

Any recommendations on connectivity, whether the VNX can replicate to NX4s, etc., would be great.

Reasons for shopping:

1.  Celerra NX4 support costs more than doubled because it's now considered older gear: nearly $12,000 a year for NBD support vs. $5,000 previously.

2.  We're starting to hit a performance ceiling.  We're seeing some datastores spike from 40 to 300 ms latency from VM host to NFS datastore, though the normal average is under 16 ms; the spikes depend on what's going on in the environment and which datastore we're looking at (quick sketch below, after this list).  We're also interested in the VNX flash cache, possibly paired with new 10 GbE connectivity to the VMware hosts.

3.  The current VM hosts are at 80% capacity.  We would like to continue to grow, and while adding VMware hosts would be the next step, we also want to make sure the backend storage is ready for the added connections.  This also has me thinking of upgrading the storage connectivity to 10 GbE rather than 1 Gb.  We realize that VMware's NFS implementation can't aggregate bandwidth across LACP links unless we do some creative multi-subnet, multi-interface, multi-VLAN configuration.  Going to a straight 10 GbE interface across 2 standalone switches (like the Nexus line) for failover might be a better solution with less complexity.
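The quick sketch mentioned in #2 – just how I've been summarizing the datastore latency numbers we pull off the hosts.  The sample values below are made up, only to show how an average under 16 ms can coexist with 300 ms spikes:

# hypothetical datastore latency samples (ms); mostly quiet with a few spikes
samples_ms = [11, 13, 9, 12, 15, 10, 14, 12] * 25 + [280, 300, 240]

avg = sum(samples_ms) / len(samples_ms)
spikes = [s for s in samples_ms if s > 100]
print(f"{len(samples_ms)} samples: avg {avg:.1f} ms, max {max(samples_ms)} ms, "
      f"{len(spikes)} samples over 100 ms")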

THANKS!

8.6K Posts

September 1st, 2013 09:00

Yes, you can use Replicator between an NX4 and a VNX.

You can avoid a full refresh if you use common base checkpoints – available since 6.0.41.3.

That is a feature where you can create a user checkpoint on each of the three systems, synchronize it, and then restart the replication without a full transfer.

See the Replicator manual for the latest VNX release and search for “common base” and “cascading replication”

Yes, 10 Gbit interfaces are available.

Rainer

1 Rookie • 358 Posts

September 3rd, 2013 07:00

Thanks for that.

Looks like it's our choice.  We are presenting NFS storage to VMware, so I'm looking at these options:

Two-Port 10 GbE Opt IP Module – IP module with two 10 Gb/s Ethernet ports and the choice of SFP+ optical connection or active twinax copper connection to the Ethernet switch.

Two-Port 10GBase-T IP Module – IP module with two 10GBase-T Ethernet ports with copper connection to the Ethernet switch.

I can find all kinds of SFP+ optical modules for Cisco, so I'll just match what's on either end, I guess.  I'll reach out to our rep for configuration details when we spec it out.

Thanks for the link to the spec sheet.

1 Rookie • 20.4K Posts

September 3rd, 2013 07:00

Sure, we have Nexus 5K/2K, so we went with active twinax... it was cheaper than optical.

8.6K Posts

September 3rd, 2013 07:00

It is pretty simple – the only optical 10 Gbit we support is the short-wave multi-mode one.

That is the most popular optical option for hosts anyway.

1 Rookie • 358 Posts

September 3rd, 2013 07:00

Ah, OK, I see Cisco part SFP-H10GB-CU1M (10 Gb SFP+ twinax, 3.3 ft) is compatible with the expansion module I would put into a 3560X.

Though to get active twinax, it seems I have to go with a 7 m or longer twinax cable.  Since it's all in the same rack, I really don't need a cable that long.


1 Rookie • 358 Posts

September 3rd, 2013 07:00

What are the 10 Gbit interfaces?  Are they LC fiber connectors?  Copper?

I'm trying to piece together the proper SFP+ modules for Cisco switch gear that this will connect up to.

My backup plan, if we don't have the budget for Cisco Nexus, is putting C3KX-NM-10G modules into two existing Cisco switches.  They have 2 SFP ports and 2 10G SFP+ ports.  So figuring out which modules I need to connect to the VNX is currently where I'm stuck.  If it's fiber, what kind – multi-mode with LC connectors?

I come from the NX4, so I take it each Data Mover would have a 10G connection in case it has to fail over, correct?  The VNX is architected the same way, right (2 Data Movers with connections to your network)?

1 Rookie • 20.4K Posts

September 3rd, 2013 14:00

Passive ones are still not supported (as far as I know), but you can buy active ones from Cisco or other vendors (probably cheaper than what EMC resells them for).

251 Posts

September 3rd, 2013 14:00

See support.emc.com/kb/167016

In the KB I have linked the E-Lab docs and the pages that outline which twinax SFPs are supported.

You are also correct that EMC does not support passive twinax cables.

Cheers

Gearoid   

1 Rookie • 20.4K Posts

September 3rd, 2013 14:00

I don't remember seeing that requirement anywhere; mine were from Cisco and work just fine.

1 Rookie • 358 Posts

September 3rd, 2013 14:00

I ran into an old discussion that said only EMC-branded active twinax was supported.  Is this still the case on the VNX?  Even if you don't go over the 5 m length limitation?

1 Rookie • 358 Posts

September 3rd, 2013 14:00

Read that passive ones aren't supported.

https://community.emc.com/message/642136

Rupal Rajwar wrote:

TwinAx passive cables are not yet supported on the VNX models.  It's only the active ones.

Have a broader look: http://www.emc.com/collateral/hardware/white-papers/h8217-introduction-vnx-wp.pdf

Thanks

Rupal

However, I guess things have changed since Jun 27, 2012.
