
February 11th, 2013 06:00

Redundancy for Celerra connection to network

Currently our Celerra is connected to one Cisco 3750G 24-port switch.  Our three VMware servers are also connected to this switch.  There are 3 VLANs:

1 LAN

10 SAN

20 DMZ

This switch is trunked to the core.

Now we've added a second switch.  Are there any best practices on how to make this redundant, so we stay up if one switch goes down?

On both sides at the back of the Celerra (one set per data mover) there's:

cge0, cge1, cge2, cge3        -        cge0, cge1, cge2, cge3

Any links to documentation would be helpful.

Storage traffic is on its own VLAN.  The VMware ESX hosts have interfaces on the storage VLAN to talk to the Celerra via NFS.

1K Posts

February 11th, 2013 07:00

If you really want to implement FSN to protect yourself against a switch failure, you'll have to delete cifs_trk and iscsi_trk and re-create them.

Since you only have four ports, you'll have to delete cifs_trk, connect cge3 to the second switch and then create an FSN using cge1 and cge3. When you create the FSN you'll specify which port (cge1 or cge3) the traffic should go through.

You have to do the same thing for iscsi_trk: connect cge2 to switch 2 and create an FSN.
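Roughly, from the Control Station it would look something like the sketch below (check the exact syntax against the Celerra/VNX network high-availability manual for your DART release; the interface name and addressing here are placeholders):

# note the IP/mask currently on the CIFS interface, then remove it and the trunk
server_ifconfig server_2 -delete cifs_if1            # placeholder interface name
server_sysconfig server_2 -virtual -delete cifs_trk

# re-cable cge3 to switch 2 (and take its switch-1 port out of the old port-channel),
# then build the FSN from the two now-standalone ports
server_sysconfig server_2 -virtual -name cifs_fsn -create fsn -option "primary=cge1 device=cge1,cge3"

# put the original IP back, now on the FSN device (placeholder addressing)
server_ifconfig server_2 -create -Device cifs_fsn -name cifs_if1 -protocol IP 192.168.1.50 255.255.255.0 192.168.1.255

# repeat the same pattern for iscsi_trk with cge0 and cge2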

Yes, downtime is required since you will be deleting the trunks and reconfiguring the environment.

What is your other option? If a switch fails you will have to manually fail over the data mover. This is a manual step, but it doesn't require reconfiguration and you can leave everything as is.
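For completeness, the manual failover is just a couple of commands from the Control Station, assuming the second blade is configured as a standby for server_2:

server_standby server_2 -activate mover    # the standby blade takes over server_2's identity
server_standby server_2 -restore mover     # fail back once the failed switch is repaired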

1 Rookie • 358 Posts

February 11th, 2013 07:00

Nothing is on switch 2 yet; that's what I want to implement.

The only thing switch 2 has is 2 links trunked to the core switch stack.  It works great and has full connectivity to the core network.  I tested with a laptop on various VLANs on switch 2 against different VLANs on the core switch.

I have some spare network interfaces on each ESX host and I can figure that out myself.  It's the Celerra where I want to make sure I don't make a mistake.

Switch 1 (Cisco 3750G): everything (Celerra, 3 x ESX hosts); ports 23/24 trunked to core.

Switch 2 (Cisco 3560X): nothing yet; ports 23/24 trunked to core.

1 Rookie • 358 Posts

February 11th, 2013 07:00

I should probably identify each defined device on the Celerra:

Device Name: cifs_trk

data mover: server_2

Type: lacp

Devices: cge1, cge3

Owned by virtual: none

Interfaces: (2 different LAN IPs)

1000FD

Device Name: iscsi_trk

data mover: server_2

Type: lacp

Devices: cge0, cge2

Owned by virtual: none

Interfaces: SAN IP

1000FD
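(That information comes from the Control Station; roughly these commands, in case it helps anyone following along:)

server_sysconfig server_2 -virtual                   # list the virtual devices (trunks/FSNs)
server_sysconfig server_2 -virtual -info cifs_trk    # details for one device
server_ifconfig server_2 -all                        # the IP interfaces riding on them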

1 Rookie • 358 Posts

February 11th, 2013 11:00

dynamox wrote:

yes, you can create VLAN tagged interfaces

Oh, that is really nice!  I do see now that when you create an interface there is a field for the VLAN ID.

So theoretically I could create one trunk device under network devices, assigned to 3 physical NICs (cge0, 1, 2), on both sides (server_2 & server_3).  Then for SAN (iSCSI/NFS) I could create an interface attached to the 3-NIC trunk with a VLAN ID of 10, for CIFS I could create an interface with another VLAN ID, and for replication traffic I could create another interface with yet another VLAN ID... all under one trunk, right?

Then for cge3 on both sides, would that be considered another device?

That would still keep the traffic separate, since they are on different VLANs and have their own IP/subnet.  But three 1000BASE-T interfaces are a bigger pipe than two, so one VLAN taking bandwidth away from another shouldn't be as bad.

Am I thinking this through correctly?
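Just to make sure I'm picturing it right, something like the sketch below on server_2 (and repeated on server_3); the device/interface names, IPs and the replication VLAN ID are made up:

# one LACP trunk over three ports (the switch side needs a matching LACP port-channel)
server_sysconfig server_2 -virtual -name trk_all -create trk -option "device=cge0,cge1,cge2 protocol=lacp"

# then one tagged interface per VLAN, all riding on the same trunk
server_ifconfig server_2 -create -Device trk_all -name san10 -protocol IP 192.168.10.5 255.255.255.0 192.168.10.255
server_ifconfig server_2 san10 vlan=10     # iSCSI/NFS

server_ifconfig server_2 -create -Device trk_all -name cifs1 -protocol IP 192.168.1.5 255.255.255.0 192.168.1.255
server_ifconfig server_2 cifs1 vlan=1      # LAN/CIFS

server_ifconfig server_2 -create -Device trk_all -name repl30 -protocol IP 192.168.30.5 255.255.255.0 192.168.30.255
server_ifconfig server_2 repl30 vlan=30    # replication (made-up VLAN ID)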

1 Rookie • 358 Posts

February 11th, 2013 12:00

Perfect!  Sounds straightforward, thanks for double checking me and teaching me something new today.

You guys have both been really helpful!

15 Posts

August 3rd, 2016 19:00

What about a configuration using the 10Gb ports (fxg0 and fxg1)?

I have an NS-120 (dual blades) that I want to connect to 2 VMware ESXi hosts (two NICs for storage) via 2 Nexus 5010 switches.

I have two VLANs for storage: Fab1 = VLAN 10, Fab2 = VLAN 20.

How should I connect everything to maintain connectivity if there is a failure of a switch, data mover or host connection?

8.6K Posts

August 4th, 2016 02:00

Your choices are LACP and FSN.

See the Celerra/VNX High Availability Networking manual on support.emc.com for a description.

If your switches can do cross-switch LACP, that would be my preference.
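As a rough illustration only (the port names are the ones from your post, the addresses are placeholders): if the two Nexus 5010s are paired so they present a single cross-switch port-channel (vPC), the Celerra side is just a two-port LACP trunk with one tagged interface per storage VLAN:

# LACP trunk over the two 10Gb ports, one link to each Nexus
server_sysconfig server_2 -virtual -name trk10g -create trk -option "device=fxg0,fxg1 protocol=lacp"

# one tagged interface per storage VLAN
server_ifconfig server_2 -create -Device trk10g -name fab1 -protocol IP 192.168.10.10 255.255.255.0 192.168.10.255
server_ifconfig server_2 fab1 vlan=10

server_ifconfig server_2 -create -Device trk10g -name fab2 -protocol IP 192.168.20.10 255.255.255.0 192.168.20.255
server_ifconfig server_2 fab2 vlan=20

If the switches can't be paired like that, the fallback is an FSN over the two fxg ports (one to each switch), same idea as earlier in the thread.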
