January 19th, 2017 10:00

Dell N4032F Port-Channel / VLAN Routing Issue

Hello,

I am configuring my first Dell Switch Stack (having come from a Juniper background) and I seem to be having an issue with some VLAN routing on my stack.

My setup is as follows:

Four N4032F switches in a stack.

Four FX2S chassis, each containing multiple blades with ESXi installed.

8 x 10Gbps links per FX2S.

2 x 10Gbps links to each switch in the stack per FX2S.

8 ports per FX2S in an LACP port group, trunked with VLAN 100 allowed.

Each ESXi host has a management IP configured on VLAN 100 (VLAN ID specified on the host side), and the default gateway is configured on the stack.

Each FX2S has its own port channel/LAG group (a rough config sketch is below).

Chassis 1 contains ESXi 1 & 2:

ESXi 1 - 192.168.1.1

ESXi 2 - 192.168.1.2

Chassis 2 contains ESXi 3 & 4:

ESXi 3 - 192.168.1.3

ESXi 4 - 192.168.1.4
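For completeness, the per-chassis switch-side config looks roughly like this. It's a sketch rather than a paste: the port-channel and interface numbers and the SVI address are illustrative, and the OS6 syntax is from memory.

! SVI acting as the hosts' default gateway (address illustrative)
interface vlan 100
ip address 192.168.1.254 255.255.255.0
exit
ip routing
! Member ports on one switch for one FX2S (range illustrative)
interface range Te1/0/1-2
channel-group 1 mode active
exit
! The LAG itself, trunked with VLAN 100 allowed
interface port-channel 1
switchport mode trunk
switchport trunk allowed vlan 100
exit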

From the switch CLI, I can ping ESXi 1 (192.168.1.1) and ESXi 2 (192.168.1.2), but no other ESXi hosts.

If I change the IPs on the working ESXi hosts, they still respond to ping.

The port channels, interfaces, and ESXi hosts are all configured the same (bar the VPC ID, etc.).
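For reference, checks along these lines from the stack should show whether it is actually learning the silent hosts' MACs on the LAG (N-series OS6 syntax as I understand it; exact forms may vary by firmware):

show interfaces port-channel 1
show mac address-table vlan 100
show arp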

No idea what is happening, but it is flummoxing me!

Any ideas what could cause this?!

Many Thanks in advance.


January 19th, 2017 12:00

You mentioned VPC; are these switches set up in an MLAG configuration? Can you share the switch config with us? We can look through it for any suggested changes.

January 19th, 2017 13:00

I can likely extract that info tomorrow morning. The configuration is LACP with switchport mode trunk and allowed VLAN 100 on channel-groups 1-4 accordingly. I'm wondering if it is host-side, though all links are up.


January 20th, 2017 09:00

It very well could be something on the host side. We can help rule out the switch config by looking through it. Are the switches stacked, or set up as an MLAG? Knowing more about the topology would help; a brief topology diagram would go a long way.

Have you tried testing connectivity between the servers? What about swapping the physical connections from a working server to a non-working server?
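For the server-to-server test, vmkping from the ESXi shell forces the probe out a specific vmkernel interface (vmk0 below is the usual management vmk; substitute yours and a target host's IP):

vmkping -I vmk0 192.168.1.3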

February 13th, 2017 12:00

Firstly, awfully sorry about the delay in responding, a culmination of several issues snowballing!

So I have managed to get 7/8 hosts working now just by wiping the config and restarting. 

The configuration is with the four switches (N4032F) in a stacked setup (not MLAG). 

There are four chassis (FX2S), with two hosts in each. 

Network characteristics consist of:

VLAN 200 - The ESXi hosts all sit within this VLAN and are tagged on the ESXi side in the config.

VLAN 100 - The ESXi hosts' iDRACs sit within this VLAN and are tagged on the client side.

Each chassis has two FN410S I/O modules within it, and each module has a single 10Gbps link to each switch, giving 80Gbps of aggregate uplink from each chassis. This is configured as an LACP LAG, with all 8 ports forming a single port channel per chassis. All VLANs (1-4094) are currently allowed (this isn't staying like this; a pruning sketch is below!).

All links are up.

I can ping 8/8 iDRACs on VLAN 100 from a client outside the above scopes.

I can ping 7/8 ESXi hosts on VLAN 200 from the same client. 
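On the trunk pruning, the plan is something along these lines on each port channel (a sketch; the port-channel number and VLAN list are illustrative):

interface port-channel 1
switchport trunk allowed vlan 100,200
exit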

I did notice that on the ESXi side, only one of the NICs was specified for use by the management interface; once I set it to use both available ports, the management interfaces seemed to work.
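For anyone hitting the same thing, the uplink/failover policy can be checked and set from the ESXi shell. "vSwitch0" and the "Management Network" port group are the default names, and the vmnic names are illustrative:

esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic0,vmnic1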

This does seem to have raised a couple more questions!

1) Should the ESXi host see all four NICs per FC630 blade? I have dual 10Gbps interfaces plus a dual 10Gbps daughter card on each blade.

2) Why would the management interface work but not the ESXi data side?! They both traverse the same pipe and have a configuration identical to every other blade/chassis. Also, one of the blades within the chassis works.
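On question 1, listing the NICs from the ESXi shell should show exactly which vmnics the host has enumerated; if the daughter card's ports are missing from this output, it is a driver/hardware question rather than a switching one:

esxcli network nic list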

Thanks again!
