
May 17th, 2017 16:00

FX2 enclosure - FN2210S switches - IPv6 network between nodes

Hey,

I have 3 nodes in an FX2 chassis: 2 FC430s and 1 FC630.
The chassis is equipped with 2 FN2210S switches that then connect to my core network.

I'm looking to move my internal cluster traffic to an IPv6 network that doesn't get routed on the internal network.

I set up link-local IPv6 addresses on the teamed cluster NIC on all 3 nodes.
The 2 FC430s communicate just fine with each other, but the FC630 can't reach either of them, nor can they reach it.

This makes me assume the 2 FC430s share a network connection and pass traffic between them directly, while the FC630 has to push it to the switches, which then send it back.
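
(For anyone testing the same thing: link-local fe80:: addresses are only valid per interface, so any manual check needs the interface/zone index. A minimal Python sketch of such a check, with a placeholder peer address and interface name, and assuming a matching listener on the peer node:)

```python
import socket

# Placeholder values - substitute the peer node's fe80:: address and the
# name of the local cluster team interface as the OS reports it.
PEER_LINK_LOCAL = "fe80::aaaa:bbbb:cccc:dddd"
SCOPE_ID = socket.if_nametoindex("Cluster-Team")

# For AF_INET6 the address tuple is (host, port, flowinfo, scope_id); the
# scope_id pins the link-local address to a single interface.
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(b"hello", (PEER_LINK_LOCAL, 50000, 0, SCOPE_ID))
```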

Any ideas on how I can get this to work?

Thanks!

Dom

6 Posts

May 17th, 2017 18:00

Really?
I thought each node had 2 physical NICs, one linked to each of the 2 switches in the chassis?

Yes, traffic flows freely through the switches towards my network and back.

IPv6 between nodes 1 and 2 works perfectly,
but not between 1/2 and 5.

Thanks!
Dom

Moderator • 8.5K Posts

May 17th, 2017 18:00

Which slots are you using in your chassis? They are probably mapped to separate ports.

6 Posts

May 17th, 2017 18:00

Hey Buddy,

I have the 2 FC430s in slots 1a and 1b (so positions 1 and 2).
The FC630 is in slot 3a (so positions 5/6).

The FN2210Ss are in standalone mode, though.

Thanks!
Dom

Moderator • 8.5K Posts

May 17th, 2017 18:00

So positions 1 and 2 go to the top I/O module and positions 5 and 6 go to the bottom I/O module, so traffic is going to have to go through the switches.

Moderator • 8.5K Posts

May 18th, 2017 09:00

They do, but each has a primary switch that it wants to connect to. Which NIC ports are being used on the FC630? Is it a dual-port NIC or a quad-port? The FC630 has either one port on each I/O module or 2 ports on each, and it would depend on which ports you are using.

6 Posts

May 18th, 2017 09:00

I'm thinking I'll have to set up the switches in full-switch mode to provide a gateway and offer L3 switching.

Then I can point my servers at those gateways and have traffic flow between the servers without it reaching my network.
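
(If I do go L3, the link-local addresses won't be enough on their own; one option is a private unique-local prefix, RFC 4193 style, routed only inside the chassis. A quick sketch of carving one with Python's ipaddress module, with a randomly generated example prefix, not anything from this setup:)

```python
import ipaddress
import os

# Build an RFC 4193 unique-local /48 (fd00::/8 plus a random 40-bit
# global ID), then take the first /64 for the cluster network.
global_id = os.urandom(5).hex()  # 40 random bits as hex
prefix_48 = ipaddress.IPv6Network(
    f"fd{global_id[:2]}:{global_id[2:6]}:{global_id[6:]}::/48")
cluster_net = next(prefix_48.subnets(new_prefix=64))

print(f"Site prefix:     {prefix_48}")
print(f"Cluster network: {cluster_net}")
```

The gateway address for the servers would then be picked out of that /64 and configured on the switches' L3 interface.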

6 Posts

May 18th, 2017 09:00

Hey,

It's a 2-NIC server, but each NIC is running NPAR, so I have 8 NICs in total.
Integrated NIC 1: BRCM 10GbE 2P 57810s bNDC

From there, I created 3 NIC teams (each pairing one partition from each NIC), and the remaining 2 partitions are for FCoE (a quick check of where the IPv6 addresses ended up is sketched below the list):
IPV4 - MANAGEMENT OS - NIC1 and NIC2 (PCI0000 and PCI0001)

NO IP - HYPERV - NIC 3 and NIC 4 (PCI0002 and PCI0003)

IPV6 - CLUSTER - NIC5 and NIC6 (PCI0004 and PCI0005)

LOCAL IP - FCOE - NIC7 and NIC8 - not teamed
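
A quick way to confirm which team the fe80:: cluster addresses actually landed on (sketched with the third-party psutil package; the interface names are whatever the OS reports, nothing specific to this setup):

```python
import socket

import psutil  # third-party: pip install psutil

# Print every interface that has at least one IPv6 address, so you can
# check that the link-local cluster addresses sit on the intended team.
for name, addrs in psutil.net_if_addrs().items():
    v6 = [a.address for a in addrs if a.family == socket.AF_INET6]
    if v6:
        print(name, v6)
```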

Thanks!

Dom

6 Posts

May 18th, 2017 11:00

Okay.

For now I have it set to use the management subnet as a failover cluster network for live migrations.
It takes a few seconds longer to initiate the live migration, and most likely other heartbeat and cluster functions run slower.

I do think L3 will be a better option in the long run, since that will allow me to keep all server traffic within the chassis and away from the network.

But completely overhauling the server networking is a big weekend job, so that will have to get planned in :)

Thanks for the help, and I'll give feedback once I'm done
(or someone will read this post in years and I won't have responded... in which case: sorry, dude!)

Dom

Moderator • 8.5K Posts

May 18th, 2017 11:00

I have not been able to find how it splits up the traffic with NPAR, or even whether it stays the same between reboots. If it does, you may be able to change the order to make it work, but using L3 on the switch is better.
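
(One low-effort way to see whether the NPAR partition ordering stays the same between reboots is to snapshot each interface's name and MAC before and after a reboot and diff the output; a sketch, again using the third-party psutil package:)

```python
import json

import psutil  # third-party: pip install psutil

# Dump interface name -> MAC addresses; run once before and once after a
# reboot, then diff the two files to see whether the ordering changed.
snapshot = {
    name: [a.address for a in addrs if a.family == psutil.AF_LINK]
    for name, addrs in psutil.net_if_addrs().items()
}

with open("nic-order.json", "w") as fh:
    json.dump(snapshot, fh, indent=2, sort_keys=True)
```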
