We have an NX4 running Unisphere with a device named iscsi_trk. We used to use this for iSCSI to VMware but have since changed to NFS - we just never renamed the device.
It has an IP address of 10.10.10.10, and even though the interfaces list in the Celerra UI shows a VLAN ID of 0, the switchports are configured with "switchport access vlan 10" and "channel-group 4 mode active" (Cisco 3750G).
I was reading an article on load based teaming in VMware here: http://wahlnetwork.com/2012/04/30/nfs-on-vsphere-technical-deep-dive-on-load-based-teaming/
Long story short, the only way it would work is if we had multiple VLANs/IPs and multiple NFS mount points defined in VMware.
So basically we would have to create more IPs/VLANs like so:
10.10.10.10 VLAN10 (Existing)
10.10.11.10 VLAN11 (NEW)
10.10.12.10 VLAN12 (NEW)
10.10.13.10 VLAN13 (NEW)
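On the Celerra side, each new IP would get its own interface with a tagged VLAN. A rough sketch using the Control Station CLI, assuming server_2 as the Data Mover and cge0 as the device - the device and interface names here are placeholders, substitute your own:

```
# Create one interface per new subnet on the Data Mover
server_ifconfig server_2 -create -Device cge0 -name nfs_v11 -protocol IP 10.10.11.10 255.255.255.0 10.10.11.255
server_ifconfig server_2 -create -Device cge0 -name nfs_v12 -protocol IP 10.10.12.10 255.255.255.0 10.10.12.255
server_ifconfig server_2 -create -Device cge0 -name nfs_v13 -protocol IP 10.10.13.10 255.255.255.0 10.10.13.255

# Tag each interface with its VLAN ID (the existing interface
# would also need vlan=10 once the switchports become trunks)
server_ifconfig server_2 nfs_v11 vlan=11
server_ifconfig server_2 nfs_v12 vlan=12
server_ifconfig server_2 nfs_v13 vlan=13
```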
Then basically duplicate the VLANs on the VMware side and also add root/access hosts for the proper IPs in the NFS exports on the Celerra to include the additional subnets.
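The export change might look something like this, hedged heavily - the filesystem path /vmware_nfs is hypothetical, and Celerra export options take colon-separated host lists:

```
# Re-export with root and access granted to all four subnets
server_export server_2 -Protocol nfs -option root=10.10.10.0/24:10.10.11.0/24:10.10.12.0/24:10.10.13.0/24,access=10.10.10.0/24:10.10.11.0/24:10.10.12.0/24:10.10.13.0/24 /vmware_nfs
```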
I guess on the switch we would have to convert it from an access port to a trunk port with "switchport trunk encapsulation dot1q" and "switchport trunk allowed vlan 10,11,12,13", and then actually define the VLANs in the Celerra interface config.
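On the 3750G that conversion might look roughly like this - Port-channel4 matches the existing "channel-group 4", but the member interface name is an assumption:

```
interface Port-channel4
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,11,12,13
 switchport mode trunk
!
! Repeat on each physical member of the port channel
interface GigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,11,12,13
 switchport mode trunk
 channel-group 4 mode active
```

Note the access-vlan command would be removed, since the native/tagged behavior is now handled by the trunk.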
Is this even possible, or even worth it? The author at that site (Chris Wahl) has some good lab tests showing load based teaming. It's a shame VMware requires different subnets for them all, as it really complicates the setup.
Not sure how familiar you guys are with VMware and using a Celerra NX4 (or equivalent) for shared storage/NFS.
It is actually more of a client issue - the traditional "old" TCP/IP stacks always send packets destined for a subnet via the first matching entry in the routing table.
So if you have multiple interfaces to the same destination, only that same first one will ever get used to send requests to the NFS server.
You could do some manual load balancing on a per-mount basis by putting manual host routes into the client - they take precedence over network routes.
It's just so much easier to use multiple subnets for that.
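As an illustration of that host-route trick on a Linux NFS client: suppose (hypothetically) the Celerra answered on a second IP, 10.10.10.11, in the same subnet. A /32 host route wins over the broader network route, so each server IP can be pinned to a different NIC (interface names here are assumptions; ESXi has an equivalent in esxcfg-route/esxcli):

```
# Default behavior: all of 10.10.10.0/24 leaves via the first matching route.
# These /32 host routes override that, per NFS server IP:
ip route add 10.10.10.10/32 dev eth1
ip route add 10.10.10.11/32 dev eth2
```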
From the VNX side we are fine.
By default we use reflect mode - that means we actually send the reply to an NFS request through the interface it came in on (not through the first entry in the routing table).
So if you can configure your clients so that the inbound traffic to the Celerra is balanced, the outbound traffic will be as well.
That's one of the things protocols like FCoE, which don't have to take TCP/IP into account, can implement better.
Ah, that makes more sense now. So if I add a few more VLANs, I will have to specify the VLAN ID on each interface on the trunk, correct?
See, right now they are not specified... but they are on the switch side. If I convert the switch ports and port channel to a trunk with 802.1Q and then specify each VLAN ID on the Celerra, there should be no issue with a client reaching the NFS export from the VLAN it is talking on?