Article Number: 532319


VxRack FLEX nodes experience network issues including ESXi host disconnects, SDS disconnects, and latency

Primary Product: VxRack Flex-PowerEdge 14G

Product: VxRack Flex-PowerEdge 13G

Last Published: 12 Mar 2020

Article Type: Break Fix

Published Status: Online

Version: 3


Article Content

Issue


VxRack FLEX nodes experience network issues, with symptoms including:

ESXi host disconnects from vCenter
SDS disconnects from VxFlex OS
Error counts incrementing on Nexus TOR switch ports
Nexus TOR switch interfaces flapping at random intervals
Increased storage latency resulting in performance degradation

Cause

The legacy ixgbe driver is active for the 10G NICs in ESXi instead of the native ixgben driver.
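
A quick way to see which of the two modules is currently enabled and loaded (assuming SSH or shell access to the ESXi host) is to filter the module list:
esxcli system module list | grep ixgb
The output shows the Is Loaded and Is Enabled state for both ixgbe and ixgben; on an affected host, ixgbe is the enabled module.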
Resolution

After the vSphere upgrade from 6.0 to 6.5 is complete, follow the procedure below to load the correct ixgben driver:

1. In the VxFlex OS GUI, select Backend > Storage Pool, then right-click the SDS and select Enter Maintenance Mode.
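
Alternatively, the SDS can be placed in maintenance mode from the MDM with the VxFlex OS CLI (a sketch, assuming a logged-in scli session; substitute the SDS name used in your system, and use --exit_maintenance_mode at step 10):
scli --enter_maintenance_mode --sds_name <sds_name>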


2. Shut down the Storage VM (SVM).
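
The SVM can be shut down from vCenter, or from the ESXi shell with vim-cmd (a guest shutdown requires VMware Tools in the SVM; look up the VM ID first):
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.shutdown <vmid>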


3. Put the FLEX node in ESXi maintenance mode.
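
If working from the ESXi shell rather than the vSphere Client, maintenance mode can be entered from the command line (and exited at step 8 with --enable false):
esxcli system maintenanceMode set --enable true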

4. Disable the legacy ixgbe driver module:
esxcli system module set --enabled=false --module=ixgbe


5. Enable the native ixgben driver module:
esxcli system module set --enabled=true --module=ixgben


6. Reboot the FLEX node (ESXi host).
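
From the shell, the reboot can be issued as follows (the host must already be in maintenance mode; the --reason text is free-form):
esxcli system shutdown reboot --reason "Switch 10G NIC driver to ixgben"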


7. Validate that the ixgben driver is loaded for the 10G NICs and that the i40en driver is loaded on the Controller nodes:
esxcfg-nics -l

Example output:
Name    PCI           Driver  Link  Speed      Duplex  MAC Address        MTU   Description
vmnic0  0000:01:00.0  ixgben  Up    10000Mbps  Full    24:6e:96:18:06:90  9000  Intel(R) 82599 10 Gigabit Dual Port Network Connection
vmnic1  0000:01:00.1  ixgben  Up    10000Mbps  Full    24:6e:96:18:06:92  1500  Intel(R) 82599 10 Gigabit Dual Port Network Connection
vmnic2  0000:06:00.0  igbn    Down  0Mbps      Half    24:6e:96:18:06:94  1500  Intel Corporation Gigabit 4P X520/I350 rNDC
vmnic3  0000:06:00.1  igbn    Down  0Mbps      Half    24:6e:96:18:06:95  1500  Intel Corporation Gigabit 4P X520/I350 rNDC
vmnic4  0000:82:00.0  ixgben  Up    10000Mbps  Full    a0:36:9f:ab:35:d8  9000  Intel(R) Ethernet 10G 2P X520 Adapter
vmnic5  0000:82:00.1  ixgben  Up    10000Mbps  Full    a0:36:9f:ab:35:da  1500  Intel(R) Ethernet 10G 2P X520 Adapter
vusb0   Pseudo        cdce    Up    100Mbps    Full    18:66:da:54:d2:80  1500  Dell(TM) iDRAC Virtual NIC USB Device
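
For per-uplink driver and firmware details beyond what esxcfg-nics -l shows, an individual NIC can be queried (vmnic0 is only an example; substitute each 10G uplink):
esxcli network nic get -n vmnic0
The Driver Info section of the output lists the driver name, driver version, and firmware version.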


8. Take the FLEX node out of ESXi maintenance mode.


9. Power on the SVM.


10. Take the SVM out of VxFlex OS maintenance mode.


11. Verify there are no rebuild/rebalance activities before rebooting the next node.
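
One way to check for rebuild/rebalance activity is from the primary MDM with the VxFlex OS CLI (assuming SSH access to the MDM and an authenticated scli session):
scli --query_all
Review the rebuild and rebalance fields in the output and proceed to the next node only when both show no pending or running activity.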

Notes

If the ESXi host is a 13G node running ESXi 6.5, it must be running the ixgben driver. (14G nodes use Mellanox.)
If the ESXi host is a 13G node running ESXi 6.0 at RCM 3.2.5 or later, it must also be running the ixgben driver. (14G nodes use Mellanox.)
If the ESXi host is a Controller node at ESXi 6.5, or at 6.0 on RCM 3.2.5 or later, it must be running the i40en driver on both 13G and 14G.


Article Properties

First Published: Mon Apr 15 2019 20:22:38 GMT
