Article number: 532319

VxRack FLEX nodes experience network issues including ESXi host disconnects, SDS disconnects, and latency

Primary product: VxRack Flex-PowerEdge 14G

Product: VxRack Flex-PowerEdge 13G more…

Last published: 12 Mar 2020

Article type: Break Fix

Publication status: Online

Version: 3


Article content

Problem


VxRack FLEX nodes experience network issues including ESXi host disconnects, SDS disconnects, and latency
ESXi host disconnects from vCenter
SDS disconnects from VxFlex OS
Error counts increment on Nexus TOR switchports
Nexus TOR switch interface flapping at random intervals
Increased storage latency resulting in performance degradation


 
Cause

The incorrect driver is active for the 10G NICs in ESXi.

Resolution

After the vSphere upgrade from 6.0 to 6.5 is complete, follow the procedure below to load the correct ixgben driver:
1. In the VxFlex OS GUI management software, select Backend > Storage Pool, right-click the SDS, and select Enter Maintenance Mode.


2. Shut down the Storage VM (SVM).


3. Put the FLEX node in ESXi maintenance mode.

4. Disable the ixgbe module:
esxcli system module set --enabled=false --module=ixgbe


5. Enable the ixgben module:
esxcli system module set --enabled=true --module=ixgben


6. Reboot the FLEX node (ESXi host).


7. Validate that the ixgben driver is loaded for the 10G NICs, and that the i40en driver is loaded on the Controller nodes:
esxcfg-nics -l
Example output:
esxcfg-nics -l
Name    PCI           Driver  Link  Speed      Duplex  MAC Address        MTU   Description
vmnic0  0000:01:00.0  ixgben  Up    10000Mbps  Full    24:6e:96:18:06:90  9000  Intel(R) 82599 10 Gigabit Dual Port Network Connection
vmnic1  0000:01:00.1  ixgben  Up    10000Mbps  Full    24:6e:96:18:06:92  1500  Intel(R) 82599 10 Gigabit Dual Port Network Connection
vmnic2  0000:06:00.0  igbn    Down  0Mbps      Half    24:6e:96:18:06:94  1500  Intel Corporation Gigabit 4P X520/I350 rNDC
vmnic3  0000:06:00.1  igbn    Down  0Mbps      Half    24:6e:96:18:06:95  1500  Intel Corporation Gigabit 4P X520/I350 rNDC
vmnic4  0000:82:00.0  ixgben  Up    10000Mbps  Full    a0:36:9f:ab:35:d8  9000  Intel(R) Ethernet 10G 2P X520 Adapter
vmnic5  0000:82:00.1  ixgben  Up    10000Mbps  Full    a0:36:9f:ab:35:da  1500  Intel(R) Ethernet 10G 2P X520 Adapter
vusb0   Pseudo        cdce    Up    100Mbps    Full    18:66:da:54:d2:80  1500  Dell(TM) iDRAC Virtual NIC USB Device
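The validation in step 7 can also be scripted. A minimal sketch, assuming the standard `esxcfg-nics -l` column order (name, PCI, driver, link, speed, duplex, …); the sample data here is a trimmed copy of the example output above, not live host output:

```shell
# Flag any 10G NIC still bound to a driver other than ixgben.
# Sample data trimmed from the example output above (assumption: standard
# column order of esxcfg-nics -l).
nics='vmnic0 0000:01:00.0 ixgben Up 10000Mbps Full
vmnic1 0000:01:00.1 ixgben Up 10000Mbps Full
vmnic4 0000:82:00.0 ixgben Up 10000Mbps Full
vmnic5 0000:82:00.1 ixgben Up 10000Mbps Full'

# Column 5 is the link speed, column 3 the driver.
bad=$(printf '%s\n' "$nics" | awk '$5 == "10000Mbps" && $3 != "ixgben" {print $1}')
if [ -z "$bad" ]; then
  echo "All 10G NICs are on ixgben"
else
  echo "Still on legacy driver: $bad"
fi
```

On a live host, the same filter could be fed from `esxcfg-nics -l` directly instead of the saved sample.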


8. Take the FLEX node out of ESXi maintenance mode.


9. Power on the SVM.


10. Take the SVM out of VxFlex OS maintenance mode.


11. Verify there are no rebuild/rebalance activities before rebooting the next node.

Notes

If the ESXi host is a 13G node running ESXi 6.5, it must use the ixgben driver (14G nodes use Mellanox NICs).
If the ESXi version is 6.0 but the node is at RCM 3.2.5 or later, a 13G node must also use the ixgben driver (14G nodes use Mellanox NICs).
If the ESXi host is a Controller node on 6.5, or on 6.0 at RCM 3.2.5 or later, it must use the i40en driver, for both 13G and 14G.
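The driver matrix in the notes above can be sketched as a small helper. This is illustrative only: the generation and role labels are assumptions you must determine for your own cluster, and the ESXi-version/RCM precondition from the notes is taken as already satisfied:

```shell
# Sketch of the expected-driver matrix from the notes above.
# Assumes ESXi 6.5, or 6.0 at RCM 3.2.5+ (the precondition in the notes).
expected_driver() {
  gen="$1"    # "13G" or "14G" (assumption: caller knows the node generation)
  role="$2"   # "compute" or "controller"
  if [ "$role" = "controller" ]; then
    echo "i40en"       # Controller nodes use i40en, both 13G and 14G
  elif [ "$gen" = "14G" ]; then
    echo "mellanox"    # 14G compute nodes use Mellanox NICs
  else
    echo "ixgben"      # 13G compute nodes use ixgben
  fi
}

expected_driver 13G compute      # prints: ixgben
expected_driver 14G controller   # prints: i40en
```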



Article properties

First published

Mon, 15 Apr 2019 20:22:38 GMT
