November 5th, 2013 11:00

VNX refuses mount if IPv6 enabled on ESXi

All export permissions on the VNX are correct for the hosts being denied, yet if IPv6 is enabled on the ESXi host, the mount is always refused with a permission error. Simply disabling IPv6 on the host bypasses the problem and the VNX allows the mount to succeed. Has anyone seen this? I see it with ESXi 5.1 and 5.5, and the failure occurs whether mounting through SRM with the VNX SRA or manually via the vSphere Client.
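For anyone else hitting this, the host-side workaround can be scripted. A sketch, assuming ESXi 5.1 or later where esxcli exposes a host-wide IPv6 toggle (verify the flag name on your build; a reboot is required for it to take effect):

```shell
# Check whether IPv6 is currently enabled on the host
esxcli network ip get

# Disable IPv6 host-wide (flag assumed present on ESXi 5.1+;
# on earlier releases use the vSphere Client networking properties instead)
esxcli network ip set --ipv6-enabled=false

# The change only takes effect after a host reboot
reboot
```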


November 5th, 2013 17:00

What version of FLARE are you running?
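If you're not sure, you can read the running revision off the storage processor with Unisphere CLI. A sketch, assuming naviseccli is installed on a management host; the SP address and credentials below are placeholders:

```shell
# Query the SP agent; the "Revision:" line of the output carries the
# running VNX OE (FLARE) version, e.g. "Revision:  05.31.000.5.720"
naviseccli -h <SP_IP> -User <user> -Password <password> -Scope 0 getagent | grep Revision
```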

FYI... There is an issue in which VNX systems may experience a storage processor (SP) reboot when IPv6 network devices are present in the management or iSCSI LAN. The cause is that the network driver used on the array does not properly handle padding applied by an external IPv6 network device in the extended header segment of an IPv6 Neighbor Discovery Protocol (NDP) packet.

Affected Flare Revisions:

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.006 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.007 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.008 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.011 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.012 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.502 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.509 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.704 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.709 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.716 

  EMC SW: VNX Operating Environment (OE) 05.31.000.5.720 

 

Workaround: Remove or isolate the IPv6 network device from the network that serves the iSCSI and management ports of the array.

Fix:  Upgrade to VNX OE 05.31.000.5.726 or later.

Disabling IPv6 on the array does not prevent the issue, since that does not stop the array from receiving an NDP packet from the network.
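The affected-revision list above can be checked mechanically against whatever getagent reports. A minimal sketch; the version list is transcribed from this post, and the helper names are mine:

```python
# Check whether a VNX OE (FLARE) revision is affected by the
# IPv6 NDP padding issue described above.
AFFECTED = {
    "05.31.000.5.006", "05.31.000.5.007", "05.31.000.5.008",
    "05.31.000.5.011", "05.31.000.5.012", "05.31.000.5.502",
    "05.31.000.5.509", "05.31.000.5.704", "05.31.000.5.709",
    "05.31.000.5.716", "05.31.000.5.720",
}
FIXED = (5, 31, 0, 5, 726)  # 05.31.000.5.726 or later carries the fix

def parse(rev: str) -> tuple:
    """Split a dotted OE revision like '05.31.000.5.726' into integers."""
    return tuple(int(part) for part in rev.split("."))

def needs_upgrade(rev: str) -> bool:
    """True if the revision is in the affected list or below the fixed build."""
    return rev in AFFECTED or parse(rev) < FIXED

print(needs_upgrade("05.31.000.5.720"))  # True  (listed as affected)
print(needs_upgrade("05.31.000.5.726"))  # False (carries the fix)
```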
