Article Number: 534249


VPLEX: Expanding virtual-volumes causes GPT table corruption on ESX cross-connected hosts

Summary: This KB addresses an issue where a virtual-volume is expanded from the VPLEX side and ESX hosts cannot detect the expanded volume due to GPT table corruption.

Primary Product: VPLEX Series

Product: VPLEX Series

Last Published: 12 March 2020

Article Type: Break Fix

Publish Status: Online

Version: 3


Article Content

Issue


  • After successfully expanding virtual-volumes from VPLEX, ESX hosts cannot see the expanded size. 

From firmware logs: 

128.221.253.36/cpu0/log:5988:W/"0060169d39d6214036-1":25668:<6>2018/12/06 12:17:36.79: amf/248 Capacity of disk Test_Volume has changed from 2748779069440 to 3298534883328
  • The above event indicates the virtual-volume was successfully expanded from VPLEX. 
  • After expanding the datastore, the host does not see the expanded size because the expansion corrupts the GPT table.
  • In this case, the error below is reported in the VMware kernel logs, indicating the GPT table is corrupted (the sketch after this list illustrates what the message refers to):
2018-12-06T14:06:46.732Z ERROR Exception: The backup GPT table is not at the end of the disk, as it should be. This might mean that another operating system believes the disk is smaller. Fix, by moving the backup to the end (and removing the old ba$
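For reference, GPT stores a backup copy of its header in the last addressable sector of the device, so when the device grows, the existing backup header is left short of the new end of disk; that is exactly what the message above reports. The minimal Python sketch below is for illustration only and is not part of the fix: it reuses the capacities from the firmware log above and assumes a 512-byte logical sector size.

# Minimal sketch, for illustration only: shows why the expanded device triggers
# the "backup GPT table is not at the end of the disk" message. GPT keeps a
# backup header in the LAST addressable sector, so once the device grows, the
# old backup header is no longer at the end. Capacities are taken from the
# firmware log above; the 512-byte logical sector size is an assumption.

SECTOR = 512                       # assumed logical sector size (bytes)

old_bytes = 2748779069440          # capacity before expansion
new_bytes = 3298534883328          # capacity after expansion

old_last_lba = old_bytes // SECTOR - 1   # sector holding the existing backup GPT header
new_last_lba = new_bytes // SECTOR - 1   # sector where the backup header is now expected

print(f"before: {old_bytes / 2**40:.1f} TiB, backup GPT header at LBA {old_last_lba}")
print(f"after : {new_bytes / 2**40:.1f} TiB, backup GPT header expected at LBA {new_last_lba}")
print(f"backup header sits {new_last_lba - old_last_lba} sectors before the new end of disk")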


 
Cause
The expanded virtual-volume was presented to cross-connected hosts with different LUN IDs. 

 
(Refer to the host.txt file, which can be found in the base collect-diagnostics (c-d) file under the analysis notes directory.)

****************************************
*       Cluster Name: cluster-1        *
****************************************

---------------------------------------------
     View Name:   Test_Storage_View
     View Status: ok
---------------------------------------------

LUN-4: TestVolume (VPD83T3:60001440000000xxxxxxxxxxxxxxxx)
       ==> device_TestVolume (distributed raid-1)
       cluster-1:
           ==> device_TestVolume (raid-0)
               ==> extent_TestVolume
                   ==> TestVolume
                       ==> VPD83T3:60060160xxxxxxxxxxxxxxxxxx
                           ==> EMC~CLARiiON~xxxxxxxxxxxxxxx (EMC~CLARiiON)
       cluster-2:
           ==> device_TestVolume_1 (raid-0)
               ==> extent_device_TestVolume_1
                   ==> TestVolume_1
                       ==> VPD83T3:600601604xxxxxxxxxxxxxxxxxx
                           ==> EMC~CLARiiON~xxxxxxxxxxxxxxx (EMC~CLARiiON)
                           
****************************************
*       Cluster Name: cluster-2        *
****************************************
---------------------------------------------
     View Name:   Test_Storage_View
     View Status: ok
---------------------------------------------

LUN-6: TestVolume (VPD83T3:60001440000000xxxxxxxxxxxxxxxx)
       ==> device_TestVolume (distributed raid-1)
       cluster-1:
           ==> device_TestVolume (raid-0)
               ==> extent_TestVolume
                   ==> TestVolume
                       ==> VPD83T3:60060160xxxxxxxxxxxxxxxxxx
                           ==> EMC~CLARiiON~xxxxxxxxxxxxxxx (EMC~CLARiiON)
       cluster-2:
           ==> device_TestVolume_1 (raid-0)
               ==> extent_device_TestVolume_1
                   ==> TestVolume_1
                       ==> VPD83T3:600601604xxxxxxxxxxxxxxxxxx
                           ==> EMC~CLARiiON~xxxxxxxxxxxxxxx (EMC~CLARiiON)



As shown above, the same virtual-volume was exported to the cross-connected storage-views with different LUN IDs: LUN-4 on cluster-1 and LUN-6 on cluster-2.
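A quick way to spot this condition is to compare the LUN ID assigned to each volume WWN across the cluster storage views in host.txt. The sketch below is a hypothetical helper, not a Dell-provided tool; it only assumes the "Cluster Name:" and "LUN-<n>: <name> (VPD83T3:<wwn>)" line formats shown above.

# Hedged sketch (hypothetical helper, not a Dell tool): scan a host.txt-style
# listing and flag virtual-volumes exported with different LUN IDs across
# clusters. It only assumes the "Cluster Name:" and
# "LUN-<n>: <name> (VPD83T3:<wwn>)" line formats shown above.
import re
from collections import defaultdict

cluster_re = re.compile(r"Cluster Name:\s*(\S+)")
lun_re = re.compile(r"LUN-(\d+):\s*\S+\s*\((VPD83T3:[^)]+)\)")

def lun_ids_by_volume(host_txt: str) -> dict:
    """Map each volume WWN to {cluster name: LUN ID} across all storage views."""
    mapping = defaultdict(dict)
    current_cluster = None
    for line in host_txt.splitlines():
        m = cluster_re.search(line)
        if m:
            current_cluster = m.group(1)
            continue
        m = lun_re.search(line)
        if m and current_cluster:
            mapping[m.group(2)][current_cluster] = int(m.group(1))
    return mapping

def mismatched_luns(mapping: dict) -> dict:
    """Keep only the volumes whose LUN ID differs from one cluster to another."""
    return {wwn: luns for wwn, luns in mapping.items() if len(set(luns.values())) > 1}

Run against the listing above, mismatched_luns(lun_ids_by_volume(text)) would return the TestVolume WWN mapped to LUN 4 on cluster-1 and LUN 6 on cluster-2.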
Resolution
VMware should be engaged to resolve the GPT table corruption. 

To avoid GPT table corruption and ensure the expansion is seen from the host side, change the LUN ID of the virtual-volume on VPLEX so that it is the same in both the cluster-1 and cluster-2 storage-views before expanding the datastore, as follows: 

VPlexcli:/> cd /clusters/cluster-1/exports/storage-views/Test_Storage_View
VPlexcli:/clusters/cluster-1/exports/storage-views/Test_Storage_View>export storage-view addvirtualvolume (5,TestVolume) --force

VPlexcli:/> cd /clusters/cluster-2/exports/storage-views/Test_Storage_View
VPlexcli:/clusters/cluster-2/exports/storage-views/Test_Storage_View>export storage-view addvirtualvolume (5,TestVolume) --force


Note: Changing the LUN ID might be disruptive to the hosts.

Article Properties

First Published: Tue May 28 2019 14:34:58 GMT
