VPLEX: Active metavolume sharing array
Summary: This article discusses what to do if, during the NDU pre-check, the script reports that the legs of the active metavolume are on the same array.
Symptoms
While running the NDU pre-check, the script reports that both legs of the active metavolume are on the same back-end array.
Cause
This issue is caused by the pre-check command detecting that both legs of the active metavolume are on the same back-end array. This may be due to:
- The metavolume was initially configured when only one back-end array was available, so both legs were set up on that single array, and the configuration was never updated when a second array was added to the VPLEX.
- The metavolume was configured with two volumes from one array even though two or more arrays were attached to the VPLEX. This is not supported when two or more arrays are attached and must be fixed by reconfiguring the metavolume with its legs on two different arrays.
Resolution
To correct this error, if a second array is now available, check whether it has a volume that meets the required criteria for a meta volume.
Best practice requires two (2) storage volumes that are:
- Unclaimed
- 78 GB or larger
- On different arrays
- Thick provisioned (not built using thin LUNs)
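Where many volumes must be screened, these criteria lend themselves to a small script. A minimal Python sketch, assuming the volume attributes have already been collected; the dictionary field names here are illustrative, not actual VPlexcli output fields:

```python
# Sketch: screen storage volumes against the meta-volume best-practice
# criteria above. Field names are illustrative, not VPlexcli fields.
GIB = 1024 ** 3

def is_meta_candidate(vol):
    """True if a volume record meets the meta-volume criteria."""
    return (
        vol["use"] == "unclaimed"               # must be unclaimed
        and vol["capacity_bytes"] >= 78 * GIB   # 78 GB or larger
        and not vol["thin"]                     # thick provisioned only
    )

def pick_candidates(volumes, exclude_array):
    """Candidates on a different array than the existing meta leg."""
    return [v for v in volumes
            if is_meta_candidate(v) and v["array"] != exclude_array]
```

The "on different arrays" rule is enforced by excluding the array that already hosts the surviving leg.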
Procedure:
1. Check for available metadata volume candidates using KB article 000158150, "VPlex: How to list storage volumes that are eligible candidates that may be used to create metadata volumes", which describes how to display the storage volumes that meet the criteria for a VPLEX meta volume.
2. Once you have another volume on a different array that meets the criteria, attach it to the current meta volume with the 'meta-volume attach-mirror' CLI command.
3. At the VPlexcli prompt, change directory (cd) to the 'system-volumes' context and run the long-list command, 'll'. The active metadata volume's 'Operational Status' should show 'degraded' and its 'Health State' 'minor-failure'.
VPlexcli:/clusters/cluster-1/system-volumes> ll
Name                             Volume Type     Operational  Health State   Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status                                               Count      Count     Size
-------------------------------  --------------  -----------  -------------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                   logging-volume  ok           ok             -       -      raid-1    1          2621440   4K     10G       -
C1_Meta                          meta-volume     degraded     minor-failure  true    true   raid-1    3          20971264  4K     80G       64000
C1_Meta_backup_2018Jun05_120042  meta-volume     ok           ok             false   true   raid-1    1          20971264  4K     80G       64000
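If you script health checks around this step, the 'll' output can be scanned for a degraded meta volume. A rough Python sketch based on the column order in the sample output above (column positions may vary by release, so treat this as a starting point, not a supported interface):

```python
def find_degraded_meta(ll_output):
    """Return names of meta-volumes whose Operational Status is 'degraded'.

    Assumes rows of the form:
    <name> meta-volume <operational-status> <health-state> ...
    as in the 'll' sample from the system-volumes context.
    """
    hits = []
    for line in ll_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == "meta-volume" and fields[2] == "degraded":
            hits.append(fields[0])
    return hits
```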
4. Run the 'rebuild status' command. This should show the meta volume as rebuilding, and the 'rebuild type' should be 'full' since it is a newly attached mirror.
VPlexcli:/clusters/cluster-1/system-volumes> rebuild status
[1] storage_volumes marked for rebuild
Global rebuilds:
No active global rebuilds.
cluster-1 local rebuilds:
device   rebuild type  rebuilder director  rebuilt/total  percent finished  throughput  ETA
-------  ------------  ------------------  -------------  ----------------  ----------  -------
C1_Meta  full          s1_0339_spa         20.1G/80G      25.07%            63.2M/s     16.2min
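The ETA column is consistent with the other columns: remaining data divided by throughput. A quick arithmetic check of the sample figures above, assuming 'G' and 'M/s' mean GiB and MiB/s:

```python
# Sanity-check the ETA from the rebuild-status sample,
# assuming G = GiB and M/s = MiB/s.
rebuilt_gib, total_gib = 20.1, 80.0
throughput_mib_s = 63.2

remaining_mib = (total_gib - rebuilt_gib) * 1024      # 59.9 GiB left
eta_min = remaining_mib / throughput_mib_s / 60
assert round(eta_min, 1) == 16.2                      # matches the ETA column
```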
5. Check the component level of the meta volume to see the newly attached leg. Run the command 'll <meta-volume name>/components'. The new component appears as 'Slot Number' 2 in the example below, with an 'Operational Status' of 'error' and a 'Health State' of 'critical-failure'.
Sample output:
/clusters/cluster-1/system-volumes/C1_Meta/components:
Name                                      Slot    Type            Operational  Health State      Capacity
                                          Number                  Status
----------------------------------------  ------  --------------  -----------  ----------------  --------
VPD83T3:600601601330270098b5c2118665e611  0       storage-volume  ok           ok                80G
VPD83T3:600601601330270098b5c2118699e711  1       storage-volume  ok           ok                80G
VPD83T3:60060160c9c02c00c47cb55a4a99e711  2       storage-volume  error        critical-failure  80G  <<<<
6. Wait for the full rebuild to complete; this may take some time. You can check progress by running the 'rebuild status' command periodically until the rebuild has completed.
VPlexcli:/clusters/cluster-1/system-volumes> rebuild status
Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds.
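Rather than re-running the command by hand, the wait can be scripted. A hedged Python sketch, where run_vplexcli is a hypothetical helper you would supply (for example, a wrapper around an SSH session to the management server); the completion test matches the "No active ... rebuilds." lines shown above:

```python
import time

def wait_for_rebuild(run_vplexcli, interval_s=300, timeout_s=24 * 3600,
                     sleep=time.sleep):
    """Poll 'rebuild status' until no global or local rebuilds remain.

    run_vplexcli: callable taking a command string and returning its
    output (hypothetical; supply your own transport to the VPlexcli).
    Returns True when the rebuild has finished, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        out = run_vplexcli("rebuild status")
        if ("No active global rebuilds." in out
                and "No active local rebuilds." in out):
            return True
        sleep(interval_s)
    return False
```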
7. Repeat step 3. The meta volume should now show both its 'Operational Status' and its 'Health State' as 'ok'.
VPlexcli:/clusters/cluster-1/system-volumes> ll
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                   logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
C1_Meta                          meta-volume     ok           ok      true    true   raid-1    3          20971264  4K     80G       64000  <<<<
C1_Meta_backup_2018Jun05_120042  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
8. Repeat step 5 to check that the new leg shows as 'Slot Number' 2 and that both its 'Operational Status' and 'Health State' show as 'ok'.
VPlexcli:/clusters/cluster-1/system-volumes> ll C1_Meta/components/
/clusters/cluster-1/system-volumes/C1_Meta/components:
Name                                      Slot    Type            Operational  Health  Capacity
                                          Number                  Status       State
----------------------------------------  ------  --------------  -----------  ------  --------
VPD83T3:600601601330270098b5c2118665e611  0       storage-volume  ok           ok      80G
VPD83T3:600601601330270098b5c2118699e711  1       storage-volume  ok           ok      80G
VPD83T3:60060160c9c02c00c47cb55a4a99e711  2       storage-volume  ok           ok      80G
9. Remove the leg of the meta volume listed in slot 1, which is on the same array as the leg in slot 0, by running the 'meta-volume detach-mirror' command as shown below:
Sample output:
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume detach-mirror -d VPD83T3:600601601330270098b5c2118699e711 -v C1_Meta
10. Run the command from step 8 again. You should now see only two components listed, each from a different array, with 'Slot Number' values of '0' and '1'.
VPlexcli:/clusters/cluster-1/system-volumes> ll C1_Meta/components/
/clusters/cluster-1/system-volumes/C1_Meta/components:
Name                                      Slot    Type            Operational  Health  Capacity
                                          Number                  Status       State
----------------------------------------  ------  --------------  -----------  ------  --------
VPD83T3:600601601330270098b5c2118665e611  0       storage-volume  ok           ok      80G
VPD83T3:60060160c9c02c00c47cb55a4a99e711  1       storage-volume  ok           ok      80G
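As a rough cross-check that the remaining legs are on different arrays, note that in the samples above the two volumes that shared an array also shared a long common prefix in their VPD83 identifiers. A heuristic Python sketch; the 16-character prefix length is an assumption that happens to fit these sample IDs, not a guaranteed rule across array types (the array-to-volume mapping in VPlexcli or the array's own tools remains authoritative):

```python
def likely_same_array(vpd_a, vpd_b, prefix_len=16):
    """Heuristic: volumes from one array often share a VPD83 ID prefix.

    prefix_len=16 is an assumption that matches the sample IDs in this
    article; verify against your arrays before relying on it.
    """
    strip = lambda v: v.split(":", 1)[-1]   # drop the "VPD83T3:" prefix
    return strip(vpd_a)[:prefix_len] == strip(vpd_b)[:prefix_len]
```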
11. Confirm that the meta volume and the backup meta volumes are healthy by running the CLI command shown below:
VPlexcli:/> ll /clusters/*/system-volumes/
/clusters/cluster-1/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C1_Logging_vol                   logging-volume  ok           ok      -       -      raid-1    1          2621440   4K     10G       -
C1_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20971264  4K     80G       64000
C1_Meta_backup_2018Jun04_120017  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
C1_Meta_backup_2018Jun05_120042  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
/clusters/cluster-2/system-volumes:
Name                             Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots
                                                 Status       State                            Count      Count     Size
-------------------------------  --------------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----
C2_Logging_vol                   logging-volume  ok           ok      -       -      raid-0    1          2621440   4K     10G       -
C2_Meta                          meta-volume     ok           ok      true    true   raid-1    2          20446976  4K     78G       64000
C2_Meta_backup_2018Jul01_060025  meta-volume     ok           ok      false   true   raid-1    1          20446976  4K     78G       64000
C2_Meta_backup_2018Jul02_060022  meta-volume     ok           ok      false   true   raid-1    1          20971264  4K     80G       64000
12. Because you created a new meta volume, the existing backup meta volumes are no longer accurate. As you can see above in step 11, the metadata backups on cluster-1, where the new meta volume was created for the examples in this KB, last ran around the beginning of June. Destroy the old backups and configure new ones. To do this, refer to KB article 000038636, "VPLEX: 0x8a4a6006,0x8a4a6003,0x8a4a6005, The automated backup of the metavolume could not be completed (or) No valid backup metavolume exist (or) Metadata Backup could not be destroyed", and follow the steps in the workaround under the Resolution section. For redundancy, each backup volume must also be on a different array when two or more arrays are attached to a VPLEX.