
December 4th, 2017 12:00

Failed trying to move meta-volumes

I tried following the instructions in the doc, but I'm not sure why it won't work. I currently have a local VPLEX set up with 6 LUNs from an array we are decommissioning: 4 are used for the meta-volume and 2 for backups. I created 6 new LUNs on the target array to move this to. Can someone please tell me what I am doing wrong?

I'm getting this message below.

VPlexcli:/> meta-volume create --name c1_meta_hwvnx02 --storage-volumes VPD83T3:600601602be03400135b697ad29fe711,VPD83T3:600601602be034003d6a5810d29fe711
This may take a few minutes...

meta-volume create:  Evaluation of <<... VPD83T3:600601602be034003d6a5810d29fe711]>> failed.
cause:               Failed to create meta-volume at 'cluster-1'.
cause:               Storage volumes must be from different storage arrays. Arrays available to the local cluster are displayed below along with their connectivity status:
                     EMC-CLARiiON-APM00135150559 > ok
                     EMC-CLARiiON-APM00150422687 > ok

The current metadata is on EMC-CLARiiON-APM00135150559. I am trying to move it to EMC-CLARiiON-APM00150422687.


This is what I see on the VPLEX now.


VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll

Name                                      VPD83 ID                                  Capacity  Use        Vendor  IO      Type         Thin     Provision  Thin

----------------------------------------  ----------------------------------------  --------  ---------  ------  Status  -----------  Rebuild  Type       Capable

----------------------------------------  ----------------------------------------  --------  ---------  ------  ------  -----------  -------  ---------  -------

VPD83T3:600601602be03400135b697ad29fe711  VPD83T3:600601602be03400135b697ad29fe711  80G       unclaimed  DGC     alive   traditional  false    legacy     false

VPD83T3:600601602be034003d6a5810d29fe711  VPD83T3:600601602be034003d6a5810d29fe711  80G       unclaimed  DGC     alive   traditional  false    legacy     false

VPD83T3:600601602be0340057b687bcd19fe711  VPD83T3:600601602be0340057b687bcd19fe711  80G       unclaimed  DGC     alive   traditional  false    legacy     false

VPD83T3:600601602be034006be7bb35d29fe711  VPD83T3:600601602be034006be7bb35d29fe711  80G       unclaimed  DGC     alive   traditional  false    legacy     false

VPD83T3:600601602be03400a03e6dead19fe711  VPD83T3:600601602be03400a03e6dead19fe711  80G       unclaimed  DGC     alive   traditional  false    legacy     false

VPD83T3:600601602be03400ec818258d29fe711  VPD83T3:600601602be03400ec818258d29fe711  80G       unclaimed  DGC     alive   traditional  false    legacy     false

VPD83T3:6006016049e036002f5d15363669e411  VPD83T3:6006016049e036002f5d15363669e411  80G       meta-data  DGC     alive   traditional  false    legacy     false

VPD83T3:6006016049e03600315d15363669e411  VPD83T3:6006016049e03600315d15363669e411  80G       meta-data  DGC     alive   traditional  false    legacy     false

VPD83T3:6006016049e03600335d15363669e411  VPD83T3:6006016049e03600335d15363669e411  80G       meta-data  DGC     alive   traditional  false    legacy     false

VPD83T3:6006016049e03600355d15363669e411  VPD83T3:6006016049e03600355d15363669e411  80G       meta-data  DGC     alive   traditional  false    legacy     false

VPD83T3:6006016049e0360080d464e69a46e511  VPD83T3:6006016049e0360080d464e69a46e511  80G       meta-data  DGC     alive   traditional  false    legacy     false

VPD83T3:6006016049e03600846b0f759c46e511  VPD83T3:6006016049e03600846b0f759c46e511  80G       meta-data  DGC     alive   traditional  false    legacy     false

configuration show-meta-volume-candidates

Name                                      Capacity  Vendor  IO Status  Type         Array Name

----------------------------------------  --------  ------  ---------  -----------  ---------------------------

VPD83T3:600601602be03400135b697ad29fe711  80G       DGC     alive      traditional  EMC-CLARiiON-APM00150422687

VPD83T3:600601602be034003d6a5810d29fe711  80G       DGC     alive      traditional  EMC-CLARiiON-APM00150422687

VPD83T3:600601602be0340057b687bcd19fe711  80G       DGC     alive      traditional  EMC-CLARiiON-APM00150422687

VPD83T3:600601602be034006be7bb35d29fe711  80G       DGC     alive      traditional  EMC-CLARiiON-APM00150422687

VPD83T3:600601602be03400a03e6dead19fe711  80G       DGC     alive      traditional  EMC-CLARiiON-APM00150422687

VPD83T3:600601602be03400ec818258d29fe711  80G       DGC     alive      traditional  EMC-CLARiiON-APM00150422687

VPlexcli:/clusters/cluster-1/system-volumes> ll

Name                             Volume Type  Operational  Health  Active  Ready  Geometry  Component  Block     Block  Capacity  Slots

-------------------------------  -----------  Status       State   ------  -----  --------  Count      Count     Size   --------  -----

-------------------------------  -----------  -----------  ------  ------  -----  --------  ---------  --------  -----  --------  -----

c1_meta                          meta-volume  ok           ok      true    true   raid-1    4          20971264  4K     80G       64000

c1_meta_backup_2017Nov29_230014  meta-volume  ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

c1_meta_backup_2017Nov30_230016  meta-volume  ok           ok      false   true   raid-1    1          20971264  4K     80G       64000

286 Posts

December 5th, 2017 11:00

As the error states, you have multiple arrays. When more than one array is visible, VPLEX enforces that the meta-volume legs reside on different arrays.

Assuming that is your situation, remove the old array, clean up its volumes, and then recreate your meta-volume and backups (a hedged example of the recreate step is sketched below).
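For illustration only - per the enforcement described above, once only EMC-CLARiiON-APM00150422687 remains visible, a recreate along these lines should no longer trip the mixed-array check. The two VPD IDs here are simply the first two candidates from the show-meta-volume-candidates output in the original post; substitute whichever unclaimed volumes you actually intend to use:

VPlexcli:/> meta-volume create --name c1_meta_hwvnx02 --storage-volumes VPD83T3:600601602be03400135b697ad29fe711,VPD83T3:600601602be034003d6a5810d29fe711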

December 7th, 2017 12:00

Thank you for your reply.

Can you provide a step-by-step guide on how to do this? Currently we have 4 LUNs used for the meta-volume and 2 LUNs for backups.

Thanks,

286 Posts

December 8th, 2017 05:00

Please reach out to your project manager if you need assistance with this.

11 Posts

May 9th, 2018 13:00

Try using these commands to move your meta-volumes to a new array.

Meta-volume move to a new array:

Present 4 new 80 GB volumes from the new array - DO NOT CLAIM THEM through VPLEX

navigate to /clusters/cluster-1/system-volumes

run the ll command to list the current system volumes

it should show the following:

cluster1 logging volume

cluster1 meta volume

cluster1 meta volume backup1

cluster1 meta volume backup2

navigate to /clusters/cluster-1/system-volumes/Cluster1-Meta-Volume

cd components

run ll - it should list the VPDs of the 2 volumes (LUNs) currently used for the meta-volume

run command - configuration show-meta-volume-candidates

- you should see the 4 new 80 GB volumes from the new array, with their VPDs

cd /clusters/cluster-1/system-volumes/Cluster1-Meta-Volume

run command - meta-volume detach-mirror -d VPD83T3:6008888888888d88bef88888888de888 --meta-volume CL1_META - the -d value should be the VPD of one of the original 80 GB LUNs you want to remove

*** attach new volume from new array

run command - meta-volume attach-mirror -d VPD83T3:600777777777777bef777777777de777 --meta-volume CL1_META - the -d value should be the VPD of one of the new 80 GB LUNs from the new array (see the sketch just below)
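As a concrete sketch of the detach/attach pair, using the names and VPDs from the listings in the original post purely as placeholders (the existing meta-volume there is c1_meta, the first VPD is one of its current legs on the old array, and the second is one of the unclaimed candidates on the new array - substitute your own IDs):

VPlexcli:/> cd /clusters/cluster-1/system-volumes
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume detach-mirror -d VPD83T3:6006016049e036002f5d15363669e411 --meta-volume c1_meta
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume attach-mirror -d VPD83T3:600601602be03400135b697ad29fe711 --meta-volume c1_meta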

cd /clusters/cluster-1/system-volumes/Cluster1-Meta-Volume/components

run ll

the newly added mirror will initially show an error / critical failure state - no worries - it just needs time while it syncs up

if you navigate to /clusters/cluster-1/system-volumes/Cluster1-Meta-Volume/ and run the "ll" command, it will show a rebuild ETA and percentage complete (for example, see below)
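On the system in the original post that check would look something like this (c1_meta is the meta-volume name shown in the system-volumes listing above; the rebuild fields appear in its ll output while the new leg synchronizes):

VPlexcli:/> cd /clusters/cluster-1/system-volumes/c1_meta
VPlexcli:/clusters/cluster-1/system-volumes/c1_meta> ll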

wait until the rebuild process completes

navigate to /clusters/cluster-1/system-volumes

run the "ll" command

verify "health state" is ok for all 4 volumes

navigate to /clusters/cluster-1/system-volumes/Cluster1-Meta-Volume

cd components

run ll - it should now list 2 VPDs, one from each array

repeat the meta-volume detach removing the old array

repeat meta-volume attach for the second leg of the cluster1 meta-volume using the new array VPD

wait for sync to complete

navigate to /clusters/cluster-1/system-volumes

verify the Operational Status and Health State of all volumes

navigate to /clusters/cluster-1/system-volumes/Cluster1-Meta-Volume/components

run ll

verify both VPDs are from the new array

navigate to /clusters/cluster-1/system-volumes

- we still need to delete the 2 backup volumes and add the new ones from the new array

cd to one of the meta-volume backups

cd to components - verify the VPD is from the old array

***delete the backup

run command - meta-volume destroy --meta-volume <name-of-backup-volume>

run the meta-volume destroy command twice - once for each backup (example below)
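Using the backup names from the system-volumes listing in the original post as an example (your backup names will differ, since a new dated backup is created each night):

VPlexcli:/clusters/cluster-1/system-volumes> meta-volume destroy --meta-volume c1_meta_backup_2017Nov29_230014
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume destroy --meta-volume c1_meta_backup_2017Nov30_230016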

navigate to /clusters/cluster-1/system-volumes

run ll - verify the logging and meta volumes exist (there should be 2 entries in the output)

verify you still have 2 unclaimed 80 GB volumes to use for the meta-volume backups

run command - configuration show-meta-volume-candidates

Reconfigure the backup schedule - this will allow you to use the 2 new volumes from the new array

cd /clusters/cluster-1/system-volumes

run command - configuration metadata-backup

When asked "Do you want to change the existing schedule?" - select Y

It should list the 2 new volumes from the new array

you will have to specify the 2 volumes to use for backups

Please select volumes for meta-data backup, preferably from two different arrays (volume1,volume2):VPD#1,VPD#2

VPLEX is configured to back up meta-data every day at 23:00 (UTC).

Would you like to change the time the meta-data is backed up? [no]: You can select NO here if you don't want to change the time

It should display a summary with the backup time and the volumes it will use - verify the VPDs are from the new array

Would you like to run the setup process now? [yes]: yes - select YES for this question

Scheduling the backup of metadata...

Performing metadata backup (This will take a few minutes) - it will perform a backup which we can verify later

Successfully performed the initial backing up of metadata

Successfully scheduled the backing up of metadata

navigate to /clusters/cluster-1/system-volumes

run ll

you should now see 4 volumes listed

Logging

MetaVolume

MetaVolume1 - backup

MetaVolume2 - backup

You can change to the components folder of the MetaVolume, MetaVolume 1 backup, and MetaVolume 2 backup to verify the VPDs are from the new array

***you can also verify the consistency on the meta volumes

run command - meta-volume verify-on-disk-consistency -c cluster-1 (substitute your cluster name)

Unclaim the 4 old LUNs

unmap them from the old storage array

re-discover the array and "forget" the unreachable LUNs (a rough sketch of these cleanup commands follows)
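A rough sketch of that cleanup (the VPD placeholders stand for whichever old LUNs you are retiring; the exact option names for unclaim/forget/re-discover can vary by GeoSynchrony release, so treat this as an outline and check the CLI guide for your code level first):

VPlexcli:/> storage-volume unclaim -d <old-VPD-1>,<old-VPD-2>,<old-VPD-3>,<old-VPD-4>
(then unmap/remove the LUNs on the old array itself)
VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00135150559
VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-APM00135150559> array re-discover
VPlexcli:/> storage-volume forget -d <VPDs-of-the-now-unreachable-LUNs>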

run health-check or health-check --full

No Events found!
