
June 17th, 2016 00:00

Best way to remove LUN from VPLEX

Hi, I need to remove about 40 LUNs from VPLEX and I would like to know the best way to do it.

The LUNs to be removed are presented from two VNX5400 arrays, have the full VPLEX stack built on top of them (extents, devices, distributed devices, etc.), and are used by OracleVM hosts.

I'm starting with the first LUN: I removed it from the storage view and deleted its distributed device, device, and extent.

Then I ran the unclaim command directly from the VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes/ context, and the status is now correctly unclaimed.
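For reference, the command was along these lines (volume name masked the same way as in the forget command below):

VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> unclaim -d VPD83T3:XXXXXXXXXXXXXXXXXXXXX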

I read in several community posts that if I now unmask this LUN from the VNX side, the VPLEX will generate a Call Home:

Remove LUNs from VPLEX without Call Home Event

How to remove old VPLEX LUNS that continue to show as "active" with old LUN size?

how to 'forget' a storage volume?

So, from the VPlexcli I tried to forget the storage volume with the following command:

VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> forget -d VPD83T3:XXXXXXXXXXXXXXXXXXXXX --verbose

WARNING: Error forgetting storage-volume: storage-volume 'VPD83T3:XXXXXXXXXXXXXXXXXXXXX' cannot be forgotten because it is still reachable.

0 of 1 storage-volumes were forgotten.

So what is the correct way to remove this LUN (and then the other 39 I need to remove)?

Do I need to unmask it from the back-end storage (triggering a Call Home and a warning on the VPLEX dashboard) and then forget it from the VPlexcli?

Thanks and regards,

Antonio


August 19th, 2016 11:00

I use the following process to remove my LUNs from VPLEX:

1. Remove the virtual volumes from the storage view:

removevirtualvolume --view MyServerABC_sv --virtual-volumes device_volume1,device_volume2,device_volume3 --force

2. Tear down the removed volumes:

I typically do this from the GUI in the Virtual Volume area. Just select the devices and hit the tear-down button. (I believe you can also use the command 'advadm dismantle', but I don't have the syntax for that offhand; a rough sketch follows.)
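From memory, the dismantle syntax is roughly the following; the volume names are placeholders and the flags are unverified, so check 'advadm dismantle --help' on your cluster before running it:

advadm dismantle --virtual-volumes device_volume1,device_volume2,device_volume3 --force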

3. Unclaim the LUNs in VPLEX:

Depending on your preference (I use CLI), this can be done via GUI in the Storage Volume area, or with:

storage-volume unclaim -d dev1,dev2,dev3

4. Now remove the LUNs from the VNX storage group, so that they are no longer presented to the VPLEX.
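If you want to script the VNX side, the storage-group removal can be done with naviseccli; a rough sketch, where the SP address, storage group name, and HLU number are placeholders:

naviseccli -h spa_address storagegroup -removehlu -gname VPLEX_SG -hlu 25 -o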

5. I typically forget LUNs from the GUI as well, but you can also use the command you listed:

forget -d VPD83T3:6006016021d0250027b925ff60b5de11


6. Finish the decommission process on VNX (delete LUNs).
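The final delete can be scripted too; a sketch with a placeholder LUN number (this applies to pool LUNs; RAID-group LUNs use unbind instead):

naviseccli -h spa_address lun -destroy -l 125 -o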


The reason you receive the error when trying to forget the LUN is that the LUN is still being presented from the VNX. Make sure to complete step 4 before doing step 5. This process has worked well for me and hasn't caused any issues.
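One more note for your 40-LUN case: unclaim and forget both take comma-separated lists (the names below are placeholders), so you can batch the whole set in a few invocations rather than going one LUN at a time:

storage-volume unclaim -d dev1,dev2,dev3

storage-volume forget -d dev1,dev2,dev3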

- Tanner

August 21st, 2016 23:00

Hi Tanner,

Thank you very much for your reply. EMC sent me a document (Customer Procedure - Remove disk or array from VPLEX).

Here are the steps described in this document:

  1. Stop all I/O to the virtual volumes exported to the host whose back-end LUNs (storage volumes) are being removed.
  2. Log in to the host and unmount the virtual volumes.
  3. Establish an SSH connection to the VPLEX management server.
  4. From the Linux shell prompt, type the following command to connect to the VPlexcli:
    • If GeoSynchrony release 4.1.x or later is running on the cluster:

vplexcli

    • If GeoSynchrony release 4.0.x is running on the cluster:

telnet localhost 49500

  5. Log in as user service.
  6. From the VPlexcli prompt, type the following commands to remove from the storage views all virtual volumes that belong to the disk or storage array being removed. In the second command, list the virtual volume names separated by commas:

cd /clusters/cluster-Cluster_ID/exports/storage-views

removevirtualvolume --view storage_view_name --virtual-volumes name,name,name --force

  7. If the virtual volumes are built on a distributed device, type the following commands to remove the virtual volumes from their consistency groups:

cd /clusters/cluster-Cluster_ID/consistency-groups

remove-virtual-volumes --consistency-group consistency_group_name --virtual-volumes name,name,name --force

  8. Type the following commands to cancel and remove any device migrations:

cd /data-migrations/device-migrations

dm migration cancel --force --migrations migration_name

remove -m migration_name --force

  9. Type the following commands to cancel and remove any extent migrations:

cd /data-migrations/extent-migrations

dm migration cancel --force --migrations migration_name

remove -m migration_name --force

  10. Type the following commands to destroy the virtual volumes you removed in steps 6 and 7:

cd /clusters/cluster-Cluster_ID

virtual-volume destroy -v virtual_volume_name --force

  11. Type the following commands to detach any mirrors and destroy the devices of the storage volumes being removed from the back end:

device detach-mirror --device /distributed-storage/distributed-devices/device_name --mirror /distributed-storage/distributed-devices/device_name/distributed-device-components/mirror_name

local-device destroy -d context_path,context_path,context_path

where context_path = /clusters/cluster-Cluster_ID/devices/device_name

  12. When asked if you wish to proceed, type yes and press Enter.
  13. Type the following command to destroy the extents of the removed storage array:

extent destroy --force extent_name

  14. Type the following commands to unclaim the physical storage volumes of the removed storage array:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

unclaim -d volume_name,volume_name,volume_name

  15. Remove the disk from the back-end storage group or volume group.
  16. Type the following commands to rediscover the back-end array and refresh the paths across the back-end volumes:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays/name_of_removed_array

array re-discover

  17. When asked if you wish to proceed, type yes and press Enter.
  18. Confirm that the volume is completely removed from the VPLEX cluster(s):

cd /clusters/cluster-Cluster_ID/storage-elements/storage-volumes/

ll storage-volume-name-removed

If the above command fails to find the context for storage-volume-name-removed, this indicates that the disk has been successfully removed from the VPLEX.

  19. Type the following commands to forget the physical storage volume being removed:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays/name_of_array_being_removed/logical-units

For releases before GeoSynchrony Release 5.1 Patch 4, use:

storage-volume forget -c Cluster_ID -i logical_unit_id

For GeoSynchrony Release 5.1 Patch 4 and above, use:

logical-unit forget -s -u logical_unit_id

If more than one disk needs to be removed, repeat the above forget command for each LU associated with the disks that were removed.
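As a worked example: on GeoSynchrony 5.1 Patch 4 or later, the final forget for the masked volume from my first post would look something like this (the array context name is a placeholder, and I'm assuming the logical-unit id is the same VPD83 identifier the storage volume had):

cd /clusters/cluster-2/storage-elements/storage-arrays/name_of_array_being_removed/logical-units

logical-unit forget -s -u VPD83T3:XXXXXXXXXXXXXXXXXXXXX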

Regards, Antonio

