November 2nd, 2010 13:00

Remove LUNs assigned to the Celerra

We had professional services configure our Celerra, and they allocated too much of our free space to the NAS.  How do I go about reconfiguring the allotted space and returning it so that it can be assigned to new FC hosts via Navisphere?  I am assuming I can't just remove a LUN from the storage group, since the data is probably sliced across the LUNs.  I already have some data located on CIFS and iSCSI, but I can easily move that to temporary storage while I do the reconfig.

2 Intern • 20.4K Posts

November 2nd, 2010 13:00

Run this tool to identify what’s using what

/nas/tools/.whereisfs

46 Posts

November 2nd, 2010 15:00

Of course, understanding what resides on each Celerra LUN is paramount, as once the LUN is deleted......

If you have any reservations about performing this type of activity, please contact EMC Support for assistance.

46 Posts

November 2nd, 2010 15:00

You will need to properly remove the LUNs from the Celerra's internal database before you delete or reassign them to another host.

Please reference EMC's Knowledgebase Solution emc241502 regarding this type of activity before proceeding.

2 Intern • 20.4K Posts

November 3rd, 2010 12:00

"How to safely remove CLARiiON LUNs from a Celerra - A detailed review"
ID: emc241502
Usage: 5
Date Created: 05/20/2010
Last Modified: 08/12/2010
STATUS: Approved
Audience: Customer
Knowledgebase Solution




Question: Safely removing storage from a Celerra allocated from a CLARiiON backend
Environment: Product: Celerra
Environment: Product: CLARiiON
Environment: Product: Celerra attached to a CLARiiON backend
Environment: EMC SW: NAS Code 5.5
Environment: EMC SW: NAS Code 5.6
Problem: Error 3020: d27 : item is currently in use, first delete volume(s)
Change: Reallocating LUN
Change: Removing storage from CLARiiON
Change: Running nas_disk -d
Fix:

Removal of LUNs from the Celerra Storage Group on an integrated system requires proper cleanup on the Celerra. Improper removal of storage can result in an inability to scan in new storage and, in some cases, a Data Mover panic.

WARNING!  If this procedure is not properly followed, there is a possibility of data loss.  Contact EMC Customer Service before proceeding if there are any questions.

This procedure only applies to Celerra volumes attached to CLARiiON arrays!
 
Prerequisites:

  • Prior to removing or reassigning disks owned by the Celerra, all exports and mounts on the affected file systems must be permanently unexported and unmounted (an example of the unexport/unmount commands follows this list).
  • Any file systems, metavolumes, stripe volumes, and slices built on the disks that are going to be removed must also be deleted using CLI commands. The next steps show you how to confirm this.
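
For example, to permanently unexport and unmount one file system before removing its disks (fs01 and server_2 are placeholders taken from the examples below, and the exact server_export syntax can vary with export type and code level, so treat this as a sketch):

$ server_export server_2 -unexport -perm /fs01    # permanently remove the export (path is illustrative)
$ server_umount server_2 -perm fs01               # permanently unmount the file system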


The following scenarios will help you remove storage properly.

Start by looking at the following nas_disk -list output:

$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
30    y     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

If LUNs have file systems or other Celerra features built on them, they will show a "y" in the "inuse" column. You cannot safely remove them from a Celerra system at this point. You must first delete all Celerra configuration options from each LUN before it will show up as inuse=n. This includes deleting the file systems, manual volumes, and manual pools built on that LUN.

Note: If you use pre-defined AVM storage pools, the d# will show up as inuse=n as soon as you delete the file system.
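
If you only want to see which dVols are already free, you can filter the listing; this is just a convenience one-liner that assumes the column layout shown above:

$ nas_disk -l | awk 'NR==1 || $2=="n"'    # print the header plus every dVol with inuse=n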

Q: How do you identify what d# is a specific LUN?

A: The storageID-devID column in the nas_disk -list output ends with a dash and a four-digit number (-0008, for example). This number is a hexadecimal representation of the CLARiiON LUN (0008 = ALU 8, 0010 = ALU 16).
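
For example, devID -001E in the listing above is hexadecimal 1E, which is CLARiiON ALU 30. Any hex-to-decimal converter will do; one quick option from the Control Station shell:

$ printf '%d\n' 0x001E
30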

Q: How do I know what LUNs a particular file system uses?

A: Run ls -la /nas/tools. Depending on what code level you are at, you will see either .whereisfs (with a dot) or whereisfs (without a dot). Running this script with the -all option shows you exactly where your storage is:

$ /nas/tools/.whereisfs -all
RG                   FS's
-----                ------
APM00070300475-0000  [ 2] fs01 (d30)                fs02 (d26)

FS                   Resources (RGs: [total # of RG] {repeated for each RG} )
-----                ------
fs01                 RGs: [ 1] APM00070300475-0000; LUNs:  0011
fs02                 RGs: [ 1] APM00070300475-0000; LUNs:  0010

RAID Groups in use:
RG                       LUN (dVols)         FS list
-----                    -------------       --------
APM00070300475-0000      0011 (d30 )         fs01
                          0010 (d26 )         fs02


Note that the number next to the CLARiiON serial number is the RAID group.

You can see here that by unmounting and deleting fs01, you can release d30.

$ server_umount server_2 -perm fs01
server_2 : done
[nasadmin@KitKat log]$ nas_fs -d fs01
id        = 25
name      = fs01
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v111
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
stor_devs = APM00070300475-0011
disks     = d30

$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
30    y     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

In this case, deletion of the file system did not release the LUN. Proceed to the next question in this scenario.

Q: I deleted all file systems, but my nas_disk -list still shows "inuse=y" - What am I missing?

A: The LUN was probably allocated to a manual storage pool. Investigate it by performing the following:

  1. Check nas_pool -list for custom pools. For example:
     
    $ nas_pool -list
    id      inuse   acl     name
    3       n       0       clar_r5_performance
    24      n       0       perf_dedicated_pool
    25      y       0       test_pool
  2. Get information on the custom pool: 

    $ nas_pool -i perf_dedicated_pool
    id                   = 24
    name                 = perf_dedicated_pool
    description          =
    acl                  = 0
    in_use               = False
    clients              =
    members              = d30
    default_slice_flag   = True
    is_user_defined      = True
    disk_type            = CLSTD
    server_visibility    = server_2,server_3,server_4,server_5

    "This pool is in_use = False" means that you can safely delete it with nas_pool -delete.  If it was in_use = True, you would need to continue to investigate the  particular clients and members that would be listed in this output.

  3. Now that the pool is in_use = False, delete it:
     
    $ nas_pool -d perf_dedicated_pool
    id                   = 24
    name                 = perf_dedicated_pool
    description          =
    acl                  = 0
    in_use               = False
    clients              =
    members              =
    default_slice_flag   = True
    is_user_defined      = True

    $ nas_disk -l
    id   inuse  sizeMB    storageID-devID   type  name          servers
    1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
    2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
    3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
    4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
    5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
    6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
    26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
    27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
    28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
    29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
    30    n     190833  APM00070300475-0011 CLSTD d30           1,2,3,4 << Now inuse=n
    31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

  4. If the LUN did not release (inuse=y), then there could still be checkpoints or other file systems on the storage (run .whereisfs again), or you may need to delete a custom volume that was also built on that d#. Grep the volume list for the disk name as shown below (in this example, you are looking to release d26):

    $ nas_volume -l | egrep "d26|inuse"
    id      inuse type acl  name              cltype  clid
    101       y    4   0    d26                  1    112

  5. d26 shows inuse=y because it is a client of metavolume v112 (the clid column). Now investigate why v112 is in use:
     
    $ nas_volume -i v112
    id          = 112
    name        = v112
    acl         = 0
    in_use      = True
    type        = meta
    volume_set  = d26
    disks       = d26
    clnt_filesys= fs02
  6. Another file system is still built on this LUN. Permanently unmount and delete that file system, then delete any manual storage pools or volumes on the disk as previously discussed.

    Your nas_disk -list should now show the disk as inuse=n:

    $ nas_disk -l
    id   inuse  sizeMB    storageID-devID   type  name          servers
    1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
    2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
    3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
    4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
    5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
    6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
    26    n     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
    27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
    28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
    29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
    30    n     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
    31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

    You can now safely delete the d# from the Celerra.

  7. Use nas_disk -d d# -perm to remove the diskmark from the LUN and clear any record of this LUN from the Celerra.

    Note: -perm only works if the LUN is still bound and in the CLARiiON Storage Group. 

    Once a d# is marked as deleted from the Celerra and no longer shows up in a nas_disk -list, you can safely remove it from the CLARiiON Storage Group without the chance of API error or panic.

  8. After the LUNs are removed from the Storage Group, verify LUN ownership with nas_storage -c -a. If no errors show, run server_devconfig ALL -create -scsi -all to update the Data Movers. An example command sequence for these last two steps follows this list.
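
For example, a rough end-to-end sequence for a hypothetical unused dVol d30 (confirm inuse=n first; output and dVol names will differ on your system):

$ nas_disk -l | grep d30                     # confirm the dVol shows inuse=n
$ nas_disk -d d30 -perm                      # remove the diskmark and the Celerra's record of the LUN
# ...now remove the LUN from the CLARiiON Storage Group in Navisphere...
$ nas_storage -c -a                          # verify LUN ownership; check for errors
$ server_devconfig ALL -create -scsi -all    # update the Data Movers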





190 Posts

November 3rd, 2010 12:00

I need to do this as well, but I'm getting a login prompt for Primus self-service (same thing when in Powerlink)... can you post this document somewhere?

18 Posts

November 3rd, 2010 14:00

Thanks Dynamox.....I ran the util, which presented me with the results of what was in use.  Since I have file systems configured already (but with no data on them, except one), I think it may be best if I move that little bit of data off the Celerra LUNs and reconfigure from scratch.

Once I move the data, I am assuming I can delete the file systems and then remove all of the LUNs from the Celerra storage group.....except for the LUNs where the NAS OS resides, correct?

2 Intern • 20.4K Posts

November 3rd, 2010 14:00

Correct. Delete the file systems, re-run that script to make sure there is nothing else using those devices (or run nas_disk -list to make sure there is an "n" in the "inuse" column), and then you can run this command, which will delete the disk from the Celerra's perspective, remove it from the CLARiiON storage group, and unbind the LUN at the same time. Please be very, very careful:

nas_disk -delete d# -perm -unbind
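
For example, against one of the unused dVols from the listings above (d31 here is purely illustrative; double-check inuse=n on your own system first):

$ nas_disk -l | grep d31                  # make sure the dVol shows inuse=n
$ nas_disk -delete d31 -perm -unbind      # delete from the Celerra, remove from the storage group, and unbind the LUN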

8.6K Posts

November 4th, 2010 02:00

If nas_disk shows these LUNs / dVols as not in use - yes.

46 Posts

November 4th, 2010 06:00

You MUST leave HLUs 0-5 in the Celerra Storage Group intact.  These are the Celerra Control LUNs.
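
If you want to double-check which dVols those are: in the listings above they are root_disk, root_ldisk, and d3 through d6. Assuming that standard layout, a quick look:

$ nas_disk -l | head -7    # header line plus the six Celerra control volumes (HLUs 0-5)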

18 Posts

November 15th, 2010 09:00

Thanks for all the suggestions.......I have the issue resolved now.

12 Posts

December 2nd, 2013 07:00

Hi Dynamox,

I have a similar issue.

When I add a LUN to the ~filestorage SG, I define the HLU as 16.

Then, when I did a rescan from the GUI, the nas_disk command showed me the disk as d15.

Now, I have deleted that d15 with nas_disk -delete d15 (it was not in use).

Now, when I mask the disk in again, it shows up in the Celerra as d16, then d17, and then d18... what I'm trying to say is that it increments the dVol number every time rather than reusing d15 onwards...

Is there any way we can set this or resolve this?

2 Intern • 20.4K Posts

December 2nd, 2013 09:00

Did you use -perm?

12 Posts

December 2nd, 2013 09:00

Actually no, not the first time; when I did it again, I did use -perm.

That doesn’t seem to make any difference.

2 Intern • 20.4K Posts

December 2nd, 2013 09:00

I have a feeling that if you did not use -perm the first time, it left an entry somewhere... which at this point can only be removed by support.

12 Posts

December 2nd, 2013 09:00

No
