Remove LUNS assigned to the Celerra
We had professional services configure our Celerra, and they allocated too much of our free space to the NAS. How do I go about reconfiguring the allotted space and returning it so that it can be assigned to new FC hosts via Navisphere? I am assuming I can't just remove a LUN from the storage group, since the data is probably sliced across the LUNs. I already have some data located on CIFS and iSCSI, but I can easily move that to temporary storage while I do the reconfig.
dynamox
2 Intern • 20.4K Posts
November 2nd, 2010 13:00
Run this tool to identify what’s using what
/nas/tools/.whereisfs
JGreen
46 Posts
November 2nd, 2010 15:00
Of course, understanding what resides on each Celerra LUN is paramount, as once the LUN is deleted the data on it is unrecoverable.
If you have any reservations about performing this type of activity, please contact EMC Support for assistance.
JGreen
46 Posts
November 2nd, 2010 15:00
You will need to properly remove the LUNs from the Celerra's internal database before you delete or reassign them to another host.
Please reference EMC's Knowledgebase Solution emc241502 regarding this type of activity before proceeding.
dynamox
2 Intern • 20.4K Posts
November 3rd, 2010 12:00
Removal of LUNs from the Celerra Storage Group on an integrated system requires proper cleanup on the Celerra. Improper removal of storage can result in an inability to scan in new storage and, in some cases, a Data Mover panic.
WARNING! If this procedure is not properly followed there is a possibility of data loss. Contact EMC Customer Service before proceeding if there are any questions.
This procedure only applies to Celerra volumes attached to CLARiiON arrays!
Prerequisites:
The following scenarios will help you remove storage properly.
Start by looking at the following nas_disk -list output:
$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 APM00070300475-0000 CLSTD root_disk 1,2,3,4
2 y 11263 APM00070300475-0001 CLSTD root_ldisk 1,2,3,4
3 y 2047 APM00070300475-0002 CLSTD d3 1,2,3,4
4 y 2047 APM00070300475-0003 CLSTD d4 1,2,3,4
5 y 2047 APM00070300475-0004 CLSTD d5 1,2,3,4
6 y 2047 APM00070300475-0005 CLSTD d6 1,2,3,4
26 y 190833 APM00070300475-0010 CLSTD d26 1,2,3,4
27 y 205281 APM00070300475-001E CLSTD d27 1,2,3,4
28 n 10239 APM00070300475-0032 CLSTD d28 1,2,3,4
29 n 674033 APM00070300475-0033 CLSTD d29 1,2,3,4
30 y 190833 APM00070300475-0011 CLSTD d30 1,2,3,4
31 n 205281 APM00070300475-001F CLSTD d31 1,2,3,4
If LUNs have file systems or other Celerra features built on them, they will show as "y" in the "inuse" column. You cannot safely remove them from a Celerra system at this point. You must first delete all Celerra configuration options from a LUN before it will show up as inuse=n. This includes deleting the file systems, manual volumes, and manual pools to which the LUN is allocated.
Note: If you use pre-defined AVM storage pools, the d# will show up as inuse=n as soon as you delete the file system.
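Celerra does not ship a helper for this, but if you save the nas_disk -l listing to a file you can pick out the removal candidates by filtering on the "inuse" column. A minimal sketch (the parsing assumes the exact column layout shown above; free_disks is a hypothetical helper name, not an EMC tool):

```python
# Sketch: filter a saved "nas_disk -l" listing for disks with inuse=n.
# Column layout assumed (from the output above):
#   id, inuse, sizeMB, storageID-devID, type, name, servers
listing = """\
id   inuse sizeMB   storageID-devID     type  name  servers
26   y     190833   APM00070300475-0010 CLSTD d26   1,2,3,4
28   n     10239    APM00070300475-0032 CLSTD d28   1,2,3,4
30   y     190833   APM00070300475-0011 CLSTD d30   1,2,3,4
"""

def free_disks(nas_disk_output):
    """Return the d# names whose 'inuse' column is 'n'."""
    free = []
    for line in nas_disk_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 6 and fields[1] == "n":
            free.append(fields[5])  # the 'name' column
    return free

print(free_disks(listing))  # only d28 is not in use in this sample
```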
Q: How do you identify what d# is a specific LUN?
A: The fourth column of the nas_disk -list output (storageID-devID) ends with a dash followed by a four-digit number (-0008, for example). This number is the hexadecimal representation of the CLARiiON LUN (0008 = ALU 8, 0010 = ALU 16).
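The hex-to-decimal mapping is easy to check programmatically rather than by hand. A small sketch (devid_to_alu is a hypothetical helper name, not an EMC command):

```python
def devid_to_alu(dev_suffix):
    """Convert the hex devID suffix from nas_disk -l (e.g. '-0010') to a decimal ALU."""
    return int(dev_suffix.lstrip("-"), 16)  # strip the dash, parse as base-16

print(devid_to_alu("-0008"))  # 8
print(devid_to_alu("-0010"))  # 16
print(devid_to_alu("-001E"))  # 30
```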
Q: How do I know what LUNs a particular file system uses?
A: Run ls -la /nas/tools. Depending on what code level you are at, you will see either .whereisfs (with a dot) or whereisfs (without a dot). Running this script with the -all setting shows you exactly where your storage is:
$ /nas/tools/.whereisfs -all
RG FS's
----- ------
APM00070300475-0000 [ 2] fs01 (d30) fs02 (d26)
FS Resources (RGs: [total # of RG] {repeated for each RG} )
----- ------
fs01 RGs: [ 1] APM00070300475-0000; LUNs: 0011
fs02 RGs: [ 1] APM00070300475-0000; LUNs: 0010
RAID Groups in use:
RG LUN (dVols) FS list
----- ------------- --------
APM00070300475-0000 0011 (d30 ) fs01
0010 (d26 ) fs02
Note that the number next to the CLARiiON serial number is the RAID group.
You see here that by unmounting and deleting fs01, you can release d30.
$ server_umount server_2 -perm fs01
server_2 : done
[nasadmin@KitKat log]$ nas_fs -d fs01
id = 25
name = fs01
acl = 0
in_use = False
type = uxfs
worm = off
volume = v111
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,virtual_provision=no
stor_devs = APM00070300475-0011
disks = d30
$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 APM00070300475-0000 CLSTD root_disk 1,2,3,4
2 y 11263 APM00070300475-0001 CLSTD root_ldisk 1,2,3,4
3 y 2047 APM00070300475-0002 CLSTD d3 1,2,3,4
4 y 2047 APM00070300475-0003 CLSTD d4 1,2,3,4
5 y 2047 APM00070300475-0004 CLSTD d5 1,2,3,4
6 y 2047 APM00070300475-0005 CLSTD d6 1,2,3,4
26 y 190833 APM00070300475-0010 CLSTD d26 1,2,3,4
27 y 205281 APM00070300475-001E CLSTD d27 1,2,3,4
28 n 10239 APM00070300475-0032 CLSTD d28 1,2,3,4
29 n 674033 APM00070300475-0033 CLSTD d29 1,2,3,4
30 y 190833 APM00070300475-0011 CLSTD d30 1,2,3,4
31 n 205281 APM00070300475-001F CLSTD d31 1,2,3,4
In this case, deletion of the file system did not release the LUN. Proceed to the next question in this scenario.
Q: I deleted all file systems, but my nas_disk -list still shows "inuse=y" - What am I missing?
A: The LUN was probably allocated to a manual storage pool. Investigate it by performing the following:
$ nas_pool -list
id inuse acl name
3 n 0 clar_r5_performance
24 n 0 perf_dedicated_pool
25 y 0 test_pool
$ nas_pool -i perf_dedicated_pool
id = 24
name = perf_dedicated_pool
description =
acl = 0
in_use = False
clients =
members = d30
default_slice_flag = True
is_user_defined = True
disk_type = CLSTD
server_visibility = server_2,server_3,server_4,server_5
"This pool is in_use = False" means that you can safely delete it with nas_pool -delete. If it was in_use = True, you would need to continue to investigate the particular clients and members that would be listed in this output.
$ nas_pool -d perf_dedicated_pool
id = 24
name = perf_dedicated_pool
description =
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = True
$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 APM00070300475-0000 CLSTD root_disk 1,2,3,4
2 y 11263 APM00070300475-0001 CLSTD root_ldisk 1,2,3,4
3 y 2047 APM00070300475-0002 CLSTD d3 1,2,3,4
4 y 2047 APM00070300475-0003 CLSTD d4 1,2,3,4
5 y 2047 APM00070300475-0004 CLSTD d5 1,2,3,4
6 y 2047 APM00070300475-0005 CLSTD d6 1,2,3,4
26 y 190833 APM00070300475-0010 CLSTD d26 1,2,3,4
27 y 205281 APM00070300475-001E CLSTD d27 1,2,3,4
28 n 10239 APM00070300475-0032 CLSTD d28 1,2,3,4
29 n 674033 APM00070300475-0033 CLSTD d29 1,2,3,4
30 n 190833 APM00070300475-0011 CLSTD d30 1,2,3,4 << Now inuse=n
31 n 205281 APM00070300475-001F CLSTD d31 1,2,3,4
$ nas_volume -l | egrep "d26|inuse"
id inuse type acl name cltype clid
101 y 4 0 d26 1 112
$ nas_volume -i v112
id = 112
name = v112
acl = 0
in_use = True
type = meta
volume_set = d26
disks = d26
clnt_filesys= fs02
Once fs02 has been deleted as well, your nas_disk -list should show d26 as inuse=n:
$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 APM00070300475-0000 CLSTD root_disk 1,2,3,4
2 y 11263 APM00070300475-0001 CLSTD root_ldisk 1,2,3,4
3 y 2047 APM00070300475-0002 CLSTD d3 1,2,3,4
4 y 2047 APM00070300475-0003 CLSTD d4 1,2,3,4
5 y 2047 APM00070300475-0004 CLSTD d5 1,2,3,4
6 y 2047 APM00070300475-0005 CLSTD d6 1,2,3,4
26 n 190833 APM00070300475-0010 CLSTD d26 1,2,3,4
27 y 205281 APM00070300475-001E CLSTD d27 1,2,3,4
28 n 10239 APM00070300475-0032 CLSTD d28 1,2,3,4
29 n 674033 APM00070300475-0033 CLSTD d29 1,2,3,4
30 n 190833 APM00070300475-0011 CLSTD d30 1,2,3,4
31 n 205281 APM00070300475-001F CLSTD d31 1,2,3,4
You can now safely delete the d# from the Celerra.
Note: -perm only works if the LUN is still bound and in the CLARiiON Storage Group.
Once a d# is marked as deleted from the Celerra and no longer shows up in a nas_disk -list, you can safely remove it from the CLARiiON Storage Group without the chance of API error or panic.
DanJost
190 Posts
November 3rd, 2010 12:00
I need to do this as well, but I'm getting a login prompt for Primus self-service (same thing when in Powerlink)... can you post this document somewhere?
ddavisguard
18 Posts
November 3rd, 2010 14:00
Thanks, Dynamox... I ran the util, which showed me what was in use. Since I have file systems configured already (but with no data on them, except one), I think it may be best if I move that little bit of data off the Celerra LUNs and reconfigure from scratch.
Once I move the data, I am assuming I can delete the file systems and then remove all of the LUNs from the Celerra storage group... except for the LUNs where the NAS OS resides, correct?
dynamox
2 Intern • 20.4K Posts
November 3rd, 2010 14:00
Correct. Delete the file systems, then re-run that script to make sure nothing else is using those devices (or run nas_disk -list to make sure there is an "n" in the "inuse" column). Then you can run the following command, which deletes the disk from the Celerra's perspective, removes it from the CLARiiON storage group, and unbinds the LUN at the same time. Please be very, very careful:
nas_disk -delete <d#> -perm -unbind
Rainer_EMC
8.6K Posts
November 4th, 2010 02:00
If nas_disk shows these LUNs / dvols as not in use - yes.
JGreen
46 Posts
November 4th, 2010 06:00
You MUST leave HLUs 0-5 in the Celerra Storage Group intact. These are the Celerra Control LUNs.
ddavisguard
18 Posts
November 15th, 2010 09:00
Thanks for all the suggestions... I have the issue resolved now.
kavan-bhatt
12 Posts
December 2nd, 2013 07:00
Hi Dynamox,
I have a slightly similar issue.
When I add a LUN to the ~filestorage SG, I define the HLU as 16.
Then, when I did a rescan from the GUI, the nas_disk command showed me the disk as d15.
I then deleted that d15 with nas_disk -delete d15 (it was not in use).
Now, when I want to mask the disk again, it shows up in the Celerra as d16, then d17, and then d18... what I am trying to say is that it increments the dvol number every time rather than reusing d15 onward.
Is there any way to set or resolve this?
dynamox
2 Intern • 20.4K Posts
December 2nd, 2013 09:00
Did you use -perm?
kavan-bhatt
12 Posts
December 2nd, 2013 09:00
Actually no, not the first time; when I did it again I did use -perm.
That doesn't seem to make any difference.
dynamox
2 Intern • 20.4K Posts
December 2nd, 2013 09:00
I have a feeling that if you did not use -perm the first time, it left an entry somewhere that at this point can only be removed by support.
kavan-bhatt
12 Posts
December 2nd, 2013 09:00
No