
17 July 2013 08:00

VNX: how to delete an unused storage pool from Storage Pools for File

After a LUN has been added to the storage group for the Data Mover (DM), how do I remove it again and reclaim the LUN space?

The problem is that once the LUN has been added to Storage Pools for File, it can no longer be removed from the storage group or deleted.

2.8K messages

18 July 2013 01:00

That case is simpler. The steps are as follows (a command-line sketch follows the list):

1. First delete any snapshots, replication sessions, and other advanced-feature configurations built on this LUN;

2. Then delete the related CIFS server configuration and unmount the file systems from the related VDM;

3. Then delete the file systems;

4. Next, delete the related volumes and run nas_disk to check that the volumes have been removed cleanly;

5. Then, on the block side, remove the LUN from the storage group;

6. Finally, on the file side, run nas_storage -c -a.
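
A minimal command-line sketch of steps 2 through 6, assuming a file system named fs01 mounted on server_2 and a manually created volume v_fs01 on the underlying disk (all names here are hypothetical placeholders for your own configuration; the CIFS and snapshot cleanup in steps 1 and 2 depends on your setup and is not shown):

$ server_umount server_2 -perm fs01   # step 2: permanently unmount the file system from the Data Mover/VDM
$ nas_fs -d fs01                      # step 3: delete the file system
$ nas_volume -d v_fs01                # step 4: delete any related manual volume
$ nas_disk -l                         # step 4: the backing d# should now show inuse=n
  (step 5: remove the LUN from the ~filestorage storage group on the block side, via Unisphere or naviseccli)
$ nas_storage -c -a                   # step 6: re-check back-end storage from the file side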

2.8K messages

17 July 2013 23:00

OP, there are two possible scenarios for your problem, and each is handled differently.

Scenario 1: the LUN has not been removed from the storage pool, and the volumes on it were created and managed by AVM. In this case you can (see the sketch after these steps):

1. Delete the file system and its related information;

2. Delete the corresponding volume.
3. Remove the LUN from the storage pool on the block back end; the LUN space is then released.
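
A minimal sketch of the AVM case, assuming the file system is fs01 mounted on server_2 (hypothetical names); with an AVM-managed pool, the backing d# normally shows inuse=n as soon as the file system is deleted:

$ server_umount server_2 -perm fs01   # permanently unmount the file system
$ nas_fs -d fs01                      # delete the file system
$ nas_disk -l                         # confirm the backing d# shows inuse=n before touching the block side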

Scenario 2: the LUN has already been removed from the file storage pool, but the file system and other related information has not been cleaned up. In this case you need to clean up the related configuration manually from the command line; refer to EMC knowledgebase article 319875. Users who cannot view that article can get help through live chat support.

Finally, the storage group is what provides the LUN masking between the block back end and the file front end. If you delete the storage group itself, the file front end will lose access to all of the back-end storage; a sketch of inspecting it from the block side follows.
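
For reference, the contents of the file-side storage group can be checked from the block side; a sketch assuming the Navisphere/Unisphere CLI is installed and <sp_ip> is replaced with one of the SP addresses (credentials omitted):

$ naviseccli -h <sp_ip> storagegroup -list -gname "~filestorage"   # shows the LUNs (HLU/ALU pairs) masked to the Data Movers

Individual LUNs should only be removed from this group (for example with storagegroup -removehlu) after the file-side cleanup described above is complete; the ~filestorage group itself should never be deleted.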

1 Rookie • 29 messages

18 July 2013 00:00

Thank you very much for the reply, but I may not have expressed myself clearly. What I actually want to know is the correct way to delete a pool from Storage Pools for File and reclaim the corresponding LUN. I am worried that deleting the LUN directly would cause problems, so I have not touched anything yet.

1 Rookie • 29 messages

21 July 2013 05:00

One more question: the delete option in the Volumes page was greyed out, but nas_disk -list confirmed the volume was not in use, so I deleted the volume with nas_disk -delete.

However, when I tried to remove the LUN from the storage group, I got this message:

Error removing from ~filestorage: The specified operation will potentially affect a File System Storage configuration. Please verify that all File Systems, disk volumes, and Storage Pools for File that use the following LUN(s) have been removed prior to removing from this storage group.

I am sure this LUN has never been used; it was only created and added to ~filestorage, and nothing was ever touched on the file side.

2.8K messages

21 July 2013 06:00

There may be a problem with the steps you followed. I am pasting the procedure from emc241051 (a public document) below for your reference.

Question: Safely removing storage from a Celerra allocated from a CLARiiON backend
Environment: Product: Celerra
Environment: Product: CLARiiON
Environment: Product: Celerra attached to a CLARiiON backend
Environment: EMC SW: NAS Code 5.5
Environment: EMC SW: NAS Code 5.6
Problem: Error 3020: d27 : item is currently in use, first delete volume(s)
Change: Reallocating LUN
Change: Removing storage from CLARiiON
Change: Running nas_disk -d
Fix: Removal of LUNs from the Celerra Storage Group on an integrated system requires proper cleanup on the Celerra. Improper removal of storage can result in an inability to scan in new storage and, in some cases, a Data Mover panic.

WARNING! If this procedure is not properly followed, there is a possibility of data loss. Contact EMC Customer Service before proceeding if there are any questions.

This procedure only applies to Celerra volumes attached to CLARiiON arrays!
 
Prerequisites:

• Prior to removing or reassigning disks owned by the Celerra, all exports and mounts on the affected file systems must be permanently unexported and unmounted (see the sketch after these prerequisites).
• Any file systems, metavolumes, stripe volumes, and slices built on the disks that are going to be removed must also be deleted using CLI commands. The next steps show you how to confirm this.
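
A minimal sketch of this prerequisite, assuming an NFS export of a file system fs01 mounted at /fs01 on server_2 (hypothetical names; adjust the protocol, path, and Data Mover for your environment):

$ server_export server_2 -Protocol nfs -unexport -perm /fs01   # permanently remove the export
$ server_umount server_2 -perm fs01                            # permanently unmount the file system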

The following scenarios will assist you in properly removing storage.

Start by looking at the following nas_disk -list output:

$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
30    y     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

If LUNs have file systems or other Celerra features built on them, they will show as "y" in the "inuse" column. You cannot safely remove them from the Celerra at this point. You must first delete all Celerra configuration built on a LUN before it will show up as inuse=n. This includes deleting the file systems, manual volumes, and manual pools that the LUN is allocated to.

Note: If you use pre-defined AVM storage pools, the d# will show up as inuse=n as soon as you delete the file system.
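
Since the inuse flag is the second column of this output, the disks that are already safe to remove can be filtered directly on the Control Station; a small shell sketch:

$ nas_disk -l | awk 'NR == 1 || $2 == "n"'   # print the header line plus only the disks with inuse=n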

Q: How do you identify what d# is a specific LUN?

A: The storageID-devID column in the nas_disk -list output ends in a dash followed by a four-digit number (-0008, for example). This number is the hexadecimal representation of the CLARiiON LUN (0008 = ALU 8, 0010 = ALU 16).
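
A quick way to convert the hexadecimal devID into the decimal ALU number, using the -001E and -0010 entries from the listing above as examples:

$ printf '%d\n' 0x001E    # devID 001E
30
$ printf '%d\n' 0x0010    # devID 0010
16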

Q: How do I know what LUNs a particular file system uses?

A: Run ls -la /nas/tools. Depending on what code level you are at, you will see either .whereisfs (with a dot) or whereisfs (without a dot). Running this script with the -all setting shows you exactly where your storage is:

$ /nas/tools/.whereisfs -all
RG                   FS's
-----                ------
APM00070300475-0000  [ 2] fs01 (d30)                fs02 (d26)

FS                   Resources (RGs: [total # of RG] {repeated for each RG} )
-----                ------
fs01                 RGs: [ 1] APM00070300475-0000; LUNs:  0011
fs02                 RGs: [ 1] APM00070300475-0000; LUNs:  0010

RAID Groups in use:
RG                       LUN (dVols)         FS list
-----                    -------------       --------
APM00070300475-0000      0011 (d30 )         fs01
                          0010 (d26 )         fs02


Note that the number next to the CLARiiON serial number is the RAID group.

You see here that by unmounting and deleting fs01, you can release d30.

$ server_umount server_2 -perm fs01
server_2 : done
[nasadmin@KitKat log]$ nas_fs -d fs01
id        = 25
name      = fs01
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v111
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
stor_devs = APM00070300475-0011
disks     = d30

$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
30    y     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

In this case, deletion of the file system did not release the LUN. Proceed to the next question in this scenario.

Q: I deleted all file systems, but my nas_disk -list still shows "inuse=y" - What am I missing?

A: The LUN was probably allocated to a manual storage pool. Investigate it by performing the following:

1. Check nas_pool -list for custom pools. For example:
 
$ nas_pool -list
id      inuse   acl     name
3       n       0       clar_r5_performance
24      n       0       perf_dedicated_pool
25      y       0       test_pool
2. Get information on the custom pool:

$ nas_pool -i perf_dedicated_pool
id                   = 24
name                 = perf_dedicated_pool
description          =
acl                  = 0
in_use               = False
clients              =
members              = d30
default_slice_flag   = True
is_user_defined      = True
disk_type            = CLSTD
server_visibility    = server_2,server_3,server_4,server_5

"This pool is in_use = False" means that you can safely delete it with nas_pool -delete.  If it was in_use = True, you would need to continue to investigate the  particular clients and members that would be listed in this output.

3. Now that the pool is in_use = False, delete it:
 
$ nas_pool -d perf_dedicated_pool
id                   = 24
name                 = perf_dedicated_pool
description          =
acl                  = 0
in_use               = False
clients              =
members              =
default_slice_flag   = True
is_user_defined      = True
$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
30    n     190833  APM00070300475-0011 CLSTD d30           1,2,3,4 << Now inuse=n
31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

4. If the LUN did not release (inuse=y), there could still be checkpoints or other file systems on the storage (run .whereisfs again), or you may need to delete a custom volume that was also built on that d#. Grep for the d# as shown below (in this example, you are looking to release d26):

$ nas_volume -l | egrep "d26|inuse"
id      inuse type acl  name              cltype  clid
101       y    4   0    d26                  1    112

5. Now investigate the client volume (clid 112) to see why it is in use:
 
$ nas_volume -i v112
id          = 112
name        = v112
acl         = 0
in_use      = True
type        = meta
volume_set  = d26
disks       = d26
clnt_filesys= fs02
6. Another file system (fs02) is still built on this LUN. Permanently unmount and delete that file system, then delete any manual storage pools or volumes on the disk as previously discussed; a sketch follows.
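
A minimal sketch of this cleanup, continuing the same example (fs02 built on metavolume v112 over d26; the mount Data Mover is assumed to be server_2):

$ server_umount server_2 -perm fs02   # permanently unmount the remaining file system
$ nas_fs -d fs02                      # delete the file system
$ nas_volume -d v112                  # delete the metavolume if it still holds d26 after the file system is gone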
Your nas_disk -list should now show the disk as inuse=n:

$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
26    n     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
30    n     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4

You can now safely delete the d# from the Celerra.

7. Use nas_disk -d <d#> -perm to remove the diskmark from the LUN and clear any record of this LUN from the Celerra.

Note: -perm only works if the LUN is still bound and in the CLARiiON Storage Group. 

Once a d# is marked as deleted from the Celerra and no longer shows up in a nas_disk -list, you can safely remove it from the CLARiiON Storage Group without the chance of API error or panic.
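
Continuing the example, a sketch of removing the released disks from the Celerra before touching the Storage Group (d26 and d30 are the disk names from the earlier listings):

$ nas_disk -d d26 -perm   # clears the diskmark; -perm only works while the LUN is still in the Storage Group
$ nas_disk -d d30 -perm
$ nas_disk -l             # d26 and d30 should no longer appear in the list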

8. After the LUNs are removed from the Storage Group, verify LUN ownership with nas_storage -c -a. If no errors show, run server_devconfig ALL -create -scsi -all to update the Data Movers.
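
The closing verification from the Control Station, using the two commands named in step 8:

$ nas_storage -c -a                          # check back-end storage; should complete with no errors
$ server_devconfig ALL -create -scsi -all    # rescan SCSI devices on all Data Movers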


