
September 3, 2013, 21:00

The Data Mover's LUNs were removed from the storage group, but the pool used to create file systems still shows space

The Data Mover's LUNs have been removed from the storage group, but the pool used to create NAS file systems still shows available space. How do I get the data synchronized?

As the screenshot below shows, the Data Mover's LUNs are already gone...

[Screenshot: fileserver_storagegroup_luns.JPG]

Creating a file system still shows pool space available. How can I synchronize the data? Thanks!!!

[Screenshot: file pool.jpg]

2 Intern • 2.8K messages

September 4, 2013, 22:00

OP, this kind of issue was discussed in an earlier thread (https://community.emc.com/thread/178256). Generally there are two scenarios when a LUN is removed from a storage pool:

Scenario 1: the LUN has not yet been removed from the storage pool, and the managed volumes on it were created with AVM. In that case you can proceed as follows (an example command sequence follows this list):

1. First delete any snapshots, replication sessions, or other advanced-feature configuration built on this LUN;

2. Then delete the related CIFS server configuration and unmount the file systems on the related VDMs;

3. Then delete the file systems;

4. Next delete the related volumes, then run nas_disk to check that the volumes have been removed cleanly;

5. Next, on the block side, remove the LUN from the storage group;

6. Finally, on the file side, run nas_storage -c -a.
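For reference, a rough sketch of that scenario-1 sequence as run from the Control Station is shown below. The names myfs, server_2 and d26 are placeholders rather than values from your system, so substitute your own:

$ server_umount server_2 -perm myfs     # permanently unmount the file system (steps 1-2 above come first)
$ nas_fs -d myfs                        # delete the file system
$ nas_volume -l | egrep "d26|inuse"     # look for manual metavolumes/stripes/slices still on the disk
$ nas_volume -delete <volume_name>      # delete any leftover manual volume found above (assumed form; check the man page)
$ nas_disk -l                           # the disk should now show inuse=n
# ... remove the LUN from the file Storage Group on the block side ...
$ nas_storage -c -a                     # re-check the backend from the file side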


Scenario 2: the LUN has already been removed from the file storage pool, but the file system information was never cleaned up. In that case the related configuration has to be cleaned up manually from the command line; see EMC knowledgebase article 319875. Users who cannot view that article can get help by logging in to live chat support.

Finally, the storage group is what provides LUN masking between the back-end block side and the front-end file side; if you delete the storage group, the file front end loses access to all back-end storage. Your case is the second scenario, so follow that document.

1.8K messages

September 3, 2013, 22:00

What output do you get from nas_disk -list?

68 messages

September 3, 2013, 22:00

I need to re-assign LUNs to the Data Mover, but I have to delete the current pool first...

2 Intern • 3.2K messages

September 3, 2013, 22:00

If you don't need it, can't you just leave it alone?

68 messages

September 3, 2013, 22:00

[nasadmin@prme-stfc-vnx5700-01cs1 ~]$ nas_disk -list

id   inuse  sizeMB    storageID-devID   type  name          servers

1     y      11260  APM00124117790-2007 CLSTD root_disk     1,2

2     y      11260  APM00124117790-2008 CLSTD root_ldisk    1,2

3     y       2038  APM00124117790-2009 CLSTD d3            1,2

4     y       2038  APM00124117790-200A CLSTD d4            1,2

5     y       2044  APM00124117790-200B CLSTD d5            1,2

6     y      65526  APM00124117790-200C CLSTD d6            1,2

7     n   16777215  APM00124117790-0004 CAPAC d7            1,2

8     y    5135129  APM00124117790-0013 CAPAC d8            1,2

9     y   16887807  APM00124117790-000E CLATA d9            1,2

10    y   12277759  APM00124117790-0007 CLATA d10           1,2

11    y   10485759  APM00124117790-0009 CLATA d11           1,2

68 messages

September 4, 2013, 00:00

8     y    5135129  APM00124117790-0013 CAPAC d8            1,2

9     y   16887807  APM00124117790-000E CLATA d9            1,2

10    y   12277759  APM00124117790-0007 CLATA d10           1,2

11    y   10485759  APM00124117790-0009 CLATA d11           1,2

These have all already been removed from the storage group...

1.8K messages

September 4, 2013, 00:00

Among the entries showing inuse=y, are any of them the LUNs you removed earlier?

1.8K messages

September 4, 2013, 02:00

Use nas_pool -info to check whether the pool you deleted earlier is still there and to find its pool ID, then:

nas_pool -delete id=<pool ID> -perm

68 messages

September 4, 2013, 02:00

[nasadmin@prme-stfc-vnx5700-01cs1 ~]$ nas_pool -list

id      inuse   acl     name                      storage system

10      y       0       clarata_archive           APM00124117790

18      y       0       clarata_r6                APM00124117790

42      y       0       Pool 0                    APM00124117790

[nasadmin@prme-stfc-vnx5700-01cs1 ~]$ nas_pool -delete id=42  -deep

Error 2216: Pool 0 : item is currently in use by filesystems = FS_Data18,FS_Data22,FS_Data26,FS_Data30,FS_Data35,FS_Data39,FS_Data43,FS_Data47,FS_Data52,FS_Data56,FS_Data60,FS_Data65,FS_Data69,FS_Data77,FS_Data82,FS_Data86,FS_Data90,FS_Data94,FS_Data99,FS_Data103,FS_Data107,FS_Data111,FS_Data116,FS_Data120,FS_Data124,FS_Data129,FS_Data133,FS_Data137,FS_Data141,FS_Data146,FS_Data150,FS_Data154,FS_Data158,FS_Data163,FS_Data167,FS_Data171,FS_Data175,FS_Data180,FS_Data184,FS_Data188,FS_Data193,FS_Data197,FS_Data201,FS_Data205,FS_Data210,FS_Data214,FS_Data218,FS_Data222,FS_Data227,FS_Data231,FS_Data235,FS_Data239,FS_Data244,FS_Data248,FS_Data252,NAS_Data123,NAS_Data127,NAS_Data131,NAS_Data135,NAS_Data140,NAS_Data144,NAS_Data148,NAS_Data152,NAS_Data157,NAS_Data161,NAS_Data165,NAS_Data169,NAS_Data174,NAS_Data178,NAS_Data182,NAS_Data187,NAS_Data191,NAS_Data195,NAS_Data199,NAS_Data204,NAS_Data208,NAS_Data212,NAS_Data216,NAS_Data221,NAS_Data225,NAS_Data229,NAS_Data233,NAS_Data238,NAS_Data242,NAS_Data246,NAS_Data251,NAS_Data255,NAS_Data7,NAS_Data12,NAS_Data16,NAS_Data20,NAS_Data24,NAS_Data29,NAS_Data33,NAS_Data37,NAS_Data41,NAS_Data46,NAS_Data50,NAS_Data54,NAS_Data59,NAS_Data63,NAS_Data67,NAS_Data3,ss

But I have already deleted all of these file systems........

2 Intern • 1.1K messages

September 4, 2013, 02:00

These are system-defined storage pools, so they cannot be deleted. But the LUNs do have to be removed from these pools:


$ nas_pool -shrink {<name>|id=<id>} [-storage <system_name>]

-volumes <volume_name>[,...]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<system_name> = name of the storage system, used to differentiate pools when the same pool name is used in multiple storage systems

<volume_name> = names of the volumes separated by commas

Example:

To remove volumes d130 and d133 from the storage pool named marketing, type:

$ nas_pool -shrink marketing -volumes d130,d133
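Applied to the pools shown earlier in this thread, that would look roughly like the sketch below. The member names d10,d11 are placeholders; check the actual members with nas_pool -info first, and note that the shrink only succeeds once nothing is still built on those disk volumes:

$ nas_pool -info clarata_archive                     # list the pool's member disk volumes
$ nas_pool -shrink clarata_archive -volumes d10,d11  # placeholders: use the members reported by -info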

2 Intern • 913 messages

September 4, 2013, 06:00

The correct deletion order is: delete the shares, then delete the file systems, then delete the disks from the command line with nas_disk -delete, then go back to the management GUI and move the LUNs out of the file storage group, and only then delete the LUNs. You deleted things in the wrong order, so even though the file systems are gone, nas_disk -list still shows the old LUNs as in use. Those LUNs have to be forcibly cleaned up before the space can be reclaimed. As for how to do that, I'll leave it to the other experts!!! (nas_disk -delete probably won't work at this point.)
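To illustrate that order with placeholder names (an NFS export /myfs on server_2, backed by disk d26):

$ server_export server_2 -unexport -perm /myfs  # remove the share/export first
$ server_umount server_2 -perm myfs             # permanently unmount the file system
$ nas_fs -d myfs                                # delete the file system
$ nas_disk -d d26 -perm                         # delete the disk while the LUN is still in the Storage Group
# ... then move the LUN out of the file Storage Group in the GUI, and only then destroy the LUN ...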

1.8K messages

September 4, 2013, 06:00

Hope this KB helps the OP:

    • Question:
    • Safely removing storage from a Celerra allocated from a CLARiiON backend
    • Environment:
    • Product: Celerra
    • Environment:
    • Product: CLARiiON
    • Environment:
    • Product: Celerra attached to a CLARiiON backend
    • Environment:
    • EMC SW: NAS Code 5.5
    • Environment:
    • EMC SW: NAS Code 5.6
    • Problem:
    • Error 3020: d27 : item is currently in use, first delete volume(s)
    • Change:
    • Reallocating LUN
    • Change:
    • Removing storage from CLARiiON
    • Change:
    • Running nas_disk -d
    • Fix:
    • Removal of LUNs from  the Celerra Storage Group on an integrated system requires proper clean  up on the Celerra. Improper removal of storage can result in an  inability to scan in new storage and, in some cases, DataMover  panic.   
    • WARNING!  If this procedure is not properly followed there is  a possibility of data loss.  Contact EMC Customer Service before  proceeding if there are any questions.
    • This procedure only applies to Celerra volumes attached to CLARiiON arrays!
    •  
    • Prerequisites:
    • Prior to removing or reassigning disks owned by Celerra, all exports  and mounts on file systems must be permanently unexported and  unmounted.
    • Any file systems, metavolumes, stripe volumes, and slices built on  the disks that are going to be removed must also be deleted using CLI  commands. The next steps show you how to confirm this. 
    • The following are a number of scenarios to assist you properly remove storage.
    • Start by looking at the following  nas_disk -list output:
    • $ nas_disk -l
    • id   inuse  sizeMB    storageID-devID   type  name          servers
    • 1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
    • 2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
    • 3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
    • 4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
    • 5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
    • 6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
    • 26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
    • 27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
    • 28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
    • 29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
    • 30    y     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
    • 31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4
    • If LUNs have file systems or other Celerra features built on them,  they will show as "y" in the "inuse" column. You cannot safely remove  them from a Celerra system at this point. You must first delete all  Celerra configuration options from this LUN before it will show up as  inuse=n. This includes deleting the file systems, manual volumes and  manual pools this LUN is allocated with.
    • Note: If you use pre-defined AVM storage pools, the d# will show up as inuse=n as soon as you delete the file system.
    • Q: How do you identify what d# is a specific LUN?
    • A: The fifth column in the nas_disk -list is a dash  with a four digit number (-0008 for example). This number is a  hexadecimal representation of the CLARiiON LUN (0008 = ALU 8, 0010 = ALU  16).
    • Q: How do I know what LUNs a particular file system uses?
    • A: Run ls -la /nas/tools. Depending on what code level you are at, you will see either .whereisfs (with a dot) or whereisfs (without a dot). Running this script with the -all setting shows you exactly where your storage is:
    • $ /nas/tools/.whereisfs -all
    • RG                   FS's
    • -----                ------
    • APM00070300475-0000  [ 2] fs01 (d30)                fs02 (d26)
    • FS                   Resources (RGs: [total # of RG] {repeated for each RG} )
    • -----                ------
    • fs01                 RGs: [ 1] APM00070300475-0000; LUNs:  0011
    • fs02                 RGs: [ 1] APM00070300475-0000; LUNs:  0010
    • RAID Groups in use:
    • RG                       LUN (dVols)         FS list
    • -----                    -------------       --------
    • APM00070300475-0000      0011 (d30 )         fs01
    •                           0010 (d26 )         fs02
    • Note that the number next to the CLARiiON serial number is the RAID group . 
    • You see here that by unmounting and deleting fs01, you can release d30.
    • $ server_umount server_2 -perm fs01
    • server_2 : done
    • [nasadmin@KitKat log]$ nas_fs -d fs01
    • id        = 25
    • name      = fs01
    • acl       = 0
    • in_use    = False
    • type      = uxfs
    • worm      = off
    • volume    = v111
    • rw_servers=
    • ro_servers=
    • rw_vdms   =
    • ro_vdms   =
    • auto_ext  = no,virtual_provision=no
    • stor_devs = APM00070300475-0011
    • disks     = d30
    • $ nas_disk -l
    • id   inuse  sizeMB    storageID-devID   type  name          servers
    • 1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
    • 2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
    • 3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
    • 4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
    • 5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
    • 6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
    • 26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
    • 27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
    • 28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
    • 29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
    • 30    y     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
    • 31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4
    • In this case, deletion of the file system did not release the LUN. Proceed to the next question in this scenario.
    • Q: I deleted all file systems, but my nas_disk -list still shows "inuse=y" - What am I missing?
    • A: The LUN was probably allocated to a manual storage pool. Investigate it by performing the following:
    1. Check nas_pool -list for custom pools. For example:
       
      $ nas_pool -list
      id      inuse   acl     name
      3       n       0       clar_r5_performance
      24      n       0       perf_dedicated_pool
      25      y       0       test_pool
    2. Get information on the custom pool:  $ nas_pool -i perf_dedicated_pool
      id                   = 24
      name                 = perf_dedicated_pool
      description          =
      acl                  = 0
      in_use               = False
      clients              =
      members              = d30
      default_slice_flag   = True
      is_user_defined      = True
      disk_type            = CLSTD
      server_visibility    = server_2,server_3,server_4,server_5
      "This pool is in_use = False" means that you can safely delete it with nas_pool -delete.  If it was in_use = True, you would need to continue to investigate the  particular clients and members that would be listed in this output.
    3. Now that the pool is in_use = False, delete it:
       
      $ nas_pool -d perf_dedicated_pool
      id                   = 24
      name                 = perf_dedicated_pool
      description          =
      acl                  = 0
      in_use               = False
      clients              =
      members              =
      default_slice_flag   = True
      is_user_defined      = True
      $ nas_disk -l
      id   inuse  sizeMB    storageID-devID   type  name          servers
      1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
      2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
      3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
      4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
      5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
      6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
      26    y     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
      27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
      28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
      29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
      30    n     190833  APM00070300475-0011 CLSTD d30           1,2,3,4 << Now inuse=n
      31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4
    4. If the LUN did not release (inuse=y), then there could still be checkpoints or other file systems on the storage (run .whereisfs again) or you need to delete a custom volume that was also built on that d#. Grep for the disk name like below (in this example, you are looking to release d26): $ nas_volume -l | egrep "d26|inuse"
      id      inuse type acl  name              cltype  clid
      101       y    4   0    d26                  1    112
    5. Now investigate why inuse=y for volume 112:
       
      $ nas_volume -i v112
      id          = 112
      name        = v112
      acl         = 0
      in_use      = True
      type        = meta
      volume_set  = d26
      disks       = d26
      clnt_filesys= fs02
    6. Another file system is still built on this LUN. Permanently unmount and delete the file system. Then delete any manual storage pools or volumes on the disk as previously discussed.
      Your nas_disk -list should now show the disk as inuse=n:
      $ nas_disk -l
      id   inuse  sizeMB    storageID-devID   type  name          servers
      1     y      11263  APM00070300475-0000 CLSTD root_disk     1,2,3,4
      2     y      11263  APM00070300475-0001 CLSTD root_ldisk    1,2,3,4
      3     y       2047  APM00070300475-0002 CLSTD d3            1,2,3,4
      4     y       2047  APM00070300475-0003 CLSTD d4            1,2,3,4
      5     y       2047  APM00070300475-0004 CLSTD d5            1,2,3,4
      6     y       2047  APM00070300475-0005 CLSTD d6            1,2,3,4
      26    n     190833  APM00070300475-0010 CLSTD d26           1,2,3,4
      27    y     205281  APM00070300475-001E CLSTD d27           1,2,3,4
      28    n      10239  APM00070300475-0032 CLSTD d28           1,2,3,4
      29    n     674033  APM00070300475-0033 CLSTD d29           1,2,3,4
      30    n     190833  APM00070300475-0011 CLSTD d30           1,2,3,4
      31    n     205281  APM00070300475-001F CLSTD d31           1,2,3,4
      You can now safely delete the d# from the Celerra.
    1. Use the nas_disk -d d# -perm to remove the diskmark from the LUN and clear any record of this LUN from the Celerra.
      Note:
      -perm only works if the LUN is still bound and in the CLARiiON Storage Group. 
      Once a d# is marked as deleted from the Celerra and no longer shows up in a
      nas_disk -list, you can safely remove it from the CLARiiON Storage Group without the chance of API error or panic.
    2. After the LUNs are removed from the Storage Group, verify LUN ownership with nas_storage -c -a.  If no errors show, run server_devconfig ALL -create -scsi -all to update the Data Movers.
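Putting the KB's last two steps together for the system in this thread, under normal conditions it would look roughly like the sketch below. The disk d8 and its devID suffix 0013 are taken from the nas_disk listing posted above purely as an example; nothing here should be run until the disk actually shows inuse=n:

$ printf '%d\n' 0x0013                       # devID 0013 in hex = CLARiiON ALU 19
$ nas_disk -d d8 -perm                       # clears the diskmark; only works while the LUN is still in the Storage Group
# ... remove the LUN from the file Storage Group on the block side ...
$ nas_storage -c -a                          # should complete with no errors
$ server_devconfig ALL -create -scsi -all    # update the Data Movers' device tables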


68 messages

September 4, 2013, 20:00

[nasadmin@prme-stfc-vnx5700-01cs1 ~]$ nas_disk -list

id   inuse  sizeMB    storageID-devID   type  name          servers

1     y      11260  APM00124117790-2007 CLSTD root_disk     1,2

2     y      11260  APM00124117790-2008 CLSTD root_ldisk    1,2

3     y       2038  APM00124117790-2009 CLSTD d3            1,2

4     y       2038  APM00124117790-200A CLSTD d4            1,2

5     y       2044  APM00124117790-200B CLSTD d5            1,2

6     y      65526  APM00124117790-200C CLSTD d6            1,2

8     y    5135129  APM00124117790-0013 CAPAC d8            1,2

9     y   16887807  APM00124117790-000E CLATA d9            1,2

10    y   12277759  APM00124117790-0007 CLATA d10           1,2

11    y   10485759  APM00124117790-0009 CLATA d11           1,2

[nasadmin@prme-stfc-vnx5700-01cs1 ~]$ nas_pool -list

id      inuse   acl     name                      storage system

10      y       0       clarata_archive           APM00124117790

18      y       0       clarata_r6                APM00124117790

42      y       0       Pool 0                    APM00124117790

These are all showing as in use, so it's no good; nas_disk -d and nas_pool -d have no effect...

2 Intern • 913 messages

September 4, 2013, 20:00

Hi born, could you share the number of that KB article? Thanks!!!
