AVM system defined pool

March 10th, 2015 02:00

Hi all,

Last week I created a pool for file for test purposes (because I only had 5 disks). Now I have installed more disks and I want to change the initial configuration from RAID 5 to RAID 6, but I can't because it is a system-defined pool. I have read some posts about this but haven't found anything... can you help me?

Thanks

27 Posts

March 10th, 2015 03:00

Hi Dynamox,

Thanks for your answer. All disks are in use because this is the first system pool, created during the Celerra installation.

I would re-install the Celerra if possible. I did this operation some years ago, but I don't remember which tools I used...

Thanks

2 Intern • 20.4K Posts

March 10th, 2015 03:00

1) Validate that all file systems are gone and the LUNs can be reclaimed (nas_disk -l); the inuse column should show "n".

2) Delete the LUNs (nas_disk -delete -perm -unbind).

3) At this point you can re-create your RAID group (a rough command sketch follows below).
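
A minimal sketch of that sequence from the Control Station (the disk name d7 here is just an example, not taken from this thread; use the names from your own nas_disk -l listing):

# 1) confirm the disk is free to reclaim (inuse column shows "n")
nas_disk -l

# 2) permanently delete the disk and unbind the backing LUN
nas_disk -delete d7 -perm -unbind

# 3) after re-creating the RAID group/LUNs on the block side,
#    rescan the storage from the Control Station
nas_diskmark -mark -all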

2 Intern • 20.4K Posts

March 10th, 2015 05:00

Internal file systems should not be part of the system pool. This is from my VNX5700; d7 is where my "data" LUNs start:

[nasadmin@vnx5700 ~]$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11260  APM00112300123-2007 CLSTD root_disk     1,2
2     y      11260  APM00112300123-2008 CLSTD root_ldisk    1,2
3     y       2038  APM00112300123-2009 CLSTD d3            1,2
4     y       2038  APM00112300123-200A CLSTD d4            1,2
5     y       2044  APM00112300123-200B CLSTD d5            1,2
6     y      65526  APM00112300123-200C CLSTD d6            1,2
7     y    5629553  APM00112300123-012E CLATA d7            1,2
8     y    5629553  APM00112300123-012F CLATA d8            1,2
9     y    5629553  APM00112300123-012C CLATA d9            1,2
10    y    5629553  APM00112300123-012D CLATA d10           1,2

[nasadmin@vnx5700 ~]$ nas_disk -info d3
id        = 3
name      = d3
acl       = 0
in_use    = True
size (MB) = 2038
type      = CLSTD
stor_id   = APM00112300123
stor_dev  = 2009
volume_name = d3
mirrored  = False
servers   = server_2,server_3
   server = server_2          addr=c0t0l2
   server = server_2          addr=c16t0l2
   server = server_3          addr=c0t0l2
   server = server_3          addr=c16t0l2

[nasadmin@vnx5700 ~]$ nas_disk -info d7
id        = 7
name      = d7
acl       = 0
in_use    = True
pool      = clarata_r6
size (MB) = 5629553
type      = CLATA
protection= RAID6(6+2)
stor_id   = APM00112300123
stor_dev  = 012E
volume_name = d7
storage_profiles = clarata_r6
thin      = False
mirrored  = False
servers   = server_2,server_3
   server = server_2          addr=c0t1l2
   server = server_2          addr=c16t1l2
   server = server_3          addr=c0t1l2
   server = server_3          addr=c16t1l2

2 Intern • 20.4K Posts

March 10th, 2015 06:00

Is the nas_disk output complete, or did you chop it off? So you created a pool on the VNX, created a couple of LUNs, and presented them to the VNX Data Movers? If so, then I don't see any pool LUNs presented.

27 Posts

March 10th, 2015 06:00

Hi Dynamox,

This is my output:

[nasadmin@CS0 ~]$ nas_disk -l
id   inuse  sizeMB    storageID-devID   type  name          servers
1     y      11263  CKM00105000265-0000 CLSTD root_disk     1,2
2     y      11263  CKM00105000265-0001 CLSTD root_ldisk    1,2
3     y       2047  CKM00105000265-0002 CLSTD d3            1,2
4     y       2047  CKM00105000265-0003 CLSTD d4            1,2
5     y       2047  CKM00105000265-0004 CLSTD d5            1,2
6     y      32767  CKM00105000265-0005 CLSTD d6            1,2

[nasadmin@CS0 ~]$ nas_disk -info d3
id        = 3
name      = d3
acl       = 0
in_use    = True
size (MB) = 2047
type      = CLSTD
protection= RAID5(4+1)
stor_id   = CKM00105000265
stor_dev  = 0002
volume_name = d3
storage_profiles = clar_r5_performance
virtually_provisioned = False
mirrored  = False
servers   = server_2.faulted.server_3,server_2
   server = server_2.faulted. addr=c0t0l2
   server = server_2.faulted. addr=c16t0l2
   server = server_2          addr=c0t0l2
   server = server_2          addr=c16t0l2

[nasadmin@CS0 ~]$

Thanks

27 Posts

March 10th, 2015 07:00

Hi Dynamox,

The output of the command is complete.

I haven't created any pool on the CX; I followed the VIA wizard to configure the storage, and VIA created the pool.

2 Intern • 20.4K Posts

March 10th, 2015 08:00

What is the model of this system?

2 Intern • 20.4K Posts

March 10th, 2015 14:00

Can you post the output from nas_pool -l?
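
For reference, a sketch of the commands (nas_pool -info takes a pool name; clar_r5_performance is the name shown in the storage_profiles field of your output above):

# list all AVM storage pools
nas_pool -list

# show the members and size of the system-defined pool
nas_pool -info clar_r5_performance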

By the way, it looks like your server_2 has failed over to server_3. You might want to look into it; it could be a hardware issue, or it could be nothing and you just need to fail back.
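
If the hardware checks out and it is just a failback, the usual Control Station command is along these lines (a sketch; verify Data Mover health first):

# restore server_2 from its standby Data Mover
server_standby server_2 -restore mover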

8.6K Posts

March 10th, 2015 14:00

Just use /nas/tools/whereisfs -all to identify what is using what
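
For anyone finding this later: whereisfs is an unsupported helper script shipped on the Control Station, so the path and the -all flag above may vary by NAS code level. A typical run is simply:

# map each file system to its underlying volumes and LUNs
/nas/tools/whereisfs -all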

8 Posts

March 10th, 2015 14:00

It is an old NS-120.
