
January 13th, 2016 06:00

VNX file extend block mapped pool

I am trying to extend the mapped file pool FILE_cl-GFFF3_p1 on a VNX. This is what I am planning to do:

- Create 10 LUNs in the block pool FILE_cl-GFFF3_p1 and add them to the ~filestorage storage group with HLUs greater than 16.

- Rescan storage on the NAS side.

- At this point I am unsure whether I also have to run nas_pool -xtend before the new space shows up in FILE_cl-GFFF3_p1.
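Roughly, the commands I have in mind look like this. This is only a sketch: the SP address, LUN name, size, and the HLU/ALU numbers are placeholders, not the values I will actually use.

# Block side: create a LUN in the pool and present it to the Data Movers
# (all names, sizes, and IDs below are placeholders)
naviseccli -h <sp_address> lun -create -type nonThin -poolName FILE_cl-GFFF3_p1 -capacity 500 -sq gb -name file_lun_example -l 110
naviseccli -h <sp_address> storagegroup -addhlu -gname "~filestorage" -hlu 17 -alu 110

# File side: rescan so the Data Movers discover the new disk volumes
server_devconfig server_2 -create -scsi -all
server_devconfig server_3 -create -scsi -all

# Check whether the mapped pool has picked up the new space
nas_pool -size FILE_cl-GFFF3_p1

Here is the current state of the system: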

# nas_version

7.1.76-4

[nasadmin@IL06VNX10CS0 ~]$ nas_pool -size -all

id           = 42

name         = FILE_cl-GFFF4

used_mb      = 21649049

avail_mb     = 203719

total_mb     = 21852768

potential_mb = 0

id           = 43

name         = FILE_cl-GFFF3_p1

used_mb      = 13469120

avail_mb     = 16956

total_mb     = 13486076

potential_mb = 0

[nasadmin@IL06VNX10CS0 ~]$ nas_pool -info -all

id                   = 42

name                 = FILE_cl-GFFF4

description          = Mapped Pool FILE_cl-GFFF4 on APM0013XXXXXX

acl                  = 0

in_use               = True

clients              = root_fs_vdm_VDM_01,root_fs_vdm_VDM_02,root_fs_vdm_VDM_03,HOSt4x4x05_

members              = v127,v131,v134,v146

storage_system(s)    = APM0013XXXXXX

default_slice_flag   = True

is_user_defined      = False

thin                 = False

tiering_policy       = Auto-Tier/Highest Available Tier

compressed           = False

mirrored             = False

disk_type            = Capacity

server_visibility    = server_2,server_3

volume_profile       = FILE_cl-GFFF4_vp

is_dynamic           = True

is_greedy            = N/A

num_stripe_members   = 5

stripe_size          = 262144

id                   = 43

name                 = FILE_cl-GFFF3_p1

description          = Mapped Pool FILE_cl-GFFF3_p1 on APM0013XXXXXX

acl                  = 0

in_use               = True

clients              = HOSt4x4x05_vol02,HOSt4x4x05_vol03,vol35,vol02,vol04,vol08,vol26,vol27,vol30,vol28,vpfs375

members              = v137,v141

storage_system(s)    = APM0013XXXXXX

default_slice_flag   = True

is_user_defined      = False

thin                 = False

tiering_policy       = Auto-Tier/Highest Available Tier

compressed           = False

mirrored             = False

disk_type            = Mixed

server_visibility    = server_2,server_3

volume_profile       = FILE_cl-GFFF3_p1_vp

is_dynamic           = True

is_greedy            = N/A

num_stripe_members   = 5

stripe_size          = 262144

[nasadmin@IL06VNX10CS0 ~]$ nas_disk -l

id   inuse  sizeMB    storageID-devID   type  name          servers

1     y      11260  APM0013XXXXXX-2007 CLSTD root_disk     1,2

2     y      11260  APM0013XXXXXX-2008 CLSTD root_ldisk    1,2

3     y       2038  APM0013XXXXXX-2009 CLSTD d3            1,2

4     y       2038  APM0013XXXXXX-200A CLSTD d4            1,2

5     y       2044  APM0013XXXXXX-200B CLSTD d5            1,2

6     y      65526  APM0013XXXXXX-200C CLSTD d6            1,2

7     y    1092638  APM0013XXXXXX-007F CAPAC d7            1,2

8     y    1092638  APM0013XXXXXX-0080 CAPAC d8            1,2

9     y    1092638  APM0013XXXXXX-0081 CAPAC d9            1,2

10    y    1348607  APM0013XXXXXX-0064 MIXED d10           1,2

11    y    1348607  APM0013XXXXXX-0065 MIXED d11           1,2

12    y    1348607  APM0013XXXXXX-0066 MIXED d12           1,2

13    y    1348607  APM0013XXXXXX-0067 MIXED d13           1,2

14    y    1348607  APM0013XXXXXX-0068 MIXED d14           1,2

15    y    1348607  APM0013XXXXXX-0069 MIXED d15           1,2

16    y    1348607  APM0013XXXXXX-006A MIXED d16           1,2

17    y    1348607  APM0013XXXXXX-006B MIXED d17           1,2

18    y    1348607  APM0013XXXXXX-006C MIXED d18           1,2

19    y    1348607  APM0013XXXXXX-006D MIXED d19           1,2

20    y    1092638  APM0013XXXXXX-006E CAPAC d20           1,2

21    y    1092638  APM0013XXXXXX-006F CAPAC d21           1,2

22    y    1092638  APM0013XXXXXX-0070 CAPAC d22           1,2

23    y    1092638  APM0013XXXXXX-0071 CAPAC d23           1,2

24    y    1092638  APM0013XXXXXX-0072 CAPAC d24           1,2

25    y    1092638  APM0013XXXXXX-0073 CAPAC d25           1,2

26    y    1092638  APM0013XXXXXX-0074 CAPAC d26           1,2

27    y    1092638  APM0013XXXXXX-0075 CAPAC d27           1,2

28    y    1092638  APM0013XXXXXX-0077 CAPAC d28           1,2

29    y    1092638  APM0013XXXXXX-0076 CAPAC d29           1,2

30    y    1092638  APM0013XXXXXX-0079 CAPAC d30           1,2

31    y    1092638  APM0013XXXXXX-0078 CAPAC d31           1,2

32    y    1092638  APM0013XXXXXX-007B CAPAC d32           1,2

33    y    1092638  APM0013XXXXXX-007A CAPAC d33           1,2

34    y    1092638  APM0013XXXXXX-007D CAPAC d34           1,2

35    y    1092638  APM0013XXXXXX-007C CAPAC d35           1,2

36    y    1092638  APM0013XXXXXX-007E CAPAC d36           1,2

[nasadmin@IL06VNX10CS0 ~]$ nas_fs -info id=66

id        = 66

name      = vol26

acl       = 0

in_use    = True

type      = uxfs

worm      = off

volume    = v244

pool      = FILE_cl-GFFF3_p1

member_of = root_avm_fs_group_43

rw_servers= server_2

ro_servers=

rw_vdms   =

ro_vdms   =

auto_ext  = no,thin=no

fast_clone_level = 1

deduplication   = Off

thin_storage    = False

tiering_policy  = Auto-Tier/Highest Available Tier

compressed= False

mirrored  = False

stor_devs = APM0013XXXXXX-0069,APM0013XXXXXX-006A,APM0013XXXXXX-006B,APM0013XXXXXX-006C,APM0013XXXXXX-006D

disks     = d15,d16,d17,d18,d19

disk=d15   stor_dev=APM0013XXXXXX-0069 addr=c0t1l9         server=server_2

disk=d15   stor_dev=APM0013XXXXXX-0069 addr=c16t1l9        server=server_2

disk=d16   stor_dev=APM0013XXXXXX-006A addr=c0t1l10        server=server_2

disk=d16   stor_dev=APM0013XXXXXX-006A addr=c16t1l10       server=server_2

disk=d17   stor_dev=APM0013XXXXXX-006B addr=c0t1l11        server=server_2

disk=d17   stor_dev=APM0013XXXXXX-006B addr=c16t1l11       server=server_2

disk=d18   stor_dev=APM0013XXXXXX-006C addr=c0t1l12        server=server_2

disk=d18   stor_dev=APM0013XXXXXX-006C addr=c16t1l12       server=server_2

disk=d19   stor_dev=APM0013XXXXXX-006D addr=c0t1l13        server=server_2

disk=d19   stor_dev=APM0013XXXXXX-006D addr=c16t1l13       server=server_2

nas_storage -info -all >> file; more file

id                    = 0000

storage profiles      = 1 - FILE_cl-GFFF3_p1

raid_type             = Mixed

logical_capacity      = 28419379200

num_spindles          = 18 - 1_1_3 0_0_7 0_0_9 0_0_10 0_0_12 0_1_1 0_1_3 1_1_2 1_1_4 0_1_0 1_1_1 0_0_6 0_0_8 0_1_2 1_1_0 0_0_11 0_0_13 0_1_4

num_luns              = 10 - 00109 00103 00108 00107 00102 00106 00100 00104 00101 00105

num_disk_volumes      = 10 - d19 d13 d18 d17 d12 d16 d10 d14 d11 d15

spindle_type          = MIXED

bus                   = mixed

virtually_provisioned = True

name                  = FILE_cl-GFFF3_p1

subscribed_capacity   = 28419379200

physical_capacity     = 29081272320

used_capacity         = 28419379200

free_capacity         = 661893120

percent_full_threshold= 70%

hidden                = False

id                    = 0001

storage profiles      = 3 - clarata_r6,cmata_r6,FILE_cl-GFFF4

raid_type             = RAID6

logical_capacity      = 46069862400

num_spindles          = 16 - 1_1_7 1_1_9 1_1_12 0_1_10 1_1_11 0_1_7 0_1_9 1_1_6 1_1_8 0_1_12 1_1_10 1_1_13 0_1_6 0_1_8 0_1_11 0_1_13

num_luns              = 20 - 00123 00127 00116 00112 00122 00111 00121 00125 00118 00129 00114 00115 00119 00110 00120 00124 00128 00117 00113 00126

num_disk_volumes      = 20 - d32 d7 d26 d22 d33 d21 d30 d34 d29 d9 d24 d25 d28 d20 d31 d35 d8 d27 d23 d36

spindle_type          = NLSAS

bus                   = mixed

virtually_provisioned = True

name                  = FILE_cl-GFFF4

subscribed_capacity   = 46069862400

physical_capacity     = 46109786112

used_capacity         = 46069862400

free_capacity         = 39923712

percent_full_threshold= 70%

hidden                = False

naviseccli -User  storagepool -list

[nasadmin@IL06VNX10CS0 ~]$ /nas/sbin/naviseccli -h storagepool -list

Pool Name:  FILE_cl-GFFF3_p1

Pool ID:  0

Raid Type:  Mixed

Percent Full Threshold:  70

Description:

Disk Type:  Mixed

State:  Ready

Status:  OK(0x0)

Current Operation:  None

Current Operation State:  N/A

Current Operation Status:  N/A

Current Operation Percent Completed:  0

Raw Capacity (Blocks):  38327765393

Raw Capacity (GBs):  18276.103

User Capacity (Blocks):  29081272320

User Capacity (GBs):  13867.031

Consumed Capacity (Blocks):  28419379200

Consumed Capacity (GBs):  13551.416

Available Capacity (Blocks):  661893120

Available Capacity (GBs):  315.615

Percent Full:  97.724

Total Subscribed Capacity (Blocks):  28419379200

Total Subscribed Capacity (GBs):  13551.416

Percent Subscribed:  97.724

Oversubscribed by (Blocks):  0

Oversubscribed by (GBs):  0.000

Disks:

Bus 1 Enclosure 1 Disk 3

Bus 0 Enclosure 0 Disk 7

Bus 0 Enclosure 0 Disk 9

Bus 0 Enclosure 0 Disk 10

Bus 0 Enclosure 0 Disk 12

Bus 0 Enclosure 1 Disk 1

Bus 0 Enclosure 1 Disk 3

Bus 1 Enclosure 1 Disk 2

Bus 1 Enclosure 1 Disk 4

Bus 0 Enclosure 1 Disk 0

Bus 1 Enclosure 1 Disk 1

Bus 0 Enclosure 0 Disk 6

Bus 0 Enclosure 0 Disk 8

Bus 0 Enclosure 1 Disk 2

Bus 1 Enclosure 1 Disk 0

Bus 0 Enclosure 0 Disk 11

Bus 0 Enclosure 0 Disk 13

Bus 0 Enclosure 1 Disk 4

LUNs:  109, 103, 108, 107, 102, 106, 100, 104, 101, 105

Pool Name:  FILE_cl-GFFF4

Pool ID:  1

Raid Type:  r_6

Percent Full Threshold:  70

Description:

Disk Type:  NL SAS

State:  Ready

Status:  OK(0x0)

Current Operation:  None

Current Operation State:  N/A

Current Operation Status:  N/A

Current Operation Percent Completed:  0

Raw Capacity (Blocks):  61550736416

Raw Capacity (GBs):  29349.678

User Capacity (Blocks):  46109786112

User Capacity (GBs):  21986.859

Consumed Capacity (Blocks):  46069862400

Consumed Capacity (GBs):  21967.822

Available Capacity (Blocks):  39923712

Available Capacity (GBs):  19.037

Percent Full:  99.913

Total Subscribed Capacity (Blocks):  46069862400

Total Subscribed Capacity (GBs):  21967.822

Percent Subscribed:  99.913

Oversubscribed by (Blocks):  0

Oversubscribed by (GBs):  0.000

Disks:

Bus 1 Enclosure 1 Disk 7

Bus 1 Enclosure 1 Disk 9

Bus 1 Enclosure 1 Disk 12

Bus 0 Enclosure 1 Disk 10

Bus 1 Enclosure 1 Disk 11

Bus 0 Enclosure 1 Disk 7

Bus 0 Enclosure 1 Disk 9

Bus 1 Enclosure 1 Disk 6

Bus 1 Enclosure 1 Disk 8

Bus 0 Enclosure 1 Disk 12

Bus 1 Enclosure 1 Disk 10

Bus 1 Enclosure 1 Disk 13

Bus 0 Enclosure 1 Disk 6

Bus 0 Enclosure 1 Disk 8

Bus 0 Enclosure 1 Disk 11

Bus 0 Enclosure 1 Disk 13

LUNs:  123, 127, 116, 112, 122, 111, 121, 125, 118, 129, 114, 115, 119, 110, 120, 124, 128, 117, 113, 126


January 18th, 2016 11:00

This is what I did:

Expanded the block pool (FAST) in Unisphere, taking the disks Unisphere recommended, since it tries to match RAID group types with the existing RAID groups in the pool.

Created LUNs in the block pool.

Added the LUNs to the ~filestorage storage group with HLUs greater than 16.

Ran server_devconfig server_x -c -s -a, which scanned the new disks and automatically added them to the existing block mapped file pool (so no separate nas_pool -xtend was needed).

Extended the file system.

Thanks.
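For reference, the file-side steps above look roughly like this on the CLI. The Data Mover names, file system name, and size are placeholders; the block pool expansion and LUN creation were done through Unisphere in this case, so only the rescan and the file system extension are sketched here.

# Rescan on each Data Mover so the new LUNs show up as disk volumes
# (-c -s -a is shorthand for -create -scsi -all)
server_devconfig server_2 -create -scsi -all
server_devconfig server_3 -create -scsi -all

# Confirm the mapped file pool picked up the new capacity
nas_pool -size FILE_cl-GFFF3_p1

# Extend a file system from the pool (file system name and size are placeholders)
nas_fs -xtend vol26 size=100G pool=FILE_cl-GFFF3_p1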
