StorRokr
1 Copper

Adding drives/finding existing templates

Does anyone know how I can query the existing 'shelf template' currently assigned to a drive shelf on a Celerra NS500, and how I can add existing drives to the system?

One of my shelves has 8 open slots and I have some drives I could use, but I'm not clear on the process to expand my system 'safely'. Any documentation links/suggestions are appreciated!

Pete
Rainer_EMC
5 Osmium

Re: Adding drives/finding existing templates

What's your Celerra model - i.e. what does /nas/sbin/model say?
StorRokr
1 Copper

Re: Adding drives/finding existing templates

NS500? I'll check the path you mentioned... Right off the bat I was able to see in Celerra Manager that the template is set to "user_defined"...
Rainer_EMC
5 Osmium

Re: Adding drives/finding existing templates

OK - then I assume it is an NS integrated model (not a gateway)

cd /nas/sbin/setup_backend
./nas_raid list
or
./setup_clariion2 list config CKXXXX

with CKXXXX being the CLARiiON serial number, which you can find with "grep CLAR /etc/hosts"

For these commands to work you need to be root, but with the nasadmin environment - easiest is to log in as nasadmin and then su (without -) to root
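The serial lookup step can be sketched like this. The /etc/hosts sample below is entirely hypothetical (real entries on your Control Station will differ), and the setup_backend commands themselves only run on the Celerra, so they are shown as comments:

```shell
# Hypothetical sample of Celerra /etc/hosts entries -- names/IPs are made up.
hosts_sample='127.0.0.1 localhost
192.168.1.10 CK200051234567_A A_CLARIION
192.168.1.11 CK200051234567_B B_CLARIION'

# Extract the CKxxxx serial the same way "grep CLAR /etc/hosts" would find it.
serial=$(printf '%s\n' "$hosts_sample" | grep CLAR | head -n 1 | awk '{print $2}' | cut -d_ -f1)
echo "$serial"   # CK200051234567

# On the Control Station (as root with the nasadmin environment) you would then run:
#   cd /nas/sbin/setup_backend
#   ./setup_clariion2 list config "$serial"
```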
Rainer_EMC
5 Osmium

Re: Adding drives/finding existing templates

On an NS500, adding drives would normally be done by EMC customer service - but you can use the NS20 docs from Powerlink as guidance

I don't know if direct links would work, so I'll describe how to get there:

open Powerlink
go to Home > Support > Product and Diagnostic Tools > Celerra Tools > NS20 Integrated
then Step 6: Configure for Production
then Configure additional storage for your integrated system
then Non-Fibre Channel (FC) enabled integrated configuration Celerra system

If in doubt, please work through customer service - it's easier to get it right the first time than to clean up a suboptimal config
StorRokr
1 Copper

Re: Adding drives/finding existing templates

I'll take a look at the info..

I'm surprised I would need customer service just to add drives to a shelf? Granted, right now I wouldn't feel comfortable adding them since I can't seem to find a doc with more info... but I come from the NetApp world: we just add the drives to the shelf, launch a GUI (if you want to) to assign the drives to an existing aggregate or create a new one, and use them - you can even pull this off entirely through the GUI. So far my eyes are glossing over with the drive template research (I only seem to find release notes for a 1.5-year-old rev online) and the restrictions - being forced into these preset RAID 1 and RAID 4+1 configs (as defined by the original template when the shelf was first deployed), with no way to change it around later without reconfiguring the entire shelf.

Is it really this restrictive, or am I just looking at this the wrong way? It's definitely a different theory of storage architecture, but the maintenance requirements are so complex I'm wondering if I'm really missing something here... I just can't see calling EMC for every little change. Either way, I want to learn how to do it properly...

Pete
StorRokr
1 Copper

Re: Adding drives/finding existing templates

Thanks for the info - this is what it returns (model: CX500).

The steps you mentioned indicate using Navisphere? I haven't seen anyone using Navisphere around here - even the EMC tech who was last out here just used Celerra Manager... but then again, maybe they went through the shell directly?

Disk group info:
----------------
Disk Group ID: 0 r5 Disks: 0_0_0,0_0_1,0_0_2,0_0_3,0_0_4
Disk Group ID: 8 r1 Disks: 0_0_6,0_0_7
Disk Group ID: 9 r1 Disks: 0_0_8,0_0_9
Disk Group ID: 10 r5 Disks: 0_0_10,0_0_11,0_0_12,0_0_13,0_0_14
Disk Group ID: 11 r1 Disks: 1_0_1,1_0_2
Disk Group ID: 12 r1 Disks: 1_0_3,1_0_4
Disk Group ID: 13 r1 Disks: 1_0_5,1_0_6
Disk Group ID: 14 r5 Disks: 0_1_1,0_1_2,0_1_3,0_1_4,0_1_5,0_1_6,0_1_7


Lun info:
---------
Lun ID: 0 RG ID: 0 State: Bound root_disk
Lun ID: 1 RG ID: 0 State: Bound root_ldisk
Lun ID: 2 RG ID: 0 State: Bound d3
Lun ID: 3 RG ID: 0 State: Bound d4
Lun ID: 4 RG ID: 0 State: Bound d5
Lun ID: 5 RG ID: 0 State: Bound d6
Lun ID: 16 RG ID: 0 State: Bound d7
Lun ID: 17 RG ID: 0 State: Bound d14
Lun ID: 18 RG ID: 8 State: Bound d8
Lun ID: 19 RG ID: 8 State: Bound d15
Lun ID: 20 RG ID: 9 State: Bound d9
Lun ID: 21 RG ID: 9 State: Bound d16
Lun ID: 22 RG ID: 10 State: Bound d10
Lun ID: 23 RG ID: 10 State: Bound d17
Lun ID: 24 RG ID: 11 State: Bound d11
Lun ID: 25 RG ID: 11 State: Bound d18
Lun ID: 26 RG ID: 12 State: Bound d12
Lun ID: 27 RG ID: 12 State: Bound d19
Lun ID: 28 RG ID: 13 State: Bound d13
Lun ID: 29 RG ID: 13 State: Bound d20
Lun ID: 31 RG ID: 14 State: Bound d21
Lun ID: 33 RG ID: 14 State: Bound d22


Spare info:
-----------
Spare ID: 200 Disk: 0_0_5
Spare ID: 201 Disk: 1_0_0
Spare ID: 202 Disk: 0_1_0
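As a side note, the disk-group section above is easy to summarize with a quick awk pass - a sketch that assumes the output format is exactly as pasted ("Disk Group ID: &lt;id&gt; &lt;raid&gt; Disks: &lt;comma-list&gt;"); the sample uses three of the lines:

```shell
# Summarize disks per RAID group from setup_clariion2 "list config" output.
# Sample trimmed from the listing above; the awk relies on the exact field layout.
config='Disk Group ID: 0 r5 Disks: 0_0_0,0_0_1,0_0_2,0_0_3,0_0_4
Disk Group ID: 8 r1 Disks: 0_0_6,0_0_7
Disk Group ID: 14 r5 Disks: 0_1_1,0_1_2,0_1_3,0_1_4,0_1_5,0_1_6,0_1_7'

summary=$(printf '%s\n' "$config" | awk '{
  n = split($NF, d, ",")                 # count comma-separated disk IDs
  print "Group " $4 " (" $5 "): " n " disks"
}')
printf '%s\n' "$summary"
```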
Rainer_EMC
5 Osmium

Re: Adding drives/finding existing templates

The Powerlink doc I pointed to does tell you how to add disks (in complete RAID groups).

Yes, the architecture is different - we don't add single drives, we add RAID groups that automatically get put into a pool, and AVM does the rest.

This has changed with newer models like the NS20FC or NS40FC, where you have more control, but it also requires more work from the customer and a better understanding of the layout.

Most customers don't mind EMC doing the work of adding drives, since it's typically no extra cost when you buy the upgrade drives.

Think of storage pools as (loosely) similar to aggregates - your RAID groups are the building blocks that you normally never change.

Re: Adding drives/finding existing templates

Hi,

I'm not sure if this is the right thread, but I thought I'd give it a shot.

I am using setup_clariion2 to configure, shelf by shelf, the storage pool for file. I see that we can pass parameters like the DAE and a template in the CLI switches, but I am not able to figure out the exact usage of the script. Is there a way to create a template, or even a config file?

Our end goal is to automate the process of creating the "storage pool for file" -> creating a "file system" -> creating "nfs exports". We were able to figure out the latter.

This is what I see from the setup_clariion2 command:

[nasadmin@vnx~]$ /nas/sbin/setup_backend/setup_clariion2

setup_clariion2     [<switches>] setup <storage-system-id>
                                 list config <storage-system-id>
                                 list template <storage-system-id>

Optional switches:
             -e <enclosure>            config one enclousre
             -c <config-file>          config template file
             -t <template>             config template
             -a <ip_address|hostname>  Hostname for SPA
             -b <ip_address|hostname>  Hostname for SPB
             -m                        Mute, no questions will be asked.
             -U                        Running in Web UI mode.
             -n                        Pathname to navicli.

It would be great if someone could help me with this script or point me to someone who can.

Thanks in advance

Mohit Kshirsagar

Rainer_EMC
5 Osmium

Re: Adding drives/finding existing templates

It's really not meant to be used standalone - that's why it isn't documented.

Its purpose is to be called from setup_clariion.

If you are automating - why bother with old-fashioned templates?
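For example, the whole pool -> file system -> export flow can be scripted directly against the standard CLI. A hedged sketch only - the pool name, size, Data Mover, and client below are placeholder assumptions, and DRYRUN=1 just prints the commands so the sequencing can be sanity-checked off-array:

```shell
# Sketch of automating "storage pool for file" consumption end to end.
# nas_fs / server_mountpoint / server_mount / server_export are the standard
# Celerra/VNX file CLI names; every value below is an illustrative placeholder.
DRYRUN=1
run() { if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi; }

POOL=clar_r5_performance   # assumption: an existing AVM pool name
FS=fs_projects             # placeholder file system name
MOVER=server_2             # placeholder Data Mover
CLIENT=client1             # placeholder NFS client

cmds=$(
  run nas_fs -name "$FS" -create size=100G pool="$POOL"
  run server_mountpoint "$MOVER" -create "/$FS"
  run server_mount "$MOVER" "$FS" "/$FS"
  run server_export "$MOVER" -Protocol nfs -option rw="$CLIENT" "/$FS"
)
printf '%s\n' "$cmds"
```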

Rainer
