
January 25th, 2013 18:00

NFS on NS480s

Hi Folks,

I'm having a bit of a nightmare at present. I've inherited an NS480 (not a bad thing, granted), but the SAN appears to be in a bit of a mess and I'm struggling to get my head around what is active storage and what is not.

Anyway, the major problem I have is that I need to re-provision all of its storage as NFS as I clear it off (I know, it's a shame), because the hosts that are due to connect to it have no HBAs and there is no budget to procure any.

So essentially I am left with a full-up NS480 that has one RAID 5 NFS storage group provisioned, which I can't expand even though I've managed to free up 5 disks. When I try to provision storage from those 5 disks I am only allowed to provision them as Performance (RAID 1/0); the storage that is already there is RAID 5 (0.8 TB across 4 disks) and, according to the disk provisioning wizard, I am not allowed to touch it.

I am massively confused here. I would have assumed I could create a LUN from my free disks, then a file system on top of it, and then an NFS share - should it not be that simple? Please help, because I am lost. What's most frustrating is that it's as if the NS480 hardly exists - I can't even find sales pages, let alone manuals.
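For reference, the flow I had in mind from the Control Station looks roughly like this (only a sketch - the pool name, file system name, size and client are placeholders, and it assumes the freed disks are already bound as LUNs and presented to the Celerra):

nas_pool -list                                    # list the storage pools
nas_pool -size -all                               # check available capacity per pool
nas_fs -name nfs_fs1 -create size=500G pool=clar_r5_perf    # create a file system from a pool
server_mountpoint server_2 -create /nfs_fs1       # create a mountpoint on the Data Mover
server_mount server_2 nfs_fs1 /nfs_fs1            # mount the file system
server_export server_2 -Protocol nfs -option rw=client1 /nfs_fs1    # export it over NFS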

G

1 Rookie • 20.4K Posts

January 25th, 2013 18:00

Can you post the output from "nas_pool -l"?

January 28th, 2013 02:00

The Test pool is one I've created, but there is no capacity associated with it.

id      inuse   acl     name
5       y       0       clar_r5_perf
47      n       0       Test
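For completeness, the capacity behind each pool can also be checked from the Control Station (a quick sketch; substitute your own pool name for clar_r5_perf):

nas_pool -size -all           # used/available space for every pool
nas_pool -info clar_r5_perf   # members and attributes of a single pool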

1 Rookie • 20.4K Posts

January 28th, 2013 04:00

Can you also post the output from this command? Use your array serial number (cat /etc/hosts | grep APM):

/nas/sbin/.setup_clariion list config APM00073201234

January 28th, 2013 04:00

What does this command do exactly?

1 Rookie • 20.4K Posts

January 28th, 2013 04:00

It will list the configuration of your backend storage; the output will look something like this:

Enclosure(s) 0_0,1_0,2_0,3_0,0_1,1_1,2_1,3_1,0_2,1_2,2_2,3_2,0_3,1_3,2_3,3_3,0_4,1_4,2_4,3_4,0_5,1_5 are installed in the system.

Enclosure info:
----------------------------------------------------------------
      0   1   2   3   4   5   6   7   8   9   10  11  12  13  14
----------------------------------------------------------------
1_5: 1000100010001000100010001000100010001000100010001000100010001000
ATA  UB   UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB
----------------------------------------------------------------
0_5: 10001000100010001000100010001000100010001000100010001000
ATA  EMP  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB
----------------------------------------------------------------
3_4: 10001000100010001000100010001000100010001000100010001000
ATA  EMP  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB  UB

8.6K Posts

January 28th, 2013 08:00

Don't you have a Powerlink account?

8.6K Posts

January 28th, 2013 08:00

Creating an extra pool won't help you.

If the LUNs are created properly, the Celerra will automatically assign them to the system pools.
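As a rough sketch of what that looks like once the new LUNs are bound and in the Celerra's storage group (standard Control Station commands):

server_devconfig ALL -create -scsi -all   # make the Data Movers rescan and pick up the new LUNs
nas_disk -list                            # the new disk volumes should now appear
nas_pool -size -all                       # and the matching system pool should show the extra capacity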
