vimal82

Re: celerra file system creation failed and now allocated as raw

Have you tried performing a rescan with server_devconfig server_2 -create -scsi -disks?
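For reference, a rescan from the Control Station typically looks like this (a sketch based on the commands used later in this thread; -probe previews without committing anything):

```shell
# Preview what a rescan would find, without changing the saved device table:
server_devconfig server_2 -probe -scsi -disks

# Commit the rescan so the Data Mover picks up the current LUNs:
server_devconfig server_2 -create -scsi -disks
```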

Scott Riley

Re: celerra file system creation failed and now allocated as raw

I'm always amazed by how long these arrays keep plugging along.  You're at least a year past EOL support.  That CX3 is probably at least 7 years old.  You might get this issue resolved eventually -- on the other hand, a CX3 DPE on ebay would probably be cheaper than hiring someone who can fix it.

Better yet, do what I did and turn your old storage into a Kegerator...much better than taking a shotgun to it!

(attached image: symmerator-2.jpg)

fl1

Re: celerra file system creation failed and now allocated as raw

I did a server_devconfig server_2 -list -scsi -disks and I can see all the disks, including the c16t2l12 one which is generating the diskmark error:

d31     c0t2l12     APM....2267     002C

d31     c16t2l12     APM...2267     002C

From what I see on the list, the serial number is the same for all the disks.

Like I said, it looks like the file system creation process failed and stopped somewhere between allocating the space on disk and marking it as allocated, and the step that creates/presents the file system.

If someone can give me details on each of the steps the Celerra goes through when you tell it to create a new file system, then maybe I can isolate where it failed and focus my troubleshooting efforts.

Thinking back, I remember that this shelf of 15K FC drives had a bunch of drive failures (I think 5-6 in total), all within 24 hours. Luckily I had nothing important on the disks at the time, and I was able to move and delete all the file systems that were on them. But it was stuck in a rebuild/transitioning state, and I had to force-delete the LUNs and RGs from the Clariion side and recreate them after I replaced all the failed disks. Until now I had not created any new file systems on these new LUNs and RGs, which are part of the same Celerra storage pool. I don't know if this contributed to the problem I'm having, since my other storage pools and disks are working fine and I can delete and create file systems in those pools without issue.

I'm wondering if there is a way to delete the raw allocation for this file system so that I can try again, but I can't figure out how to do that.


Re: celerra file system creation failed and now allocated as raw

First, yes, you have to start the cleanup from the Celerra side.

I also wonder whether "14 FC drives across 2 RG and 5 LUNs" is a configuration supported by DART. I suspect you may have to create 4+1 R5 RG sets within the CX and then present the LUNs to the Data Movers.

To clean up, the nas_fs and nas_disk commands should help.
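For example, a cleanup pass with those commands might look roughly like this (a sketch only; d31 is the disk name from this thread, the file system name is a placeholder, and the exact flags should be verified against the Celerra man pages):

```shell
# Find the half-created file system and the disk objects behind it:
nas_fs -list          # look for the entry stuck in the raw state
nas_disk -list        # note the affected d## entries (e.g. d31)

# Remove the orphaned objects, file system first, then the disks:
nas_fs -delete <fs_name>   # <fs_name> is a placeholder for the raw entry
nas_disk -d d31            # repeat for each affected disk
```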

Good Luck!
Sameer

Peter_EMC

Re: celerra file system creation failed and now allocated as raw

fl1 wrote:

I did a server_devconfig server_2 -list -scsi -disks and I can see all the disks, including the c16t2l12 one which is generating the diskmark error:

d31     c0t2l12     APM....2267     002C

d31     c16t2l12     APM...2267     002C

From what I see on the list, the serial number is the same for all the disks.

...

Instead of -list (which shows the LUNs as of the last time a -create was done), you should do a -probe; that will show you the current LUNs.
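The difference between the two, in command form (same syntax as used earlier in the thread):

```shell
# Cached view: the LUNs as recorded at the last -create (can be stale):
server_devconfig server_2 -list -scsi -disks

# Live view: query the back end right now, without saving anything:
server_devconfig server_2 -probe -scsi -disks
```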

fl1

Re: celerra file system creation failed and now allocated as raw

OK, after working on this off and on, I finally resolved the problem.

These were the steps I used.

1) Deleted any file systems on the affected LUNs/volumes/disks (I didn't have any, because I had already deleted them beforehand).

2) Removed the LUNs from the storage group on the Clariion side.

3) Unbound the LUNs on the Clariion side.

4) Deleted the LUNs on the Clariion side.

5) Deleted the disks with the nas_disk -d d# command on the Celerra side.

6) Ran nas_storage -c -a to make sure there were no errors.

7) Ran server_devconfig server_2 -create -scsi -all and server_devconfig server_3 -create -scsi -all to update the Data Movers.

8) Ran nas_disk -list and nas_storage -c -a to verify the disks were deleted and there were no errors.

9) Recreated the LUNs on the Clariion side.

10) Bound the LUNs and added them to the Celerra storage group on the Clariion side.

11) Ran server_devconfig server_2 -create -scsi -all and server_devconfig server_3 -create -scsi -all again to update the Data Movers.

12) Created a new file system, and everything is happy again.
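For anyone hitting this later, the Celerra-side part of the sequence above condenses to roughly this (the LUN removal/unbind/rebind was done on the Clariion side in between; d31 is an example disk name):

```shell
# After removing, unbinding, and deleting the LUNs on the Clariion side:
nas_disk -d d31        # delete each stale disk object (repeat per d##)
nas_storage -c -a      # check the back end for errors

# Refresh both Data Movers' device tables:
server_devconfig server_2 -create -scsi -all
server_devconfig server_3 -create -scsi -all

# Verify the disks are gone and the array still checks clean:
nas_disk -list
nas_storage -c -a

# After recreating and binding the LUNs on the Clariion side, rescan again:
server_devconfig server_2 -create -scsi -all
server_devconfig server_3 -create -scsi -all
```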

Thanks all for your suggestions, and thanks to Google, where I found the bits and pieces I used.
