Unsolved


3 Posts


April 15th, 2008 20:00

Filesystem Create Error NS20

Hey guys,

I'm getting a strange error when trying to create a filesystem from a storage pool.

Create file system test. System error message (Error #5005):"failed to complete command"
Create file system test. System was unable to create a file system with the given parameters.

Any thoughts?

It's an R5 LUN / standard template / can't create any filesystems regardless of config.

John


3 Posts

April 16th, 2008 17:00

Hi Crualoich,

Unfortunately the system has been shut down and is currently unavailable (shipping and power). Once it arrives and is back online I will post further details.

thanks

John

11 Posts

April 22nd, 2008 10:00

Hi there,

I'm having the same issue.

I have a NS22 with 15 x 300 GB FC.
On the initial RAID5 (with the OS on it) I have 2 user LUNs of 455 GB (or 2 data volumes), which I have combined into 1 meta volume. No problems here creating a file system on top of this meta volume.

On the other 9 disks I have chosen to create a RAID6 (user-defined template, clar-r6) with "setup_clariion -init". This gives me 2 disk volumes of 1 TB, which in turn I combine into 1 meta volume (I realize this is a bit much, but we just recently bought the NS20 and I'm still playing around to get to know the thing).

/nas/bin/nas_volume -name mv2 -create -Meta d9,d10

This works fine.

But when I want to create a file system on it, it gives me the following (very elaborate) error:

[root@SANPRODCS bin]# /nas/bin/nas_fs -name ufs2 -create mv2
Error 5005: failed to complete command

I also tried it with AVM in the web GUI, but same result.

Am I missing something?

If you helpful people need more info, please specify the file and location (for example, which log file?).

Thx in advance!!

Operator

8.6K Posts

April 22nd, 2008 16:00

It would help if you could show the lines from "server_log server_2" from around the time when you executed the command.
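
If you're not sure how to pull that, something like this from the Control Station should do it (just a sketch; the tail count is arbitrary):

# print the Data Mover log and keep the most recent entries
/nas/bin/server_log server_2 | tail -100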

11 Posts

April 23rd, 2008 02:00

After executing the command again, I get this in the server_log server_2 output:

2008-04-23 17:30:31: STORAGE: 3: Input size 1099246 MB does not match actual volume size 824434 MB
2008-04-23 17:30:31: ADMIN: 3: Command failed: volume disk 94 c0t1l2 disk_id=9 size=1099246
Logical Volume 168 not found.
2008-04-23 17:30:31: ADMIN: 3: Command failed: volume delete 168
Logical Volume 95 not found.
2008-04-23 17:30:31: ADMIN: 3: Command failed: volume delete 95
Logical Volume 94 not found.
2008-04-23 17:30:31: ADMIN: 3: Command failed: volume delete 94

Something is wrong, but like I said, this is all new to me!! Maybe you can get your head around this one :)!!
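
In case it helps, these are the kind of commands I've been using to compare the sizes (a sketch; mv2 is my meta volume from above):

# list all disk volumes with their sizeMB and inuse flag
/nas/bin/nas_disk -list
# report total/available space on the meta volume
/nas/bin/nas_volume -size mv2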

(Isn't there a command to clean up previously used, now unused volumes? Maybe I already played around too much :o?!)
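
From the man pages it looks like unused volumes can be spotted and removed with something like this (just guessing on my part):

# volumes listed with inuse=n have nothing built on top of them
/nas/bin/nas_volume -list
# delete an unused volume by name (taken from the -list output)
/nas/bin/nas_volume -delete <volume_name>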

3 Posts

May 4th, 2008 17:00

Hey Guys,

Yes, I am still having this problem as well. I have removed the R5 (8+1) LUN which I created and made a different LUN, R5 (4+1), and I still cannot create any file systems on this LUN; again it gives me that same message. What are some outputs that I can give you in order to diagnose this problem a little better?

John

Operator

8.6K Posts

May 5th, 2008 07:00

Please open a case with EMC support so that they can have a look, preferably with dial-in if possible.

Normally the only things you have to do are:
use setup_clariion or the corresponding piece in Celerra Manager to get the raidgroups and LUNs created
press Rescan Storage in Celerra Manager
then just create your file system using the storage pool
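
From the CLI the last step looks something like this (the pool name below is just an example; check nas_pool -list for the ones on your system):

# show the storage pools known to the Celerra
/nas/bin/nas_pool -list
# create a 100 GB file system out of a pool via AVM
/nas/bin/nas_fs -name ufs1 -create size=100G pool=clar_r5_performance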

11 Posts

May 7th, 2008 00:00

We solved the problem but don't really know how :s!! We used "nas_pool -delete" to remove the storage pool and "nas_raid -delete diskgroup" to remove the raid group, but it still showed up in "setup_clariion -init".

Suddenly, after letting "setup_clariion -init" do its work, the disk volumes available to us were no longer d9 and d10 but d15 and d16, and the correct available user space was shown in the WebGUI. Now we can create a filesystem on them.
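
Put together, the cleanup went roughly like this (pool and diskgroup names depend on your setup, so treat this as a sketch from memory):

# remove the AVM storage pool built on the old dvols
nas_pool -delete <pool_name>
# remove the raid group (the command we used; check its man page for the exact arguments)
nas_raid -delete <diskgroup>
# re-run the init; afterwards the dvols showed up as d15/d16
setup_clariion -init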

My guess is that d9 and d10 are reserved for some "system" purpose and therefore use the raw space instead of the user-available space. Maybe there is a bug in the "setup_clariion -init" script.

Anyway, can you try to rename (or remove and re-add) the d9 and d10 disk volumes to d15 and d16?

Operator

8.6K Posts

May 7th, 2008 01:00

No, that shouldn't be necessary.

If you have a normal (non-FC) NS20 where the storage is configured using setup_clariion, PLEASE don't go changing raidgroups or dvols.

d9/d10 aren't special; they are just normally a bit smaller than the other LUNs, since they get created on the first five drives that also contain the OS.

If you still have a problem there, please notify EMC support to take a look at it.

11 Posts

May 7th, 2008 03:00

But if d9/d10 are meant to be created on the first five drives that contain the OS, then why does the "setup_clariion" script assign these dvols to newly created CLARiiON LUNs NOT on the first five drives?

Then this is the essence of what is going wrong, isn't it?!
