November 12th, 2009 08:00

How do I change the nbpi (bytes per inode) value?

Currently I have a 6TB file system that ran out of inodes at only 45% utilization. What is the command to alter the nbpi (bytes per inode) value at the time of file system creation?

8.6K Posts

November 12th, 2009 23:00

I don't think changing ufs.inodelimit requires a reboot (you can check with server_param or the Celerra Manager GUI).
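Checking whether the change took effect without a reboot might look like this (a sketch assuming the usual Celerra server_param syntax; server_2 is a placeholder Data Mover name):

```shell
# Show the current vs. configured value of ufs.inodelimit on Data Mover
# server_2. If "current" already matches "configured", no reboot was needed.
server_param server_2 -facility ufs -info inodelimit
```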

The only way to get more inodes while keeping the file system would be to extend it.
If you were to add another 2TB you would get another 2TB worth of inodes, effectively doubling the inode count compared to your current 6TB fs (which only received inodes for its first 2TB).
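An extension could be sketched as follows (the file system and pool names are placeholders, and the syntax assumes the usual nas_fs options):

```shell
# Grow the existing file system by roughly 2TB; the newly added space
# brings its own inodes (up to the ufs.inodelimit cap).
nas_fs -xtend d02 size=2097152M pool=Pool-d02
```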

8.6K Posts

November 12th, 2009 09:00

that would be the nbpi option of nas_fs

however, since the default is 8k and the Celerra block size is also 8k (no partials), this usually doesn't make sense unless you have lots of empty files or links/symlinks

I think what you are actually hitting is the ufs.inodelimit param:

ufs inodelimit
Range: 257949696 – 4294967295
Default: 257949696 (0xf600000)
Specifies the maximum number of inodes for a new file system or a file system extension.

In order to save space, by default we only create inodes for the first 2TB worth of file system.
If you change that param and then create a 6TB fs, you get more inodes.
I believe the same applies if you extend it.
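Raising the limit might look like the following (a sketch assuming the standard server_param modify syntax; server_2 and the new value are placeholders):

```shell
# Raise the per-file-system inode cap on Data Mover server_2.
server_param server_2 -facility ufs -modify inodelimit -value 1500000000
```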

8.6K Posts

November 12th, 2009 09:00

just curious - what kind of data is in there ? zero-length files ?

16 Posts

November 12th, 2009 10:00

Rainer, the application that uses this space creates small txt files that it gathers from the web. The file sizes vary a lot: the largest are 250K, the smallest are a handful of bytes.

Once the upper limit is increased and the Data Mover rebooted, is there any way to increase the number of inodes without destroying/recreating the volume?

16 Posts

November 18th, 2009 05:00

Luckily we have an active month and a previous month. Since this is our first go-round with the Celerra, I had additional space to create new volumes after adjusting ufs.inodelimit to 1.5 billion. I did it with Celerra Manager, but I took the failover reboots just to be safe. I lowered my nbpi to 4k just to be sure we could make it to the end of the month.

Command I ran:

nas_fs -name d02 -create size=6400000M pool=Pool-d02 -o slice=y -o nbpi=4096

Here is the inode math.  Please correct me if I am wrong:

Get from TB to KB:

volume size 6.2TB * 1024 * 1024 * 1024 = 6657199308.8KB

If I have an 8k nbpi I would have 6657199308.8 / 8 = 832149913 total inodes

If I have a 4k nbpi I would have 6657199308.8 / 4 = 1664299827 total inodes
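The arithmetic above can be double-checked with a quick awk one-liner (a sketch; the figures are the thread's own numbers, truncated to whole inodes):

```shell
# 6.2 TB expressed in KB (three factors of 1024), then divided by the
# nbpi value in KB to get the total inode count.
awk 'BEGIN {
  kb = 6.2 * 1024 * 1024 * 1024
  printf "size: %.1f KB\n", kb                  # 6657199308.8
  printf "8k nbpi: %d inodes\n", int(kb / 8)    # 832149913
  printf "4k nbpi: %d inodes\n", int(kb / 4)    # 1664299827
}'
```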

Once the application is done we will take another look, but I have a feeling we will be moving back to the 8k nbpi. Thanks for all your help!

8.6K Posts

November 18th, 2009 05:00

Hi Mike,

just curious - what did you decide to do ?

Rainer

8.6K Posts

November 18th, 2009 06:00

Hi Mike,

looks fine

with 8k nbpi you should be getting 128M (128*1024*1024) inodes per TB

and for 4k nbpi twice that
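The per-TB figure checks out: 1 TB divided by an 8 KB nbpi gives exactly 128M inodes (a quick sketch):

```shell
# 1 TB in KB divided by the nbpi in KB gives inodes per TB.
awk 'BEGIN {
  per_tb_8k = (1024 * 1024 * 1024) / 8   # 134217728 = 128 * 1024 * 1024
  per_tb_4k = (1024 * 1024 * 1024) / 4   # twice that
  printf "%d %d\n", per_tb_8k, per_tb_4k
}'
```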

you're welcome

Rainer
