File System Quotas Goofy

November 2nd, 2010 13:00

About a month ago we had a problem with our file system quotas: a share with a 1TB limit was rejecting new files even though only about 400GB of space was in use. The engineer changed our quota policy from blocks to filesize, and then I rebooted the data mover at a convenient time. That fixed the problem, and I was able to put the 1TB limit back without any issues. At the time I did not pay close attention to the disk space actually reported in use...
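
If it helps anyone else hit by this: as I understand it, the difference between the two policies is that "filesize" charges the logical length of each file while "blocks" charges the space actually allocated on disk. Here is a quick sketch you can run on any POSIX box to see the two numbers diverge (plain stat() arithmetic for illustration only, not anything Celerra-specific; the /tmp path is arbitrary):

    import os

    # Rough illustration of the two quota accounting policies (plain
    # POSIX stat, not Celerra internals): "filesize" charges logical
    # length (st_size), "blocks" charges allocated space
    # (st_blocks * 512). A sparse file makes the gap obvious.
    path = "/tmp/sparse_demo"
    with open(path, "wb") as f:
        f.seek(100 * 1024 * 1024)  # 100MB logical length...
        f.write(b"\0")             # ...with almost nothing allocated

    st = os.stat(path)
    print("filesize policy would charge:", st.st_size, "bytes")
    print("blocks policy would charge:  ", st.st_blocks * 512, "bytes")
    os.remove(path)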

However, now I notice that the reported disk space in use is nowhere close to actual. For instance, I have a test share mapped to /test on the file system. The /test folder has a quota of 3GB, and I have about 2.52GB of files on the share, yet Celerra Manager shows only 13MB for "usage" in the file system quotas section. Each of my other shares is similarly far off.

If I open the test share, select all files, and check right-click > Properties, it correctly shows 2.52GB for both size and size on disk (I just put the files there, so there has been no chance for deduplication).
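
In case anyone wants to reproduce that check without the GUI, the same select-all/Properties number can be computed in a few lines of Python and rerun against the quota report (the UNC path below is a made-up example; substitute your own share):

    import os

    # Sum the logical size of every file under the share -- the same
    # number Explorer shows as "Size" in the Properties dialog.
    def share_usage_bytes(path):
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                total += os.lstat(os.path.join(root, name)).st_size
        return total

    # Hypothetical share path for illustration only.
    print(round(share_usage_bytes(r"\\celerra\test") / 2**30, 2), "GiB")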

I am running dedup on all of my file systems, and we only use the file systems for CIFS. Running version 5.6.49-3.

Any idea why the reporting is so far off? One share has about 450GB of files and the Celerra is only showing about 44GB!

Paul


November 3rd, 2010 03:00

I've been dealing with this issue for a couple of years now (EMC, will you fix this #$&*#$# one day?). Basically, quotas get out of sync with what actually resides on the file system. There is a command that support can run to recalculate the quota, but the last time we ran it, it completely locked up the file system being recalculated and we had to reboot the data mover. So yes, it can be fixed, but if things go south you will have to reboot. Another option support offered was to move the data to a new directory and then move it back; I have a 10TB file system, so I just laugh at them.
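
To make it concrete, my mental model of what goes wrong (purely a toy sketch of incremental quota accounting in general, not EMC's actual implementation): the quota subsystem keeps a running counter that is adjusted on every write and delete, and once a single adjustment is lost the counter stays wrong until a full recalculation walks the whole tree, which is exactly the expensive, lock-prone step.

    # Toy model of incremental quota accounting (an assumption about the
    # general technique, not Celerra's code). The live counter is nudged
    # on each write; one missed update and it drifts until an expensive
    # full recalculation rebuilds it from the ground truth.
    class QuotaTracker:
        def __init__(self):
            self.counted = 0   # what the quota subsystem believes is used
            self.files = {}    # ground truth: name -> size

        def write(self, name, size, lose_update=False):
            old = self.files.get(name, 0)
            self.files[name] = size
            if not lose_update:          # a lost update starts the drift
                self.counted += size - old

        def recalc(self):
            # Conceptually what a recalc has to do: re-derive usage by
            # visiting every file. On a 10TB FS with millions of files,
            # this walk is why a live recalc can stall the file system.
            self.counted = sum(self.files.values())

    q = QuotaTracker()
    q.write("a.doc", 100)
    q.write("b.doc", 200, lose_update=True)  # simulated missed update
    print(q.counted, "vs actual", sum(q.files.values()))  # 100 vs 300
    q.recalc()
    print(q.counted)  # 300 -- back in sync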

November 3rd, 2010 07:00

Ditto this one. There's something particular about quotas (in my case, quota trees) on larger file systems. I have an FS that started off at 1.5TB with a 200GB qtree. It started misbehaving when the FS grew to 3TB, with no change in the qtree but just over 2.2M files. I've had to recalc it before each code upgrade, since the pre-flight upgrade script finds an error. And like dynamox, they suggested I make a 4TB FS and dupe the data to it. Sigh.

The only other data point I have is that the problem has been harder to "see" since I turned on dedupe on that FS: the quota errors rarely happen now, but at times I have no idea how much space is actually in use.
