Peter Pilsl wrote:
Celerra does not have a built-in defragmenter.
But why do you need to defragment Celerra data?
See knowledgebase emc70573: "FFS tries to allocate 'logically close' blocks close. For instance, it allocates (if possible) regular file inodes in the same group as the directory holding them, and data blocks for a file in the same group as the file's inode. Alternatively, it allocates directory inodes on the less loaded group, and it forces the change of cylinder group at each megabyte of file size (to avoid fragmentation inside a cylinder group). The allocation uses geometry parameters to compute a 'closest block'.
So, there is a defrag process but, basically, you don't need to defrag since the file system code is 'tuned' to discourage fragmentation. You'll likely get a low fragmentation (2-5% of the files or so), but this will remain constant except when the partition gets very full and there are lots of file creations/deletions with extremely funny file sizes. From our experience with Celerra, EMC have yet to find a customer who has needed to defrag file systems to regain access performance."
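To make the quoted allocation policy concrete, here is a hypothetical Python sketch of per-megabyte cylinder-group rotation. The class names, group count, and 8 KB block size are illustrative assumptions for the sketch, not the real UxFS on-disk layout.

```python
# Illustrative sketch of "change cylinder group at each megabyte of file
# size". Block size, group sizes, and class names are assumptions.

BLOCK_SIZE = 8192                        # assumed 8 KB blocks
BLOCKS_PER_MB = (1 << 20) // BLOCK_SIZE  # 128 blocks per megabyte

class CylinderGroup:
    def __init__(self, gid, nblocks):
        self.gid = gid
        self.free = nblocks              # free data blocks in this group

class Allocator:
    def __init__(self, ngroups, blocks_per_group):
        self.groups = [CylinderGroup(i, blocks_per_group)
                       for i in range(ngroups)]

    def least_loaded(self):
        # directory inodes would go to the least-loaded group
        return max(self.groups, key=lambda g: g.free)

    def allocate_file(self, inode_group, size_bytes):
        """Place data blocks starting in the file's inode group,
        rotating to the next group after each megabyte of file size."""
        nblocks = -(-size_bytes // BLOCK_SIZE)   # ceiling division
        if nblocks > sum(g.free for g in self.groups):
            raise OSError("file system full")
        placement = []                   # list of (group id, block count)
        gid = inode_group
        while nblocks > 0:
            g = self.groups[gid % len(self.groups)]
            chunk = min(nblocks, BLOCKS_PER_MB, g.free)
            if chunk:
                g.free -= chunk
                placement.append((g.gid, chunk))
                nblocks -= chunk
            gid += 1                     # change group per megabyte
        return placement
```

With four groups, a 2.5 MB file starting in group 0 lands as 1 MB in group 0, 1 MB in group 1, and the remaining half megabyte in group 2, which is the spreading behavior the knowledgebase excerpt describes.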
BillStein-Dell
Moderator
285 Posts
August 18th, 2010 13:00
Hi Aran,
Celerra's filesystem is a journaled filesystem called UxFS. There are two main types of data stored in a UxFS filesystem: data blocks and metadata structures. The metadata includes structures such as the superblock, inode tables, information tables, and a cylinder group map. The cylinder group map tracks all of the free blocks that are not used for inodes, indirect address blocks, or storage blocks. The CG map also keeps track of any fragments to keep disk fragmentation from occurring.
Beneath the filesystem layer is the disk layer, containing RAID groups managed by the storage array, either Symmetrix or CLARiiON. Both storage arrays run their own defrag programs to keep their RAID groups' fragmentation levels low.
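As a rough illustration of the bookkeeping a cylinder-group map does, here is a hypothetical free-block bitmap for a single group, with a simple measure of free-space fragmentation. This is a sketch of the idea only; it is not the UxFS on-disk format, and the names are invented for the example.

```python
class CGMap:
    """Free-block map for one cylinder group (illustrative only)."""

    def __init__(self, nblocks):
        self.free = [True] * nblocks     # True = block is free

    def alloc(self, want):
        """First-fit allocation of a contiguous run of `want` blocks.
        Returns the starting block, or None if no run is long enough."""
        run = 0
        for i, is_free in enumerate(self.free):
            run = run + 1 if is_free else 0
            if run == want:
                start = i - want + 1
                for j in range(start, i + 1):
                    self.free[j] = False
                return start
        return None

    def release(self, start, count):
        """Return a previously allocated run to the free map."""
        for j in range(start, start + count):
            self.free[j] = True

    def free_extents(self):
        """Number of separate free runs; 1 means the free space
        is one contiguous extent (no free-space fragmentation)."""
        extents, prev = 0, False
        for is_free in self.free:
            if is_free and not prev:
                extents += 1
            prev = is_free
        return extents
```

Deleting a file in the middle of the group splits the free space into multiple extents; an allocator that prefers contiguous runs, as described above, keeps that extent count low.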
Hope this helps
-bill
dynamox
9 Legend
20.4K Posts
August 18th, 2010 14:00
Bill,
Does UxFS get fragmented as much as NTFS does?
Peter_EMC
674 Posts
August 19th, 2010 03:00
see here: https://community.emc.com/click.jspa?searchID=507565&objectType=2&objectID=385728
AranH1
2.2K Posts
August 19th, 2010 07:00
Peter,
Thanks for the thread reference. I think that answers my question, but I have a follow-up question based on your answer from the referenced thread. You state that fragmentation should remain constant except when the partition gets very full and there are lots of file creations/deletions with extremely funny file sizes.
In the scenario we are planning to implement on the Celerra, large numbers of files would be created and deleted daily: the filesystem would be a backup target for database backup files, and each day's backups are deleted as new backups are written. The backup files range in size from 3GB to 83GB.
Does that fit the exception to the rule, or should we still expect a low, constant fragmentation rate?