I'd like to add a quick question or two for those who decide to read this, partly just to see how well the "community" aspect of this works. I'm curious what people are doing with their volume block sizes now that thin provisioning is really taking hold. I realize it's been around for a while, but it's now much easier to implement thanks to the integration within the vSphere client. Defragmentation is really at the root of my questions. So here they are:
Are you rethinking your block sizes (i.e. recreating volumes with larger blocks) in order to cut down on some of the defragmentation that can occur?
Do you Storage VMotion VMs from time to time in order to "defrag" them?
For those of you who like multiple choice questions...
D) None of the above (I don't care; I just let VMware handle it because they know best)
See you at EMCworld!
griese - thanks for posting the question!
In a VMware environment, filesystem fragmentation is really not an issue like it is in general-purpose filesystem use cases (I have NEVER seen this be a problem for a VMware customer).
The main (and currently only) factors in block sizing for VMFS volumes are:
1) the allocation size dictates the largest single file size
2) smaller allocation sizes tend to be a little more "expensive" in terms of VMFS metadata updates when using VMware-level thin provisioning (since VMDK growth operations trigger a metadata update every time the VMDK is extended - and at the minimum allocation size, this is every 8MB).
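On point 1, the block-size-to-max-file-size relationship can be sketched as simple arithmetic. The VMFS-3 figures below are an assumption from memory of VMware's published limits, not something stated in this thread:

```python
# Commonly cited VMFS-3 limits (assumed figures, not from this thread):
# the largest single file scales linearly with the volume's block size.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size MB -> max file GB

def max_file_gb(block_mb):
    """Look up the largest single VMDK a volume with this block size allows."""
    return VMFS3_MAX_FILE_GB[block_mb]

for block_mb in sorted(VMFS3_MAX_FILE_GB):
    print(f"{block_mb} MB blocks -> max single file ~{max_file_gb(block_mb)} GB")
```

So picking a block size is really a trade-off between the largest VMDK you ever expect to need and the metadata-update cost described in point 2.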
Thanks for being an EMC and VMware customer!
Maybe I'm misunderstanding your 2nd point, but I believe the minimum (and default) block size is 1MB. That default would cause 8x as many metadata updates as an 8MB block size. I guess that's the essence of my question: with thin provisioning becoming more of a reality, what are you doing with your block sizes?