We use thin-provisioned volumes from the VMAX for all our ESX datastores. Over time, as VMs move between datastores or are deleted, the free space that ESX sees on the datastore and the free space that the VMAX sees for the volume no longer agree. Typically the VMAX sees higher utilization than ESX does. We occasionally use vmkfstools -y to reclaim the unused space.
My question to everyone is: what happens when the VMAX thinks the volume is full but ESX knows there is free space? For example, assume I have a 1024 GB volume that the VMAX thinks is 90% full (about 102 GB free), but ESX sees 300 GB free. If we deploy a 200 GB VM to this datastore, ESX thinks it will fit, so it will begin writing data. What happens after the first 102 GB are written? Will the VMAX return an out-of-space error, or will it overwrite unused blocks? If it overwrites unused blocks, how does it determine what is unused?
If the VMAX could determine what is unused, it seems like ESX and the VMAX would agree on the free space (or at least be closer) and we wouldn't need to run the vmkfstools -y command. Since VMware had to hold off on issuing automatic UNMAP commands when moving/deleting VMs, I'm assuming the VMAX can't automatically identify unused blocks, and that in the example above we would see a failure even though in reality the space is available.
Can anyone confirm the behavior?
In your example --
Of the capacity that the VMAX recognizes as "written" (922 GB), the VMAX doesn't know which blocks are used or unused. ESX, however, does know, and it correctly reports that 300 GB is free within that 1 TB volume. If you deploy a 200 GB VM to this datastore, ESX will use the 300 GB of free space for that VM. From the perspective of the VMAX, some or all of that 200 GB will be rewrites of blocks it has already marked as written.
Hope that helps,
VMware will reuse the available space and simply write the new data to the datastore, while the VMAX writes to extents that are already allocated in the thin pool (or allocates new ones if necessary). There will be no failure unless you actually run out of space, i.e. you have over-provisioned and there is no space left in the thin pool. Our TechBook can provide more detail - see Chapter 3: http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf.
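To make the allocation behavior above concrete, here is a toy sketch of the logic, not VMAX internals - `ThinVolume` and its `write` method are invented for illustration. A write to a region the array has already backed with an extent consumes no new pool space; only a first write to a never-written region allocates from the pool, and the only true failure is pool exhaustion:

```python
class ThinVolume:
    """Toy model of a thin-provisioned volume (1 GB chunks). Hypothetical, for illustration only."""

    def __init__(self, size_gb, pool_free_gb):
        self.size = size_gb
        self.allocated = set()      # volume offsets already backed by pool extents
        self.pool_free = pool_free_gb

    def write(self, offsets):
        """Write 1 GB chunks at the given volume offsets."""
        for off in offsets:
            if off in self.allocated:
                continue            # rewrite of an existing extent: no new pool space used
            if self.pool_free == 0:
                raise IOError("thin pool exhausted")  # the only real out-of-space case
            self.allocated.add(off) # first write to this region: allocate a new extent
            self.pool_free -= 1

# The example from the thread: 1024 GB volume, array has allocated 922 GB,
# but ESX knows 300 GB of the filesystem is free. Assume the pool has headroom.
vol = ThinVolume(1024, pool_free_gb=500)
vol.allocated = set(range(922))     # 922 GB written at some point in the past

# ESX places a 200 GB VM in space it knows is free; suppose 150 GB of that
# lands on previously written blocks and 50 GB on never-written ones.
vol.write(list(range(772, 922)) + list(range(922, 972)))

print(len(vol.allocated))   # 972 -> only 50 GB of new pool space was consumed
print(vol.pool_free)        # 450
```

The write succeeds even though the volume "looks" 90% full to the array, because most of it lands on extents the array already allocated; whether a failure ever occurs depends on the thin pool's free space, not on the volume's allocated count.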