Hyper-V 2012 R2 CSV Free Space

Hi All

I have two EQL 6210XS arrays, and they are bound together in a pool named "EQL Pool". Total capacity is almost 36TB.

I created an 8TB volume for the Hyper-V cluster shared volume and created some VMs in the CSV. All Hyper-V hosts connect to the EQL volume via iSCSI with MPIO.

Hyper-V Failover Cluster Manager shows our CSV has 500GB of free space, but EQL Group Manager shows 5TB of free space.

It seems that the Hyper-V hosts can't correctly identify the volume's free space after deduplication.

Is there any suggestion for resolving this problem?


RE: Hyper-V 2012 R2 CSV Free Space


This is actually not a problem at all. The short version is that block storage isn't the best place to monitor free space. Until recently, storage was never notified about file deletions, and that notification still isn't supported by every OS or configuration. It is known as UNMAP, AKA space reclaim. With EQL, if you replicate a volume, UNMAP is no longer supported.

When you use a hypervisor, that brings out another interesting effect, as you noticed: storage showing LESS in-use space than the OS. When you create a virtual disk of, say, 50GB, the hypervisor doesn't actually write 50GB to storage; at creation time it writes only a tiny fraction of that. The in-use space grows only as files are written, since writes are all that block storage can record. The OS, however, deducts the full 50GB at creation time from the allocation table it maintains, and that table isn't shared with the storage device.
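The accounting above can be sketched as a toy model in Python. The class names and figures here are purely illustrative, not EQL or Hyper-V internals, and it assumes a dynamically expanding virtual disk whose creation writes only a few MB of metadata:

```python
# Toy model: block storage counts only sectors that have ever been
# written; the OS deducts the virtual disk's full size up front.

class BlockVolume:
    """Hypothetical block volume: tracks only bytes ever written."""
    def __init__(self):
        self.written = 0

    def write(self, nbytes):
        self.written += nbytes

class GuestFS:
    """Hypothetical file system sitting on top of the volume."""
    def __init__(self, volume, size):
        self.volume = volume
        self.free = size

    def create_virtual_disk(self, size, metadata=4 * 2**20):
        # The hypervisor writes only a few MB of metadata...
        self.volume.write(metadata)
        # ...but the file system deducts the full disk size immediately.
        self.free -= size

vol = BlockVolume()
fs = GuestFS(vol, size=8 * 2**40)       # 8 TB CSV
fs.create_virtual_disk(50 * 2**30)      # one 50 GB dynamic virtual disk

print(vol.written // 2**20, "MB in use per the array")
print((8 * 2**40 - fs.free) // 2**30, "GB in use per the OS")
```

The array sees a few MB written while the OS already reports 50GB in use, which is exactly the mismatch described above.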

The two will never fully line up, even with UNMAP in place. The OS knows which blocks can be re-used, so even when the array reports 100% in use, go by what the OS says. It is the only authority on the subject.

Here's the long explanation in case you have more questions.

With Hyper-V 2012, UNMAP on CSVs is only supported with VHDX virtual disks, not the older VHD format.



Why is there a difference (more or less in use) between what my file system shows as space used and what the PS Series array GUI shows as in-use for the volume?

For example: VMware ESXi shows 500GB in use while the PS Series SAN reports 800GB, or VMware shows 500GB in use while the PS Series SAN reports only 50GB.


The Dell PS array is block-storage, and only knows about areas of a volume that have ever been written. The PS Series GUI reports this information for each volume. Volume allocation grows automatically due to application data writes. If later the application frees up space, the space is not marked as unused in the PS Series GUI. Hence the difference in views between the OS/file system and the PS Series GUI.

With thin provisioned volumes, this perception can be more pronounced.

Thin provisioning is a storage virtualization and provisioning model that allows administrators to logically allocate large addressable storage space to a volume, yet not physically commit storage resources to this space until it is used by an application. For example, using thin provisioning you can create a volume that an application views as 3 TB, while only allocating 300 GB of physical storage to it. As the operating system writes to the volume, physical disk space is allocated to the volume by the storage array. This physical disk space is taken from the available free space in the pool automatically and transparently. As a result, less physical storage is needed over time, and the stranded storage problem is eliminated. The administrator enjoys management benefits similar to over-provisioning, yet maintains the operational efficiencies of improved physical storage utilization. This more efficient use of physical storage resources typically allows an organization to defer or reduce storage purchases.
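The 3 TB / 300 GB example can be sketched as a toy model. The class names and numbers are hypothetical illustrations, not the PS Series implementation:

```python
# Toy model of thin provisioning: the application sees the full logical
# size, but physical space is drawn from the pool only as data is written.

class ThinPool:
    """Hypothetical storage pool tracking unallocated physical bytes."""
    def __init__(self, physical):
        self.free = physical

class ThinVolume:
    def __init__(self, pool, logical):
        self.pool = pool
        self.logical = logical     # size the application sees
        self.allocated = 0         # physical bytes actually committed

    def write(self, nbytes):
        # Physical space is taken from pool free space automatically
        # and transparently as the OS writes to the volume.
        self.pool.free -= nbytes
        self.allocated += nbytes

TB, GB = 2**40, 2**30
pool = ThinPool(physical=36 * TB)
vol = ThinVolume(pool, logical=3 * TB)   # the app views 3 TB...
vol.write(300 * GB)                      # ...but only 300 GB is committed

print(vol.logical // TB, "TB visible to the application")
print(vol.allocated // GB, "GB physically allocated from the pool")
```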

Thin provisioning is thus a forward-planning tool for storage allocation: all the storage an application will ever need is logically allocated up front, eliminating the trauma of expanding available storage on systems that do not support online expansion. Because the administrator initially provisions the application with all the storage it will need, repeated data-growth operations are avoided.

Most importantly, because of this difference between reality and perception, anyone involved with thin-provisioned storage must be aware of the duality in play. If all players are not vigilant, someone could start drawing on the un-provisioned storage – exceeding capacity, disrupting operations, or requiring additional unplanned capital investments.

A thin-provisioned volume also grows automatically due to application data writes – the space is drawn from the pool free space (rather than having been pre-allocated as with a normal volume). If the application later frees up space, that space is free in the file system but is not returned to the free space in the PS Series pool. Without UNMAP, the only way to reduce the physical allocation in the SAN is to create a new volume, copy the application data from the old volume to the new one, and then delete the old volume.
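This grow-but-never-shrink behaviour can be sketched as a toy model (purely illustrative accounting, not array internals), using the same figures as the file-share example below:

```python
# Toy model: without UNMAP, file deletions change only the file system's
# own accounting; the array's physical allocation never shrinks.

class Volume:
    """Hypothetical volume: physical bytes the array considers in use."""
    def __init__(self):
        self.allocated = 0

GB = 2**30
vol = Volume()
fsys_in_use = 0

# The application writes 500 GB of data; both views grow together.
vol.allocated += 500 * GB
fsys_in_use += 500 * GB

# The application deletes 400 GB of files. No UNMAP is sent, so only
# the file system's view shrinks.
fsys_in_use -= 400 * GB

print(fsys_in_use // GB, "GB in use per the file system")
print(vol.allocated // GB, "GB in use per the array")
```

The file system ends at 100GB in use while the array still reports 500GB allocated.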

A similar situation is when the initiator OS reports significantly more space in use than the array does. This can be pronounced in systems like VMware that create large, sparse files. In VMware, if you create a 10GB disk for a VM as a VMDK file, VMware does not write 10GB of zeros to the file. It creates an empty (sparse) 10GB file and subtracts 10GB from free space. The act of creating the empty file only touches a few MB of actual sectors on the disk. So VMware says 10GB is gone, but the array says, perhaps, only 2MB has been written.

Since the minimum volume reserve for any volume is 10%, the file system has a long way to go before the MB-scale writes catch up with the minimum reservation of a volume. For instance, a customer with a 100GB volume might create 5 VMs with 10GB disks. That's 50GB used according to VMware, but only perhaps 5 x 2MB (10MB) written to the array. Until the customer starts filling the VMDK files with actual data, the array won't know anything is there. It has no idea what VMFS is; it only knows what's been written to the volume.
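The sparse-file effect is easy to reproduce on any file system that supports sparse files (ext4, XFS, NTFS, and others). This small Python demo creates a 1GB "virtual disk" that occupies almost no real blocks; the file name is made up for illustration:

```python
import os
import tempfile

# Create a large, apparently-full file the same way a hypervisor creates
# an empty VMDK: claim the size, but touch almost no real sectors.
path = os.path.join(tempfile.mkdtemp(), "disk.vmdk")
with open(path, "wb") as f:
    f.seek(1 * 2**30 - 1)   # pretend this is a 1 GB virtual disk
    f.write(b"\0")          # touch only the final byte

st = os.stat(path)
apparent = st.st_size            # what the OS deducts from free space: 1 GB
allocated = st.st_blocks * 512   # what is really on disk: a few KB at most

print("apparent size:", apparent, "bytes")
print("actually allocated:", allocated, "bytes")
```

The apparent size is the full 1GB, while the allocated size is tiny, mirroring the VMware-vs-array discrepancy described above.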

• Example: A file share is thin-provisioned with 1 TB logical size. Data is placed into the volume so that the physical allocation grows to 500 GB. Files are deleted from the file system, reducing the reported file system in use to 100 GB. The remaining 400 GB of physical storage remains allocated to this volume in the SAN.

• This issue can also occur with maintenance operations including defragmentation, database re-organization, and other application operations.

In most environments, file systems do not dramatically reduce in size, so this issue occurs infrequently. Also, some file systems do not make efficient re-use of previously allocated space and may not reuse deleted space until they run out of unused space (this is not an issue for NTFS or VMFS).

In some cases the amount of space used on the array will show LESS than what is shown by the OS. VMware ESXi is an example. When ESXi creates a 20GB VMDK, it doesn't actually write 20GB of data to the volume. A small file is created, then ESXi records in its file allocation table that 20GB has been allocated. Over time, as data is actually written and then deleted, this swings back the other way, with ESXi reporting less space allocated than the array GUI indicates.

A related question is how this previously used, but now unused, space can be reclaimed in the SAN.

File systems that support SCSI UNMAP (AKA space reclaim) let the OS tell the storage to deallocate blocks that are no longer in use. It is up to the file system to manage that process.
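A toy model of what UNMAP adds (illustrative accounting only, not the actual SCSI command set): the file system reports freed blocks, and the array returns them to the pool, so the two views converge.

```python
# Toy model: with UNMAP, the file system tells the array which blocks
# it has freed, so the array's allocation can shrink as well as grow.

GB = 2**30

class Array:
    """Hypothetical array volume that honours UNMAP."""
    def __init__(self):
        self.allocated = 0

    def write(self, nbytes):
        self.allocated += nbytes

    def unmap(self, nbytes):
        # The initiator reports these blocks as unused; the array
        # returns them to the pool's free space.
        self.allocated -= nbytes

arr = Array()
arr.write(500 * GB)   # the file system writes 500 GB of data
arr.unmap(400 * GB)   # 400 GB of files are deleted and UNMAP is issued

print(arr.allocated // GB, "GB in use per the array")
```

With UNMAP in play the array drops to 100GB in use, matching the file system instead of staying pinned at the high-water mark.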

For VMware ESXi with VMFS v5.x, you have to run the esxcli utility to reclaim unused blocks.

VMware VMFS v6.x can be configured to run UNMAP automatically.

Windows 2012 supports UNMAP. Windows 2008 does not by default, but installing the Dell Host Integration Toolkit (HIT/ME) enables that function.

In order to support UNMAP on EQL, firmware v6.x or greater is required. EQL does NOT support UNMAP/reclaim on ASYNC- or SYNC-replicated volumes.

Social Media and Community Professional
Get Support on Twitter - @dellcarespro