
February 10th, 2015 12:00

XtremIO recommended settings for VMware

One of the recommended settings I came across is "Format VMs Using Thick Provision Eager Zeroed".

Is there any special advantage to this setting? I ask because, with eager zeroing, all cells are filled with zeros, and in XtremIO's case only one block actually gets filled with zeros and has a hash address (fingerprint) assigned to it (please correct me if I am wrong).

727 Posts

February 11th, 2015 13:00

Yes, the recommendation for best performance (from VMware and EMC) is to use “Thick Provision Eager Zeroed” virtual disks.

The advantage is that at the storage level, we would not have to do ANY operations to write the zeroes. See a related blog article at http://www.xtremio.com/saved-by-zero. We don’t even need to write a single zero block or consume any metadata (aka fingerprint or hash address) storage because of our architecture.
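As a rough illustration of that point, here is a minimal sketch of why zeroed blocks are essentially free for an inline-dedup design. The 8 KB block size and SHA-1 hash are assumptions chosen purely for illustration, not a statement about XtremIO's actual internals: every all-zero block produces the identical fingerprint, so at most one entry would ever need to be tracked, however many zeroed blocks the host "writes".

```python
# Sketch only: every zero-filled block hashes to the same fingerprint, so an
# inline-dedup layer has at most one entry to keep for all of them.
# Block size and hash algorithm are illustrative assumptions.
import hashlib

BLOCK_SIZE = 8 * 1024            # assumed 8 KB block, for illustration only
zero_block = bytes(BLOCK_SIZE)   # a block of all zeros

fingerprints = {hashlib.sha1(zero_block).hexdigest() for _ in range(100_000)}
print(len(fingerprints))         # -> 1: 100,000 zeroed blocks, one fingerprint
```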

5 Practitioner • 274.2K Posts

February 11th, 2015 14:00

Hi khkris,

When you allocate a thick virtual disk, the underlying space needs to be erased (zeroed) before it can be written by the VM. This can be done all at once when the disk is first provisioned (Eager Zeroed Thick, or EZT), or just before the VM writes to a new region of the disk (Lazy Zeroed Thick, or LZT). As you can imagine, pausing the VM's IO until the new region of the VMDK is zeroed introduces latency, so if you're running latency-sensitive applications it's beneficial to use Eager Zeroed Thick virtual disks. That's why latency-sensitive applications like databases have always recommended EZT.

But with traditional storage architectures, that initial zeroing process with EZT could take a long time, because the associated metadata needed to be allocated and the physical media needed to be overwritten with zeros. Admins who didn't want to wait for eager zeroing to complete before handing the VM to their users would often use LZT instead, and for most applications that was good enough: the additional latency of zeroing the disk could often be absorbed without being noticed, and after a while much of the virtual disk would have been written previously and would not require additional zeroing by ESX.
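As a quick aside, if it helps to see where those provisioning types show up programmatically, here is a minimal sketch (assuming pyVmomi and an already-connected `vm` object, neither of which is part of the original post) that classifies each VMDK as thin, LZT, or EZT from its backing flags:

```python
from pyVmomi import vim

def describe_disk_provisioning(vm):
    """Print thin / lazy-zeroed thick / eager-zeroed thick for each VMDK of a VM."""
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            backing = dev.backing
            if getattr(backing, "thinProvisioned", False):
                kind = "thin"
            elif getattr(backing, "eagerlyScrub", False):
                kind = "eager zeroed thick (EZT)"
            else:
                kind = "lazy zeroed thick (LZT)"
            print(f"{dev.deviceInfo.label}: {kind}")
```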

But now consider VMs deployed on XtremIO. We dedupe inline and never actually have to zero our physical media. Our metadata is 100% in memory, and writes are journaled to memory (before being saved to SSD). We use VAAI to accelerate zeroing operations, so we never actually have to ingest zeros and hosts don't have to write them. The end result is that allocating EZT virtual disks is a very quick operation. Plus, most of the applications deployed on XtremIO care about low latency, so using EZT reduces overall latency for the application. That's why we recommend it as a best practice. Our users are very happy with the performance both of creating EZT disks and of their applications running on top of those disks.
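For completeness, here is a hedged sketch of what "allocating an EZT virtual disk" looks like through the vSphere API via pyVmomi; the `vm`, `controller`, and `unit_number` arguments are placeholders for objects you would already have. The key is `eagerlyScrub=True` with `thinProvisioned=False` on the disk backing; on an array that offloads VAAI Block Zero, this reconfigure completes quickly even for large disks.

```python
from pyVmomi import vim

def add_ezt_disk(vm, controller, unit_number, size_gb):
    """Add a new eager-zeroed thick VMDK to an existing VM (sketch only)."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = False
    backing.eagerlyScrub = True                  # eager zeroed thick

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.controllerKey = controller.key
    disk.unitNumber = unit_number

    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    disk_spec.device = disk

    # Returns a Task object the caller can wait on.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```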

Take care and hope that helps,

Miroslav

p.s., This old VMware Community post might also be helpful: https://communities.vmware.com/message/2199576

7 Posts

November 18th, 2015 12:00

With vSphere 6, expanding an EZT VMDK appears to pause (stun) the VM, eager-zero the added space, and then un-pause the VM. vSphere 5.5 and below changed the disk type to LZT and did not pause the VM for any substantial length of time.

With vSphere 6, expanding a VMDK by 500GB paused the VM for about 1 minute in a friend's testing environment.

With VPLEX Metro and mirrored XtremIO LUNs across datacenters, expanding a VMDK by 500GB took 25 minutes for me; unfortunately this was production, so it was a significant event. I am looking into whether that duration is expected, but VMware senior support claims this change in vSphere 6 was considered a bug fix relative to the vSphere 5.5 behavior. Even if it took only 1 minute, I would not like this new behavior.
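For reference, the expand operation being described is just an edit of the disk's capacity through the vSphere API. A minimal sketch follows (pyVmomi assumed; `vm` and the disk label are placeholders): on 6.0 GA/U1 this is the kind of call that triggers the eager-zero-and-stun behavior on an EZT disk.

```python
from pyVmomi import vim

def extend_vmdk(vm, disk_label, new_size_gb):
    """Grow an existing VMDK to new_size_gb (sketch only; growth is irreversible)."""
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == disk_label:
            dev.capacityInKB = new_size_gb * 1024 * 1024
            change = vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev,
            )
            return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
    raise ValueError(f"Disk {disk_label!r} not found on {vm.name}")
```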

Eager zeroing now seems to be a significant consideration on XtremIO, at least from my perspective and in my implementation. I am considering lazy zeroing everything on XtremIO now, but I would love to hear whether others have run into this issue and, if so, what their thoughts are.

Before vSphere 6 I was a big proponent of EZT on XtremIO.

My Google skills did not turn up much on this, so I am replying to the most pertinent post on the issue.

Cheers,

John

143 Posts

March 17th, 2016 12:00

It looks like VMware created a KB on this issue:

Extending an eager-zeroed virtual disk causes virtual machine stun in ESXi 6.0 GA/ 6.0 U1 (2135380)

You probably already saw this, but according to the KB, 'This issue is resolved in VMware ESXi 6.0 Update 1b'. 

May 11th, 2016 19:00

You could also expand the VMDK and then do an advanced storage vMotion (just select Eager Zero Thick when moving the applicable VMDK), then move it back to the original location. This will convert the VMDK back to an Eager Zero Thick format while leaving the VM online without the possibility of a 'stun' situation.
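If you would rather script that conversion than click through the wizard, here is a hedged sketch of the same idea via pyVmomi (the `vm` and `target_datastore` objects are placeholders you would already have): a RelocateVM_Task with a per-disk locator whose backing requests eager-zeroed thick.

```python
from pyVmomi import vim

def svmotion_to_ezt(vm, target_datastore):
    """Storage vMotion a VM, converting each flat VMDK to EZT (sketch only)."""
    spec = vim.vm.RelocateSpec(datastore=target_datastore)
    locators = []
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
            backing.diskMode = "persistent"
            backing.thinProvisioned = False
            backing.eagerlyScrub = True          # convert to eager zeroed thick
            locators.append(vim.vm.RelocateSpec.DiskLocator(
                diskId=dev.key,
                datastore=target_datastore,
                diskBackingInfo=backing,
            ))
    spec.disk = locators
    return vm.RelocateVM_Task(spec=spec)
```

Run it once toward a scratch datastore and once back toward the original to reproduce the move-and-return approach described above, with the VM online the whole time.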
