April 25th, 2012 12:00

How do I know whether VAAI-block zero is actually being used?

We'd like to know whether we're really using the VAAI block zero primitive and receiving its benefits. We believe we have a supported configuration, but we expected better results.

We verified that the settings on vSphere (versions 4 and 5) are correct for using VAAI. I believe the two settings we looked at were DataMover.HardwareAcceleratedMove and DataMover.HardwareAcceleratedInit; both are set to the default of enabled (1).
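In case it helps, we confirmed the current values from the ESXi shell with something like this (5.x syntax; on 4.x, esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit is the rough equivalent):

    # show the current value of each setting (1 = enabled)
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove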

We have a VMAX on Enginuity 5875.231.172 code, using Virtual Provisioning with thin pools.

Now, how do we know if the VMAX is really using the VAAI Write Same primitive when we create an eagerzeroedthick virtual disk on a VMFS volume on the ESX server? It's taking about 22 minutes to format 500 GB and maybe 30 seconds to do 5 GB. If we were really using Write Same, shouldn't these times be much closer? How much time should it take? I realize it depends to a degree on how busy the VMAX is, but I really expected times that are more similar.

Thoughts? 


April 25th, 2012 18:00

Hmmm... interesting. I'm looking into it to see what I can get back to you with.


April 25th, 2012 20:00

Hi DLB,

I recommend you read this document: http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h8115-vmware-vstorage-vmax-wp.pdf?mtcs=ZXZlbnRUeXBlPUttQ2xpY2tTZWFyY2hSZXN1bHRzRXZlbnQsZG9jdW1lbnRJZD0wOTAxNDA2NjgwNWQwMWY0LGRhdGFTb3VyY2U9RENUTV9lbl9VU18w

On page 30, Figure 16 shows the time difference when Block Zeroing devices of different sizes. You can see that even with VAAI on, allocating a 200 GB virtual disk takes 6 or 7 times as long as allocating a 10 GB one. But allocating a 200 GB disk with VAAI on is about 18 times faster than without VAAI.

[Attached screenshot of Figure 16 from the white paper: Block Zero times for different device sizes, with and without VAAI]

So my conclusion is that what you're seeing is the expected result: about 22 minutes to format 500 GB and about 30 seconds for 5 GB is in line with the white paper, even with VAAI on.
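A quick sanity check on your own numbers also suggests nothing is wrong; if anything, the larger format is more efficient per GB:

    500 GB in ~22 min (1320 s)  ->  ~2.6 s per GB
      5 GB in ~30 s             ->  ~6.0 s per GB

The time to zero a disk scales roughly with its size even when the work is offloaded; VAAI lowers the per-GB cost, it doesn't make the elapsed time independent of size.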

April 26th, 2012 15:00

IMHO, VAAI only guarantees efficiency, which should (more often than not) translate into faster times to complete the associated task. Yes, offloading the task from the hypervisor to a compliant array (with a corresponding reduced load on the intermediate FC/Ethernet/FCoE switches) should almost always speed it up, and every documented comparison validates this (software bugs aside). However, it is also relative: the real comparison is between hardware offload enabled and disabled in your own environment. There is certainly relevance in comparing against what others are achieving, but those numbers are almost always accompanied by before-and-after graphs.


Even though Chad is using a VNX in the following video, try the same esxtop/resxtop tests; you should find the load on the hypervisor changes appropriately depending on whether the offload is enabled or not. He also navigates to the relevant performance charts in vSphere to see the same thing. Then, using array-based performance tools (Unisphere Analyzer in the video), you'll see the corresponding increase/decrease in back-end resource usage.

http://www.youtube.com/watch?v=1sUS-LcEtBY
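For the esxtop/resxtop part, a rough sketch of the steps (the field letters vary a bit between builds, so verify on yours):

    esxtop        # press 'u' for the disk device view
    # press 'f' and enable the VAAISTATS fields, then watch the ZERO and
    # MBZERO/s columns while an eagerzeroedthick disk is being created;
    # if they stay at 0 during the format, Write Same is not being offloaded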

Even though he is demonstrating the Hardware Accelerated Full Copy primitive, you can simply swap the option he is changing (DataMover.HardwareAcceleratedMove) for DataMover.HardwareAcceleratedInit, which corresponds to the Write Same (zero) primitive you are asking about.
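On 5.x you can toggle it from the ESXi shell for a quick before/after timing comparison (on 4.x, use the vSphere Client Advanced Settings or esxcfg-advcfg instead):

    # disable Block Zero offload, time the eagerzeroedthick creation...
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
    # ...then re-enable it and time the same creation again
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 1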

There are also specific entries in the vmkernel logs you can parse to confirm that the hardware offload is, in fact, being used, or similarly whether it is reverting to non-accelerated mode in an otherwise VAAI-compliant environment. For instance, in the case of Clone Blocks/Full Copy/XCOPY, there are several things that can disqualify it, such as performing a Storage vMotion between two datastores (on the same array, of course) that have different block sizes (when dealing with VMFS-3).
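Two quick checks along those lines on 5.x (the exact log strings vary by build, so grep loosely):

    # per-device VAAI support as ESXi sees it -- look at "Zero Status"
    esxcli storage core device vaai status get -d <naa.device_id>
    # look for VAAI/offload-related messages around the time of the format
    grep -i vaai /var/log/vmkernel.log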

Finally, I would also reference the following VMware KB article:

vStorage APIs for Array Integration FAQ

http://kb.vmware.com/kb/1021976


April 26th, 2012 19:00

Chris, I know VAAI is enabled on the VMAX by default. Is there a way to disable it from the storage side using SE/SMC/SymmWin? Or is there any utility to monitor VAAI status from the storage side?

April 27th, 2012 00:00

jingyi wrote:

Chris, I know VAAI is enabled on the VMAX by default. Is there a way to disable it from the storage side using SE/SMC/SymmWin?

No, it is enabled/disabled using the Advanced Settings per host:

DataMover.HardwareAcceleratedMove
DataMover.HardwareAcceleratedInit
VMFS3.HardwareAcceleratedLocking

jingyi wrote:

Or is there any utility to monitor VAAI status from the storage side?

Yes, use the array's performance tools (comparable to Unisphere Analyzer for VNX) to monitor it, as demonstrated in the video.
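One indirect check from the Solutions Enabler side, if you have SYMCLI access (treat this as a sketch and verify the flags against your SE version's docs), is to watch thin pool allocation and activity while the eagerzeroedthick disk is being created:

    # list thin pools and their current allocation (SYMCLI)
    symcfg -sid <SymmID> list -thin -pools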


April 27th, 2012 13:00

Chris & Jingyi,

Thanks for all the info. I'll take a look over the weekend and let you know if I have more questions.
