
October 15th, 2012 10:00

Performance thresholds with FAST VP

Could anyone share what they use for performance thresholds on a VMAX running FAST VP? Throughout our POC, and now in production, I'm seeing the BE directors at 45-75% busy across the array. According to the definition this doesn't necessarily seem like a bad thing:

% busy = 100 - %idle

This metric is available only for Fibre Channel directors.

Is this a measure of CPU? I thought the CPUs for the front end and back end were shared in the matrix?

The % busy on the ports stays under 5% during this time.

What is everyone else seeing on their arrays running FAST VP?
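As a quick illustration of the metric quoted above (% busy is just 100 minus %idle), a minimal sketch for flagging directors against a site-specific threshold — the director names, sample values, and threshold are all hypothetical:

```python
# Hypothetical %idle samples per BE director (not real data)
IDLE_SAMPLES = {
    "DF-7A": 40.0,
    "DF-8A": 55.0,
    "DF-7B": 25.0,
}
THRESHOLD = 70.0  # alert threshold for % busy; pick one that suits your array

def pct_busy(pct_idle: float) -> float:
    """% busy = 100 - %idle, per the metric definition above."""
    return 100.0 - pct_idle

# Directors whose % busy exceeds the threshold
hot = {d: pct_busy(i) for d, i in IDLE_SAMPLES.items() if pct_busy(i) > THRESHOLD}
print(hot)  # → {'DF-7B': 75.0}
```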

1.3K Posts

October 18th, 2012 08:00

Yes, that release should reduce the DA CPU % busy when FAST VP is enabled.   

465 Posts

October 15th, 2012 15:00

In VMAX, the back-end and front-end have their own CPU. The % busy metric is a measure of CPU busy and applies to back-end processors as well as front-end.

50 Posts

October 15th, 2012 15:00

That's what it seems like (the front end having different CPUs than the back end), but I thought the whole point of the matrix was sharing resources like CPU. In the heatmap, each director has a different % busy for the front end than the back end. The front-end % busy stays pretty low, but the back-end % busy stays in the 45-75% range. I'm not sure if this is normal due to FAST VP constantly working or if this is higher than it should be.

465 Posts

October 15th, 2012 16:00

The Matrix is the point-to-point connections between the directors and global memory.

Alas, what is 'normal' encompasses a wide spectrum and depends very much on workload and configuration (6 back-end I/Os are required for a RAID-6 write, for example). Certainly FAST will add to processor utilisation at the back end as it executes the extent moves between tiers.

I expect FAST has not been enabled long since it's a POC, but initially after enabling FAST there may be quite a number of moves to be done. You may want to adjust the FAST relocation rate and observe the difference in CPU utilisation on the back end.
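To put numbers on the write-penalty point above, here is a back-of-envelope sketch. The 6-I/O figure for RAID-6 is from the reply; the RAID-1 and RAID-5 penalties are the standard read-modify-write counts, and the 70/30 mix in the example is hypothetical:

```python
# Back-end I/Os generated per front-end write, by protection type.
# RAID-6 = 6 per the reply above; RAID-1 and RAID-5 are the usual penalties.
WRITE_PENALTY = {"RAID-1": 2, "RAID-5": 4, "RAID-6": 6}

def backend_iops(read_iops: float, write_iops: float, raid: str) -> float:
    """Translate front-end IOPS into back-end disk I/Os (reads cost 1 each)."""
    return read_iops + write_iops * WRITE_PENALTY[raid]

# e.g. 3000 front-end IOPS at a hypothetical 70/30 read/write mix:
print(backend_iops(2100, 900, "RAID-6"))  # 2100 + 900*6 = 7500.0
```

This is before any FAST VP relocation traffic, which adds further back-end I/O on top.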

50 Posts

October 15th, 2012 16:00

Thanks guys that clears things up a bit.

I do believe the relocation rate could be the factor here. I'd like data to get to the appropriate tier as fast as possible, so we have this set at 3 now to be slightly more conservative than setting it to 1. I read the whitepaper that showed the impact of adjusting the relocation rate on response time and time until data is moved, but I didn't think about what impact it would have on the CPUs. If it gets much higher I'll likely have to back it off a little.

We're past the POC, but these are all new provisions, so data is still moving around quite a bit between the tiers. I suspect that with the type of I/O our servers do we'll have a higher volume of data moves than a typical implementation, but only time will tell as we continue to add and migrate servers over.

1.3K Posts

October 15th, 2012 16:00

The front-end and back-end CPUs are separate.

50 Posts

October 15th, 2012 18:00

Yes, it is 5876. Are there any issues that came up around this?

1.3K Posts

October 15th, 2012 18:00

Is this 5876 by any chance?

50 Posts

October 16th, 2012 13:00

Thanks for sharing that info, Quincy. I'll keep an eye out for that release. Do you think that release will be listed in the EMC Technical Advisory notification?

I'm not worried about the CPU utilization today, but if it gets any higher I'll probably have to trim back the relocation rate as a starting point.

50 Posts

October 16th, 2012 13:00

Keeping an eye on this, I noticed the max values seem pretty high: spikes as high as 85% busy on about half of the directors, and spikes in the 70s on the others. Since FAST VP will only move a maximum amount of data in each interval, is it safe to say that FAST VP CPU utilization levels off?

We have about 10 hosts and 46 metaluns so far and will be migrating 70 additional hosts over the next several weeks. IOPS for the entire array currently averages only 2000-4000, and our array was sized for 90K IOPS before we went from RAID-5 to RAID-1 on the FC tier. I'd like to keep the relocation rate on the fast side, but I'm not sure if the array will be able to handle it once we get more hosts consuming I/O. In general, our server owners would rather accept a slight uptick in response time to get the data onto a higher tier faster.

1.3K Posts

October 16th, 2012 13:00

In 5876 there was a change in the way FAST VP background tasks are scheduled, allowing more DA CPU utilization than before. A release of 5876 due out soon will reduce the DA utilization back to a level similar to 5875.

You could disable FAST VP for some time to see whether the DA utilization goes down.
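One way to quantify that A/B check: collect DA % busy samples with FAST VP enabled, again after disabling it, and compare the averages. A minimal sketch with hypothetical sample values:

```python
from statistics import mean

# Hypothetical DA % busy samples, taken over comparable workload periods
fast_enabled = [55.0, 62.0, 71.0, 48.0, 66.0]   # FAST VP on
fast_disabled = [31.0, 28.0, 35.0, 30.0, 33.0]  # FAST VP off

# Difference in average utilization attributable to FAST VP relocations
delta = mean(fast_enabled) - mean(fast_disabled)
print(f"FAST VP accounts for roughly {delta:.1f} points of DA % busy")
```

Comparing like-for-like periods matters here: sampling the disabled window during a quieter host workload would overstate FAST VP's share.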

50 Posts

October 18th, 2012 08:00

Quincy - Is version 5876.159.102 what you were referring to? I don't see anything that refers to FAST VP scheduling, but it could be described differently in the release notes.

1.3K Posts

October 18th, 2012 08:00

Where do you find the release notes on Powerlink?  Can you post a link?

50 Posts

October 18th, 2012 08:00

Great thanks for the confirmation.
