

January 7th, 2015 09:00

Ask the Expert: VMAX FAST VP and Performance


The VMAX has multiple settings and features available on the front end as well as on the back end within the FAST VP mechanism. Tuning the VMAX can be complex, and multiple factors can impact performance. The goal of this forum is to focus on which settings and options allow full optimization of VMAX performance.

Meet Your Expert:



Kevin Gleeson

Technical Account Manager at EMC.

Kevin has been with EMC for nearly 20 years. He has spent 15 years supporting the EMC Symmetrix product range at many levels. For the last 5 years he has worked in the Symmetrix Level 2 group, specializing in performance. In 2012 he moved into the role of Technical Account Manager supporting an EMC Elite customer. Kevin was also recently named a member of EMC's Technical Leadership Academy.


This discussion takes place from Jan. 12th - Feb. 6th. Get ready by bookmarking this page or signing up for e-mail notifications.


Share this event on Twitter or LinkedIn:

>> Join me! Ask the Expert: VMAX FAST VP and Performance. http://bit.ly/1wrhcw3 Jan. 12 #EMCATE <<

January 12th, 2015 06:00

Your questions are welcome now that the event has begun. Let's keep this conversation respectful and dynamic.

Cheers!

1 Rookie

 • 

20.4K Posts

January 12th, 2015 18:00

Kevin,

1) Back in 2010, when we bought our first VMAX, we configured our middle tier as 300 GB 10K drives in RAID 5 (4+1). For maybe the last two years the recommendation has been to configure the middle tier as RAID 1. Obviously that is very expensive. What my TC explained was that if everything is bound to the middle tier, then that tier will incur a lot of load because that's where the data lands first. Later on it will move up and down between the Flash/SATA tiers, but that middle tier needs to be "speedy". Can you please elaborate on what changed, what EMC learned, and why they decided to change the recommendation?

2) VMAX has never had as many knobs to tweak performance as its midtier brother, the CLARiiON/VNX. Are we losing even more knobs with VMAX3 and its SLO model? For example, I know that meta volumes are no longer there. Take a situation where I have SRDF/S volumes being used for Oracle redo logs: while these logs are small (8.4), we configured them as a 10-way striped meta to help with performance (especially in an SRDF/S configuration). So there are things we could do on previous VMAX generations; what are we losing with VMAX3?

3) Is it time to integrate RecoverPoint appliances into the platform and get rid of SRDF/S/A?

62 Posts

January 13th, 2015 03:00

1) Compare the number of IOPS required on the back end of the array to complete a write to a RAID 1 vs. a RAID 5 device. On RAID 1 it takes two back-end disk operations to complete a write; on RAID 5 it takes four disk operations.

For this reason, if you are bound to the FC tier on RAID 5 and all new writes are destined for the FC tier, it will lead to more back-end IOPS. Using RAID 1 reduces the FC disk load (see the quick sketch below).

The reason for the change in policy was based on analysis of customer data from production arrays.
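Not from the original thread, but a small sketch of the arithmetic behind this recommendation, assuming a purely random-write workload and the classic write penalties of 2 for RAID 1 and 4 for RAID 5 (read old data, read old parity, write new data, write new parity):

# Rough sketch: back-end write IOPS implied by the RAID write penalty.
RAID_WRITE_PENALTY = {"RAID1": 2, "RAID5": 4}

def backend_write_iops(host_write_iops: int, raid_type: str) -> int:
    """Back-end disk operations generated by a given host write rate."""
    return host_write_iops * RAID_WRITE_PENALTY[raid_type]

# Example: 5,000 host writes/s landing on the bound FC tier.
for raid in ("RAID1", "RAID5"):
    print(raid, backend_write_iops(5000, raid), "back-end IOPS")
# RAID1 -> 10,000 back-end IOPS; RAID5 -> 20,000, i.e. twice the FC disk load.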

2) Yes, there are fewer knobs on VMAX3, simply because they are not required. The reason is that VMAX3 is a complete redesign of the architecture. The core CPU processing characteristics have changed to allow simpler deployment, allocations, etc.

In relation to SRDF, Oracle redo, and other specific applications, the new design can handle these without the need for metas, again because the limitations of the earlier VMAX models have been removed.

3) In my honest opinion, SRDF on VMAX3 will be faster and more efficient than on the current VMAX, so RPAs are not needed unless there are other business reasons for them. Performance is not compromised with SRDF on VMAX3.

1 Rookie

 • 

20.4K Posts

January 17th, 2015 20:00

So besides the obvious things like "don't bind to the SATA pool unless it's for a specific purpose", what other things do you see in the field that we should NOT do?

62 Posts

January 19th, 2015 02:00

That question is rather open ended, but here are some things I would recommend.

Monitor the array using Unisphere.

Be vigilant about FA CPU utilization; above 65%-70% you will see latency.

Review the back-end utilization to ensure redundancy.
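Purely as an illustration (not from the thread): one way to act on that FA guidance is to flag any director whose CPU utilization crosses the 65%-70% range Kevin mentions. The director names and percentages below are invented; in practice the figures would come from Unisphere for VMAX performance views or an export of them.

# Hypothetical sample of FA percent-busy readings.
fa_utilization = {"FA-7E:0": 42.0, "FA-8E:0": 68.5, "FA-9F:1": 73.2}

WARN_THRESHOLD = 65.0  # per the 65%-70% guidance above

for director, pct_busy in sorted(fa_utilization.items()):
    status = "WARN: expect added latency" if pct_busy >= WARN_THRESHOLD else "ok"
    print(f"{director}: {pct_busy:.1f}% busy - {status}")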

1 Rookie

 • 

20.4K Posts

January 19th, 2015 07:00

My question is specifically around FAST VP: what should we not do?

24 Posts

January 19th, 2015 14:00

Is there a doc / KB article that explains what priorities the different FAST processes get?

We have FC PRC set to 10%. If we have an inbound migration over SRDF and bind the target devices to FC, this may take the pool to 95% full.

Other instances might be a server importing a large amount of data which requires new extents - this could also push FC over its PRC limit.

Are requests for new extents over RDF treated the same as new extents requested by a server, or does one have a higher priority? And what effect (if any) does this have on standard performance FAST moves?

=========================================================================================

Another question is around AllocByFastPolicy - how does the array determine which of the other tiers new extents should be served from, assuming EFD and SATA both have free capacity?

=========================================================================================

When write pending starts to get high, what effect does this have on FAST movements, and at what levels of WP does each of these limiters take effect?

62 Posts

January 20th, 2015 02:00

What not to do with FAST VP:

1. As you mentioned, it is not advisable to bind to SATA.

2. In my experience, limit the number of policies in use.

3. Stick to the default setting for the workload analysis period.

4. Bind By Policy should be enabled if you are running 5876.229 or above.

I am going to share a link covering FAST VP best practices.

62 Posts

January 20th, 2015 02:00

Is there a doc / KB article that explains what priorities the different FAST processes get. We have FC PRC set to 10%, if we have an inbound migration over SRDF and bind the target devices to FC this may take the pool to 95% full. Other instances might be a server importing a large amount of data which requires new extents - this could push also FC over it's PRC limit. Are requests for new extents over RDF treat the same as new extents requested by a server or does one have a higher priority and what effect (if any) does this have over standard performance FAST moves. ========================================================================================= Another question is around AllocByFastPolicy - how does the array determine which of the other tiers new extents should be served from assuming EFD / SATA both have free capacity ? ========================================================================================= When write pending starts to get high, what effect does this have on FAST movements and at what levels of WP do each of these limiters take effect ?

1 Rookie

 • 

20.4K Posts

January 20th, 2015 04:00

Kevin, looks like your reply got cut off.

62 Posts

January 20th, 2015 05:00

Is there a doc / KB article that explains what priorities the different FAST processes get?

Nothing specific. When a storage group is associated with a FAST policy, a priority value must be assigned to the storage group. Storage groups with a higher priority will be given preference when deciding which data needs to be moved to another tier. Changing the priority effectively changes the performance score of the extents within the storage group, giving the data higher or lower priority for promotion or demotion. The value can be between 1 and 3; the default is 2, and 1 is the highest priority.

We have FC PRC set to 10%; an inbound migration over SRDF with the target devices bound to FC may take the pool to 95% full.

FC hitting 95% can be a concern in the short term. I would recommend that you enable bind by policy; this way new writes will always be handled by the array. I would not change the PRC from 10%.

Other instances might be a server importing a large amount of data which requires new extents - this could also push FC over its PRC limit.

Again, use bind by policy to ensure the array can distribute new writes.

Are requests for new extents over RDF treated the same as new extents requested by a server, or does one have a higher priority, and what effect (if any) does this have on standard performance FAST moves?

Write requests, whether from SRDF or the front end, are serviced the same. For performance consideration on the DR side, FAST VP coordination should be considered.

Another question is around AllocByFastPolicy - how does the array determine which of the other tiers new extents should be served from, assuming EFD / SATA both have free capacity?

AllocByFastPolicy allows allocations for new writes to come from any pool in the policy. The pool selection order is as follows; if one method fails, it continues to the next method:

1. The Extent Group Set is assigned a tier and the array attempts to use this tier first, subject to compliance limits. Entirely unallocated Extent Group Sets will not have a tier assigned.
2. Allocation from the bound pool, with no restrictions.
3. Tiers are selected in order from smallest capacity to largest capacity, obeying any compliance limits.
4. Failsafe method: tiers are selected in order from smallest to largest, ignoring compliance limits.

When write pending starts to get high, what effect does this have on FAST movements, and at what levels of WP does each of these limiters take effect?

FAST movement throughput will be affected/slowed/stopped if the array hits the 50% system write-pending limit.
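To make the four-step AllocByFastPolicy selection order above a little more concrete, here is a rough sketch in Python. This is not array code; the pool names, capacities, and the compliance_ok() placeholder are invented purely for illustration.

# Sketch of the AllocByFastPolicy pool-selection fallback described above.
from typing import Optional

pools = [
    {"name": "EFD_Pool",  "tier": "EFD",  "free_gb": 800,  "capacity_gb": 4000},
    {"name": "FC_Pool",   "tier": "FC",   "free_gb": 0,    "capacity_gb": 20000},
    {"name": "SATA_Pool", "tier": "SATA", "free_gb": 9000, "capacity_gb": 60000},
]

def compliance_ok(pool: dict) -> bool:
    # Placeholder: pretend a pool is compliant while it still has free space.
    return pool["free_gb"] > 0

def select_pool(assigned_tier: Optional[str], bound_pool: str) -> Optional[dict]:
    by_name = {p["name"]: p for p in pools}
    # 1. Tier already assigned to the Extent Group Set, subject to compliance limits.
    if assigned_tier:
        for p in pools:
            if p["tier"] == assigned_tier and compliance_ok(p):
                return p
    # 2. The bound pool, no restrictions.
    if by_name[bound_pool]["free_gb"] > 0:
        return by_name[bound_pool]
    # 3. Smallest to largest capacity, obeying compliance limits.
    for p in sorted(pools, key=lambda p: p["capacity_gb"]):
        if compliance_ok(p):
            return p
    # 4. Failsafe: smallest to largest, ignoring compliance limits.
    for p in sorted(pools, key=lambda p: p["capacity_gb"]):
        if p["free_gb"] > 0:
            return p
    return None

# Bound FC pool is full, so the smallest compliant tier (EFD) is used instead.
print(select_pool(assigned_tier=None, bound_pool="FC_Pool"))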

53 Posts

January 23rd, 2015 08:00

Kevin:

If FAST is configured as recommended, and we're willing to let it make all the decisions, all the time, shouldn't max SG demand always exceed what is available to FAST? We see these red exclamation marks in the FAST config section, but I generally ignore them because we're thin and I want FAST to have it all. I guess my question is whether it's safe to just ignore that, or whether it's something one should try to address with more granular FAST tweaking.

Thanks!

Brandon

62 Posts

January 26th, 2015 02:00

Hi Brandon.

I assume you are over-provisioned; if that is the case, you will see these warnings.

Leaving everything to FAST VP is not an issue.

Just to ensure we are discussing the same thing, can you email me an output?
Thanks

February 2nd, 2015 08:00

We are a little paranoid about FAST and pools filling up. Reading through the various documents, it says it's a good idea to enable "allocate by FAST policy" as a means of preventing this from happening.


Apart from FTS storage (which we use for archive/SATA volumes), we bind everything else to FC. However, the FC pools are starting to fill up in places. What we want is for it to tier data down to SATA as it becomes cold, but if we add a lot of new data in a given day (or short period of time), we need new writes to spill over automatically and not just generate IO errors to the hosts.


So my questions are:

  • Will these writes spill over without setting the above flag?
  • And if they do, is there a downside to setting this flag? I.e., I'm surprised it's not on by default?

62 Posts

February 5th, 2015 00:00

If Allocate by Policy is enabled, it will allow writes to spill over. If it is not enabled and the FC tier that an application is bound to becomes full, you will see write errors, and writes will likely fail if FAST cannot move data out of the FC tier.

There is no downside to setting the flag; it is purely a safeguard against write errors caused by the bound pool being full.
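A tiny illustration (again, not array code, just a sketch of the behavior described above): with the flag off, a write to a full bound pool returns an error; with it on, the allocation spills over to another pool in the FAST policy. All names and numbers are invented.

def allocate(bound_pool_free_gb: float, other_pools_free_gb: float,
             allocate_by_policy: bool, write_gb: float) -> str:
    # Bound pool still has room: allocate there as usual.
    if bound_pool_free_gb >= write_gb:
        return "allocated from bound FC pool"
    # Bound pool full: spill over only if the flag is set and another pool has room.
    if allocate_by_policy and other_pools_free_gb >= write_gb:
        return "spilled over to another pool in the FAST policy"
    return "write error returned to host"

print(allocate(0.5, 500, allocate_by_policy=False, write_gb=10))  # write error
print(allocate(0.5, 500, allocate_by_policy=True,  write_gb=10))  # spills over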
