June 27th, 2013 12:00

Ask the Expert: FAST VP on VMAX Arrays

Welcome to this EMC Support Community Ask the Expert conversation. This discussion will focus on best practices for FAST VP on VMAX, including:

  • What configuration and code will allow FAST VP to maximize the performance of a VMAX array
  • Optimal settings for FAST VP
  • The difference between 5875 code and 5876 code for FAST VP
  • What to avoid when implementing FAST VP

Your host:


Kevin Gleeson is an 18-year EMC veteran who has spent 15 years supporting the EMC Symmetrix product range at many levels. He is currently a Technical Account Manager.

This discussion begins on July 8 and concludes on July 26. Get ready by following this page to receive updates in your activity stream or through email.

1.3K Posts

July 16th, 2013 09:00

Jonathan, maybe we can move this discussion to a new thread.  It has little to do with FAST VP.

Host striping DOES allow prefetch. However, say you had an 8k host stripe across 32 devices. Unless your sequences were long (about 1MB), we wouldn't see any sequential activity. For very long sequences, though, we would detect it on all 32 devices and start prefetch.
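The arithmetic behind that 1MB figure can be sketched as follows. The detection threshold of a few consecutive chunks per device is an illustrative assumption, not a documented Symmetrix internal:

```python
# Sketch: why a fine-grained host stripe hides short sequential runs
# from the array's prefetch detector. CHUNKS_NEEDED is an assumed
# per-device detection threshold, purely for illustration.

STRIPE_SIZE = 8 * 1024   # 8 KB host stripe depth
NUM_DEVICES = 32         # devices in the stripe set
CHUNKS_NEEDED = 4        # assumed consecutive chunks one device must see

# One full pass across the stripe set consumes this much host data,
# while each device receives only a single 8 KB chunk:
pass_size = STRIPE_SIZE * NUM_DEVICES         # 256 KB

# For any one device to see CHUNKS_NEEDED consecutive chunks, the
# host-level sequence must span that many full passes:
min_sequence = pass_size * CHUNKS_NEEDED      # 1 MB

print(f"host sequence needed before prefetch can engage: "
      f"{min_sequence // 1024} KB")
```

So short sequential bursts look random to every individual device; only runs on the order of a megabyte touch each device often enough to register as sequential.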

The point you make about the host only seeing one device is another reason I like host striping.

Yes, I work in Symmetrix Performance Engineering.

274.2K Posts

July 16th, 2013 09:00

Hi Jonathan, how are you?

I believe there is a misunderstanding.

What we saw in the Symmetrix Performance course is that if the striping is implemented through Symmetrix RAID-5, RAID-6, or striped meta devices, the directors will recognize that the I/Os are part of the same device and create prefetch tasks to pre-populate cache with the data. This is in the Back End module, page 34.

We also discussed that host striping can hinder prefetch (depending on the stripe size); see page 68 of the Back End module.

Best regards,

1.3K Posts

July 16th, 2013 13:00

You didn't mention whether it was SRDF/S or SRDF/A.

With /A you always want the R2 to be as fast or faster than the R1.

In general, with FAST VP, for the R2 to have the same performance when used in a failover scenario, it should have the same sort of configuration as the R1.  If it has slower components than the R1, it will be slower when you fail over.

Clones can be on slower technologies, but they need to be able to meet the demands of the workload that is planned for them.  If it is reporting or batch, they may have a significant load put on them.  Putting them under FAST control will depend on how long they plan to be active. No sense in trying to optimize for a job that will only run a few hours.

22 Posts

July 16th, 2013 13:00

What are the best practices for FAST VP with SRDF R2s and clones?

We always use lower-tier disks for R2s and clones than for production.

7 Posts

July 16th, 2013 16:00

FAST does not work well on pooled mainframe file systems, specifically SMS.

The data cycles between the pool devices, and the FAST analysis cannot be applied because the data is repeatedly erased and rewritten.

38 Posts

July 16th, 2013 19:00

My question is around Pool Reserved Capacity, particularly on a VMAX 10K with all virtual provisioning.

When the storage group is under FAST VP control, my understanding is that if you set a percentage in the FAST settings and a tier runs out of space, FAST will provision the needed extents from other tiers in its policy.

Since it is possible to have storage groups not in a FAST policy, how is the provisioning of extents governed on those volumes? Can they cause the thin pool to have an out-of-space condition if the subscription rate is set to 100% or more?

62 Posts

July 17th, 2013 05:00

In relation to FAST VP and the R2.

There are two choices:

1. Assign the R2 to a chosen tier. In the case of DR, you would need to use an FC tier to mimic some degree of the performance of the FAST VP R1 array.

2. If you are running the latest 5876 code, there is an option to use FAST VP SRDF coordination.

This allows recommendations about movements on the R2 side to come from the R1 side.

July 17th, 2013 06:00

To Quincy56 and Kevin,

Notwithstanding the fact that auto-meta configuration types (concat vs. striped vs. N/A) are out of scope for this FAST VP discussion, I wanted to ask you to follow up on my question about t-dev expansion techniques when the t-dev is configured as a striped meta. Since the topic of metas shows up in virtually every other reply in this FAST VP topic, I think it is a reasonable question. How can one dynamically expand the cylinder size of a t-dev configured as a striped meta without bringing down the application or its constituent BMs or VMs?

62 Posts

July 17th, 2013 07:00

Online striped meta expansion is supported.

The only place I know of where it may need downtime is with Veritas and Windows hosts.

Please correct me if I am wrong.

62 Posts

July 17th, 2013 07:00

Q: My question is around Pool Reserved Capacity, particularly on a VMAX 10K with all virtual provisioning.

When the storage group is under FAST VP control, my understanding is that if you set a percentage in the FAST settings and a tier runs out of space, FAST will provision the needed extents from other tiers in its policy.

Since it is possible to have storage groups not in a FAST policy, how is the provisioning of extents governed on those volumes? Can they cause the thin pool to have an out-of-space condition if the subscription rate is set to 100% or more?

A: I am a little confused by the question, but here is what to expect from PRC:

  • PRC will prevent FAST VP from moving data into a tier beyond the reserved capacity.
  • For example, a PRC of 10% on the FC tier means that once the FC tier hits 90% full, FAST cannot push data beyond that.
  • Promotion to FC can then only occur when demotion also occurs.
  • The PRC is there to protect space for new writes/allocations.
  • So if the pool is at 90% and a new write comes in, it can go into the 10% reserved space and push the tier above 90%, depending on the load.
  • To summarize: PRC blocks FAST from filling the tier to 100% and leaves room for new writes/allocations.
  • To further prevent issues, Bind by Policy was introduced in 5876 code, which means writes can go to any tier within the SG's policy if the primary tier cannot accommodate them.
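The asymmetry in those bullets (FAST blocked at the PRC boundary, host writes not) can be sketched as a pair of checks. The pool model and function names are illustrative, not Enginuity internals:

```python
# Sketch of the PRC behaviour described above, assuming a simple
# percent-full pool model; names and logic are illustrative only.

def fast_can_promote(pool_used_pct: float, prc_pct: float) -> bool:
    """FAST VP may move data into the tier only below (100 - PRC)% full."""
    return pool_used_pct < 100.0 - prc_pct

def host_write_allowed(pool_used_pct: float) -> bool:
    """New host writes/allocations may also consume the PRC headroom."""
    return pool_used_pct < 100.0

# FC tier with a 10% PRC:
print(fast_can_promote(89.0, 10.0))   # True  - below the 90% ceiling
print(fast_can_promote(90.0, 10.0))   # False - FAST is now blocked
print(host_write_allowed(95.0))       # True  - writes still land in PRC space
```

In other words, the reserved slice is invisible to tiering movements but fully available to new allocations.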

38 Posts

July 17th, 2013 07:00

What I'm trying to get clear is how the PRC settings are enforced. If I use the 5876 code setting in FAST and set PRC by policy to, say, 10%, I believe this is a global setting. Then I can go to the EFD thin pool itself and set its PRC to 1%; my understanding is that this overrides the global FAST policy and FAST will be able to fill it to 99%. Is that an accurate statement?

38 Posts

July 17th, 2013 07:00

Thanks for the clarification, much appreciated.

Mark Nixon | Arraya Solutions

Certified EMC Expert

Senior Solutions Engineer

523 Plymouth Road, Suite 212, Plymouth Meeting, PA 19462

Cell: 610-368-5167 | Support: 610-684-8645

“Do not let the perfect be the enemy of the good.”

62 Posts

July 17th, 2013 07:00

There is a global PRC setting, which is not really used.

Even if you set it, you also set a PRC per pool, e.g. 1% on EFD, 10% on FC, and 1% on SATA.

The per-pool PRC is what FAST VP follows, and if you set that to 10%, at any code level FAST cannot go beyond it.
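The precedence described above can be sketched as a simple lookup; the dict layout and the fallback to the global value are assumptions for illustration, not a Solutions Enabler structure:

```python
# Sketch of effective-PRC resolution per the answer above: the
# per-pool value, when set, is what FAST VP actually follows.

GLOBAL_PRC = 10.0  # global setting, described as "not really used"

per_pool_prc = {"EFD": 1.0, "FC": 10.0, "SATA": 1.0}

def effective_prc(pool: str) -> float:
    """Per-pool PRC overrides the global value when present."""
    return per_pool_prc.get(pool, GLOBAL_PRC)

print(effective_prc("EFD"))   # 1.0  - FAST can fill EFD to 99%
print(effective_prc("FC"))    # 10.0 - FC is capped at 90%
```

This matches the questioner's reading: a 1% per-pool PRC on the EFD pool lets FAST fill it to 99% regardless of the global setting.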

2.1K Posts

July 17th, 2013 10:00

Makes sense, but I guess it depends a lot on the workload too. If it is just data getting written that won't be read back right away, then the write cache should absorb spikes and then destage to disk. If it is being actively read back soon after, then (as long as you have enough cache) the read may still be served from the cache where the write may still be sitting. I would think only a small subset of application types would be noticeably impacted by this.

For us (with our purchasing and allocation model), the risk of running out of physical capacity when thin provisioning is way higher than the risk of a temporary minor performance impact on an application. I understand that may not be true for everyone; we've been doing it this way for over a year now with no reported performance issues related to tiering. To be honest, this past year with FAST VP up and running has been the first entire year with no reported performance problems pointing to a Symmetrix. We are heavy on SATA drives now and getting way more out of the system than we ever got with all 15K FC drives on the DMX3.

1.3K Posts

July 17th, 2013 10:00

If you pre-allocate, any idle or free space will likely be demoted to the lowest tier. Then, when that space is written for the first time, the performance will be low.

If it wasn't pre-allocated, the writes would be allocated from the bound tier (like FC) or by policy, and therefore may start out faster.
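The first-write behaviour described above can be sketched as a small decision rule. The tier names and the "idle pre-allocated space ends up on the lowest tier" simplification are illustrative assumptions:

```python
# Sketch of where the first write to an extent lands under FAST VP,
# per the explanation above. Tier names and the demotion rule are
# simplified assumptions, not documented FAST VP behaviour.

def first_write_tier(pre_allocated: bool, bound_tier: str = "FC") -> str:
    """Return the tier expected to serve the first write to an extent."""
    if pre_allocated:
        # Idle pre-allocated space tends to be demoted to the lowest
        # tier, so the first real write hits that slow tier.
        return "SATA"
    # Otherwise the new allocation comes from the bound tier (or by policy).
    return bound_tier

print(first_write_tier(True))    # SATA - likely slower first write
print(first_write_tier(False))   # FC   - allocation from the bound tier
```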
