
FAST Configuration

March 9th, 2015 03:00

Hi All.

I know FAST has been covered a lot, but I can't seem to find an answer to a question I have.

Can you enable FAST across pools?

Pool1 - EFD

Pool2 - SAS

Pool3 - NL-SAS

I have to upgrade a VNX 5300.

Current usable capacity is 42TB.

The current configuration, set up as a single pool, is:

6 x 100GB SSD

15 x 300GB SAS

16 x 3TB NL-SAS

Also 2 x 100GB disks as FAST Cache.

Is the best practice to have a single multi-tiered pool?

I realize this is application and workload dependent as well.

What license do I look for to see if FAST and FAST Cache have been enabled?

Regards,

I

214 Posts

March 9th, 2015 04:00

You can't do FAST tiering across pools, only within a pool.

If you want to make use of tiering, then obviously you need one or more pools with multiple tiers.

The SSD tier should be large enough to hold your working data set, which might be, for example, 5-10% of the size of the pool, but it depends on your environment and workload.
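To put a rough number on it: 5-10% of your 42TB usable is about 2.1-4.2TB of SSD tier, whereas your current 6 x 100GB SSDs give you well under 1TB raw, so the SSD tier is on the small side if your working set is anywhere near that rule of thumb.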

The whole idea of pools is to let FAST put your least active data (could be archive data) on the cheapest tier, e.g. NL-SAS, so you're not paying for an array full of expensive Flash or SAS drives.

Some people have multiple pools for different types of workload. E.g. you might have a RAID 1/0 pool just for sequential log files (SQL, Oracle, etc.), another pool for file (CIFS/NFS) use, another pool for random I/O, and so on.

Obviously, to have multiple pools you need more drives to spread the IOPS across the pools, so if you only have a small number of drives it's probably better to stick with one pool. Your current pool layout looks fine to me and follows best practice.

If you expand the pool, make sure you add drives in the same underlying private RAID group layout, e.g. 4+1 RAID 5 or 6+2 RAID 6. Check the properties of your existing pool to find out what that is. Don't forget a hot spare for every 30 drives of each type you have in the configuration.
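From the CLI you can see how each tier is laid out and used with something like the following (naviseccli syntax from memory, so double-check it against the CLI reference for your VNX OE release):

naviseccli -h <SP_IP> storagepool -list -tiers

And on the spare rule of thumb: your 16 x 3TB NL-SAS need 1 hot spare now, and a second one as soon as you grow that drive type past 30.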

Check out the VNX best practices white paper:

https://www.emc.com/collateral/software/white-papers/h10938-vnx-best-practices-wp.pdf

To check for the FAST Cache and FAST VP licenses, right-click your storage array and check Properties in the Block Hardware View.
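If you prefer the CLI, listing the installed enablers should show whether the FAST and FAST Cache packages are licensed, and the cache command shows the FAST Cache state. Again from memory, so verify the exact syntax for your release:

naviseccli -h <SP_IP> ndu -list
naviseccli -h <SP_IP> cache -fast -info

Look for the FAST and FAST Cache enabler packages in the ndu output.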

Ask your EMC partner to evaluate your current environment and recommend expansion options as part of their pre-sales work. They may recommend additional FAST Cache as well if your workload is benefiting from it.

March 9th, 2015 05:00

Smarti has provided a good answer for you.

I'd like to ask, though: why the 6 x SSD in the pool? It seems like an odd grouping and doesn't follow the best practice for RAID 5 (4+1). You'd be advised to keep that grouping going forward should you expand that pool with more SSDs, to avoid an imbalance in the private RAID group(s) in the tier.

Have you analyzed the current performance of the array? Perhaps take advantage of a MiTrend report to see how it's performing under the covers. This will also help you make good decisions on whether to expand the current pool or create a new pool, increase FAST Cache, segregate workloads, etc.

If you are considering multiple pools, FAST Cache can be turned on or off individually for each pool; but in your example, FAST Cache wouldn't be beneficial to an all-EFD pool, as the data would not be eligible for promotion into the cache.
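If I remember rightly, the per-pool setting is also exposed in the CLI along these lines (treat the exact flag name as an assumption and confirm it in the CLI reference):

naviseccli -h <SP_IP> storagepool -modify -id <pool_id> -fastcache off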

FAST Cache WP attached.


59 Posts

March 9th, 2015 05:00

Hi,

Thank you for the replies.

I have received the MiTrend report.

The balance across the different disk types did not look right to me.

Going with a new pool this could be rectified, and we could also use bigger drives, e.g. 600GB 15k compared to the current 300GB 10k.

I can change the SSD count to match the RAID type.

This is a 5300 so the drive count max is 121 at present.

Upgrading this array to achieve the 80TB usable requirement will bring the array to its limit.

VMware is currently using this array, with average IOPS around 10,000.

I have attached some data from the MiTrend report.

Please advise, as this is the first time I have been this involved in a VNX upgrade.

Thanks.

I


214 Posts

March 9th, 2015 09:00

I'm not 100% sure what you are asking now.

Could you clarify what response you would like?

The 6 SSD drives might be configured as 3+3 RAID 1/0, but I can't tell from your report. You would need to get that from the properties of your pool in Unisphere.

According to the report you are only doing 6,000 peak IOPS, with a 95th percentile of 4,153 IOPS.

Your current storage pool, with RAID 5 for the SSDs, RAID 5 for SAS and RAID 6 for NL-SAS, should be able to handle up to 11,000 IOPS based on 60% reads and 40% writes.

With RAID 1/0 for the SSDs you could get up to 16,000 IOPS for the same read/write split.
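For anyone wondering where figures like that come from, the usual rule of thumb (a simplification that ignores cache effects) is to convert host IOPS to back-end disk IOPS using the RAID write penalty: RAID 1/0 = 2, RAID 5 = 4, RAID 6 = 6. So 10,000 host IOPS at 60/40 read/write on RAID 5 needs roughly 0.6 x 10,000 + 0.4 x 10,000 x 4 = 22,000 back-end IOPS, while the same load on RAID 1/0 needs only 0.6 x 10,000 + 0.4 x 10,000 x 2 = 14,000, which is why the 1/0 layout buys you more host IOPS from the same spindles.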

Swapping the 300GB drives for 600GB drives isn't going to make that much difference, but adding more SSDs will give you a lot more IOPS, approx. 3K-5K IOPS per SSD. Obviously the 600GB drives will give you more capacity, but I would recommend adding more SSDs for the IOPS and more NL-SAS for the capacity.

At the moment your tiers are approx. 1% SSD, 9% SAS and 90% NL-SAS. You need to make your higher tiers large enough to hold your working data set. A good rule of thumb is at least 5-10% for the SSD tier, but it does depend on the working data set.
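Roughly, ignoring RAID and metadata overhead: 6 x 100GB = 0.6TB SSD, 15 x 300GB = 4.5TB SAS and 16 x 3TB = 48TB NL-SAS, about 53TB raw in total, which works out to around 1% / 8.5% / 90.5% across the tiers.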

You also have FAST Cache in front of the pools, which helps with write bursts and acts as an extension of the RAM cache.

I repeat: whoever you are buying the disks from should help you size this properly rather than just shift boxes.

Hope this helps.

59 Posts

March 9th, 2015 23:00

Not to worry about what I posted earlier.

You answered quite a lot of my questions in your post.

Last question about FAST: does it place data in the lowest tier first or in the fastest tier, or can this be set per pool?

Looking at the current disk layout, I can see that there is a lot of I/O on the NL-SAS and not on the SAS drives. I was thinking of increasing the SAS count to bring this more in line with the 5%/25%/70% distribution, but I am concerned about the NL-SAS tier at the moment.

Thanks.

I

March 10th, 2015 00:00

For FAST VP, you can set the tiering policy per LUN in a number of ways, the default being "Start High then Auto-Tier", but you can select other policies at creation.

The other policies are "Highest Available Tier", "Auto-Tier", "Lowest Available Tier" and "No Data Movement".

After creation, you can also "pin" a LUN to a tier as required, locking a workload to a disk profile, which can help bring utilization more into line with what you want.

Data is moved between Tiers on a schedule (or on demand) in chunks of 1GB.
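For reference, the policy can also be set from the CLI. The option below is from memory, so treat the exact flag and value names as assumptions and confirm them in the CLI reference for your release:

naviseccli -h <SP_IP> lun -modify -l <lun_id> -tieringPolicy lowestAvailable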

All the details around each policy can be found in the document attached.


214 Posts

March 10th, 2015 03:00

Brett has it spot on.

You can increase the SAS tier to 20%, but if you can afford SSD I would increase that tier to 5-10% first. The working data set is the important thing here: if it fits in the SSD tier, your performance should be fine, leaving you to grow the other tiers for capacity as you need.

If you have Exchange 2013 LUNs within the pool, it's probably best to pin them to the NL-SAS tier, as the background maintenance tasks within Exchange can screw with the FAST Cache and auto-tiering policies, and that may be why you're seeing high IOPS reported on the NL-SAS disks. Exchange 2013 doesn't need high-IOPS disk, as that's handled on the host side these days. Check the VNX and Exchange best practice whitepaper.

Can you see what LUNs are causing the high IOPS on the NL_SAS disks?
