
February 9th, 2016 10:00

VNX2 Storage Pools - Non FAST

VNX2 - Non FAST and all 1 Tier (NL-SAS)

If I have a VNX2, or even a VNX1 for that matter, configured with storage pools, will my data be striped across all disks in the pool as it would be with FAST enabled?

The way I understand FAST is that data extents (slices) are moved between tiers in either 1 GB (VNX1) or 256 MB (VNX2) slices.

I have an application that is heavily sequential, roughly 70% writes. Right now I am using RAID groups and multiple traditional LUNs presented to a Linux host and bound into a single volume group.
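For reference, the Linux side is bound roughly like this; a minimal sketch of what I'm doing today, with placeholder device names:

# each traditional LUN arrives at the host as its own multipath device
pvcreate /dev/mapper/lun0 /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3
vgcreate appvg /dev/mapper/lun0 /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3
# stripe the LV across all four PVs so sequential writes hit every RAID group
# -i = number of stripes, -I = stripe size in KiB
lvcreate -n applv -i 4 -I 256 -l 100%FREE appvg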

195 Posts

February 9th, 2016 10:00

Short answer:  Yes, the pool is composed of one or more RAID groups of disks, and LUNs in the pool will use all of those groups.

If you aren't using thin LUNs, though, there would be a couple of advantages to using MetaLUNs across multiple traditional RAID groups:

1)  On VNX2, LUNs in RAID groups are served with Symmetric Active/Active access to both SPs, so you can round-robin across all paths to the LUN, not just the paths to one SP (see the multipath sketch below).

2)  There is a modest performance hit of a couple of percentage points to using pools over traditional RAID groups.  This amounts to a bit of software overhead in the execution path for pools.
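To actually round-robin over every path with Symmetric Active/Active, the dm-multipath config would look roughly like this. This is an illustrative sketch, not a verified config; check the vendor/product strings and defaults for your distribution:

devices {
    device {
        vendor  "DGC"
        product ".*"
        # all paths are equal under Symmetric Active/Active, so group them together
        path_grouping_policy multibus
        path_selector "round-robin 0"
    }
}

For ALUA pool LUNs you would instead use path_grouping_policy group_by_prio with prio alua, which round-robins only across the paths to the owning SP.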

16 Posts

February 10th, 2016 20:00

Why use a pool if you have only NL-SAS disks? A RAID group has better performance, doesn't it?

8.6K Posts

February 11th, 2016 03:00

Maybe flexibility, thin LUNs, snapshots, deduplication, ...

195 Posts

February 11th, 2016 06:00

A good and valid question.  I manage 14 VNX/VNX2 arrays; fully ten of those are configured with only one drive type, and all of those are organized into RAID groups, not pools.  You seemed particularly interested in VNX2 units, but for VNX arrays, and in particular the 'smaller' ones, there is a good argument to be made for going a bit further than just not using pools; you can significantly improve the robustness of the units by eliminating the enablers for them altogether.

Here are the Software and SP Memory tabs from the properties of a 5300 I have with pools:

[Screenshot: Software tab (F1Soft.PNG)]

[Screenshot: SP Memory tab (F1Mem.PNG)]

And here are those same tabs from one that I have 'cleaned up':

[Screenshot: Software tab (B5soft.PNG)]

[Screenshot: SP Memory tab (B5Mem.PNG)]

I would draw your attention to the SP Usage and Write Cache Memory differences between the two.  By eliminating enablers, the SP Usage footprint drops significantly, more than doubling the available write cache.  The stripped-down systems can only create RAID groups, not pools, but they can still support MetaLUNs, as well as Analyzer for monitoring and troubleshooting.
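You can see the same numbers from the CLI; something like the following (command syntax from memory for the FLARE-based units, so double-check against your naviseccli reference):

naviseccli -h <sp_ip> cache -sp -info

That reports the read and write cache sizes and state per SP, so you can compare the 'cleaned up' units against the ones still carrying the enablers.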

Without MCx, and probably more importantly without the larger SP RAM of the 5200/5400, that difference in write cache makes a profound difference in the robustness of those 5300s.  They experience far fewer forced flushes of their write cache at any workload, along with an improved ability to defer and coalesce writes, and that provides a more consistent write latency.  Read latency is also improved since the back-end buses aren't as often clogged up with write activity.

I have a 5200 with a similar uniform disk configuration; that unit has double the SP memory and all the advantages of the MCx code.  It is licensed for just the core components, but it is still enabled for making and using pools.  Even so, it too is configured into just RAID groups, since Symmetric Active/Active is limited to classic RAID group LUNs, and it is a 'happy and productive' unit.

16 Posts

February 11th, 2016 06:00

Can you compare the performance of your application with a RAID group LUN vs. a pool LUN?

I think pools and thin LUNs have poorer performance than RAID group LUNs.

195 Posts

February 11th, 2016 06:00

I don't see why snapshots are on that list, but otherwise: absolutely.

8.6K Posts

February 11th, 2016 07:00

The newer pointer-based snapshots require pools – sure, you could still use SnapView with RAID groups.

Yes, if you know what functionality you need and know how to create an optimal disk/LUN config, there is nothing wrong with using RAID groups.

If you don't, then pools are simpler to use.

4.5K Posts

February 11th, 2016 11:00

Was your question answered correctly? If so, please remember to mark your question Answered when you get the correct answer and award points to the person providing the answer. This helps others searching for a similar issue.

Also, you can use the internal EMC Forums and mailing lists if you need additional information.

glen

February 12th, 2016 00:00

In my experience the performance difference isn't much at all. There was a study not long ago that pretty much concluded the same: Virtualized Storage Performance: RAID Groups versus Storage Pools - VMware VROOM! Blog - VMware Blogs

The differences that stand out for me are that with RAID Groups you can get granular with FAST Cache down to the LUN level, while with pools it has to be on or off for the entire pool. Also, it's far easier to extend LUNs in a pool, and for that matter to increase the pool size; no need to bother with MetaLUNs.
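For example, growing a pool LUN is a single command. This is just an illustration with a made-up LUN number and size; verify the flags against the naviseccli guide for your release:

naviseccli -h <sp_ip> lun -expand -l 42 -capacity 500 -sq gb

Compare that to building out a MetaLUN component by component.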

16 Posts

February 12th, 2016 08:00

Well, how about active/active vs. ALUA LUN performance on the VNX5200?

4.5K Posts

February 12th, 2016 12:00

In answer to your first question, about the data being striped over all the disks without FAST VP enabled: initially the data will be placed based on the tiering policy; the default is Start High, then Auto-Tier. That means any new writes will attempt to land in the highest tier available, and over time the slices will be relocated based on the temperature of the slices. If you have two tiers, EFD and SAS, then all new writes will go into the EFD tier until it is full, then into the SAS tier. Over time, the cold slices will be moved down and the hot slices moved up. This is the between-tier relocation. You also have in-tier rebalancing, which tries to rebalance the slices within a tier over all the disks in that tier.

If you have a single tier and FAST VP is not enabled, new data will be located in the first private LUNs in the first private RAID group, then move to the next LUN in the first private RAID group, and so on, until you get to the next private RAID group. With small files, the slices could end up on a single private RAID group. FAST VP would then rebalance the slices based on their temperature and move hot slices to private RAID groups with a lower temperature, thereby rebalancing the data over more disks.

In MetaLUNs the data is striped over all the component LUNs in each RAID group evenly, thereby striping the data over all the disks more evenly. This depends on your following the MetaLUN Best Practices (I've attached the document in case you don't have it). MetaLUNs require more work to configure, but they probably provide the best performance if done correctly.
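A striped MetaLUN expansion looks roughly like this; the LUN numbers are placeholders and the flags are from memory, so verify them against the CLI reference and the attached best practices:

# bind one component LUN per RAID group first, then stripe them together
naviseccli -h <sp_ip> metalun -expand -base 10 -lus 11 12 13 -type S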

glen

1 Attachment

16 Posts

February 12th, 2016 19:00

I plan to buy a VNX5200 with 25 SAS drives only. I plan to use the 4 vault drives as the first RAID group, and the next 20 drives as a second classic RAID group.

What do you mean about private RAID groups without a FAST VP pool? If I have a RAID group, I don't have any private RAID groups, do I? As I understand it, private RAID groups exist only within a FAST VP pool.

My question is about a different thing. If I have only SAS drives in the VNX5200, should I skip the pool with FAST VP and use only classic RAID groups? That is, are read/write operations (only VMware vSphere hosts access the VNX) faster that way than if I make a pool, enable FAST VP, and make a LUN from the pool on the same SAS disks?

As I understand it from this thread, the first consideration is the memory used by the SPs and for I/O cache. With only SAS disks I would not enable FAST VP, making more RAM available for cache, so performance should be better than with a FAST VP pool LUN.

February 13th, 2016 17:00

Don't overcomplicate it, then!

Generally speaking, your plan may provide a very moderate performance increase, but it won't be significant, particularly for VMware workloads.

You'll also need to have more than one RAID Group as a single group can only contain a maximum of 16 drives.

Have you considered getting a few EFD drives and using a small FAST Cache? It's very effective bang for the buck, and if you're spending all that coin up front, a worthy investment. Although in your case it means you'll also have to purchase another DAE to fit them in. If there's a likelihood you'll need to expand in the future, this may also suit.

16 Posts

February 13th, 2016 21:00

Thank you for the 16-disk limitation! I didn't know about it.

EFD disks are very expensive; we can't use them.

So I must use a pool with FAST VP and make LUNs on it if I want to use all 20 disks.

What about hot spares on the VNX5200? Should I reserve one disk for that and not include it in the pool?

February 14th, 2016 15:00

You could still achieve your original plan; just create multiple RAID Groups, e.g. four (4+1) RAID 5 groups, then create MetaLUNs striped across them.

It's really up to you which way to go. If you have 25 disks, you'll lose 4 to the vault and 20 to either your pool or RAID groups, and have 1 left over for a hot spare.
