
March 5th, 2011 06:00

What is the difference between Storage Pool and RAID group?

Storage provisioning:

What is the difference between implementing storage provisioning in either of the two ways, a Storage Pool or a RAID group?

And how does LUN creation differ in the two scenarios?

2.1K Posts

March 6th, 2011 19:00

Hi Shawky,

This is one of those questions that could go into a VERY deep dive answer, but I'm going to try to keep it at a high level for now. Once you've got the concepts down I would recommend digging into some of the whitepapers on best practices for performance and availability. You can find these on Powerlink or under the CLARiiON section of the Support Forums here on ECN.

The other thing I'm going to do - based on the wording of your question - is assume that you are talking specifically about CLARiiON here (including NS and VNX as well) and specifically FLARE 30 (although most of this applies to FLARE 29 and some of it applies to FLARE 28).

So, on a CLARiiON the most basic structure (ignoring the physical drive itself) when building LUNs is your RAID Group (RG). This is true for traditional RGs AND Storage Pools. For Storage Pools you don't create the RGs yourself and you don't see them, but they are there underlying the Pool structure.

A traditional RAID Group is simply a set of physical drives that have been grouped together so that LUNs can be "bound" with a specific RAID type. For example, you could put together an RG of 9 x 300GB 15krpm FC drives. If you bind a 500GB RAID5 LUN on this RG it becomes a RAID5 (8+1) set. From that point on you can only bind RAID5 LUNs on this RG unless you clear all the LUNs off and start fresh. You can increase the capacity of the RG as a whole by adding drives into the RG (up to a limit of 16 drives - although that isn't usually recommended). You can't increase the size of the LUN, though, without doing some interesting shuffling (e.g. binding a MetaLUN). There is one exception to this: if you have a single LUN consuming an entire RG, you can increase the size of the LUN as you add drives to the RG. You are also limited to the IO performance limits of the RG unless you build MetaLUN structures across multiple RGs.
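If it helps to see it from the CLI side, here is a rough naviseccli sketch of those two steps. The SP address, RAID Group ID, disk positions and LUN number are just made-up example values, and the exact switches should be checked against the CLI reference for your FLARE release:

# Create RAID Group 10 from nine disks (Bus_Enclosure_Slot notation)
naviseccli -h 10.1.1.10 createrg 10 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9 0_0_10 0_0_11 0_0_12

# Bind a 500GB RAID5 LUN (LUN 20) on that RAID Group - it becomes an 8+1 set
naviseccli -h 10.1.1.10 bind r5 20 -rg 10 -cap 500 -sq gb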

Using Storage Pools can address many of the limitations of traditional RGs - as long as you pay attention to the details. A basic Storage Pool can be made up of many more drives than a traditional RG. Instead of being limited to 16 drives in an RG you could potentially have hundreds of drives in a single pool. These drives are divided up and configured as RGs "under the covers" by FLARE in order to provide the protection to your data, but you don't see the RG structures as you work with the pool. You are also not limited to a single drive type in a pool as you are with traditional RGs. If you are implementing FAST you would actually want a variety of drive types to share the pool. From this pool you can provision thin LUNs or thick LUNs. If you have the FAST Suite enabled on the array you could configure LUNs that automatically tier their data based on performance requirements. LUNs can be grown on the fly. They can span many hundreds of drives to spread the performance load.
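Again purely as a sketch (I haven't had a FLARE 30 array to run these on yet, so treat the pool name, disk list, capacities and switches as example values to verify against the naviseccli reference for your release):

# Create a Storage Pool - FLARE builds the underlying private RAID Groups for you
naviseccli -h 10.1.1.10 storagepool -create -disks 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4 -rtype r_5 -name Pool_0

# Provision a 500GB thin LUN from the pool
naviseccli -h 10.1.1.10 lun -create -type Thin -capacity 500 -sq gb -poolName Pool_0 -name AppData_01

# Grow the pool LUN on the fly later
naviseccli -h 10.1.1.10 lun -expand -name AppData_01 -capacity 750 -sq gb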

Now keep in mind that this is not meant to be a comprehensive view of Storage Pools vs. RAID Groups. And while it may appear that there would be no place for RGs any more, that is still far from true. They still provide the only way to absolutely guarantee specific performance characteristics for critical applications. In general, though, Storage Pools are more flexible, more efficient, and easier to manage.

I hope this gives you enough info to get going... and if I've made any incorrect statements in my descriptions, I would ask that someone feel free to correct me. While I've done a lot of research on how and where we will be implementing FAST and Storage Pools, I have yet to have my first FLARE 30 array land on the floor. All of my hands-on experience is with traditional RAID Groups.

16 Posts

March 6th, 2011 23:00

Hi Allen,

Thanks for your efforts, I really appreciate it. You made things much clearer.

I am indeed working with CLARiiON FLARE R30.

16 Posts

March 6th, 2011 23:00

To summarize the main points:

- Increasing the capacity of an RG is limited and is done by adding drives; an RG can have up to 16 drives.

- Using a Storage Pool overcomes many of those expansion limits; instead of being limited to 16 drives in an RG, you could potentially have hundreds of drives in a single pool.

- RGs still provide the only way to absolutely guarantee specific performance characteristics for critical applications.

- Storage Pools are more flexible, more efficient, and easier to manage.

2.1K Posts

March 7th, 2011 19:00

That sums it up really well (and it's a whole lot easier to read).

March 15th, 2011 08:00

Hi Shawky,

Allen gives a great answer to your question. To follow on, here is a paper I did last year based around Fully Automated Storage Tiering (FAST); it shows some of the benefits that Pools offer with the introduction of sub-LUN tiering to divide data across multiple tiers based on I/O pattern analysis.

FAST really comes into play by helping us make the best use of Flash drive technology, ensuring hot data (data with a high I/O activity pattern) is placed on our most performant tier. This allows smaller, greener array footprints and avoids the old practice of short-stroking disks to guarantee performance. It also saves on the physical acquisition of disks: we need fewer disks and less power, since Flash drives have no moving parts.

Traditional RAID Groups (RGs) still have a place, particularly in the SQL world where we may want to guarantee performance for TempDB, log files, etc. by segregating I/O activity and dedicating spindles to them.

The great thing about Pools is that we can throw data files from varied applications/workloads into a Pool and let FAST make recommendations for data relocation to the appropriate tiers. This helps us identify the hot data and allows us to kick off relocation, or schedule a window for relocation to occur, to rebalance the spread of our data across tiers and improve performance. If we identify data that may be having a negative impact on the pool, we have the option to migrate LUNs to other pools or to RGs. For example, SQL performance is highly dependent on TempDB and log file performance, so placing these on a 2+2 or 4+4 RG to segregate and service their I/O can benefit performance and free the Pool's resources to service data which, in the case of OLTP-type databases, can be critical to the profitability of a business. For more details read the paper, as there is some good info in there.

We also have the ability to use FAST Cache with Pools. Just to complicate matters, this option is enabled/disabled by a tick box in the LUN properties, just as we have the choice of whether or not to use FAST at the LUN level with our pools. As a very simple explanation, FAST is used as a long-term strategy to tier data based on I/O patterns, ensuring data is placed on the optimal tier - the right place at the right time. FAST Cache comes into play by allowing a reaction to changes in I/O patterns between relocation windows. FAST and FAST Cache can be used together or separately based on your needs.
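For anyone scripting this rather than ticking boxes in Unisphere, the per-LUN tiering policy can also be driven from the CLI. This is only a hedged example - the LUN name is a placeholder and the policy keywords should be confirmed against the naviseccli reference for your release:

# Let FAST relocate this pool LUN's slices to whichever tier suits its I/O pattern
naviseccli -h 10.1.1.10 lun -modify -name AppData_01 -tieringPolicy autoTier -o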

Anyway, hope this helps a little more,

Michael

EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC Unified Storage and EMC Fully Automated Storage Tiering (FAST) - An Architectural Overview

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h7208-tiered-sql-unified-fast-wp.pdf

http://www.emc.com/collateral/software/white-papers/h7208-tiered-sql-unified-fast-wp.pdf

another paper we did last year...

EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON CX4 and Enterprise Flash Drives—A Detailed Review has been published to the following locations.

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h7071-tiered-storage-sql-cx4-efd-wp.pdf

http://www.emc.com/solutions/samples/microsoft/performance-optimization-sql-server.htm

16 Posts

March 21st, 2011 02:00

Thanks Michael,

The documents are very helpful.

60 Posts

January 23rd, 2014 14:00

Hi,

This is rightly one of the most popular posts on the forum!

A lot of designs still have to cater for corner cases with classic RAID groups (even after realising tiering's advantages for skewed workloads). A point worth noting is that FAST Cache does not work well for sequential workloads (database logs) or layered application LUNs such as those for RecoverPoint, Write Intent Logs, clone private LUNs, etc. And for pools you cannot turn off FAST Cache at the LUN level, only the pool level, so a separate pool or traditional RAID group is needed.


On the upside for traditional RAID groups, the latest VNX MCx range supports Symmetric Active/Active, which enables hosts to access LUNs using both SPs simultaneously. This potentially enables double the performance, with SPA and SPB writing to the same LUN at the same time (though not to the same LBA, Logical Block Address). This is not present for Pool LUNs currently. Just something I wanted to add to the conversation that may be food for thought.


Victor 

2.1K Posts

January 23rd, 2014 21:00

And the award for resurrecting the oldest thread with current relevant information goes to ...    

It's quite interesting that no matter how much has changed, this is still very valid info. The good stuff doesn't change so much if it's done right the first time! I'm still looking forward to the day when I can get Active/Active on my pool LUNs in MCx though.

60 Posts

January 24th, 2014 02:00

Haha, I responded to an @EMCproven tweet with an abbreviated version of what I posted here and was rightly prompted to bring my thoughts here. Yip, and Active/Active only on classic LUNs will lead to interesting conversations.

11 Posts

December 10th, 2015 03:00

Thanks Michael,

It is a good discussion for understanding RAID & LUN scenarios.

Thanks all,

Manzoor

34 Posts

February 1st, 2017 04:00

As per my knowledge, a RAID group contains a limited number of disks and only one type of disk. For example, you can make a RAID group with up to 16 drives; if you select 600GB 10k drives, you can't use other drives like 900GB 10k in that RAID group.

But in a Pool you can add hundreds of drives of different types, like a mixture of 10K, 15K, SSD, etc.
