Unsolved

16 Posts

August 3rd, 2011 20:00

Storage Pool deep dive

Hi there,

We have just purchased two VNX5300 systems, and I would like to ask a question regarding storage pools and space allocation in 1GB slices.

Let's say I have 15 disks in a pool with 3x 4+1 RGs and 30x Private LUNs (10 per RG) underneath. Now, let's say I create a 10GB LUN on top of that pool. The question is how the system will stripe the data: across all 15 drives, or only across the first RG? Based on the information I have, the latter is true, since the 10GB will be fulfilled by 10x 1GB slices, each living in a separate Private LUN within the first 4+1 RG.

Regards,

Robert

2 Intern • 392 Posts

August 4th, 2011 04:00

> how the system will stripe the data: across all 15 drives, or only across the first RG?

Across all 15 drives.

However, it may be more helpful to think of it as 'across all three (3) private RAID groups'.  You would hit all three RAID groups (15 drives) if you migrated that multi-GB LUN over to the pool.  If you just bound it within the pool as a thick LUN and executed, say, a single 8 KB single-block write, you'd only have one slice on one private RAID group.
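If it helps to picture it, here is a minimal sketch in plain Python of the 'spread across all private RAID groups' behaviour for the 15-drive example. It is purely illustrative; the simple rotation used here is an assumption, not the actual FLARE allocation algorithm, which is proprietary.

    # Illustrative model only: how the 1 GB slices of a 10 GB thick LUN
    # could land across three private 4+1 RAID groups.  The rotation is
    # an assumption; the point is simply that the slices are not all
    # piled onto the first private RAID group.
    LUN_SIZE_GB = 10          # user LUN size; pool slices are 1 GB each
    PRIVATE_RGS = 3           # 15 drives -> 3 x (4+1) private RAID groups

    slices_per_rg = {rg: 0 for rg in range(1, PRIVATE_RGS + 1)}
    for slice_no in range(LUN_SIZE_GB):
        rg = (slice_no % PRIVATE_RGS) + 1   # rotate through the RGs
        slices_per_rg[rg] += 1

    print(slices_per_rg)      # {1: 4, 2: 3, 3: 3} -- roughly even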

Note that there are 'economies of scale' with pools.  A 15-drive pool is very small. It does not have the capacity efficiency and performance of a larger pool.  A small pool is useful in examples, but may not be practical in a production environment.

I recommend you review the 'LUNs' section (which includes Virtual Pools) of EMC Unified Best Practices for Performance and Availability: Common Platform and Block O.E. 31.0-- Applied Technology, a.k.a. VNX Best Practices.  VNX Best Practices is available on Powerlink.

16 Posts

August 4th, 2011 14:00

>> how the system will stripe the data: across all 15 drives, or only across the first RG?

>Across all 15 drives.

The information I have is based on the following article: http://virtualeverything.wordpress.com/2011/03/05/emc-storage-pool-deep-dive-design-considerations-caveats/

From the comments section:

"Hi D, if you were to have a 15 disk pool, you would have 3x 4+1 RGs underneath, but each RG would have 10x Private LUNs for a total of 30x Private LUNs. From my testing, the 1G chunks are distributed across the Private LUNs not across the RGs. So you would have a 1G chunk on Private LUNs 0-5, which means 6x 1G chunks on RG1, and no 1G chunks on RG2 and RG3. This is ignoring the single 1GB slice that is allocated upfront for metadata purposes; there is actually an extra because of this. I was able to test this in my lab to verify. I created an 8GB LUN, and placed a ~6GB VM on it. It allocated everything from the 1st Private RGs Private LUNs"

If that is incorrect, I think it would be worthwhile for EMC to add a comment to the article and clarify it.

Thank you for your reply.

Regards,

Robert

2 Intern • 392 Posts

August 8th, 2011 05:00

I'm aware of that blog.

> It allocated everything from the 1st Private RGs Private LUNs

The allocations of capacity within 'private LUNs' and 'private RAID groups' are 'private'.  The allocations are not visible through Unisphere or Analyzer.  Proprietary tools are needed to perform the analysis of slice allocation within a pool.  I do not believe the poster would have access to those tools.

16 Posts

August 8th, 2011 17:00

>I'm aware of that blog.

Still no clarification from EMC on the blog....

>> It allocated everything from the 1st Private RGs Private LUNs

>The allocations of capacity within 'private LUNs' and 'private RAID groups' are 'private'. The allocations are not visible through Unisphere or Analyzer. Proprietary tools are needed to perform the analysis of slice allocation within a pool. I do not believe the poster would have access to those tools.

As per the blog, it seems it is possible to figure this out from the SPCollect text files.

2 Intern • 392 Posts

August 9th, 2011 05:00

I spoke with the release engineers.  The multiple-stripes-to-a-single-private-RAID-group behavior was found in a CLARiiON Release 29 version of Virtual Pools.  This was corrected in a later patch.  The current version of Virtual Pools for CLARiiON is Release 30, and Release 30 does not have this behavior.

The current VNX release is 31.  The VNX releases of Virtual Provisioning have never had this behavior.

16 Posts

August 9th, 2011 05:00

>There are flaws in that blog; several flaws as far as I can tell. The biggest flaw is the following statement:

>>Depicted in the above figure is what a storage pool looks like under the covers. In this example, it is a RAID5 protected storage pool created with 5 disks. What FLARE does under the covers when you create this 5 disk storage pool is to create a Private RAID5 4+1 raid group. From there it will create 10 Private LUNs of equal size. In my test case, I was using 143GB (133GB usable) disks, and the array created 10 Private LUNs of size 53.5GB giving me a pool size of ~530GB

>Flare creates 1GB private LUNs, two of them per drive. There is no way that you will get 10 Private LUNs of size 53.5GB, that's just not the case.

Can you explain this further? Two 1GB private LUNs per drive in a pool as above would only mean 10GB. How is the total usable capacity achieved?

>I agree with jps00. The LUN will be created on all 15 private LUNs.

At what granularity, 1GB? How is the data then written: does the system fill the first 1GB slice before writing to the second?

1K Posts

August 9th, 2011 07:00

Sorry, that's what I meant. 2 x drive count = number of private LUNs. Thanks again!

2 Intern • 392 Posts

August 9th, 2011 07:00

> A 5 disk group (4+1 r5) will create 10 1GB private LUNs (2 per drive) from the beginning.

It's actually per RAID group, not per drive.  The 10 private LUNs in your example would be spread across all of the RAID group's drives.

[Attachment: 10 per RAID group.png]

However, I think all you need to know is that both Storage Processors share the capacity of the pool's private RAID groups.

And, before anyone asks, there is provision within the algorithm for the case where the User LUNs of one SP require more capacity than is available on 'their' initially allocated private LUNs.
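As a rough illustration of the layout being described (the helper name and the even SP A / SP B split are assumptions for the sketch; as noted above, the algorithm can give one SP more capacity if its User LUNs need it):

    # Sketch of the private-object layout: N drives -> N/5 private 4+1
    # RAID groups -> 10 private LUNs per RAID group.  The even split
    # between the two storage processors is an assumed starting point.
    def pool_layout(drive_count, drives_per_rg=5, private_luns_per_rg=10):
        raid_groups = drive_count // drives_per_rg
        private_luns = raid_groups * private_luns_per_rg
        return {"private RGs": raid_groups,
                "private LUNs": private_luns,
                "private LUNs per SP (assumed)": private_luns // 2}

    print(pool_layout(15))   # 3 RGs, 30 private LUNs, 15 per SP
    print(pool_layout(40))   # 8 RGs, 80 private LUNs, 40 per SP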

I recommend that users interested in the details of Virtual pools read the EMC VNX Virtual Provisioning: Applied Technology white paper.  This paper contains detailed information on the current release's implementation of Virtual Pools.  The VNX Virtual Provisioning paper is available on Powerlink.

16 Posts

August 10th, 2011 05:00

>> A 5 disk group (4+1 r5) will create 10 1GB private LUNs (2 per drive) from the beginning.

>It's actually per RAID group, not per drive.  The 10 private LUNs in your example would be spread across all of the RAID group's drives.

So in the case of 15 drives (3x 4+1 RGs) we have 30 private LUNs, where the first 10 are across the first RG, the second 10 are across the second RG, and so on... Now, if I create a 10GB LUN and start writing data to it, will the system first fill the first 1GB slice in the first private LUN of the first 4+1 RG, and once that slice is full, start writing to the second 1GB slice in a private LUN from the SECOND 4+1 RG? Is this correct?

2 Intern • 392 Posts

August 10th, 2011 06:00

Robert, I'm sorry, I don't understand your question.

Don't be concerned about the private LUNs.  They are just a mechanism for balancing the I/O to the pool between the two storage processors.  Your attention should be on the capacity utilization of the private RAID groups.

The most important idea to understand is:

Slice allocations are automatically balanced across back-end SAS ports and the private RAID groups.

Below is a figure that may help. The 40-drive pool shown contains eight 4+1 private RAID groups.  There are five User LUNs created in the pool (Blue, Orange, Green, etc.).  Assume each User LUN has exactly three slices fully populated.  Further assume that Blue and Orange are owned by SP A and the remaining LUNs by SP B.  One possible allocation of capacity, based on the allocation algorithm, is shown in the figure. (The blue stripes spanning the pool's RAID groups are the slices allocated to the Blue User LUN.)

[Attachment: Pool User LUNs.jpg]

Although this figure is not shown in the Virtual Provisioning whitepaper, the mechanism is explained there.  Reading the whitepaper will answer many of your questions about virtual pools.
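For what it's worth, here is a small model of that 40-drive example. The starting-offset rule, and the Purple/Yellow names for the last two LUNs, are assumptions made for the sketch; Blue/Orange on SP A and the remaining LUNs on SP B follow the figure's description. The real allocator also weighs free capacity per private RAID group.

    # Eight private 4+1 RAID groups, five User LUNs with three 1 GB
    # slices each.  Each LUN simply continues where the previous one
    # left off -- an assumed stand-in for the real balancing algorithm.
    RAID_GROUPS = 8
    SLICES_PER_LUN = 3
    user_luns = [("Blue", "SP A"), ("Orange", "SP A"),
                 ("Green", "SP B"), ("Purple", "SP B"), ("Yellow", "SP B")]

    slices_per_rg = [0] * RAID_GROUPS
    for i, (lun, owner) in enumerate(user_luns):
        for s in range(SLICES_PER_LUN):
            rg = (i * SLICES_PER_LUN + s) % RAID_GROUPS
            slices_per_rg[rg] += 1
            print(f"{lun} ({owner}) slice {s} -> private RG{rg + 1}")

    print("slices per private RG:", slices_per_rg)
    # -> [2, 2, 2, 2, 2, 2, 2, 1]: the 15 slices are spread across the
    #    pool rather than concentrated on one private RAID group.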

2 Intern • 392 Posts

August 10th, 2011 06:00

> is there any limit on the total space allocated to your pool LUNs?

The maximum capacity of a pool-based LUN is 16 TB.

The EMC Unified Best Practices for Performance and Availability: Common Platform and Block O.E. 31.0, in its 'LUNs' and 'Virtual Provisioning' sections, contains a description of the capacity minimums and maximums of Virtual Pools.  VNX Best Practices is available on Powerlink.

21 Posts

August 10th, 2011 06:00

I probably should have been more clear.  What about the sum of all pool LUNs?  Can you have 120x 2TB NL-SAS disks in a pool (VNX5300 or above) and create about ten 16TB LUNs?

Math behind the madness: the pool uses RAID6 and is made up of fifteen 6+2 RGs.  Each RG has an approximate capacity of 10,980GB, giving a pool capacity of ~164,700GB.  Assuming no overhead, and that the numbers are correct (to keep the example somewhat simple), you get ten 16TB LUNs, which would require 160,000 x 1GB slices behind the pool.  Correct?
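Checking that arithmetic with a quick sketch. The ~1,830 GB usable per 2 TB NL-SAS drive is an assumption backed out of the ~10,980 GB-per-RG figure above, not an official number, and 16 TB is treated as a round 16,000 GB to match the post's totals.

    # Capacity arithmetic for the 120 x 2 TB NL-SAS, RAID 6 (6+2) example.
    USABLE_GB_PER_DRIVE = 1830       # assumed, consistent with ~10,980 GB/RG
    DATA_DRIVES_PER_RG = 6           # 6+2 RAID 6
    RAID_GROUPS = 120 // 8           # 15 private RAID groups

    rg_gb = DATA_DRIVES_PER_RG * USABLE_GB_PER_DRIVE   # 10,980 GB per RG
    pool_gb = RAID_GROUPS * rg_gb                      # 164,700 GB pool
    slices_needed = 10 * 16_000                        # ten 16 TB LUNs -> 160,000 slices

    print(rg_gb, pool_gb, slices_needed, slices_needed <= pool_gb)
    # -> 10980 164700 160000 True  (fits, before any metadata overhead)

Note, in line with the earlier posts in this thread, that those 160,000 x 1GB slices are carved out of the pool's private LUNs (roughly ten per private RAID group), rather than each slice being bound as its own private LUN.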

21 Posts

August 10th, 2011 06:00

The whitepapers show the maximum number of pools, disks per pool, user-visible LUNs, etc.  Is there any limit on the total space allocated to your pool LUNs?  This would be due to some limit to the number of private LUNs that back the user-visible pool LUNs.  Two 10TB pool LUNs would end up creating 20,000 private 1GB LUNs behind the scenes, correct?

2 Intern • 392 Posts

August 10th, 2011 07:00

                                          VNX5100   VNX5300   VNX5500   VNX5700   VNX7500

Storage System
  Maximum LUNs                                512      2048      4096      4096      8192

Virtual Provisioning Pools
  Maximum LUNs per Pool                       512       512      1024      2048      2048
  Maximum LUNs, all Pools

Traditional LUNs
  Maximum LUNs per RAID Group                 256 (all models)

MetaLUNs
  Maximum MetaLUNs per Storage System         256       512       512      1024      2048
  Maximum LUNs per MetaLUN                      1

                        Table 20: Maximum Host LUNs per LUN Type, VNX O/S Block 31.0

VNX Best Practices for Performance and Availability: Common Platform and Block O.E. 31.0.  In a VNX7500 you 'could' have 2048 16 TB User LUNs.
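To connect that back to the question about a limit on total pool-LUN capacity, a quick sketch: the per-pool LUN limits come from the table above and the 16 TB per-LUN cap from earlier in the thread. This is just an upper-bound calculation; the installed drive capacity is usually the real limit.

    # Ceiling on user-LUN capacity in a single pool per model: the
    # per-pool maximum LUN count times the 16 TB per-LUN maximum.
    MAX_LUNS_PER_POOL = {"VNX5100": 512, "VNX5300": 512, "VNX5500": 1024,
                         "VNX5700": 2048, "VNX7500": 2048}
    TB_PER_LUN = 16
    for model, luns in MAX_LUNS_PER_POOL.items():
        print(f"{model}: up to {luns * TB_PER_LUN:,} TB of user LUNs in one pool")
    # e.g. VNX7500: 2048 x 16 TB = 32,768 TB, matching the
    # '2048 16 TB User LUNs' remark above.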

16 Posts

August 10th, 2011 13:00

>Robert, I'm sorry, I don't understand your question.

Let me try to rephrase it; this is really my original question. Let's take a 15-drive pool with three 4+1 RAID groups and a single 10GB user LUN.

It has been said here that the user data will be striped across all 15 drives. What I am trying to understand is how the data will actually be distributed across the physical drives. How does the system stripe the data across all 15 drives, and at what granularity? Can you break down the 10GB of user data and explain which chunks of it will live on which physical drives, using the underlying pool structure?

Sorry, I haven't had a chance to read the white paper yet. I am currently traveling.

Thank you for all your answers so far, I think we are getting there.

Robert
