Solved!

Go to Solution

930

April 20th, 2010 10:00

Expand LUN, use unused space in same RAID group?

CX3-20

So I have a high-performance RAID group: 6 disks in RAID 10. When it was originally laid out, just one 146 GB LUN (let's call it LUN 50) was created, leaving the other 254 GB unallocated.

Now of course, after 2 years, we need to expand LUN 50 and would like to utilize the empty space in the RAID group... only now we see the problem with expanding.

It seems like in order to expand, we would need to create new LUNs in that RAID group and join them together as a metaLUN.

So here's the problem.

          It seems like either way, concat or stripe, we would take a performance hit, because any LUN we used to expand would be in the same RAID group (thus the same disks, only stacked on top of one another). Does this assumption sound correct?

          If we created a new 146 GB LUN in the RAID group and then striped the two together, would there be a performance loss when data was accessed, due to the stacking of LUNs on the same disks, or does the striping take care of this? I've included some images to help illustrate: the first is the current config, the second is the proposed one. Expanding into LUNs in the same RAID group just seems like it would cause issues. Or does the array take care of this through magic, and the performance hit is just in our imagination?

lun1.JPG

lun2.jpg

1 Rookie

20.4K Posts

April 20th, 2010 11:00

If I understand that paper correctly, using a stacked striped metaLUN may not be an issue for a heavy random I/O profile. Maybe John can comment on that, but I guess the safest bet is not to do it.

1 Rookie

20.4K Posts

April 20th, 2010 10:00

take a look at this document:

EMC. "EMC CLARiiON MetaLUNs - A Detailed Review." EMC Powerlink white paper,
http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1024.1_clariion_metaluns_cncpt_wp_ldv.pdf

Specifically, look for the section "Stacked MetaLUN".

9 Posts

April 20th, 2010 11:00

Thanks a ton!!!

Great info. If I read that section correctly, it seems as though it's a big no for striped and a tentative yes for concat?

"Note a stacked MetaLUN only has poor performance for striped component LUNs. For stacked concatenated component LUNs, performance is not greatly affected."

Anyway, thanks for the quick response

1 Rookie

20.4K Posts

April 20th, 2010 14:00

Thanks Glen. What about a striped metaLUN with components from the same RAID group used for heavy random I/O?

4.5K Posts

April 20th, 2010 14:00

I'm not really sure about this. If you think about the stacked concept and then striped, you would expect performance to suffer under most conditions, but more so with large, sequential I/O. Small, random I/O might be OK, but I would still recommend not vertically striping metaLUN components in the same RAID group.

glen

4.5K Posts

April 20th, 2010 14:00

If you concatenate the two LUNs in the same RAID group, performance will be about the same as with the original LUN - no gain in performance, but no real loss either.

Best would be to create a new LUN in a different RAID group with the same number of disks, the same RAID type, and exactly the same size, and stripe the two together - you gain the extra space and roughly double the performance. This of course depends on what's going on in the second RAID group; if it's heavily utilized, you could actually lower performance due to interference from LUNs in that other RAID group.

glen
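Glen's concat-versus-stripe distinction boils down to how a metaLUN maps logical addresses onto its component LUNs. A toy sketch in Python (the element size and capacities here are made-up illustration values, not FLARE's actual layout):

```python
def stripe_map(lba, element_size, num_components):
    """Striped metaLUN: logical blocks rotate across the components
    one stripe element at a time."""
    element = lba // element_size
    component = element % num_components
    offset = (element // num_components) * element_size + lba % element_size
    return component, offset

def concat_map(lba, component_sizes):
    """Concatenated metaLUN: components fill up one after another."""
    for i, size in enumerate(component_sizes):
        if lba < size:
            return i, lba
        lba -= size
    raise ValueError("LBA beyond metaLUN capacity")

# Striped: consecutive elements alternate between the two component LUNs,
# so every large request hits both (here, the same physical disks).
print(stripe_map(0, 128, 2))    # -> (0, 0)
print(stripe_map(128, 128, 2))  # -> (1, 0)

# Concatenated: the second LUN is only touched once the first is full.
print(concat_map(100, [1000, 1000]))   # -> (0, 100)
print(concat_map(1100, [1000, 1000]))  # -> (1, 100)
```

With both components carved from the same six disks, striping splits every sizable I/O across two vertically stacked regions, while concatenation behaves like the original single LUN until the first component fills - which matches the "no gain, but no real loss" description above.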

9 Posts

April 20th, 2010 14:00

Thanks, I appreciate all the help

392 Posts

April 21st, 2010 04:00

In a small-block random workload with low or very low locality, a stacked MetaLUN would not have an adverse effect on performance, because the drive heads are skewing all over the RAID group's drives anyway. However, this type of I/O is not common. Most random I/O has a certain degree of locality, which restricts head movement to a narrower region of the drives. A stacked MetaLUN would have more than one such region, with the heads 'popping' back and forth between them.

Again, it comes back to how well you understand your I/O: sequential, mixed, random. If it's random, how much locality does it have? While stacked MetaLUNs are not recommended, if the system administrator understands his or her choices and is willing to accept the consequences, there is no prohibition in Navisphere or FLARE against creating them.
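The 'popping between regions' effect described above can be illustrated with a small simulation: random I/O confined to one contiguous slice of a drive versus the same capacity split into two stacked slices. All of the numbers are made up for illustration:

```python
import random

def avg_seek_distance(regions, n_ios=100_000, seed=1):
    """Average head movement (in GB of platter span) when random I/Os
    land uniformly in the given regions, each a (start_gb, end_gb) span."""
    rng = random.Random(seed)
    pos = regions[0][0]
    total = 0.0
    for _ in range(n_ios):
        start, end = rng.choice(regions)
        target = rng.uniform(start, end)
        total += abs(target - pos)
        pos = target
    return total / n_ios

# One 50 GB LUN at the front of a 100 GB drive, versus the same 50 GB
# split into two stacked slices separated by the middle of the platter.
single = avg_seek_distance([(0, 50)])
stacked = avg_seek_distance([(0, 25), (75, 100)])
print(f"single region: {single:.1f} GB, stacked: {stacked:.1f} GB")
```

The stacked layout roughly doubles the average seek in this toy model, because about half the I/Os force the heads to jump over the gap between the two component regions.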

1 Rookie

20.4K Posts

April 21st, 2010 06:00

Thanks John. What metrics in Navi Analyzer can help determine low/high locality?

190 Posts

April 21st, 2010 11:00

Could you not LUN-migrate (assuming you have some free space) into a larger LUN? That would take the LUN "stacking" out of the equation. Perhaps a two-step process (migrate elsewhere, migrate back), but based on the original post it was only 146 GB.

Dan
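For reference, the two-step migration described above would look roughly like the following Navisphere CLI sequence. This is a sketch from memory: the SP address placeholder, LUN numbers, RAID-group IDs, and sizes are all hypothetical, and the exact flag spellings should be verified against the naviseccli reference for your FLARE release.

```shell
# Hypothetical setup: LUN 50 (146 GB) lives in RAID group 0; RAID group 1 has room.
# 1. Bind a temporary LUN (here LUN 60) at least as large as the source.
naviseccli -h <sp_address> bind r1_0 60 -rg 1 -cap 146 -sq gb

# 2. Migrate LUN 50 onto it; the destination assumes LUN 50's identity
#    and the old source space is freed when the migration finishes.
naviseccli -h <sp_address> migrate -start -source 50 -dest 60 -rate high

# 3. Check progress; once complete, bind a full-size LUN (e.g. 400 GB) back in
#    RAID group 0 and run a second migration in the other direction.
naviseccli -h <sp_address> migrate -list
```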

9 Posts

April 21st, 2010 12:00

That probably would have worked as well: migrate it to some temp space, then destroy the LUN, then rebind it with the correct amount of space. I don't think I've ever had to migrate a LUN, so I'll have to test that.

In the end, after evaluating I/O type, performance, RAID group, and time constraints, we decided to concat into the same RAID group. Great information on provisioning in this thread, though, so thanks to everyone.

It would be interesting to know whether Navi Analyzer does have metrics for locality.

4.5K Posts

April 21st, 2010 18:00

Look at the Disk value Average Seek Distance (GB) - this gives a rough estimate of how far the heads are seeking, in GB. If you have a 100 GB disk and the ASD is 50 GB, you can assume the heads are seeking across half the capacity of the disk - probably very random. The closer you get to zero, the more sequential the seeks.

You can also look at Full Stripe Writes - if the writes are very sequential this number should approach Total Write IOPS.

Look at Read Cache Ratio - the closer the ratio is to 1, the more sequential the Reads and the more Read Cache Hits/s you will get.

The data is there, just in different places - you need to put the different values together to make an approximation.

glen
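If you export those counters from Analyzer, the heuristics above could be combined into a rough classifier like the sketch below. The function name and every threshold are my own illustrative guesses, not EMC-documented cutoffs:

```python
def locality_hints(disk_capacity_gb, avg_seek_gb, total_write_iops,
                   full_stripe_writes, read_cache_ratio):
    """Turn Navi Analyzer-style counters into rough locality hints.
    All thresholds are illustrative, not vendor-documented values."""
    hints = []
    seek_fraction = avg_seek_gb / disk_capacity_gb
    if seek_fraction > 0.4:
        hints.append("seeks span ~half the platter: likely random, low locality")
    elif seek_fraction < 0.05:
        hints.append("very short seeks: likely sequential or highly localized")
    if total_write_iops and full_stripe_writes / total_write_iops > 0.8:
        hints.append("mostly full-stripe writes: sequential/coalesced write stream")
    if read_cache_ratio > 0.8:
        hints.append("high read-cache ratio: sequential or localized reads")
    return hints

# Example: a 100 GB disk seeking 50 GB on average, few full-stripe writes,
# and a poor read-cache ratio - consistent with random, low-locality I/O.
for h in locality_hints(100, 50, 2000, 200, 0.15):
    print("-", h)
```

As noted, the data is there in different places; a script like this just puts the values together into one approximation per LUN or disk.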

392 Posts

April 22nd, 2010 04:00

You can use "Average Seek Distance" for the disk object. Of course, if you have multiple partitions on the RAID group or are using the disks in a pool, I wouldn't expect to see any locality at the disk level.

Also, comparing "Read Bandwidth" with "Prefetch Bandwidth" for a given LUN (not a pool-based LUN) will indicate whether you are primarily sequential. At the same time, "Used Prefetches" should be very high (for purely sequential, ~100%).
Full-stripe writes indicate localized writes, especially when you are doing primarily small I/Os, since they would have to get coalesced first.
"Read/Write Cache Hits" are always indications of localized data.
Is that helpful?

1 Rookie

20.4K Posts

April 22nd, 2010 09:00

Thank you Glen and John ..very helpful.

jps00 wrote:

You can use "Average Seek Distance" for the disk object. Of course, if you have multiple partitions on the RAID group or are using the disks in a pool, I wouldn't expect to see any locality at the disk level.


What did you mean by multiple partitions? Multiple LUNs? So if I look at a disk object and I know that disk belongs to a RAID group where metaLUNs are being utilized, what would be the most effective process to find out whether a particular LUN/metaLUN has high or low locality? At that point I can't rely on disk-object metrics and have to look at LUN metrics only (Read Bandwidth with Prefetch Bandwidth, Read/Write Cache Hits, etc.)?

Thanks a lot
