
December 12th, 2008 13:00

LUNs per RAID Group: NS40G and NS80G w/ CX700

NS40G and NS80G with CX700 backends.
We had 15x 300 GB FC disks to carve up for the NS40, so we created 3x 4+1 R5 RAID groups.
I read that the AVM algorithm likes sets of 4 LUNs, so we created 12 LUNs, 4 per RG, balanced across SPs, so that the number of LUNs would be divisible by 4.

20x 450 GB FC disks: 4x 4+1 R5 = 16 LUNs.

We're following the same rationale for the ATAs as well, keeping all of a RAID group's LUNs on the same SP. We have 320 GB and 500 GB ATA disks.

Does it make sense to keep giving AVM sets of LUNs divisible by 4, or should we stick to just creating 2 per RG?

I don't want to waste resources with a large number of LUNs...
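
For reference, a purely illustrative Python sketch of the NS40 carve-up described above (the field names are made up, just to show the layout):

# 15x 300 GB FC disks -> three 4+1 R5 RAID groups -> 4 LUNs per RG,
# with SP ownership alternated so each RG is split across SP A and SP B.
layout = []
for rg in range(3):                 # 3 RAID groups
    for n in range(4):              # 4 LUNs per RG
        layout.append({
            "rg": rg,
            "lun_id": rg * 4 + n,
            "sp": "A" if n % 2 == 0 else "B",
        })

print("total LUNs:", len(layout))   # 12, i.e. divisible by 4
for entry in layout:
    print(entry)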

8.6K Posts

December 15th, 2008 01:00

Two per RAID group is enough - AVM won't stripe across LUNs from the same RAID group anyway. When deciding to stripe, it looks for 4, 3, or 2 LUNs from different RAID groups.

That's because tests have shown that, due to disk head movement, overall performance is better when concatenating two LUNs from the same disks than when striping across them.

In your case creating four LUNs per RG won't hurt, but AVM will only stripe across the three RAID groups.

At least for the "old" ATA drives in the CX700 there is the rule to assign all LUNs in one RAID group to the same SP (not alternating like with FC), since these are internally single-ported disks.
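
A rough Python sketch of that selection preference (my own paraphrase of the behaviour described above, not EMC's actual code; the LUN dictionaries and field names are assumptions for illustration):

def choose_stripe_members(luns, targets=(4, 3, 2)):
    """Prefer 4, then 3, then 2 same-size LUNs, each from a *different*
    RAID group. Returns [] if nothing can be striped, in which case
    same-RG LUNs would just be concatenated instead."""
    by_size = {}
    for lun in luns:
        by_size.setdefault(lun["size_mb"], []).append(lun)

    for want in targets:
        for members in by_size.values():
            picked, seen_rgs = [], set()
            for lun in members:
                if lun["rg"] not in seen_rgs:      # never two LUNs from one RG
                    picked.append(lun)
                    seen_rgs.add(lun["rg"])
                if len(picked) == want:
                    return picked
    return []

# The NS40 example above: 4 LUNs per RG but only 3 RAID groups,
# so at most 3 LUNs end up in any one stripe.
luns = [{"rg": rg, "size_mb": 824469} for rg in (0, 1, 2) for _ in range(4)]
print(len(choose_stripe_members(luns)))            # -> 3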

674 Posts

December 15th, 2008 00:00

Two LUNs per RG is enough, as long as a single LUN is less than 2 TB.
If a single LUN would be bigger than 2 TB, then use 4 or 8 LUNs per RG.
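
A quick worked example of that rule as a Python sketch (the 2 TB figure is the limit discussed in this thread; the disk sizes are only illustrative):

TWO_TB_MB = 2 * 1024 * 1024          # 2 TB expressed in MB

def luns_per_rg(rg_usable_mb, choices=(2, 4, 8)):
    """Smallest LUN count from the allowed choices that keeps each LUN under 2 TB."""
    for n in choices:
        if rg_usable_mb / n < TWO_TB_MB:
            return n
    raise ValueError("even 8 LUNs per RG would each exceed 2 TB")

# 4+1 R5 of 450 GB disks: ~1.8 TB usable, so 2 LUNs of ~900 GB each are fine
print(luns_per_rg(4 * 450 * 1024))   # -> 2

# hypothetical RG where half of the usable space would exceed 2 TB
print(luns_per_rg(4 * 1200 * 1024))  # -> 4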

1 Rookie • 92 Posts

December 15th, 2008 06:00

Gotcha. Thanks. I'll try to pull together another RG and feed it to the Celerra. It's a challenge to pull together optimized storage on our fully populated CXs. LUN migration has been a good friend. I know not to use it with the assigned LUNs ;>.

So the general rule for optimally adding storage to the CLARiiON for Celerra purposes should be something like this:

Add in blocks of 4 RG, SP and bus balanced, 2 LUNs per RG, constrained by <2TB LUN limit.
This will be easier after we get the CX4-960s
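
A hedged sketch of what one such "add block" could look like (the bus and SP assignments here are hypothetical, just to show the balancing idea):

def make_add_block(start_rg, lun_size_mb):
    """One add block per the rule above: 4 RAID groups, alternated across
    SPs and back-end buses, 2 LUNs per RG, every LUN under 2 TB."""
    assert lun_size_mb < 2 * 1024 * 1024, "each LUN must stay under 2 TB"
    block = []
    for i in range(4):                          # 4 RAID groups
        sp = "A" if i % 2 == 0 else "B"         # SP-balanced
        bus = i // 2                            # bus-balanced (2 buses assumed)
        for lun in range(2):                    # 2 LUNs per RG
            block.append({"rg": start_rg + i, "sp": sp, "bus": bus,
                          "lun": lun, "size_mb": lun_size_mb})
    return block

for entry in make_add_block(start_rg=10, lun_size_mb=900 * 1024):
    print(entry)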

8.6K Posts

December 15th, 2008 08:00

LUN migration has been a good friend. I know not to use it with the assigned LUNs


Just PLEASE don't use it to migrate LUNs between different RAID types, sizes, or disk types.
Or in other words, don't do it if "moving" the LUN would result in it "matching" a different storage pool, i.e. don't move a LUN from a 4+1 R5 to an 8+1 R5.
That would confuse the AVM logic.

Add in blocks of 4 RG, SP and bus balanced, 2 LUNs per RG, constrained by <2TB LUN limit.


Yes, that would be optimal.
Whether or not it makes a difference for single or aggregate performance really depends on your workload.
You might very well be fine with just one RAID group, but more gets you more "potential" performance.

The disk selection is actually described in the Implementing Automatic Volume Management with Celerra manual that you can get from Powerlink, which also contains flowcharts.

Quoting page 15:

Most of the system-defined storage pools for CLARiiON storage systems first
search for four same-size disk volumes, from different buses, different SPs, and
different RAID groups.

The volumes must meet the following absolute criteria:
◆ A stripe volume cannot exceed 2 TB.
◆ Disk volumes must match the type specified in the storage pool storage profile.
◆ Disk volumes must be the same size.
◆ No two disk volumes can come from the same RAID group.
◆ Disk volumes must be on a single storage system.

If found, AVM stripes the LUNs together and inserts the stripe into the storage pool.

If AVM can't find the four disk volumes that are bus-balanced, it looks for four same-size disk volumes that are SP-balanced from different RAID groups, and if not found, AVM then searches for four same-size disk volumes from different RAID groups.

Next, if AVM has been unable to satisfy these requirements, it looks for three same-size disk volumes that are SP- and bus-balanced from different RAID groups, and so on, until the only option left is for AVM to use one disk volume.

The one disk volume must meet the following criteria:
◆ A disk volume cannot exceed 2 TB.
◆ A disk volume must match type in the storage pool.
◆ If multiple volumes match the first two criteria, then the disk volume must be
from the least-used RAID group.
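
A Python sketch of the search order quoted above, as I read it (my own simplified paraphrase, not the actual AVM implementation; the volume dictionaries, the rg_used_mb field, and the loosened balance checks are all assumptions):

from itertools import combinations

TWO_TB_MB = 2 * 1024 * 1024

def pick_disk_volumes(candidates):
    """Walk the quoted preference order: four same-size volumes that are bus-
    and SP-balanced, then SP-balanced only, then merely from different RAID
    groups; repeat for three and two; last resort is one volume from the
    least-used RAID group."""

    def meets_absolute_criteria(vols):
        sizes = {v["size_mb"] for v in vols}
        rgs = [v["rg"] for v in vols]
        size = next(iter(sizes))
        return (len(sizes) == 1                      # all the same size
                and len(set(rgs)) == len(rgs)        # no two from the same RG
                and size * len(vols) <= TWO_TB_MB)   # stripe stays under 2 TB

    for count in (4, 3, 2):
        for balance in ("bus_and_sp", "sp_only", "rg_only"):
            for combo in combinations(candidates, count):
                if not meets_absolute_criteria(combo):
                    continue
                # simplified balance checks: "balanced" taken here as "spread
                # over more than one bus / SP", looser than the manual's wording
                if balance == "bus_and_sp" and len({v["bus"] for v in combo}) < 2:
                    continue
                if balance != "rg_only" and len({v["sp"] for v in combo}) < 2:
                    continue
                return list(combo)                   # AVM would stripe these

    # last resort: one volume from the least-used RAID group
    # (rg_used_mb is a made-up field standing in for "least used")
    return [min(candidates, key=lambda v: v["rg_used_mb"])]

The real decision also checks that all volumes come from a single storage system and match the pool's storage profile, per the criteria quoted above; the sketch leaves that out for brevity.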

1 Rookie • 92 Posts

December 15th, 2008 18:00

I've given AVM 20x 300 GB FC and 20x 450 GB FC disks in 5-disk R5 groups, 8 LUNs of each size, which have been pulled into the clar_r5_perf pool.
Do I need to go back and make these all identical-size LUNs within the pool, or will AVM pull the like-sized volumes together and consume all of the allocated storage?
48 y 824469 APM00055200041-00C9 CLSTD d48 1,2
49 n 824469 APM00055200041-00CB CLSTD d49 1,2
50 y 824469 APM00055200041-00CD CLSTD d50 1,2
51 n 824469 APM00055200041-00CF CLSTD d51 1,2
52 n 824469 APM00055200041-00C8 CLSTD d52 1,2
53 y 824469 APM00055200041-00CA CLSTD d53 1,2
54 n 824469 APM00055200041-00CC CLSTD d54 1,2
55 y 824469 APM00055200041-00CE CLSTD d55 1,2
56 y 549623 APM00042302731-00C9 CLSTD d56 1,2
57 n 549623 APM00042302731-00CB CLSTD d57 1,2
58 y 549623 APM00042302731-00CD CLSTD d58 1,2
59 n 549623 APM00042302731-00CF CLSTD d59 1,2
60 n 549623 APM00042302731-00C8 CLSTD d60 1,2
61 y 549623 APM00042302731-00CA CLSTD d61 1,2
62 n 549623 APM00042302731-00CC CLSTD d62 1,2
63 y 549623 APM00042302731-00CE CLSTD d63 1,2

1 Rookie • 92 Posts

December 15th, 2008 18:00

Just posted a follow-up...

8.6K Posts

December 16th, 2008 02:00

That's fine - AVM will consume both.

It only stripes across like-size volumes, but if needed it will concatenate multiple stripes of different sizes.
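
Sketched with the two sizes from the listing above (824469 MB and 549623 MB), a simplified Python picture of "stripe like sizes, then concatenate the stripes" (real AVM stripes in sets of up to 4 from different RAID groups, as in the manual quote earlier; this only shows the grouping idea):

from collections import defaultdict

def build_pool_volume(disk_volumes):
    """Group volumes by size, stripe within each size group, then concatenate
    the resulting stripes - a simplified view of the behaviour described above."""
    by_size = defaultdict(list)
    for vol in disk_volumes:
        by_size[vol["size_mb"]].append(vol["name"])

    stripes = [{"stripe_of": names, "size_mb": size * len(names)}
               for size, names in by_size.items()]
    return {"concat_of": stripes}

# the 16 LUNs from the listing: d48-d55 at 824469 MB, d56-d63 at 549623 MB
disks = ([{"name": f"d{48 + i}", "size_mb": 824469} for i in range(8)] +
         [{"name": f"d{56 + i}", "size_mb": 549623} for i in range(8)])

for stripe in build_pool_volume(disks)["concat_of"]:
    print(len(stripe["stripe_of"]), "members,", stripe["size_mb"], "MB")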