Unsolved

1 Rookie

 • 

96 Posts

3354

January 14th, 2013 10:00

RAID 5 Storage Group - adding drives

We have a RAID 5 storage group with nine drives.

We may need to add drives to it in order to support an increased workload (a 'test' host becoming a 'production' host).

EMC support tells me that the underlying private RAID arrays are 4+1. 

How is that even possible with 9 drives?  What sort of IO/s load can it currently support?

If we determine that it is best to add drives to it, should we add 5 at a time?  We will be adding 15 to 20 drives.  These are all 300GB 15K SAS drives.

Alternately, if we stay with RAID5, should we create a new RAID 5 storage group with 5 drives at a time?  (or is multiples of 5 okay?)  In this case, we would migrate LUNs off of the 9 drive storage group and probably re-do the group.

Thanks...

4 Operator

 • 

8.6K Posts

January 14th, 2013 10:00

Adding in multiples of 5 drives would be best.

If you add 9 drives you get 1x 4+1 R5 and 1x 3+1 R5.

9 Legend

 • 

20.4K Posts

January 14th, 2013 10:00

you can install naviseccli and run it yourself (password: messner)
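
For example, a rough sketch of what running it yourself might look like from a host with naviseccli installed (the SP address and credentials below are placeholders, and the engineering-mode syntax can vary by release):

     # list the RAID groups (engineering mode is needed to see a pool's private RGs)
     naviseccli -h <SP_IP> -user sysadmin -password <password> -scope 0 getrg

     # list the pool itself, as in the command EMC Support had you run
     naviseccli -h <SP_IP> -user sysadmin -password <password> -scope 0 storagepool -list -id <pool id>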

1 Rookie

 • 

96 Posts

January 14th, 2013 10:00

Thank you.

Regarding "VNX OE for Block 32 (Inyo)" and  "VNX OE for Block 31 (Elias)" - how can I tell which one we have?

How can I get someone to run this for me?

     naviseccli -h getrg -<engineering password>

I had asked EMC Support what the underlying RAID group is on this Storage Pool, and I was told initially that it is an 8+1, then I was told there were underlying 4+1 PRGs.

They had me run: 

/nas/sbin/naviseccli -h <xx.xxx.xxx.xxx> -user sysadmin -password -scope 0 storagepool -list -id n

9 Legend

 • 

20.4K Posts

January 14th, 2013 10:00

Christopher explains it here

https://community.emc.com/thread/166854

9 Legend

 • 

20.4K Posts

January 14th, 2013 11:00

right click on your array and look under Software tab
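
If you'd rather check from the command line, something along these lines should also report the installed Block OE revision (assuming naviseccli is available; <SP_IP> is a placeholder):

     # the Revision field in the agent information shows the FLARE/OE release
     naviseccli -h <SP_IP> getagent

     # or list the installed software packages and their revisions
     naviseccli -h <SP_IP> ndu -list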

2 Intern

 • 

247 Posts

January 15th, 2013 00:00

Landog, if you can purchase one additional drive you could correct the pools to EMC best practices. In that case, I'd:

1. Create a new pool with the 15-20 drives. Make sure it's in multiples of 5.

2. LUN migrate the existing LUNs to this new pool. This will clear your old pool.

3. Remove the old pool; this frees up 9 drives.

4. Take the one additional drive you purchased together with the 9 "old" drives and expand the "new" pool.

If you're on VNX OE 32 this will even rebalance the new pool so that you get predictable performance (so consider upgrading to it prior to the above steps).
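
A rough naviseccli sketch of those four steps might look like the following (disk IDs, LUN IDs and pool IDs are made-up placeholders, not a tested procedure):

     # 1. create the new R5 pool from the new drives, in multiples of 5
     naviseccli -h <SP_IP> storagepool -create -name NewPool -rtype r_5 -disks 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4 ...

     # 2. migrate each existing LUN onto a new LUN bound in the new pool
     naviseccli -h <SP_IP> migrate -start -source <old LUN id> -dest <new LUN id> -rate high

     # 3. destroy the old pool once it is empty, freeing the 9 old drives
     naviseccli -h <SP_IP> storagepool -destroy -id <old pool id>

     # 4. expand the new pool with the freed drives plus the extra drive (10 drives total)
     naviseccli -h <SP_IP> storagepool -expand -id <new pool id> -disks 0_0_5 0_0_6 ...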

2 Intern

 • 

247 Posts

January 15th, 2013 01:00

Hi Sushant,

You're absolutely right, that's why I proposed removing the old storage pool and basically rebuilding it.

Once you make a "mistake" adding drives to a pool, you're pretty much out of options to correct it and will have to delete the entire pool. Rebalancing will not fix that, it will only spread the data out after an expansion of the pool.

Does that make sense?

January 15th, 2013 01:00

Hi Jon,

IMHO the pool rebalancing feature of INYO will still not reshuffle the private RGs from 9 drives (4+1 and 3+1) to 10 drives (4+1 and 4+1). Hence the rebalancing feature may not help here if the pool is expanded with just 1 drive. If it does, would you be kind enough to help me understand?

January 15th, 2013 01:00

We have a RAID 5 storage group with nine drives.

We may need to add drives to it in order to support an increased workload (a 'test' host becoming a 'production' host).

--Yes, you need to add extra drives to support the increase in workload. But do you have any numbers for the additional load that you are expecting?

EMC support tells me that the underlying private RAID arrays are 4+1.

How is that even possible with 9 drives? What sort of IO/s load can it currently support?

--As per EMC best practices, when you select an R5 configuration for a storage pool, it tries to make as many private RGs with a 4+1 configuration as possible, and the remaining drives are used to make a smaller R5 private RG. Please note that this is something EMC doesn't recommend, since the extent of a LUN that resides on the private RG with the smaller number of drives cannot provide as much performance as the other RGs, so the LUN will suffer from unpredictable performance.

If we determine that it is best to add drives to it, should we add 5 at a time? We will be adding 15 to 20 drives. These are all 300GB 15K SAS drives.

Alternately, if we stay with RAID5, should we create a new RAID 5 storage group with 5 drives at a time? (or is multiples of 5 okay?) In this case, we would migrate LUNs off of the 9 drive storage group and probably re-do the group.

--Yes, if you need to add extra drives to support the increased workload, the drives should be added in multiples of 5 for a storage pool with an R5 configuration. However, your pool already has a skew in the number of drives in the underlying private RGs, and I don't think FLARE 32 supports reshuffling disks to break the existing 3+1 RG and make a 4+1 RG. So the best bet is to do some LUN migrations to a storage pool that complies with the EMC best practice of an R5 storage pool with disks in multiples of 5, or migrate the LUNs to a temporary location and redo this group as you suggested.

Thanks...
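
As a quick illustration of the private RG arithmetic described above (this only mirrors the behaviour described in this thread; it is not EMC's actual allocation logic):

     # R5 pool: as many 4+1 private RGs as possible, leftover drives form a smaller RG
     drives=9
     full=$((drives / 5))          # 1 full 4+1 group
     left=$((drives % 5))          # 4 drives left over
     echo "${full} x 4+1 R5"
     if [ "$left" -gt 1 ]; then
         echo "1 x $((left - 1))+1 R5 from the remaining ${left} drives"
     fi
     # with 9 drives this prints a 4+1 and a 3+1; with multiples of 5 you get only 4+1 groups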

4 Operator

 • 

8.6K Posts

January 15th, 2013 02:00

You CANNOT expand a pool with just ONE drive

January 15th, 2013 04:00

Oops, you are right Rainer_EMC. Thanks

1 Rookie

 • 

96 Posts

January 15th, 2013 06:00

Thank you very much for the lesson!  I think I have a good understanding, now.

Regarding "VNX OE for Block 32 (Inyo)" and  "VNX OE for Block 31 (Elias)" -- I see our Block OE is 5.31.000.5.720.  I presume that is "Elias."

We have taken a close look at the existing load by looking at the NAR files and examining throughput with Excel. (http://storagesavvy.com/2011/03/30/performance-analysis-for-clariion-and-vnx-part-1/)

It was surprising how much of our I/O load is handled by FAST Cache.  Approaching 100% on some of our workloads. 

We will rebuild the storage group.  What's the best practice for performance when building a RAID 10 storage group?  I'll have 16 drives initially, and the nine will be freed up after we migrate off of the current storage group.

Create with 8, add 8, then add 8 next time?  Will it make stripes across 4 drives that way?

2 Intern

 • 

247 Posts

January 15th, 2013 06:00

You could create the pool with 16 drives right from the start; it will accept that amount of drives without problems. The VNX will create two 4+4 RAID10 private RAID groups.

Then migrate your data and destroy your existing pool. This gives you 9 free disks.

At that point I'd add 8 drives to the pool and use the remaining one drive as a hotspare or something.

Are you sure you need RAID10? There's quite a bit of capacity loss due to the mirroring!

1 Rookie

 • 

96 Posts

January 15th, 2013 11:00

RAID10 vs. RAID5?

I calculate that the new production load will use a peak of 3900 IOPS with the RAID 5 penalty applied (write IOPS x 4) and a peak of 2600 IOPS with a mirror penalty (write IOPS x 2).  This is if the load currently being picked up by FAST Cache continues to be handled by FAST Cache.

(At our peak I/O load there would be 1300 IOPS read and 650 IOPS write.)

RAID 10:  1300 + (650 x 2) = 2600

RAID 5:   1300 + (650 x 4) = 3900

The 16 drive RAID10 would be able to handle 2600 IOPS (16*180 = 2880).

A 15 drive RAID5 would not be able to handle 3900 IOPS (15*180 = 2700).

Also, it is a critical health care system.  Minimizing recovery time and exposure is important.
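
A quick sanity check of that arithmetic, assuming roughly 180 IOPS per 15K SAS drive and the usual write penalties of 4 for RAID 5 and 2 for RAID 10 (these figures come from the post above, not from a sizing tool):

     reads=1300; writes=650
     r5_backend=$((reads + writes * 4))     # 3900 back-end IOPS for RAID 5
     r10_backend=$((reads + writes * 2))    # 2600 back-end IOPS for RAID 10
     echo "RAID 5 : need ${r5_backend}, 15 drives give $((15 * 180))"    # 2700 - falls short
     echo "RAID 10: need ${r10_backend}, 16 drives give $((16 * 180))"   # 2880 - enough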

4 Operator

 • 

4.5K Posts

January 15th, 2013 14:00

With release 31 (Elias), you should always expand a Pool with the same number of disks that you started with. If you build an R10 Pool with 16 disks (which will create two 4+4 raid groups), then when you expand that Pool you should use 16 new disks. That way the performance of the new 16 disks will equal the performance of the old 16 disks. Once the capacity of the new disks equals the capacity of the old disks, all new data is written over all 32 disks. So some data will have the performance of 16 disks and some will have the performance of 32 disks. That's why you should not use fewer disks when you expand a Pool.

The data on the disks in release 31 is not restriped over the old and new disks when you add more disks to a Pool. In release 32 (Inyo), when you add new disks to an existing Pool, the data will "re-balance" over all the disks - old and new.

glen
