
March 31st, 2016 08:00

VNX5400 Storage Pool Design

We have just installed a new VNX5400 with the following configuration for storing ESXi 5/6 VMDK LUNs and Oracle RAC LUNs:

1) 30 x 1 TB SAS Disks

2) 20 x 800 GB SSD Disks

We would like to maximize usable capacity as well as protection (tolerate more disk failures).

Which is a better choice?

1) RAID 1+0

2) RAID 5 (tolerates 1 disk failure)

3) RAID 6 (tolerates 2 disk failures)

RAID 1+0 seems good for performance; however, it loses half the usable capacity and disk spindles.

Or would RAID 6 give better performance than RAID 1+0 in this situation?
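
For reference, a rough capacity sketch in Python (it assumes 4+1 RAID 5 and 6+2 RAID 6 group sizes, and ignores vault disks, hot spares, and formatting overhead):

sas = {"count": 30, "size_tb": 1.0}
ssd = {"count": 20, "size_tb": 0.8}

def usable(count, size_tb, raid):
    if raid == "1+0":
        return count // 2 * size_tb        # half the disks hold mirror copies
    if raid == "5 (4+1)":
        return count // 5 * 4 * size_tb    # 1 parity disk per 5-disk group
    if raid == "6 (6+2)":
        return count // 8 * 6 * size_tb    # 2 parity disks per 8-disk group

for raid in ("1+0", "5 (4+1)", "6 (6+2)"):
    total = usable(**sas, raid=raid) + usable(**ssd, raid=raid)
    print(f"RAID {raid}: ~{total:.1f} TB usable")

This prints roughly 23.0 TB for RAID 1+0, 36.8 TB for RAID 5, and 27.6 TB for RAID 6 on this disk set.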

Also, we notice the first 4 disks form a RAID 5 OS disk group. Is it possible to ask the vendor to regroup them into a big storage pool?

4.5K Posts

March 31st, 2016 09:00

First, the vault drives cannot be used in a pool.

Second, RAID 6 (two parity disks) provides the best protection, but it also carries a performance penalty. For SAS and SSD disks, RAID 5 is usually the best balance of performance and protection. NL-SAS should use RAID 6.
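
The penalty comes from the extra back-end I/Os each random write generates: roughly 2 for RAID 1/0, 4 for RAID 5, and 6 for RAID 6. A minimal sketch of the usual back-end IOPS estimate; the workload size and 70/30 read/write mix are illustrative assumptions:

# Back-end IOPS implied by a front-end workload, per the standard
# RAID write-penalty rule of thumb (reads cost 1 back-end I/O).
WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

front_end_iops = 10_000   # assumed workload
read_ratio = 0.7          # assumed 70/30 read/write mix

for raid, penalty in WRITE_PENALTY.items():
    backend = front_end_iops * (read_ratio + (1 - read_ratio) * penalty)
    print(f"{raid}: ~{backend:,.0f} back-end IOPS")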

I'd also recommend that you review the latest best practices guides for VNX with MCx available on support.emc.com in the Support By Product/VNX5400/White Papers area.

https://support.emc.com/docu42660_VNX2-Unified-Best-Practices-for-Performance---Applied-Best-Practices-Guide.pdf

https://support.emc.com/docu48706_Virtual-Provisioning-for-the-VNX2-Series---Applied-Technology.pdf

glen

195 Posts

March 31st, 2016 10:00

Given exactly what you have, I believe I would consider the following RAID 5 configurations:

Leaving the four vault disks and one hot spare gives you 25 SAS disks.  You can use them as a total of 5 x (4+1R5) in one or more pools.

Your 20 SSDs can be used as 2 x (8+1R5) with two left over as hot spares, or if you are making two pools, you could use 2 x (4+1R5) in one, and 8+1R5 in the other with one hot spare left over.

One big pool would be simple, but two would permit you to separate things from each other.  You should give due consideration to this decision, as it will be inconvenient to adjust later on.

You should also be aware that a pool does not function well when it is close to 100% full.  Conservatively, I use ~85-90% full as a rule of thumb; you could probably go a bit higher than that without encountering any adverse behaviors.
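
Putting the layout above together with that fill-level rule of thumb, a quick sketch of what you would actually get to use (marketing capacities assumed, formatting overhead ignored):

sas_data_disks = 5 * 4    # 5 x (4+1) RAID 5 groups
ssd_data_disks = 2 * 8    # 2 x (8+1) RAID 5 groups

raw_tb = sas_data_disks * 1.0 + ssd_data_disks * 0.8
for fill in (0.85, 0.90):
    print(f"At {fill:.0%} full: ~{raw_tb * fill:.1f} TB usable of {raw_tb:.1f} TB")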

The documents linked above are good references; you should look them over.

77 Posts

March 31st, 2016 18:00

Could you explain why NL-SAS should use RAID 6? Thanks.

195 Posts

March 31st, 2016 19:00

Their capacity and speed make rebuilds ... glacial is a good description. And their relative reliability makes failures more common than in other enterprise disks. These characteristics combine to produce situations where a second failure during a RAID 5 rebuild is likely enough to warrant the additional protection that RAID 6 offers.
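
To put rough numbers on that, here is a toy model assuming independent exponential failures; the MTBF, disk sizes, and rebuild times below are illustrative assumptions, not VNX specifications:

import math

def second_failure_prob(surviving_disks, rebuild_hours, mtbf_hours):
    # P(at least one surviving group member fails during the rebuild
    # window), assuming independent exponential failures.
    return 1 - math.exp(-surviving_disks * rebuild_hours / mtbf_hours)

MTBF = 1_200_000  # hours; an assumed spec-sheet figure
for label, survivors, rebuild_h in [
    ("600 GB SAS 4+1, ~6 h rebuild", 4, 6),
    ("4 TB NL-SAS 4+1, ~60 h rebuild", 4, 60),
]:
    p = second_failure_prob(survivors, rebuild_h, MTBF)
    print(f"{label}: {p:.4%} chance of a second failure mid-rebuild")

The longer NL-SAS rebuild window makes the exposure roughly ten times larger in this toy example, and RAID 6 survives exactly that second failure.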

77 Posts

March 31st, 2016 20:00

If I use RAID 5 to form a 5 x (4+1) SAS RAID 5 disk group storage pool with 1 SAS disk as a hot spare,

would it have better performance than:

1) 3 x (6+2) RAID 6 disk group storage pool

2) 1 x (12+12) RAID 1+0 storage pool

I think that RAID 1+0 is safer for disk failure protection, because RAID 5 and RAID 6 take much longer to rebuild after a disk replacement; however, with RAID 1+0, half of the disk space is spent on mirroring.

Do you know how much time is usually needed to rebuild a replaced 1 TB disk in RAID 5 (4+1) and RAID 6 (6+2) under high IOPS (over 10k)?

195 Posts

April 1st, 2016 08:00

My analysis of the math on RAID 10 is that you *can* survive the loss of up to 50% of the disks, but only if they are the right disks.  You can also fail completely with the loss of just two disks, if they happen to be a mirrored pair.  Combined with the loss of capacity, I have rarely found a compelling use case for RAID 10.
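
A quick sanity check on that: once one disk in a 12+12 RAID 1/0 set has failed, a second random failure is fatal only if it hits the mirror partner of the first:

# 24 disks total; after the first failure, 23 remain, and exactly
# one of them is the dead disk's mirror partner.
remaining_disks = 24 - 1
print(f"~{1 / remaining_disks:.1%} chance a random second failure is fatal")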

The rebuild-time question does not have a specific answer.  Generally I'd bet on the RAID 6 finishing first, but that calculation is muddied by both the size difference and the high-IOPS qualifier.  The 6+2 group holds more data, so it would be receiving a higher percentage of the I/O than the 4+1.  And the RAID 6 write penalty may be magnified if the high I/O is write-heavy.  So the 'race' between the two may be much closer than you might initially think.

Also, you have a loss of capacity.  5 x (4+1) R5 yields a net capacity of 20 disks (minus free space for pool health), while 3 x (6+2) R6 yields only 18 disks of net capacity (minus that same free-space allowance).

Those extra spindles, along with RAID 5's lower write penalty relative to RAID 6, should mean better performance for the R5 configuration.
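
A rough way to see both effects at once; the per-spindle IOPS figure and the 70/30 read/write mix are illustrative assumptions:

# Approximate front-end random IOPS each layout can sustain,
# combining spindle count with the RAID write penalty.
SPINDLE_IOPS = 180   # assumed per-disk figure for a 1 TB SAS drive
read_ratio = 0.7     # assumed workload mix

for label, disks, penalty in [("5 x (4+1) RAID 5", 25, 4),
                              ("3 x (6+2) RAID 6", 24, 6)]:
    backend_budget = disks * SPINDLE_IOPS
    front_end = backend_budget / (read_ratio + (1 - read_ratio) * penalty)
    print(f"{label}: ~{front_end:,.0f} front-end IOPS")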

My $0.02
