I realize that the VNX5200, unlike the VNX5600, does not seem to support the 4-drive vault pack (e.g. 4 x 300GB SAS for the vault drives). It has minimum and maximum base configurations for the DPE 2.5" SAS drives, starting from a quantity of 8 up to 25. Working with a specific requirement for 1 x R6 (14+2) in 600GB with 2 spares, I decided that, so as not to waste drives from the minimum DPE base config of 8 x 600GB, I would assume the system uses the first 4 drives for the vault, leaving 4 drives remaining.
I would then add 18 drives to the 4 vault drives: 4 + 18 = 22 (4 vault + 16 for the 14+2 R6 + 2 spares). 1 spare for the vault and 1 spare for the homogeneous Pool1. 22 drives in all, same size and speed.
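To double-check the drive math above, here is a quick sanity-check sketch; the counts are taken straight from this post, and the variable names are just mine:

```python
# Drive-count sanity check for the proposed VNX5200 layout.
# Counts taken from the discussion above; names are illustrative.
vault = 4            # first 4 DPE drives reserved for the vault
raid6_data = 14      # R6 data drives in the 14+2 group
raid6_parity = 2     # R6 parity drives
spares = 2           # 1 for the vault, 1 for the homogeneous Pool1

total = vault + raid6_data + raid6_parity + spares
print(total)  # 22 drives, all 600GB SAS, same speed
```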
Objective and questions:
Just a customer here, but I think I've got a couple of these:
The vault data is never rebuilt to a hot spare. That data is location specific, so rebuilding it elsewhere would do no good.
In VNX2, if you select 300GB vault drives, there is no room for user data, so a vault pack of 4 x 300GB disks does not require a hot spare.
In your config with only 600GB disks, you would not need to reserve two spares.
In pricing small 5200 configurations, I have noted that the starter 25 x whatever-disk configurations are aggressively priced, and probably cost less than manually configuring a unit with mixed disks, or a few fewer of them.
If the 14+2 R6 configuration provided all the space that you required, then I would likely incorporate the 'extra' disks in one of several ways:
4 vault disks, plus one hot spare, leaving 20.
1) a single pool with 4 x (4+1R5). Similar capacity to the single 14+2, more flexibility in LUN type (thin/thick/etc.), and highly reliable.
2) two RAID groups of 8+2R6. Slightly better capacity and reliability, with the ability to use LUNs and/or MetaLUNs to either isolate some LUNs from others, or engage all the spindles. You also get Symmetric Active/Active host multipathing by using RAID groups rather than pools.
3) four RAID groups of 4+1R5. Avoid R6 penalties while still providing fairly robust fault tolerance, plus other benefits as above. This is actually how I have a few 5200 and 5300 units with full 125 x 600GB disks configured for use with Oracle DBs. It isn't Flash-level performance, but I can construct MetaLUNs with workload appropriate numbers of spindles beneath them, and their performance under load is reliable and stable.
4) stick with your original plan and leave the 'extras' unallocated for the moment. They will cost relatively little to acquire, and give you some flexibility should you later decide to re-work things. You might, perhaps, decide to make a 2+2 R10 or 3+1 R5 for some type of logging, or for OS volumes...or whatever.
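A rough way to compare the options above on raw data capacity (this is only a sketch; it counts data spindles and ignores formatting overhead, pool metadata, and binary-vs-decimal capacity, and the labels are mine):

```python
# Rough usable-capacity comparison of the 20-disk layouts discussed above
# (illustrative only; ignores formatting overhead and pool metadata).
DISK_TB = 0.6  # 600GB drives, decimal TB

layouts = {
    "original 14+2 R6":        14,     # 14 data drives
    "pool of 4 x (4+1) R5":    4 * 4,  # 16 data drives
    "2 RAID groups of 8+2 R6": 2 * 8,  # 16 data drives
    "4 RAID groups of 4+1 R5": 4 * 4,  # 16 data drives
}

for name, data_disks in layouts.items():
    print(f"{name}: {data_disks * DISK_TB:.1f} TB raw data capacity")
```

All three alternatives put two more spindles to work for data than the single 14+2 does, which is where the "slightly better capacity" remark comes from.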
Thanks much for your input.
Suppose I stick with the 22 x 600GB drives for both vault and storage provisioning: that would mean 4 drives for the vault, 18 drives for the R6 (14+2) plus 2 spares, covering both the vault (if it holds user data) and the storage pool. Adding 3 Fast Cache drives would then fill the DPE to its full 25 slots.
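The slot allocation described above adds up to a fully populated DPE; here is the same breakdown as a small sketch (counts from this post; the Fast Cache drive type is not specified here):

```python
# Slot allocation for the fully populated 25-slot DPE described above
# (counts from the post; Fast Cache drive type left unspecified).
allocation = {
    "vault": 4,
    "R6 (14+2) pool": 16,
    "hot spares": 2,
    "Fast Cache": 3,
}
used = sum(allocation.values())
print(used)  # 25 slots, a full DPE
```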
This leads me to the following question:
Below are 3 capacity images for the 1st scenario
When I use the second scenario and try to add the remaining 2.5" drives after the vault allocation to the Pool, the capacity calculator automatically adds an additional 25-slot 2.5" DAE. I am not sure why, which is why I put the remaining drives in a RG instead. This also led me to my second question above. Below are 3 capacity images for the second scenario.
1. Yes, you can use the disks that remain as you please, RAID Group or Pool.
2. Yes, you can mix drives of different sizes of the same architecture, but it goes against EMC best practice.
Also consider that although the vault partitions will not be covered by the spare policy, if you create user LUNs on the vault drives, those LUNs will be covered by spares. Keep that in your calculations.