In answer to your first question, about data being striped over all the disks: with FAST VP enabled, data is initially positioned based on the tiering policy. The default is High:Auto, meaning new writes go to the highest tier that has free space, and over time the slices are relocated based on their temperature. If you have two tiers, EFD and SAS, all new writes will go into the EFD tier until it is full, then into the SAS tier. Over time, cold slices are moved down and hot slices are moved up; this is between-tier rebalancing. There is also in-tier rebalancing, which spreads the slices over all the disks within a tier.
If you have a single tier and FAST VP is not enabled, new data is placed in the first private LUN in the first private RAID group, then moves to the next LUN in that group, and so on, until it reaches the next private RAID group. With small files, the slices could all end up on a single private RAID group. FAST VP would then rebalance the slices based on their temperature, moving hot slices to private RAID groups with a lower temperature and thereby spreading the data over more disks.
In metaLUNs the data is striped evenly over all the component LUNs in each RAID group, thereby striping the data over all the disks more evenly. This depends on your following the MetaLUN Best Practices (attached, in case you don't have it). MetaLUNs require more work to configure, but they probably provide the best performance if done correctly.
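To picture why a striped metaLUN spreads I/O more evenly, here is a toy round-robin model of how consecutive stripe elements rotate over the component LUNs. This is only an illustration of the striping concept, not the array's actual placement algorithm:

```python
# Toy model: a striped metaLUN places consecutive stripe elements
# round-robin across its component LUNs (illustration only).
def component_for_stripe(stripe_index, n_components):
    """Map a stripe element to a component LUN index, round-robin."""
    return stripe_index % n_components

# With 4 component LUNs, consecutive stripe elements rotate evenly,
# so sequential I/O touches every underlying RAID group in turn:
print([component_for_stripe(i, 4) for i in range(8)])
# [0, 1, 2, 3, 0, 1, 2, 3]
```

This is why the best-practice document stresses using equally sized component LUNs from separate RAID groups: the rotation only balances load if every component has the same capacity and its own spindles.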
I plan to buy a VNX5200 with 25 SAS drives only. I plan to use the 4 vault drives as the first RAID group, and the next 20 drives as a second classic RAID group.
What do you mean by private RAID groups without a FAST VP pool? If I have a classic RAID group, I have no private RAID groups, don't I? As I understand it, private RAID groups exist only within a FAST VP pool.
My question is about a different thing. If I have only SAS drives in the VNX5200, then I should not use a pool with FAST VP; I should use only a classic RAID group. So, is that faster for read/write operations (only VMware vSphere hosts access the VNX) than if I make a pool, enable FAST VP, and create a LUN from the pool on the same SAS disks?
As I understand it, the first consideration is the memory used by the SP for the I/O cache. With only SAS disks I would not enable FAST VP, which makes more RAM available for cache, so performance should be better than with a FAST VP pool LUN.
Don't overcomplicate it, then!
Generally speaking, your plan may provide a very moderate performance increase, but it won't be significant, particularly for VMware workloads.
You'll also need to have more than one RAID Group as a single group can only contain a maximum of 16 drives.
Have you considered getting a few EFD drives and using a small FAST Cache? It's very effective bang for the buck, and if you're spending all that coin up front, a worthy investment. Although in your case it means you'll also have to purchase another DAE to fit them in. If there's a likelihood you'll need to expand in the future, this may also suit.
Thank you for the 16-disk limitation! I didn't know about it.
EFD disks are very expensive; we can't use them.
So, I must use a pool with FAST VP and create LUNs on it if I want to use all 20 disks.
How about the hot spare in the VNX5200? Should I reserve one disk for it and not include it in the pool?
You could still achieve your original plan: just create two pairs of (4+1) RAID 5 groups (four groups in total), then create metaLUNs striped across them.
It's really up to you which way to go. If you have 25 disks, you'll lose 4 to the vault and 20 to either your pool or RAID groups, and have 1 left over for a hot spare.
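The disk budget above works out as a simple subtraction; a quick sketch of the arithmetic for the 25-drive configuration being discussed:

```python
# Disk budget for the 25-drive VNX5200 discussed in this thread.
total_disks = 25
vault = 4         # the first four drives hold the vault
data_disks = 20   # go to the pool or to classic RAID groups
hot_spares = total_disks - vault - data_disks

print(hot_spares)  # 1
```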
Your planned usage really does matter here, as does your relative risk tolerance.
If you intended to provision all the storage to the hosts and keep everything the same forever (or whatever the business definition of a really long time was...), then LUNs and MetaLUNs on basic RAID groups would be attractive.
If you want to be able to provision storage in different sizes, and more importantly, take it back and re-provision it in different sizes, the pools would likely make more sense. You could do this with RAID groups, but there might be extra steps, like defragmenting the groups, to consider.
If your usage includes databases you might want to provide at least two 'lumps' of storage, so that you could allocate database log files from different physical resources than database data files.
Whether you are using pools or groups, the two physical sizes that suggest themselves (to me...) are 4+1R5, and 8+1R5.
In practical terms you could, for example, form an 18-disk pool from 2 x (8+1R5) groups, or form two 8+1R5 RAID groups and create LUNs and MetaLUNs from them. That would leave you three unbound disks for hot spares.
You could also form a pool from 20 disks using 4 x (4+1R5) groups, or again, form those same disks into four separate 4+1R5 groups and create LUNs and MetaLUNs from them. That would leave you one hot spare.
If you elect to go with pools, those are likely your two best approaches, and I would mention that as far as capacity is concerned, you end up with 16 disks worth of user capacity either way.
If you decided on RAID groups, you could create 2 x (4+1R5) RAID groups *and* an 8+1R5 RAID group and still have two disks available for hot spares. While you could construct a pool with those same characteristics, doing so would violate EMC best practice.
It doesn't really make any sense to go with an 8+1 RAID 5 drive count for pools or RAID groups.
Effective capacity is the same, and you lose the performance benefit of the 2 extra disks, basically robbing the platform of 10% of its IOPS and throughput. Having more than 1 hot spare for only 25 disks is a waste, when EMC best practice is 1 per 30.
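The 10% figure comes straight from the spindle counts, assuming IOPS and throughput scale roughly with the number of spinning disks (a simplification that ignores workload mix and parity overhead):

```python
# Back-of-the-envelope spindle comparison behind the "10%" figure,
# assuming IOPS scale roughly with spindle count (a simplification).
spindles_8p1 = 2 * (8 + 1)   # 2 x (8+1R5) = 18 spindles
spindles_4p1 = 4 * (4 + 1)   # 4 x (4+1R5) = 20 spindles

loss = 1 - spindles_8p1 / spindles_4p1
print(f"{loss:.0%}")  # 10%
```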
Sticking with 4+1 groups is the logical choice. This conforms to best practice from both a drive count and spare perspective.