February 9th, 2016 10:00

VNX2 Storage Pools - Non FAST

VNX2 - Non FAST and all 1 Tier (NL-SAS)

If I have a VNX2, or even a VNX1 for that matter, configured with Storage Pools will my data be striped to all disks in the pool as it would be with FAST enabled?

The way I understand FAST, data extents (slices) are moved between tiers in either 1 GB (VNX1) or 256 MB (VNX2) slices.
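As a rough illustration of that granularity difference, here is a small sketch of how many slices a given LUN would span at each slice size (using the 1 GB / 256 MB figures above; the function name is illustrative, not an EMC tool):

```python
# Rough slice-count comparison for FAST VP granularity.
# Slice sizes taken from the post: 1 GB on VNX1, 256 MB on VNX2.

def slice_count(lun_gb, slice_mb):
    """Number of slices a LUN of lun_gb gigabytes occupies."""
    lun_mb = lun_gb * 1024
    return -(-lun_mb // slice_mb)  # ceiling division

print(slice_count(500, 1024))  # VNX1: 500 slices for a 500 GB LUN
print(slice_count(500, 256))   # VNX2: 2000 slices for the same LUN
```

The finer VNX2 granularity simply means data is tracked and relocated in smaller chunks.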

I have an application whose workload is heavily sequential write (~70%). Right now I am using RAID groups and multiple traditional LUNs presented to a Linux host and bound into a single VG.


February 15th, 2016 08:00

Your planned usage really does matter here, as does your relative risk tolerance.

If you intended to provision all the storage to the hosts and keep everything the same forever (or whatever the business definition of a really long time was...), then LUNs and MetaLUNs on basic RAID groups would be attractive.

If you want to be able to provision storage in different sizes, and more importantly, take it back and re-provision it in different sizes, the pools would likely make more sense.  You could do this with RAID groups, but there might be extra steps, like defragmenting the groups, to consider.

If your usage includes databases you might want to provide at least two 'lumps' of storage, so that you could allocate database log files from different physical resources than database data files.

Whether you are using pools or groups, the two physical sizes that suggest themselves (to me...) are 4+1R5, and 8+1R5.

In practical terms you could, for example, form a pool of 18 disks from 2 x (8+1R5) groups, or form two RAID groups as 8+1R5 and create LUNs and MetaLUNs from them.  That would leave you three unbound disks for hot spares.

You could also form a pool from 20 disks using 4 x (4+1R5) groups, or again, form those same disks into four separate 4+1R5 groups and create LUNs and MetaLUNs from them.  That would leave you one hot spare.

If you elect to go with pools, those are likely your two best approaches, and I would mention that as far as capacity is concerned, you end up with 16 disks worth of user capacity either way.
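The capacity claim above can be checked with quick back-of-the-envelope arithmetic (a Python sketch; helper names are illustrative, not an EMC tool):

```python
# Back-of-the-envelope capacity check for the two pool layouts above
# (disk counts taken from the post).

def data_disks(groups, data_per_group):
    """Data-bearing disks across identical RAID 5 private groups."""
    return groups * data_per_group

# 2 x (8+1)R5 pool: 18 disks consumed, 16 carry data
# 4 x (4+1)R5 pool: 20 disks consumed, 16 carry data
print(data_disks(2, 8), data_disks(4, 4))  # 16 16 -> same user capacity
```

Either way, parity overhead leaves 16 disks worth of user capacity; the layouts differ only in how many physical disks they consume.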

If you decided on RAID groups, you could create 2 x (4+1R5) RAID groups *and* an 8+1R5 RAID group and still have two disks available for hot spares.  While you could construct a pool with those same characteristics, doing so would be a violation of EMC best practice.

February 16th, 2016 02:00

It doesn't really make sense to go with an 8+1 RAID 5 drive count for pools or RAID groups.

Effective capacity is the same, but you lose the performance benefit of the 2 extra disks, robbing the platform of roughly 10% of its IOPS and throughput. Having more than 1 hot spare for only 25 disks is a waste, when EMC best practice is 1 per 30.

Sticking with 4+1 groups is the logical choice. This conforms to best practice from both a drive-count and spare perspective.
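The 1-spare-per-30-drives rule of thumb cited above is easy to sketch (assumed ratio from the post; the function name is illustrative):

```python
import math

# Hot-spare sizing per the 1-spare-per-30-drives rule of thumb
# mentioned above (ratio is the post's figure, not a computed value).
def spares_needed(drive_count, ratio=30):
    return math.ceil(drive_count / ratio)

print(spares_needed(25))  # 1 spare covers up to 30 drives
print(spares_needed(31))  # 2 spares once past 30
```

So for a shelf in the 20-25 drive range discussed here, one hot spare satisfies the guideline.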
