Unsolved
38 Posts
1
8082
October 9th, 2010 21:00
Storage Pools - FLARE30 - Expansion - is data redistributed?
CX4-240
FLARE 30
No FAST enabler
Let's say I have a storage pool made up of just 5 FC disks (raid 5). Thin LUNs have been created and some amount of data has already been stored.
If I were to add another 5 (or 10) FC disks (of same size) to this storage pool, will the Clariion redistribute the existing thin LUN data across all FC disks in the same fashion as a striped expansion of a traditional MetaLUN, or will the new disks be used only for newly written data?


jgrinwis
1 Rookie
•
99 Posts
0
October 11th, 2010 04:00
I've got an extra question: if you add another 5 disks to an existing 5-disk R5 pool, will it create an additional 4+1 R5 group and attach that to the pool, or will it rebuild the pool as a single 9+1 R5?
jps00
2 Intern
•
392 Posts
0
October 11th, 2010 05:00
4+1R5
jsessler
38 Posts
0
October 11th, 2010 14:00
That's not what I wanted to hear. My expectation was that it would work just as MetaLUN striped expansion would, and restripe data across all spindles. In a sense, you need to be careful about the initial design of the storage pool, otherwise you could wind up with a hot area in the pool.
With the addition of FAST, will it redistribute data within the same tier of disk at all, in an attempt to rebalance hot blocks?
I guess the possible "answer" would be to add additional disks to the pool, then do a thin-to-thin LUN migration. My assumption then being that the new LUN would be more evenly distributed across all spindles.
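A quick way to sanity-check that workaround is to model slice allocation. This toy Python simulation assumes the pool hands out each new slice round-robin across its private RAID groups with free space (an assumption on my part - the real FLARE allocator isn't documented); under that assumption, a LUN migrated after expansion does end up spread across old and new groups:

```python
from itertools import cycle

def allocate_round_robin(groups, n_slices):
    """Allocate n_slices round-robin over the private RAID groups that
    still have free slices (an assumed policy, not the real allocator)."""
    if n_slices > sum(g["free"] for g in groups):
        raise ValueError("pool does not have enough free slices")
    placement = []
    order = cycle(groups)
    while len(placement) < n_slices:
        g = next(order)
        if g["free"] > 0:
            g["free"] -= 1
            placement.append(g["name"])
    return placement

# During a thin-to-thin migration the source LUN still holds its 60
# slices on the original group, so rg0 has only 40 free; the freshly
# added group (rg1) is empty.
pool = [{"name": "rg0", "free": 40}, {"name": "rg1", "free": 100}]
new_lun = allocate_round_robin(pool, 60)
print(new_lun.count("rg0"), new_lun.count("rg1"))  # 30 30
```

In this model the migrated copy lands evenly across both groups, which is the behavior the workaround is counting on.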
Jeff
Kumar_A
2 Intern
•
727 Posts
0
October 11th, 2010 14:00
FLARE will not redistribute the existing data onto the new drives, but the allocation algorithm makes sure that new data is placed in such a manner that all the drives get equally used. So you will see more storage chunks allocated from the new drives after they are added.
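That behavior can be illustrated with a small model: if each new slice simply goes to the private RAID group with the most free capacity (a greedy stand-in I'm assuming here, not the published algorithm), new writes favor the freshly added drives until utilization evens out:

```python
def place_slice(groups):
    """Place one new slice on the private RAID group with the most free
    capacity (a stand-in for FLARE's balancing logic, which isn't public)."""
    g = max(groups, key=lambda grp: grp["free"])
    g["free"] -= 1
    g["used"] += 1
    return g["name"]

# Original 4+1 group is half full; the freshly added one is empty.
pool = [
    {"name": "old_4+1", "free": 50, "used": 50},
    {"name": "new_4+1", "free": 100, "used": 0},
]

# Write 70 new slices: the new group absorbs most of them.
hits = [place_slice(pool) for _ in range(70)]
print(hits.count("old_4+1"), hits.count("new_4+1"))  # 10 60
print(pool[0]["used"], pool[1]["used"])              # 60 60
```

The new group takes 60 of the 70 new slices, after which both groups sit at equal utilization - consistent with "more storage chunks getting allocated from the new drives," but with the pre-existing data never moving.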
dynamox
11 Legend
•
20.4K Posts
•
87.4K Points
0
October 11th, 2010 16:00
That is disappointing to hear indeed. I was expecting the same functionality as on VMAX, where once new devices are added to a pool, the existing data gets rebalanced onto the new drives, giving you more performance and avoiding hot spots.
RRR
6 Operator
•
5.7K Posts
0
October 12th, 2010 03:00
Very disappointing indeed. You'd expect that when you need more performance, you could just add a few extra drives and everything would be fine again, but I guess that's not the case.
RFE? (= request for enhancement)
dynamox
11 Legend
•
20.4K Posts
•
87.4K Points
0
October 12th, 2010 09:00
What are we supposed to see? That thick LUNs within a pool are better for performance-sensitive applications?
kelleg
6 Operator
•
4.5K Posts
1
October 12th, 2010 09:00
You might also want to review the latest Virtual Provisioning White Paper:
White Paper EMC CLARiiON Virtual Provisioning - Applied Technology.pdf
See the Executive Summary, second paragraph and page 15 for usage of thin LUNs
glen
kelleg
6 Operator
•
4.5K Posts
0
October 12th, 2010 09:00
Thin LUNs are not appropriate for performance-sensitive applications.
glen
dynamox
11 Legend
•
20.4K Posts
•
87.4K Points
0
October 12th, 2010 10:00
OK, but even with thick LUNs, if I add more space to the pool, the data is not re-striped. So if I have a thick LUN that needs more IOPS, adding more devices to the pool will not help. Maybe the CLARiiON developers should have peeked over the VMAX engineers' shoulders to see how they implement pool functionality.
jsessler
38 Posts
1
October 12th, 2010 10:00
That's not exactly what it says. Although it's understandable why thin will not perform as well as thick, there is still a problem with storage pools if data is not redistributed when additional disks (of the same tier) are added.
Thick - Allocates space upfront in the pool so there is no overhead penalty for allocating on-demand as a thin LUN does.
Thin - Allocates space as necessary, so a performance-sensitive application will not perform as well when it must allocate additional space.
Now then, going back to my scenario: e.g. the pool starts with 5 FC disks in a (4+1) RAID 5, and I create a thick LUN. Since the space is pre-allocated, does that imply that all of its data will reside on those 5 disks? If so, then adding disks to the pool after the fact will provide no performance gain, since the data is not redistributed across all the spindles.
In a sense, thick LUNs may result in worse performance depending on the number of disks in the pool at the time of allocation, whereas a thin LUN may perform better, assuming new growth spans the newly added spindles.
Jeff
RRR
6 Operator
•
5.7K Posts
0
October 13th, 2010 01:00
Once again: who's going to apply for an RFE ?
jsessler
38 Posts
0
October 13th, 2010 10:00
bertog,
According to the FAST white paper, the FAST process only relocates data slices up and down the tiers, so it doesn't appear to alleviate the issue of hot spindles within a given tier. Sure, if a slice gets promoted to flash that may help rebalance things, but it's clear there is a need for redistribution of data within a given tier when new disks are added.
dynamox
11 Legend
•
20.4K Posts
•
87.4K Points
0
October 13th, 2010 10:00
bertog,
Please correct me if I am wrong, but FAST moves data between tiers (flash, FC, SATA). If I have a pool of only FC drives, what is it going to do for me? Does FAST analyze hot FC versus cold FC and move data accordingly?
bertog
61 Posts
0
October 13th, 2010 10:00
Hello,
As noted earlier, data is not redistributed across all the drives in a pool when the pool is expanded. However, the addition of FAST functionality to pools with Release 30 has alleviated the need for this somewhat. When FAST is enabled on the array and the storage pool, the system will monitor all activity and move data as appropriate to optimize data placement within that pool (based on the policies selected); in this respect, data is redistributed according to analysis of activity levels. The FAST white paper covers this in more detail: http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h8058-fast-clariion-wp.pdf
Within engineering we have heard the request for pool rebalancing upon expansion in the manner you are requesting, and it is something that is being looked at, but it is definitely not included in Release 30. (I can't comment about roadmaps or futures on this forum.)
I encourage you folks to review the Virtual Provisioning white paper linked previously. Even when a pool is initially created, a thin LUN or full LUN is not necessarily allocated across all the drives. The system creates private RAID groups according to algorithms designed to maximize best-practice sizes, and allocates slices to the user LUNs from the private RAID groups to balance capacity use and performance. This is even more complicated to explain in a posting now that pools can contain different drive types - white paper!
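For a concrete picture of that private RAID group carving, here is a toy sketch. It assumes the best-practice private group size for an FC RAID 5 pool is 4+1 (5 drives) and simply counts full groups; what FLARE actually does with a remainder that can't fill a group is internal, so the sketch just reports it:

```python
def carve_private_raid_groups(n_drives, group_size=5):
    """Split a batch of added drives into best-practice-sized private
    RAID groups (4+1 R5 -> 5 drives). Drives left over after forming
    full groups are reported separately (illustrative only)."""
    return n_drives // group_size, n_drives % group_size

for added in (5, 10, 13):
    groups, leftover = carve_private_raid_groups(added)
    print(f"{added} drives -> {groups} x 4+1 group(s), {leftover} left over")
```

This is also why expanding by a multiple of the preferred group size (5, 10, 15 drives at a time) keeps the pool's private groups uniform.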
Generally speaking, we consider it a best practice to expand a pool by as large a number of drives as possible each time, rather than expanding multiple times with smaller groups of drives. In a perfect world you would double the pool capacity each time - a 10-drive pool expanded by 10 drives, for example. Similarly, it is a best practice to keep performance-sensitive applications on full LUNs within a pool; or, if you need to be sure of exact data placement (e.g. striped across 3 RAID groups), use RAID Group LUNs and MetaLUNs.