
January 15th, 2014 01:00

Single pool with 2.5-inch SAS and 3.5-inch SAS without FAST, possible?

Can I mix 2.5-inch 600GB SAS and 3.5-inch 600GB SAS drives in the same pool without FAST? Is that possible?

Scenario:

Old VNX 5100 with 15 x 3.5-inch 600GB 15K SAS disks

New VNX 5100 with 25 x 2.5-inch 600GB 10K SAS disks

I want to create a single pool from the above without needing a FAST license. Is that possible, or do I need two pools?

8.6K Posts

January 15th, 2014 03:00

Sure

But how would you create a pool across two VNX systems?

32 Posts

January 15th, 2014 04:00

Not 2 systems.

I am migrating data to the new 5100. After that I will shut down the old array and move its drives to the new array.

6 Posts

January 15th, 2014 09:00

Yes, you can create the pool across a mix of drive form factors (e.g. 2.5" and 3.5" drives).  The one thing I will caution you about is mixing drive speeds.  The VNX pool will consider both 10K and 15K drives to be the same tier of drive, and this will lead to unpredictable performance within the pool.

6 Posts

January 15th, 2014 10:00

dynamox, to clarify, that is the value of having FAST licensed on your array.  There is some decent detail here, under point #3.  I know this doesn't solve your concerns, but to have the private RAID groups rebalanced based on performance metrics you have to have FAST licensed.

1 Rookie • 20.4K Posts

January 15th, 2014 10:00

This is where I have an issue: in FLARE 32, if you are doing in-tier FAST and my 15K private RAID groups are "colder" than my 10K private RAID groups, then move hot slices to those private RGs.  So why would I not get any benefit if I am forced to mix 10K and 15K drives in the same pool?

474 Posts

January 15th, 2014 10:00

To clarify what jwardsni1 is saying...

VNX looks at drive technology when determining pool structure, not speed or size.  So 10K and 15K drives are perceived as the same tier, and 300GB 10K and 600GB 15K drives are also perceived as the same tier.  In your case it's not all that bad, really, since the drives are the same size, but you can only count on the performance of the 10K drives: even though your pool also has 15K drives, performance will be based on the lowest-speed component.

Based on your scenario, you will create a new pool of 25 x 600GB 10K disks (RAID 5 4+1, I assume), then migrate data into that pool, then add the 15 x 600GB 15K disks (which will also be RAID 5 4+1).  After adding the disks, the VNX will rebalance the pool (assuming you are on VNX OE v.32.x code before the pool expansion).  Without the FASTVP license the rebalance is based purely on the capacity consumption of the disks in the pool, so after the rebalance each disk will have approximately the same amount of data and the same amount of free space as the other disks.
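Purely to illustrate what that capacity-only rebalance amounts to, here is a minimal Python sketch; it is not EMC's implementation, and the RG names and consumed-GB figures are assumptions for the example:

```python
# A minimal sketch (not EMC's code) of a capacity-only rebalance: keep
# moving 1GB slices from the fullest private RAID group to the emptiest
# until consumed capacity is roughly even across all of them.

SLICE_GB = 1  # VNX pool slice size

# Consumed GB per private RG right after the three empty 15K RGs are added.
private_rgs = {
    "10K_rg1": 1800, "10K_rg2": 1820, "10K_rg3": 1790,
    "10K_rg4": 1810, "10K_rg5": 1805,
    "15K_rg1": 0, "15K_rg2": 0, "15K_rg3": 0,
}

def rebalance_by_capacity(rgs):
    """Move one slice at a time from the fullest RG to the emptiest until
    every RG is within one slice of the others (capacity only, no heat)."""
    while True:
        fullest = max(rgs, key=rgs.get)
        emptiest = min(rgs, key=rgs.get)
        if rgs[fullest] - rgs[emptiest] <= SLICE_GB:
            return rgs
        rgs[fullest] -= SLICE_GB
        rgs[emptiest] += SLICE_GB

print(rebalance_by_capacity(private_rgs))
# Every RG ends up holding roughly the same amount of data.
```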

As additional slices are allocated for LUNs, the VNX adds those slices to the private RGs with the most free space as a percentage of total capacity.  Since all the disks are the same size, you will essentially have an even balance across the 10K and 15K disks, but it's unlikely you will see any performance benefit from the 15K disks over what the 10K disks would provide.
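In the same spirit, a hedged sketch of the allocation rule just described, where a newly allocated slice goes to whichever private RG has the most free space as a percentage of its capacity (the dataclass and capacities are assumptions, not VNX internals):

```python
# Toy illustration of slice allocation without the FAST VP enabler:
# the new slice lands on the private RG with the highest free-space
# percentage. Structure and numbers are invented for the example.

from dataclasses import dataclass

@dataclass
class PrivateRG:
    name: str
    total_gb: int      # usable capacity of the private RAID group
    consumed_gb: int   # capacity already holding slices

    @property
    def free_pct(self) -> float:
        return (self.total_gb - self.consumed_gb) / self.total_gb

def allocate_slice(rgs, slice_gb=1):
    """Place a new slice on the RG with the most free space as a % of total."""
    target = max(rgs, key=lambda rg: rg.free_pct)
    target.consumed_gb += slice_gb
    return target

pool = [
    PrivateRG("10K_rg1", total_gb=2145, consumed_gb=1100),
    PrivateRG("15K_rg1", total_gb=2145, consumed_gb=900),
]
print(allocate_slice(pool).name)   # -> 15K_rg1 (higher free %)
```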

1 Rookie • 20.4K Posts

January 15th, 2014 13:00

But given that 15K drives can service more IOPS, shouldn't FAST move more slices to those drives? What metrics does FAST consider for intra-tier relocation?

474 Posts

January 15th, 2014 13:00

Does your array have the FASTVP enabler?  If so, then the slices are relocated based on how busy they are.  If no enabler, then it's capacity usage only.

Again, FASTVP doesn't care whether the disks are 10K or 15K, it's simply ranking the slice activity across the array and distributing them as evenly as possible.  The 15K disks will likely end up colder than the 10K disks.

474 Posts

January 16th, 2014 16:00

I believe that FASTVP is looking at the temperature of the slices, not the temperature of the disks.  It essentially ranks the slices hottest to coldest, puts the hottest ones in SSD, the next hottest in the SAS tier, and the rest in the SATA tier.  For in-tier balancing, I believe it's using the same data and just distributing the heat across the drives as evenly as possible.  I do not believe that FASTVP is tracking the heat of the disks themselves, so the 10K disks being busier than the 15K disks will not be a factor in FASTVP's work.
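As a rough sketch of that ranking (the slice temperatures and tier capacities below are invented, and this is only an illustration of the behaviour described, not the actual FASTVP code):

```python
# Rank slices hottest to coldest and fill the fastest tier first, as
# described above. Tier capacities (in slices) and temperatures are
# made-up numbers for the example.

def place_slices(slice_temps, tiers):
    """slice_temps: {slice_id: temperature}
    tiers: [(tier_name, capacity_in_slices)] ordered fastest to slowest."""
    ranked = sorted(slice_temps, key=slice_temps.get, reverse=True)  # hottest first
    placement, idx = {}, 0
    for tier_name, capacity in tiers:
        for slice_id in ranked[idx:idx + capacity]:
            placement[slice_id] = tier_name
        idx += capacity
    return placement

slices = {"s1": 950, "s2": 40, "s3": 700, "s4": 5, "s5": 300}
tiers = [("SSD", 1), ("SAS", 2), ("SATA", 10)]
print(place_slices(slices, tiers))
# -> s1 lands in SSD, s3 and s5 in SAS, the cold s2 and s4 in SATA
```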

1 Rookie • 20.4K Posts

January 16th, 2014 17:00

If it does not track the heat of the disks, then how does it know where to move slices to (still talking about in-tier)?

474 Posts

January 17th, 2014 12:00

All it really needs to know is how hot the slices are; it then distributes them so that the hot/warm/cold mix on each disk roughly matches the next disk.  If one private RG has many hot slices and another private RG has none, it can move some of the hot slices from the busy RG over to the cold RG, and move some of the cold slices the other direction, effectively distributing the workload evenly.
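A toy model of that in-tier balancing idea, assuming a simple hottest-for-coldest swap rule (the rule and the per-slice temperatures are my simplification, not the real relocation logic):

```python
# Swap hot slices out of the busier private RG for cold slices from the
# idler one, as long as each swap narrows the gap in aggregate temperature.
# Per-slice temperatures are invented for the example.

def balance_in_tier(rg_a, rg_b, max_swaps=100):
    """rg_a, rg_b: lists of per-slice temperatures on two private RGs in
    the same tier. Swap hottest-for-coldest while it improves the balance."""
    for _ in range(max_swaps):
        hot, cold = (rg_a, rg_b) if sum(rg_a) >= sum(rg_b) else (rg_b, rg_a)
        gap = sum(hot) - sum(cold)
        h, c = max(hot), min(cold)
        if not (0 < h - c < gap):        # this swap would not narrow the gap
            break
        hot.remove(h); cold.remove(c)
        hot.append(c); cold.append(h)
    return rg_a, rg_b

busy_rg = [90, 80, 70, 10, 5]   # lots of hot slices
idle_rg = [5, 5, 5, 5, 5]       # almost nothing going on
print(balance_in_tier(busy_rg, idle_rg))
```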

FASTVP really only tracks the frequency of I/Os over time to each slice (1GB on VNX, 256MB on VNX2).  It stack-ranks them and then distributes them across the private LUNs it has available in the pool.  During distribution, it takes into consideration where the slices are already located and moves the fewest slices possible to get the most balanced result.

Before in-tier rebalancing was added to the code, at a very high level, all FASTVP was trying to do was put the hottest slices in SSD until the SSD tier was 90% full, then put the next hottest slices in the FC/SAS tier until it was 90% full, and then put the rest in SATA.  FASTVP itself does not need to know how busy the disks actually are; it's just putting the busiest slices on the fastest drives it has available.  Since FASTVP doesn't know the difference between 10K and 15K disks, they are the same tier and are treated exactly the same.  In-tier rebalancing (or load balancing) is simply the ability for FASTVP to move slices between private LUNs in the same tier; it doesn't add any additional statistics for calculating what gets moved and to where.
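Putting the 90% fill behaviour together with the "fewest moves possible" point above, a minimal sketch might look like this (the 90% figure comes from the description; slice IDs, current locations and tier sizes are assumptions):

```python
# Sketch of the pre-in-tier behaviour described above, combined with the
# "fewest moves possible" idea: fill each tier to ~90% with the hottest
# remaining slices, then relocate only the slices that are not already
# where the ranking says they belong. Names and numbers are assumptions.

def target_tiers(slice_temps, tier_capacities, fill_ratio=0.9):
    """slice_temps: {slice_id: temperature}
    tier_capacities: [(tier_name, capacity_in_slices)] fastest first."""
    ranked = sorted(slice_temps, key=slice_temps.get, reverse=True)
    targets, idx = {}, 0
    for tier_name, capacity in tier_capacities:
        take = int(capacity * fill_ratio)          # leave ~10% headroom
        for slice_id in ranked[idx:idx + take]:
            targets[slice_id] = tier_name
        idx += take
    for slice_id in ranked[idx:]:                  # anything left goes to the last tier
        targets[slice_id] = tier_capacities[-1][0]
    return targets

def relocations(current, targets):
    """Return only the slices whose current tier differs from the target."""
    return {s: t for s, t in targets.items() if current.get(s) != t}

temps   = {"a": 900, "b": 500, "c": 450, "d": 20, "e": 10}
current = {"a": "SAS", "b": "SSD", "c": "SAS", "d": "SATA", "e": "SAS"}
tiers   = [("SSD", 2), ("SAS", 3), ("SATA", 100)]
print(relocations(current, target_tiers(temps, tiers)))
# -> move "a" to SSD, "b" to SAS, "e" to SATA; "c" and "d" stay put
```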

Hopefully that makes sense.

474 Posts

January 17th, 2014 13:00

It's a little more sophisticated than just IOPS, i.e. it takes into account how long ago each I/O occurred, and I think a few other factors, but again it's all focused on measuring the workload on a specific slice rather than a physical disk, private RG, private LUN, pool, etc.
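The exact temperature formula isn't spelled out in this thread, so purely as an illustration of "recent I/O counts for more than old I/O", here is one common way to model it, an exponentially decayed I/O count per slice (the half-life and the samples are assumptions):

```python
# The thread notes that recent I/O counts for more than old I/O. One
# common way to model that (not EMC's published formula) is an
# exponentially decayed I/O count per slice.

def decayed_temperature(io_samples, now_hours, half_life_hours=24.0):
    """io_samples: list of (timestamp_hours, io_count) for one slice.
    Recent I/O contributes almost fully; old I/O fades toward zero."""
    temp = 0.0
    for ts, count in io_samples:
        age = now_hours - ts
        temp += count * 0.5 ** (age / half_life_hours)  # halve the weight each half-life
    return temp

was_busy = [(0.0, 1000)]   # 1000 I/Os, 48 hours ago
busy_now = [(47.0, 400)]   # 400 I/Os, 1 hour ago
print(decayed_temperature(was_busy, 48.0))  # ~250
print(decayed_temperature(busy_now, 48.0))  # ~389, ranks hotter despite fewer I/Os
```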

1 Rookie • 20.4K Posts

January 17th, 2014 13:00

Thank you Richard, I was hoping there was a little more "sophistication" than simple I/O frequency. 100 sequential IOPS are not the same as 100 random write IOPS.

38 Posts

August 14th, 2014 01:00

Just to add a little more: FAST VP does factor in a performance capability associated with a slice within a private RG, so a slice on 15K drives will have more performance capability than the same slice on 10K drives. Capacity also factors in; remember that you could have (though it's not desirable) a tier with, say, 300GB drives and 600GB drives. The private RAID groups with 600GB drives will have twice as many slices, so that also goes into the rebalancing consideration.

So the rebalance isn't simply taking slice temperatures and distributing them to get an even aggregate temperature across all private RAID groups; it considers additional factors while doing that.

The temperature calculation is I/O frequency, though. After rebalancing, the target is for the aggregate temperature of each private RAID group within a tier to be within 5% of the others.
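A minimal sketch of checking that 5% target, with each private RG's aggregate heat weighted by an assumed relative capability/capacity factor (the 1.2x weight for a 15K RG is invented for the example, not an EMC figure):

```python
# Illustrative check of the balance target described above: after a
# rebalance, each private RAID group's aggregate slice temperature,
# weighted by its relative capability/capacity, should sit within ~5%
# of the others. The weights are invented for the example.

def balanced_within(rg_temps, rg_weights, tolerance=0.05):
    """rg_temps: {rg: aggregate slice temperature}
    rg_weights: {rg: relative performance-capability * capacity weight}."""
    shares = {rg: rg_temps[rg] / rg_weights[rg] for rg in rg_temps}
    lo, hi = min(shares.values()), max(shares.values())
    return (hi - lo) / hi <= tolerance

temps = {"15K_rg1": 560, "15K_rg2": 555, "10K_rg1": 470}
# assume a 15K 4+1 RG gets ~1.2x the weight of a 10K RG of equal capacity
weights = {"15K_rg1": 1.2, "15K_rg2": 1.2, "10K_rg1": 1.0}
print(balanced_within(temps, weights))   # True: the weighted spread is under 5%
```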

I hope that adds something to the complexity of the mechanics of FAST VP, but in a positive way.

regards,

~Keith
