Enable Automatic Storage Tiering PS6010E

July 9th, 2012 13:00

Hi guys,

Is there any way to enable Automatic Storage Tiering on my PS6010E?

I have this, but I don't know what else to do:

http://screencast.com/t/L3Yjnw6fIhd

I would like to use this capability for my VMware volumes.

Any guidance? A PDF, or something?

Thanks a lot.

July 9th, 2012 14:00

Hi, you need to put the array in the same pool as the volumes for it to tier the data.


July 9th, 2012 15:00

OK. In that case, do I need to create a volume on the SSD disks before doing the merge?

http://screencast.com/t/DEXyPFbDw44

By the way, my VMware pool is on SAS-based disks, and my tiering disks are SATA SSDs.

Does it matter?


July 10th, 2012 12:00

Anyone, guys?

Thanks.

July 10th, 2012 13:00

Hi.

As long as you have no volumes in the SSD pool, you can just merge it into the VMware pool.

I did, however, notice you are running firmware version 5.0.4, and I do not believe the advanced load balancing is available before firmware version 5.1.2. It is needed for the more advanced tiering of hot blocks.

The currently recommended firmware version is 5.2.4-H1, and it can be downloaded from support.equallogic.com.

July 10th, 2012 13:00

Oh, and it's advisable to run the same firmware version on all members.

And of course, read the upgrade documentation :-)

July 10th, 2012 14:00

Oh, one more thing.

You can read more about load balancing/tiering in this document:

www.google.dk/url


July 10th, 2012 20:00

Definitely, upgrading to 5.2.4-H1 is a must. The tiering you're looking for was introduced in 5.1.x. There are many very important fixes in the current release, and all members in the group must be at the same firmware level.

Before you decide to merge the pools (and they don't have to be empty to merge pools), a review of SANHQ would be advisable. SANHQ can be downloaded from the EqualLogic website. It's a fantastic monitoring tool for the arrays.

Tiering isn't instantaneous, nor continuous, so if your I/O pattern is very spiky, you probably won't get the expected benefit. Also, mixing SSD and non-SSD members in the same pool isn't advisable; you lose the great benefit of the SSDs that way. Tiering is designed to keep a steady state between members of the same pool. In a two-member pool, for example, if one member is on average doing more work than the other, they will swap out blocks of data: "hot" (busy) volume data for "cold" (non-busy) blocks. Moving data around has a cost, so there's a threshold that must be reached before this exchange takes place.
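To make that concrete, here's a minimal Python sketch of the threshold idea. The function name and both numbers are made up for illustration (the 20 ms figure matches what's quoted later in this thread); this is not the actual EqualLogic firmware algorithm:

```python
# Minimal sketch of the "exchange hot pages for cold pages past a threshold"
# idea described above. Not the actual EqualLogic firmware algorithm; the
# threshold values and function name are illustrative assumptions.

LATENCY_GAP_MS = 20      # hypothetical gap between busy and idle members
MIN_SUSTAINED_MIN = 30   # hypothetical time the gap must be sustained

def should_exchange_pages(busy_latency_ms, idle_latency_ms, sustained_min):
    """Moving pages has a cost, so only swap when the imbalance is both
    large enough and sustained long enough to pay for itself."""
    gap = busy_latency_ms - idle_latency_ms
    return gap >= LATENCY_GAP_MS and sustained_min >= MIN_SUSTAINED_MIN

# A short I/O spike never crosses the threshold, so nothing moves:
print(should_exchange_pages(45.0, 5.0, sustained_min=2))    # False (spiky)
print(should_exchange_pages(45.0, 5.0, sustained_min=60))   # True (sustained)
```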

So, for example, if you are running SQL or Exchange and the volumes are on the SAS pool, merging the SAS/SATA pools could reduce performance, since data is striped between members in proportion to their relative sizes. SATA members tend to have larger disks than SAS arrays, so generally speaking, more of the data ends up on the SATA member, and those I/O spikes typically aren't long enough to trigger the tiering algorithm.
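A quick back-of-the-envelope in Python shows why the larger member ends up doing most of the work. The capacities below are hypothetical example numbers, not anything pulled from SANHQ:

```python
# Data is striped across pool members roughly in proportion to capacity.
# Example capacities are hypothetical (a small SAS member, a large SATA one).

members_tb = {"sas_member": 2.0, "sata_member": 20.0}

total_tb = sum(members_tb.values())
for name, capacity in members_tb.items():
    print(f"{name}: ~{capacity / total_tb:.0%} of the pool's pages")

# sas_member: ~9% of the pool's pages
# sata_member: ~91% of the pool's pages -> most I/O lands on the SATA member
```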

So by running SANHQ, you can make an educated guess as to whether the load would be better served by merging the pools or not.

You can also open a case with Dell Support and they can review the SANHQ archive and array diags to help you.  

Regards,

July 11th, 2012 09:00

Hi Don.

I feel bad that I gave some rather bad advice.

I was giving my advice based on the understanding I got from the load-balancing whitepaper.

Pages 9 and 10 especially show that with, e.g., hybrid arrays in pools with "lower performing" arrays, most hot data should remain on the SSD tier, and the cold data should be balanced over to the lower-performing array.

I snagged this from the "tiering with APLB" segment:

"For example, combining large capacity PS65x0 class arrays with lower capacity PS60x0 arrays using disks that provide higher I/O to get better total system ROI may be the appropriate design for some customers. Others might choose to combine members with 10K SAS and members with SSD to meet their application workload requirements. Many other configurations are possible, these are simply examples."

I guess it's more complicated than that :-)


July 11th, 2012 09:00

Re: the SSD tier. It will, if the average load is high enough. But if you look in SANHQ and the average latencies of the members aren't far apart, then tiering won't happen often, so transitional loads may not see a benefit; they won't cross that latency threshold.


July 11th, 2012 09:00

I see there's a lot to talk about in this topic.


July 11th, 2012 11:00

Larlochacon,

Yes, there is. It's a complicated subject that's very site-dependent. SANHQ data is going to be key here. It's a balancing act: if the balancer were too aggressive, you'd waste time doing I/Os between members instead of to the server requesting the data. That additional overhead could result in performance problems. Again, *if* it were too aggressive. Drives and controllers can only do so much I/O at any one time.

July 17th, 2012 22:00

Let me just add to this discussion. Our company initially bought a PS6010XV, half populated, running RAID 50. That gave us about 2 TB of storage capacity. We soon ran out of storage and had essentially two options: populate the rest of the array, or buy a new array with slower drives but more capacity. We decided on the latter, because populating our PS6010XV put us over budget, and for a cheaper price we got a fully populated PS6010E, which gives us about 20 TB of storage capacity.

Great, now we have another SAN. What do we do? Do we add it to the existing pool, or create a separate pool because the two SANs have different spindle speeds?

Well, if these arrays were running firmware versions prior to 5.1.x, then creating a separate pool for each respective array would have been a good choice. This is because, had they been in the same pool, the 15K drives would have had to "slow down" to the 7,200 RPM speed of the new PS6010E array.

So now let's talk about the new firmware version. One of its best parts is the Automatic Performance Load Balancer (APLB). This load-balancing technology monitors the latency of the members within the pool. If it notices that a certain block of data is being accessed a lot on a particular member, and that member has a latency of over 20 ms, it will check whether there is another member in the pool with a lower latency, so it can move that "hot" block to that member. It's hard to explain, but since the primary metric APLB uses is latency, it pretty much does not matter if you have a 15K array in the same pool as a 7.2K array.
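Very roughly, the decision looks something like the Python sketch below. It only illustrates the rule as described above; the 20 ms figure is from this post, but the member names and latencies are hypothetical, and the real balancer is far more involved:

```python
# Simplified sketch of the APLB rule described above: if a member serving a
# hot page sits above 20 ms latency, move the page to a lower-latency member.
# Illustrative only; member names and latencies are hypothetical.

HOT_LATENCY_MS = 20

def pick_destination(latencies_ms, hot_member):
    """Return the member a hot page should move to, or None to leave it."""
    if latencies_ms[hot_member] <= HOT_LATENCY_MS:
        return None  # the source member isn't struggling; leave the page
    candidate = min(latencies_ms, key=latencies_ms.get)
    if latencies_ms[candidate] < latencies_ms[hot_member]:
        return candidate  # a lower-latency member exists; move the hot page
    return None

pool = {"ps6010xv_15k": 28.0, "ps6010e_7200": 9.0}  # ms, hypothetical
print(pick_destination(pool, "ps6010xv_15k"))  # -> ps6010e_7200
```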


July 17th, 2012 22:00

Networksolutionguy,

You are correct. The latency difference does need to be somewhat sustained in order for it to function. So when the array members are busy, the APLB works exactly as you describe and as advertised. However, if not, then it is possible that I/O spikes will occur without exceeding the threshold at which the page swap occurs. Also, given your example, most of the work will be done by the SATA array; it has about 90% of all the pages for that pool, made worse by the half-populated nature of the SAS array. As I mentioned earlier in the thread, using SANHQ is key to understanding what configuration will work best for each customer.

Regards,
