
December 2nd, 2011 09:00

Volume pinning in a multi-member 5.1.2 group?

In a multi-member group I very much want to take advantage of performance load balancing for the majority of volumes, but there are some exceptions where I absolutely want to pin 100% of specific volumes to specific member arrays.

I know this can be done by setting each member up with a different RAID configuration and then setting the volume RAID preference, but this seems like the wrong approach.  What's the right way?

5 Practitioner • 274.2K Posts

December 2nd, 2011 10:00

Not sure why you want to do that, but you can bind a volume to a member, assuming you have enough space.

It can be done at the CLI:

GrpName> volume select <volume_name> bind <member_name>
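For example, to pin a volume to one member and later release it again (the volume and member names below are just made-up placeholders, substitute your own):

GrpName> volume select SQL-DATA bind member01
GrpName> volume select SQL-DATA unbind

If I recall correctly, the unbind form removes the restriction again if you change your mind later.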

Make sure that you still have at least 10% free space on that member after doing this.
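If you want to check a member's free space from the CLI first, something along these lines should show it (member01 again being a placeholder name; the CLI Reference for your firmware is the authoritative source for the exact syntax):

GrpName> member select member01 show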

You will also potentially impact the storage tiering algorithm's ability to balance I/O.

Regards,

74 Posts

December 2nd, 2011 11:00

Can you expand a bit on the balancing algorithm?  

I learned a lot in the description here:

EqualLogic PS Series Architecture: Load Balancers - Community - Dell

5 Practitioner • 274.2K Posts

December 2nd, 2011 10:00

5.1.x has a new algorithm that will swap in-use "hot" blocks for "cold" blocks on a different member. This prevents one member from doing more work than another. It's based on the relative latency between members in the same pool. Depending on the size of the volume in relation to the member size, you could impede the balancer from moving pages; i.e., if you use most of a member, there's less free space left to do the block swap.

What do you think you'll gain by binding a volume?

7 Posts

December 2nd, 2011 10:00

Thanks Don, that's exactly what I needed to know.  Can you expand a bit on the balancing algorithm?  I wonder whether the pinned volume would be disregarded in load calculations, or whether the low-load blocks in the pinned volume would gum up the works.

5 Practitioner • 274.2K Posts

December 2nd, 2011 11:00

You might also consider creating a new pool and putting a member in it.  Then just move volumes there.
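Roughly, from the CLI that would look something like the following (the pool, member, and volume names are placeholders, and I'm going from memory, so verify the exact syntax against the CLI Reference for your firmware):

GrpName> pool create CriticalPool
GrpName> member select member01 pool CriticalPool
GrpName> volume select SQL-DATA pool CriticalPool

Keep in mind that moving members or volumes between pools relocates data in the background, so it can take a while.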

Either way works!

Regards,

7 Posts

December 2nd, 2011 11:00

We have a policy by which we keep our most critical active and standby data in physically separate cabinets at the datacenter, which means sometimes we want to be sure we know exactly which EQL array that data is sitting on.  Less-critical data can live wherever it best fits.  As we move to a single-group multi-member configuration, I want to be sure we have a mechanism to comply with that policy, which your answer provides perfectly.

The good news is there will be plenty of room on the arrays once the migration is complete so hopefully we won't have too much trouble with the block swap.
