
January 25th, 2018 00:00

Multi-member group pool split up or bind volume to member

Hello

I've got two PS4100 arrays on one site. There is a default pool with two thin LUNs (12 TB and 4 TB) on it. Now I want to create a separate pool on each member and move one LUN to each, so that if one array goes down I don't have both LUNs unavailable. I also found a topic about pinning volumes to members without affecting the pool. So here's my question: which solution is more suitable? And how will the hosts react if one array goes down when there is a common pool for both volumes, but they are pinned to separate members?


January 26th, 2018 01:00

Hello,

I replied to this earlier, but it doesn't seem to have been posted.

As long as you have enough free space in the pool to hold the volume data, you can remove one member from the pool, put it into a new pool, and then move volumes to that new pool. It will take some time: first the member's data must be evacuated to the remaining member, then moved again to the new pool.
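
Roughly, the Group Manager CLI steps look like the sketch below. This is from memory, and the names ("pool2", "member2", "vol-4TB") are just placeholders, so verify the exact syntax against the Group Manager CLI Reference for your firmware release before running anything (the same operations are available in the Group Manager GUI):

    pool create pool2
    member select member2 pool pool2     (evacuates member2's data, then moves the member)
    volume select vol-4TB pool pool2     (relocates the volume into the new pool)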

You can BIND a volume to a specific member. The downside is that you then have to manually balance the two members; otherwise one could become busy while the other sits idle. If you only have the two volumes this isn't as big a concern, but during peak times each volume will be limited to that one member's IO capacity.
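
If you go the BIND route, it is a CLI-only operation as far as I recall. A sketch with placeholder names ("vol-12TB", "memberA") would be something like this; again, double-check the command form in the CLI Reference for your firmware first:

    volume select vol-12TB bind memberA
    volume select vol-12TB show          (confirm where the volume now lives)
    volume select vol-12TB unbind        (reverses the bind if you change your mind)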

The arrays are highly redundant. It's pretty rare to have a complete failure and data loss. Keeping up with EQL array and drive firmware is very important as well. The most common cases where RAID fails are arrays left unmonitored and unmaintained. Hopefully you are running RAID-6 or RAID-10, not the less redundant RAID-5/50?

 Regards,

Don


January 26th, 2018 15:00

Hello

Thanks for your answer. I'm using RAID 6, but the firmware is unfortunately not so fresh (7.1.5).
Getting back to my considerations: if I bind a volume to a member (with one default pool across both members), will this volume stay online if the other member goes down? I'm 99.999% sure it will (pool configuration is transparent to the hosts), but I'm just looking for confirmation. And on the other hand, if I had a large datastore composed of two volumes, either bound to separate members or spread across members in the default pool, would the whole datastore go down with one member failure?


January 27th, 2018 16:00

Hello,

Yes, if you BIND a volume to MEMBER A and MEMBER B goes down, all volumes on MEMBER A will remain online with no interruption. Any volumes spread across both MEMBER A and MEMBER B will go offline until MEMBER B is brought back online. Data in a multi-member pool is striped, not mirrored.

 Regards,

Don

August 17th, 2019 11:00

Hi Don,

In relation to the conversation here, I was looking for potential MPIO solutions for my 1G fabric setup, which comprises 6x ESX hosts and 2x PS6210 and 1x PS4100 members.

As per our conversation on the ESX/MEM thread, this implementation is constrained to 1G due to the lowest common denominator, which is the PS4100 member. Everything else is 10G ready.

The issue is I can't feasibly break the group and discard the PS4100 array.

Is it possible to attach the high-IO-utilisation volumes to the PS6210 members and implement 10G connectivity for them? If I could cut the low-IO data loose and attach it to the PS4100, I'd be happy to take my chances with that, but I need to preserve the IO and integrity of the higher-utilisation volumes.

As an aside, the vSphere datastores are all VVols, so they are not monolithic volumes from an EQL perspective.

Thanks,

Greg


August 17th, 2019 15:00

Hello Greg, 

The problem is going to be 10GbE hosts connecting to a GbE array. That's usually not a good thing, as a 10GbE server can overrun the array. Usually when you split it like that, you have a mix of GbE and 10GbE servers.

 What firmware are you running on the arrays?   Hopefully current? 

For VVols, especially in multi-member groups, did you set the NO-OP timeout in addition to the login_timeout and disabling delayed ACK?
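
For reference, on current ESXi builds these can be set per iSCSI adapter with esxcli, roughly as sketched below. The adapter name (vmhba64) is a placeholder, and the values shown (login timeout 60 s, NO-OP timeout 30 s, delayed ACK off) are the commonly recommended ones from memory; confirm the exact parameter key names with "esxcli iscsi adapter param get" and against Dell's current EQL/VMware best-practice guide:

    esxcli iscsi adapter param get --adapter=vmhba64
    esxcli iscsi adapter param set --adapter=vmhba64 --key=LoginTimeout   --value=60
    esxcli iscsi adapter param set --adapter=vmhba64 --key=NoopOutTimeout --value=30
    esxcli iscsi adapter param set --adapter=vmhba64 --key=DelayedAck     --value=false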

First thing I would look at is SANHQ. How hard are you hitting these arrays? If you are well within what the GbE network can deliver, then going to 10GbE isn't going to yield greater performance.

You can't bind Storage Containers or individual VVols to a member, so you would have to move the 4100 into its own pool.
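
If it helps, splitting the 4100 out is the same two-step sketch as earlier in this thread (placeholder names again, and verify the syntax against the CLI Reference first):

    pool create pool-4100
    member select ps4100-member pool pool-4100

Just make sure the PS6210 pool has enough free space to absorb the data evacuated off the 4100, then move the low-IO volumes onto the new pool afterwards.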

 Regards, 

Don 

