
May 10th, 2017 12:00

ScaleIO Configuration - 5 MDM query

Hi All,

After a successful ScaleIO POC with a partner, the partner raised a few queries about upgrading the present 3-MDM configuration to a 5-MDM configuration. The queries were:

1. Do the new MDMs need to be added as standby MDMs?

2. In a 5-MDM config, one MDM is the primary, two are secondary, and two are tie breakers. If the primary goes down, how is it decided which of the two surviving secondaries takes over as primary, and what role do the tie breakers play in this situation?

3. In a hypothetical scenario where the primary MDM goes down, what happens if there is a split brain between the surviving secondary MDMs and the tie breakers (each tie breaker can see only one secondary)?

The third query raises a question of my own: is this scenario really possible, and if so, has anyone encountered it in their environment?

Thanks in Advance.

73 Posts

May 17th, 2017 08:00

#1: Yes, the new MDM/TB nodes are added as standbys first; the cluster mode can then be switched from 3_node to 5_node using those newly added standby nodes.
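Something like the following rough sketch shows that order of operations, driven from Python. The scli flags used below (--add_standby_mdm, --mdm_role, --switch_cluster_mode, --add_slave_mdm_ip, --add_tb_ip) are my recollection of the ScaleIO 2.x CLI and the IPs are placeholders, so please verify the exact syntax against the user guide for your release before running anything like this.

```python
# Sketch only: grow a 3_node cluster to 5_node by first adding standbys, then
# switching the cluster mode. Flag names are assumptions from the ScaleIO 2.x CLI.
import subprocess

def scli(*args: str) -> None:
    """Run one scli command on the current master MDM; raise if it fails."""
    subprocess.run(["scli", *args], check=True)

# 1. Add the new manager MDM and the new tie breaker as standbys (placeholder IPs).
scli("--add_standby_mdm", "--new_mdm_ip", "10.0.0.14", "--mdm_role", "manager")
scli("--add_standby_mdm", "--new_mdm_ip", "10.0.0.15", "--mdm_role", "tb")

# 2. Switch the cluster from 3_node to 5_node using the standbys added above.
scli("--switch_cluster_mode", "--cluster_mode", "5_node",
     "--add_slave_mdm_ip", "10.0.0.14",
     "--add_tb_ip", "10.0.0.15")
```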

#2: This is the MDM voting process you are asking about. It works the same way in a 3-node or 5-node cluster. Only the MDM actors (not the TBs) can become master; a candidate must have an up-to-date MDM repository and needs at least 3 votes (2 votes in a 3-node cluster) to win. Each MDM and each TB has 1 vote. Each MDM votes for itself, since it connects to itself the quickest. Each TB node votes for the MDM it connects to first. If both TBs connect to the same MDM, that MDM has 3 votes and becomes the master. If the TBs vote for different MDMs, the votes are tied at 2 each and we move to round two of the vote.

In round 2, each MDM gives its vote to the MDM with the lowest available ID, and the MDMs with higher IDs stop connecting to the TB nodes, so the TB votes also land on the lowest-ID MDM. This guarantees that one MDM becomes master.
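To make the vote counting concrete, here is a minimal Python sketch of the two rounds as described above. The names (Mdm, elect_master, tb_first_choice) are mine for illustration only; the real election is internal to the MDM cluster.

```python
from dataclasses import dataclass
from typing import List, Optional

VOTES_NEEDED = 3  # quorum in a 5-node cluster (it would be 2 in a 3-node cluster)

@dataclass(frozen=True)
class Mdm:
    mdm_id: int                    # round 2 prefers the lowest available ID
    has_current_repo: bool = True  # only an MDM with an up-to-date repository can win

def elect_master(mdms: List[Mdm], tb_first_choice: List[Optional[Mdm]]) -> Optional[Mdm]:
    """Run one election among the nodes that can still reach each other.

    mdms            -- surviving manager MDMs in this partition
    tb_first_choice -- for each surviving tie breaker, the MDM it connected to
                       first (its round-1 vote), or None if it reached no MDM
    Returns the elected master, or None if nobody collects VOTES_NEEDED votes.
    """
    candidates = [m for m in mdms if m.has_current_repo]
    if not candidates:
        return None

    # Round 1: every MDM votes for itself; every tie breaker votes for the MDM
    # it connected to first.
    votes = {m: 1 for m in candidates}
    for choice in tb_first_choice:
        if choice in votes:
            votes[choice] += 1
    leader = max(votes, key=votes.get)
    if votes[leader] >= VOTES_NEEDED:
        return leader

    # Round 2 (tie): the MDMs converge on the lowest available ID, and the higher-ID
    # MDMs stop answering the tie breakers, so the TB votes follow to the same node.
    lowest = min(candidates, key=lambda m: m.mdm_id)
    round2_votes = len(candidates) + sum(1 for c in tb_first_choice if c is not None)
    return lowest if round2_votes >= VOTES_NEEDED else None
```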

#3: In this split-brain scenario, where the surviving 4 nodes are split evenly (1 MDM and 1 TB on each side), there will be no master MDM, because each MDM can gather only 2 votes. This is intentional: it prevents two master MDMs from being elected and corrupting data.
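Continuing the sketch above, the healthy failover and the split-brain case would look like this:

```python
mdm_a, mdm_b = Mdm(mdm_id=1), Mdm(mdm_id=2)

# Primary is down but the remaining four nodes all see each other:
# round 1 ties 2-2, round 2 gives all four votes to the lowest ID -> MDM 1 wins.
print(elect_master([mdm_a, mdm_b], [mdm_a, mdm_b]))   # Mdm(mdm_id=1, ...)

# Split brain: each surviving MDM is isolated with one tie breaker, so each side
# runs its own election with only two votes available -- nobody reaches three.
print(elect_master([mdm_a], [mdm_a]))                 # None
print(elect_master([mdm_b], [mdm_b]))                 # None
```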

31 Posts

May 23rd, 2017 04:00

Great stuff!

Thanks a lot for the reply, Rick.

2 Posts

March 22nd, 2019 01:00

Hi experts,

Sorry for jumping into this discussion, and thanks for the very useful information.

I have one question: during installation, the standby MDM is installed on its own dedicated system. After this standby MDM is added into the cluster, will any new packages be installed on it by the gateway (GW) or by some other means?

 

Thanks

Best Regards
