PowerFlex Switchover Failed When MDM Not Synchronized
Summary: Network connectivity issues cause the Secondary MDMs' status to become Not synchronized, which prevents an MDM switchover.
Symptoms
Scenario
Persistent MDM disconnections and connectivity issues between the Primary and Secondary MDMs cause the Secondary MDMs' status to become Not synchronized.
Symptoms
A lack of consistent connectivity between the Primary and Secondary MDMs produces the following symptoms (a quick verification sketch follows the list):
Output of scli --query_cluster shows the Secondary MDMs as disconnected.
Output of scli --query_cluster shows the Secondary MDMs as Not synchronized.
The cmatrix output shows that the rebuild is stuck.
SDC <-> SDS connectivity issues (where the SDS resides on the node of the problematic MDM).
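A quick way to confirm these symptoms from the command line is sketched below. It assumes you are already logged in to scli on the current Primary MDM; the data IP and port are the example values from this article (192.168.20.1 and 9011), so substitute your own, and note that nc may not be installed on every node.
# Verification sketch (example addresses from this article; adjust to your environment)
scli --query_cluster                # check each Slave MDM "Status:" line for "Not synchronized"
ping -c 20 192.168.20.1             # look for packet loss or latency spikes toward a Secondary MDM data IP
nc -zv 192.168.20.1 9011            # confirm the MDM port (9011 in this cluster) is reachable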
query_all output:
(info) Notice the Secondary MDMs' status.
Status: Not synchronized
Cluster:
Name: parsio, ID: 06002ca3767a6153, Mode: 5_node, State: Degraded, Active: 5/5, Replicas: 1/3
Virtual IPs: 192.168.20.100
Master MDM:
Name: parsiomanager2, ID: 0x148190e6333dbc12
IPs: 192.168.20.2, Management IPs: 10.8.8.56, Port: 9011, Virtual IP interfaces: eth1
Version: 2.6.10000
Actor ID: 0x1d0ac48b18c6f742, Voter ID: 0x2bb7b8b1353f6a82
Certificate Info:
Subject: /GN=MDM/CN=ScaleIO-10-8-8-56/L=Hopkinton/ST=Massachusetts/C=US/O=EMC/OU=ASD
Issuer: /GN=MDM/CN=ScaleIO-10-8-8-56/L=Hopkinton/ST=Massachusetts/C=US/O=EMC/OU=ASD
Valid From: Apr 2 11:25:03 2019 GMT
Valid To: Mar 31 12:25:03 2029 GMT
Thumbprint: 60:C0:50:38:FC:3D:49:D5:00:8F:9F:CE:4F:27:D2:23:A1:E3:07:AF
Slave MDMs:
Name: parsiomanager1, ID: 0x51ddc23b3cddea90
IPs: 192.168.20.1, Management IPs: 10.8.8.54, Port: 9011, Virtual IP interfaces: eth1
Status: Not synchronized, Version: 2.6.10000
Actor ID: 0x641bc8002b885d00, Voter ID: 0x06f3ab9245ee83d0, Replication State: Synchronization in-progress
Certificate Info:
Subject: /GN=MDM/CN=ScaleIO-10-8-8-54/L=Hopkinton/ST=Massachusetts/C=US/O=EMC/OU=ASD
Issuer: /GN=MDM/CN=ScaleIO-10-8-8-54/L=Hopkinton/ST=Massachusetts/C=US/O=EMC/OU=ASD
Valid From: Apr 2 11:33:08 2019 GMT
Valid To: Mar 31 12:33:08 2029 GMT
Thumbprint: A3:49:37:C3:66:2F:53:05:96:2D:74:10:1F:D2:DF:A4:E7:F5:85:7B
Name: parsiomanager3, ID: 0x3bf30e1a42079e61
IPs: 192.168.20.4, Management IPs: 10.8.8.59, Port: 9011, Virtual IP interfaces: eth1
Status: Not synchronized, Version: 2.6.10000
Actor ID: 0x667027d528ddfcd1, Voter ID: 0x493913d41133d6b1, Replication State: Synchronization in-progress
Certificate Info:
Subject: /GN=MDM/CN=ScaleIO-10-8-8-59/L=Hopkinton/ST=Massachusetts/C=US/O=EMC/OU=ASD
Issuer: /GN=MDM/CN=ScaleIO-10-8-8-59/L=Hopkinton/ST=Massachusetts/C=US/O=EMC/OU=ASD
Valid From: Apr 2 10:17:11 2019 GMT
Valid To: Mar 31 11:17:11 2029 GMT
Thumbprint: E9:10:DD:56:E9:2D:C8:6F:ED:D8:57:75:FF:DF:BB:15:41:FA:C1:32
Tie-Breakers:
Name: parsiotb1, ID: 0x3c5587221ac0cfe4
IPs: 192.168.20.3, Port: 9011
Status: Normal, Version: 2.6.10000
Voter ID: 0x683fd39d2af27284
Name: parsiotb2, ID: 0x20c2b3f30969f533
IPs: 192.168.20.5, Port: 9011
Status: Normal, Version: 2.6.10000
MDM events:
(info) Notice the multiple disconnections of the MDMs and of the SDCs from the SDSs; examples are below, and a one-liner to count them per MDM follows the excerpt.
MDM, ID 51ddc23b3cddea90, lost connection.
MDM, ID 3bf30e1a42079e61, is not responding.
SDC ID: 9503ac9800000003 disconnected from IP 192.168.20.2 of SDS parsioesx02.ansys.com-ESX; ID: 10d84abe00000002
4856 2020-12-01 12:19:38.735 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 51ddc23b3cddea90, lost connection
4857 2020-12-01 12:19:38.204 MDM_CLUSTER_CONNECTED INFO The MDM, ID 51ddc23b3cddea90, connected
4858 2020-12-01 12:19:43.397 MDM_CLUSTER_NOT_RESPOND WARNING The MDM, ID 3bf30e1a42079e61, is not responding
4859 2020-12-01 12:19:44.824 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 3bf30e1a42079e61, lost connection
4860 2020-12-01 12:19:44.203 MDM_CLUSTER_CONNECTED INFO The MDM, ID 3bf30e1a42079e61, connected
4861 2020-12-01 12:19:44.569 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 51ddc23b3cddea90, lost connection
4862 2020-12-01 12:19:44.701 MDM_CLUSTER_CONNECTED INFO The MDM, ID 51ddc23b3cddea90, connected
4863 2020-12-01 12:19:45.276 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 51ddc23b3cddea90, lost connection
4864 2020-12-01 12:19:45.397 MDM_CLUSTER_CONNECTED INFO The MDM, ID 51ddc23b3cddea90, connected
4865 2020-12-01 12:19:48.480 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 3bf30e1a42079e61, lost connection
4866 2020-12-01 12:19:48.601 MDM_CLUSTER_CONNECTED INFO The MDM, ID 3bf30e1a42079e61, connected
4867 2020-12-01 12:19:49.431 SDC_DISCONNECTED_FROM_SDS_IP WARNING SDC ID: 9503ac9800000003 disconnected from IP 192.168.20.2 of SDS parsioesx02.ansys.com-ESX; ID: 10d84abe00000002
4868 2020-12-01 12:19:50.377 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 51ddc23b3cddea90, lost connection
4869 2020-12-01 12:19:50.403 SDC_CONNECTED_TO_SDS_IP INFO SDC ID: 9503ac9800000003 is now connected to IP 192.168.20.2 of SDS parsioesx02.ansys.com-ESX; ID: 10d84abe00000002
4870 2020-12-01 12:19:50.498 MDM_CLUSTER_CONNECTED INFO The MDM, ID 51ddc23b3cddea90, connected
4871 2020-12-01 12:19:51.183 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 3bf30e1a42079e61, lost connection
4872 2020-12-01 12:19:51.304 MDM_CLUSTER_CONNECTED INFO The MDM, ID 3bf30e1a42079e61, connected
4873 2020-12-01 12:19:53.669 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 51ddc23b3cddea90, lost connection
4874 2020-12-01 12:19:53.800 MDM_CLUSTER_CONNECTED INFO The MDM, ID 51ddc23b3cddea90, connected
4875 2020-12-01 12:19:54.833 SDC_DISCONNECTED_FROM_SDS_IP WARNING SDC ID: 9503ac9800000003 disconnected from IP 192.168.20.2 of SDS parsioesx02.ansys.com-ESX; ID: 10d84abe00000002
4876 2020-12-01 12:19:56.858 SDC_CONNECTED_TO_SDS_IP INFO SDC ID: 9503ac9800000003 is now connected to IP 192.168.20.2 of SDS parsioesx02.ansys.com-ESX; ID: 10d84abe00000002
4877 2020-12-01 12:19:56.867 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 3bf30e1a42079e61, lost connection
4878 2020-12-01 12:19:56.197 MDM_CLUSTER_CONNECTED INFO The MDM, ID 3bf30e1a42079e61, connected
4879 2020-12-01 12:19:56.873 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 51ddc23b3cddea90, lost connection
4880 2020-12-01 12:19:57.426 MDM_CLUSTER_CONNECTED INFO The MDM, ID 51ddc23b3cddea90, connected
4881 2020-12-01 12:19:58.284 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 3bf30e1a42079e61, lost connection
4882 2020-12-01 12:19:58.405 MDM_CLUSTER_CONNECTED INFO The MDM, ID 3bf30e1a42079e61, connected
4883 2020-12-01 12:19:59.995 SDC_DISCONNECTED_FROM_SDS_IP WARNING SDC ID: 9503ac9800000003 disconnected from IP 192.168.20.2 of SDS parsioesx02.ansys.com-ESX; ID: 10d84abe00000002
4884 2020-12-01 12:20:00.997 SDC_CONNECTED_TO_SDS_IP INFO SDC ID: 9503ac9800000003 is now connected to IP 192.168.20.2 of SDS parsioesx02.ansys.com-ESX; ID: 10d84abe00000002
4885 2020-12-01 12:20:01.372 MDM_CLUSTER_LOST_CONNECTION WARNING The MDM, ID 51ddc23b3cddea90, lost connection
4886 2020-12-01 12:20:01.503 MDM_CLUSTER_CONNECTED INFO The MDM, ID 51ddc23b3cddea90, connected
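To quantify the flapping shown above, the disconnection events can be counted per MDM ID with a short one-liner. This is only a sketch: events.log is a placeholder for whatever file the events were exported to, and the field positions assume the event format shown in this excerpt.
# Count MDM_CLUSTER_LOST_CONNECTION events per MDM ID (events.log is a placeholder path)
grep MDM_CLUSTER_LOST_CONNECTION events.log | awk '{gsub(",", "", $9); print $9}' | sort | uniq -c | sort -rn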
MDM trc.x:
(info) Notice that multiple combs go into a DEGRADED state and start rebuilding; examples are below, and a sketch for counting these transitions follows the excerpt.
Multi-Head: f7c90002 Row: 1022 DEGRADED->DEGRADED (INITIATE_MIGRATE)
01/12 12:21:31.587572 0x7f24a0ad6eb0:multiHeadRow_MoveState_Inner:02966: [multiHead_HandleMigrate:892]: MultiHead: f7c90002 Row: 1022 DEGRADED->DEGRADED (INITIATE_MIGRATE)
01/12 12:21:31.587578 0x7f24a0ad6eb0:multiHeadRow_MoveState_Inner:02966: [multiHead_HandleMigrate:892]: MultiHead: f7c90002 Row: 380 DEGRADED->DEGRADED (INITIATE_MIGRATE)
...
01/12 12:21:31.590164 0x7f24a064deb0:mdmTgtMsg_SendAsyncAddSingleCombEX:04000: TgtId: 10d871cb00000000 CombId: 7be4000081e9 CombState: SECONDARY raid: [tgtId: 10d84abe00000002, state: 0x1, type: SECONDARY] primaryTgtGenNum: 121 mdmTgtConnectionGenNum: 7311 tgtCombCmdGenNum: 1
01/12 12:21:31.590407 0x7f24a0ae8eb0:mdmTgtMsg_SendAsyncStartMigrate:04605: TgtId: 10d84abe00000002 CombId: 7be4000081e9 MigrateTo:10d871cb00000000 primaryTgtGenNum: 121 tgtCombCmdGenNum: 461 mdmTgtConnectionGenNum: 7346 migrateNum: 246 isFwdRebuild: 1
01/12 12:21:31.590552 0x7f24a0ae8eb0:multiHeadRow_MoveState_Inner:02966: [multiHead_HandleMigrate:892]: MultiHead: f7c80001 Row: 489 DEGRADED->DEGRADED (INITIATE_MIGRATE)
...
01/12 12:21:31.592950 0x7f24a0833eb0:mdmTgtMsg_SendAsyncAddSingleCombEX:04000: TgtId: 10d8bfeb00000004 CombId: 7be38000034a CombState: SECONDARY raid: [tgtId: 10d84abe00000002, state: 0x1, type: SECONDARY] primaryTgtGenNum: 128 mdmTgtConnectionGenNum: 4958 tgtCombCmdGenNum: 1
01/12 12:21:31.592958 0x7f24a0833eb0:mdmTgtMsg_SendAsyncAbortMigrate:04674: TgtId: 10d84abe00000002 CombId: 7be3800001b8 primaryTgtGenNum: 70 tgtCombCmdGenNum: 592 mdmTgtConnectionGenNum: 7346
01/12 12:21:31.592971 0x7f24a0833eb0:mdmTgtMsg_SendAsyncAbortMigrate:04674: TgtId: 10d84abe00000002 CombId: 7be3800000eb primaryTgtGenNum: 152 tgtCombCmdGenNum: 623 mdmTgtConnectionGenNum: 7346
01/12 12:21:31.593121 0x7f24a0ae8eb0:mdmTgtMsg_SendAsyncFreeComb:04408: TgtId: 10d871cb00000000 CombId: 7be3800001b8 mdmTgtConnectionGenNum: 7311
01/12 12:21:31.593138 0x7f24a0ae8eb0:mdmTgtMsg_SendAsyncFreeComb:04408: TgtId: 10d898de00000001 CombId: 7be3800000eb mdmTgtConnectionGenNum: 7321
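A similar sketch can be run against the MDM trace to see how often combs re-enter the migrate path. The trace file name and location are assumptions; point the commands at wherever the trc.x files were collected.
# Count DEGRADED->DEGRADED migrate initiations and show the latest migrate/abort activity (trc.0 path is an assumption)
grep -c 'DEGRADED->DEGRADED (INITIATE_MIGRATE)' trc.0
grep -E 'SendAsyncStartMigrate|SendAsyncAbortMigrate' trc.0 | tail -20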
Cmatrix output:
(info) Notice that the rebuild is stopped, which is why it appears stuck in the UI; see the example below and the CLI check that follows it.
Policy=REBUILD_STOPPED, issue=MULTIPLE, coolingOff=FALSE, bypass=FALSE
--------------------------------------------------------------------------
cmatrix status dump (FdID=8ba78f9f00000000, 01/12 11:00:54.998786)
policy=REBUILD_STOPPED, issue=MULTIPLE, coolingOff=FALSE, bypass=FALSE
nMaxRows=032, nActiveRows=005, nKnownTgts=005
matrixGen=495, nCycles=2377, duration [ms]: average<1, max=0
matrix memory foot-print is 17344 [bytes]
row/ column ownership:
i=000 :: tgtId=10d871cb00000000 (fsId=10d871cb00000000)
i=001 :: tgtId=10d898de00000001 (fsId=10d898de00000001)
i=002 :: tgtId=10d84abe00000002 (fsId=10d84abe00000002)
i=003 :: tgtId=10d898ee00000003 (fsId=10d898ee00000003)
i=004 :: tgtId=10d8bfeb00000004 (fsId=10d8bfeb00000004)
cells:
I+D++
+I+++
+DI++
+++I+
++++I
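To confirm from the CLI that the rebuild is not progressing, the rebuild and degraded-capacity counters reported by scli can be sampled over time. This is a sketch only; the exact wording of the counters varies between versions, so the grep pattern may need adjusting.
# Sample rebuild/degraded counters; if the values do not change between runs, the rebuild is stuck
scli --query_all | grep -i -E 'rebuild|degraded'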
Impact
The MDM cluster is not synchronized, and an MDM switchover is impossible in this state.
Repeated loss of connectivity causes disruptions in serviceability.
The rebuild is stuck, and the system is exposed to a potential single point of failure that could lead to data unavailability (DU).
IO errors might be seen on the SDCs.
Cause
When the Master MDM must change the state of data blocks, it writes these state changes to the MDM repository file and then synchronizes them to the Slave MDMs. Only when those write operations are complete does the MDM notify the SDSs that the changes are finalized and that they can resume serving write IOs from the SDCs using the primary copy only (until the rebuild is completed).
When the Primary MDM cannot update the Secondary MDMs, it cannot respond quickly enough to the SDSs' requests, which may cause IO errors on the SDCs.
Because an MDM switchover is impossible while the cluster members are not synchronized, and the issue appears to be on the Primary MDM, another MDM must be added to the system in a different location that is not affected by the connectivity issue. That new MDM replaces one of the Secondaries, the cluster is then switched over to it, and in this way the problematic Primary MDM is removed from the cluster.
Resolution
Workaround
In an SVM environment, the Gateway (GW) VM can be used to install the new MDM for the switchover, provided that it does not reside on the local drive of the ESXi server and can be vMotioned to a different ESXi host that does not have the issue. Alternatively, install an MDM on any other VM for this workaround.
In a non-SVM environment, a VM or a non-MDM node can be used to install the new MDM for the switchover. A sketch of the scli commands follows.
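The sequence below sketches the scli side of the workaround once the MDM package is installed on the chosen VM or node. The MDM name newmdm1 and the placeholder IP are hypothetical, parsiomanager1 is the Secondary from the example above, and the exact parameter names can differ between versions, so verify each command with scli --help before running it.
# Workaround sketch (verify parameters against your version's scli --help; names and IP are examples)
scli --add_standby_mdm --new_mdm_ip <new-node-data-IP> --mdm_role manager --new_mdm_name newmdm1
scli --replace_cluster_mdm --add_slave_mdm_name newmdm1 --remove_slave_mdm_name parsiomanager1
scli --query_cluster        # wait until newmdm1 reports Status: Normal before continuing
scli --switch_mdm_ownership --new_master_mdm_name newmdm1
After the switchover, the formerly problematic Primary MDM can be removed from the cluster and its network connectivity investigated.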
Impacted Versions
N/A; the behavior is caused by network issues rather than a specific product version.
Fixed In Version
N/A