ScaleIO: Troubleshooting MDM_Disconnect errors

Summary: Primary Metadata Manager (MDM) ownership frequently moves between the MDM servers in the cluster.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

The following event appears when using the showevents.py tool:

6956  2017-07-06 18:21:05.803 MDM_CLUSTER_LOST_CONNECTION WARNING        The MDM, ID 27fea9a11c073e82, lost connection
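
To check how often the event recurs, the showevents.py output can be filtered with grep. A minimal sketch, assuming showevents.py is installed under /opt/emc/scaleio/mdm/diag (the path varies by ScaleIO version):

# Path is an assumption; adjust to the install location of showevents.py
python /opt/emc/scaleio/mdm/diag/showevents.py | grep MDM_CLUSTER_LOST_CONNECTION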

The following appears in the trc logs of the secondary MDM server:

06/07 18:21:05.486947 0x7ffbc89feeb0:netPath_IsKaNeeded:01858:  :: Connected Live CLIENT path 0x7ffb9400a060 of portal 0x7ffb94003780 net 0x7ffbac0044b0 socket 17 inflights 0 didn't receive message for 3 iterations from 10.xxx.xxx.xxx:9011. Marking as down
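
To locate these entries across all trace files, grep the MDM log directory. A minimal sketch, assuming the trc logs live under /opt/emc/scaleio/mdm/logs (the path varies by version):

# Path is an assumption; adjust to where the MDM trc logs are stored
grep "Marking as down" /opt/emc/scaleio/mdm/logs/trc.*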


Cause

MDM disconnects typically occur when the secondary MDMs or the Tie-Breaker (TB) have not received a keepalive within the 500-millisecond timeout period.

Resolution

Check the Network Interface Cards (NICs) on the MDM and TB servers for dropped packets:

[root@scaleio-1 ~]# ifconfig ens192
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.xxx.xxx.xxx  netmask 255.xxx.xxx.0  broadcast 10.xxx.xxx.xxx
        inet6 fe80::250:56ff:feb7:2a06  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:b7:2a:06  txqueuelen 1000  (Ethernet)
        RX packets 311779767  bytes 53460032583 (49.7 GiB)
        RX errors 0  dropped 41  overruns 0  frame 0
        TX packets 312147963  bytes 45970694962 (42.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
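
In this example, the RX counters show 41 dropped packets on ens192. To confirm whether the counter is still growing, sample the kernel statistics twice. A minimal sketch, assuming a Linux host and the interface name ens192 from the output above:

# Interface name is an example; substitute the NIC used for MDM traffic
IFACE=ens192
before=$(cat /sys/class/net/$IFACE/statistics/rx_dropped)
sleep 60
after=$(cat /sys/class/net/$IFACE/statistics/rx_dropped)
echo "rx_dropped grew by $((after - before)) in 60 seconds"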

In addition, check the latency of the connections between the MDM nodes and the TB using the ping command:

[root@scaleio-1 ~]# ping 10.xxx.xxx.xxx
PING 10.xxx.xxx.xxx (10.xxx.xxx.xxx) 56(84) bytes of data.
64 bytes from 10.xxx.xxx.xxx: icmp_seq=1 ttl=64 time=0.414 ms
64 bytes from 10.xxx.xxx.xxx: icmp_seq=2 ttl=64 time=0.395 ms
64 bytes from 10.xxx.xxx.xxx: icmp_seq=3 ttl=64 time=0.370 ms
64 bytes from 10.xxx.xxx.xxx: icmp_seq=4 ttl=64 time=0.399 ms
64 bytes from 10.xxx.xxx.xxx: icmp_seq=5 ttl=64 time=0.497 ms
64 bytes from 10.xxx.xxx.xxx: icmp_seq=6 ttl=64 time=0.534 ms

If the latency fluctuates or approaches the 500 ms keepalive timeout, the network is a likely cause of the disconnects.
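
For a longer sample than a handful of probes, ping can be run with a fixed count and interval. A minimal sketch (100 probes at 0.2-second intervals, with tail showing only the statistics summary):

[root@scaleio-1 ~]# ping -c 100 -i 0.2 10.xxx.xxx.xxx | tail -2

The summary reports min/avg/max/mdev; a large max or mdev relative to the average indicates latency spikes that a short sample can miss.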

There are also non-network causes for MDM disconnects. If the MDM process hangs or does not receive adequate CPU time, it cannot send keepalives in a timely manner. Check the system's CPU utilization with the top command, as in the sketch below.
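
A minimal sketch for spotting CPU starvation; the MDM process name shown here ("mdm") is an assumption, so verify it first with pgrep:

# Process name "mdm" is an assumption; confirm with: pgrep -l mdm
top -b -n 1 | head -15
ps -eo pid,comm,%cpu --sort=-%cpu | head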

On VMware systems, the Virtual Machine (VM) might not receive sufficient resources if the host is oversubscribed. You can check whether this is the case by examining the CPU ready time for the VM, as in the sketch below.
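
A minimal sketch using esxtop on the ESXi host (assumes shell access to the host; the %RDY column reports CPU ready time, and a common rule of thumb is that sustained values above roughly 5% per vCPU indicate contention):

# Run on the ESXi host, not inside the guest
esxtop
# Press 'c' for the CPU panel and read the %RDY column for the ScaleIO VM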

Affected Products

VxFlex Product Family

Products

PowerFlex Software, VxFlex Product Family

Article Properties
Article Number: 000064168
Article Type: Solution
Last Modified: 20 May 2025
Version:  3