PowerFlex: ESXi SDC Configuration and Troubleshooting

Summary: This article explains troubleshooting and manual installation/removal/configuration of the ESXi SDC.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

SDC on ESXi host unable to communicate with MDM.

During deployment, the SDC may fail to communicate with the MDM.

The error can look something like:

 Error: Failed: Configure SDC driver on ESX (The SDC cannot communicate with the MDM cluster) 

After deployment, the SDC may show as offline to the MDM, or ScaleIO volumes may not be accessible.

Cause

Any of the following can cause issues communicating with the MDM:
  1. Duplicate IP addresses.
  2. Packet loss.
  3. MTU set incorrectly somewhere in the path between SDC and MDM.
  4. Generic network connectivity issues.

Resolution

Installation

  1. Upload (SFTP) the SDC package to the /tmp directory on ESXi.
  2. Install the package by running:
    esxcli software vib install -d <full_path_to_offline_bundle>
    where <full_path_to_offline_bundle> is the full path to the sdc <build>.X-esx<version>.zip file.
    
  3. Reboot the ESXi host.
  4. Generate a GUID for the SDC. This can be done at: http://www.guidgen.com
  5. Configure the SDC module with its GUID and the MDM IP addresses:
    esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<GUID> IoctlMdmIPStr=<List of MDM IPs>"

  6. Load the SDC module:
    esxcli system module load -m scini
    To confirm that the driver loaded, run:
    vmkload_mod -l | grep scini
    
  7. Reboot the ESXi host.
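
The installation steps above can be sketched as one script. This is a minimal sketch: the bundle path, MDM IPs, and ESXi version are placeholders, and the GUID is generated locally instead of at guidgen.com.

```shell
# Minimal sketch of the manual SDC install flow (run on the ESXi host).
# SDC_ZIP and MDM_IPS are placeholders; substitute real values.
SDC_ZIP="/tmp/sdc-<build>.X-esx<version>.zip"   # offline bundle uploaded via SFTP
MDM_IPS="10.0.0.x,10.0.0.y"

# A GUID can also be generated locally instead of via guidgen.com:
GUID=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)

esxcli software vib install -d "$SDC_ZIP"
esxcli system module parameters set -m scini \
    -p "IoctlIniGuidStr=$GUID IoctlMdmIPStr=$MDM_IPS"
esxcli system module load -m scini
vmkload_mod -l | grep scini   # confirm the driver is loaded
reboot                        # reboot to complete the installation
```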

 

Configuration

Initial

  1. Generate a GUID for the SDC.  This can be done at: http://www.guidgen.com
  2. Configure the SDC module with its GUID and the MDM IP addresses. There should be exactly one space between the end of the GUID and the beginning of the IP variable:
    esxcli system module parameters set -m scini -p "IoctlIniGuidStr=GUID_GOES_HERE IoctlMdmIPStr=10.0.0.x,10.0.0.y,10.1.1.x,10.1.1.y"
    
  3. Load the SDC module:
    esxcli system module load -m scini
    
  4. Reboot the ESXi host.

Change/Modify

Updating the IP addresses requires you to set the entire module parameter string again. (Remember to specify both the GUID and the MDM IP addresses.)
  1. Retrieve the current GUID:
    esxcli system module parameters list -m scini | grep IoctlIniGuidStr
    
  2. Configure the SDC module with its GUID and the new list of MDM IP addresses:
    esxcli system module parameters set -m scini -p "IoctlIniGuidStr=GUID_GOES_HERE IoctlMdmIPStr=10.0.0.x,10.0.0.y,10.1.1.x,10.1.1.y"
    
  3. Reboot the ESXi host.
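
Steps 1 and 2 can be combined by extracting just the GUID value from the parameter listing. The column layout of the esxcli output below (Name / Type / Value / Description) is an assumption; verify it on your host before relying on the awk field number.

```shell
# Hypothetical one-line sample of `esxcli system module parameters list -m scini`
# output; the Name / Type / Value / Description columns are an assumption:
sample='IoctlIniGuidStr  string  12345678-abcd-ef01-2345-6789abcdef01  Ini Guid Str'

# Extract just the value column so it can be reused when resetting the parameters:
GUID=$(echo "$sample" | awk '{print $3}')
echo "$GUID"

# On the host, the same idea:
#   GUID=$(esxcli system module parameters list -m scini | awk '/IoctlIniGuidStr/ {print $3}')
```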

Removal

  1. Clear SDC configuration:
    esxcli system module parameters set -m scini -p "" 
    
  2. Retrieve the package name:
    esxcli software vib list | grep sdc
    
  3. Uninstall SDC:
    esxcli software vib remove -n PACKAGE_NAME

     

If the VIB was uninstalled (step 3) without clearing the SDC module parameters (step 1), remove the configuration manually:
  1. Get a list of scini-related parameters:

    cat /etc/vmware/esx.conf | grep scini
    
  2. Edit /etc/vmware/esx.conf and remove all lines returned by the previous step. 
  3. Save the file and exit.
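
The manual cleanup above can be sketched with sed. Taking a backup first lets you restore the file if too much is removed; sed's -i option is available in ESXi's busybox shell, but verifying that on your host is a reasonable precaution.

```shell
# Sketch of the manual cleanup: back up esx.conf, then delete every scini line
# (equivalent to the edit described in steps 1-3 above).
CONF=/etc/vmware/esx.conf
cp "$CONF" "$CONF.bak"
sed -i '/scini/d' "$CONF"
grep scini "$CONF"   # should return nothing
```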
 

Troubleshooting

SDC Module Parameter Retrieval

If the MDM does not recognize the SDC, or there is reason to suspect that the SDC cannot communicate with the MDM, retrieve the configured MDM IPs and GUID by running:
esxcli system module parameters list -m scini

 

ESXi SDC Connectivity Check

The examples below assume two data networks, on the 192.168.152.x and 192.168.154.x subnets.

Find out which vmknic is on which subnet, and its MTU. For example:

[root@server1-101:~] esxcfg-vmknic -l 
Interface Port Group/DVPort/Opaque Network IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type NetStack 
vmk0 sys-mgmt-krnl IPv4 192.168.105.101 255.255.255.0 192.168.105.255 00:xx:xx:xx:xx:xx 9000 65535 true STATIC defaultTcpipStack 
vmk1 sio-data1-krnl IPv4 192.168.152.121 255.255.255.0 192.168.152.255 00:xx:xx:xx:xx:xx 9000 65535 true STATIC defaultTcpipStack 
vmk2 sio-data2-krnl IPv4 192.168.154.141 255.255.255.0 192.168.154.255 00:xx:xx:xx:xx:xx 9000 65535 true STATIC defaultTcpipStack
vmk3 sys-vmotion IPv4 192.168.106.101 255.255.255.0 192.168.106.255 00:xx:xx:xx:xx:xx 9000 65535 true STATIC defaultTcpipStack 

 

Ping the Primary MDM data IPs or Virtual IPs using each vmknic:

If jumbo frames are configured and this test fails, there is an MTU or connectivity problem in the path. The -d flag disables fragmentation, and 8972 bytes is the largest ICMP payload that fits in a 9000-byte MTU (9000 minus 20 bytes of IP header and 8 bytes of ICMP header):
vmkping -d -s 8972 -I vmk1 192.168.152.101 
vmkping -d -s 8972 -I vmk2 192.168.154.101

Try without jumbo frames:
vmkping -I vmk1 192.168.152.101 
vmkping -I vmk2 192.168.154.101

 

Ping the Secondary MDM:

Jumbo:
vmkping -d -s 8972 -I vmk1 192.168.152.102 
vmkping -d -s 8972 -I vmk2 192.168.154.102

 

Normal:
vmkping -I vmk1 192.168.152.102 
vmkping -I vmk2 192.168.154.102

 

Ping the subnet gateways:

Jumbo:
vmkping -d -s 8972 -I vmk1 192.168.152.1
vmkping -d -s 8972 -I vmk2 192.168.154.1

 


Normal:
vmkping -I vmk1 192.168.152.1
vmkping -I vmk2 192.168.154.1
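
The per-vmknic checks above can be run in one loop. This is a sketch: vmkping exists only on ESXi, and the vmknic/target pairs follow this article's example subnets.

```shell
# Sketch: run all the jumbo-frame checks above in one pass.
# Each entry is a "vmknic target-IP" pair from this section's examples.
for pair in \
    "vmk1 192.168.152.101" "vmk1 192.168.152.102" "vmk1 192.168.152.1" \
    "vmk2 192.168.154.101" "vmk2 192.168.154.102" "vmk2 192.168.154.1"
do
    set -- $pair   # split "vmknic ip" into $1 and $2
    vmkping -d -s 8972 -I "$1" "$2" >/dev/null || echo "JUMBO PING FAILED: $1 -> $2"
done
```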

 

Affected Products

PowerFlex rack, VxFlex Product Family
Article Properties
Article Number: 000175073
Article Type: Solution
Last Modified: 22 Jan 2025
Version:  4