October 31st, 2016 05:00

Windows Failover Cluster Manager not showing CSVs in the C:\ClusterStorage folder

I have added a new node to an existing ScaleIO environment, but it doesn't seem to pick up the CSVs in the ClusterStorage folder when the node is added to the cluster.

When I run the --query_all_sdc command, I can see that the node is there as an SDC, but there are no IOPS on it, unlike the others.

Also, when running the drv_cfg --query_mdms command on the SDC, I get the following:

MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981

IPs [0]-10.10.10.14

MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981

IPs [0]-10.10.11.14 [1]-10.10.10.15 [2]-10.10.11.15

How do I get them onto one line, as with the other nodes? I added the second line of IPs later, because when I initially added the first IP I had put a space after the comma between the IP addresses.

Is there a way to unmap volumes to SDCs also?

Thanks,

John

33 Posts

October 31st, 2016 14:00

Regarding the query: "How do I get them onto one line, as with the other nodes? I added the second line of IPs later, because when I initially added the first IP I had put a space after the comma between the IP addresses."

If the CLI and the MDM do not reside on the same server, add the --mdm_ip parameter to all CLI commands. In a non-clustered environment, use the MDM IP address. In a clustered environment, use the IP addresses of the master and slave MDMs, separated by a comma. For example:

scli --mdm_ip 10.10.10.3,10.10.10.4 --login --username supervisor1
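Note that the comma-separated list must not contain any spaces. As a minimal sketch (assuming the same MDM IPs as above and that the login above has already succeeded), the same parameter is added to subsequent commands, for example:

scli --mdm_ip 10.10.10.3,10.10.10.4 --query_all_sdc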


Regarding the query: "When I run the --query_all_sdc command, I can see that the node is there as an SDC, but there are no IOPS on it, unlike the others."

Please verify the SDC <=> MDM connection:

# /opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms

Then SSH to the primary MDM and run "scli --query_all_sdc" to make sure the SDC is not in the Disconnected state.
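As a further check, the primary MDM can also report on a single SDC by IP. This is only a sketch and the exact flags may vary by version; 10.10.10.20 below is a placeholder for the new node's SDC IP:

scli --mdm_ip 10.10.10.3,10.10.10.4 --query_sdc --sdc_ip 10.10.10.20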

33 Posts

October 31st, 2016 14:00

Regarding the query "Is there a way to unmap volumes": please make sure the SDCs are not disconnected; if they are, that is a separate process, but yes, it can be done.

https://support.emc.com/kb/484801

Unmapping volumes from SDCs

Unmap the volume from all the SDCs:

scli --mdm_ip 10.X.X.11 --unmap_volume_from_sdc --volume_name vol_1 --all_sdcs

Volume will not be accessible to the SDC. Press 'y' to confirm.

Successfully un-mapped volume vol_1 from all SDC nodes

From the GUI, go to Frontend > Volumes > Unmap Volume.

Taken from the ScaleIO 2.0 User Guide:

https://support.emc.com/docu67392_ScaleIO-2.0-User-Guide.pdf?language=en_US

To unmap volumes, perform these steps:

1. In the Frontend > Volumes view, navigate to the volumes, and select them.

2. From the Command menu or context-sensitive menu, select Unmap Volumes. The Unmap Volumes window is displayed, showing a list of the volumes that will be unmapped.

3. If you want to exclude some SDCs from the unmap operation, in the Select Nodes panel, select one or more SDCs for which you want to retain mapping. You can use the search box to find SDCs.

4. Click Unmap Volumes. The progress of the operation is displayed at the bottom of the window. It is recommended to keep the window open until the operation is completed, and until you can see the result of the operation.

Via the CLI:

Example (unmap a volume from a single SDC):

scli --mdm_ip 192.168.1.200 --unmap_volume_from_sdc --volume_name vol_1 --sdc_ip 192.168.1.3

Example (unmap a volume from all SDCs):

scli --mdm_ip 192.168.1.200 --unmap_volume_from_sdc --volume_name vol_1 --all_sdcs

Taken from the ScaleIO 2.0 User Guide:

https://support.emc.com/docu67392_ScaleIO-2.0-User-Guide.pdf?language=en_US
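Since you mentioned you want to remove the mapping and add it again, note that mapping the volume back is done with --map_volume_to_sdc. The line below is only a sketch reusing the placeholder MDM IP, volume name, and SDC IP from the examples above; verify the exact syntax in the user guide for your version:

scli --mdm_ip 192.168.1.200 --map_volume_to_sdc --volume_name vol_1 --sdc_ip 192.168.1.3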

What ScaleIO version are you on?

16 Posts

November 1st, 2016 02:00

Thanks dstratton.  It is version 1.31.1277.3.

  1. The reason I asked about unmapping the volume from one SDC is that I thought the issue was related to how I set it up, so I wanted to remove it and add it again.
  2. Regarding the drv_cfg --query_mdms command: the environment is clustered, and I ran this command on the newly added SDC that I wanted to add to ScaleIO.  The issue is that the first time I ran the command to add the MDM (primary and slave IPs) to the SDC, I placed a space after the comma used to delimit the IP addresses, so the slave and the secondary IP for the primary were not picked up.  I therefore ran the command again to add the other 3 IP addresses, but when querying, they show up on a separate line, as in the example in the original post.
    1. The first time, I used the command: scli --mdm_ip 10.10.10.3, 10.10.10.4, 10.10.10.5, 10.10.10.6 --login --username
  3. Also, the GUI is showing 9 SDCs but 10 defined.  Where can I check which one is missing?  There should be 10 in the cluster.
  4. Regarding the SDC with no IOPS: when running "scli --query_all_sdc" I can see that there is one disconnected, but that node is still getting IOPS.  However, the new node I added is "connected" but has no IOPS.
    1. How do I check why it is disconnected and how do I enable it?
    2. How do I get the new node working with IOPS?
  5. When adding this server into Failover Cluster Manager, "C:\ClusterStorage" is not showing the volumes and folders that the other nodes show when browsing to that directory.  Shouldn't all nodes that have been added to the cluster present the same volumes and the same subfolders?  (See the comparison sketch after this list.)
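Regarding point 5, a quick way to compare what the cluster itself has mounted against what the new node shows on disk is to list the cluster shared volumes and then browse C:\ClusterStorage on the new node. This is just a sketch using the standard FailoverClusters PowerShell cmdlets, run from the new node:

PS C:\> Get-ClusterSharedVolume | Format-Table Name,State,OwnerNode
PS C:\> dir C:\ClusterStorage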

Thanks for all your help...

John

16 Posts

November 1st, 2016 03:00


I have just seen this on the node that is shown as disconnected.

C:\Program Files\emc\scaleio\sdc\bin>drv_cfg.exe --query_mdms

Failed to open \\?\root#scsiadapter#0000#{cc9ba7b0-6d22-4016-81c5-3369f0a163c4}.

Code 0x5

Failed to open kernel device

Have you ever seen this before and what do you suggest?
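Code 0x5 looks like the standard Windows access-denied error, so as a first guess on my side (just my assumption, not something I have confirmed anywhere) I will re-run the command from an elevated prompt and check that the scini driver service is actually running:

C:\>sc query scini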

Thanks for all your help...

John

16 Posts

November 2nd, 2016 06:00

The problem has been sorted.  The issue with adding this new node to the cluster was that we have 2 MDMs, and each MDM has 2 network interfaces, i.e. 4 IP addresses in total.

When I used the drv_cfg command to add the MDM IPs, I included a space, so only 1 IP was added to the SDC.  I later added the remaining 3 IP addresses, but these appeared as another MDM, even though all the GUIDs are exactly the same, so the server sees 2 MDMs.  This means that when adding the node to Failover Cluster Manager and running through cluster validation, the report identified 2 identical storage devices and therefore failed.

Although the server was added to the cluster, no VMs could be migrated to it, as the CSVs were not showing the VM files and folders.

To resolve this, I had to go into the registry where the MDM is specified and modify the key data so that it is on 1 line rather than 2 lines.

So:

MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981

IPs [0]-10.10.10.14,10.10.11.14 [1]-10.10.10.15 [2]-10.10.11.15

Instead of:

MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981

IPs [0]-10.10.10.14

MDM-ID bd4cXXXXXXX0e23 SDC ID cf0XXXXX00000005 INSTALLATION ID a47fbXXXXXX1981

IPs [0]-10.10.11.14 [1]-10.10.10.15 [2]-10.10.11.15

The registry key is located here:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\scini\Parameters\mdms
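To see what is currently stored there before editing, you can dump it from an elevated prompt. This is only a sketch using the built-in reg tool; whether mdms is a value under Parameters or its own subkey may differ between SDC versions, so adjust accordingly:

C:\>reg query HKLM\SYSTEM\CurrentControlSet\Services\scini\Parameters /v mdms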

Then reboot the server.

This happened because there wasn't an obvious way to remove the single IP and then re-add the IPs, but editing the registry worked, and the cluster now allows live migrations to the new node.
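If you want to double-check a fix like this, the storage portion of cluster validation can be re-run on its own. This is only a rough sketch with the standard Test-Cluster cmdlet; the node names are placeholders, and storage tests can briefly take disks offline, so run them in a maintenance window:

PS C:\> Test-Cluster -Node node1,node2 -Include "Storage"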

Thanks for all your input, leading me to the solution.

John
