
September 23rd, 2015 06:00

Shutdown Single SVM for Maintenance

I have installed ScaleIO 1.32 on top of VMware using the ScaleIO VMware plugin. Since the ScaleIO VMs (SVMs) use the hosts' internal disks as RDMs for the SDS role, VMware can't migrate an SVM anywhere and so fails to put its host into maintenance mode. What's the proper procedure for shutting down a single ESX host to perform maintenance on it (add more disks, memory, etc.)? Do I just log into the Linux OS on the SVM and do a normal Linux shutdown before powering off the ESX host? Or is there a graceful ScaleIO shutdown of the SDS I need to do first?

I haven't been able to find any admin guides that cover maintenance on a VMware install, but please point me in the right direction if there is one.

Thanks!

34 Posts

December 10th, 2015 14:00

The best way to do this is to use the inactivate_protection_domain command from the SCLI to shut down nodes for maintenance. You'll find full details on page 224 of the ScaleIO 1.32.2 User Guide.
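For reference, a rough sketch of what that looks like from the MDM. The flag names are from memory of the 1.32 SCLI and the protection domain name is a placeholder, so check the guide for the exact syntax in your release:

# Log in to the MDM, then deactivate the protection domain before maintenance
scli --login --username admin
scli --inactivate_protection_domain --protection_domain_name pd1

# ...perform the maintenance, then bring the protection domain back...
scli --activate_protection_domain --protection_domain_name pd1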

51 Posts

December 11th, 2015 12:00

While you've provided the correct way to shut down an entire ScaleIO cluster, the question was regarding shutting off a single ESX host. 

See my reply for that procedure.

51 Posts

December 11th, 2015 12:00

Jason,

If the maintenance is planned ahead of time, the best practice is to remove the SDS gracefully via the GUI or SCLI. This drains the data from the SVM's disks in an orderly fashion (moving it to unused capacity elsewhere in the SDS cluster) and then removes the SDS from the cluster. This of course requires enough unused capacity in the cluster to absorb an entire SDS.
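A rough sketch of the SCLI side of that graceful removal (the SDS name is a placeholder and the flag names are from memory, so verify against your version's guide; the GUI accomplishes the same thing):

# From the primary MDM: gracefully remove the SDS.
# ScaleIO migrates its data to the rest of the pool before completing the removal.
scli --login --username admin
scli --remove_sds --sds_name sds_esx01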

Once the SDS is fully removed from the ScaleIO cluster, the SVM can be shut down and the ESX node should have no problems entering maintenance mode.
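To be safe, you can confirm the removal and make sure no rebuild/rebalance is still running before touching the host. A hedged sketch, with command names that may vary slightly by release:

# Verify the SDS is gone and the cluster shows no rebuild/rebalance in progress
scli --query_all_sds
scli --query_all

# Then shut down the SVM cleanly from its console
shutdown -h now

After that, put the ESX host into maintenance mode from vSphere as usual.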


Afterward, you'll just need to re-add the SDS and its disks to the ScaleIO cluster. ScaleIO will rebalance the data evenly across the cluster, after which you can proceed to remove the next node.
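A hedged sketch of the re-add from the SCLI (the IP, protection domain, pool, and device path below are placeholders, and the exact flag names may differ in your release; the vSphere plugin/GUI works too):

# Re-add the SDS and its devices, then watch the rebalance finish before the next node
scli --add_sds --sds_ip 192.168.1.51 --protection_domain_name pd1 --storage_pool_name pool1 --device_path /dev/sdb
scli --query_all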

The other option is to just shut down the SVM, but then ScaleIO has to do a rebuild, and there's a chance of DU/DL (data unavailability or data loss) if anything else fails before that rebuild completes. I don't recommend it.
