ECS: ObjectScale: How to run KB Automation Scripts (Auto Pilot)
Summary: CLI commands to run KB Automation Scripts.
Instructions
From xDoctor for ECS v4.8-104.0 onwards, an xdoctor-ansible container is included that allows you to run KB Automation scripts from the CLI.
Ansible
Help
Overview
admin@provo-gen3-cyan:~> sudo xdoctor ansible --help
┌────────────────────────────────┐
│ xDoctor Ansible Container Help │
└────────────────────────────────┘
usage: xdoctor ansible [-h] [--info] [--start] [--cleanup] [--update]

optional arguments:
  -h, --help  show this help message and exit
  --info      Current Info of the xDoctor Ansible Container
  --start     Start the xDoctor Ansible Container
  --cleanup   Stop, Remove and Unload the xDoctor Ansible Container
  --update    Update the xDoctor Ansible Container
Info
Current Info and Status of the xDoctor Ansible Container
admin@provo-gen3-cyan:~> sudo xdoctor ansible --info
Note: xdoctor/ansible image is outdated, please use `sudo xdoctor ansible --update` ...
┌────────────────────────────────┐
│ xDoctor Ansible Container Info │
└───┬────────────────────────────┘
│ Latest image = /opt/emc/xdoctor/repo/xdoctor-ansible_3.0.0-1105.4c5593a9.xz
│ Latest version = 3.0.0-1105.4c5593a9
│ Loaded image = c8a434239326
│ Loaded version = 2.9.0-1078.fa1dcdcb
│ Container = RUNNING
│ Status = RUNNING
Update
Updating the xDoctor Ansible Container
The xDoctor Ansible Container is bundled with the xDoctor RPM package. This means that each time you upgrade xDoctor, a new Ansible Container image might be available as well. A notification is presented when the loaded image can be replaced by a newer one.
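The outdated-image check can be reproduced by hand. Below is a minimal shell sketch, not part of xdoctor, that compares the two version fields from the `--info` output shown in this article (box-drawing borders dropped from the sample; field spacing assumed):

```shell
# Sample field lines from `sudo xdoctor ansible --info` (borders stripped);
# values copied from the transcript in this article.
info_output='Latest version  = 3.0.0-1105.4c5593a9
Loaded version  = 2.9.0-1078.fa1dcdcb'

latest=$(printf '%s\n' "$info_output" | sed -n 's/.*Latest version *= *//p')
loaded=$(printf '%s\n' "$info_output" | sed -n 's/.*Loaded version *= *//p')

# A mismatch between the two fields is what triggers the "image is outdated"
# note, which `sudo xdoctor ansible --update` resolves.
if [ "$latest" != "$loaded" ]; then
  echo "Image outdated: run 'sudo xdoctor ansible --update'"
else
  echo "Image up to date"
fi
```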
admin@provo-gen3-cyan:~> sudo xdoctor ansible --info
Note: xdoctor/ansible image is outdated, please use `sudo xdoctor ansible --update` ...
┌────────────────────────────────┐
│ xDoctor Ansible Container Info │
└───┬────────────────────────────┘
│ Latest image = /opt/emc/xdoctor/repo/xdoctor-ansible_3.0.0-1105.4c5593a9.xz
│ Latest version = 3.0.0-1105.4c5593a9
│ Loaded image = c8a434239326
│ Loaded version = 2.9.0-1078.fa1dcdcb
│ Container = RUNNING
│ Status = RUNNING
admin@provo-gen3-cyan:~> sudo xdoctor ansible --update
Ansible Update ...
Successfully stopped, removed and unloaded the xdoctor-ansible container/image
The xdoctor-ansible container is not running. Starting it ...
admin@provo-gen3-cyan:~> sudo xdoctor ansible --info
┌────────────────────────────────┐
│ xDoctor Ansible Container Info │
└───┬────────────────────────────┘
│ Latest image = /opt/emc/xdoctor/repo/xdoctor-ansible_3.0.0-1105.4c5593a9.xz
│ Latest version = 3.0.0-1105.4c5593a9
│ Loaded image = 77928ed0705e
│ Loaded version = 3.0.0-1105.4c5593a9
│ Container = RUNNING
│ Status = RUNNING
Start
Starting the xDoctor Ansible Container
admin@provo-gen3-cyan:~> sudo xdoctor ansible --start
Ansible Start ...
The xdoctor-ansible container is not running. Starting it ...
admin@provo-gen3-cyan:~> sudo xdoctor ansible --info
┌────────────────────────────────┐
│ xDoctor Ansible Container Info │
└───┬────────────────────────────┘
│ Latest image = /opt/emc/xdoctor/repo/xdoctor-ansible_3.0.0-1105.4c5593a9.xz
│ Latest version = 3.0.0-1105.4c5593a9
│ Loaded image = 77928ed0705e
│ Loaded version = 3.0.0-1105.4c5593a9
│ Container = RUNNING
│ Status = RUNNING
Cleanup
Stop, remove, and unload the xDoctor Ansible Container and Image.
admin@provo-gen3-cyan:~> sudo xdoctor ansible --cleanup
Ansible Cleanup ...
Successfully stopped, removed and unloaded the xdoctor-ansible container/image
admin@provo-gen3-cyan:~> sudo xdoctor ansible --info
┌────────────────────────────────┐
│ xDoctor Ansible Container Info │
└───┬────────────────────────────┘
│ Latest image = /opt/emc/xdoctor/repo/xdoctor-ansible_3.0.0-1105.4c5593a9.xz
│ Latest version = 3.0.0-1105.4c5593a9
│ Loaded image = NO_IMAGE_LOADED
│ Loaded version = NO_IMAGE_LOADED
│ Container = NO_CONTAINER
│ Status = NOT_LOADED
Auto Pilot
Help
Overview
admin@ecsnode1:~> sudo xdoctor autopilot --help
usage: xdoctor autopilot [-h] [--kb KB] [--kb-list]
[--target-node TARGET_NODE] [--target-vdc TARGET_VDC]
[--target-rack TARGET_RACK] [--debug]
optional arguments:
-h, --help show this help message and exit
--kb KB KB number
--kb-list List of available KB automations
--target-node TARGET_NODE
Target Node
--target-vdc TARGET_VDC
Target VDC
--target-rack TARGET_RACK
Target Rack
--debug Debug Mode
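As a quick illustration of how the flags above combine, here is a small hedged helper (not part of xdoctor; it only builds the command string and executes nothing):

```shell
# Illustrative only: compose an autopilot invocation from a KB number and a
# target flag, mirroring the options listed in `xdoctor autopilot --help`.
# Nothing is executed; the helper just prints the command line.
build_autopilot_cmd() {
  kb=$1
  target_flag=$2
  target=$3
  printf 'sudo xdoctor autopilot --kb=%s %s=%s\n' "$kb" "$target_flag" "$target"
}

build_autopilot_cmd 79798 --target-node 169.254.6.2
# prints: sudo xdoctor autopilot --kb=79798 --target-node=169.254.6.2
```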
KB List
List of the available KB Automation scripts
admin@provo-gen3-cyan:~> sudo xdoctor autopilot --kb-list
╔═════════════════════════════════╗
║ Available KB Automation Scripts ║
╚═════════════════════════════════╝
╓────────────────╖
║ CUSTOMER Level ║
╙────────┬───────╜
┌────────┼──────┬──────────┬───────────────────┐
│ KB Nr. │ Ver. │ KB Title │ Supported Targets │
└────────┼──────┴──────────┴───────────────────┘
┌────────┼──────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────┐
│ 64221 │ 3.0 │ ECS: xDoctor: RAP081: SymptomCode: 2048: NTP daemon not running or All servers not suitable for synchronization found │ --target-rack │
└────────┼──────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬──────────────────────────────────────────────────────────────────────────┬───────────────┐
│ 79798 │ 3.0 │ ECS: xDoctor: RAP007: SymptomCode: 2028: Root File System Low Disk Space │ --target-node │
└────────┼──────┴──────────────────────────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬──────────────────────────────────────────────────────────────────┬──────────────────────────────────────────┐
│ 81550 │ 3.0 │ ECS: xDoctor: RAP059: Detected rsyslogd is not running on a node │ --target-node --target-rack --target-vdc │
└────────┼──────┴──────────────────────────────────────────────────────────────────┴──────────────────────────────────────────┘
┌────────┼──────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┐
│ 203562 │ 3.0 │ ECS: xDoctor RAP145: rackServiceMgr is using memory above configured threshold │ --target-rack │
└────────┼──────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬──────────────────────────────────────────────────────────────────────────┬──────────────────────────────────────────┐
│ 205933 │ 3.0 │ ECS: xDoctor: RAP137: Total swap memory inconsistent across the ECS rack │ --target-node --target-rack --target-vdc │
└────────┼──────┴──────────────────────────────────────────────────────────────────────────┴──────────────────────────────────────────┘
┌────────┼──────┬──────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────────────────┐
│ 209779 │ 3.0 │ ECS High load observed on ECS nodes/performance issues observed in 3.6.x/3.7.x/3.8.x │ --target-node --target-rack --target-vdc │
└────────┼──────┴──────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────────────┘
┌────────┼──────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────┐
│ 20987 │ 3.0 │ ECS: How to clear svc tools cache from reporting old values; svc tools are reporting old values when executing commands │ --target-node │
└────────┼──────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬───────────────────────────────────────────────────────────────────────────────────┬──────────────┐
│ 35068 │ 3.0 │ ECS: xDoctor: RAP040: The /root/MACHINES files are not consistent across the rack │ --target-vdc │
└────────┼──────┴───────────────────────────────────────────────────────────────────────────────────┴──────────────┘
┌────────┼──────┬──────────────────────────────────────────────────────────┬───────────────┐
│ 39838 │ 3.0 │ ECS: xDoctor: RAP073: Switch Connection Failure detected │ --target-rack │
└────────┼──────┴──────────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬───────────────────────────────────────────────────────────────┬───────────────┐
│ 50341 │ 3.0 │ ECS xDoctor: One or More Network Interface is Down or Missing │ --target-node │
└────────┼──────┴───────────────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬───────────────────────────────────────────────────────────────────────────────┬──────────────┐
│ 224905 │ 3.0 │ ECS: Compliance Check failed with Port 13000 is not in allowed udp ports list │ --target-vdc │
└────────┼──────┴───────────────────────────────────────────────────────────────────────────────┴──────────────┘
┌────────┼──────┬──────────────────────────────────────────────────┬───────────────┐
│ 182750 │ 3.0 │ ECS: Gen3: ipmitool fails to query the BMC/iDRAC │ --target-node │
└────────┼──────┴──────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────┐
│ 78420 │ 3.0 │ ECS: xDoctor RAP092: slave-X or pslave-X is not communicating with one or more ToR switches after a node reboot │ --target-node │
└────────┼──────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬────────────────────────────────────────┬───────────────┐
│ 19614 │ 3.0 │ ECS: How to add or remove a DNS server │ --target-rack │
└────────┼──────┴────────────────────────────────────────┴───────────────┘
┌────────┼──────┬──────────────────────────────────────────────────────┬───────────────┐
│ 20769 │ 3.0 │ ECS: How to setup SNMP v2c and v3 monitoring support │ --target-rack │
└────────┼──────┴──────────────────────────────────────────────────────┴───────────────┘
┌────────┼──────┬─────────────────────────────────────────┬───────────────┐
│ 273244 │ 3.0 │ ECS: How to add or remove an NTP server │ --target-rack │
└────────┼──────┴─────────────────────────────────────────┴───────────────┘
┌────────┼──────┬────────────────────────────────────────────────────────────────┬──────────────┐
│ 201555 │ 3.0 │ ECS: RAP075 Application reporting 403 error due to time skewed │ --target-vdc │
└────────┼──────┴────────────────────────────────────────────────────────────────┴──────────────┘
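Each KB in the list only accepts the target flags shown in its "Supported Targets" column. A hedged sketch of checking a single row before running an automation (one row copied from the table above, with box-drawing characters replaced by plain pipes):

```shell
# One row from `sudo xdoctor autopilot --kb-list`, simplified to plain pipes.
row='| 79798  | 3.0  | ECS: xDoctor: RAP007: SymptomCode: 2028: Root File System Low Disk Space | --target-node |'

kb=79798
flag='--target-node'

# The KB supports the flag only if the flag appears in the KB's own row.
supported=no
if printf '%s\n' "$row" | grep -q "^| $kb " \
   && printf '%s\n' "$row" | grep -q -- "$flag"; then
  supported=yes
fi
echo "KB $kb $flag supported: $supported"
```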
Identifying the target
Identify the NAN node IP, the Rack color (name), or the VDC Name from the ECS/ObjectScale system topology output.
Connect to the active node of the rack:
Command:
# ssh master.rack
Run the Command:
# sudo xdoctor --top --vdc
This command displays the ECS system topology, including VDCs, racks, and nodes.
admin@ecsnode1:~> sudo xdoctor --top --vdc
ECS
|
|- CLOUD - ID:[21a7111a45e4a9dbca62e0fee1749bbb]
|
|- Local VDC - ID:[8af5b9c3-9c0c-43b5-9402-14d181ade5bf] Name:[VDC1]
   |- Local SP - ID:[52576f30-f8f3-493a-9999-6fee4494f53b] Name:[SP1]
   |  |
   |  |- Local RACK - Name:[red] Primary:[169.254.1.1] PSNT:[CKM0000000000] SWID:[CKM0000000000]
   |  |  |
   |  |  |- Node 1, [  provo], NAN.IP:[ 169.254.1.1], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
   |  |  |- Node 2, [  sandy], NAN.IP:[ 169.254.1.2], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
   |  |  |- Node 3, [   orem], NAN.IP:[ 169.254.1.3], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
   |  |  |- Node 4, [  ogden], NAN.IP:[ 169.254.1.4], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
   |  |  |- Node 5, [ layton], NAN.IP:[ 169.254.1.5], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
   |  |  |- Node 6, [  logan], NAN.IP:[ 169.254.1.6], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
   |  |  |- Node 7, [   lehi], NAN.IP:[ 169.254.1.7], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
   |  |  |- Node 8, [ murray], NAN.IP:[ 169.254.1.8], Public.IP:[ 0.0.0.0], DNS:[ ], NTP:[ ]
Note: 'xdoctor --top --details' displays detailed VDC and Rack information
Locate the Virtual Data Center (VDC):
- Look for the line that starts with:
|- Local VDC - ID:[…] Name:[VDC1]
- The VDC Name is shown at the end of the line.
Example: VDC1
Locate the Rack name:
- Under the VDC section, find the Local RACK:
|- Local RACK - Name:[red] Primary:[169.254.1.1] PSNT:[…]
- The Rack color is indicated by the "Name" field.
Example: red
Identify the NAN IP of the node of interest:
- Under the Rack, each node is listed with its name and IPs:
|- Node 1, [ provo], NAN.IP:[ 169.254.1.1], Public.IP:[ 0.0.0.0]
|- Node 2, [ sandy], NAN.IP:[ 169.254.1.2], Public.IP:[ 0.0.0.0]
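The same fields can be extracted mechanically. Here is a hedged shell sketch over an abridged copy of the topology output above (indentation simplified; not an xdoctor feature):

```shell
# Abridged sample of `sudo xdoctor --top --vdc` output from this article.
top_output='|- Local VDC - ID:[8af5b9c3] Name:[VDC1]
|- Local RACK - Name:[red] Primary:[169.254.1.1]
| |- Node 1, [ provo], NAN.IP:[ 169.254.1.1], Public.IP:[ 0.0.0.0]
| |- Node 2, [ sandy], NAN.IP:[ 169.254.1.2], Public.IP:[ 0.0.0.0]'

# Pull the VDC name, rack name, and per-node NAN IPs out of the text.
vdc=$(printf '%s\n' "$top_output"  | sed -n 's/.*Local VDC.*Name:\[\([^]]*\)\].*/\1/p')
rack=$(printf '%s\n' "$top_output" | sed -n 's/.*Local RACK.*Name:\[\([^]]*\)\].*/\1/p')
nan_ips=$(printf '%s\n' "$top_output" | sed -n 's/.*NAN\.IP:\[ *\([0-9.]*\)\].*/\1/p')

echo "VDC=$vdc RACK=$rack"
echo "$nan_ips"
```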
KB
Running a KB Automation script
In the example below, the automation script for KB 79798 is run against node 169.254.6.2.
admin@provo-gen3-cyan:~> sudo xdoctor autopilot --kb=79798 --target-node=169.254.6.2
...
[Prompt for acknowledgement]
*******************************************************************************
*******************************************************************************
This Automated Knowledge Base (KB) will identify and remove frequently
encountered files from the ObjectScale and ECS, aiming to safely reclaim
space in the root file system.
To proceed, you can review or delete the files on the system.
Would you like to proceed with the steps by typing 'Yes' or 'Y', or skip the
review and deletion actions by typing 'No' or 'N'
*******************************************************************************
*******************************************************************************
Yes
...
Status: PASS
Time Elapsed: 0h 1m 5s
Debug log: /tmp/autopilot/log/autopilot_79798_20250623_123509.log
Message: Before cleanup available space: 220G used percentage: 50%. After cleanup available space: 240G / used percentage: 46%. Space reclaimed: 20.0G.
Automation Execution Behavior
All automations run inside a screen session, and only one execution is allowed at a time. If an automation is left unattended for more than one hour, the screen session is automatically aborted and cannot be resumed.
When the automation is launched, the screen session command is printed to the terminal. This allows you to reattach to the session if needed, as long as it is within the one-hour window.
List and reattach commands:
# sudo screen -ls
# sudo screen -r [session_name]
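If several screen sessions exist, the autopilot one can be picked out of the `screen -ls` output by its name prefix. A hedged sketch over a sample listing (the `autopilot_kb_<KB>_<timestamp>` session-name format comes from this article; the PID and surrounding lines are typical GNU screen layout, not captured output):

```shell
# Hypothetical sample of `sudo screen -ls` output; the PID (12345) is made up.
screen_ls='There is a screen on:
    12345.autopilot_kb_79798_20250625_175515    (Detached)
1 Socket in /run/screens/S-root.'

# Extract "PID.session_name" for the autopilot session so it can be reattached.
session=$(printf '%s\n' "$screen_ls" \
  | sed -n 's/^[[:space:]]*\([0-9][0-9]*\.autopilot_[^[:space:]]*\).*/\1/p')

echo "Reattach with: sudo screen -r $session"
```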
Example of a second execution being blocked while another automation is already running:
admin@ecsnode1:~> sudo xdoctor autopilot --kb 79798 --target-node 169.254.1.1
Checking for existing screen sessions...
Starting screen session 'autopilot_kb_79798_20250625_175515'...
Screen session 'autopilot_kb_79798_20250625_175515' started successfully.
Attaching to screen session 'autopilot_kb_79798_20250625_175515'...
Using /etc/ansible/ansible.cfg as config file
This Autopilot Automation was blocked because the maximum number of concurrent executions (1) has been reached. Please try again later.
[screen is terminating]
admin@ecsnode1:~>
Additional Information
Refer to this video:
ObjectScale: How to run KB Automation Scripts.
Duration: 00:08:04 (hh:mm:ss)
When available, closed caption (subtitles) language settings can be chosen using the CC icon on this video player.
You can also view this video on YouTube.