ECS: xDoctor: RAP007: Symptom Code: 2028: Root File System Low Disk Space

Summary: ECS: xDoctor: RAP007: SymptomCode: 2028: Root File System Low Disk Space

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

RAP007: File system root (/)(RFS) low disk space

  • LVRoot too full
  • One or more root file system (RFS) resources exceeded a severity threshold.
  • RAP007
  • RAP 007
  • SymptomCode: 2028
Example:
------------------------------------------------------
ERROR - (Cached) File system root (/) - low disk space
------------------------------------------------------
Node      = 169.254.1.1
Extra     = {'169.254.1.1': {'169.254.1.1': '82'}}
RAP       = RAP007
Solution  = KB 469957
Timestamp = 2022-09-08_144748
PSNT      = CKM00000000000 @ 4.8-86.0


Examine the output of the affected node:

Command:
df -k /

Example:
admin@node1:~> df -k /
Filesystem             1K-blocks      Used Available Use% Mounted on
/dev/mapper/ECS-LVRoot 459425792 373686300  85739492  82% /


Confirm the output on the affected node in human-readable format:

Command:
df -h /

Example:
admin@node1:~> df -h /
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/ECS-LVRoot  439G  357G   82G  82% /
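The Use% column from either command can be compared against the 80% alert threshold directly. A minimal sketch (the parsing and threshold handling below are a convenience, not part of the official tooling):

```shell
#!/bin/sh
# Sketch: read the Use% column for / and compare it with the 80% threshold
# that RAP007 alerts on. Parsing assumes the df layout shown above.
usage=$(df -k / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
echo "Root FS usage: ${usage}%"
if [ "${usage}" -ge 80 ]; then
    echo "WARNING: root (/) at or above 80% - follow this KB"
fi
```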



Confirm that RFS usage on the other nodes is below 80%.

Command:
svc_exec "df -h | grep LVRoot"

Example:
admin@node1:~> svc_exec "df -h | grep LVRoot"
svc_exec v1.0.3 (svc_tools v2.6.0)                 Started 2022-09-08 15:07:41

Output from node: r1n1 (xx.xx.xx.xxx)                 retval: 0
/dev/mapper/ECS-LVRoot  439G  357G   82G  82% /

Output from node: r1n2 (xx.xx.xx.xxx)                  retval: 0
/dev/mapper/ECS-LVRoot  439G  223G  216G  51% /

Output from node: r1n3 (xx.xx.xx.xxx)                  retval: 0
/dev/mapper/ECS-LVRoot  439G  184G  255G  42% /

Output from node: r1n4 (xx.xx.xx.xxx)                  retval: 0
/dev/mapper/ECS-LVRoot  439G  197G  242G  45% /

Output from node: r1n5 (xx.xx.xx.xxx)                  retval: 0
/dev/mapper/ECS-LVRoot  439G  167G  272G  39% /

Output from node: r1n6 (xx.xx.xx.xxx)                  retval: 0
/dev/mapper/ECS-LVRoot  439G  169G  271G  39% /

Output from node: r1n7 (xx.xx.xx.xxx)                  retval: 0
/dev/mapper/ECS-LVRoot  439G  162G  277G  37% /

Output from node: r1n8 (xx.xx.xx.xxx)                  retval: 0
/dev/mapper/ECS-LVRoot  439G  239G  200G  55% /
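When many nodes are involved, the per-node output can be filtered so that only nodes at or above the threshold are shown. A convenience sketch, assuming the svc_exec output has been saved to a file (the file name is illustrative):

```shell
# Sketch: print only LVRoot lines at or above 80% from saved svc_exec output.
# "svc_exec_lvroot.txt" is an illustrative file name, not an official artifact.
awk '/LVRoot/ { pct = $5; sub(/%/, "", pct); if (pct + 0 >= 80) print }' svc_exec_lvroot.txt
```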

Cause

Several factors can cause this issue. For example, a large file left behind during a support action may need cleanup.

Open a service request for ECS support to investigate.

Resolution

IMPORTANT! A new feature was released in xDoctor 4-8.104.0 and above. This knowledge base (KB) article is now automated with xDoctor AutoPilot, which addresses most issues without support involvement.

This feature is native to xDoctor 4-8.104.0 and above. For syntax and usage, reference ECS: ObjectScale: How to run KB Automation Scripts (Auto Pilot). (May require login to Dell Support.)

To find the master node of the rack:

Command:

ssh master.rack

To find the NAN IP, use the IP identified in the alert or from the command below:

getrackinfo

Example:

admin@ecsnode1:~> getrackinfo
Node private      Node              Public                                BMC
Ip Address        Id       Status   Mac                 Ip Address        Mac                 Ip Address        Private.4(NAN)    Node Name
===============   ======   ======   =================   ===============   =================   ===============   ===============   =========
192.168.219.1     1        MA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.101   169.254.1.1       provo-red
192.168.219.2     2        SA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.102   169.254.1.2       sandy-red
192.168.219.3     3        SA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.103   169.254.1.3       orem-red
192.168.219.4     4        SA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.104   169.254.1.4       ogden-red
192.168.219.5     5        SA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.105   169.254.1.5       layton-red
192.168.219.6     6        SA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.106   169.254.1.6       logan-red
192.168.219.7     7        SA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.107   169.254.1.7       lehi-red
192.168.219.8     8        SA       00:00:00:00:00      0.0.0.0           00:00:00:00:00      192.168.219.108   169.254.1.8       murray-red
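The NAN IP can also be extracted from getrackinfo programmatically. A sketch, assuming the column layout shown above (Status in column 3, Private.4(NAN) in column 8) and that the master node is marked MA:

```shell
# Sketch: extract the NAN IP (Private.4 column) of the master node (Status MA).
# Column positions assume the getrackinfo layout shown above.
getrackinfo | awk '$3 == "MA" { print $8 }'
```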

 

  1. Run the automation command from a primary node with xDoctor 4-8.104.0 and above.
Command: 
Note: Only --target-node is supported for this action.
 sudo xdoctor autopilot --sr <SR Number> --kb 79798 --target-node <NAN IP>

Example:
admin@ecsnode1:~> sudo xdoctor autopilot --kb 79798 --target-node 169.254.1.1
Checking for existing screen sessions...
Starting screen session 'autopilot_kb_79798_20250624_174701'...
Screen session 'autopilot_kb_79798_20250624_174701' started successfully.
Attaching to screen session 'autopilot_kb_79798_20250624_174701'...
  2. Accept the acknowledgment for the tasks about to be performed.
Command:
# yes or no
Example:
TASK [Prompt for acknowledgement] *************************************************************************************************************************************************************
[Prompt for acknowledgement]
*******************************************************************************
*******************************************************************************
This Automated Knowledge Base (KB) will identify and remove frequently encountered files from the ObjectScale and ECS, aiming to safely reclaim space in the root file system. To proceed, you can review or delete the files on the system.

Would you like to proceed with the steps by typing 'Yes' or 'Y', or skip the review and deletion actions by typing 'No' or 'N'
*******************************************************************************
*******************************************************************************
:y
ok: [169.254.1.1] => {"attempts": 1, "changed": false, "delta": 122, "echo": true, "rc": 0, "start": "2025-06-24 17:47:05.349132", "stderr": "", "stdout": "Paused for 2.05 minutes", "stop": "2025-06-24 17:49:08.139302", "user_input": "y"}

If you choose to list the files, review them before deletion and note them in your SR. (Recommended)
Example:
TASK [Summary of all logs for deletion or to be truncated] *************************************************************************************************************************************
ok: [169.254.1.7] => {
    "msg": [
        "*******************************************************************************",
        "*******************************************************************************",
        "OS and Fabric/Lifecycle logs summary:",
        "*******************************************************************************",
        "*******************************************************************************",
        "List of ECS OS logs older than 30 days to be deleted:",
        [],
        "*******************************************************************************",
        "List of fabric agent log older than 7 days old to be deleted:",
        [
            "/opt/emc/caspian/fabric/agent/log/agent.log.08132024-211413.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08172024-084127.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.07252024-022706.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08022024-043750.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08032024-082603.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08052024-160519.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08082024-025117.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08102024-100059.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08122024-173908.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08162024-045223.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08182024-115909.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.07262024-061010.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.07272024-093236.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.07292024-171125.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08012024-004843.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08192024-155339.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08042024-121611.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08062024-195410.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08092024-063954.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08112024-135003.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08152024-010345.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.08202024-194017.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.07282024-132417.gz",
            "/opt/emc/caspian/fabric/agent/log/agent.log.07302024-205940.gz"
        ],
        "*******************************************************************************",
        "List of lifecycle log older than 7 days old to be delete:",
        [],
        "*******************************************************************************",
        "List of service .out files over 500 MiB to be truncated:",
        [],
        "*******************************************************************************",
        "List of crash files to be delete:",
        [],
        "*******************************************************************************",
        "List of support log files to be deleted:",
        [
            "/tmp/svc_collect-ansauto_jira-VDC1-20240820_173137.zip",
            "/tmp/svc_collect-ansauto_jira-VDC1-20240827_192900.zip",
            "/home/admin/my.pcap"
        ],
        "*******************************************************************************",
        "*******************************************************************************",
        "oc_map log file summary:",
        "*******************************************************************************",
        "*******************************************************************************",
        "oc_map chk logs to be deleted:",
        [],
        "*******************************************************************************",
        "oc_map retry logs to be deleted:",
        [],
        "*******************************************************************************",
        "oc_map results logs to be deleted:",
        [
            "/home/admin/oc_map/suite/oc_cache/07-23-2024/09-55-30_atasoy_ns2_test1_obj_stat_results.log",
            "/home/admin/oc_map/suite/oc_cache/07-23-2024/09-56-11_atasoy_ns2_test1_obj_stat_results.log"
        ],
        "*******************************************************************************"

  3. Confirm deletion of the files listed above:

Command:

# yes or no

Example:

TASK [Getting user confirmation for file deletion and truncate] ********************************************************************************************************************************
[Getting user confirmation for file deletion and truncate]
Confirm Delete and/or Truncate on all the files listed above. To proceed, type 'Yes' or 'Y' to delete and/or truncate files or 'No' or 'N' to end (Default: 'No'):
ok: [169.254.1.1] => {"changed": false, "delta": 433, "rc": 0, "start": "2024-08-28 18:04:51.316878", "stderr": "", "stdout": "Paused for 7.23 minutes", "stop": "2024-08-28 18:12:05.162385", "user_input": "y"}
  4. Review the summary of deleted files and RFS savings from the execution:

Example:

TASK [Summary of total file sizes deleted and suspect files for review] ***********************************************************************************************************************
ok: [169.254.1.1] => {
    "msg": [
        "*******************************************************************************",
        "*******************************************************************************",
        "Root file system clean up summary:",
        "*******************************************************************************",
        "30+ day old ObjectScale / ECS OS logs total size deleted: 61.0 GB",
        "7+ day old fabric agent logs total size deleted: 198.2 MB",
        "7+ day old lifecycle logs total size deleted: 0 Bytes",
        "ObjectScale / ECS service .out logs larger than 500 mb total size reclaimed: 0 Bytes",
        "Support logs total size deleted: 9.3 GB",
        "Crash file(s) deleted: 614.5 MB",
        "*******************************************************************************",
        "Before cleanup available space / used percentage: 159G 64%",
        "After cleanup available space / used percentage: 187G 58%",
        "*******************************************************************************",
        "*******************************************************************************",
        "oc_map log deletion summary:",
        "*******************************************************************************",
        "WARNING! oc_map process found. Skipped log deletion for oc_map. Please find the case owner of the process:",
        "oc_map PID(s): 5712\n62680",
        "oc_map results logs total size reclaimable: 13.5 kB",
        "*******************************************************************************",
        "*******************************************************************************",
        "IMPORTANT! If capacity reduction is not successful you must review the below folders with SR owners and Jira for possible deletion:",
        "*******************************************************************************",
        "List of OE/CE/SR directories to review and manually clean after confirming they are safe to remove:",
        "*******************************************************************************",
        [
            "14G /tmp/ECSOE-22821"
        ],
        "*******************************************************************************",
        "List of home directories sizes and top 10 subdirectory files to review and manually clean after confirming they are safe to remove:",
        "*******************************************************************************",
        "Top 5 home directories by size:",
        "*******************************************************************************",
        "Review the sizes of the home directories listed below.",
        "If a home directory is consuming a large amount of space, then that path must be investigated.",
        "*******************************************************************************",
        [
            "22G /home/admin",
            "40K /home/emc",
            "32K /home/usera",
            "32K /home/user7",
            "28K /home/service"
        ],
        "*******************************************************************************",
        "Top 10 subdirectory files under home to review for deletion:",
        "==> Below are the top 10 subdirectory files within the home directories.",
        "==> Review and manually clean these files after confirming they are safe to remove.",
        "*******************************************************************************",
        [
            "8.4G /home/admin/impage_3.6.2.6",
            "7.4G /home/admin/impage_3.8.0.4",
            "3.5G /home/admin/impage_3.7.0.7",
            "1.8G /home/admin/ecsnode8.gslabs.lab.emc.com.pcap",
            "96M /home/admin/ecsnode8.gslabs.lab.emc.com.pcap2",
            "96M /home/admin/ecsnode8.gslabs.lab.emc.com.pcap1",
            "96M /home/admin/applicationname_year_month_date_ecsnode8.gslabs.lab.emc.com.pcap2",
            "96M /home/admin/applicationname_year_month_date_ecsnode8.gslabs.lab.emc.com.pcap1",
            "77M /home/admin/ecsnode8.gslabs.lab.emc.com.pcap0",
            "74M /home/admin/xdoctor-ansible_3.0.0-1220.4df0e785.xz"
        ],
        "*******************************************************************************",
        "List of oc_map dmp logs to review cluster history in SR and/or Jira to confirm it is safe to delete the file(s): ",
        [
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/04-29-2025/02-47-21_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-21-2025/07-22-52_B1_a_parkawstest_parkawstest_ls_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-21-2025/07-22-52_B1_a_parkawstest_parkawstest_dangling_ls_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-21-2025/07-22-52_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-21-2025/07-30-01_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-21-2025/07-30-01_B1_a_parkawstest_parkawstest_ob_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-23-2025/02-39-50_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-23-2025/02-39-50_B1_thomas_bk1_acl_s3_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-23-2025/02-39-50_B2_thomas_bk1_acl_s3_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-23-2025/03-40-17_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-23-2025/03-40-17_B1_thomas_bk1_acl_s3_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-23-2025/03-40-17_B2_thomas_bk1_acl_s3_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-15-33_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-22-17_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/01-45-49_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/01-45-49_B1_thomas_thomas_bk1_acl_ls_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-12-37_B1_thomas_thomas_bk1_acl_dangling_ls_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-11-45_B1_thomas_thomas_bk1_acl_ls_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-11-45_B1_thomas_thomas_bk1_acl_dangling_ls_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-18-11_rt_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-12-37_B1_thomas_thomas_bk1_acl_ls_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-15-33_B1_thomas_bk1_acl_s3_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-18-11_B1_thomas_thomas_bk1_acl_ob_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-15-33_B2_thomas_bk1_acl_s3_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-22-17_B1_thomas_bk1_acl_s3_dmp.log",
            "/opt/emc/xdoctor/tools/ee_scripts/oc_map/suite/oc_cache/06-24-2025/02-22-17_B2_thomas_bk1_acl_s3_dmp.log",
            "/home/admin/oc_map/suite/oc_cache/06-23-2025/04-39-59_rt_dmp.log",
            "/home/admin/oc_map/suite/oc_cache/06-23-2025/04-39-59_B1_aa_nfs_aa_bucket_ls_dmp.log"
        ],
        "DMP logs total size: 5.2 kB",
        "*******************************************************************************",
        "IMPORTANT! Below is the node's vnest summary. If it is consuming over 100 GB of the node's SSD capacity, CE must review it in Jira.",
        "*******************************************************************************",
        "vnest is below 100 gigabytes Folder path: /opt/emc/caspian/fabric/agent/services/object/data/vnest/vnest-main/recycle, Size: 1.1G",
        "*******************************************************************************"
    ]
}
TASK [Set fact for root FS cleanup summary] ***************************************************************************************************************************************************
ok: [169.254.1.1] => {"ansible_facts": {"context": " Before cleanup available space: 159G used percentage: 64%. After cleanup available space: 187G / used percentage: 58%. Space reclaimed: 28.0G."}, "changed": false}

PLAY RECAP ************************************************************************************************************************************************************************************
169.254.1.1                : ok=41   changed=11   unreachable=0    failed=0    skipped=36   rescued=0    ignored=0

===============================================================================================================================================================================================
Status: PASS
Time Elapsed: 0h 2m 43s
Debug log: /tmp/autopilot/log/autopilot_79798_20250624_175616.log
Message:  Before cleanup available space: 159G used percentage: 64%. After cleanup available space: 187G / used percentage: 58%. Space reclaimed: 28.0G.
===============================================================================================================================================================================================
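After AutoPilot completes, it is worth re-checking usage across the rack to confirm every node is back under the threshold. A sketch reusing the earlier svc_exec check (the awk filter is a convenience, not official tooling, and assumes the output format shown earlier in this KB):

```shell
# Sketch: re-run the rack-wide check and flag any node still at or above 80%.
svc_exec "df -h | grep LVRoot" | awk '/LVRoot/ { pct = $5; sub(/%/, "", pct); if (pct + 0 >= 80) print "STILL HIGH:", $0 }'
```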

Affected Products

ECS Appliance

Products

ECS Appliance, ECS Appliance Hardware Gen1 U-Series, ECS Appliance Software with Encryption, ECS Appliance Software without Encryption
Article Properties
Article Number: 000079798
Article Type: Solution
Last Modified: 24 Jul 2025
Version:  17