VNX: /home directory is 100% full (User Correctable)
Summary: There are several reasons the /home directory on a VNX Control Station can fill up. One common cause is statistics files, which can grow large over time.
Symptoms
The /home directory on the Control Station is 100% full:
/dev/mapper/emc_vg_pri_ide-emc_lv_home 604736 574016 0 100% /home
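The mount point can also be checked directly with human-readable sizes:
df -h /home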
Cause
The emc-srm directory under /home/nasadmin collects statistics files that can grow large, and is one common cause of the /home directory filling up.
Resolution
Follow the steps below.
Check overall filesystem usage on the Control Station:
[root@VNX4145-CS0 ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 2030768 1291992 633952 68% /
none 1036364 0 1036364 0% /dev/shm
/dev/hda1 264445 16721 236802 7% /boot
/dev/mapper/emc_vg_pri_ide-emc_lv_home
604736 574016 0 100% /home <---------------------------
Go to the /home directory and check the usage per user directory:
[root@VNX4145-CS0 nasadmin]# cd /home
[root@VNX4145-CS0 home]# du -sh * | sort -nr | head
541M nasadmin
284K defunctnisskrk3
60K monitor2
40K defunctnissexw2
28K monitor
20K sysadmin6
20K sysadmin5
20K sysadmin1
20K nissbxs
20K ndctdmh
The nasadmin directory is by far the largest. Change into it and repeat the check:
[root@VNX4145-CS0 nasadmin]# du -sh * | sort -nr | head
684K RecoverPoint Journal Pool_stats_20130806132541
412K nohup.out
401M emc-srm
352K users_2.passwd
324K perfdata
316K chtemp
200K nbsnas_cspoller.tar.gz
200K nas_cspoller.tar.gz
184K config.rpt
160K lists-tar-VNX4145-CS0-1477788731-3.tar.gz
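Note that sort -nr compares only the leading number, which is why 401M sorts below 684K in the listing above; on systems where sort supports the -h option, du -sh * | sort -hr | head orders human-readable sizes correctly. As an alternative sketch for locating large files directly (the 100M threshold is an arbitrary example):
find /home -xdev -type f -size +100M -exec ls -lh {} \;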
The emc-srm directory accounts for 401M. Locate and inspect it:
[root@VNX4145-CS0 nasadmin]# ll -h | grep -i emc-srm
drwxr-xr-x 4 nasadmin nasadmin 4.0K Aug 9 09:48 emc-srm
Note that ll -h shows only the 4.0K directory entry; the 401M reported by du is the total size of the directory's contents. Change into it:
[root@VNX4145-CS0 nasadmin]# cd emc-srm
[root@VNX4145-CS0 emc-srm]# ll
total 24
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:45 lock_nas_replicate
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_2_cifs.server
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_2_cifs.user_top
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_2_nfs.client_top
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_2_nfs.vdm.client_top
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_2_nfs.vdm.group
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_3_cifs.server
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_3_cifs.user_top
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_3_nfs.client_top
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_3_nfs.vdm.client_top
-rw-r--r-- 1 nasadmin nasadmin 0 Aug 9 09:48 lock_startStats.pl_server_3_nfs.vdm.group
drwxr-xr-x 4 nasadmin nasadmin 4096 Aug 9 09:48 server_2
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_2-nohup-cifs.server.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_2-nohup-cifs.user-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_2-nohup-nfs.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_2-nohup-nfs.vdm.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_2-nohup-nfs.vdm.group.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 14:48 server_2-startStats-cifs.server.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:53 server_2-startStats-cifs.user-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:53 server_2-startStats-nfs.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:53 server_2-startStats-nfs.vdm.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:32 server_2-startStats-nfs.vdm.group.out
drwxr-xr-x 4 nasadmin nasadmin 4096 Aug 9 09:48 server_3
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_3-nohup-cifs.server.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_3-nohup-cifs.user-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_3-nohup-nfs.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_3-nohup-nfs.vdm.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:47 server_3-nohup-nfs.vdm.group.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 12:47 server_3-startStats-cifs.server.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:53 server_3-startStats-cifs.user-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:53 server_3-startStats-nfs.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:53 server_3-startStats-nfs.vdm.client-top.out
-rw-r--r-- 1 nasadmin nasadmin 0 Feb 7 15:33 server_3-startStats-nfs.vdm.group.out
-rwx------ 1 nasadmin nasadmin 11475 Feb 7 15:45 startStats.pl
-rwx------ 1 nasadmin nasadmin 1546 Feb 7 15:45 vnxfile-nas_replicate.sh
The listing above shows that the emc-srm directory holds the statistics data; the lock and .out files are empty, so the 401M is consumed by the stats files under the server_2 and server_3 subdirectories.
Back up the data to the customer's local system before removing it from the /home directory.
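A minimal sketch of one way to archive and copy off the directory before deletion (the destination host and paths are examples only, and the filesystem holding /tmp must have enough free space for the archive):
# Archive the statistics data (example path under /tmp)
tar -czf /tmp/emc-srm-backup.tar.gz -C /home/nasadmin emc-srm
# Copy the archive to the customer's system (destination is hypothetical)
scp /tmp/emc-srm-backup.tar.gz user@customer-host:/path/to/backup/
# Remove the temporary archive once the copy is verified
rm -f /tmp/emc-srm-backup.tar.gz
Once the backup is confirmed, remove the directory: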
[root@VNX4145-CS0 nasadmin]# rm -rf emc-srm
After deleting the emc-srm directory, verify the freed space:
[root@VNX4145-CS0 ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 2030768 1291992 633952 68% /
none 1036364 0 1036364 0% /dev/shm
/dev/hda1 264445 16721 236802 7% /boot
/dev/mapper/emc_vg_pri_ide-emc_lv_home
604736 163560 410456 29% /home <---------------------------
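If the statistics collection scripts (such as startStats.pl seen above) continue to run, emc-srm may repopulate over time, so /home usage is worth monitoring periodically. A minimal sketch of a check that could be scripted (the 80% threshold is an arbitrary example; -P keeps the df output on one line for parsing):
df -kP /home | awk 'NR==2 {gsub(/%/,"",$5); if ($5+0 > 80) print "/home is at "$5"% used"}'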
Additional Information
Obtain the customer's permission before proceeding.
The customer can either move these files to their local system or, if they do not require the statistics data, delete the emc-srm directory.
Products
VNX1 Series, VNX2 Series
Article Properties
Article Number: 000059211
Article Type: Solution
Last Modified: 07 Nov 2025
Version: 3