Celerra root file system filling up frequently
Hello ALL,
The root filesystem becomes full again every time I clear it. When this happens, user mapping can have trouble recognizing or mapping new users. I cleared it to below 50%, which allowed users to log in correctly, but within 10 days it fills up again. How can I set this up the right way?
Somebody from EMC explained to me that I should "make sure not to use the control station to store logs or other things; it is an important piece of the Celerra." I am not sure what that means.
Is there a configuration done the wrong way? How do I fix this? I am not seeing this filling up on my other Celerras.
Please suggest!
=========================================================
Here's the output:
[nasadmin@ ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda3 2.0G 1.3G 591M 70% /
/dev/hda1 122M 8.6M 107M 8% /boot
none 1013M 0 1013M 0% /dev/shm
/dev/mapper/emc_vg_pri_ide-emc_lv_home
591M 21M 540M 4% /home
/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backup
827M 65M 721M 9% /celerra/backup
/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backendmonitor
7.8M 1.4M 6.0M 19% /celerra/backendmonitor
Please let me know if any other output is needed.
Thanks for reading
christopher_ime
2K Posts
0
December 15th, 2010 23:00
deepat,
Can you check the size of the following folders:
/var/spool/mqueue
/var/spool/mail
If alerting via SMTP (ConnectHome, Email User, Notifications, etc.) is enabled and mail is being sent properly, the mail queue shouldn't have old files hanging around; in a healthy environment the "mqueue" folder may even be empty. If that is not the case, not only will the "mqueue" folder get backed up, but in /var/spool/mail the file "root" (there is one for each configured user) could also grow quite large, as it continually accumulates delivery errors (unable to deliver message).
Of course, in that case the fix would be to get mail flowing again (down SMTP server? bad recipient email address? etc.). Just a thought.
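To put the check above into commands: a quick sketch of how you might inspect the mail spool on the control station (these are the stock sendmail paths, nothing Celerra-specific, so adjust if your system differs):

```shell
# Sizes of the two spool locations christopher_ime mentions.
du -sh /var/spool/mqueue /var/spool/mail 2>/dev/null
# A healthy queue is empty or near-empty.
queued=$(ls /var/spool/mqueue 2>/dev/null | wc -l)
echo "queued messages: $queued"
# Per-user mailbox files (e.g. /var/spool/mail/root) grow when delivery
# keeps failing; list the largest ones first.
ls -lS /var/spool/mail 2>/dev/null | head
```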
dynamox
1 Rookie
20.4K Posts
0
November 23rd, 2010 14:00
What exactly did you clear? Which files?
SAMEERK1
296 Posts
1
November 24th, 2010 05:00
Have a look at this Primus solution:
emc252448
Sameer Kulkarni
eServices
deeppat
261 Posts
0
November 24th, 2010 13:00
Dynamox,
I cleared the messages from /var/log/messages. It hardly makes any difference.
How can I fix it, since I don't get this problem on my other Celerras? Please suggest.
sebbyr
99 Posts
0
November 25th, 2010 05:00
What level of code are you running? What cron jobs are you running, if any?
Thanks
Sebby Robles
eServices Support
deeppat
261 Posts
0
November 25th, 2010 06:00
It's running code 5.6.45-5,
and no cron jobs are running.
sebbyr
99 Posts
1
November 26th, 2010 10:00
Please engage support via a CHAT, or by opening an SR. You need to determine why this is filling up first. There may be files or jobs that you are not aware of.
Thanks
Sebby Robles
eServices Support
EMC Celerra Support
deeppat
261 Posts
0
November 26th, 2010 10:00
Sebby, I'll have to do that eventually.
First, though, I want to know if I can fix it some way myself.
Dynamox and EMC_Rainer - could you guys please help me here?
dynamox
1 Rookie
1 Rookie
•
20.4K Posts
1
November 26th, 2010 14:00
su to root (su, not su -)
cd /
du -hs *
save these numbers and run this again the next day or so to track which directories are growing the most.
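The save-and-compare step above can be scripted. A minimal sketch: take a dated snapshot of per-directory usage, then diff two snapshots taken a day apart to see what grew. I use `du -sk` rather than `-hs` so the numbers diff cleanly; `TARGET` and `SNAPDIR` are my own knobs, not anything Celerra-specific.

```shell
#!/bin/sh
# Snapshot per-directory disk usage so day-to-day growth is visible.
TARGET=${TARGET:-/}
SNAPDIR=${SNAPDIR:-$HOME}
snap="$SNAPDIR/du-$(date +%Y%m%d).txt"
# -k reports kilobytes (plain numbers, easy to diff); sort by path so
# two snapshots line up entry for entry.
du -sk "$TARGET"/* 2>/dev/null | sort -k2 > "$snap"
echo "wrote $snap"
# A day later:  diff "$SNAPDIR"/du-<day1>.txt "$SNAPDIR"/du-<day2>.txt
```

Run it as root (plain `su`, as dynamox says) so `du` can descend into every directory.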
deeppat
261 Posts
0
December 13th, 2010 12:00
Hello Dynamox,
I've started facing the same issue again in just one week's time. The root filesystem has started filling up to 100% now.
It's an NS-704G with CIFS users only. I am wondering which directories I can remove and how to fix this permanently. I have been monitoring using du -hs *,
but the list is so big that I am unable to locate which directory is actually growing.
Can you please guide me on how I can get this issue fixed? Appreciate your time and help!
thanks for reading
dynamox
1 Rookie
20.4K Posts
0
December 14th, 2010 07:00
Not sure why your list is so big:
[nasadmin@NS80CS /]$ su
Password:
[root@NS80CS /]# cd /
kjstech
1 Rookie
358 Posts
0
December 2nd, 2015 12:00
In my case I have a bazillion files like temp_server_2@cifs.server@1449087851.csv.tmp1449087556060 that crash WinSCP upon entering the /tmp directory, and rm *csv.tmp* results in "argument list too long". Any idea what caused this?
kjstech
1 Rookie
358 Posts
0
December 2nd, 2015 13:00
Whatever it is, I'm using this to clean up files older than 120 days:
cd /tmp
find . -maxdepth 1 -name '*tmp*' -mtime +120 -exec rm -f {} \;
So far I'm down to 84% used on / and counting. Seriously, the file dates ranged from at least 3 years ago up to the present day.
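A side note on the "argument list too long" error above: the shell expands `rm *csv.tmp*` into one enormous command line, which overflows the kernel's argument limit; `find` avoids that because it matches files itself. The `-exec rm {} \;` form works but forks one `rm` per file, so with a bazillion files a batched variant (a sketch, same pattern and age cutoff as above) is noticeably faster:

```shell
# Same cleanup, but xargs batches many filenames per rm invocation.
# -print0 / -0 keep unusual filenames safe; -r skips rm when nothing matches.
cd /tmp
find . -maxdepth 1 -name '*csv.tmp*' -mtime +120 -print0 | xargs -0 -r rm -f
```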