
November 23rd, 2010 13:00

Celerra root file system filling up frequently

Hello ALL,

The root filesystem keeps filling up no matter how often I clear it. When this happens, Usermapper can have trouble recognizing or mapping new users. I cleared it down to below 50%, which let users log in correctly again, but within 10 days it fills up once more. How can I set this up the right way?

Somebody from EMC told me "to make sure not to use the Control Station to store logs or other things; it is an important piece of the Celerra." I am not sure what that means.

Is something configured the wrong way, and how do I fix it? I am not seeing this fill up on my other Celerras.

Please suggest!

=========================================================

Here's the output:

[nasadmin@ ~]$ df -h
Filesystem            Size  Used Avail    Use% Mounted on
/dev/hda3             2.0G  1.3G  591M  70% /
/dev/hda1             122M  8.6M  107M   8% /boot
none                 1013M     0 1013M     0% /dev/shm
/dev/mapper/emc_vg_pri_ide-emc_lv_home
                      591M   21M  540M      4% /home
/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backup
                      827M   65M  721M      9% /celerra/backup
/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backendmonitor
                      7.8M  1.4M  6.0M      19% /celerra/backendmonitor
Please let me know if any other output is needed.

thanks for reading

December 15th, 2010 23:00

deepat,

Can you check the size of the following folders:

     /var/spool/mqueue

     /var/spool/mail

If alerting via SMTP (ConnectHome, Email User, Notifications, etc.) is enabled and mail is being delivered properly, the mail queue shouldn't have old files hanging around; in a healthy environment the "mqueue" folder may even be empty. If that is not the case, not only will the "mqueue" folder back up, but the file /var/spool/mail/root (there is one for each configured user) can grow quite large as it continually collects delivery errors ("unable to deliver message").

Of course, in that case the fix would be to get mail flowing again (SMTP server down? bad recipient email address? etc.). Just a thought.
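A quick way to check both locations in one pass (a sketch; /var/spool/mqueue and /var/spool/mail are the standard sendmail spool paths, which is an assumption about your Celerra release — the SPOOL variable is only there so the commands are easy to try elsewhere):

```shell
# Report the size of the mail queue and mailbox spools, largest first.
# SPOOL defaults to the usual location; override it to test somewhere safe.
SPOOL=${SPOOL:-/var/spool}
du -sk "$SPOOL"/mqueue "$SPOOL"/mail 2>/dev/null | sort -rn
# Count queued messages; a healthy mqueue is empty or near-empty.
ls "$SPOOL"/mqueue 2>/dev/null | wc -l
```

If the count is large and growing, mail is not leaving the box and the queue (plus the bounce notices landing in /var/spool/mail) will keep eating root-filesystem space.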


November 23rd, 2010 14:00

What exactly did you clear? Which files?


November 24th, 2010 05:00

Have a look at this Primus solution:

emc252448

Sameer Kulkarni

eServices


November 24th, 2010 13:00

Dynamox,

I cleared the messages from /var/log/messages. It hardly makes any difference.

How can I fix it, since I don't get this problem on my other Celerras? Please suggest.


November 25th, 2010 05:00

What level of code are you running?  What cron jobs are you running, if any?

Thanks,

Sebby Robles

eServices Support


November 25th, 2010 06:00

It's running code 5.6.45-5,

and no cron jobs are running.


November 26th, 2010 10:00

Please engage Support via a chat or by opening an SR. You need to determine why this is filling up first. There may be files or jobs that you are not aware of.

Thanks

Sebby Robles

eServices Support

EMC Celerra Support


November 26th, 2010 10:00

Sebby, I'll have to do that eventually.

I want to know if I can fix it some way.

Dynamox and EMC_Rainer, could you please help me here?


November 26th, 2010 14:00

su to root (su, not su -)

cd /

du -hs *

Save these numbers and run the same command again a day or so later to track which directories are growing the most.
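The steps above can be scripted so two runs are directly comparable (a sketch; the snapshot_usage helper and the demo directory are illustrative, not anything Celerra-specific — on the Control Station you would point it at / and diff consecutive days' files):

```shell
# snapshot_usage DIR OUTFILE: record per-subdirectory usage in KB,
# sorted by path so two snapshots can be diffed line by line.
snapshot_usage() {
    du -sk "$1"/* 2>/dev/null | sort -k2 > "$2"
}

# Demo on a throwaway directory. On the Control Station you would run
#   snapshot_usage / /tmp/du_snap_$(date +%Y%m%d).txt
# once a day, then: diff /tmp/du_snap_DAY1.txt /tmp/du_snap_DAY2.txt
demo=$(mktemp -d)
mkdir "$demo/a" "$demo/b"
dd if=/dev/zero of="$demo/b/fill" bs=1024 count=64 2>/dev/null
snapshot_usage "$demo" "$demo/snap1.txt"
cat "$demo/snap1.txt"
```

Any directory whose KB figure jumps between snapshots is the one to drill into with another du pass one level down.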


December 13th, 2010 12:00

Hello Dynamox,

I've started facing the same issue again in just one week's time. The root filesystem has started filling up to 100% now.

It's an NS-704G with CIFS users only. I am wondering which directories I can remove and how to fix this permanently. I had been monitoring with du -hs *,

but the list is so big that I am unable to locate which directory is actually growing.

Can you please guide me on how to get this issue fixed? Appreciate your time and help!

thanks for reading


December 14th, 2010 07:00

Not sure why your list is so big:

[nasadmin@NS80CS /]$ su
Password:
[root@NS80CS /]# cd /

[root@NS80CS /]# du -hs *
4.7M    bin
6.0M    boot
366M    celerra
248K    dev
9.0M    etc
22M     flash
4.0K    fs_c
4.0K    fs_r
313M    home
4.0K    i386
4.0K    initrd
48M     lib
16K     lost+found
12K     media
4.0K    misc
12K     mnt
787M    nas
62M     nasmcd
2.4G    nbsnas
4.0K    nix
4.0K    opt
du: `proc/10887/task': No such file or directory
du: `proc/10887/fd': No such file or directory
du: `proc/22419/task': No such file or directory
du: `proc/22419/fd': No such file or directory
du: `proc/22828/task': No such file or directory
du: `proc/22828/fd': No such file or directory
976M    proc
16M     root
13M     sbin
4.0K    selinux
4.0K    srv
0       sys
15M     tmp
563M    usr
220M    var
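If a flat listing like that is hard to scan, sorting it numerically puts the biggest trees on top (a small sketch; the demo directory is illustrative — on the Control Station the one-liner in the first comment is all you need):

```shell
# Largest directories first: du -sk gives one KB figure per entry,
# sort -rn orders them descending, head keeps the top offenders.
# On the Control Station: du -sk /* 2>/dev/null | sort -rn | head
d=$(mktemp -d)
mkdir "$d/small" "$d/big"
dd if=/dev/zero of="$d/big/fill" bs=1024 count=128 2>/dev/null
du -sk "$d"/* 2>/dev/null | sort -rn | head
```

Repeat the same pipeline one level down inside the biggest entry (e.g. du -sk /nbsnas/* | sort -rn | head) until you reach the files that are actually growing.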


December 2nd, 2015 12:00

In my case I have a bazillion files like temp_server_2@cifs.server@1449087851.csv.tmp1449087556060 that crash WinSCP upon entering the /tmp directory, and rm *csv.tmp* results in "argument list too long". Any idea what caused this?


December 2nd, 2015 13:00

Whatever it is, I'm using this to clean up files older than 120 days:

cd /tmp

find . -maxdepth 1 -name '*tmp*' -mtime +120 -exec rm -f {} \;

So far I'm down to 84% used on / and counting. Seriously, the file dates ranged from at least 3 years ago up to the present day.
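For what it's worth, the glob fails because the shell expands every matching name into one giant argument list and trips the kernel's ARG_MAX limit, while find hands the names to rm itself. A sketch of the same cleanup on a demo directory (the 120-day threshold and name pattern come from the post above; the -type f guard is my addition so only files are touched):

```shell
# Demo: create a few aged temp files, then delete the old ones the same
# way the post does. -mtime +120 matches files modified more than 120
# days ago; -type f skips directories. With GNU find, "-exec rm -f {} +"
# batches many names per rm invocation and is faster than "\;".
d=$(mktemp -d)
touch "$d/keep.csv.tmp"
touch -d '200 days ago' "$d/old1.csv.tmp" "$d/old2.csv.tmp"
find "$d" -maxdepth 1 -type f -name '*tmp*' -mtime +120 -exec rm -f {} \;
ls "$d"
```

Note that `touch -d '200 days ago'` is GNU-specific and only used here to fake old timestamps for the demo; the find command itself is the portable part.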
