June 6th, 2017 07:00

Networker memory requirement

I have NetWorker 8.2.2.6 on a server with a 2.5 GHz CPU, 24 cores and 32 GB of memory, backing up to Data Domain. There are around 165 groups and 2600 clients. Some jobs wait for days before starting. I have tried different combinations of server parallelism, savegroup parallelism, device target sessions and max sessions, and I am still not getting good session starts; groups wait one or two days before starting. What is the logic behind parallelism? Why am I not getting good sessions?

I ran vmstat and it looks like there is a lot of swapping ...

[amaurya@dccbrmsta ~]$ vmstat 1 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0 669588 5051176 913076 21007152    0    0   135   178    2    1  8  2 90  0  0
 4  0 669588 5050272 913076 21007368    0    0     0  1360 8899 5073 15  3 82  0  0
 6  0 669588 5051460 913076 21007476    0    0     0   644 7882 4500 14  3 82  0  0
 4  0 669588 5053084 913084 21007600    0    0     0   456 7877 4836 14  3 83  0  0
 4  0 669588 5054036 913084 21007848    0    0     0  1244 8141 4309 14  4 82  0  0
 4  0 669588 5052812 913084 21007992    0    0     0    80 7364 4249 14  3 83  0  0
 3  0 669588 5052300 913084 21008028    0    0     0 14476 7325 4072 13  2 85  0  0
 5  0 669588 5053532 913084 21008256    0    0     0   812 5195 3168 13  0 87  0  0
 3  0 669588 5053260 913084 21008404    0    0     0   832 6366 3836 14  0 86  0  0
 4  0 669588 5054044 913084 21008484    0    0     0   776 5534 3517 13  0 87  0  0
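A quick way to read that output: the si/so columns (pages swapped in and out per second) show active swapping, while the swpd column alone only shows how much swap space is in use. A small awk sketch over two of the pasted sample rows (assuming the standard Linux vmstat column order, with si and so as fields 7 and 8):

```shell
# Sum the si/so columns of vmstat output: a sustained nonzero total means
# the machine is paging to swap right now; a static swpd value alone does not.
printf '%s\n' \
  ' 4  0 669588 5051176 913076 21007152    0    0   135   178    2    1  8  2 90  0  0' \
  ' 4  0 669588 5050272 913076 21007368    0    0     0  1360 8899 5073 15  3 82  0  0' |
awk '{ si += $7; so += $8 }
     END { if (si + so > 0) print "actively swapping"; else print "no active swapping" }'
# prints "no active swapping"
```

In the posted sample both columns are zero on every line, which would suggest the swap usage is leftover from an earlier event rather than ongoing paging.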

Any comments?

19 Posts

June 6th, 2017 09:00

They start, but then they sit there doing nothing; the percentage completed shows 0%, as in the screenshot below.

2.4K Posts

June 6th, 2017 09:00

By default a group starts every 24 hours, so if a group does not start, the whole timing will be thrown off. I don't believe that is what is happening here.

However, it may well be that a subprocess of a started group does not run for a specific client for a specific reason.

But what can anybody suggest without knowing your configuration?

If I had such a problematic client, I would first try to isolate it and give it a separate group of its own, just to test and verify the problem.

96 Posts

June 6th, 2017 11:00

Hello.

Have you made any changes to your environment? Is this new behaviour?

Take a look at the daemon log.

I have sometimes seen unstable behaviour in this version. You should upgrade to the latest version:

https://emcservice.force.com/CustomersPartners/kA2j0000000kAXPCA2

2.4K Posts

June 7th, 2017 03:00

Make sure that the client(s) can be contacted and that they answer as expected. From the server, run:

   nsradmin -p nsrexec -s client_name

If you see the prompt ("nsradmin>") then it is OK; just type "q" to quit.

Abort the group and run the following command to see whether the first step (the probe) runs at all:

   savegrp [-l level] [-c client] -pv -G group_name
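With 2600 clients it may help to run that connectivity check across the whole list at once. A minimal sketch, assuming a hypothetical clients.txt with one hostname per line and using the nsradmin command form from above (a client whose nsrexecd answers lets nsradmin exit cleanly):

```shell
# Probe every client's nsrexecd and report which ones do not answer.
# clients.txt is a placeholder; build it from your client resources.
while read -r host; do
  if echo quit | nsradmin -p nsrexec -s "$host" >/dev/null 2>&1; then
    echo "OK   $host"
  else
    echo "FAIL $host"
  fi
done < clients.txt
```

Clients on the FAIL list are the ones to isolate into their own test group first.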

14.3K Posts

June 8th, 2017 04:00

I don't know what your clients are, how much load they produce, or how many clients you have per group (which is also a factor to consider), but I always size my datazones to no more than 800 clients (more correctly, 800 client definitions, which means fewer actual clients). In the past I saw some performance impact when crossing the 600 mark, but that is an illusion: the real load comes not so much from the count as from the load the clients produce.

Next, your groups might have too many clients, and perhaps the stream attack against the server at the TCP level is just too much to handle (depending on what optimization you did). My environment is mixed: at least 60% of the clients are database servers, so at most the remaining 40% are file-system-only backups.

Finally, I load-balance the schedules within groups. For example, I place no more than 20 clients in a group and make sure only 4 at a time have a full backup on any specific day: 4 have their full on Monday, 4 on Tuesday, and so on. I don't run backups on weekends, but you might have a different approach; perhaps you are one of those who run full backups only on weekends, and then the server goes bazinga.
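The round-robin idea above can be sketched in a few lines of shell: spread the clients of a group across the weekdays so that only a fraction run a full backup on any given day (the client names here are hypothetical placeholders, and the day would be applied via each client's schedule resource):

```shell
# Assign each client in a group a full-backup weekday round-robin,
# so full backups are spread evenly across Mon-Fri.
i=0
for client in cl01 cl02 cl03 cl04 cl05 cl06 cl07 cl08; do
  set -- Mon Tue Wed Thu Fri   # reset positional params to the weekday list
  shift $(( i % 5 ))           # rotate to this client's slot
  echo "$client: full on $1, incremental otherwise"
  i=$(( i + 1 ))
done
```

With 8 clients this yields fulls on Mon for cl01 and cl06, Tue for cl02 and cl07, and so on, instead of all 8 running a full on the same night.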

19 Posts

June 8th, 2017 06:00

I have around 12 groups with around 100 VMs in each. We do a full backup daily, retained daily for 2 weeks and monthly for 5 years. I opened a case with EMC support, and it turned out VADP was not running in SAN mode but in NBD mode. We fixed that issue and the groups are running smoothly now; let's see how it goes. We also increased the proxy node parallelism to 100 on all 5 vProxy servers. It looks like that helped too.

2.4K Posts

June 8th, 2017 12:00

It would have been helpful if you had mentioned VADP from the very beginning.
