Maurya555
19 Posts
0
June 6th, 2017 09:00
They start, but then they just sit there doing nothing. The "percentage completed"
stays at 0%, like in the screen shot below.
bingo.1
2.4K Posts
0
June 6th, 2017 09:00
By default a group starts every 24 hours, so if a group did not start at all, the whole schedule would be thrown off. I don't believe that is what is happening here.
However, it may well be that a subprocess of a started group for a certain client will not run for a specific reason.
But what can anyone suggest without knowing your configuration?
If I had such a problematic client, I would first try to isolate it and give it a separate group of its own, just to test and verify the problem.
ledugarte
96 Posts
0
June 6th, 2017 11:00
Hello.
Have you made any changes in your environment? Is this new behaviour?
Take a look in the daemon log.
Sometimes I have seen unstable behavior in this version. You should upgrade to the latest version.
https://emcservice.force.com/CustomersPartners/kA2j0000000kAXPCA2
bingo.1
2.4K Posts
0
June 7th, 2017 03:00
Make sure that the client(s) can be contacted and that they answer as expected. From the server, run:
nsradmin -p nsrexec -s <client>
If you see the prompt ("nsradmin>"), then it is o.k. - just press "q" to quit.
Abort the group, then run the following command to see whether the first step (the probe) runs at all:
savegrp [-l level] [-c client] -pv -G <group>
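A minimal sketch of the two checks as one script, assuming hypothetical client names ("clientA", "clientB") and a hypothetical group name ("Problem_Group"); the nsradmin and savegrp invocations are the ones from the steps above, and with DRYRUN set the script only prints the commands for review:

```shell
#!/bin/sh
# DRYRUN=echo prints each command instead of running it; clear the
# variable to actually execute. Client and group names are placeholders.
DRYRUN=echo

# Step 1: verify each client's nsrexecd answers when contacted from the server
for client in clientA clientB; do
  $DRYRUN nsradmin -p nsrexec -s "$client"
done

# Step 2: probe the group without backing anything up (-p), verbosely (-v)
$DRYRUN savegrp -pv -G Problem_Group
```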
ble1
4 Operator
•
14.4K Posts
0
June 8th, 2017 04:00
I don't know what your clients are, how much load they produce, or how many clients you have per group (which is also a factor to consider), but I always size my datazones to have no more than 800 clients (strictly speaking, 800 client definitions, which means fewer actual clients). In the past I could see some performance impact when I crossed the 600 mark, but that is an illusion: the real load comes not so much from the client count as from the load the clients produce.

Next, your groups might have too many clients, and perhaps the flood of streams hitting the server at the TCP level is simply too much to handle (depending on the optimization you have done). My environment is mixed, meaning at least 60% of my clients are database servers, so at most the remaining 40% are plain file system backups.

Finally, I load balance the schedules within groups. For example, I place no more than 20 clients in a group and make sure only 4 at a time have their full backup on a specific day: 4 run fulls on Monday, 4 on Tuesday, and so on. I don't run backups on weekends, but you might have a different approach; perhaps you are one of those who run full backups only on weekends, and then the server goes bazinga.
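The staggering described above can be sketched as a quick script. This is only a dry-run sketch that prints a proposed assignment; it changes nothing in NetWorker, and the client names "client01" through "client20" are hypothetical:

```shell
#!/bin/sh
# Spread full backups for a 20-client group across Mon-Fri so that
# only 4 clients run a full on any given day (round-robin assignment).
i=0
for n in $(seq -w 1 20); do
  set -- Monday Tuesday Wednesday Thursday Friday
  shift $((i % 5))        # client 1 -> Monday, client 2 -> Tuesday, ...
  echo "client$n: full on $1"
  i=$((i + 1))
done
```

Each weekday ends up with exactly 4 clients, which is the 20-clients-per-group, 4-fulls-per-day balance described above.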
Maurya555
19 Posts
0
June 8th, 2017 06:00
I have around 12 groups, each with around 100 VMs in them. We do full backups daily, retained daily for 2 weeks and monthly for 5 years. We opened a case with EMC support, and it turned out VADP was not running in SAN mode but in NBD mode instead. We fixed that issue and the groups are running smoothly now; let us see how it goes. We also increased the proxy node parallelism to 100 for all 5 vProxy servers. It looks like that helped too.
bingo.1
2.4K Posts
0
June 8th, 2017 12:00
It would have been helpful if you had mentioned VADP from the very beginning.