Well, some docs do say so, but IMHO it really comes down to sessions. A single client can generate as many sessions as 100 other clients combined, so your mileage may vary. Personally, I would deploy a new backup server as soon as I see that I no longer have breathing space between groups and things start to overlap too much. 1000 clients might be OK, or it might be too much. If this is filesystem backup taking at most 30 minutes per client, I can see how to distribute that within 24 hours. If you must run those in a smaller window, it might be too much stress on the server itself (unless it is heavily tuned, which may also include some flash storage for the NSR database). And if 800 of those 1000 clients have databases, expect this poor fellow to get hit even harder. So the number of clients can be an indication, but it is far from a solid pointer; it really comes down to how much your server can take. I normally keep my backup server(s) at 40-60k sessions per day max. For example, on one of my busier servers I check yesterday's count with:
mminfo -q 'savetime>=yesterday 00:00:01,savetime<=yesterday 23:59:59' -r ssid | wc -l
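If the count creeps up, the same query reported per client shows who generates the load. A quick sketch using standard mminfo report fields:

mminfo -q 'savetime>=yesterday 00:00:01,savetime<=yesterday 23:59:59' -r client | sort | uniq -c | sort -rn | head

That lists the busiest clients by session count, which is handy when rebalancing groups between servers.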
When it gets busier than that, I just deploy the next VM backup server (using the same DD) and off it goes. Sizing of the box also matters, of course: CPU and memory, some performance tweaks at OS level, plus a nice and clean distribution of groups and client backup jobs.
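For that distribution, staggering group start times does most of the work. A minimal nsradmin sketch (server and group names are made up, adjust to your environment; nsradmin asks you to confirm each update):

nsradmin -s mynsrserver
. type: NSR group; name: FS-Group-A
update start time: "18:00"
. type: NSR group; name: FS-Group-B
update start time: "20:30"

Each group then kicks off in its own slot instead of everything piling onto the server at once.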
We have around 600 clients and 14k sessions, but the Saturday full backup window is only 6:00 pm to 1:00 am.
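If all 14k sessions land in that window, that is roughly 14,000 / 7 = 2,000 session starts per hour, versus about 580 per hour if they were spread over a full day.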
I can see some serious overlaps.
Is this ideal? I have tried adding more storage nodes, and that might have helped, but I still see that the backup server is really throttled: very slow to respond, etc.
Is it a good idea to set up a new backup server with its own datazone at this point?
Also, I have a group with around 70 clients that all start at the same time. Is that a problem?
I normally do not put more than 20 clients in a group, but I leave those groups unrestricted (e.g. no savegrp parallelism cap). Your server simply might not have enough resources (memory, CPU), and you might need more power or some tweaking at the OS level (depending on which OS it is). If your server is out of capacity hardware-wise, then it makes sense to set up a new datazone.
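If splitting that 70-client group is not practical, capping its parallelism at least stops it launching all 70 saves at once. A rough nsradmin sketch (group name made up; the parallelism attribute on an NSR group limits how many client saves savegrp runs concurrently):

nsradmin -s mynsrserver
. type: NSR group; name: Big-Group
update parallelism: 20

savegrp then starts at most 20 saves and feeds in the remaining clients as sessions finish.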
We tend to go to max 1000 clients, but that's 2500+ client instances defined.

NW 8.2.3 made a big change by switching to a different media db (no more WISS; backported from NW 9), which is heralded by EMC as improving the performance issues. Only too bad that there have also been some major issues in NW 8.2.3 that we ran into and that had to be fixed or are still awaiting fixes: clients with names of 48+ characters no longer worked and could not even be defined; weird NW storage node media requests when a client has more than one NW SN defined, where it starts asking for a device on an SN that does not have a device in the required pool; and now, in a recent 8.2.3.x patch, media db issues resulting in many backup failures. So there is no NW 8.2.3.x yet that we regard as stable enough to actually run on; once a stable one appears, we will definitely go for it.

The new media db also no longer suffers when there are more than 50k ssids on a daily basis. The max on pre-8.2.3 versions is the number of seconds in a day, since every ssid gets a unique nsavetime; with 8.2.3 that is handled on a per-client basis instead of across the whole backup environment, so technically one can have far more ssids per day than before.
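For reference, 24 x 60 x 60 = 86,400 seconds, so that is the old hard ceiling on ssids per day for the whole datazone; per-client nsavetime raises it to 86,400 per client.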
I got an email from support on this, as I had opened a ticket with them when I posted this on the forums. It arrived late yesterday. I will post the abbreviated email below:
Thanks for your patience.
I finally received an answer to your question.
The recommendation is no more than 50 clients per VADP proxy.
Please let me know if you have further questions or concerns.
According to the patch list, the 46+ char bug was fixed in a later 8.2.3.x patch (along with a potential data loss thingy involving older tape cloning records and nsrim, which is kind of a big deal). I'm not sure about the new media db issue in the newest 8.2.3.x build; the servers where I use it (testing phase) are relatively smallish, so I do not see any issues there (the big ones I still run on an older release, which just works fine for me).