Is there a document specifically about optimizing backups to CloudBoost? Even though I'm using parallel save streams my backups are still running way slower than I would have expected. This is for both backups with lots of small files (e.g. Windows filesystem) and backups with a few large files (e.g. Exchange database .edb files).
There are a lot of factors that can contribute to slower performance: the obvious ones are WAN bandwidth and latency, but lesser-known ones include average chunk size and file type. With the introduction of CloudBoost 2.2.2, we now support increasing the average chunk size to 1 MB, which can improve WAN throughput but may have a negative impact on your deduplication ratio. Generally, database files tend to see both lower performance and lower deduplication rates.
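To see why chunk size trades off against deduplication, here is a back-of-the-envelope sketch. It uses simple fixed-size chunking on synthetic data, which is only an illustration of the general principle, not CloudBoost's actual chunking algorithm: larger chunks are less likely to line up with repeated data, so fewer of them deduplicate.

```python
import hashlib

def dedup_ratio(data: bytes, chunk_size: int) -> float:
    """Chunk `data` at a fixed size and return the fraction of
    unique chunks (lower = better deduplication)."""
    hashes = {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    }
    total = -(-len(data) // chunk_size)  # ceiling division
    return len(hashes) / total

# Synthetic stream: a repeating 512 KB pattern (256 KB payload +
# 256 KB filler), repeated 8 times = 4 MiB total.
block = bytes(range(256)) * 1024            # 256 KB payload
data = (block + b"x" * 262144) * 8

small = dedup_ratio(data, 256 * 1024)       # 256 KB chunks
large = dedup_ratio(data, 1024 * 1024)      # 1 MB chunks
```

On this contrived data the 256 KB chunks land exactly on the repeats and dedup much better than the 1 MB chunks, even though the 1 MB chunks would cost fewer round trips over the WAN. Real backup data rarely behaves this cleanly, which is why the logs are needed to see your actual chunk statistics.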
Another effect on performance could be your data path. Without knowing more about your setup, you may want to consider a client-direct model, or routing through a NetWorker storage node (v9.1 and higher): the latest storage nodes have the CloudBoost client SDK built in and can improve speeds by offloading processing to these devices.
While increasing the number of parallel save streams is useful, you can also add multiple CB devices within NetWorker NMC that point to a single CloudBoost appliance. Generally you'll want multiple devices in NetWorker, each carrying multiple streams.
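The reason devices-times-streams helps over a WAN comes down to the bandwidth-delay product: a single TCP stream can move at most roughly window/RTT, so on a high-latency link you need many streams before the pipe itself is the bottleneck. A rough model (the window size, RTT, and link speed below are hypothetical example values, not measurements from your environment):

```python
def aggregate_throughput_mbps(streams: int,
                              window_kb: float = 64,
                              rtt_ms: float = 50,
                              link_mbps: float = 1000) -> float:
    """Rough ceiling: each stream moves at most window/RTT
    (bandwidth-delay product), so parallel streams add up
    until the WAN link itself saturates."""
    per_stream = (window_kb * 8) / rtt_ms  # Kb per ms == Mb per s
    return min(streams * per_stream, link_mbps)

one = aggregate_throughput_mbps(1)    # ~10 Mbps on a 50 ms link
four = aggregate_throughput_mbps(4)   # 4 streams ~= 4x
many = aggregate_throughput_mbps(500) # capped by the 1 Gbps link
```

With a 64 KB window and 50 ms RTT, one stream tops out around 10 Mbps no matter how fast the link is, which is why a single save stream over a WAN so often disappoints.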
One last detail is resources. What resources can the CloudBoost appliance itself utilize? Meaning, are you provisioning more than the minimum recommended CPU and RAM? An often-overlooked point is the /lvm disk (disk #2): it really needs to be on SSDs due to the number of IOPS required by the frequent SQL writes and lookups.
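As a rough illustration of why that disk gets hammered (this is a simplified model, assuming at least one metadata index operation per chunk, not CloudBoost's exact internal accounting): the number of index lookups/inserts scales inversely with average chunk size, so small-file workloads with small average chunks generate far more metadata I/O per GiB ingested.

```python
def index_ops_per_gib(avg_chunk_kb: int) -> int:
    """Assume at least one metadata index operation per chunk:
    metadata I/O per GiB scales inversely with chunk size."""
    return (1024 * 1024) // avg_chunk_kb

ops_256k = index_ops_per_gib(256)   # 256 KB chunks -> 4096 ops/GiB
ops_1m = index_ops_per_gib(1024)    # 1 MB chunks   -> 1024 ops/GiB
```

Multiply that by a multi-TB nightly window and the sustained random-I/O load explains why spinning disks on disk #2 become the bottleneck.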
I understand this may be vague, but as noted, more details are needed, so feel free to post them. Likewise, chunk size will be an important factor, but we'd need the logs from /nsr/logs/cloudboost to identify the actual chunk statistics.