Storage nodes are very resource-hungry, as a large amount of data passes through them. If you look through the questions I have asked, you will find some rules of thumb I have gathered, though no one really seemed willing to confirm whether they were good or not. We have found CPU tends to be the limiting factor, since servers tend to have plenty of memory these days (most of our storage nodes have 16 GB+). It might be interesting if you posted the spec of your storage nodes, the throughput you are hoping to get, and the number and type of drives you are hoping to drive...
a) If other applications are running concurrently with the backup on the storage node, they impose an additional load on the system. Heavy swapping or paging activity indicates that the server is CPU- or memory-bound.
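One quick way to spot that swapping on a Linux storage node is to sample the kernel's cumulative swap counters during a backup window. This is only a sketch: `/proc/vmstat` is Linux-specific, and the counter names (`pswpin`/`pswpout`, pages swapped in/out) are the standard kernel ones, not anything NetWorker-specific.

```python
# Sketch: parse the swap-in/swap-out page counters from /proc/vmstat.
# Sampling the counters twice across an interval and dividing by the
# elapsed seconds gives a pages-per-second swap rate; sustained non-zero
# rates during a backup suggest the node is memory-bound.

def swap_counters(vmstat_text):
    """Extract pswpin/pswpout (pages swapped in/out) from /proc/vmstat text."""
    counters = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name in ("pswpin", "pswpout"):
            counters[name] = int(value)
    return counters

# Usage (Linux only): read the file, wait, read it again, and compare:
#   with open("/proc/vmstat") as f:
#       before = swap_counters(f.read())
#   ... sleep N seconds, re-read into `after` ...
#   rate = {k: (after[k] - before[k]) / N for k in before}
```

The same counters are what tools like `vmstat` report in their `si`/`so` columns, so watching `vmstat` output during the backup gives you the same signal without any scripting.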
b) When a NetWorker server is saving a large number of save sets, such as 500 or more, memory consumption can climb steeply. In that event, the parallelism may need to be lowered.
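To get a feel for where to set parallelism, a back-of-the-envelope calculation helps: divide the memory you can spare for NetWorker by an estimated per-session footprint. The 64 MB per-session figure below is purely a placeholder assumption for illustration, not a documented NetWorker number; measure your own nodes under load to get a real value.

```python
# Sketch: estimate a parallelism ceiling from spare memory and an assumed
# per-save-set memory footprint. The 64 MB default is a made-up placeholder,
# NOT a documented NetWorker figure.
MB = 1024 * 1024

def max_parallelism(spare_memory_bytes, per_session_bytes=64 * MB):
    """Largest number of concurrent save sets whose estimated footprint
    fits within the memory set aside for the backup software."""
    return spare_memory_bytes // per_session_bytes

# e.g. 8 GB spare at ~64 MB per session gives a ceiling of 128 sessions:
# max_parallelism(8 * 1024 * MB)  -> 128
```

Whatever number this suggests, treat it as an upper bound and back off from it if you still see swapping once backups are running.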
For best results, install the maximum amount of memory that your machines can support, especially for the NetWorker server and storage nodes.