We are fairly new to the NetWorker 8.1 PSS (Parallel Save Streams) feature. We are trying to put together an optimized backup of a specific Oracle DB server with a complicated disk layout: a single client with many file systems, which must stay in a single group (because it involves a savepnpc pre/post script), has two (or potentially more) large file systems that need PSS, and many other smaller file systems that are fine without PSS (i.e. single-stream).
For the non-PSS save sets we created one client resource, and for the PSS save sets another client resource.
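For illustration, the split could look something like the hypothetical nsradmin session below. All names and save-set paths are placeholders, and the "parallel save streams per save set" attribute name is our reading of the 8.1 client resource; verify against your own server before using it.

```
# Hypothetical nsradmin sketch, not taken from our actual configuration.
nsradmin> create type: NSR client; name: oraproxy;
          save set: /small_fs1, /small_fs2;
          parallel save streams per save set: Disabled
nsradmin> create type: NSR client; name: oraproxy;
          save set: /big_fs1, /big_fs2;
          parallel save streams per save set: Enabled
```

Both resources share the same client name so they can belong to the same group; only the second one has PSS enabled.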
The number of devices is limited (preferably to 4), and those devices need to be single-streamed: they are VTL virtual tape drives, kept single-streamed for a better dedup ratio.
The client parallelism is set to a high value (much greater than the number of available devices), because the "client" that actually does the backup is a backup proxy server holding snapshot "disks" of multiple actual DB servers, and those DB servers have varying disk/file-system layouts (one or more large PSS file systems, plus a few or many smaller ones). Furthermore, we expect the fastest performance from the maximum number of parallel streams per PSS save set, i.e. 4. The high client parallelism setting is meant to accommodate this varying number of save sets (e.g. number of PSS save sets × 4).
The actual problem: because of the high client parallelism, the PSS save sets (e.g. 2 of them) each start with 4 streams, and 4 of those 4+4=8 streams are allocated the 4 available devices. But then, since no further devices are available, and a PSS save set does not start sending backup data until all 4 of its streams have been allocated a device, we get something like a deadlock: neither of the 2 PSS save sets actually backs up anything or makes progress; both sit waiting for devices that will not become available until the group is cancelled.
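A toy model (not NetWorker code, just a sketch of the allocation pattern described above) makes the deadlock easy to see: if the simultaneously started save sets grab devices in an interleaved fashion, neither can ever assemble a full set of 4 streams.

```python
def allocate_round_robin(num_savesets, streams_per_saveset, devices):
    """Model PSS save sets contending for single-streamed devices.

    Streams from all save sets start at once, so devices are handed out
    round-robin.  A save set can begin writing only once ALL of its
    streams hold a device.
    """
    held = [0] * num_savesets   # devices held by each save set
    free = devices
    while free > 0:
        progressed = False
        for i in range(num_savesets):
            if free > 0 and held[i] < streams_per_saveset:
                held[i] += 1
                free -= 1
                progressed = True
        if not progressed:       # every save set fully allocated
            break
    writers = [h == streams_per_saveset for h in held]
    return held, writers

# 2 PSS save sets, 4 streams each, 4 devices:
print(allocate_round_robin(2, 4, 4))  # → ([2, 2], [False, False]): deadlock
# 1 PSS save set at a time gets all 4 devices and can run:
print(allocate_round_robin(1, 4, 4))  # → ([4], [True])
```

The second call shows the behaviour we are after: run one PSS save set at a time, and the 4 devices are enough.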
What we actually want is for the PSS save sets (of the same client, belonging to the same group) to be backed up one after the other, with 4 parallel streams each time for best performance.
I am not quite sure this can be achieved this way, without separate groups...
I think the solution lies in multiple clients and/or groups.
Will the problem reproduce if you configure 2 clients with 1 save set each in the same group?