DavidHampson-rY
294 Posts
0
November 25th, 2010 05:00
Are your clients configured to use both server and storage node as the storage node?
Max parallelism=0 means there is no limit to parallelism, but that is at the pool level; parallelism can also be set on the server, client, or group.
DavidHampson-rY
294 Posts
0
November 25th, 2010 02:00
You are correct in thinking that when target sessions is reached, NetWorker will move on to the next device. Can you verify that your AFTDs are mounted, not full, and labeled in the Default pool?
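The spillover behaviour described above can be sketched as a toy model (the device names, the fill-then-spill rule, and the fallback choice here are illustrative assumptions, not NetWorker's actual device-selection algorithm):

```python
# Toy model of target-sessions spillover across devices in a pool.
# Illustrative only; this is NOT NetWorker's real scheduling code.

def assign_sessions(devices, target_sessions, n_sessions):
    """Fill each device up to target_sessions before moving to the next."""
    load = {d: 0 for d in devices}
    for _ in range(n_sessions):
        # Prefer a device still below its target-sessions value...
        below = [d for d in devices if load[d] < target_sessions]
        # ...otherwise fall back to the least-loaded device.
        pick = below[0] if below else min(devices, key=lambda d: load[d])
        load[pick] += 1
    return load

# Hypothetical pool: one AFTD on the server, two on a storage node.
pool = ["aftd-server", "aftd-sn1", "aftd-sn2"]
print(assign_sessions(pool, target_sessions=4, n_sessions=10))
```

With target sessions of 4 and 10 incoming sessions, the first device takes 4, the second takes 4, and the remaining 2 spill to the third, which is the behaviour the thread expects but was not seeing.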
_sasha_
6 Posts
0
November 25th, 2010 03:00
I set server parallelism to 64, the majority of clients to 2, and only a few to 4. For all groups, savegrp parallelism is 0 (so it does not override anything), and likewise for all pools max parallelism is set to 0 (which limits the number of parallel sessions per device allowed when saving to this pool). Maybe increasing max parallelism would solve this issue? Otherwise I do not see a problem with the parallelism settings, but maybe I am wrong.
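As a rough illustration of how these layered settings interact (a simplified sketch only, not NetWorker's actual logic; the helper name is made up), the effective ceiling on a client's concurrent sessions is approximately the tightest of the configured limits, with 0 meaning unlimited:

```python
# Simplified sketch of how layered parallelism limits might combine.
# Illustration only; not NetWorker's real scheduler.

UNLIMITED = 0  # in these settings, a value of 0 means "no limit"

def effective_limit(*limits):
    """Return the tightest limit, treating 0 as unlimited."""
    real = [l for l in limits if l != UNLIMITED]
    return min(real) if real else UNLIMITED

# Values from this thread: server=64, most clients=2, savegrp=0, pool=0
print(effective_limit(64, 2, 0, 0))  # the client cap of 2 wins
```

Under this model, a client parallelism of 2 is the binding limit for most clients here, so the pool and group values of 0 would not be what throttles the sessions.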
DavidHampson-rY
294 Posts
0
November 25th, 2010 03:00
I would next suggest you look at your parallelism settings to see if that may be causing an issue.
_sasha_
6 Posts
0
November 25th, 2010 03:00
Before last night's backup, I made sure the AFTDs were staged and had more than enough free space. The AFTDs were, and still are, mounted and labeled in the Default pool.
DavidHampson-rY
294 Posts
0
November 25th, 2010 04:00
That sounds reasonable. Is backup-sun your storage node or the server? Is the problem seen just on the AFTDs on backup-sun or also on the other server/storage node?
_sasha_
6 Posts
0
November 25th, 2010 04:00
backup-sun is the server. Well, it sounds reasonable, but it can't be done. I tried, and max parallelism for the Default pool can't be changed; the error is "NSR pool: 'Default' may not be modified." So that idea is off.
The Default pool has one AFTD on the server and two others on the storage node, so I would say the problem involves both the server and the storage node, because the Default pool is not considering the other AFTDs in the pool.
Is this maybe a limit of the Default pool? (max parallelism=0)
Any other ideas?
_sasha_
6 Posts
0
November 25th, 2010 05:00
I checked the clients and found a few configured with only the server as their storage node; this could be why a group would wait only for the Default pool AFTD on the server. I corrected that. Now I'll wait and see what happens tonight.
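The effect described here can be sketched as follows: a client's storage node list restricts which pool devices its saves can use, so a client that lists only the server can only reach the server's AFTD. This is a toy illustration; the hostnames, device names, and data structures are hypothetical, not NetWorker internals:

```python
# Toy illustration of storage node affinity: a client's save sessions can
# only use devices hosted on a storage node in its storage-node list.
# Hostnames and device names are hypothetical.

pool_devices = {
    "aftd-server": "backup-sun",    # AFTD hosted on the server
    "aftd-sn1": "storage-node-1",   # AFTDs hosted on the storage node
    "aftd-sn2": "storage-node-1",
}

def candidate_devices(client_storage_nodes):
    """Devices a client can write to, given its storage node list."""
    return [dev for dev, host in pool_devices.items()
            if host in client_storage_nodes]

# Misconfigured client: only the server listed -> one eligible device.
print(candidate_devices(["backup-sun"]))
# Corrected client: storage node included -> all pool devices eligible.
print(candidate_devices(["backup-sun", "storage-node-1"]))
```

This matches the symptom in the thread: with only the server in the list, every save from those clients queues on the single server AFTD regardless of how many devices the pool contains.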
_sasha_
6 Posts
0
November 26th, 2010 00:00
After the corrections, the backup went as expected tonight. The problem was that the Windows clients my colleagues added listed both nsrserverhost and the Windows storage node as storage nodes, and some listed only nsrserverhost (which caused the Windows groups to wait their turn for the single AFTD on the backup server). Per our agreed client policy, they should list only the Windows storage node.
Thanks David.