fernadoroves
44 Posts
October 4th, 2010 06:00
Using only one volume of a pool per backup session
Hello
I have a pool with 2 volumes in it. I'm having a problem when doing backups with this pool: it uses both volumes for the backup. I only want it to use one volume; the other should be used just in case the first one becomes full during the backup.
Does anyone know how to overcome this behaviour?
Greetings



coganb
736 Posts
October 4th, 2010 07:00
Hi,
You can increase the target sessions value for the drive. If, for example, you have a savegrp parallelism of 6 for the group backing up to the pool and the target sessions is set to 7 for the tape drive, then there will never be too many streams for the device and it will not move on to the other tape until the first tape is full.
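For example, an untested sketch with nsradmin (the server and device names here are placeholders, so substitute your own):
nsradmin -s backupserver
. type: NSR device; name: /dev/rmt/0cbn
update target sessions: 7
You can make the same change through the device properties in the administration GUI.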
-Bobby
DavidHampson-rY
294 Posts
October 4th, 2010 07:00
You can increase the target sessions on your devices so that backup streams continue to go to one drive until that value is reached, after which NetWorker will use an additional drive.
Mark_Bellows
240 Posts
October 4th, 2010 08:00
osk,
David and coganb are correct. Increasing the target sessions will help resolve this issue.
It may take a couple of adjustments to get it to the correct level, so don't give up if it doesn't work the first time through.
Mark
fernadoroves
44 Posts
October 4th, 2010 08:00
ok thank you guys!
DavidHampson-rY
294 Posts
October 4th, 2010 08:00
The only thing that may trip you up is if you end up with more sessions than your target sessions value, which will mean the next tape gets mounted in another drive. If that can happen, it would be necessary to take a broader look at your configuration to decide the best way to tackle this.
wlee
263 Posts
October 4th, 2010 20:00
How about modifying the pool property called "max parallelism"?
The man page for nsr_pool explains this parameter:
max parallelism (read/write, hidden)
This attribute can be used to impose an upper limit on the number of parallel sessions saving to media belonging to the pool. Fewer parallel save sessions written to media reduce the time required to recover data from a save set. A value of zero imposes no limit on the number of parallel save sessions written to media belonging to this pool.
So... if I set the device target sessions=12 and the pool max parallelism=12, then NetWorker will only use one tape at a time from that pool.
Never tried it... but I think it sounds right ...
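A rough sketch of how that might look with nsradmin (server and pool names are placeholders); since max parallelism is a hidden attribute, you have to enable hidden attributes first:
nsradmin -s backupserver
option hidden
. type: NSR pool; name: MyPool
update max parallelism: 12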
DavidHampson-rY
294 Posts
October 4th, 2010 20:00
The Max Parallelism attribute is there to prevent NetWorker multiplexing too many save sets to one tape (and thus taking longer to read the data back in the event of a recovery), which is probably not what we want in this case. Target sessions is the minimum number of data streams (save sets) sent to a device before new streams are diverted to another device.
In the suggestion you make, Wallace, the first device will accept save streams until it is receiving 12 streams of data; at that point we are at our target sessions value, so NetWorker will direct the 13th stream to another device (if one is available). We have also reached the max parallelism limit, so that parameter acts as a limiting factor too. The difference is this: without max parallelism set, once all devices have reached their target sessions value, any new save streams are shared out amongst the drives, so the number of save sets going to a device can exceed its target sessions value (but only when every device in the system has reached its target sessions value). With max parallelism set, once all devices reach that limit, new save sets queue until other save sets complete backing up.
wlee
263 Posts
October 4th, 2010 22:00
Hypothetically, you have a group with x save streams to back up.
So savegrp starts, which tells nsrd to choose the pool with max parallelism=12.
There is a tape mount request for that pool.
The tape is mounted.
Savestreams are channeled through the nsrmmd to the tape drive. The initial number of savestreams channeled cannot be higher than the device target sessions value, and also cannot be higher than the pool's max parallelism.
If all the save streams are processed at this point, then the backups will only use one tape at a time.
If target sessions=max parallelism and there are more streams, they have to wait, because max parallelism prevents any new save sets from being processed. Because of this limit, there is no need for NetWorker to load another tape. Again, only one tape will be used at any time.
Using my example of target sessions=max parallelism=12, the 13th stream cannot be processed on that pool because max parallelism allows only 12 streams.
Now I agree that this is not the ideal solution because you are hard coding the target sessions on the drive, and this is not necessarily optimal in general.
The only other alternative is to change the pool properties so that it can only use one backup device; this ensures that only one tape can be loaded. However, this can also cause problems if other pools are allowed to use that same drive too.
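If you try that route, a sketch along these lines with nsradmin might work (pool and device names are placeholders, and this assumes your version exposes the pool's devices attribute):
nsradmin -s backupserver
. type: NSR pool; name: MyPool
update devices: /dev/rmt/0cbn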
fernadoroves
44 Posts
October 5th, 2010 05:00
Hi guys
Well, today the backup ran and NetWorker used only one volume of the two in the pool. Still, I noticed that it used the volume that was empty, and not the one that has information on it (it's at 75% use). Why did NetWorker choose the empty volume (this volume is newer than the other one) instead of the one that still had space on it for the backup?
thanks!
DavidHampson-rY
294 Posts
October 5th, 2010 07:00
NetWorker will use appendable volumes in the same pool first. If I remember correctly, when several appendable volumes are available it will choose the one with the lowest mounts value (I think there is a knowledge base article on this, so you could try searching for it, as I don't have access). Since the empty tape has been labelled into the pool and is thus appendable, it can also be mounted.
fernadoroves
44 Posts
October 5th, 2010 07:00
But my idea is to use the one that has data on it, so that when it becomes full during a backup, NetWorker would use the empty one.
How can I achieve this?
thanks!
DavidHampson-rY
294 Posts
0
October 5th, 2010 07:00
Thanks for correcting me, Wallace; I forgot that max parallelism is set on the pool, not the device. Yes, that should work.
wlee
263 Posts
October 5th, 2010 18:00
The only way I know for sure is to pre-load the tape you want to use before starting a backup. Then when the backup starts and wants an appendable volume from that pool, it will see that a tape is already loaded and start using it.
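For example, to pre-load and mount a specific volume in a jukebox (the volume and drive names here are placeholders):
nsrjb -l -f /dev/rmt/0cbn MyVolume.001
Leaving off -f lets NetWorker pick the drive.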
Historically, NetWorker was designed to use older tapes before new tapes. So if you have a partially written tape and a brand-new tape, both from the same pool, and a backup starts and wants a tape from that pool, NetWorker should pick the first one, not the new tape. I am not sure if this behaviour has changed in current versions.
However, assuming that NetWorker is still designed to use older tapes first, you can compare the dates on which the two volumes were first labeled. See which one is older. Is this the tape it chooses?
mminfo -mvV -r "volume,olabel,labeled,recycled" volume1 volume2
where:
olabel: The first time the volume was labeled
labeled: The most recent time the volume was labeled
recycled: Number of times the volume was relabeled
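You could also add the mounts field to the same report to see how many times each volume has been mounted (assuming your version supports that attribute):
mminfo -mvV -r "volume,olabel,labeled,recycled,mounts" volume1 volume2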
DavidHampson-rY
294 Posts
October 5th, 2010 19:00
There is only one way you can ensure the tape with the most data gets used first, and that is to load it first. Tapes get used in (from memory) this order:
Appendable tapes loaded
Appendable tapes not loaded
Expired tapes available for recycling
Unlabelled tapes, if Auto Media Management is enabled
I've just taken a quick look at the manual to verify this and found this info:
---------------
If two or more volumes from the appropriate pool are available, the server uses this hierarchy to select a volume:
1. Mounted volumes from the appropriate pool with the mode appendable are selected. This includes newly labeled volumes. If more than one mounted volume is appendable, the server uses this hierarchy:
a. Device availability.
The server writes to the volume from the appropriate pool that is mounted on the device with the fewest current sessions.
b. Volume label time.
The server writes to the volume with the oldest label time if the mounted volumes are appendable and session availability is not an issue.
2. If a library is in use and there is no mounted, appendable volume in the library, the server determines whether there is an unmounted, appendable volume available. This includes newly labeled volumes. If multiple unmounted, appendable volumes are available, the volume with the oldest label time is selected.
3. If no mounted volumes are appendable and Auto Media Management is enabled, a mounted volume with the mode recyclable is selected. The server relabels and mounts the volume.
Note: A volume is automatically set to recyclable when all save sets on the volume, including partial save sets that span other volumes, are marked as recyclable.
If a stand-alone device is being used and Auto Media Management is not enabled, the server sends a mount request notification.
4. If a library is in use and no unmounted, appendable volumes exist, the server determines whether there is an unmounted, recyclable volume.
If Auto Media Management is not enabled, or if there are no appendable or recyclable volumes, the server sends a mount request notification.
---------------
So, if multiple appendable tapes are available, it should select the oldest labelled one first; from that we should expect it to write to the oldest one until it is full, then move on to the next one. Have we been seeing the opposite of that, or is it just something we think may happen?
DavidHampson-rY
294 Posts
October 6th, 2010 05:00
Don't blame me, I only quoted the manual.
I always thought that the selection was based upon the volume with the fewest mounts when more than one appendable tape was available. Try verifying this with:
mminfo -q volume=volumename -r mounts
and see if that might be the case. Perhaps one of the EMC guys can give us an official answer on this?