In our environment, NFS-mounted disks do not get backed up with saveset All. I believe that is by design in NetWorker, unless you change it with the -L (scheduled) or -x (client-initiated) switch on the save command.
NetWorker won't back up an NFS mount by default unless this has been explicitly configured (speaking of saveset All or client-initiated backups). Another factor could be how the mount is presented in (v)fstab, but even then the backup should not happen. Can you tell us how the backup is performed, what the save command looks like, what /etc/vfstab (if Solaris) or /etc/fstab (other UNIX) contains, and the output of the mount command, so we can see what is mounted on the first system and how?
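When you collect that information, a quick way to see which vfstab entries are NFS (and therefore candidates for being picked up) is to filter on the filesystem-type field. A minimal sketch — the sample file contents below are made up for illustration; on a real Solaris client you would point awk at /etc/vfstab instead:

```shell
# Write a sample vfstab-style file (hypothetical entries, for illustration only).
cat > /tmp/vfstab.sample <<'EOF'
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
dbserver:/drnsprd - /drnsprd nfs - no rw,bg
EOF
# In vfstab, field 4 is the filesystem type and field 3 is the mount point,
# so this prints the mount point of every NFS entry.
awk '$4 == "nfs" {print $3}' /tmp/vfstab.sample
```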
Anyway, to avoid that (even though it should not happen in the first place), you could write a skip directive for the mount point on the second machine.
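For reference, a skip directive goes in a .nsr file at the root of the client filesystem (or in a Directive resource applied to the client). A minimal sketch, using /drnsprd as the mount point since that path appears later in this thread — substitute your own:

```
<< /drnsprd >>
+skip: .?* *
```

The +skip form applies the directive recursively, and the pattern .?* * matches both hidden and regular files, so nothing under that mount point gets saved.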
If the disk is mounted on the second system (no matter how it is mounted), then the NetWorker client software will detect the file system when the savefs command runs as part of a group backup (if backing up the All saveset) and will therefore back it up.
You either explicitly define the savesets used (rather than using ALL) or use a directive so that nothing gets backed up from that file system.
The best way to take this one forward is to run savefs -p on each client and see how the file systems are reported. Also confirm that these clients are being backed up with the All saveset.
This problem has surfaced over the last few months. The software has not been updated recently, but RMAN backups have recently been implemented; in fact, the NFS mounts that are being backed up are the RMAN shares. ps -ef | grep save comes back with multiple instances of commands similar to this one: /usr/sbin/save -s bkupsvr -g scoop -LL -f - -m scoop -l full -q -W 78 -N /drnsprd/
This particular instance is one of the NFS mount points that is being backed up and shouldn't be; as you can see, the -L option is selected. Where is this command line generated, and is there a file somewhere on the server or client that specifies which switches to use for the save command?
As far as I remember, -LL is used by default by savegrp for each saveset, so I do not believe this is your issue. If you go to the client and run savefs -s bkupsrv -vpn, check whether you see your NFS mount point reported. If yes, then it is picked up by savefs from /etc/vfstab (Solaris) or /etc/fstab (other UNIX boxes).
RMAN backups should be done from the DB server itself, not from an NFS share, so I'm a bit lost as to how you have this set up.
The save command is generated depending on the settings of the group and the client. Some of the values can be overridden by setting a save command in the backup command field of the NSR client resource.
Yes, it does pick them up.
To clarify the mention of the RMAN backups: the RMAN backups are done on the server, which then shares out the RMAN mount points so the DBA group can clone databases for development purposes. The reason I mention them is that the last known change was the implementation of RMAN. It seems that only the RMAN NFS mount points are being picked up; the other NFS mount points aren't.
That seems to be the reason why this mount point is picked up. Can you check (v)fstab for the entry for that file system and see if there is any obvious difference between it and the others that are not being picked up?
The entries seem the same. However, this is the failover RMAN server, and there are duplicate entries in vfstab with the mount-at-boot option set to no. One is for the "local" file system while failed over; the other is the NFS mount from the regular server.
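For illustration, such a pair of entries might look like the following (the device names and the rmanprd hostname are hypothetical; the point is the same mount point appearing twice, once local and once NFS, both with mount-at-boot set to no):

```
# /etc/vfstab on the failover node (hypothetical devices and hostname)
/dev/dsk/c1t1d0s0  /dev/rdsk/c1t1d0s0  /drnsprd  ufs  -  no  logging
rmanprd:/drnsprd   -                   /drnsprd  nfs  -  no  rw,bg
```

Which entry savefs treats as the file system then depends on which of the two is actually mounted at probe time.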
Am I right in assuming you are then backing up the "local" one? I guess you can't avoid that, as it is seen as local, short of writing a skip directive for the given mount point.
Unless that would break something else, you can do that as well... though perhaps checking why you have that entry in the first place would be a good thing to do. Given that it is set not to mount at boot, it is evidently not important to have it mounted during the boot-up sequence. Perhaps the Oracle guys could tell you whether they had any requirement for such an entry.
(Thread participants: ble1, DavidHampson, and lrawling; posts dated April 12th–19th, 2006.)