We have two physical HP-UX 11.31 servers in an MC/ServiceGuard cluster. This is my situation:
Last week all backups were working properly; however, I then had the cluster file systems fail over from ServerB to ServerA. I can back up ServerB, and the cluster file systems on ServerA, but I can't back up ServerA's local file systems.
Any help would be appreciated.
I have seen 'savefs: nothing to save' caused by the host's name not being included in the /etc/hosts file, or by DNS not resolving the hostname correctly. You might want to check these.
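As a quick way to see what the hosts file actually maps a name to, here is a small sketch (the serverb names are placeholders for your node names):

```shell
# check_name FILE NAME: print "NAME -> ADDRESS" for each matching
# /etc/hosts-style entry (comment lines are skipped).
check_name() {
  awk -v n="$2" '$0 !~ /^[ \t]*#/ {
    for (i = 2; i <= NF; i++) if ($i == n) print n, "->", $1
  }' "$1"
}

# Both the short and the fully qualified names should map to the same
# address on every node (serverb/serverb.example.com are hypothetical):
check_name /etc/hosts serverb
check_name /etc/hosts serverb.example.com
```

If the two calls print different addresses, or one prints nothing, that is exactly the kind of mismatch that produces "nothing to save".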
Hello Bobby and thanks for your help,
I checked the hosts files on both nodes and have no issues with resolution. Both the short and the long names resolve correctly, whether from the hosts file or from DNS.
This all happened after I mounted the clustered file systems on the server. I can back up the clustered file systems by their cluster name, but I can't back up the physical server.
Run savefs -p -D9 and check the output:
'savefs -p -D9 > savefs_out 2>&1'
grep matchbyname savefs_out
This will tell you what is being compared to what. If you have a line that states that 'localhost' is being matched to your hostname, then it looks like a resolution issue. If not, then it's something else. Check also whether the lcmap file is being invoked.
Thanks again for your help. When I grep for matchbyname, I get the following:
clu_hosts_matchbyname() comparing CLUSTERNAME and PHYSICALNAME
Also saw this:
How can I tell if the lcmap is being invoked?
Your matchbyname output looks OK, although I don't have a cluster here to test and confirm that. The lcmap script queries the cluster nodes for information on what to back up. Search for the word 'lcmap' in the savefs output you have; if NetWorker looked for this command and didn't find it, it will say so, and that means the cluster is not being recognised by NetWorker. You could also check your /etc/fstab - maybe compare it to the same file on a working system.

If this doesn't get you anywhere, you should probably open a Service Request so someone in support can have a look at your configuration in detail.
taken from the cluster install guide 7.5:

The NetWorker client software must determine an owning host for any paths that it saves. The NetWorker software determines which mount points an MC/ServiceGuard or MC/LockManager package owns by the entries in the .nsr_cluster file, located in the /etc/cmcluster/ directory. The .nsr_cluster file should have an entry for the NetWorker shared mount point, which is owned by the package owning the shared disk.

To configure the /etc/cmcluster/.nsr_cluster file:

1. Add the name and path of each mount point to the file in the following format, where published_ip_address is the address assigned to the package owning a shared disk. IPv6 addresses must be enclosed in square brackets.
2. Ensure that the ownership and access permissions for the .nsr_cluster file are "read" for World.
3. Additional paths, preceded by colons, can be added as required.

The following is an example of a typical .nsr_cluster file:

networker:192.168.109.41:/vg011
Have you configured this? If not, NetWorker will detect an MC/ServiceGuard cluster and determine that there are no save sets to be backed up.
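For reference, the entry format from the guide excerpt (name:published_ip_address:/path, with additional colon-separated paths allowed) can be sanity-checked with a short sketch. It only handles IPv4 entries, since bracketed IPv6 addresses contain colons themselves:

```shell
# check_entry LINE: exit 0 if LINE looks like a valid IPv4 .nsr_cluster
# entry - a name, a published address, and one or more absolute paths.
check_entry() {
  echo "$1" | awk -F: '
    NF >= 3 && $1 != "" && $2 != "" {
      ok = 1
      for (i = 3; i <= NF; i++) if ($i !~ /^\//) ok = 0
    }
    END { exit ok ? 0 : 1 }'
}

# The example entry from the guide passes the check:
check_entry "networker:192.168.109.41:/vg011" && echo "entry OK"
```

A malformed line (missing path, or a path that isn't absolute) silently makes NetWorker skip the mount point, so a check like this can save some head-scratching.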
I agree with what has been said above. I have 20+ boxes with exactly the same setup, and usually the only reasons it would fail like that are .nsr_cluster not being set up correctly (or the cluster check file not having been touched), or the machine having a network interface that is not listed in the alias list for the given client.
Hello and thanks,
I am still having the issue. My .nsr_cluster works like a charm on my other three clustered nodes, but one is being a pain. The one thing I did change was the world-read permission on the .nsr_cluster file. It still didn't work, though.
Go to that box and post the following outputs:
- cat /etc/hosts
- cat /etc/resolv.conf
- cat /etc/cmcluster/.nsr_cluster
- ll /opt/networker/bin/NetWorker.clustersvr
- savefs -s <fqdn of the backup server> -vpn
From the backup server, do the following:
echo print | nsradmin -p 390113 -i - -s <name of client resource in NW for client which is failing>
Please post the outputs from the above and we can check it out.
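Once you have the nsradmin output, one thing worth checking yourself (per the earlier comment about interfaces missing from the alias list) is that every name the failing node can present appears in the client's aliases attribute. A small sketch, assuming the usual "aliases: name1, name2;" print format; the names used are hypothetical:

```shell
# has_alias ALIASES_LINE NAME: exit 0 if NAME appears in the
# comma-separated aliases attribute printed by nsradmin.
has_alias() {
  echo "$1" | sed 's/^ *aliases: *//' | tr ',;' '\n\n' |
    awk -v n="$2" '{ gsub(/^[ \t]+|[ \t]+$/, "") }
                   $0 == n { found = 1 }
                   END { exit found ? 0 : 1 }'
}

# Hypothetical aliases line from the client resource of the failing node:
aliases="aliases: serverb, serverb.example.com;"
has_alias "$aliases" serverb             && echo "short name listed"
has_alias "$aliases" serverb.example.com && echo "long name listed"
```

Any interface name or hostname form the node might report that is not in that list is a candidate cause for the failed probe.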