I want to make Dell/EMC support aware of a general issue which has now existed for some years. I first discovered it with NTT 1.4.0. The problem is that for the routine NSRDDRCHK, a configured DD will only be recognized if you run it from a NW/Linux server - there is no luck if you do it from a NW/Windows server. This problem persists up to the current NTT version 1.7.0.
Unfortunately, the examples listed in the User Guide only show the usage on a NW/Linux server - it might well be that the behavior has never been tested with NW/Windows.
Now - details from the log file point to a problem with name resolution and the use of an FQDN instead of an IP address. This is misleading, as I do not use IP addresses to access the DD, and on a NW/Linux server it obviously does not matter at all whether a short name or an FQDN is used.
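To narrow down whether this really is a name-resolution difference between the two server platforms, it helps to compare forward and reverse lookups for the DD hostname on both machines. A minimal sketch for the Linux side is below (the default of `localhost` is just a placeholder - pass your actual DD hostname as the first argument; on Windows, `nslookup` gives comparable information):

```shell
#!/bin/sh
# Compare forward and reverse lookup for the Data Domain host.
# "localhost" is only a placeholder default for illustration.
DD_HOST=${1:-localhost}

# Forward lookup via the system resolver order (nsswitch.conf),
# i.e. the same path local services use.
getent hosts "$DD_HOST" || { echo "no forward resolution for $DD_HOST" >&2; exit 1; }

# Reverse lookup of the first resolved address; a mismatch between the
# name returned here and the configured DD name (short name vs. FQDN)
# is the kind of inconsistency a check routine may stumble over.
IP=$(getent hosts "$DD_HOST" | awk '{print $1; exit}')
getent hosts "$IP"
```

If the short name and the FQDN resolve differently on the NW/Windows server than on the NW/Linux server, that would at least explain why the log points at name resolution.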
Of course I could add screenshots here, but the issue is so easily reproducible that it only takes minutes to verify the behaviour.
As it is of general interest, I thought it might be useful to correct the issue for the benefit of all customers.
With all the best intentions from the internal developers who spend their precious time on NTT, it is almost a shame that actual real-world examples of how it gets used do not seem to be taken into account. Even though I have been in close contact with their NTT team, some things that would make it much more sensible for us to use the tool structurally might not be implemented any time soon.
We, for one, do not have a graphical desktop on any of our NW RHEL backup servers. Neither does NW NVE (virtual edition, with SuSE under the hood). So certain functionality cannot be used, as it requires a browser to click on commands to have them run. These commands themselves can also be run on the CLI (I extracted nsrpchck, for example, and use it on our Linux servers), so to me it makes no sense for the tool to even require a desktop (unless you'd be running on Windows, that is).
Also, the added functionality to run certain things remotely from Windows to Linux does not work as one would expect in a production environment. I don't know about others, but in our case there is no direct root access possible at all. If anything, the tool should offer functionality to specify a specific user ID and then perform commands with sudo on the Linux NW servers, so that it runs only under very controlled RBAC circumstances. Ideally without any passwords, using SSH public key authentication instead of toying away with passwords. Even public key authentication with an SSH passphrase (in combination with keychain, for example) would make much more sense than having to supply passwords in config files or having to enter them at run time...
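The kind of controlled access described above can be sketched with standard OpenSSH and sudo facilities - this is an illustration of what I mean, not existing NTT functionality, and the account name `nttsvc`, the host `nwsrv01`, and the command paths are placeholder assumptions:

```shell
# Hypothetical setup: a dedicated "nttsvc" account runs checks on Linux
# NW servers via SSH keys and a narrowly scoped sudoers rule.

# 1. One-time, on the management host: generate a passphrase-protected
#    key and distribute the public half.
#      ssh-keygen -t ed25519 -f ~/.ssh/ntt_ed25519
#      ssh-copy-id -i ~/.ssh/ntt_ed25519.pub nttsvc@nwsrv01

# 2. On each NW server, restrict what the account may run as root
#    (e.g. in /etc/sudoers.d/ntt):
#      nttsvc ALL=(root) NOPASSWD: /usr/sbin/nsrddrchk

# 3. Run a check remotely: no root login, no stored passwords, and the
#    passphrase can be held by keychain/ssh-agent.
ssh -i ~/.ssh/ntt_ed25519 nttsvc@nwsrv01 'sudo /usr/sbin/nsrddrchk'
```

That way the whole thing stays auditable: the key identifies who connected, and sudoers limits what they could run.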
Also, I see no need to have to install it on each and every NW server. I'd prefer to also have a central method: deploy it on one central management server that performs all the checks towards all landscapes, so that it is easily scalable instead of requiring a deployment on all NW servers.
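A central fan-out of that kind could be as simple as a loop from one management host - again just a sketch of the idea, where the `nttsvc` account, the `nw_servers.txt` host list, and the remote command path are all placeholder assumptions:

```shell
#!/bin/sh
# Hypothetical central runner: execute the same check on every NW server
# listed in nw_servers.txt, collecting one report per host, in parallel.
mkdir -p reports
while read -r host; do
  ssh "nttsvc@$host" 'sudo /usr/sbin/nsrddrchk' > "reports/$host.log" 2>&1 &
done < nw_servers.txt
wait   # wait until all background checks have finished
```

Nothing needs to be installed on the NW servers themselves for this; the management host only needs SSH reachability to the backup back-end.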
So yes, I applaud NTT and what it tries to achieve (that is long overdue!), and the same goes for nsrtools (which is made in the spare time of Dell people). But Dell seems at times to be living in a developer-only world (and with one big flat network at that, where everything is able to connect to everything and there is no separation of customer front-end and backup back-end networks) that is unlike our factual operational world...
Unfortunately, I do not agree.
Of course you can argue about the pros and cons of a GUI, and I fully agree that a GUI is not necessary in a bunch of cases. Even if it follows standards like HTML5, there are plenty of situations where you would prefer an old-fashioned GUI which does not need paging and where, in general, a screen would not consist of 50% useless and/or empty information. But once again - I am more of an old-fashioned guy.
Sure - you can use the command line. But whatever someone offers, the customer expects it to work. Otherwise it will just be considered a failure, the whole benefit is gone, and it will be used against you.
NetWorker interoperability has often been an issue in the past. Knowing the product for 30 years now (and still learning), two bugs come to mind:
As a customer, I am not interested in where the issue was introduced. I simply expect that QA at least has some standard routines (hopefully automated) so that such bugs are detected before the official release. But maybe customers today just have to lower their expectations.
I don't object to a GUI. However, requiring a desktop where the normal expectation should be that there isn't one is something else. My guess is that of almost all the Linux systems available in our landscapes (thousands of hosts), none has a desktop. Some would still have an X-based GUI, depending on the application deployed on them - just like in the past, when we X-forwarded nwadmin to our Windows desktops, which were running some kind of X software.
With that, another old bug comes to mind that was simply turned into a feature: from a specific version onwards (was it NW 7 or 8?), the nwrecover GUI on (Li)nux stopped working, showing some lib error if memory serves me well. In the next release, nwrecover was simply deprecated instead of being fixed.
Another one: after a Windows system was no longer required to remotely update Windows systems from a Linux backup server, it simply did not work. The man pages were still talking about requiring a proxy, whereas the command itself simply didn't work. So no nsrpush possibilities whatsoever. Or is that the one you were referring to?
Or the bug in NW 9.x where nsrdr was actually not possible because the bootstrap backup only partly succeeded?
Those should have been pretty simple to find out during testing, as they are part of daily maintenance... Developers should really be forced to use their own product for some time before releasing it.