
Unsolved

27 Posts


May 3rd, 2021 23:00

disk is not available 2-3 times a week

Hello all,

Several disk devices have been created in NetWorker (NetWorker 19.1.0.3.Build.112):
# nsrmm -C
Data Domain disk Index.001 mounted on xxx.xxx.xx_Index, write enabled
Data Domain disk Oraclefc.001 mounted on xxx.xxx.xx_Oracle-fc, write enabled
Data Domain disk Dataeth.001 mounted on xxx.xxx.xx_Data-eth, write enabled
Data Domain disk FA.002 mounted on xxx.xxx.xx_FA, write enabled
Data Domain disk dd.001 mounted on xxx.xxx.xx_dd, write enabled
Data Domain disk Datafc.001 mounted on xxx.xxx.xx_Data-fc, write enabled
Data Domain disk Oracleeth.001 mounted on xxx.xxx.xx_Oracle-eth, write enabled

All disks except the Oracle ones are on one MTree in Data Domain.
The Dataeth.001 disk becomes unavailable 2-3 times a week.
To continue working, the Dataeth.001 disk has to be remounted.
There are no problems with the rest of the disks.
How can I get rid of this problem?
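For reference, the remount I do each time is roughly this (nsrmm -u to unmount, then nsrmm -m to mount the device again; the device name is from the listing above):

# nsrmm -u -f "xxx.xxx.xx_Data-eth"
# nsrmm -m -f "xxx.xxx.xx_Data-eth"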

2.4K Posts

May 4th, 2021 03:00

All active devices are constantly monitored by nsrmmd. So if you check the daemon.raw file on the NW server, you might find the time and the reason for the problem.
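For example, render the raw log and search for the affected device (the path below is the usual default on Linux; adjust it to your install):

# nsr_render_log /nsr/logs/daemon.raw | grep -i Dataeth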

Besides NW, it could also be a problem with an old DD OS version.
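On the Data Domain CLI, something like this shows the installed release:

# system show version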

 

May 9th, 2021 12:00

NW 19.1.0.3 is also almost 2 years old (released June 2019). Any specific reason to stick with that version?

There are simply too many fixes, even within the NW 19.1 tree, to sum them all up. 19.1.1.3 is from February 2020, while 19.2.1.4 was released in January 2021.

I'm not saying to upgrade for upgrade's sake, but staying on a two-year-old version doesn't read like regular patching to me.

We plan to update twice a year (or more often when a DSA or DTA recommends it) and are rather defensive: we mostly run NW 18.2, except for some environments running NW 19.3. Only with the NW NVE appliances did we go for 19.4 right away.

Without any error messages, or any details showing how you set things up (for example, what that one Oracle device that is not part of the same MTree is about), it is hard to say more. I also see some eth and fc references in the device names, so I assume you have fibre-connected DD access as well, besides the network?

It's also not clear whether you use any NW storage nodes to keep the NW server from being overloaded in case clients cannot perform client direct backups and have them performed by the NW storage node instead. We prefer to use dedicated VMs for that, one NW SN for each location.
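If you want to check which storage node(s) a client is set to use, an nsradmin query along these lines should show the "storage nodes" attribute (yournwserver and client01 are just placeholder names):

# nsradmin -s yournwserver
nsradmin> . type: NSR client; name: client01
nsradmin> show storage nodes
nsradmin> print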
