2.4K Posts

April 8th, 2015 08:00

Basically, the storage node installation is simply the server installation without the server package itself.

Let me add that it is a bit strange to set up a clustered storage node in general because

  - You have the storage node list, so another storage node can take over

  - You have Client Direct, so the latest clients can write directly to the disk devices (I assume AFTD or DD Boost devices)

    So at least with disk devices you do not really need a powerful storage node any longer
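For the first point, the failover storage node list is set per client. A rough `nsradmin` sketch (server, client, and storage node names here are made-up examples, not from the original post):

```
nsradmin> . type: NSR client; name: bigclient.example.com
nsradmin> update storage nodes: sn1.example.com, sn2.example.com
```

NetWorker tries the listed storage nodes in order, so if the first is down the next one in the list handles the client's data.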

2 Intern


128 Posts

April 8th, 2015 09:00

Hi,

Actually, this is a dedicated storage node because it has a huge amount of data. It has no disk devices, only a physical tape library (so Client Direct does not apply).

About the recommended procedure, I'm not sure about that. The cluster configuration scripts will try to check for the server packages...

Any suggestion?

Thanks

2.4K Posts

April 8th, 2015 10:00

I have never done that in practice, but I can think of this method:

  - Install a clustered client and, on each node, add the storage node software package.

    If you then define the remote device, the client will automatically spawn the appropriate nsrmmd process.
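Defining the remote device from the server could look roughly like this in `nsradmin` (node name, device path, and media type below are hypothetical; NetWorker addresses remote devices with the `rd=` prefix):

```
nsradmin> create type: NSR device; name: rd=node1.example.com:/dev/nst0; media type: LTO Ultrium-5
```

Once the device is defined and mounted, the storage node host runs its own nsrmmd for it.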

445 Posts

April 8th, 2015 10:00

Polska22,

This is typically not mentioned in the guides because it was either not supported or there was always doubt about what was allowed; I will check what the current advice is. However, there were a few caveats to the different ways you can set this up. For certain, only an active/passive cluster setup was allowed. For clients this did not matter, as devices were not in play. For a NetWorker server you would typically have AFTD rather than physical/virtual tape, and it would be part of the cluster service, so it failed over with the package. The difficulty came when tape was involved, as described below.

Drives were typically presented to both nodes and effectively shared, with only one node accessing them at a time. With the advent of virtual tape libraries this became less and less common, since the number of drives could be increased in software without physical hardware costs. So now it is typically more common not to cluster the storage node at all, but just to cluster the application and use the curphyhost setting for the storage node value, which means: use the devices of whichever node you are running on at the time.

For clustered storage node:

If you define the storage node as the cluster service name (as opposed to one of the physical servers in the cluster), the device paths to the drives have to be the same on each host; otherwise, on failover, NetWorker will find different volumes in different drives and you end up in a mess. Getting the device tree the same on each host is possible, but it can be tricky if the servers do not have identical device setups.
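A quick way to sanity-check that both cluster nodes present the same device tree is to compare their tape device listings before configuring the clustered storage node. A minimal sketch in Python (the paths are hypothetical examples; collect the real ones on each node, e.g. by listing the tape device directory for your OS):

```python
# Compare the tape device paths seen by two cluster nodes.
# The paths below are hypothetical placeholders; substitute the
# actual listings gathered from each node.
node_a = {"/dev/rmt/0cbn", "/dev/rmt/1cbn", "/dev/rmt/2cbn"}
node_b = {"/dev/rmt/0cbn", "/dev/rmt/1cbn", "/dev/rmt/3cbn"}

# Any path present on only one node means the device trees differ,
# which would confuse NetWorker after a failover.
only_a = sorted(node_a - node_b)
only_b = sorted(node_b - node_a)

if only_a or only_b:
    print("Device trees differ:")
    print("  only on node A:", only_a)
    print("  only on node B:", only_b)
else:
    print("Device trees match on both nodes.")
```

In this example the sketch reports `/dev/rmt/2cbn` as present only on node A and `/dev/rmt/3cbn` only on node B, which is exactly the mismatch that would cause NetWorker to find different volumes in different drives after failover.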

If you define the storage nodes as the physical host names, then automated failover does not work: one storage node will be down with the volumes still in its drives, and when the other side starts up it sees the same drives but without the volumes loaded, so manual intervention is required to get the right picture back into play.

You don’t say whether you have a physical library or not. If it is virtual, I would not cluster the storage node package from NetWorker, but just define two storage nodes and set the virtual client to use whichever one it is running on at the time.

Regards,

Bill Mason

1 Rookie


88 Posts

April 9th, 2015 01:00

Hi,

Have you tried "curphyhost" in the Storage Nodes field of the client properties?

"curphyhost" --> current physical host --> you must configure both nodes as storage nodes
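In `nsradmin`, setting this on the cluster virtual client could look something like the following (the client name is a made-up example; curphyhost is the keyword NetWorker resolves to whichever physical node currently hosts the service):

```
nsradmin> . type: NSR client; name: virtclient.example.com
nsradmin> update storage nodes: curphyhost
```

With this in place, backups of the virtual client go to the devices of the physical node it happens to be running on, so no storage node failover logic is needed.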

regards,

-Nicola
