4 Operator


14.4K Posts

December 15th, 2005 10:00

Of course, you can also always set one tape drive to read-only in its device properties :)

4 Operator


14.4K Posts

December 15th, 2005 10:00

David,

Max. Parallelism defines the maximum number of backup devices that a jukebox may use for inventory, verification and labeling purposes. NetWorker will always use whatever drives are available for standard read/write operations (ok, forget pool restrictions here, if any). However, since the operations mentioned above are executed by the nsrjb command, parallelizing them means running multiple nsrjb commands at the same time. This is especially useful in scenarios where you want to label a large number of backup media in advance, or where you must run an inventory on the whole library.

After the jukebox installation (jbconfig), this value will automatically be set to the number of jukebox devices minus 1, ensuring that at least one device remains available for other purposes. So you may not even notice slow performance if you have a larger number of drives installed in your jukebox. However, if you have a jukebox with only two or three devices, you may improve performance dramatically by increasing the Max. Parallelism value.
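As a quick illustration (my own sketch, not NetWorker code), the default works out like this:

```python
# Sketch of the jbconfig default described above (not NetWorker code):
# Max. Parallelism defaults to the number of jukebox devices minus one,
# so at least one drive stays free for regular read/write operations.

def default_max_parallelism(num_devices: int) -> int:
    """jbconfig default: one less than the number of jukebox devices."""
    return num_devices - 1

for drives in (2, 3, 8):
    print(f"{drives} drives -> default Max. Parallelism {default_max_parallelism(drives)}")
# A 2-drive jukebox defaults to 1, which is why raising the value
# helps small libraries the most.
```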

What you ask for is something not many of us do in practice, but you may have your own reasons. In short: each client sending backup streams toward the storage node will send the number of streams defined by its client parallelism (default is 4). The group which holds the client(s) may restrict this with group parallelism (default is 0, which means it is not used). These streams then hit the last limit, server parallelism, which by default is 8, I think. The admitted streams (8 by default) are then sent to tapes, load-balanced by the target sessions value (default 4) across each available tape.

Example:
- clientA has 4 streams
- clientB has 4 streams
- they are part of group1
- server parallelism is 8
- target sessions is 4

This is pretty much default.

When the group kicks in, each client will send 4 streams, which is 8 in total. The server accepts that, as it allows 8 sessions; 4 will be sent to the first drive and 4 to the second.
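The chain above can be sketched in a few lines of Python (my own simplification; the names and the fill-one-drive-first placement are assumptions for illustration, not actual NetWorker internals):

```python
# Simplified model of the parallelism chain and drive placement.
# Assumptions: group parallelism 0 means "no limit", and sessions
# fill one drive up to its target sessions before spilling over.

def admitted_sessions(client_streams, group_par=0, server_par=8):
    """Apply client, group, and server parallelism limits in order."""
    total = sum(client_streams)            # each client already capped
    if group_par:                          # 0 disables the group limit
        total = min(total, group_par)
    return min(total, server_par)

def place_on_drives(sessions, drives=2, target_sessions=4):
    """Fill each drive up to target_sessions, then spill to the next."""
    placement = []
    for _ in range(drives):
        take = min(sessions, target_sessions)
        placement.append(take)
        sessions -= take
    return placement

total = admitted_sessions([4, 4])          # clientA + clientB
print(total, place_on_drives(total))       # 8 sessions -> [4, 4]
```

With a third client the server parallelism cap (8) already throttles the total, which is where the queuing described below starts.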

Things get complicated now. If you have more sessions than server parallelism allows, they will be assigned to a particular nsrmmd in advance. Most probably you will have more sessions from a client than its client parallelism allows (that is usually the case with the defaults today), and so on. Then you add more clients and more data, and in the end things will turn out pretty badly with a setup that was not designed to work that way.

To get NetWorker to use just one tape at a time, I see the following approach: set client and server parallelism to the same value, which should also equal target sessions. I believe that alone would fail to work due to some other parameters involved. To avoid those, always keep just one tape labeled and all others blank, with auto media management enabled, so that when one tape is full the next one gets labeled. In that case you would get what you are looking for.

On the other hand, that would lead to bad performance, multiplexing issues and so on, and I would really suggest staying away (far!) from such a concept - it's bad and it's wrong.

Using two tapes at a time is not bad - it is recommended. When you build a backup system, your primary goal is to secure the data, so you must keep restores in mind. Multiplexing data with more than 4 streams is a very painful thing to do. Perhaps it will give you great performance during backup (well, it will), but then during a restore you will feel the breath on your neck of those who want their data back and are wondering why it is so slow. They will mention the SLA, and then they will mention some things you don't wish to hear as well.

If you really, really want this setup and you want it to work, get a disk for backup and then stage to tape afterwards. You won't multiplex the data, and you can easily arrange to use only one drive during the staging process (and the speed will be very nice).

41 Posts

December 15th, 2005 11:00

Like the previous reply... I just disabled one of the tape drives, so only one is available. Seems like a waste, since I can't leave a Full tape in one drive and a Non-Full tape in the other. Guess I'll just alternate enabled tape drives to spread the wear between them.

I just added a storage node with its own SDLT600 autoloader, so I don't need the 2 drives so much any more for speed.

Say, does this new software support site have a Request For Enhancement form?

--
David Strom

4 Operator


14.4K Posts

December 15th, 2005 11:00

Apparently there should be, but right now when I click on it I get "Bug/Feature Request information is unavailable for the product family you selected." (where Storage Software is the selected product family). I would assume you need to open a case and specify that you wish it to be an RFE. Perhaps someone else knows more...

4 Operator


14.4K Posts

December 15th, 2005 12:00

Argh, I found it elsewhere... Use the following link:
http://forms.legato.com/products/enhancement.cfm

41 Posts

December 15th, 2005 13:00

Just happened to talk on the phone to a Legato tech who was following up on a case that was due to be closed. .... And the answer is.... use the device list in the Pool configuration to limit how many tapes can be loaded at once. There could be problems with a bunch of pools whose backup windows overlap, but in a simple Full/Non-Full (plus the little-used Default) setup with 2 tape drives, I can just "assign" Full to one drive and Non-Full to the other, and allow both to access the single tape drive in the storage node autoloader. Maybe switch after six months or a year to spread the wear and tear more evenly.
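For the record, the pool-to-device mapping can also be edited from the command line in the pool resource; an nsradmin session along these lines should do it (the server name and device paths below are placeholders, and the exact attribute names may vary by NetWorker version - treat this as a sketch, not gospel):

```
# Hedged sketch: restrict each pool to one local drive plus the
# storage node drive. "backupserver" and the device paths are placeholders.
nsradmin -s backupserver
nsradmin> . type: NSR pool; name: Full
nsradmin> update devices: /dev/rmt/0cbn, rd=storagenode:/dev/rmt/0cbn
nsradmin> . type: NSR pool; name: NonFull
nsradmin> update devices: /dev/rmt/1cbn, rd=storagenode:/dev/rmt/0cbn
```

Swapping the drives after six months would then just be a matter of exchanging the two local device paths.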

Any "gotchas" with this scheme?

1 Message

December 15th, 2005 13:00

David,
Have you tried assigning the drive to a specific Pool? Are you trying to assign one drive to a full-volume pool and the second to an incremental one? If so, you can restrict drives to specific pools.

Ricardo

4 Operator


14.4K Posts

December 15th, 2005 14:00

If you have two pools you can do that. With more pools things would get complicated. Bear in mind that in the past, pool device restrictions applied to backups but not to restores - not sure if that has changed recently.

Pools are usually used to group data volumes, and their effective usage is based on retention, so I would normally avoid your setup, but in your case that doesn't seem to be important anyway. Test your setup for a week or so and see how it goes.
