September 1st, 2014 04:00

Shared Library with EMC NetWorker and one Storage Node

Hello,

Because of issues with our tape performance, I began to examine the "Performance Optimization Planning Guide", followed by "Configuring Tape Devices for EMC NetWorker", and found our bottleneck in the way the tape drives are attached to the storage node.

Currently, all four tape drives and the autochanger are in the same zone and are connected to a Fibre Channel switch in loop-through mode. The switch is connected in direct mode by one channel to the storage node.

The above documentation makes it quite clear that the maximum transfer rate for all drives combined would be 170 MB/s, which I could confirm.

The output of the inquire command shows them fine:

scsidev@46.0.0:IBM     ULTRIUM-TD5     D8D4|Tape, \\.\Tape2147483646

scsidev@46.1.0:IBM     ULTRIUM-TD5     D8D4|Tape, \\.\Tape2147483645

scsidev@46.2.0:IBM     ULTRIUM-TD5     D8D4|Tape, \\.\Tape2147483644

scsidev@46.2.1:SPECTRA PYTHON          2000|Autochanger (Jukebox),

                                           S/N: 9112004E2F

                                           ATNN=SPECTRA PYTHON          9112004E2F

scsidev@46.3.0:IBM     ULTRIUM-TD5     D8D4|Tape, \\.\Tape2147483643

As you can see, they are all attached over a single channel.
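
For what it's worth, the arithmetic behind that shared-channel ceiling is simple. Here is a back-of-the-envelope sketch in Python; it assumes the 170 MB/s channel bandwidth divides evenly between the drives, which real-world contention won't exactly match:

```python
# Rough estimate: a channel here tops out at about 170 MB/s in total,
# and that bandwidth is shared by every drive attached to it.
# Assumption: even split between drives; actual contention will vary.

def per_drive_rate(channel_mb_s: float, drives_on_channel: int) -> float:
    """Approximate sustained rate per drive when one channel is shared."""
    return channel_mb_s / drives_on_channel

# Current layout: all four drives behind the single 170 MB/s channel.
print(per_drive_rate(170, 4))   # 42.5 MB/s per drive

# Target layout: one drive per channel.
print(per_drive_rate(170, 1))   # 170.0 MB/s per drive
```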

By digging around I found the "Shared Library with EMC NetWorker" white paper from July 2010.

This document shows how one autochanger with four tape drives can be shared between two storage nodes: two drives attached to one storage node, and two drives plus the autochanger attached to the other.

Since this does not match our environment (we have just one storage node), I am not sure whether the same can be achieved with a single storage node.

In the first step I would like to attach one tape drive directly to the storage node, in loop-through mode, without using the switch.

That way there would be no configuration changes needed on the switch/data zone in case of a fallback.

I would then have three tape drives and the autochanger attached via one channel, and one tape drive attached via its own channel.

In the second step I would attach two more drives via two additional channels directly to the storage node.

My questions are:

1.) Is this possible? I assume it should be.

2.) Is it sufficient to use jbedit, and if so, how?

3.) If I have to use jbconfig, should I delete the jukebox first?

In short: what would be the best way to get all four tape drives working at 170 MB/s each?

Regards

Robert

14.3K Posts

September 1st, 2014 14:00

Don't overcomplicate it - if you have several HBAs (or a dual-port HBA), simply do the zoning properly and the drives will go over different buses.
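
On a Brocade FOS switch, for example, the extra zone might look roughly like this (a sketch only - the alias/zone/config names below are invented, and other switch vendors use different syntax):

```shell
# Hypothetical Brocade FOS example: put two of the tape drives and a
# second storage-node HBA port into their own zone.
# All names here are made up for illustration.
zonecreate "sn1_hba2_tape_zone", "sn1_hba2; tape_drive3; tape_drive4"
cfgadd "backup_cfg", "sn1_hba2_tape_zone"
cfgenable "backup_cfg"
```

After enabling the new zoning, a device rescan on the storage node should pick up the new paths.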

5 Posts

September 1st, 2014 22:00

Thanks for your answer.

Will I have to configure anything else other than the zoning?

14.3K Posts

September 2nd, 2014 04:00

No.  Once this is done you should see the devices connecting over two ports.  I'm not sure if Spectra Logic has some sort of host management on their side (like an I/O blade) where you must do mapping, or whether they provide pure, direct connectivity which depends only on zoning; if it is clean, then zoning the devices is the only thing to do.

5 Posts

September 12th, 2014 00:00

Hrvoje, you were right, thanks.
Putting two drives into a new zone and running a "scan for devices" was all that was needed.
Unfortunately, this did not solve my performance issue.
After running some performance tests I am a bit confused...
All IP devices are using 10 Gb, and every connected switch confirms they are linked at 10 Gb.
To prove this I copied some large ISO files around the network, from the storage node to a client and vice versa. The performance was OK, at around 300-400 MB/s.
The backups run at about 500-600 MB/s; I can see this on the switch and in NMC on the DD Boost device.
So I would say there is no network issue.
This in turn suggests that the DD/DD Boost is not able to read faster than 1 Gb/s, because that is the speed I get out of it.
But... that's impossible.
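Just to keep the units straight (my own quick sketch; mixing up gigabits and megabytes is easy in this kind of comparison):

```python
# Network links are quoted in gigabits per second, while backup
# throughput is usually reported in megabytes per second.

def gbps_to_mb_s(gbps: float) -> float:
    """Convert gigabits/s to megabytes/s (1 byte = 8 bits, decimal units)."""
    return gbps * 1000 / 8

print(gbps_to_mb_s(10))  # 1250.0 -> a 10 GbE link tops out near 1250 MB/s
print(gbps_to_mb_s(1))   # 125.0  -> 1 Gb/s is only about 125 MB/s
```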
By the way:
DD/DD Boost:

OS: 5.4.0.8-404909

Model: DD670

NetWorker 8.1.1.5

Thanks

14.3K Posts

September 12th, 2014 02:00

Unrelated to your topic: I think DD OS 5.4.2.1 is the recommended version to be on.

As for the speed: this topic started with FC drives, and now we are talking about 10 Gbps ports and DD Boost, which is mostly used over IP. Are you talking about a read performance issue with FC DD Boost, IP DD Boost, or something else?

5 Posts

September 12th, 2014 03:00

Yes, I am talking about a read performance issue with FC DD Boost.

I just ran a new test in which I backed up a small server directly to the storage node/tape at 200 MB/s.

A backup from the same server to the DD runs a bit faster, at 250 MB/s.

A clone or stage from the DD to the storage node/tape runs at just 40-80 MB/s, and the DD is doing nothing else at the time.

That's what I do not understand, because I can see no reason for it.

14.3K Posts

September 12th, 2014 04:00

I certainly see much better stats for IP.  I do not use DD Boost over FC, so I would suggest opening a ticket with support and providing your layout (including the FC mapping) so they can check why you see what you see.
