wlee
263 Posts
0
January 24th, 2018 09:00
Just to put things into perspective, I looked up LTO performance and used the numbers that assume a 2.5 to 1 data compression ratio.
With LTO-8, performance is 1180 MiB/sec, so with 1 tape drive, 150 TiB would take 37.0 hours
With LTO-9, performance is 1770 MiB/sec, so with 1 tape drive, 150 TiB would take 24.7 hours
With LTO-10, performance is 2750 MiB/sec, so with 1 tape drive, 150 TiB would take 15.9 hours
Note: LTO-9 and LTO-10 are not available yet. The performance numbers come from:
Linear Tape Org Roadmap Adds Gen 9 & 10, Up to 120 TB
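Small rounding differences aside, those durations follow directly from size divided by rate; a quick shell check (rates in MiB/sec taken from the lines above):

```shell
# Hours to write 150 TiB at a given sustained rate (MiB/sec).
# 150 TiB = 150 * 1024 * 1024 MiB; divide by rate, then by 3600 s/h.
for rate in 1180 1770 2750; do
    awk -v r="$rate" 'BEGIN { printf "%d MiB/sec -> %.1f hours\n", r, 150 * 1024 * 1024 / r / 3600 }'
done
```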
CarC8
77 Posts
0
January 24th, 2018 10:00
Hi Bingo,
Here's how I configured the backups:
For VMware backups:
I have set up 8 DDBoost devices; each device has its own pool.
At the moment, cloning is configured using "Query mode" in NMC, and the jobs are chained with simple PowerShell scripts.
I have set up 8 query groups, one per backup pool. I launch 2 jobs per storage node (SN) at a time, so four jobs run at the same time. I tried configuring the drives to use 8 target sessions, but at present the optimal number of concurrent sessions for good performance seems to be 4.
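A minimal way to cap concurrency like that from a script is xargs -P; the pool names and the echoed command below are placeholders (not the actual NMC/nsrclone invocation), so this stays a dry run:

```shell
# Launch one clone job per pool, but never more than 4 at a time.
# Pool names are placeholders; replace the echo with your own
# clone command. Prints 8 lines, at most 4 jobs running concurrently.
printf '%s\n' pool1 pool2 pool3 pool4 pool5 pool6 pool7 pool8 |
    xargs -P 4 -I {} echo "would clone pool {}"
```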
Thank you,
Chris
CarC8
77 Posts
0
January 24th, 2018 10:00
Hello Wallace,
The library is composed of 6x LTO7.
I can't measure the maximum speed of a single LTO drive at the moment. If I run one job with multiple streams on 1 drive, I reach 550 MB/s. If I run two jobs, one on each drive, I get 350 MB/s on each; never more than 700-750 MB/s in total.
But I do understand your point.
Thank you,
Chris
bingo.1
2.4K Posts
0
January 24th, 2018 10:00
I don't know how you run the backup.
Let me suggest that you have multiple disk devices per backup pool.
This distributes the backup jobs/save streams across multiple disk volumes.
Next, use scripted cloning - one job per DD device/volume.
This way you will automatically end up with multiple streams which you can clone in parallel.
wlee
263 Posts
0
January 24th, 2018 19:00
LTO-7 drives are rated 788 MB/sec assuming 2:1 compression, so getting 550 MB/sec is pretty good.
If you use 2 drives and throughput drops to 350 MB/sec each, that tells me that in that situation you probably do not have enough data feeding the tape drives to get them up to "burst mode", where they can write data continuously. This shows that using more drives is not necessarily faster than using just one drive if you do not have enough data to keep the drives in burst mode.
Now, cloning is single-streamed, meaning that if there is one clone process running, then regardless of how many savesets are to be cloned, only one saveset is cloned at a time. This is different from backups to tape, which are multi-streamed by default. This is why cloning savesets is much slower than backing up the same data.
To get multi-streamed cloning, you would need to run multiple clone processes, each with one or more sets of savesets to be cloned to the same volume.
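As a sketch of that idea, the shell loop below starts one clone process per batch of savesets and waits for all of them. The SSIDs are made up, and the echo keeps this a dry run; removing it would invoke NetWorker's nsrclone (check the nsrclone man page for the exact options in your release):

```shell
# Multi-streamed cloning sketch: one clone process per saveset batch,
# run in parallel. SSIDs below are hypothetical examples.
BATCH1="4012345678 4012345679"
BATCH2="4012345680 4012345681"
for batch in "$BATCH1" "$BATCH2"; do
    # 'echo' makes this a dry run; drop it to actually run nsrclone.
    echo nsrclone -b "Tape Clone Pool" -S $batch &
done
wait    # block until all clone processes have finished
```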
I would suggest that you use native tools such as the Unix dd command to test the drive and see how fast you can get it to write data. For example, here 1 GiB of zeros is written to /dev/nst0:
dd if=/dev/zero of=/dev/nst0 bs=512 count=2097152
(load a scratch tape into nst0 before running the dd command)
You can then perform a similar test using the NetWorker bigasm directive, and compare the performance from each test.
This will tell you how fast you can get the tape drives writing through the operating system and hardware. That would be your physical limitation on write performance in your environment.
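To put numbers on that, here is a variant of the dd test with a larger block size, which is usually kinder to tape drives than bs=512. /dev/null is used as a safe stand-in here; point of= at /dev/nst0 (with a scratch tape loaded) for the real measurement:

```shell
# Write 1 GiB in 256 KiB blocks and show dd's summary line.
# of=/dev/null is a stand-in; use of=/dev/nst0 for a real tape test.
dd if=/dev/zero of=/dev/null bs=262144 count=4096 2>&1 | tail -n 1
# The final status line reports bytes copied, elapsed time, and throughput.
```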