Welcome to this Ask the Expert discussion. As any IBM Tivoli Storage Manager (TSM) administrator knows, there are often multiple ‘right’ ways to do things in TSM. The purpose of this session is to discuss the various ways to incorporate EMC Data Domain storage into your TSM environment, and to provide recommendations from an alternate point of view to solve some of the challenges that can come up. Best practices will be presented and documentation referenced where available. Participants are welcome to suggest alternate ideas and debate the merits of various approaches.
Meet Your Expert:
Technical Account Manager at EMC
Nick has spent 18 years in the Backup and Recovery space; as a Backup Admin, Technical Sales Engineer, and Technical Account Manager, working with products from EMC, HP, IBM, Symantec, Sun, SpectraLogic and Oracle. He has participated in over 30 successful Disaster Recovery tests, with only one (fairly spectacular) failure (when the tapes arrived at the DR site with water in the tote), and somehow enjoyed the 40-hour shifts, living on soda and cold pizza, and that feeling of knowing you put everything back together where it belonged. Nick is an EMC Proven Professional Backup and Recovery Associate (EMCBA). He is proficient with EMC Data Domain products, from solution design to administration to support, with a strong focus on the VTL feature, and has long-term expertise with IBM Tivoli Storage Manager and IBM tape libraries and drives. He also still has his pins from being an IBM Certified OS/2 Engineer.
This discussion takes place February 9 - 27. Get ready by bookmarking this page or signing up for e-mail notifications.
How many customers do you see in the field that use device class=FILE and actually mount two NFS exports from the DD, letting TSM write to both mount points simultaneously?
We were looking to get started on the 9th, but I like this question so I will jump in and get things started now.
While that setup isn't widespread, we do see it with some customers. There are a few different scenarios where I have seen this done:
Is anyone using any of these scenarios, and what are the results you are getting from that configuration?
I was considering using two mount points when defining my device class. The TSM literature said that by using two mount points TSM would provide load balancing, supposedly giving better link utilization across 2 x 10G links to the Data Domain versus one LACP port-channel on the Data Domain. Ultimately I did not trust TSM to handle link-related issues/failover and went with the LACP port-channel on the Data Domain.
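For reference, a FILE device class spanning two NFS mount points is defined by listing both directories in the DIRECTORY parameter; TSM then spreads its scratch volumes across the listed directories, which is where the load-balancing behavior comes from. The device class name, mount points, and sizing values below are illustrative, not a recommendation:

```
DEFINE DEVCLASS DDFILE DEVTYPE=FILE -
  MOUNTLIMIT=32 MAXCAPACITY=50G -
  DIRECTORY=/ddnfs1,/ddnfs2
```

As discussed above, this balances volume placement but does not give you link failover; that has to come from the network layer (e.g. LACP on the Data Domain side).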
Welcome to this Ask the Expert conversation. This event has now officially started. Thanks to those who posted their questions ahead of the start of the event, and please continue the discussion. Everyone else is welcome to jump in and begin engaging our expert. Let's make this a productive and organized forum for everyone to benefit. Cheers!
1) How many TSM customers that you interact with are using VTL versus NFS/CIFS?
2) How many TSM customers stop using disk storage for the primary pool and create the primary pool on DD? When we migrated from IBM 3584 to DD, we still used a pretty big disk pool on DMX3 to handle the random workload of hundreds of clients kicking off backup jobs at the same time. Also, by using the disk pool as the primary pool we could easily schedule Data Domain DDOS upgrades, knowing that we had enough space in the primary pool to handle Oracle archive log backups while the DD was being upgraded. What do you see folks doing in the field?
Five years ago there was only one large TSM shop that I know of that was not using the VTL feature on their Data Domain systems, instead using devclass FILE over NFS mounts. Part of their reasoning was the flexibility NFS mounts provided: they could easily mount another Data Domain and direct backups there if the first Data Domain was filling up. Managing an IP infrastructure being easier than managing FC was also part of the thought process, and lastly it was less expensive to expand their network environment than their FC environment to add the systems in. It worked well for them at that time, and continues to this day. They do have a little VTL in their environment as well for a few use cases where it made more sense, but they have been running TSM on AIX to Data Domain over NFS since their first Data Domain system was installed.
Today we still see most TSM environments utilizing VTL, with maybe 20% as NFS only. We do see some hybrid environments, where a TSM server will have a Data Domain attached as VTL for data storage and also have an NFS mount to another Data Domain, to be used for TSM database backups and an emergency overflow if the primary Data Domain becomes full. As Data Domain is often being placed as a physical tape replacement, the natural fit is as a VTL.
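The hybrid setup described above, with VTL as the main data store and an NFS mount reserved for TSM database backups, often looks something like the following on the TSM server side (devclass names, paths, and sizes are hypothetical):

```
DEFINE DEVCLASS DBBACK DEVTYPE=FILE -
  MAXCAPACITY=100G DIRECTORY=/ddnfs/dbbackup
BACKUP DB DEVCLASS=DBBACK TYPE=FULL
```

Keeping the database backups on a separate Data Domain from the primary data store also means a failure of the primary system doesn't take the recovery metadata down with it.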
As for using a devclass DISK pool in front of the devclass FILE storage on the Data Domain, that is common enough to call universal. For streaming workloads we will see the data go straight to the Data Domain, but for anything like your situation, with hundreds (or even just dozens) of clients doing simultaneous filesystem backups, having that buffer makes perfect sense. Your idea to write the Archive/Redo logs to devclass DISK is a great use case as well.
What I have seen is smaller DISK pools being used. It may be more widespread elsewhere, but among the customers I've worked with, having enough devclass DISK storage to hold an entire night's backups is unheard of. They will allow migration to run during the backup window, and in one case keep HIGHMIG set to 0 at all times so data is immediately written off to the Data Domain after it lands. I think this is mostly the result of the increase in disk performance to handle reads and writes at the same time, and the challenges in getting large amounts of disk (beyond the TSM database and log space) allocated.
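A landing pool that drains continuously, as described above, is just a matter of the migration thresholds on the DISK pool. A rough sketch, with hypothetical pool names (DISKPOOL must still have DISK volumes defined, and DDPOOL is assumed to be the FILE pool on the Data Domain):

```
DEFINE STGPOOL DISKPOOL DISK -
  DESCRIPTION="Landing buffer" -
  NEXTSTGPOOL=DDPOOL HIGHMIG=0 LOWMIG=0
```

With HIGHMIG=0, migration to the next pool starts as soon as data arrives, so the DISK pool only needs to absorb the burst of concurrent sessions, not a full night's backups.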
I don’t know of a TSM customer using CIFS, but then there aren't a lot of large TSM Servers running on Windows. I’m sure there are some out there, but I have not worked with them.
I am surprised that NFS adoption in the TSM world is so low. Even though we had plenty of FC infrastructure in place, we wanted to get to something simpler, and NFS was it. And as you mentioned in your example, being able to mount two Data Domains (DD880 and DD890 in our case) gave us the option to change the "next" pool for certain management classes when we would run out of capacity on one of the DDs. We ran TSM on AIX, with a 10Gb layer-2 connection between it and the DD890. What also helped my performance on AIX was setting this parameter in dsmserv.opt; I was able to get much better throughput to my DD.
We are no longer a TSM shop; Avamar has been very good for us.
Thank you for the performance tip - that is something that will be helpful to a good number of customers.
We have one TSM customer who is moving from NFS to VTL - their network infrastructure is overloaded, and rather than going through the pain of a major upgrade there, they are offloading traffic to FC. It's a stop-gap measure at best, but will buy them some time before they have to perform the network upgrades.
I believe we will be seeing more customers moving from VTL to NFS in the future as they are doing equipment refreshes, as that is usually a point where they will review their architecture. I've yet to talk to a customer who isn't looking to cut costs, and reducing the FC footprint in their infrastructures seems to be a frequent place that is happening.
What kind of dedupe rates do you see on average with TSM/DD customers (open systems)?