My goal is to migrate into a multi-tenant environment. Current config:
DD880 - all client backups are stored in /backup
DD890 - all client backups are stored in /backup
DD9500 - brand new DD where I want to implement SMT.
I have created my tenants and tenant-units. Each tenant-unit will have one corresponding MTree, so the layout on the DD9500 will look something like this (for each company):
OK, so now I need to migrate data from my existing DD880/DD890 into this layout. Apparently we can't use directory replication for this exercise, because directory replication requires that the destination path start with /backup, while my tenant-unit MTrees live under /data/col1. So how do I get there from here? One option I was considering: set up directory replication to dd9500:/backup/cifs_share1. Once the copy has completed, break the replication, use fastcopy to copy the data from /data/col1/backup/cifs_share1 to /data/col1/hr cifs/cifs_share1, and then delete the data from /data/col1/backup/cifs_share1. This is convoluted, but it may be the only way to get into this SMT configuration.
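For what it's worth, the staging workaround described above would look roughly like this in DD OS CLI terms. This is a sketch only: the hostnames and share names are illustrative, and the exact `replication` and `filesys fastcopy` syntax should be verified against the CLI guide for your DD OS release.

```
# 1. Directory replication from the old DD into the staging area under /backup
replication add source dir://dd880.example.com/backup/cifs_share1 \
    destination dir://dd9500.example.com/backup/cifs_share1
replication initialize dir://dd9500.example.com/backup/cifs_share1

# 2. Once the initial copy is complete, break the replication pair
replication break dir://dd9500.example.com/backup/cifs_share1

# 3. Fastcopy the staged data into the tenant-unit MTree
filesys fastcopy source /data/col1/backup/cifs_share1 \
    destination "/data/col1/hr cifs/cifs_share1"

# 4. After verifying the fastcopy, delete the staging copy
#    under /backup/cifs_share1 to reclaim the namespace.
```

Fastcopy only duplicates namespace references to deduplicated segments, so step 3 should be fast and should not consume significant additional space.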
Any comments or suggestions?
As you said, you cannot do directory replication into a different MTree. Alternatively, you could replicate the existing data to /backup on the DD9500 and let it sit there until it expires; if any recovery is required, you can fetch it from there. All new backups can be pointed at the SMT shares.
With 120 NFS clients and just as many CIFS shares, "fetching" is not an option: the yearly, monthly, and weekly Oracle exports have to be available right away. No one is going to wait for me to export them and then for a system admin to mount them. I'm afraid the option I proposed is the only way to move forward.
Looks like you have a good plan in place. If it's at all an option, I would try to segregate the CIFS/NFS shares on the current DDs into their own MTrees before replicating them to the new DD9500, since MTree replication performance is better than directory replication. In addition, I would spread the CIFS/NFS share data across multiple MTrees instead of just one MTree per data type: for example, 4 MTrees for CIFS and 4 MTrees for NFS.
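If you did segregate the data on the source first, the MTree replication pair would be set up along these lines (hostnames and MTree names are illustrative; confirm the `mtree` and `replication` command syntax for your DD OS release):

```
# Create a per-workload MTree on the source DD880 and move the share data into it,
# then pair it with a same-named MTree context on the DD9500
mtree create /data/col1/cifs-pool1

replication add source mtree://dd880.example.com/data/col1/cifs-pool1 \
    destination mtree://dd9500.example.com/data/col1/cifs-pool1
replication initialize mtree://dd9500.example.com/data/col1/cifs-pool1
```

The trade-off, as noted below, is that moving data into MTrees on the source is itself a disruptive step for each customer.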
I have a lot of customers, and with the 256-MTree limit I will run out of MTrees pretty fast. Each business unit will have two MTrees: one for CIFS data and one for NFS.
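The MTree budget above works out as a simple division; a minimal sketch, assuming the 256-MTree cap mentioned in this thread (verify the actual limit for your DD model and DD OS release):

```python
# Back-of-the-envelope MTree budget for the SMT layout described above.
MTREE_LIMIT = 256       # assumed per-system MTree cap from the discussion
MTREES_PER_UNIT = 2     # one CIFS MTree + one NFS MTree per business unit

max_business_units = MTREE_LIMIT // MTREES_PER_UNIT
print(max_business_units)  # 128 business units before hitting the cap
```

Splitting each data type across several MTrees, as suggested above, shrinks this ceiling further (e.g. 8 MTrees per business unit would cap you at 32 units).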
Moving customers into MTrees on the source would also force two outages per customer: one to move them to an MTree on the source DD, and another to cut over to the new DD.