
February 24th, 2013 10:00

Avamar Full R2R for Migration

All,

I have a customer that I just installed a Gen4 grid for. They are currently running on a Gen3. I want to do a full R2R migration of the Gen3 to the Gen4. I know the commands to perform this and have tested them, and they work just fine. The issue is that there is approximately 7TB of data that needs to replicate. I want to run this as a scheduled replication, but if you configure it through EM as a "full" replication, the destination directory is /Replicate/xxxx on the Gen4 grid.

If I run the nohup --replicate command from the CLI to do a full R2R, then based on the performance test and the amount of data to replicate, it will take about 3.5 days to replicate it all. The issue is that the grid can't be down for that long: the customer still needs to perform backups, and the maintenance/blackout windows still need to be able to run.
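For reference, the command I've been testing looks roughly like this (the host name and password are placeholders, and I'm quoting the flags from memory, so double-check them against the R2R procedure before using):

    # Run on the Gen3 (source) utility node as admin.
    # --fullcopy selects root-to-root mode; gen4-grid is the destination.
    nohup replicate --fullcopy --dstaddr=gen4-grid --dstid=root --dstap='<root-password>' &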

My questions are:

Can I modify the destination path that appears in the EM GUI? Is this done by modifying the repl_cron.cfg file?

If not, do I need to create a custom replication cron job that runs an R2R on a schedule, say 2AM to 7AM each night, until the grid is completely replicated and we perform the client cutover to the new grid?
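Something along these lines is what I'm picturing for the cron approach (a sketch only; the replicate path and the stop mechanism are my assumptions, so please correct me if there's a supported way to do this):

    # crontab entries for the admin user (sketch; verify it is safe
    # to interrupt replicate before relying on the 7AM stop):
    0 2 * * * nohup /usr/local/avamar/bin/replicate --fullcopy --dstaddr=gen4-grid --dstid=root >> /home/admin/r2r.log 2>&1 &
    0 7 * * * pkill -f 'replicate --fullcopy'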

Thanks in advance!

Jerry

91 Posts

February 28th, 2013 08:00

I'm fairly certain that replication handled by EM and a root-to-root replication are different processes.

Why can't you just let the root-to-root replication run until complete?

2K Posts

February 28th, 2013 10:00

I want to run this as a scheduled replication, but if you configure it through EM as a "full" replication, the destination directory is /Replicate/xxxx on the Gen4 grid.

You can't configure root-to-root replication through the EM GUI. Hopefully you didn't start standard replication after starting root-to-root, or vice versa; that has the potential to create problems. If you did, please get in touch with support to make sure the CIDs on both systems are consistent.

If you want to avoid copying all the backups during the root-to-root, you can specify an --after flag in your replicate command to replicate, for example, only the last day or two of backups. This will speed up the initial root-to-root replication and get the target grid seeded relatively quickly so the clients can be failed over. After the cutover, you can re-run the root-to-root (with no --after flag) to migrate any remaining backups from the old system to the new one.
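For example, something like this would seed the target with only the last couple of days of backups (the timestamp format here is an assumption; check the replicate documentation for the exact format your version expects):

    # Seed the Gen4 with recent backups only; re-run later without --after:
    nohup replicate --fullcopy --dstaddr=gen4-grid --dstid=root --after='2013-02-26 00:00:00' &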

I'm fairly certain that replication handled by EM and a root-to-root replication are different processes.

Same process, different "mode".

March 4th, 2013 09:00

Thanks for the responses.

After chatting with a few other folks internally at EMC to get clarification on a few things, I performed my root-to-root replication. My main concerns were maintenance and backups while the replication was running. I started the nohup --replicate command as a background process and sent the output to a log file. I was told that, by running it as a background process, the grid would recognize the process when it hit the maintenance window and pause the replication. This was good.
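For anyone doing the same thing, checking on the background job was as simple as something like this (the log path is just wherever you redirected the output):

    # Confirm replication is still running, then watch its progress:
    ps -ef | grep [r]eplicate
    tail -f /home/admin/r2r.log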

To make sure I captured the backups that took place while the replication was running, I took Ian's advice and added --after=TIMESTAMP to the nohup --replicate command, replicating just the backups that ran while the initial replication was in progress. This worked like a charm.

Once the replication was complete, I restored the server as the main backup server. Then, to avoid having to manually re-register all of the clients, I performed the Change Hostname Procedure on the new grid to match the name of the old grid and had the customer update the IP address for that hostname in DNS.
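A quick way to sanity-check the DNS cutover from a client (the host name here is a placeholder):

    # Verify the grid's host name now resolves to the Gen4's IP address:
    nslookup avamar-grid.customer.com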

This appears to have worked for the clients; however, somewhere in the migration/hostname change, the policies did not get moved over. At this point I'm thinking it will be quicker to recreate the retention, schedule, and dataset policies manually on the new grid rather than open a case with support.

I will update my notes in case anyone is interested.

Thanks,

Jerry

9 Posts

March 5th, 2013 05:00

Jerry, have a look at this document: http://solutions.emc.com/emcsolutionview.asp?id=esg111139

It explains how to export and import groups from one grid to another using the mccli group export/import commands.
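I don't have the exact syntax in front of me, so treat the following as hypothetical and follow the document for the real flags:

    # On the source grid: export the group definitions (hypothetical syntax)
    mccli group export > groups.xml
    # On the target grid: import them (hypothetical syntax)
    mccli group import --file=groups.xml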

2K Posts

March 6th, 2013 07:00

It sounds like either the step where the most recent MCS flush is restored on the root-to-root replication target was missed or the wrong flush was restored. I would recommend having support give the replication target a quick once-over to make sure the mcdb and the GSAN user accounting system are in sync, otherwise the customer may encounter strange problems later.

2K Posts

March 6th, 2013 08:00

Glad to hear you got it sorted out. If you find an oversight in the documentation or procedure generator, I would encourage you to submit feedback through PowerLink / Service Center / the ProcGen itself.

March 6th, 2013 08:00

Ian,

That is correct. The unfortunate thing is that the documentation on doing a Server Migration with an R2R does not include a step or reminder to stop MCS on the destination server. We did an mcserver.sh --restore with the last version right after the final migration of backup data, and all policies and missing clients came over.
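For anyone following along, the sequence we ran on the destination was essentially this (verify against the current flush-restore procedure before relying on it):

    # On the destination utility node, as admin:
    dpnctl stop mcs          # stop the Management Console Server first
    mcserver.sh --restore    # restore the most recent MCS flush (the procedure lets you pick a specific one)
    dpnctl start mcs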
