January 14th, 2013 08:00

Clone NFS export settings to target cluster

Is there a way to quickly clone NFS export settings to the SyncIQ target cluster?

Here is an example:

We have the following replication policy:

/ifs/data  ------>SyncIQ Policy------>/ifs/data (on the target cluster)

and the following exports on the source:

/ifs/data/export1

/ifs/data/export2

We would like to easily clone these exports (with all of their client settings, user mappings, etc.) to the target cluster.
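
To give a sense of the shape of what we're after, here is a rough sketch of how we imagine it could work if the platform (REST) API on our OneFS version exposes NFS exports. The endpoint path, port, and credential handling are guesses on our part rather than a documented recipe, so please correct us if there is a proper way to do this.

# Hypothetical sketch: copy NFS export definitions from the source cluster to
# the SyncIQ target via the OneFS platform API. The endpoint path, port 8080,
# and credentials are assumptions -- verify them against the API guide for
# your OneFS release.
import requests

SRC = "https://source-cluster:8080"
DST = "https://target-cluster:8080"
AUTH = ("root", "password")                    # placeholder credentials
EXPORTS = "/platform/1/protocols/nfs/exports"  # assumed endpoint path

session = requests.Session()
session.auth = AUTH
session.verify = False                         # clusters often use self-signed certs

# Read every export (paths, client lists, user mappings, ...) from the source.
src_exports = session.get(SRC + EXPORTS).json().get("exports", [])

for export in src_exports:
    export.pop("id", None)                     # let the target assign its own id
    resp = session.post(DST + EXPORTS, json=export)
    print(export.get("paths"), resp.status_code)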

132 Posts

April 5th, 2013 10:00

You need professional services if you want support on the script.  Otherwise, it will be provided as-is with no guarantees.  Until this capability is rolled into the core OS, external scripts in general are not "officially" supported.  With a PS engagement, a custom statement of work would be drafted, and that can include support for the script or whatever else is required.

132 Posts

April 5th, 2013 10:00

You'll have to get it from your EMC representative.  It hasn't been announced internally yet, so it's not widely available; there are just a few instances in the wild for testing purposes.  I am hoping we'll have a first iteration ready for general use in a few weeks.

132 Posts

April 5th, 2013 10:00

SyncIQ replication would be a bit tricky.  Snapshots are easy and present in 7.0.  The issue with SyncIQ is: what are you actually restoring?  In general it is used for data replication, but putting the same policies on the DR cluster as on the production cluster doesn't make much sense; the targets are wrong.  Also, what does it mean to restore a SyncIQ policy?  You need to have some idea of the state between the source and target clusters, and you can't really back that up since it's based on snapshots.  What you can back up is what the policy specifies, but I'm struggling a bit with what the correct thing to do would be when "restoring" a sync policy.

7 Posts

April 5th, 2013 12:00

Yeah, we're smack in the middle of doing this ourselves. So far we're planning on leveraging SmartConnect zone aliases in lieu of the DNS CNAME shenanigans we initially thought of.

It should be fairly straightforward in practice.

Please note that this hasn't been fully tested, and I could be wrong or EMC Isilon may have a better suggestion on how to do this.

Pre-setup:

1. Make sure the Primary and Secondary clusters have the SyncIQ jobs/shares we want to fail over/fail back already set up.

2. Set up a SmartConnect zone alias on the DR cluster. If my Primary SmartConnect zone name is primary.local, then I will set my Secondary side to be secondary.local, but it will have primary.local as a zone alias.

secondary-1# isi networks modify pool --name=subnet_0:pool_0 --add-zone-alias=primary.local

So with the zone alias in place, we should be able to flip the SmartConnect service IP in the event of a failure.

Normal:

Primary SmartConnect service IP: 192.168.0.10

Secondary SmartConnect service IP: 192.168.0.11

Temporary SmartConnect service IP: 192.168.0.12

So, to completely fail over, the theory goes like this (a rough scripted sketch of the CLI steps follows at the end of this post):

1. Change the Primary SCZ IP to 192.168.0.12

2. Change the Secondary SCZ IP to 192.168.0.10

3. Fail over the SyncIQ jobs to the Secondary (target) cluster and check that they completed successfully

secondary-1# isi sync target allow_write

secondary-1# isi sync target report

In testing, setting allow_write on the target SyncIQ job will, as expected, make the replicated path RW on the Secondary cluster and RO on the Primary cluster.

Still working out how clients will behave when we yank SCZ IPs. I suspect we might actually need to break the connections or somehow have them reset.

To reverse the process, it goes something like this:

1. Prepare Primary (source) cluster

primary-1# isi sync resync prep

primary-1# isi sync policy report -N=0

2. Grab the new mirror policy created by the resync and run it on the Secondary (target) cluster

secondary-1# isi sync policy list

secondary-1# isi sync policy run

secondary-1# isi sync policy report -N=0

3. Grab the new sync mirror policy created on the Secondary (target) cluster and make it writable on the Primary (source) cluster

primary-1# isi sync target list

primary-1# isi sync target allow_write

primary-1# isi sync policy report

4. Set the Secondary cluster back to RO

secondary-1# isi sync resync prep

secondary-1# isi sync policy report -N=0

5. Set the SCZ IP on the Secondary from 192.168.0.10 back to 192.168.0.11.

6. Set the SCZ IP on the Primary from 192.168.0.12 back to 192.168.0.10.
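
Purely for illustration, here is roughly how I'd drive the CLI half of this from an admin box over ssh. The node names and policy name are placeholders, and the assumption that each isi sync command takes the policy name as its final argument is mine; the commands themselves are just the ones from the steps above, so check the syntax on your OneFS version before using anything like this.

# Rough sketch only: run the failover and failback-prep commands from the
# steps above over ssh. Node names, the policy name, and the assumption that
# each "isi sync ..." command takes the policy name as its last argument are
# all hypothetical -- confirm the exact syntax for your OneFS release.
import subprocess

PRIMARY = "primary-1"      # hypothetical node names
SECONDARY = "secondary-1"
POLICY = "data_policy"     # hypothetical SyncIQ policy name

def run(host, command):
    """Run one CLI command on a cluster node and print its output."""
    print(f"[{host}] {command}")
    result = subprocess.run(["ssh", f"root@{host}", command],
                            capture_output=True, text=True)
    print(result.stdout, result.stderr)
    return result

# Failover: make the target side writable, then confirm with a report.
run(SECONDARY, f"isi sync target allow_write {POLICY}")
run(SECONDARY, f"isi sync target report {POLICY}")

# Failback preparation on the original source (step 1 of the reverse process).
run(PRIMARY, f"isi sync resync prep {POLICY}")
run(PRIMARY, f"isi sync policy report -N=0 {POLICY}")

The SmartConnect service IP changes would still be done separately.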

132 Posts

April 5th, 2013 13:00

For failing over connections, my recommendation is as follows:

Assumptions in DNS:

primary.local (NS) -> sip-primary.local (A)

sip-primary.local -> 192.168.0.10

secondary.local (NS) -> sip-secondary.local (A)

sip-secondary.local -> 192.168.0.11

When failing over from primary to secondary, change the primary.local (NS) record to point to sip-secondary.local instead.

This is more "proper" in some sense since the IP address of the host never changes.  What we are changing is the destination of a zone delegation.

In terms of clients, in a real DR situation your clients would be dead anyway.  If you want to do a DR test, you might want to have a third A record, something like sip-drtest.local, and you could disable NFS/SMB on the source, or you can find the SMB sessions and disconnect them via the CLI.

Using zone aliases is good.  Don't forget to create the SPNs on the DR cluster so that you don't get Kerberos issues; you will have to remove them after the DR or DR test.
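
If you want a quick sanity check that the delegation actually moved, something like dnspython can show where the NS record points after the change. The zone names below just follow my example above; using dnspython for this is my own suggestion, not anything Isilon-specific.

# Check where the primary.local delegation points after a failover.
# Requires dnspython >= 2.0 (pip install dnspython); older versions use
# dns.resolver.query() instead of resolve(). Depending on caching you may
# need to point the resolver at the parent zone's name server directly.
import dns.resolver

answers = dns.resolver.resolve("primary.local", "NS")
for record in answers:
    target = record.target.to_text()
    print("primary.local is delegated to", target)
    if "sip-secondary" in target:
        print("Delegation now points at the DR cluster's SmartConnect SIP.")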

2 Intern • 467 Posts

April 6th, 2013 04:00

If not, I have one in the works which reads directly from the gc database and makes an exact copy for our DR side... all using ssh and completely hands-off.

306 Posts

April 9th, 2013 05:00

I'd love to see what you have so far... Just change anything that might be sensitive.

5 Practitioner • 274.2K Posts

April 9th, 2013 11:00

Andrew is correct; you can really mess up the cluster using those steps if they happen to be mistyped or if you have a more complex setup. I simply adapted the method that I used to duplicate the notification settings from one cluster to another and modified it to pull the SMB information instead. It was very lightly tested, and you should exercise extreme caution when running the SMB-related clone. And as Andrew mentioned, it's not a supported method; it's just an as-is thing.

(My comments are in reference to the Google Groups information in this thread.)

April 15th, 2013 12:00

" I simply adapted the method that I used to duplicate the notification settings from one cluster to another and modified it to pull the SMB information instead."

Can you provide more specifics about the method?  I'm not sure how you duplicated the notification settings from one cluster to another.  Thanks!

5 Practitioner • 274.2K Posts

April 16th, 2013 05:00

You can contact support directly and reference KB emc14003380 to obtain the official document.

Basically, you set up one cluster the way you want the alerts to be, copy the celog section of its main_config.gc, and then import the copied data from main_config into the other cluster(s)' main_config.
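
The KB is the authority on the exact steps, but purely to illustrate the idea of the copy, the sketch below pulls one named section out of a text dump of the config and prints it so it can be spliced into the other cluster's copy. The begin/end markers and the assumption that the celog settings sit in a flat, contiguous block are mine, not a description of the actual main_config.gc format.

# Hypothetical illustration of copying one section between two config dumps.
# The markers and the flat-block assumption are placeholders; follow
# KB emc14003380 for the real procedure.
BEGIN_MARKER = "celog"   # hypothetical start-of-section marker
END_MARKER = "}"         # hypothetical end-of-section marker

def extract_section(lines, begin=BEGIN_MARKER, end=END_MARKER):
    """Return the lines of the first flat block starting at `begin`."""
    block, inside = [], False
    for line in lines:
        if not inside and line.strip().startswith(begin):
            inside = True
        if inside:
            block.append(line)
            if len(block) > 1 and line.strip() == end:
                break
    return block

with open("main_config.source.txt") as src:
    celog_block = extract_section(src.readlines())

# Print the block; splicing it into the other cluster's config and importing
# it is the part the KB documents.
print("".join(celog_block))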

1 Rookie • 3 Posts

May 21st, 2013 03:00

Any word on scripts that leverage the api to mirror/replicate:

smb shares

nfs exports

quota information

A script that does any of the above would be very much appreciated and could be modified further.
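
In case it helps anyone prototype while we wait: the quota case, at least, looks like the same read-and-replay pattern as the export sketch earlier in this thread. The endpoint path and field names below are assumptions about the platform API on our release, not a tested tool.

# Hypothetical: mirror quota definitions between clusters via the platform API.
# The endpoint path and response key are assumptions -- check the API guide
# for your OneFS release.
import requests

SRC = "https://source-cluster:8080"
DST = "https://target-cluster:8080"
AUTH = ("root", "password")        # placeholder credentials
QUOTAS = "/platform/1/quota/quotas"

s = requests.Session()
s.auth = AUTH
s.verify = False                   # self-signed cluster certificates

for quota in s.get(SRC + QUOTAS).json().get("quotas", []):
    quota.pop("id", None)          # ids are cluster-local
    resp = s.post(DST + QUOTAS, json=quota)
    print(quota.get("path"), quota.get("type"), resp.status_code)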

132 Posts

May 22nd, 2013 21:00

The script is now available in a limited release.  If you want to try it out, contact your account rep and have them ask Professional Services for the script.  Without a PS engagement there is no support for the script, but your rep can still get it for you to use.

132 Posts

July 8th, 2013 11:00

You can get the script, but without a Professional Services engagement there is no guarantee of any support.  You will have to ask your sales rep to contact PS to get access to the script.

5 Practitioner • 274.2K Posts

August 9th, 2013 07:00

Andrew,

Does the tool you talk about already exist?

Thx

Alex

132 Posts

August 9th, 2013 09:00

Yes, the tool exists today.  It is used for configuration backup/restore, DR, config change management, and sometimes also bulk creation.
