
November 7th, 2013 03:00

SyncIQ fan out or cascade

Hi,

Do you know if SyncIQ supports more than one OneFS cluster as a concurrent replica target?

Or is it possible to create a replica cascade?

Thanks a lot

Flavio


January 27th, 2015 06:00

Folks,

It is not best practice at all - due to the workflow stated above - to run cascaded topologies.  In fact, with OneFS, there is no need whatsoever to do this, unlike other multiple-file-system NAS devices.

Best practice is to set policies - potentially with different RPOs for target clusters B and C - and have A perform SIQ to both those clusters.  Trying to use a given cluster (say, B) as both a target and a source (for target C) is just not worth the administrative hassle of trying to make it work flawlessly.  As they say in life, timing is everything - and especially with a continuous policy in use from A to B, there is no reasonable way to accomplish it.

Keep it simple, folks.  A->B _and_ A->C.  Nice and easy.  Complexity is the enemy - avoid it.
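For illustration, a fan-out like this is just two independent policies on cluster A. A minimal sketch in OneFS 7.x-style CLI syntax - the policy names, hostnames, paths and schedule strings below are hypothetical examples, so check them against the `isi sync` reference for your release:

```shell
# Two independent SyncIQ policies on source cluster A, one per target.
# Hypothetical names/hosts/paths; verify the exact syntax for your OneFS version.
isi sync policies create docs-to-B sync /ifs/data/docs clusterB.example.com /ifs/data/docs \
    --schedule "every day at 01:00"
isi sync policies create docs-to-C sync /ifs/data/docs clusterC.example.com /ifs/data/docs \
    --schedule "every day at 01:00"
```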

Cheers

Rob

November 9th, 2013 17:00

Please consider moving this question as-is (no need to recreate) to the proper forum for maximum visibility.  Questions written to the users' own "Discussions" space don't get the same amount of attention and can go unanswered for a long time.

You can do so by selecting "Move" under ACTIONS along the upper-right.  Then search for and select: "Isilon Support Forum".


Yes, here are the official statements regarding your inquiries:

1) fan-out/one-to-many (A -> B, and A -> C): supported

Keep in mind that you will be limited by the following maximums on cluster "A", which are now shared between two target clusters (in other words, the maximums don't double just because you have two separate target clusters):

a) maximum 5 sessions running simultaneously

b) maximum 40 workers per session (per session maximum 8 workers per node)

c) maximum 200 workers total running at a time
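Since those maximums are shared, a quick back-of-the-envelope split shows what each target can actually draw - a sketch only, as SyncIQ does its own scheduling:

```shell
# Shared SyncIQ maximums on source cluster A (from the list above):
MAX_WORKERS_PER_SESSION=40
MAX_WORKERS_TOTAL=200

# Even split of the total worker budget across concurrent target clusters.
workers_per_target() {
    echo $(( MAX_WORKERS_TOTAL / $1 ))
}

# With two targets (A -> B and A -> C) each side could draw up to 100 workers
# from the shared pool, but any single session is still capped at 40.
budget=$(workers_per_target 2)
effective=$(( budget < MAX_WORKERS_PER_SESSION ? budget : MAX_WORKERS_PER_SESSION ))
echo "budget=$budget effective=$effective"
```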

2) cascading (A -> B -> C): you are able to configure it (the official word, as I understand it, is that it isn't fully supported), with several requirements:

a) You will have to manage it so that A -> B and B -> C can *not* be running simultaneously; one must finish before the other is started.  This will of course increase your RPO.

b) This implies that you either need to start them manually or schedule them far enough apart, which may be difficult as the change rate is often not constant.  Variations in consumed cluster and/or networking resources will of course also affect the time it takes to complete a refresh.

c) Also, on cluster B, before you start B -> C (after A -> B completes), you will need to delete or rename a file: .tmp-working-dir (it will warn you).  Keep in mind, though, that the directory on cluster B will be in R/O mode, so you will have to make it (temporarily) writable to be able to do so.

- With v7.0 and the introduction of the local target option "Make writable", you can do so without having to restart A -> B with a full copy or a diff-sync.

- However, if cluster B is running v6.5, it only has the option for the local target to "Break".  That would mean that on the next refresh from A -> B you will be running a diff-sync to copy just the changes since it last ran.
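Putting c) together, the hand-off on cluster B between the two legs looks roughly like this - exact commands differ between OneFS versions, and the policy name and path are hypothetical, so treat it as a sketch rather than a recipe:

```shell
# On cluster B, after A -> B has completed and before B -> C starts:
# 1. Make the target directory temporarily writable (the v7.0 "Make writable"
#    local-target option; on v6.5 the only choice is "Break", which forces a
#    diff-sync on the next A -> B refresh).
# 2. Move the leftover SyncIQ working directory out of the way.
mv /ifs/data/docs/.tmp-working-dir /ifs/data/docs/.tmp-working-dir.bak
# 3. Kick off the downstream policy toward cluster C (hypothetical name).
isi sync jobs start docs-to-C
```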



Just a few things to keep in mind in particular with cascading.


November 11th, 2013 06:00

Christopher,


I understand that it is possible for a OneFS cluster to replicate via SyncIQ to many clusters, but I'm not sure if the same file system can be replicated to different clusters concurrently, or if a single FS can be replicated to only one destination.

Flavio

November 14th, 2013 01:00

Flaviovarriale wrote:

Christopher,


I'm not sure if the same file system can be replicated to different clusters concurrently, or if a single FS can be replicated to only one destination?

Yes, in a one-to-many configuration, the same directory (you mention FS, but I know what you mean) can be concurrently replicated to more than one cluster.  It is in the cascading configuration (A -> B -> C, where B is a target directory for A and that same directory is then a source for C) that you have to worry about the locking and timing mentioned above.


January 27th, 2015 06:00

Hi,

Knowing the limitations stated by Christopher: how does SyncIQ behave with the continuous replication introduced in OneFS 7.1?

What happens if I have the following cascading setups:

1) A - continuous replication -> B - continuous replication -> C

2) A - continuous replication -> B - scheduled replication -> C

Can the replications run simultaneously?

Does it cause ".tmp-working-dir" events?


January 27th, 2015 06:00

Rob,

Any issues with running policies A->B and A->C concurrently?

Thanks


January 27th, 2015 07:00

Of course, if there is no possible path between A and C, one must do otherwise.  I would only say that in this day and age, it's hard to imagine no connectivity between any given two sites of an enterprise.  Ethernet ports are quite inexpensive, especially compared to the lack thereof and its impact on operations.

Nonetheless, should you find yourself in this situation, cascading is possible but you must be diligent in scheduling SIQ jobs to never overlap.  Doing this with A->B in continuous (10-second) mode is a practical impossibility.  Just managing the tmp-working-dir as above is, as the kids say, a PITA and not trivial to automate.

Best of luck in any event -

Rob


January 27th, 2015 07:00

None whatsoever.  In fact there is a slight benefit from read caching effects when running two SIQ jobs in parallel.  The only thing to be cognizant of is bandwidth consumed and the SIQ parallel job limit, but running two is no problem for the latter.


January 27th, 2015 07:00

Hi Rob,

Thanks for the answer - but I have to disagree.  There are scenarios where a cascaded replication is reasonable.

Think about having two CEs (Customer Edge) for the WAN access

- one for Cluster A (active)

- one for Cluster B (standby)

and Cluster C (standby) on a Remote site with a separate CE

BUT you have a dedicated LAN connection between Cluster A and Cluster B.  Since you actively use one CE for user interaction and don't want to double the traffic on it, you use the other one (the one for Cluster B) for your replication traffic.

Given specific needs for FSN and throughput, and a small node count per cluster, you run short on Ethernet ports.


June 5th, 2015 11:00

Is there any permanent solution if the customer has a requirement for cascading and wants to implement it?

The .tmp-working-dir directories are getting created and deleted automatically, but it's triggering alerts and tickets every time.  Please suggest.

Thanks,

Hari


June 8th, 2015 00:00

Try to implement your cascade in a way where the replications barely interfere, so that B->C is done before A->B starts and vice versa.

That's the only possibility I know of to lower the number of tickets/alerts in a cascaded replication.
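One way to keep the two legs from interfering is to schedule B->C only after A->B's expected finish plus a safety margin.  The numbers below are made-up examples - you have to measure your own job durations, since SyncIQ won't coordinate the two policies for you:

```shell
# All values in seconds since midnight; purely illustrative numbers.
ab_start=3600        # A -> B starts at 01:00
ab_expected=7200     # measured/estimated A -> B duration: 2 hours
margin=3600          # safety margin for change-rate spikes: 1 hour

bc_start=$(( ab_start + ab_expected + margin ))
echo "$bc_start"     # 14400, i.e. schedule B -> C no earlier than 04:00
```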
