SyncIQ workers maximum?

December 9th, 2013 19:00

Good Evening,

I have a question about the worker maximum per node. From the documentation I found that the maximum is 8 workers per node per policy, 40 workers per policy, and 200 workers per cluster.

I decided to try setting a policy to 20 workers per node, and it worked fine, spreading that large number of workers across the nodes. Is the limit of 8 workers per node just a best practice? Can someone give me some insight into this loophole? I was only testing across two nodes, so maybe that's why it worked? Does anyone have documentation that explains why these limits exist? This test was performed on virtual nodes.


December 10th, 2013 22:00

Hi Antonio,

I wondered about that myself some time ago and did some research; there were some nasty bugs that arose when the number of workers was higher.  The numbers may now seem arbitrary, but they are considered "safe" to use without causing problems on the cluster.

Keep in mind that workers are competing for resources on each node; too many and performance will suffer.

These limits should not be exceeded without opening a case and getting explicit direction from Support.

Cheers,

Matt


December 10th, 2013 13:00

This is a shameless thread bump. I've seriously looked everywhere I can think of and can't find anything that gives more than the hard maximums.

December 14th, 2013 16:00

Antonio,

Here is also a good white paper on SyncIQ and its workers:

https://support.emc.com/docu39531_White_Paper:_Best_Practices_for_Data_Replication_with_EMC_Isilon_SyncIQ.pdf

The section "Limitations and restrictions" covers the maximums, and there are guidelines throughout the paper regarding SyncIQ and its workers.

Furthermore, I would like to leave you with some scripts (a native command in v7) if you want to see how many workers you actually have active:



v6.x

a) On source cluster (pworker)

isi_for_array -s "ps auwx | grep migr_pworker | egrep -v 'bandwidth|monitor|bin|grep'"

b) On target cluster (sworker)

isi_for_array -s "ps auwx | grep migr_sworker | egrep -v 'bandwidth|monitor|bin|grep'"

You can of course pipe to "wc -l" if you just want a total sum.
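
For example, a quick sketch using the same pworker command from above (the count will include one line per matching pworker process on each node, so it gives the cluster-wide total):

isi_for_array -s "ps auwx | grep migr_pworker | egrep -v 'bandwidth|monitor|bin|grep'" | wc -l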

v7.x

isi sync jobs report -v -M workers

It provides a *wealth* of information about the activity of the workers.
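If you just want a total count on v7 as well, you could pipe that report through the same sort of filter. Note that the pattern below assumes each active worker appears on its own line containing the word "worker", which you should verify against the actual report output on your cluster:

isi sync jobs report -v -M workers | grep -ci worker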

