
September 2nd, 2014 09:00

Excluding directories from SyncIQ after policy has been active?

I am moving some new apps to a directory that has been getting replicated (via SyncIQ running in Copy mode) for some months. I would like to exclude a certain work folder from replication, since it does not need to be replicated. However, the documentation says that changing the exclude/include settings will result in a full re-sync. That would take weeks, as we have many TB already replicated across a slow WAN link. Is there a way around this resync? It seems wasteful when all I am doing is telling SyncIQ to skip a folder. Should I be creating a new directory reserved for these sorts of working areas that will never get touched by SyncIQ? Some apps cannot be reconfigured in that manner. Any suggestions out there?

September 3rd, 2014 06:00

This sucks, but it's the way it is. I have submitted a related feature request: if an excluded directory is deleted in OneFS, the policy will refuse to run.

Please contact your account team and put in a feature request; being able to exclude directories after they've started syncing would be WELL received in the user community. I know I would use it.

My application is all NFS, so I'm creating new directories outside the sync path and then just mounting that path from the clients; a sketch is below. Changing the path once it's mounted, though, is a royal pain, since the volume has to be unmounted, and that's something that may take weeks or months to schedule around here.
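A minimal sketch of that layout (all paths and the export host here are hypothetical examples, not from my environment):

# On the cluster: keep the working area in a sibling tree that the
# replicated path (/ifs/data/app in this example) never covers.
mkdir -p /ifs/scratch/app1

# On each NFS client: mount the excluded tree alongside the replicated one.
mount -t nfs cluster:/ifs/data/app /mnt/app
mount -t nfs cluster:/ifs/scratch/app1 /mnt/app/work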


September 3rd, 2014 19:00

Have you checked with support for the existing "target aware" sync feature? It scans both sides and then transmits only the diffs; it takes a lot of CPU but saves bandwidth compared to a "full sync".

It's mentioned in the SyncIQ Best Practices paper:

http://www.emc.com/collateral/hardware/white-papers/h8224-replication-isilon-synciq-wp.pdf
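That feature appears to correspond to the "Target Compare Initial Sync" field shown in the policy listings later in this thread. A hedged guess at the CLI, with the flag name and value syntax assumed from that field name (verify with support before relying on it):

isi sync policies modify <policy-name> --target-compare-initial-sync on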

-- Peter


December 22nd, 2014 20:00

Hello Davek,

Good day!

Below are the steps I performed in the lab:

  1. Run the policy and ensure the data is synchronized.
  2. Disable the policy.
  3. Snapshot the relevant directory as a safeguard. (I'm aware the data won't actually be freed until the snapshot expires.)
  4. Start a TreeDelete job pointed at the relevant directory.
  5. Ensure the job completes successfully.
  6. Recreate an empty subdirectory with the same name, in the same place in the directory structure as before.
  7. Edit the policy and add the exclusion.
  8. Reset the job state.
  9. Enable the policy.
  10. Run the policy.
  11. Monitor the target to ensure its data is kept intact.
  12. Once the sync has completed, verify that the remote directory is still intact.
  13. Remove the local (very large) snapshot.

Source Cluster

Thor-1# isi sync policies list -v
                         ID: 66c3fa44692c947cf5b11397e943ff65
                       Name: testing
                       Path: /ifs/home/sarath09
                     Action: sync
                    Enabled: Yes
                     Target: 10.111.187.225
                Description:
            Check Integrity: Yes
 Source Include Directories: -
 Source Exclude Directories: -
              Source Subnet: -
                Source Pool: -
      Source Match Criteria:
                Target Path: /ifs/home
    Target Snapshot Archive: No
    Target Snapshot Pattern: SIQ_%{SrcCluster}_%{PolicyName}_%Y-%m-%d_%H-%M
 Target Snapshot Expiration: Never
      Target Snapshot Alias: SIQ_%{SrcCluster}_%{PolicyName}
Target Detect Modifications: Yes
    Source Snapshot Archive: No
    Source Snapshot Pattern:
 Source Snapshot Expiration: Never
                   Schedule: Manually scheduled
                  Log Level: notice
          Log Removed Files: No
           Workers Per Node: 3
             Report Max Age: 1Y
           Report Max Count: 2000
            Force Interface: No
    Restrict Target Network: No
Target Compare Initial Sync: No
                Disable Stf: No
          Expected Dataloss: No
               Disable Fofb: No
         Disable File Split: No
Changelist creation enabled: No
                    Resolve: -
             Last Job State: needs_attention
               Last Started: 2014-12-09T09:05:46
               Last Success: 2014-12-09T09:05:46
               Password Set: No
                 Conflicted: No
             Has Sync State: Yes

Thor-1# cd /ifs/home/sarath09
Thor-1# ls
dir_1   dir_10  dir_2   dir_3   dir_4   dir_5   dir_6   dir_7   dir_8   dir_9
Thor-1# ls -al
total 56
drwxr-xr-x  12 root  wheel  231 Dec  9 08:30 .
drwxrwxr-x   5 root  wheel   70 Dec  9 08:29 ..
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_1
drwxr-xr-x   2 root  wheel   25 Dec  9 08:30 dir_10
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_2
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_3
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_4
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_5
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_6
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_7
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_8
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_9

Thor-1# isi sync policies disable --all
Thor-1# isi snapshot snapshots create --name=synciq --path=/ifs/home/sarath09/dir_1 --expires=1Y
Thor-1# isi job jobs start TreeDelete --paths=/ifs/home/sarath09/dir_1
Started job [218]
Thor-1# isi job status
The job engine is running.

Running and queued jobs:
ID   Type        State      Impact  Pri  Phase  Running Time
-------------------------------------------------------------
218  TreeDelete  Succeeded  Medium  4    1/1    34s
-------------------------------------------------------------

Thor-1# mkdir -p /ifs/home/sarath09/dir_1

Thor-1# ls -al
total 56
drwxr-xr-x  12 root  wheel  231 Dec 10 03:34 .
drwxrwxr-x   5 root  wheel   70 Dec  9 08:29 ..
drwxr-xr-x   2 root  wheel    0 Dec 10 03:34 dir_1
drwxr-xr-x   2 root  wheel   25 Dec  9 08:30 dir_10
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_2
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_3
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_4
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_5
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_6
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_7
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_8
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_9

Thor-1# isi sync policies list
Name     Path                Action  Enabled  Target
------------------------------------------------------------
testing  /ifs/home/sarath09  sync    No       10.111.187.225
------------------------------------------------------------
Total: 1

Thor-1# isi sync policies modify testing --source-exclude-directories=/ifs/home/sarath09/dir_1
Changing root path, include/exclude paths, or predicates will result in a
full synchronization of all data.
Are you sure? (yes/[no]): yes

Thor-1# isi sync policies list -v
                         ID: 66c3fa44692c947cf5b11397e943ff65
                       Name: testing
                       Path: /ifs/home/sarath09
                     Action: sync
                    Enabled: No
                     Target: 10.111.187.225
                Description:
            Check Integrity: Yes
 Source Include Directories: -
 Source Exclude Directories: /ifs/home/sarath09/dir_1
              Source Subnet: -
                Source Pool: -
      Source Match Criteria:
                Target Path: /ifs/home
    Target Snapshot Archive: No
    Target Snapshot Pattern: SIQ_%{SrcCluster}_%{PolicyName}_%Y-%m-%d_%H-%M
 Target Snapshot Expiration: Never
      Target Snapshot Alias: SIQ_%{SrcCluster}_%{PolicyName}
Target Detect Modifications: Yes
    Source Snapshot Archive: No
    Source Snapshot Pattern:
 Source Snapshot Expiration: Never
                   Schedule: Manually scheduled
                  Log Level: notice
          Log Removed Files: No
           Workers Per Node: 3
             Report Max Age: 1Y
           Report Max Count: 2000
            Force Interface: No
    Restrict Target Network: No
Target Compare Initial Sync: No
                Disable Stf: No
          Expected Dataloss: No
               Disable Fofb: No
         Disable File Split: No
Changelist creation enabled: No
                    Resolve: -
             Last Job State: needs_attention
               Last Started: 2014-12-09T09:05:46
               Last Success: 2014-12-09T09:05:46
               Password Set: No
                 Conflicted: No
             Has Sync State: No

Thor-1# isi sync policies reset --all
Resetting ALL policies so that they will ALL perform full replications
next time they run.  This could potentially be a VERY LARGE amount of
replication work.
Are you sure?? (yes/[no]): yes
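(Note: --all resets every policy on the cluster. If other policies exist, resetting only this one should be safer; I believe the per-policy form is simply the following, but verify the syntax on your OneFS version:

Thor-1# isi sync policies reset testing
)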

Thor-1# isi sync jobs start --policy-name=testing
2014-12-10T04:50:33Z <3.6> Thor-1(id1) isi_migrate[29678]: coord[testing:1418186094]: Next job phase STF_PHASE_IDMAP_SEND
2014-12-10T04:50:42Z <3.6> Thor-1(id1) isi_migrate[29678]: coord[testing:1418186094]: Renamed snapshot 'SIQ-66c3fa44692c947cf5b11397e943ff65-new' to 'SIQ-66c3fa44692c947cf5b11397e943ff65-latest'
2014-12-10T04:50:48Z <3.6> Thor-1(id1) isi_migrate[29678]: coord[testing:1418186094]: Finished job 'testing' (66c3fa44692c947cf5b11397e943ff65) to 10.111.187.225 in 0h 15m 57s with status success and 0 checksum errors

Thor-1# pwd
/ifs/home/sarath09
Thor-1# ls -al
total 56
drwxr-xr-x  12 root  wheel  231 Dec 10 03:34 .
drwxrwxr-x   5 root  wheel   70 Dec  9 08:29 ..
drwxr-xr-x   2 root  wheel    0 Dec 10 03:34 dir_1
drwxr-xr-x   2 root  wheel   25 Dec  9 08:30 dir_10
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_2
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_3
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_4
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_5
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_6
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_7
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_8
drwxr-xr-x   2 root  wheel   24 Dec  9 08:30 dir_9
Thor-1# ls -al dir_1
total 36
drwxr-xr-x   2 root  wheel    0 Dec 10 03:34 .
drwxr-xr-x  12 root  wheel  231 Dec 10 03:34 ..

Target Cluster

corvair-3# cd /ifs/home
corvair-3# ls
dir_1   dir_10  dir_2   dir_3   dir_4   dir_5   dir_6   dir_7   dir_8   dir_9
corvair-3# ls -al
total 58
drwxr-xr-x  12 root  wheel  231 Dec  9 03:30 .
drwxrwxrwx   9 root  wheel  191 Dec  8 12:58 ..
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_1
drwxr-xr-x   2 root  wheel   25 Dec  9 03:30 dir_10
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_2
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_3
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_4
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_5
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_6
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_7
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_8
drwxr-xr-x   2 root  wheel   24 Dec  9 03:30 dir_9

After the policy has completed, the target still has dir_1 with file_1 in it:

corvair-3# pwd
/ifs/home/dir_1
corvair-3# ls -al
total 350
drwxr-xr-x   2 root  wheel             24 Dec  9 03:30 .
drwxr-xr-x  12 root  wheel            231 Dec  9 22:34 ..
-rw-r--r--   1 root  wheel  1073741926400 Dec  9 03:30 file_1
corvair-3# cd ..
corvair-3# ls
dir_1   dir_10  dir_2   dir_3   dir_4   dir_5   dir_6   dir_7   dir_8   dir_9


December 23rd, 2014 21:00

chughh, can you please explain the process?


December 25th, 2014 18:00

Hello,

The process is the steps below, as performed in the lab; a consolidated command sketch follows the list.

  1. Run the policy and ensure the data is synchronized.
  2. Disable the policy.
  3. Snapshot the relevant directory as a safeguard. (I'm aware the data won't actually be freed until the snapshot expires.)
  4. Start a TreeDelete job pointed at the relevant directory.
  5. Ensure the job completes successfully.
  6. Recreate an empty subdirectory with the same name, in the same place in the directory structure as before.
  7. Edit the policy and add the exclusion.
  8. Reset the job state.
  9. Enable the policy.
  10. Run the policy.
  11. Monitor the target to ensure its data is kept intact.
  12. Once the sync has completed, verify that the remote directory is still intact.
  13. Remove the local (very large) snapshot.
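Mapped to the commands from my lab run above (the policy name "testing" and the path /ifs/home/sarath09/dir_1 are from that lab, so substitute your own; the enable syntax is assumed to mirror the disable command shown earlier):

Thor-1# isi sync policies disable --all
Thor-1# isi snapshot snapshots create --name=synciq --path=/ifs/home/sarath09/dir_1 --expires=1Y
Thor-1# isi job jobs start TreeDelete --paths=/ifs/home/sarath09/dir_1
Thor-1# mkdir -p /ifs/home/sarath09/dir_1
Thor-1# isi sync policies modify testing --source-exclude-directories=/ifs/home/sarath09/dir_1
Thor-1# isi sync policies reset --all
Thor-1# isi sync policies enable --all
Thor-1# isi sync jobs start --policy-name=testing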