
November 26th, 2014 12:00

Isilon replication validation

Is there a way to validate replication between 2 Isilon clusters?  Management wants to be assured that files that reside on our primary cluster are being replicated to our secondary cluster.  Are there best practices?

65 Posts

November 26th, 2014 12:00

Hello,

Absolutely, there is documentation on this. You can also sync a test directory, then break the association on the target and allow writes on the target directory (SyncIQ requires it to be read-only for sync purposes), and validate that the information was replicated as expected.
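Roughly, the test looks like this from the CLI (exact subcommands vary by OneFS release; the policy and file names here are only placeholders):

# On the source cluster, run the test policy manually
# (hypothetical policy name "sync_test").
isi sync jobs start sync_test

# On the target cluster, break the target association, which removes
# the read-only protection SyncIQ enforces on the target directory.
isi sync target break sync_test

# Spot-check that source and target contents match. OneFS nodes run
# FreeBSD, so md5 is available (file path is just an example); run the
# same command on both clusters and compare the checksums.
md5 /ifs/data/snap_test/somefile.txt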

https://support.emc.com/docu39531_White_Paper:_Best_Practices_for_Data_Replication_with_EMC_Isilon_SyncIQ.pdf?language=en_US

https://support.emc.com/docu45319_SyncIQ_2.5_Best_Practices_Guide.pdf?language=en_US

Hope that helps

3 Apprentice • 592 Posts

December 1st, 2014 10:00

Will this output be OK?  isi sync target view <policy_name>


se-sandbox2-cluster-1# isi sync target view sync_test_111
Name: sync_test_111
Source: se-sandbox2-cluster
Target Path: /ifs/data/snap_test
Last Job State: finished
FOFB State: writes_disabled
Source Cluster GUID: 0007430798fa78d410534814be52b61ab572
Last Source Coordinator IP: 10.111.158.204
Legacy Policy: No
Last Update: 2014-11-11T18:01:14

450 Posts

December 1st, 2014 14:00

PDC,

By the way, I like the screen name (primary domain controller?).  Anyway, each SyncIQ policy has a file-integrity validation setting, which is enabled by default, so if you're asking whether the MD5 checksums match on source and target, the answer is yes.  If instead you're asking what the status of a policy is, and whether it is running properly as scheduled, there is another way to handle that.  Since you seem pretty comfortable with the CLI, I'll give it to you as a simple for loop.  Note that the syntax varies a little from MR to MR.

isi01-1# for name in `isi sync policies list -a -z | awk -F ' ' '{print $1}'`
do
isi sync reports list --policy-name $name -l 5
done

Policy Name  Job ID  Start Time          End Time            Action  State
-----------------------------------------------------------------------------
foomatt      3       2014-12-01T17:22:33 2014-12-01T17:22:42 run     finished
foomatt      2       2014-11-11T03:08:50 2014-11-11T03:08:57 run     finished
foomatt      1       2014-11-11T03:07:47 2014-11-11T03:07:50 run     finished
-----------------------------------------------------------------------------
Total: 3

So you can see that on my lab cluster this loop gets the last 5 passes for each SyncIQ policy configured on the cluster.  A state of 'finished' means the job succeeded, and because SyncIQ is based on block-level incrementals, that means the whole policy is in a good state.  This particular cluster has only one policy, and it has only been run 3 times, so it's not a great example of the output you might be looking for.  Of course, you can redirect this to a text file on an SMB share and let your managers have access to that share.  Would that cover your needs?
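For example, to capture everything in one file (the output path is only an illustration, assuming a directory you have shared out over SMB):

for name in `isi sync policies list -a -z | awk -F ' ' '{print $1}'`
do
isi sync reports list --policy-name $name -l 5
done > /ifs/data/reports/synciq_status.txt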

Hope this helps,

Chris Klosterman

Senior Solution Architect

EMC Isilon Offer & Enablement Team

chris.klosterman@emc.com

twitter: @croaking

6 Posts

December 1st, 2014 14:00

Thanks for the replies!  I should have been a little clearer: we are looking for a report that would run daily to show that what is on the source Isilon cluster has replicated to the target.  Something like what DPA does with Avamar, showing successes, failures, utilization percentage, etc.  I am able to get a "yes, it ran successfully" report via the GUI or the CLI using isi sync target view, but that doesn't give the details our management is seeking...

450 Posts

December 2nd, 2014 08:00

Some extra content to help clarify an offline question from PDC about what each step of the command I listed does.

# Start the loop: name is a variable, and I'm asking for the names of
# all the SyncIQ policies on the cluster. The loop iterates through
# each one, putting the policy name into the variable $name. I only
# wrote it this way so that you could have a single block of text to
# check all your policies at once, rather than typing in each one
# manually. If you only have one policy this doesn't help any, but if
# you have more than one, it's an easy script to save as a .sh file
# and stick in cron so it runs automatically every day or so.
for name in `isi sync policies list -a -z | awk -F ' ' '{print $1}'`
# This starts the loop body.
do
# This lists the last 5 passes of each SyncIQ policy. I think your
# error came from the last few characters, so note that it is dash,
# lowercase L, space, 5. You could make that 12, or 24, or whatever:
# if a policy runs every 2 hours and you generate this report once per
# day, you want info on the last 12 passes. Truth be told, only the
# last incremental pass is important, because if it completed
# successfully then you're in a great position: your block-level copy
# matches.
isi sync reports list --policy-name $name -l 5
# This says start over with the next $name, until we run out.
done

You could put the whole thing into a .sh file like this:

#!/bin/zsh
# filename: check_rep_status.sh
for name in `isi sync policies list -a -z | awk -F ' ' '{print $1}'`
do
isi sync reports list --policy-name $name -l 5 > /ifs/repstatuscheck/synciqstatus.$name.$(date +%Y-%m-%d).log
done

On node 1:

save that as /root/check_rep_status.sh

chmod +x /root/check_rep_status.sh

then:

touch /etc/local/crontab.local
vi /etc/local/crontab.local

and put in a cron entry to start the script every day.
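For example, an entry like the following would run the report every morning (the 06:00 run time is just an assumption; depending on your OneFS release, the system crontab format may also require a user column, e.g. root, before the command):

# minute hour day-of-month month day-of-week  command
0 6 * * * /root/check_rep_status.sh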

For an introduction to cron, look here:

http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/

The output logs are put into the directory you see listed above, with one file per policy, per day.  You can change it around however you want. As I mentioned, you could certainly share out this directory with an NFS export or a CIFS share. You can also change the format to output as a CSV for importing into Excel or something else. You have lots of options open to you.  Two things, however:

  1. Isilon Support will support the CLI commands you're using, but not the custom script.
  2. crontab entries and the /root directory are specific to the node you're running them on and are not cluster-wide.  Any OneFS upgrade or node problem could potentially lose the script or the cron entries (reboots don't count in that category).  So, given that, back up your crontab and your script somewhere else, like a script subdirectory where you store the output logs; see the sketch after this list.
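A quick sketch of that backup step (the scripts subdirectory is just an assumption):

# Keep copies of the script and this node's crontab alongside the
# output logs on /ifs, which is visible cluster-wide.
mkdir -p /ifs/repstatuscheck/scripts
cp /root/check_rep_status.sh /ifs/repstatuscheck/scripts/
cp /etc/local/crontab.local /ifs/repstatuscheck/scripts/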

I gave you so much detail here because it sounds like you need to operationalize this, and as the old adage goes: work smarter, not harder.

  

Chris Klosterman, ICSP, ICIE, CCNA, VCP

Email: chris.klosterman@emc.com

Senior Solution Architect

Offer and Enablement Team

EMC² | Isilon Storage Division

6 Posts

December 2nd, 2014 14:00

Thank you, we have a script in place and will let you know the results.
