There are other limits: every RF port supports up to 8 or 16 groups (depending on code level). If you have 4 RF ports you can have up to 16 groups (using all 4 ports in every group) or up to 64 groups (but with only 1 RF port per group).
In case you want more than 8/16 RDFGs on a single processor, you need an RPQ.
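The slot arithmetic above can be sketched as follows. The 4-port and 16-groups-per-port figures come from the post; the function itself is purely illustrative, not any Symmetrix API:

```python
def max_rdf_groups(rf_ports: int, ports_per_group: int, groups_per_port: int = 16) -> int:
    """Upper bound on RDF groups: each RF port has `groups_per_port` slots,
    and a group consumes one slot on every port it uses."""
    total_slots = rf_ports * groups_per_port
    return total_slots // ports_per_group

# 4 RF ports, every group spread across all 4 ports -> 16 groups
print(max_rdf_groups(4, 4))   # 16
# 4 RF ports, only 1 port per group -> 64 groups
print(max_rdf_groups(4, 1))   # 64
```

With the lower 8-groups-per-port code level the same call gives 8 and 32 respectively.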
We used to have a single large group for SRDF/A that contained ALL replicated symdevs. The downside was that whenever we needed to expand the group, the whole group had to be suspended during the expansion. If we had divided it into smaller groups, not all of the replication would have had to be brought down.
I guess the main reason for having 4 "tiny" SRDF/A groups is simply ease of management. With SRDF/A, all devices belonging to an RDFG are managed as a single unit: you must split/failover/set async/set sync/set acp_disk all devices in the same RDFG together (as you noted).
Since you have different clusters on the R1 side (and possibly hosts on the R2 site able to "see" and use the R2 devices if needed), having a separate RDFG for each cluster allows you to fail over a single cluster to the R2 box without affecting all the other clusters.
Unfortunately there aren't only "best practices"; we also have customer requirements that we must meet.
Rob, it also depends on how many groups you need. Older code had limits (64 RDFGs per box) that recently grew to 256. When we built our first STAR we had 100+ hosts on the R1 side and only 64 RDFGs, which effectively became 32 since STAR requires 2 groups for every box. So we threw all devices into a single RDFG.
Now, 3 years (and 2 major code releases) later, with a much higher limit (and also fewer hosts), we might choose a different approach.
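The trade-off described here (100+ hosts against 64 vs. 256 group limits, halved by STAR) can be captured in a small planning sketch. The numbers are from the post; the function and its wording are just an illustration:

```python
def plan_granularity(num_hosts: int, rdfg_limit: int, star_groups_per_box: int = 2) -> str:
    """Decide whether one-RDFG-per-host is feasible, given that STAR
    consumes `star_groups_per_box` RDF groups on each box."""
    usable = rdfg_limit // star_groups_per_box
    if num_hosts <= usable:
        return "one RDF group per host"
    return f"consolidate: {num_hosts} hosts but only {usable} usable groups"

print(plan_granularity(100, 64))   # the old situation: consolidate into few big groups
print(plan_granularity(100, 256))  # after the limit grew: one group per host fits
```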
So with our shiny new DMX-4 on 5773 code we could essentially set up our SRDF/A groups at a fairly granular level, allowing us to perform actions on individual servers if necessary. I guess it's like using the async RDF groups as device groups, in many ways.
Thanks for the prompt reply, Stefano. So I guess we can't be that granular with the async RDF groups (e.g. a group for each cluster). I figure we'll have to lump them into larger groups and then, if an individual cluster needs to be failed over, move it into its own group temporarily.
Having one large group with many devices from different hosts is a bad idea.
Best practice is to have at least one group per host/cluster system. That way you can enable "consistency" protection for that SRDF/A group:
symrdf -g group_name enable -nop
The whole point of SRDF/A replication is to have a "consistent" state across all devices inside the group. That way you can be sure the application data is consistent and recovery can be performed.
For larger applications you can even create several groups per host. Everything depends on the application itself.
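The one-group-per-cluster practice lends itself to generating the consistency-enable commands mechanically. The cluster names and the `_rdfg` naming convention below are hypothetical, and the generated line assumes the usual `symrdf ... enable -nop` form quoted in this thread:

```python
def consistency_enable_cmds(clusters):
    """One SRDF/A group per cluster, each with consistency protection enabled.
    Group names (<cluster>_rdfg) are an assumed convention, not a standard."""
    return [f"symrdf -g {name}_rdfg enable -nop" for name in clusters]

# Hypothetical cluster names for illustration
for cmd in consistency_enable_cmds(["clusA", "clusB", "clusC"]):
    print(cmd)
```

Keeping group names derived from cluster names this way also makes it obvious which RDFG to fail over when a single cluster needs to move to the R2 side.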
xe2sdc (May 13th, 2009 07:00)
RRR (May 13th, 2009 05:00)
You'll always learn down the road, which is good
xe2sdc (May 13th, 2009 05:00)
xe2sdc (May 13th, 2009 06:00)
BHORAN1 (May 13th, 2009 06:00)
BHORAN1 (May 28th, 2009 03:00)
xe2sdc (May 28th, 2009 04:00)
BHORAN1 (May 28th, 2009 04:00)
RPQ?
dynamox (May 30th, 2009 06:00)
xe2sdc (June 1st, 2009 02:00)
danailp1 (June 18th, 2009 06:00)
MikeMac1 (June 23rd, 2009 08:00)