4 Operator

 • 

2.8K Posts

May 13th, 2009 07:00

There are other limits .. every RF port supports up to 8 or 16 groups (depending on code level) .. so if you have 4 RF ports you can have up to 16 groups (using all 4 ports in every group) or up to 64 groups (but with only 1 RF port per group).

If you want more than 8/16 RDFGs on a single processor you need an RPQ.
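The arithmetic above can be sketched like this (a dry run using the figures from this thread, which are code-level dependent, not universal values):

```shell
# Total groups = (RF ports * groups each port may join) / RF ports used per group.
# Figures from this thread: 4 RF ports, up to 16 groups per port on newer code.
ports=4
per_port=16
echo $(( ports * per_port / 4 ))   # all 4 ports in every group -> 16 groups
echo $(( ports * per_port / 1 ))   # 1 RF port per group       -> 64 groups
```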

4 Operator

 • 

5.7K Posts

May 13th, 2009 05:00

We used to have a single large group for SRDF/A which contained ALL replicated symdevs. The downside of this was that when we needed to expand the group, the whole group had to be suspended during the expansion. If we had divided it into smaller groups, not all of the replication would have had to be brought down.

You'll always learn down the road, which is good ;)

4 Operator

 • 

2.8K Posts

May 13th, 2009 05:00

I guess the main reason for having 4 "tiny" SRDF/A groups is simply ease of management. With SRDF/A all devices belonging to an rdfg are managed as a whole: you must split/failover/set async/set sync/set acp_disk all devices in the same rdfg together (as you noted).

Since you have different clusters on the R1 side (and possibly hosts on the R2 site able to "see" and use the R2 devices in case you need them), having a different RDFG for each cluster allows you to fail over a single cluster to the R2 box without affecting all the other clusters.
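As a rough sketch of what a per-cluster group buys you (the device-group name is hypothetical, and this is a dry run that only prints the command, since it needs a real Symmetrix behind it):

```shell
# Fail over only cluster1's RDF group; the other clusters' groups keep replicating.
# Dry run: echo the symrdf command instead of executing it.
grp="cluster1_dg"
echo symrdf -g "$grp" failover -nop
```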

Unfortunately there aren't only "best practices" .. we also have customer requirements that we must meet. :)

Message was edited by:
Stefano Del Corno

4 Operator

 • 

2.8K Posts

May 13th, 2009 06:00

Rob, it also depends on how many groups you need .. Older code had limits (64 RDFGs per box) that recently grew to 256. When we built our first STAR we had 100+ hosts on the R1 side and only 64 RDFGs .. which effectively became 32, since STAR requires 2 groups for every box. So we threw all devices into a single RDFG :-)

Now, with a much higher limit (and also fewer hosts), we might choose a different approach .. now, 3 years (and 2 major code releases) later :-)

1 Rookie

 • 

41 Posts

May 13th, 2009 06:00

So with our shiny new DMX-4 on 5773 code we could essentially set up our SRDF/A groups at a fairly granular level, allowing us to perform actions on individual servers if necessary. I guess it's like using the async RDF groups as device groups in many ways.

1 Rookie

 • 

41 Posts

May 28th, 2009 03:00

So what would our limit be on a DMX-4 running 5773 microcode? We currently use 2 FA's (one in each fabric) for replication.

4 Operator

 • 

2.8K Posts

May 28th, 2009 04:00

16 RDFG max without an RPQ. :-)

4 Operator

 • 

2.8K Posts

May 28th, 2009 04:00

RPQ = a long and boring process that your sales team will carry out to qualify (and support) configurations outside the allowed boundaries :D

1 Rookie

 • 

41 Posts

May 28th, 2009 04:00

thanks for the prompt reply Stefano, so I guess we can't be that granular with the async RDF groups (e.g. a group for each cluster). I figure we'll have to lump them into larger groups and then, if an individual cluster needs to be failed over, move it into its own group temporarily.
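Later Solutions Enabler releases have a movepair action for re-homing device pairs into another RDF group; the SID, group numbers, and pair file below are placeholders, and the exact flags should be checked against your SRDF CLI guide for that code level. A dry-run sketch:

```shell
# Move one cluster's device pairs into their own temporary RDF group.
# Pairs normally have to be suspended first. Dry run: echo only.
sid=1234; old_rdfg=10; new_rdfg=20
echo symrdf -sid "$sid" -rdfg "$old_rdfg" -file cluster1_pairs.txt suspend -nop
echo symrdf -sid "$sid" -rdfg "$old_rdfg" -file cluster1_pairs.txt movepair -new_rdfg "$new_rdfg" -nop
```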

RPQ?

9 Legend

 • 

20.4K Posts

May 30th, 2009 06:00

RPQ = Request for Product Qualification

4 Operator

 • 

2.8K Posts

June 1st, 2009 02:00

Just about the same :D

1 Message

June 18th, 2009 06:00

Having 1 large group with many devices from different hosts is a bad idea.

Best practice is to have at least one group per host / cluster system. That way you can enable "consistency" protection for that SRDF/A group:

symrdf -g group_name enable -nop

The whole point of SRDF/A replication is to have a "consistent" state across all devices inside the group. That way you can be sure the application is consistent and recovery can be performed.

For larger applications, you can even create several groups per host. Everything depends on the application itself.
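Applied across several per-host groups, the enable step might look like this (the group names are made up, and the commands are echoed as a dry run rather than executed):

```shell
# Enable SRDF/A consistency protection on each per-host group.
# Dry run: print the commands instead of running them.
for grp in oradb_dg exch_dg fileserv_dg; do
    echo symrdf -g "$grp" enable -nop
done
```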

2 Intern

 • 

292 Posts

June 23rd, 2009 08:00

Danail, welcome to the forums. Be sure to introduce yourself in the coffee break area when you get a chance.