Building a core/edge fabric means using a pair of big directors (your core) to connect your storage arrays and tape libraries (the devices you want to share across the SAN/fabric), and small departmental switches at the edge to expand connectivity to your hosts.
Core/edge means oversubscription. Core/edge means you need many ISLs between your edge and the core. If you have a 16-port edge switch and use 2 of those ports for ISLs to the core, you'll have up to 7 hosts for every ISL (that's where oversubscription comes into play). The big advantage of the core/edge architecture is that it's easy to expand: if you are putting too much pressure on your ISLs, you can plug a third ISL between the core and the edge without disrupting the fabric. You can also run a different number of ISLs between each edge and the core, depending on the workload generated by the hosts on that edge.
If you want to use Brocade Trunking, all the ISL ports must come from the same ASIC port group (a quad of 4 ports on small switches, 8 ports on bigger ones). If you don't use Trunking you can see uneven use of the ISLs from a single edge to the core, and adding another ISL may even make things worse (it depends on the FSPF algorithm and on how the switch chooses which ISL to use for any given port).
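As a rough illustration, that port-group constraint can be sketched in a few lines of Python (the contiguous, zero-based grouping is an assumption for the example; check your own switch's ASIC layout before cabling):

```python
def same_trunk_group(port_a: int, port_b: int, group_size: int = 4) -> bool:
    """Two ports can join a Brocade trunk only if they sit in the same
    ASIC port group (a "quad" of 4 on small switches, 8 on bigger ones).
    Groups are assumed contiguous: ports 0-3, 4-7, ... for size 4."""
    return port_a // group_size == port_b // group_size

# Ports 2 and 3 share the first quad; ports 3 and 4 straddle a boundary.
assert same_trunk_group(2, 3) is True
assert same_trunk_group(3, 4) is False
# With 8-port groups (bigger switches), ports 9 and 14 can trunk together.
assert same_trunk_group(9, 14, group_size=8) is True
```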
I've heard of a so-called "open trunking" but have never given it a try.
I'm not sure what you mean. Do you mean you have a fabric with multiple switches connected to each other by more than one ISL each? You can't do trunking/port-channeling to the DMX itself; only switches can do that amongst each other.
What I would do is make sure each FA port has the dedicated bandwidth it can theoretically fill (2 or 4Gb) to its connected switch, and from there run as many ISLs to each edge switch as you think is wise given your oversubscription target. So if you connect 10 DMX ports to 4Gb ports on a core switch (40Gb in total) and you want an oversubscription of 1:20 (so at most 200 hosts), you can attach this core switch to edges which will hold those 200 hosts. Divide the 40Gb by 200 hosts to see what bandwidth you get per host: 200Mb. The ISLs between the core and each edge must then be able to transport all data to and from all the hosts on that switch, so if you have an edge with 20 hosts on it, you'll need 20 x 200Mb = 4Gb; 2 x 2Gb ISLs will do the trick, or simply 1 x 4Gb (but there's no redundancy there).
If your oversubscription is 1:40 and you have 10 storage ports in the fabric, you can have 400 hosts. Each host then gets 40Gb/400 = 100Mb. Use the previous example to do the math.
This is all theoretical, because no two hosts are identical: some hosts need 1Gb dedicated, and some can deal with as little as 50Mb or so.
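The arithmetic above can be sketched in a few lines of Python (the function names and the even per-host split are illustrative assumptions for this back-of-the-envelope method, not a sizing tool):

```python
def hosts_supported(storage_ports: int, oversubscription: int) -> int:
    # 10 storage ports at 1:20 oversubscription -> up to 200 hosts.
    return storage_ports * oversubscription

def per_host_mbps(storage_ports: int, port_gbps: float, hosts: int) -> float:
    # Total storage bandwidth, split evenly across all hosts.
    return storage_ports * port_gbps * 1000 / hosts

def isl_gbps_needed(hosts_on_edge: int, mbps_per_host: float) -> float:
    # An edge's ISLs must carry the combined share of its hosts.
    return hosts_on_edge * mbps_per_host / 1000

hosts = hosts_supported(10, 20)       # 200 hosts
share = per_host_mbps(10, 4, hosts)   # 200.0 Mb per host
need = isl_gbps_needed(20, share)     # 4.0 Gb -> e.g. 2 x 2Gb ISLs
print(hosts, share, need)             # 200 200.0 4.0
```

Rerunning with a 1:40 ratio reproduces the second example: 400 hosts at 100Mb each.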
One of the difficult tasks when configuring a multi-switch fabric is figuring out how much traffic will flow from one switch to another.
Are you going to build a full-mesh topology? Are you going to use ISLs only to manage the switches as one big fabric while trying to keep traffic confined to each switch? Are you trying to set up a core/edge fabric?
Choosing the number of ISLs can be a real nightmare, since some brands let you trunk multiple links for better reliability and throughput while other brands won't allow it. In a word: it depends. What brand of switches are you using? I suppose Brocade, since you posted in this area of the forums.
The only general rule I'd suggest (even if I can find examples that break my own rule) is that each and every switch must have AT LEAST 2 ISLs (for a little bit of redundancy), and it's better if the two links go to different peers in the mesh.
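That rule is easy to sanity-check on paper: walk the ISL list and verify that each switch has at least two links going to at least two different peers. A minimal Python sketch (the switch names and example fabric are hypothetical):

```python
from collections import defaultdict

def redundancy_ok(isls):
    """For each switch, check the rule above: at least 2 ISLs,
    going to at least 2 different peer switches."""
    peers = defaultdict(list)
    for a, b in isls:            # each ISL is an (endpoint, endpoint) pair
        peers[a].append(b)
        peers[b].append(a)
    return {sw: len(links) >= 2 and len(set(links)) >= 2
            for sw, links in peers.items()}

# 'edge1' hangs off a single ISL, so it fails the rule.
fabric = [("core1", "core2"), ("core1", "edge1"),
          ("core1", "edge2"), ("core2", "edge2")]
print(redundancy_ok(fabric))
# {'core1': True, 'core2': True, 'edge1': False, 'edge2': True}
```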
I'm also adding another switch to an existing switch, and will be using 2 ports on both switches for the ISL link. I'd like to know if there's a need to trunk them. I know one of the benefits of trunking is that if one link goes down, the aggregate still works with less bandwidth; but just to utilize 2 ports for the ISL link, do I absolutely have to trunk them? I'm using a SilkWorm 4400, I think, and the second switch only connects to the existing switch so we can attach more hosts to the SAN box, so it is core/edge...? Thanks for your help in advance!
This concept is called ISL trunking. The main switch is called the "core", and one or more "edge" switches connect to and talk to the core.
IMHO you are simply expanding your fabric. You can call it a full-mesh topology, a core/edge, or whatever you want; with only 2 switches you can't really talk about a "topology". You just have a bigger fabric made of 2 smaller switches.
You can start talking about a "topology" when you have at least 3 switches, and depending on the size and kind of switches you are using, you can say whether it's a core/edge, a full mesh, or something in between.
Using 2 switches is a "dash" topology: a small line between two points.
Using trunking avoids a fabric reconfiguration when an ISL goes down, and you make better use of the available bandwidth. That's it.
I didn't make myself clear. I will be using 2 ports on each switch, so a total of 4 ports with 2 fibre cables connecting them. My question is: do I have to trunk the 2 separate links into one logical link? I believe it's called trunking, but maybe I have the terminology confused...
You don't have to, but I certainly would, for 2 reasons: load balancing and failover.
Load balancing: the 2 ISLs will each carry about 50% of the total load. If you don't trunk them, there's no way of telling what the split will be. It could be 90/10, maybe 45/55, but you can't be sure. Failover: if 1 of the 2 fails, all traffic is automatically redirected to the remaining ISL, without any hiccups.
You mean why the load isn't evenly spread across all ISLs? Good question. In this case I must admit I don't really know why. I do know it's not 50/50 most of the time (or 33/33/33, or whatever), and that you have no influence over it.
Come to think of it, don't 2 separate ISL links also have built-in quasi-failover? I mean, if 1 link fails, doesn't all traffic go through the surviving link anyway? Also, I heard somewhere that if you have an enterprise license, buying a trunking license isn't necessary. How do I find out whether my switches have that?
You are right: if 1 ISL fails, the remaining traffic eventually goes through the remaining links. And the trunking license is indeed included in the enterprise license, at least in the ones I've dealt with.
Try the command "licenseshow", or type help or ? at the command line. If you're in the GUI, click the "admin" button and go to the license tab.
Thank you RRR and everyone else for your help! May I ask why there isn't a pattern if the 2 links are not trunked?
AFAIK the fabric protocol will assign half of the ports to the first ISL and the other half to the second ISL, without looking at the actual workload of each port/ISL. If you have a 16-port switch and 2 ISLs, you might get ports 1-7 on the first ISL and ports 8-14 on the other (only the source of the frames is used when choosing which ISL the frames run through).
If ports 1, 2 and 3 are working and all other ports are idle, you'll have 100% of the I/O on the first ISL and 0% on the second.
If ISL 1 breaks, FSPF will try to find another path between the switches. This causes a brief disruption (due to the fabric reconfiguration) but nothing more.
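A small Python sketch of that static, source-port-based assignment shows how the load can end up completely lopsided. The halving scheme mirrors the 16-port example above and is an assumption for illustration, not the actual FSPF routing implementation:

```python
def assign_isls(host_ports, n_isls):
    """Static source-port pinning as described above: the first half of
    the ports goes to ISL 0, the second half to ISL 1, regardless of
    how busy each port actually is. Assumes an even split."""
    half = len(host_ports) // n_isls
    return {p: min(i // half, n_isls - 1)
            for i, p in enumerate(host_ports)}

# 14 host ports, 2 ISLs -> ports 1-7 pinned to ISL 0, ports 8-14 to ISL 1.
routes = assign_isls(list(range(1, 15)), 2)

# Only ports 1, 2 and 3 are busy (100 Mb each): all traffic lands on ISL 0.
load = {0: 0, 1: 0}
for port, mb in {1: 100, 2: 100, 3: 100}.items():
    load[routes[port]] += mb
print(load)  # {0: 300, 1: 0}
```

Trunking avoids this skew by striping frames across the links at the frame level instead of pinning each source port to one ISL.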
[Forum thread posted January 15th - February 4th, 2008; posts by RRR, xe2sdc, SKT2 and tomrbc, with one post edited by Stefano Del Corno.]