If you only have 1 array and you have 4 ports left over, I would consider using those for the new DC. Is having 4 ports for this good enough?
If you ISL the switches together, all traffic between both DCs goes across the ISLs. On the other hand, if you create 2 x 4 ISLs (4 per trunk), performance is not a problem, since in each fabric 4 ports can never saturate a trunk that is equally "wide".
The advantage of ISL'ing the switches together is that every SP port can be zoned to all hosts, so if you find out that certain ports are heavily used while others aren't, you can rezone the hosts to get a better load spread. If you dedicate some ports to DC1 and others to DC2, you can't spread the load, since the DC1 ports aren't accessible to DC2 hosts.
I'm sure I've created a big black cloud here, but I hope you can catch my drift. I would create 4 ISLs between each pair of switches in the same fabric.
The downside is that you "lose" 4 x 4 ports for the trunks.
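For anyone who wants to sanity-check that arithmetic, here is a minimal sketch (plain Python; the 4 Gbit/s link speed and the single 4-ISL trunk per fabric are my assumptions, not stated in this thread) of why an equally wide trunk can't be saturated by the ports behind it, and what the trunks cost you in switch ports:

```python
# Rough sanity check of the trunk sizing above.
# Assumed numbers: 4 Gbit/s ports/ISLs, one 4-ISL trunk per fabric.
PORT_GBIT = 4            # assumed per-port / per-ISL speed
ISLS_PER_TRUNK = 4       # 4 ISLs trunked together in each fabric
PORTS_BEHIND_TRUNK = 4   # ports whose traffic has to cross the trunk
FABRICS = 2

trunk_bw = ISLS_PER_TRUNK * PORT_GBIT
feeding_bw = PORTS_BEHIND_TRUNK * PORT_GBIT
print(f"Trunk bandwidth per fabric: {trunk_bw} Gbit/s")
print(f"Worst-case traffic across it: {feeding_bw} Gbit/s")
print("trunk can be saturated" if feeding_bw > trunk_bw
      else "trunk is at least as wide as the ports feeding it")

# Port cost of the trunks: each ISL eats a port on both switches, in both fabrics.
ports_lost = ISLS_PER_TRUNK * 2 * FABRICS
print(f"Ports consumed by the trunks: {ports_lost}")  # the 4 x 4 mentioned above
```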
I would ISL them together just to have that flexibility, but I also try to keep my biggest I/O-generating systems on the same physical switch as my storage ports; other, smaller systems I'm OK with traversing an ISL. My 2 cents.
Indeed. I'm so used to having all licenses that I completely forgot about that.
If you can move servers around, I agree with Dynamox: put the heavy-duty servers on the same switches as the storage array and put the lighter ones on the switches in the 2nd DC. You can save 1 or 2 ISLs there, since you probably won't be using the bandwidth anyway.
Actually I don't have control over which servers go where. We already have some servers connected to the Fibre Channel switches that attach directly to the storage array.
The new DC will have a mix of high and low utilization servers. We already bought trunking licenses.
Here is the current port layout and what I'm planning for the new switches:
A0/B1/A2/A3 connect to switch1.
B0/A1/B2/B3 connect to switch2.
All production servers use A0/B1 on the switch1 side and A1/B0 on the switch2 side. A2/B2 are reserved for TEST/DEV servers.
A3/B3 are used for MirrorView with the other fabric in another building.
I am thinking about connecting switch3 to A4/B5/A6/B7 and switch4 to B4/A5/B6/A7.
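To make that cabling plan easier to eyeball, here is the same layout written out as data (a hypothetical Python representation, not anything the array or switches produce), with a quick check that no SP port is cabled twice:

```python
# SP-port-to-switch layout described above; switch3/switch4 are the proposed new-DC switches.
sp_port_layout = {
    "switch1": ["A0", "B1", "A2", "A3"],  # existing: prod on A0/B1, TEST/DEV on A2, MirrorView on A3
    "switch2": ["B0", "A1", "B2", "B3"],  # existing: prod on B0/A1, TEST/DEV on B2, MirrorView on B3
    "switch3": ["A4", "B5", "A6", "B7"],  # proposed, new DC
    "switch4": ["B4", "A5", "B6", "A7"],  # proposed, new DC
}

# No SP port should be connected to more than one switch.
all_ports = [p for ports in sp_port_layout.values() for p in ports]
assert len(all_ports) == len(set(all_ports)), "an SP port is cabled twice"

# Show the SP A / SP B split per switch.
for switch, ports in sp_port_layout.items():
    a = sum(p.startswith("A") for p in ports)
    print(f"{switch}: {a} x SP A, {len(ports) - a} x SP B")
```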
I think ISL'ing the switches together is the best option anyway, and since you can't move servers around at all, you'd have to connect them to a local switch. Traffic that goes across the trunks can then be either low or high bandwidth, so I would focus on how many ISLs you will actually need between both DCs.
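If it helps, a trivial way to put a number on "how many ISLs do we actually need" (the figures below are placeholders, not measurements from this environment):

```python
from math import ceil

PORT_GBIT = 4             # assumed ISL speed
cross_dc_peak_gbit = 10   # hypothetical peak DC1 <-> DC2 traffic per fabric

# At least 2 ISLs per fabric for redundancy, more if the expected peak demands it.
isls_per_fabric = max(2, ceil(cross_dc_peak_gbit / PORT_GBIT))
print(f"ISLs needed per fabric: {isls_per_fabric}")
```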
Some of this depends on how much more growth you think you might be seeing in the datacenters, but here is what I would do.
ISL the new switches to the old switches with two ISLs between each
Connect the new storage ports to the new switches
Try to maintain "locality" as much as possible as you grow
This means zoning hosts on a given switch to storage ports on that switch
Set aside 2 ports on each switch that will never be used until it is time for further fabric expansion. As soon as you run out of ports (not counting the reserved ones), buy two new switches (one for each fabric) and ISL each one to every switch in the existing fabric. At this point you would want to look at migrating some of the lowest-performance hosts out to the new switch so you can keep your highest-performance requirements close to the "core".
I know this looks a bit further out than you are looking right now, but this is the time to consider it. When you get almost full on the new switches it will be harder to reconfigure if you need to. This midrange plan will start you on the path of a "dual-core core edge" fabric design which can probably meet your needs until you are ready to replace the two "core" switches with bigger directors. It is not as robust as having directors in the core, but it also is a fraction of the cost.
That's just what I would do though (and did do until someone coughed up money for DCX core replacements). I managed to grow the fabrics to 11 switches per fabric (2 core and 9 edge per fabric) with reasonable performance across the board. Keep in mind as well, if you have a high percentage of high performance hosts you may need to double the ISL numbers to keep the data flowing without congestion.
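Here is a small sketch of the expansion trigger in that plan (the 32-port switch size and the per-switch fill levels are my assumptions; substitute your real counts):

```python
# Reserve 2 ports per switch; once every switch is down to just its reserved
# ports, it's time to buy the next pair of edge switches (one per fabric).
SWITCH_PORTS = 32        # assumed switch size
RESERVED_PER_SWITCH = 2  # kept free for future fabric expansion

def time_to_expand(ports_in_use):
    """True once only the reserved ports are left on every switch in the fabric."""
    return all(used >= SWITCH_PORTS - RESERVED_PER_SWITCH
               for used in ports_in_use.values())

# Hypothetical fill levels for one fabric (old-DC switch + new-DC switch).
print(time_to_expand({"switch1": 30, "switch3": 27}))  # False - switch3 still has room
print(time_to_expand({"switch1": 30, "switch3": 30}))  # True - add the next edge switch
```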