sketchy00
203 Posts
0
January 9th, 2014 20:00
This has been debated in many posts over the years. Assuming all host interfaces and all array interfaces are properly meshed, once you look at how MPIO works it becomes pretty apparent that for smooth operation, those switches will need an interconnect of some sort. Now add to that the cross-array communication that occurs when you have a pool with more than one member, and this interconnect becomes even more important. ...In fact, this interconnect has become more important than ever.
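To make that concrete, here is a minimal sketch (hypothetical NIC and port names, not tied to any particular array) that enumerates the initiator-to-target paths MPIO will establish when host NICs and array ports are split across two switches. Roughly half the paths land on different switches at each end, and all of that traffic has to traverse the interconnect:

```python
from itertools import product

# Hypothetical layout: each host NIC and array port is pinned to one of two switches.
host_nics = {"vmnic2": "switchA", "vmnic3": "switchB"}
array_ports = {"eth0": "switchA", "eth1": "switchB",
               "eth2": "switchA", "eth3": "switchB"}

# MPIO establishes a session over every initiator/target port pair it can reach.
paths = list(product(host_nics, array_ports))
crossing = [(nic, port) for nic, port in paths
            if host_nics[nic] != array_ports[port]]

print(f"total paths: {len(paths)}, paths crossing switches: {len(crossing)}")
# total paths: 8, paths crossing switches: 4
# Without an interconnect, those 4 sessions have no layer-2 path at all,
# which shows up as the randomly dropped iSCSI connections described below.
```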
Even though stacking has one major disadvantage compared to LAGing, I still prefer stacking (more details at: http://vmpete.com/2011/06/26/reworking-my-powerconnect-6200-switches-for-my-iscsi-san/ )
So what happens if you say, "Nope, I'm not going to create an interconnect. Onward!"? Well, typically you'll get randomly dropped iSCSI connections from the host (whether via the hypervisor or in-guest initiators), among other things. How do I know? ...I've had an interconnect fail on me, and the results were exactly that. (described here: http://vmpete.com/2012/10/06/diagnosing-a-failed-iscsi-switch-interconnect-in-a-vsphere-environment/ )
So stick with the best practices, and stack them. Most importantly, make sure the configurations are rock solid, and they will be very stable switches.
- Pete
richteR13
1 Message
0
January 11th, 2014 16:00
Hi there
In very high-level terms, via stacking you gain the ability to provision switch-dependent LAG/LACP port channels towards your hosts/servers.
What you lose is a "no lights out" maintenance window. I.e., with split/stand-alone switches you can perform upgrades/maintenance on them one at a time; when they are stacked, you lose the ability to do "hitless" maintenance. Since the switches in a stack present a single active control plane, an event such as a firmware upgrade incurs a necessary reboot into the new revision, and both of them will go down when that happens.
If, however, you have switches that offer Multi-Chassis LAG/vPC/VLT, then you can enjoy both benefits, i.e. build downstream LAGs and still have two active, distinct control planes. The trade-off is cost, as this feature set is available on higher-end models that cost significantly more - whether Cisco Nexus, Force10 S4810 & Z9000, Brocade VCS, Extreme, etc.
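As a rough illustration of the maintenance trade-off described above, here is a sketch (a simplified model with my own labels, not any vendor's terminology) of what happens to the fabric during a firmware upgrade under each design:

```python
# Simplified model of fabric availability during a firmware upgrade.
# "stack" = one shared control plane; "standalone"/"mc-lag" = one per switch.

def upgrade_outage(design: str) -> str:
    if design == "stack":
        # Single active control plane: the reboot into the new firmware
        # takes every member down at once -> full fabric outage.
        return "both switches reboot together: fabric down during upgrade"
    elif design in ("standalone", "mc-lag"):
        # Two distinct control planes: upgrade one switch at a time while
        # MPIO (or the LAG hash, for MC-LAG) keeps traffic on the survivor.
        return "one switch at a time: fabric stays up, at half capacity"
    raise ValueError(f"unknown design: {design}")

for design in ("stack", "standalone", "mc-lag"):
    print(f"{design:>10}: {upgrade_outage(design)}")
```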
In my experience, it boils down to the individual customer's case and priorities. I have rolled out plenty of both versions for different customers, and I would not say one is more valid than the other - it comes down to the requirements and constraints of your own unique case. Just ensure you have provided redundancy at the port level, controller level, and onwards - switch and link level - which either of these can achieve.
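In the same spirit, a quick sketch (hypothetical inventory format) of that "redundancy at every level" check - each layer in the path needs at least two independent members, whichever switch design you pick:

```python
# Hypothetical inventory: count of independent members at each layer of the path.
inventory = {
    "host NIC ports":             2,  # one per fabric/switch
    "array controllers":          2,  # active + standby
    "switches":                   2,  # stacked pair, or two standalone/MC-LAG peers
    "uplink/interconnect links":  2,  # a LAG of at least two members
}

single_points = [layer for layer, count in inventory.items() if count < 2]
if single_points:
    print("single point(s) of failure:", ", ".join(single_points))
else:
    print("redundant at every level - either switch design can deliver this")
```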
cheers
Martin2341
26 Posts
0
January 16th, 2014 13:00
Hello,
I agree with richteR13 in that it always boils down to individual customer cases and priorities, as long as redundancy is provided at all the various levels. Your point that higher-end models offering downstream LAGs as well as two active control planes come at a steeper cost is also an unfortunate truth. As with such things, balancing cost and benefit is the tough part. Forrester created a Total Economic Impact report for Brocade VCS Fabrics which might be helpful, though it only covers Brocade VCS and not any of the others you mentioned. The direct link to the report is below:
http://www.brocade.com/downloads/documents/white_papers/tei-of-brocade-etherfabric-wp.pdf