January 5th, 2012 10:00

New PS6100XS implementation - network configuration

Hi All,

I'm very excited to be setting up a PS6100XS along with a pair of 24 port (HP 2910al) switches and 3 servers as VM hosts that each have 4 NICs. We'll be running ESXi v5. Compared with our old SAN, an MSA2012i, this setup is going to just scream.

I'm an IT generalist, so I know the basic concepts behind VLANs, STP, trunking, etc., but have no idea what the best way to go about configuring the network would be. My main goals are resiliency and performance (in that order).

  1. I've heard that STP is required for EqualLogic SANs. I'm assuming that's true, but if it's not and there's some other way to achieve protection against a switch failing, I'm all ears.
  2. There seem to be many different STP implementations, but RSTP seems to be widely favored. Is that the best way to go for us?
  3. The switches can be stacked, so that there is only one IP address for management. I'm not clear on whether this is compatible with STP or not. If it is, any thoughts on the benefits/drawbacks to this?
  4. Any special considerations around implementing MEM with STP?
  5. How many ports would you recommend trunking between the two switches? 
  6. How exactly should I be allocating ports? I'm thinking ports 1-2 on controller 1 of the SAN would go to switch 1 and ports 3-4 on controller 1 would go to switch 2. For controller 2 this would be flip-flopped. I'm not clear on what to do with the management ports - I assume both controllers' management ports go to switch 1, since switch 2 only comes alive if switch 1 dies, right? 
  7. On the servers, same question, but fewer ports. I assume ports 1-2 on each server go to switch 1 and ports 3-4 go to switch 2.
  8. If I do it this way, do the ports on the server work together so that if there's heavy SAN traffic it can use both ports?
Sorry for all the (probably dumb) questions - I appreciate any feedback. 
Thanks,
-Jake

4 Operator • 9.3K Posts • January 5th, 2012 10:00

4 NICs isn't enough for this type of setup (unless 2 or more of them are 10Gbit, or your count of 4 doesn't include separate iSCSI HBAs or NICs).

NIC requirements:

- 2 dedicated Gbit NICs for iSCSI

- 1 (or 2) dedicated Gbit NICs for vMotion

- 2 or more NICs for VM Network access

You can put vMotion across the LAN connection, but that could significantly impact LAN access performance for your VMs when you're doing a vMotion migration.

I usually recommend at least 6 NICs to start with, and if you want to use 4 NICs for iSCSI (as the PS6100-series has 4 active iSCSI ports), you'd want to start with 8 NICs.
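
For what it's worth, here's a rough sketch of how a 6-NIC host could be carved up from the ESXi 5 command line. The vSwitch names and vmnic numbers below are just placeholder examples, not a definitive layout:

    # dedicated iSCSI vSwitch with two uplinks
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

    # separate vSwitch for vMotion on its own NIC
    esxcli network vswitch standard add --vswitch-name=vSwitch2
    esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch2

vSwitch0 would keep the remaining NICs (vmnic0/vmnic1) for Management and the VM Network.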

5 Practitioner • 274.2K Posts • January 5th, 2012 10:00

Re: RSTP - Rapid Spanning Tree Protocol tells the switch to bring up the port very quickly after a state change.  Since the array doesn't support trunking/bonding, etc., there's no need for the switch to check first to see whether it's connected to another switch or an end node.  This is important when you have a controller (CM) failover; if the ports don't come right back up, you can lose connections to your servers.

So the other questions regarding STP/RSTP don't apply.  MEM and Mgmt have nothing special to do with STP.
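
On an HP ProCurve like the 2910al, enabling rapid spanning tree and flagging the array/host-facing ports as edge ports would look something along these lines (the port range is only an example - check the docs for your firmware):

    spanning-tree
    spanning-tree force-version rstp-operation
    spanning-tree 1-16 admin-edge-port

The admin-edge-port setting is what lets the ports connected to the array and hosts start forwarding immediately when the link comes up.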

Stacking is preferred over trunking.  It's more efficient and typically much faster.  

There are documents on the EQL website that show how to properly cable the array.  But yes, you want to alternate the array ports between the two switches.
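
If you do end up trunking the two switches together rather than stacking them, an LACP trunk across a couple of ports on each switch is a reasonable starting point; on the 2910al that would be something like this (port numbers are just an example, and you'd size the trunk to your expected inter-switch traffic):

    trunk 23-24 trk1 lacp

The same command goes on both switches, and the links between them then behave as one logical connection, so spanning tree doesn't block either port.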

On the hosts you need to set up MPIO to get the proper balance across the iSCSI NICs you are using.
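
On ESXi 5 with the software initiator, the port-binding part of that looks roughly like this from the CLI (the vmhba/vmk numbers, group IP and naa ID are placeholders, and the MEM bundle's setup script can handle most of this for you):

    esxcli iscsi software set --enabled=true
    # bind one vmkernel port per physical iSCSI NIC to the software initiator
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    # point discovery at the group IP
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.10
    # round-robin the paths for a given volume (MEM installs its own policy instead)
    esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR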

Regards,

61 Posts • January 5th, 2012 11:00

We actually aren't licensed for vMotion, but that's still good to know. If/when we upgrade to Essentials Plus (which has vMotion), I'll budget for additional NICs. That said, I'm confused: I thought it was possible to configure the NICs to trunk together so they didn't need to be dedicated to one type of traffic or another, and I also thought that with STP you essentially lose the benefit of the NICs plugged into the backup switch.

5 Practitioner • 274.2K Posts • January 5th, 2012 11:00

DevMgr is correct: you need more ports to compensate in this type of configuration, but customers with at least 8x GbE NICs have done this without a problem.  Carefully setting up the switches is where this usually falls down.  For the cost of another dual-port or quad-port GbE card, dedicated NICs are typically a much better solution.   Much also depends on the kind of workloads we're talking about - heavy-I/O applications like Exchange/SQL versus more CPU-bound hosting applications.

5 Practitioner • 274.2K Posts • January 5th, 2012 11:00

It is possible to do what you are suggesting.  It would involve trunking ports and creating VLANs, which is not a simple task.

However, even then 4x NICs is cutting it close.  6 to 8 would still be recommended.   You don't want iSCSI, which is very bursty, taking the lion's share of your LAN-side bandwidth, for example.
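
Just to illustrate what's involved on the host side, the shared-NIC approach means VLAN-tagged port groups on one vSwitch, something like this (the VLAN IDs and names are made up for illustration, and the physical switch ports then have to carry both VLANs tagged):

    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI1 --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI1 --vlan-id=100
    esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=LAN --vlan-id=10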

4 Operator • 9.3K Posts • January 5th, 2012 11:00

iSCSI traffic should be isolated from LAN traffic for performance and security reasons. The easiest way to do this is with dedicated switches (not connected to your LAN other than maybe a management port) and dedicated NICs. You can also use VLANs, but if you're sharing Gbit NICs, even with VLANs your performance will most likely suffer (on both the iSCSI and LAN sides).
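
If you do go the VLAN route on the 2910als, the switch-side isolation is just separate VLANs for iSCSI and LAN traffic, along these lines (VLAN IDs and port ranges are only examples):

    vlan 100
       name "iSCSI"
       untagged 1-12
       exit
    vlan 10
       name "LAN"
       untagged 13-20
       exit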
