January 29th, 2013 08:00

Designing Network for my new Equallogic SAN

I have received shipment of my new PS6500ES (YAY!) along with 2 PowerConnect 6224 switches. I am not a networking guy.

One thing that I don't understand is how I will connect to the 2 switches and the EqualLogic array itself in order to manage them.

My iSCSI traffic will all be addressed as such:

  • IP Addresses: 192.168.0.1-192.168.0.62
  • Subnet Mask: 255.255.255.192 (/26)
  • Default Gateway: 192.168.0.1

However, my production IP network is more like:

  • Example IP Address: 10.100.114.X
  • Subnet Mask: 255.255.252.0
  • Default Gateway: 10.100.115.254

I know some networking stuff, but evidently not enough to wrap my head around how I will connect both of my PowerConnect switches (no management port) and the EqualLogic SAN (which has 4 ports on the active controller, none of them labeled management) to my production IP network. I need to connect to them for management and let them use SNMP, SMTP, etc., while at the same time preventing iSCSI traffic from going out onto my network.

Here's how I'm planning on setting up my switch ports:

  • Ports 1-4 on both switches: EQL SAN ports
  • Ports 5-20 on both switches: iSCSI initiator connections (5 VMware ESXi 5 hosts and one Windows Server 2008R2 server)
  • Ports 22 and 24 on both switches: LAG Uplink (is two enough?)
  • Port 23 on both switches: connect this to a switch on my production network so that I can access the web interfaces of the 2 switches and the EqualLogic SAN for management.
I should add that the SAN will eventually be replicating over my production IP network (and over a WAN link).
Will this work? Will I need to use VLANs?

7 Technologist • 729 Posts

January 29th, 2013 08:00

Please review the information at this link first:

en.community.dell.com/.../2632.storage-infrastructure-and-solutions-team-publications.aspx

At the top of the page are some additional links:

  • EqualLogic Configuration Guide (current: v13.4) <- Summary view of everything associated with setting up the SAN network
  • EqualLogic Compatibility Matrix
  • Rapid EqualLogic Configuration Portal <- Good place to start!
  • Switch Configuration Guides

You should get 90% of the information you need from here.

On the switch setup, you will need to ensure that an iSCSI VLAN is created; it will need to be large enough to host all the iSCSI interfaces from both the hosts (your ESX host iSCSI vmnics) and the array.
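
For example, on the 6224 that could look something like this (a minimal sketch; VLAN 100 and the port range are placeholders based on the planned layout above, so adjust to your design):

vlan database
vlan 100
exit
config
interface range ethernet 1/g1-1/g20
switchport access vlan 100
exit
exit

[ports 1-20 = array + initiator ports from the plan above; VLAN 100 is an assumed ID]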

Also, you should consider using the 2 switches as dedicated switches for just iSCSI and/or ESX inter-communication traffic (create a separate VLAN for each). Then use a separate switch to connect the LAN interfaces of your ESX hosts to the production network.

-joe

9 Posts

January 29th, 2013 09:00

Yes, my ESX host's production network links are on different switches. Thanks for the links!

7 Technologist • 729 Posts

January 29th, 2013 09:00

Regarding the LAGs…

We do recommend using the stacking cables, but if you need to use a LAG, I would go with at least 4 ports to start.  You should also spread them across the switch's ASICs.  I believe the split is between ports 1-12 and 13-24 (but double-check this); for example, put two of the LAG members on ports 1/2 (or 11/12) and two on 13/14 (or 23/24).
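
As a sketch (channel-group 2 and the exact ports are placeholders; confirm the ASIC boundary first), spreading a 4-port LAG with two members on each side of the 1-12 / 13-24 split could look like:

config
interface range ethernet 1/g11-1/g12
channel-group 2 mode auto
exit
interface range ethernet 1/g13-1/g14
channel-group 2 mode auto
exit
exit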

-joe

203 Posts

January 29th, 2013 12:00

I'm going to throw in a few shameless plugs that happen to be my posts, but I'd be passing them on even if they weren't.  You should find them helpful.

Reworking my PowerConnect 6200 switches for my iSCSI SAN

vmpete.com/.../reworking-my-powerconnect-6200-switches-for-my-iscsi-san

Replication with an EqualLogic SAN; Part 1

vmpete.com/.../replication-with-an-equallogic-san-part-1

9.3K Posts

January 30th, 2013 09:00

Your info at the first link isn't correct. If a port is to be part of a port-channel, it should only have the port-channel membership configuration on it, and no VLAN settings (those go on the port-channel itself).

Your Step 7 ("Configure/assign Port 1 as part of the management channel-group") puts the VLAN on the member ports:
interface ethernet 1/g1
switchport access vlan 10
channel-group 1 mode auto
exit
interface ethernet 2/g1
switchport access vlan 10
channel-group 1 mode auto
exit
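
Corrected per the rule above, the member ports keep only the channel-group command and the VLAN membership moves onto the port-channel (a sketch reusing VLAN 10 and channel-group 1 from the quoted config):

interface ethernet 1/g1
channel-group 1 mode auto
exit
interface ethernet 2/g1
channel-group 1 mode auto
exit
interface port-channel 1
switchport access vlan 10
exit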

As for stacking vs. using a LAG: the SAN can do a firmware upgrade with minimal to no downtime, and cluster nodes (VMware or Microsoft) can be updated without downtime for the cluster itself, but in a stack, a switch firmware upgrade requires 100% downtime for everything using SAN storage.

By changing the stacking ports to ethernet mode and using them as a 20Gbit LAG (in the case of the 6200-series), you can update 1 switch and reboot while the other switch keeps everything operational. Once the first switch is back up and running you can then update the second switch.

In this case I would also change the port-channel to be a trunk port instead of an access port and allow vlan 10 and 100 (from your article/link) on this trunk.

Commands:

config
stack
stack-port 1/xg1 ethernet
stack-port 1/xg2 ethernet
stack-port 2/xg1 ethernet
stack-port 2/xg2 ethernet
exit
exit

[power cycle the switches (not just reload/reboot)]

config
interface range ethernet 1/xg1-1/xg2
channel-group 1 mode auto
exit
interface port-channel 1
switchport mode trunk
switchport trunk allowed vlan add 10,100
exit
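
Afterwards you can sanity-check the result; these 6200-series show commands (double-check against your firmware's CLI guide) should list the port-channel with both 10G members and the trunk carrying VLANs 10 and 100:

show interfaces port-channel 1
show vlan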

24 Posts

January 31st, 2013 12:00

This is very interesting.  Can someone explain why "stacking" is better than the ethernet LAG on the xg ports as described above?  

We use stacking (dual 6224s) and can't upgrade without downtime, as mentioned above.  It would be great to avoid that, but what do we give up in return?

24 Posts

January 31st, 2013 13:00

Thanks.  I did read your document, by the way.  I guess I didn't read it closely enough.

203 Posts

January 31st, 2013 13:00

Tony Ansley from Dell has presented on this topic for the past couple of years at the Dell Storage Forum, and he should know, as he has written several of the reference architectures.  As I mentioned in my blog post vmpete.com/.../reworking-my-powerconnect-6200-switches-for-my-iscsi-san, you give up the reliability/stability of stacking, because you suddenly depend on trunking via a LAG between switches, where the standards are a moving target.  The LAG also doesn't scale well if you plan to introduce more arrays.  Don't get me wrong, it drives me nuts that firmware updates on a stack of these switches require downtime.  But I have used both configurations, and the stacking arrangement, when configured correctly, will pay off in the long run.

The PowerConnect and EqualLogic teams would both agree that there is a greater potential for problems when LAGing.  When it comes to the storage back end, it's worth going with the most bomb-proof arrangement.

203 Posts

January 31st, 2013 13:00

No doubt a good question, though.  It has been quite a spirited discussion over the past few years.  As inter-array communication continues to increase, the interconnect between the two switches is more important than ever.

5 Practitioner • 274.2K Posts

January 31st, 2013 14:00

It's not just inter-array traffic that will use the switch ISL.  A server connected to switch B might need to access a member port on switch A, especially when you use HIT or MEM, since each NIC creates an iSCSI session to every member holding data for that volume.
