
October 22nd, 2013 07:00

Adding switch to iSCSI network

Hi,

We have an iSCSI network for our EQL boxes and vSphere 5.1 hosts. The network is 192.168.0.0/24, we are using PC5548 switches, and we have three EQL groups. Everything is set up following Dell best practices, and everything is working kind of OK.

Since we are experiencing bad performance from our virtualized SQL servers, and troubleshooting has pointed to the switches as the suspected culprit, I want to try other switches to find out.

So I'm planning on adding a pair of Cisco switches in parallel to the PC5548s and connecting one host and one PS group to them (the one where the SQL VMs reside) to see the difference. My question is: can I use the same network, 192.168.0.0/24, even if the Cisco and Dell switches are not physically connected? Will vMotion work?

4 Operator • 1.8K Posts

October 28th, 2013 11:00

With 10 arrays you shouldn't use entry-level switches. The inter-switch trunk has to carry about 80% of the EQL bandwidth. You may want to consider Dell Force10 switches with low latency and large buffers.
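The 80% rule quoted above can be sketched as simple arithmetic. As an illustration only, assuming one active 1 GbE port per array counts toward the rule (actual PS-series members may present more active ports), 10 arrays work out to an 8-port 1 GbE trunk, which matches the "at least 8 ports" figure given elsewhere in this thread:

```python
import math

# Hedged sketch of the inter-switch trunk sizing rule quoted above:
# trunk capacity >= 80% of aggregate array bandwidth.
# The per-array port count is an illustrative assumption, not a Dell figure.

def trunk_ports_needed(arrays, active_ports_per_array=1, port_gbps=1, factor=0.8):
    """Number of trunk ports needed to carry `factor` of the
    aggregate array bandwidth, rounded up to whole ports."""
    aggregate_gbps = arrays * active_ports_per_array * port_gbps
    required_gbps = aggregate_gbps * factor
    return math.ceil(required_gbps / port_gbps)

print(trunk_ports_needed(10))  # 10 arrays -> 8 ports
```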

Regards,

Joerg

5 Practitioner • 274.2K Posts

October 28th, 2013 11:00

Overall, I still like the 3750G over the 5400. You will need to trunk at least 8 ports for that many arrays. That's a heavy load on those switches.

If possible, a newer, more capable switch should be used. I need to check the ECM, but I don't believe the 5400 is qualified for such a large environment.

5 Practitioner • 274.2K Posts

October 22nd, 2013 08:00

The PC5548 has known scaling issues, so replacing those is a good bet to improve things.

What kind of Cisco switches are you planning on using?  Not all Cisco switches are suitable for iSCSI SANs.

Re: network IP.    If you can completely isolate the environments, then yes.  L2 switches aren't concerned with IP addresses.

Has the entire ESX environment been checked for best practices?  

This PDF covers how to configure ESX to work properly with EQL iSCSI storage.

en.community.dell.com/.../20434601.aspx

110 Posts

October 23rd, 2013 03:00

Yes, as far as I know the hosts and switches have been properly configured: jumbo frames enabled, LRO and Delayed ACK disabled, the Dell MEM installed, etc.
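For reference, most of the host-side settings listed above can be checked or applied from the ESXi 5.1 shell roughly as below. The vSwitch and vmknic names are placeholders for this environment, not values from the thread; Delayed ACK on 5.1 is usually toggled in the iSCSI initiator's Advanced Settings in the vSphere Client rather than on the command line:

```shell
# Jumbo frames on the iSCSI vSwitch and VMkernel port (names are examples)
esxcli network vswitch standard set -v vSwitchISCSI -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Disable LRO for the VMkernel TCP/IP stack
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0

# Verify the MTU settings took effect
esxcli network ip interface list
```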

I realized I can't isolate the environments, since some VMs would then lose access to volumes on the other storage groups. So I'm wondering what happens if I add the other switch to the existing iSCSI environment: if I just connect whatever switch I have to the PC5548 and set those uplink ports to NOT portfast? Should I trunk, since I'm using VLAN 500 on the PC5548?

As for which switch to use, I have a Cisco 3750 and a PC5448. Which one do you recommend?

Thanks

5 Practitioner • 274.2K Posts

October 23rd, 2013 08:00

What model of 3750? There's a huge difference between the 3750G, E, and X models. It needs the most current firmware installed to resolve a flow-control issue.

Moving to a new switch environment live is very tricky: getting spanning tree, VLANs, etc. correct.

My personal experience is that shutting down and making the change is safest.  

If you decide to go live, do so carefully. Trunk the 3750s, put a test server on them, and test from there to all the devices on the SAN.

Then move a port over and verify that it still works. Step-by-step and verify connectivity each time.
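As a sketch of the switch-side settings implied above: a hypothetical 3750G configuration for carrying the iSCSI VLAN. The VLAN number (500) comes from this thread; the port numbers and everything else are illustrative assumptions and should be checked against the EqualLogic Configuration Guide:

```
! Hedged example for a Cisco 3750G on an iSCSI SAN.
! Global: enable jumbo frames (takes effect after a reload on the 3750).
system mtu jumbo 9000
!
! Inter-switch trunk toward the PC5548 -- no portfast here;
! let spanning tree converge normally on switch-to-switch links.
interface GigabitEthernet1/0/48
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 500
 flowcontrol receive desired
!
! Edge port for a host NIC or array port -- portfast is appropriate here.
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 500
 spanning-tree portfast
 flowcontrol receive desired
```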

The links below have more info on setting up the Cisco IOS switches.  Check out the Equallogic Configuration Guide.

 

EqualLogic

EqualLogic Compatibility Matrix

EqualLogic Configuration Guide

Rapid EqualLogic Configuration Portal

EqualLogic Best Practices Whitepapers

EqualLogic Best Practices ESX

 

 Regards,

110 Posts

October 28th, 2013 08:00

Hi,

The Cisco switches are 3750Gs. They are not listed in the compatibility matrix, so I will go forward with the PC5400 instead: even though they are discontinued, they are listed in the compatibility matrix.

5 Practitioner • 274.2K Posts

October 28th, 2013 08:00

RE: Cisco 3750G. That's a tough call. The 3750G is at least stackable and overall a better switch than the 55xx. The "G" is an older model with a slower stack than the E or X models, though it is still faster than trunking ports. The downside is that all packets are broadcast over the stacking bus.

The PC55xx was OK'd, but it was later found to have issues as you scaled up the load, which sounds like what you are experiencing now.

What's the total number of servers and arrays?

110 Posts

October 28th, 2013 11:00

I'm planning on switching to the PC5400, not the PC5500. The PC5500 is what I'm using now, and it is not listed in the compatibility matrix. The PC5400 is.

How is the PC5400 compared to Cisco 3750G then?

We have four hosts with multiple HBAs, and 10 arrays divided into three groups.
