March 8th, 2018 05:00

4210 Moving from 1GBit Copper to 10GBit SFP+ Fiber

We have a PS4210 used for Hyper-V that is currently connected through two old Dell PowerConnect 5424 switches (which aren't supported for iSCSI use with the EQL).

We are moving to an HPE 3810M with SFP+ ports. I have already ordered four Dell transceivers for the EQL side; I hope the 407-BBOU are the correct ones. On the HPE side we use HPE X132 (LC SR) transceivers.

We already use the dedicated management ports for management.

What do I have to do when moving from GBit to 10GBit?

Just shut down all I/O traffic, unplug the GBit cables, and plug in the transceivers?

As far as I can see in Group Manager, there are no eth interfaces other than the SFP+ ones, so I don't have to reconfigure IPs?

Is there anything different to configure on the switch side for 10GBit?

Can I still use 1GBit from the Hyper-V hosts to the switches for SAN traffic until I replace the hosts?

 


March 8th, 2018 07:00

Hello,

re: Transceiver. I always try to make sure the transceiver is compatible with the switch, not the array.

Re: The 10GbE ports and the GbE ports share the same IP, so there's nothing to reconfigure, but only one or the other should be connected at any one time. Shut down the servers, change all the cables, and do ping tests to confirm everything is OK before starting the servers back up.
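A minimal sketch of that ping pre-flight check after recabling. The IP addresses here are hypothetical placeholders; substitute the group IP and member eth port IPs shown in Group Manager (the `-c` flag assumes a Linux-style `ping`).

```python
import subprocess

# Hypothetical iSCSI port addresses -- substitute the group IP and the
# member eth port IPs shown in Group Manager.
ISCSI_IPS = ["10.0.10.10", "10.0.10.11", "10.0.10.12"]

def ping_cmd(ip: str, count: int = 2) -> list:
    """Build the ping command line (Linux-style -c count flag)."""
    return ["ping", "-c", str(count), ip]

def ping(ip: str) -> bool:
    """Return True if the address answers ICMP echo requests."""
    result = subprocess.run(ping_cmd(ip),
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

if __name__ == "__main__":
    for ip in ISCSI_IPS:
        print(f"{ip}: {'OK' if ping(ip) else 'UNREACHABLE'}")
```

Only once every port reports OK would you power the servers back on.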

 I don't believe that switch was tested by Dell, so I can't really advise you on it.  You might want to check with HP for their iSCSI recommendations.  Sometimes there are buffer or QoS settings that can enhance iSCSI performance.

GbE to 10GbE is never ideal. That's a very serious mismatch. On reads it's possible for the array to overload a switch port's buffer.

 Regards,

Don


March 9th, 2018 01:00

I always thought it was only an issue the other way around (10GbE on the host and GbE on the SAN)?

If it is that serious, I'd better get the new servers ready before making the move, so I match the speed on both sides.



March 9th, 2018 03:00

Hello,

It's more serious to have servers at 10GbE and storage at GbE, since the server streams combine. But the underlying issue is the same: a rate mismatch that the switch will never be able to properly buffer. It's just less likely to happen with 10GbE storage and GbE servers.
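A back-of-envelope illustration of why the mismatch hurts: when a 10GbE sender bursts toward a 1GbE receiver, the switch must buffer the difference between the two rates, and even a generous per-port buffer fills in well under a millisecond. The 512 KB buffer size below is an assumed example figure, not the spec of any particular switch.

```python
# Rate-mismatch arithmetic: a 10 Gb/s sender feeding a 1 Gb/s link.
IN_RATE_BPS = 10e9          # array side: 10 Gb/s
OUT_RATE_BPS = 1e9          # host side: 1 Gb/s
BUFFER_BYTES = 512 * 1024   # assumed per-port switch buffer (example only)

surplus_bps = IN_RATE_BPS - OUT_RATE_BPS        # 9 Gb/s must be absorbed
fill_time_s = (BUFFER_BYTES * 8) / surplus_bps  # seconds until the buffer overflows

print(f"Surplus rate to buffer: {surplus_bps / 1e9:.0f} Gb/s")
print(f"Buffer fills in ~{fill_time_s * 1e6:.0f} microseconds")
```

After that the switch can only drop frames or (with flow control) pause the sender, which is why sustained reads are the risky direction.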

If you monitor your current I/O pattern, that will help you decide how fast you need to move, i.e. by looking at the SANHQ I/O history.
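A sketch of the decision that history supports: compare observed peak throughput against what the GbE host links can actually carry. The peak samples and NIC count below are hypothetical; in practice you would read the peaks from SANHQ.

```python
# Compare hypothetical SANHQ peak-throughput samples against the
# theoretical capacity of the hosts' GbE iSCSI links.
GBE_LINE_RATE_MBPS = 1e9 / 8 / 1e6   # ~125 MB/s theoretical per 1GbE link
HOST_NICS = 2                        # assumed: two iSCSI NICs per host (MPIO)

peak_samples_mbps = [95.0, 140.0, 260.0]  # hypothetical SANHQ peaks, MB/s

capacity = GBE_LINE_RATE_MBPS * HOST_NICS
for peak in peak_samples_mbps:
    verdict = "fits" if peak <= capacity else "exceeds GbE capacity"
    print(f"peak {peak:6.1f} MB/s vs {capacity:.0f} MB/s -> {verdict}")
```

If the real peaks regularly exceed the GbE capacity, that argues for upgrading the hosts sooner rather than later.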

It's something to be aware of, and again, some switches allow you to modify the buffering so that you can mitigate some of it. For example, years ago older HP switches split the buffers into four queues; iSCSI could only access one, which meant only 1/4 of the available buffer space. HP had a CLI command to align them all into one queue. Other switches allow a port to borrow unused cache from other ports.

 Regards,

Don
