VNX5400 iSCSI on a single subnet

November 30th, 2015 20:00

Hi All,

What are the potential issues if I am using a single VLAN for iSCSI traffic on a VNX5400 with VMware 5.1 hosts? I know it is not a recommended best practice, because all the initiators will be connecting to the same target ports, but there is nothing I can do to separate the iSCSI traffic into two VLANs at the moment.

Appreciate your input.

65 Posts

November 30th, 2015 22:00

Hello,

There would be a potential impact on performance and/or connection stability; you may face problems such as frequent iSCSI host path disconnects.

From https://support.emc.com/kb/41172:

VMware does not support more than one initiator or HBA per host on each subnet, and the Microsoft iSCSI Software Initiator's default configuration ignores additional cards on the same subnet. For the Microsoft iSCSI Initiator, you must select Advanced when logging on to each port and select both the Source and Destination IP addresses; do not select "Default" for any of the three choices.

Another potential issue is that, if only one subnet is used, the host will auto-discover and use all the available paths to the storage. This allows a larger number of simultaneous I/Os and can lead to the SP port queues becoming overloaded when large numbers of hosts are connected. In Fibre Channel this would be addressed by zoning to a limited number of SP ports and restricting the execution throttle (see KB 53727), but this can be harder to regulate in iSCSI.

And in https://support.emc.com/kb/71615:

Make sure only one IQN logs into one iSCSI port per storage processor. The best way to avoid duplicate logins is to use separate IP subnets (or VLANs) for the iSCSI ports on each SP, and likewise for the iSCSI HBAs in the hosts.

Keep in mind that the separate subnets / VLANs are per Storage Processor (a subnet shared between two ports, each on a separate SP, is a good configuration).
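
To illustrate the rule in that excerpt, here is a small Python sketch that checks a made-up session list for duplicate logins per SP. The IQNs, port names, and session table are placeholders; on a real array you would pull the login information from Unisphere or naviseccli rather than hard-coding it.

```python
# Sketch: flag any initiator IQN that is logged into more than one iSCSI port
# on the same storage processor. The session table below is made up; on a real
# array you would export it from Unisphere or naviseccli.
from collections import defaultdict

sessions = [
    ("iqn.1998-01.com.vmware:esx01", "SPA0"),
    ("iqn.1998-01.com.vmware:esx01", "SPA1"),  # duplicate login to SP A
    ("iqn.1998-01.com.vmware:esx01", "SPB1"),
    ("iqn.1998-01.com.vmware:esx02", "SPA0"),
    ("iqn.1998-01.com.vmware:esx02", "SPB1"),
]

logins = defaultdict(set)
for iqn, port in sessions:
    sp = port[:3]  # "SPA" or "SPB"
    logins[(iqn, sp)].add(port)

for (iqn, sp), ports in sorted(logins.items()):
    if len(ports) > 1:
        print(f"{iqn} logs into {sorted(ports)} on {sp} - "
              f"use separate subnets/VLANs so it only reaches one port per SP")
```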

If there is no way for you to set up the configuration as described in the two KB articles above, then until it is possible, I would consider setting up each host with only 2 paths to the VNX (1 to each SP). This would give you a stable configuration at the cost of path redundancy.
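
To make the queue fan-in concern concrete, here is a rough back-of-the-envelope sketch. Every number in it (port queue limit, LUN count, per-LUN queue depth) is an illustrative assumption, not a VNX specification; substitute the figures from your own environment and sizing guidance.

```python
# Back-of-the-envelope fan-in estimate for a single SP front-end port.
# Every number here is an illustrative assumption, not a VNX specification.

def worst_case_ios_per_port(hosts, sessions_per_host, luns_per_host, lun_queue_depth):
    """Upper bound on simultaneous I/Os that could land on one SP port."""
    return hosts * sessions_per_host * luns_per_host * lun_queue_depth

PORT_QUEUE_LIMIT = 1600  # assumed per-port queue limit, for illustration only

for hosts in (5, 10, 20):
    worst = worst_case_ios_per_port(
        hosts=hosts,
        sessions_per_host=2,   # single subnet: both host NICs log into this port
        luns_per_host=10,      # assumed number of LUNs presented to each host
        lun_queue_depth=32,    # assumed per-LUN queue depth on the host
    )
    status = "over" if worst > PORT_QUEUE_LIMIT else "within"
    print(f"{hosts:>2} hosts -> worst case {worst} outstanding I/Os "
          f"({status} the assumed {PORT_QUEUE_LIMIT}-entry port queue)")
```

The point is only that the worst case grows linearly with hosts and paths, which is why limiting each host to two paths (one per SP) keeps the configuration predictable.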

I hope this helps.

Good luck, and post back for any further questions.

Adham

8.6K Posts

December 1st, 2015 01:00

Another issue: uneven distribution of traffic across the interfaces, which could lead to lower performance.

December 1st, 2015 05:00

Thank you.

I wonder what number they are referring to when they say "when large numbers of hosts are connected" - 5, 10, 20?

Any idea? If I only have 5 or 10 hosts, is that considered a large number?

65 Posts

December 1st, 2015 23:00

I wouldn't worry about this number at the moment. In any case, it's not a specific number; there are many variables in your configuration that could determine it.

It's more a case of: the higher the number of hosts (with the incorrect configuration of one subnet across different FE ports on an SP), the more likely you are to run into problems.

4.5K Posts

December 2nd, 2015 07:00

For now you should use one NIC on the hosts, with one path to SPA and one path to SPB on the same subnet.

If you have two NICs, you could configure each on a different subnet rather than using VLANs to segregate the NICs to different ports on each SP.

When using iSCSI you'll want to monitor the switches you're using. iSCSI requires enterprise-class switches, so you should have good reporting available on the switches to monitor for congestion (the worst issue). Make sure your switches can handle the bandwidth (wire speed of the backplane).

On the switches enable flow control (pause frames) for best performance.

On the hosts, remember to disable TCP Delayed ACK - see KB article 71615, which lists all the related articles for iSCSI. There are a couple for Windows/ESX/Linux hosts:

On the Windows or VMware ESX host server, check whether Delayed ACK is disabled. To change the Delayed ACK settings for the following (this may also apply to Linux hosts; a small Windows registry sketch follows the list):

a. VMware ESX host server, see emc191777 - Why is ESX performance slow when using iSCSI?
b. Windows, see emc150702 - Recommended TCP/IP settings for Microsoft iSCSI configurations to fix slow performance
c. For IBM AIX, please see the following link:
http://www-01.ibm.com/support/knowledgecenter/SS3JRN_7.2.0/com.ibm.itm.doc/ITM62_Deploy94.htm%23wq110
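
For the Windows case, the setting behind those articles is the per-interface TcpAckFrequency registry value. Below is a minimal sketch; the interface GUID is a placeholder you have to look up yourself (the script lists the candidates), a reboot is normally required, and the procedure in the referenced KB articles takes precedence over this sketch.

```python
# Sketch: disable TCP Delayed ACK on a Windows iSCSI NIC by setting
# TcpAckFrequency = 1 on that NIC's TCP/IP interface key (reboot required).
# The interface GUID is a placeholder; follow the referenced KB articles for
# the authoritative procedure.
import winreg

INTERFACES = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

# List the interface GUIDs so the iSCSI NIC can be identified (match by its IP).
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, INTERFACES) as root:
    for i in range(winreg.QueryInfoKey(root)[0]):
        print(winreg.EnumKey(root, i))

ISCSI_NIC_GUID = "{00000000-0000-0000-0000-000000000000}"  # placeholder - replace

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    INTERFACES + "\\" + ISCSI_NIC_GUID,
                    0, winreg.KEY_SET_VALUE) as key:
    # 1 = acknowledge every segment immediately, i.e. Delayed ACK effectively off
    winreg.SetValueEx(key, "TcpAckFrequency", 0, winreg.REG_DWORD, 1)
```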


glen

December 3rd, 2015 07:00

And if the hosts have two iSCSI NICs that are both already configured on a single iSCSI subnet (because they are connected to a different storage array), and the hosts do not have any additional NICs, then am I screwed with this config?

I have an existing ESXi 5.1 environment consisting of 8 hosts attached to an EqualLogic array. Each host has two NICs - NIC1 and NIC2 - configured for iSCSI. There is only a single VLAN - VLAN_A - on a single subnet configured on the switches for iSCSI traffic. All the host NICs and EqualLogic iSCSI ports are connected to that single iSCSI VLAN_A.


I have to somehow stick a VNX into this, following the best practices, so that the data can be migrated from the EqualLogic with minimal impact to production. As the end result I want to have the following:

  • two separate iSCSI VLANs on separate subnets: VLAN_A and VLAN_B
  • each host will be connected to both VLANs: NIC1 on VLAN_A and NIC2 on VLAN_B
  • storage ports SPA0 and SPB1 will be connected to VLAN_A, and storage ports SPA1 and SPB0 will be connected to VLAN_B (see the sketch after this list)
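
Here is the sketch of that target layout in Python. The subnets and addresses are made-up placeholders; the point is only the mapping of NICs and SP ports to the two VLANs, plus checks that each member sits on its VLAN's subnet and that the subnets do not overlap.

```python
# Sketch of the intended layout. Subnets and addresses are placeholders;
# only the mapping of NICs / SP ports to the two VLANs is the point.
import ipaddress

layout = {
    "VLAN_A": {
        "subnet": ipaddress.ip_network("192.168.10.0/24"),  # placeholder
        "members": {"NIC1": "192.168.10.11",
                    "SPA0": "192.168.10.50",
                    "SPB1": "192.168.10.51"},
    },
    "VLAN_B": {
        "subnet": ipaddress.ip_network("192.168.20.0/24"),  # placeholder
        "members": {"NIC2": "192.168.20.11",
                    "SPA1": "192.168.20.50",
                    "SPB0": "192.168.20.51"},
    },
}

# The two subnets must not overlap, and every member must sit on its subnet.
subnet_a = layout["VLAN_A"]["subnet"]
subnet_b = layout["VLAN_B"]["subnet"]
assert not subnet_a.overlaps(subnet_b), "iSCSI subnets overlap"
for vlan, cfg in layout.items():
    for name, ip in cfg["members"].items():
        assert ipaddress.ip_address(ip) in cfg["subnet"], f"{name} is not on {vlan}"
print("Layout is consistent: one host NIC and one port from each SP per VLAN.")
```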


Seems easy, doesn’t it?

I see two ways of achieving this:

Option 1.

Configure one more VLAN - VLAN_B - for iSCSI on a separate subnet. Connect the SPA1 and SPB0 ports to it.

Connect only two ports from the VNX - SPA0 and SPB1 - to the single existing iSCSI VLAN_A, and leave the other SP ports unplugged. In this case, both server initiators will log into the same ports, which EMC best practice says to avoid, but I have no way around it, since both NICs are connected to the EqualLogic. What issues can potentially occur if I do this?

Perform data migration from EqualLogic to VNX using storage vMotion.

After the data migration is complete, leave host iSCSI NIC1 connected to VLAN_A, disconnect host iSCSI NIC2 from VLAN_A, configure NIC2 with an IP address on VLAN_B's subnet, and move it to VLAN_B. Then rescan the storage on the hosts; they will log in to the array from NIC2, and I am all set. Are there any potential issues here, such as storage connectivity disruption?

Option 2.

Configure one more VLAN - VLAN_B - for iSCSI on a separate subnet. Connect SPA1 and SPB0 to it.

Connect only two ports from the VNX - SPA0 and SPB1 - to the single existing iSCSI VLAN_A, and leave the other SP ports unplugged. In this case, both server initiators will log into the same storage ports, which EMC best practice says to avoid, but I have no way around it, since both NICs are connected to the EqualLogic.

If the servers have an available NIC, configure this unused server NIC - NIC3 - on each host with an IP address for VLAN_B and connect it to VLAN_B. Rescan the storage on the hosts; they will log in to the array from three NICs: two on VLAN_A and one on VLAN_B. What issues can potentially occur if I do this?

Perform data migration from EqualLogic to VNX using storage vMotion.

Leave host iSCSI NIC1 connected to VLAN_A and host iSCSI NIC3 connected to VLAN_B, disconnect host iSCSI NIC2 from VLAN_A, and decommission or repurpose NIC2. Rescan the storage on the hosts to make sure that they only have 4 paths to the VNX.
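
For the "make sure they only have 4 paths" step, something like the following pyVmomi sketch could rescan the HBAs and print the path count per device on every host. The vCenter address and credentials are placeholders, and this is a starting point to adapt rather than a tested script; you could also just check the paths in the vSphere Client.

```python
# Sketch: rescan the HBAs on every host and print the path count per device,
# to confirm each VNX device ends up with the expected 4 paths.
# The vCenter address and credentials are placeholders; adapt before use.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        host.configManager.storageSystem.RescanAllHba()
        for lun in host.config.storageDevice.multipathInfo.lun:
            # Filter on the VNX devices (e.g. by their naa. identifier) if needed.
            print(f"{host.name}: {lun.id} has {len(lun.path)} path(s)")
finally:
    Disconnect(si)
```

After the move, each VNX device should show 4 paths per host: the VLAN_A NIC to SPA0 and SPB1, and the VLAN_B NIC to SPA1 and SPB0.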

 

It all seems logical to me; however, I have not done these types of manipulations with iSCSI before. Do you think either of them makes sense? Which of the two options is the lesser evil: less disruptive, with less impact on performance, and less likely to cause unexpected behavior? What do you think the pros and cons are for each of them?

Are there other ways to achieve this? Please share.

4.5K Posts

December 7th, 2015 11:00

Can the EqualLogic use just one NIC on each host? Does it require two NICs?

The VNX can use one host NIC - with just the SPA0/SPB0 ports connected.

glen

December 7th, 2015 16:00

That might be an idea; I had not considered it. I am sure I can run the EqualLogic connected to the hosts through only one host NIC on VLAN_A. However, the EqualLogic is currently a production array, so I am not sure what kind of impact there will be on production VMs and on the data migration if I disconnect one NIC on each host from the EqualLogic; it will take time to move everything to the VNX.

4.5K Posts

December 9th, 2015 10:00

I'm not a switch expert, but could you create a new VLAN using port numbers? For example, take the switch port from one NIC and link it to the two SP ports (SPA and SPB) in a VLAN?

On the hosts, isn't there a way to configure a vSwitch that connects to the VNX SP ports using just one NIC?

glen
