Unsolved

1 Rookie

 • 

30 Posts

September 2nd, 2014 07:00

VMware - iSCSI - Port Binding Question - Multi-Subnet

Hey,

I've been thinking about this recently and would like input from those with similar thoughts, experience, and opinions. This concerns KB 2038869 (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869), which explains the recommended and revised iSCSI multipathing design and standards.

I have worked across most storage platforms using different fabrics. Regarding the recommendation that VMware iSCSI port binding be used with only one subnet: I may be wrong, but I believe this is contextual and only applies if true routing is taking place on the storage network, which is not recommended anyway.

I have implemented many storage solutions where a storage system has two SPs, with two subnets used between them for the target networks, e.g. SP1 p1 192.168.1.0/24, p2 192.168.5.0/24; SP2 p1 192.168.1.0/24, p2 192.168.5.0/24. On the ESXi hosts I've configured two separate iSCSI vSwitches, each with one vmnic on the respective target subnet, and dual redundancy from the hosts to the switches (redundant switching configs, etc.) to the storage throughout the design. I have used port binding with these multiple subnets easily, with the expected four paths per device in active/active storage mode. Path failover and failback work perfectly, with redundancy at all layers.
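
For reference, here's a minimal sketch of that binding from the ESXi side. The adapter name vmhba33 and the vmkernel ports vmk1/vmk2 are placeholders, not from my actual hosts:

  # Bind each iSCSI vmkernel port to the software iSCSI adapter
  esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1   # vmk1 on 192.168.1.0/24
  esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2   # vmk2 on 192.168.5.0/24

  # Confirm both ports are bound
  esxcli iscsi networkportal list --adapter vmhba33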

The article recommends no binding for different subnets and routing. But bear in mind you can have communication from the VMware iSCSI vmkernel ports to the SAN across different subnets just fine without routing, by using VLAN switch ports and dedicated switches. Also, if two iSCSI vmnics per vSwitch are used for the vmkernel ports, I don't see why it shouldn't be even better to make use of both vmnics for path failover via port binding.
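
If anyone wants to check this themselves, the per-device paths and their states can be listed with standard esxcli commands (nothing host-specific here):

  # Show all paths and their states (active/standby/dead)
  esxcli storage core path list

  # Show NMP ownership and the path selection policy per device
  esxcli storage nmp device list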

I have read about the disadvantages and what can happen, but I have personally never experienced them.

I would appreciate your thoughts on this. Thank you

45 Posts

September 3rd, 2014 04:00

We see this a lot in support.

The VMware KB is backed up by a new EMC doc:

https://support.emc.com/docu53338_VMware_ESX_5.x_iSCSI_NMP_Connectivity_Guide_for_VNX.pdf?language=en_US

Using multiple subnets leads to intermittent connectivity issues, which comes down to the whole broadcast domain thing.

Keep in mind, the same applies to ALL vendors.

Dell, IBM, and everyone else also recommend one subnet with iSCSI port binding.
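
For illustration, a minimal sketch of the recommended single-subnet layout on a standard vSwitch. The portgroup names (iSCSI-A/iSCSI-B), uplinks (vmnic2/vmnic3), adapter (vmhba33), and addresses are all placeholders:

  # One vmkernel port per portgroup, both on the same subnet
  esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-A
  esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.1.11 --netmask 255.255.255.0 --type static
  esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-B
  esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.168.1.12 --netmask 255.255.255.0 --type static

  # Pin each portgroup to a single active uplink (required for port binding)
  esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-A --active-uplinks vmnic2
  esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-B --active-uplinks vmnic3

  # Bind both vmkernel ports to the software iSCSI adapter
  esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
  esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2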

35 Posts

September 4th, 2014 04:00

Hello-

I'm working with a client to help configure VNX block storage and connect ESXi over iSCSI. I suggested he use multiple subnets, as in the example below.

[Image: iSCSI multi-subnet example diagram - iSCSi.png]

But he said he already has his two NICs configured on the same subnet, which is being used for different storage, and he prefers to keep it that way. So assuming we go with one subnet, can you tell me how to configure the switches? Do I need to use VLANs?
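
On the ESXi side, I assume tagging the two iSCSI portgroups onto a dedicated VLAN would look something like this (VLAN ID 100 and the portgroup names are placeholders; the physical switch ports would then need to trunk or allow that VLAN):

  esxcli network vswitch standard portgroup set --portgroup-name iSCSI-A --vlan-id 100
  esxcli network vswitch standard portgroup set --portgroup-name iSCSI-B --vlan-id 100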

1 Rookie

 • 

30 Posts

September 22nd, 2014 06:00

Hey,

Thanks for your info. It hasn't always been the norm with all storage vendors; it has actually been a recommendation from some, since there are many different types of storage OS, series, and design structures. Over the past 2-3 years this has been evolving, with not every vendor recommending the same thing. I believe it's now becoming a standard, which is good. As mentioned, I have implemented multi-subnet designs with binding many times and have never experienced any issues whatsoever.

I reckon the best bet is to default to the virtualization vendor's and storage vendor's documents and use discretion.

1 Rookie

 • 

4 Posts

February 1st, 2025 00:09

@sullybags​ or anyone else - what is the 'whole broadcast domain thing'? Like, technically, what is the issue?

Moderator

 • 

7.7K Posts

February 3rd, 2025 15:59

Hello ChalkyLad,

A broadcast domain in a VMware environment spans the virtual switches, physical switches, VLANs, and subnets that share the same layer 2 segment. In this context, BUM (Broadcast, Unknown Unicast, Multicast) traffic is layer 2 traffic that must be flooded beyond the virtual switch. Unicast traffic, by contrast, has a known source MAC and a known destination MAC from the perspective of the MAC address table on the switch (virtual or physical). This distinction is crucial for understanding how traffic flows within the broadcast domain.

1 Rookie

 • 

4 Posts

February 3rd, 2025 18:07

@DELL-Sam L​ sure, but in the post above someone said 'using multi-subnet leads to intermittent connectivity issues, which is around the whole broadcast domain thing', so I should clarify that my question is about that: why does multi-subnet lead to intermittent connectivity issues? Why is the broadcast domain relevant to that particular issue?

Moderator

 • 

7.7K Posts

February 3rd, 2025 22:00

Hello ChalkyLad,

In many of the cases I have seen, multi-subnet configurations cause latency and connection issues. In some of those cases, adjusting timeout settings resolved the customer's issues; in other cases it has been different things.
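
For what it's worth, the iSCSI timeout parameters can be inspected and changed per adapter with esxcli. A minimal sketch, assuming the software iSCSI adapter is vmhba33; the value shown is purely illustrative, so check vendor guidance before changing anything:

  # Show current parameters (NoopOutTimeout, RecoveryTimeout, LoginTimeout, ...)
  esxcli iscsi adapter param get --adapter vmhba33

  # Example: raise the NOP-out timeout (illustrative value only)
  esxcli iscsi adapter param set --adapter vmhba33 --key NoopOutTimeout --value 30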

1 Rookie

 • 

4 Posts

February 4th, 2025 22:12

So is there anything specific to multi-subnet port binding, or not?

Moderator

 • 

7.7K Posts

February 5th, 2025 14:57

Hello ChalkyLad,

Which VNX system do you have and what is your current OE?

1 Rookie

 • 

4 Posts

February 5th, 2025 18:00

The OP's question, and mine, is broader than that.

Moderator

 • 

9.4K Posts

February 5th, 2025 18:12

These articles from VMware have more information: https://dell.to/3Q6mJKk, https://dell.to/3Q6mDlW, and https://dell.to/4jILlX7
