
September 2nd, 2014 07:00

VMware - iSCSI - Port Binding Question - Multi Subnet

Hey,

I've been thinking about this recently and would like some input from those with similar thoughts, experience, and opinions, regarding KB 2038869 (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869), which explains the recommended and revised iSCSI multipathing design and standards.

I have worked across most storage platforms using different fabrics. Regarding the recommendation that VMware iSCSI port binding be used with only one subnet: I may be wrong, but I believe this is contextual and only applies if true routing is taking place on the storage network, which is not recommended anyway.

I have implemented many storage solutions where a storage system has two SPs with two subnets used between them for target networks, e.g. SP1 p1 192.168.1.0/24, p2 192.168.5.0/24; SP2 p1 192.168.1.0/24, p2 192.168.5.0/24. On the ESXi hosts I've configured two separate iSCSI vSwitches, each with one vmnic dedicated to its respective target subnet, with dual redundancy from the ESXi hosts through the switches (redundant switching configs, etc.) to the storage throughout the design. I have used port binding with these multiple subnets without trouble, with the expected result of four paths per device with the storage in active/active mode. Path failover and failback work perfectly, with redundancy at every layer.
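For reference, here is a rough sketch of how that layout could be set up from the ESXi shell. The adapter (vmhba33), vmkernel ports (vmk1/vmk2), uplinks (vmnic2/vmnic3), port group names, and addresses are all hypothetical, just for illustration:

# Create one vSwitch per target subnet, each with a single dedicated uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A

esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-B

# One vmkernel port per subnet, matching the SP1/SP2 p1 and p2 networks
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.10 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.5.10 --netmask=255.255.255.0 --type=static

# Bind both vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Each vmk reaches the two SP ports on its subnet, so each device should show 4 paths
esxcli storage core path list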

The article recommends against binding with different subnets and routing. But bear in mind that you can have communication from the VMware iSCSI vmkernel ports to the SAN across different subnets just fine, using VLAN switch ports and dedicated switches, without any routing. Also, if two iSCSI vmnics per vSwitch are used for the vmkernel ports, I don't see why it shouldn't be even better to take advantage of both vmnics, with path failover, by configuring port binding.
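For what it's worth, when two iSCSI vmnics do share one vSwitch, port binding requires each bound vmkernel port group to pin exactly one active uplink (uplinks not listed as active or standby are treated as unused). A minimal sketch, using the same hypothetical names as above:

# Single vSwitch carrying both uplinks; each bound port group pins one
# active uplink so each vmkernel port maps to exactly one vmnic
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3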

I have read about the disadvantages that can occur, but I have personally never experienced them.

I would appreciate your thoughts on this. Thank you

45 Posts

September 3rd, 2014 04:00

We see this a lot in support.

The VMware KB is backed up by a new EMC doc:

https://support.emc.com/docu53338_VMware_ESX_5.x_iSCSI_NMP_Connectivity_Guide_for_VNX.pdf?language=en_US

Using multiple subnets leads to intermittent connectivity issues, which comes down to the whole broadcast domain thing.

Keep in mind, the same applies to ALL vendors.

Dell, IBM, and everyone else also recommend one subnet with iSCSI port binding.
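If you want to check what a host currently has bound, something like the following (again with a hypothetical vmhba33) shows the bound portals and the resulting paths:

# List the vmkernel ports bound to the software iSCSI adapter
esxcli iscsi networkportal list --adapter=vmhba33

# Check path count and state per device; dead or flapping paths here are
# the usual symptom of the broadcast-domain problem described above
esxcli storage core path list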

35 Posts

September 4th, 2014 04:00

Hello-

I'm working with a client to help configure a VNX for block and connect ESXi over iSCSI. I suggested he use multiple subnets, as in the example below.

[Attached image: iSCSi.png]

But he said he already has his two NICs configured on the same subnet, being used for different storage, and that he prefers to stay that way. So, assuming we are going with one subnet, can you tell me how to configure the switches? Do I need to use VLANs?

30 Posts

September 22nd, 2014 06:00

Hey,

Thanks for your info. It hasn't always been the norm with all storage vendors, but rather a recommendation from some, as there are many different types of storage OS, series, and design structures. Over the past 2-3 years it has been evolving, with not every vendor recommending the same thing. I believe it's now actually becoming a standard, which is good. As mentioned, I have implemented multi-subnet designs with port binding many times and have never experienced any issues whatsoever.

I reckon the best bet is to default to the virtualization vendor's and storage vendor's documentation and use discretion.
