sbuerger
2 Bronze

vSwitch setup for iSCSI from inside VM through ESXi


Hi,

In my ESX 4 setup I had a vSwitch for every vmkernel port, and I simply created two port groups on two of those switches for guest iSCSI.

Now, with the MEM on ESXi 5, the setup creates a single vSwitch, and I create two port groups on it for guest iSCSI.

What is the best practice for the vmnic setup for these two port groups?

vmnic4+5 = Dual Port Intel
vmnic6+7 = Dual Port Intel

vmnic4+6 = pswitch1
vmnic5+7 = pswitch2


Should I assign just vmnic4 to vSAN1 and vmnic7 to vSAN2?
Should I set some of the vmnics as standby? And what about the other port group settings?


Accepted Solutions

Re: vSwitch setup for iSCSI from inside VM through ESXi


My suggestion has been to use one vSwitch and assign one physical NIC as active on one VM network port group, with the other NIC on standby. Then do the reverse for the other VM network port group.

My reason is very simple. If each VM network port group has only one pNIC, then on a failure you are relying on the guest VM's MPIO driver to detect the issue and move all I/O over to the remaining path. With Windows and Linux, at least, that involves pausing I/O on all links until a timeout is reached; only then does I/O resume on the surviving path. If, however, you let the vSwitch handle the failover, the MPIO driver is unaware of it and keeps running. When the link is fixed, data simply flows over the restored link as before.

The case for not doing that is a test or QA lab where you want the guest VM to behave just like a physical server.
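The active/standby layout described above can be applied from the ESXi shell with `esxcli`. A sketch, assuming ESXi 5.x standard vSwitches; the port group names `Guest-iSCSI-1` and `Guest-iSCSI-2` are placeholders (substitute your own), while the vmnic numbers follow the original post:

```shell
# Guest iSCSI port group 1: vmnic4 active, vmnic7 on standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Guest-iSCSI-1" \
    --active-uplinks=vmnic4 --standby-uplinks=vmnic7

# Guest iSCSI port group 2: the reverse
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Guest-iSCSI-2" \
    --active-uplinks=vmnic7 --standby-uplinks=vmnic4

# Verify the resulting failover order for each port group
esxcli network vswitch standard portgroup policy failover get \
    --portgroup-name="Guest-iSCSI-1"
```

With this layout a pNIC failure is handled at the vSwitch level: the port group fails over to its standby uplink, and the guest's MPIO driver never sees the outage.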

Regards,

Don

Social Media and Community Professional
#IWork4Dell
Get Support on Twitter - @dellcarespro

2 Replies
sbuerger

Re: vSwitch setup for iSCSI from inside VM through ESXi


Okay, I found something via the thread at communities.vmware.com/.../259695

en.community.dell.com/.../data-drives-in-vmware.aspx

If that's still best practice, I'll go with that setup...

