March 1st, 2014 08:00

In-guest iSCSI considerations when moving to a converged network

Currently we have 1Gb networking and consequently have multiple NICs on any given box devoted to specific traffic. For instance, our standard build assigns two 1Gb NICs to vmk iSCSI traffic and two other separate NICs to in-guest iSCSI traffic. Now we are moving to a 10Gb converged network and will be running vmk iSCSI and in-guest iSCSI over the same physical NICs.

Is there a best practice for this in VMware? Should we use separate port groups? The same port groups? Any Network IO Control concerns? Any special tweaks to anything?

Much of this concern stems from a problem we had a year or so back where in-guest iSCSI initiators would lose their disks and we had to jump through a bunch of hoops setting VMXNET3 buffers and ring sizes, disabling TCP delayed ACK, and 100 other things. Obviously this can't happen again, so I am being extra cautious as we move to this converged infrastructure. Any advice or best practice white papers would be greatly appreciated.

EDIT: Said vDS in places where I meant port groups.
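For reference, here is a minimal sketch of the delayed-ACK tweak mentioned above, assuming the in-guest initiators are Windows VMs running the Microsoft iSCSI initiator; the interface GUID is a placeholder and a reboot (or NIC disable/enable) is typically needed for it to take effect.

```python
# Sketch only: disable TCP delayed ACK on the NIC carrying in-guest iSCSI
# traffic in a Windows guest. Assumes the Microsoft iSCSI initiator; the
# interface GUID passed in below is a placeholder, not a real value.
import winreg

IFACES_KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

def disable_delayed_ack(interface_guid: str) -> None:
    """Set TcpAckFrequency=1 for the given TCP/IP interface."""
    path = IFACES_KEY + "\\" + interface_guid
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0,
                        winreg.KEY_SET_VALUE) as key:
        # 1 = acknowledge every segment immediately (delayed ACK off)
        winreg.SetValueEx(key, "TcpAckFrequency", 0, winreg.REG_DWORD, 1)

# Hypothetical GUID; find the real one under the Interfaces registry key.
disable_delayed_ack("{00000000-0000-0000-0000-000000000000}")
```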

March 1st, 2014 08:00

I think I misspoke. I meant to say a single vDS but multiple port groups, yes, including port groups for VM traffic, vMotion, vmk heartbeat, etc. However, I don't follow you on the different VLANs. Our iSCSI lives in a single VLAN. So are you just saying to put the in-guest iSCSI on one port group and the vmk iSCSI on another port group, both with access to the same underlying physical iSCSI VLAN?

We had looked at nPAR, but it seemed like more effort than it was worth considering what the vDS gives us; I may reconsider that based on your recommendation.
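Not from the thread, but as a rough sanity check of that layout: a short pyVmomi sketch (the vCenter host, credentials, and the vDS name "dvSwitch-Converged" are placeholders) that lists every port group on the vDS with its VLAN setting, so you can confirm the vmk iSCSI and in-guest iSCSI port groups really land on the same iSCSI VLAN.

```python
# Sketch: print each port group on a named vDS together with its VLAN ID.
# Host, credentials and switch name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="user", pwd="pass",
                  sslContext=ssl._create_unverified_context())  # lab use only
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        if dvs.name != "dvSwitch-Converged":
            continue
        for pg in dvs.portgroup:
            vlan = pg.config.defaultPortConfig.vlan
            # A plain VLAN-tagged port group carries a single vlanId;
            # trunk/private-VLAN specs look different, so fall back gracefully.
            print(pg.name, getattr(vlan, "vlanId", vlan))
    view.Destroy()
finally:
    Disconnect(si)
```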

4 Operator • 1.7K Posts

March 1st, 2014 08:00

A single pNIC can only be assigned to a single vSS/vDS as an uplink, so separate switches for ESXi traffic and VM guest traffic are out of the game until you add more pNICs.

Create extra VM Network port groups for your VM guest iSCSI and use different VLANs. Both the vSS and the vDS allow traffic shaping when needed, but only the vDS allows it for both ingress and egress. NIOC only works with a vDS, and traffic shaping works on a different level depending on your version of vSphere.

Check:

blogs.vmware.com/.../vsphere-5-1-network-io-control-nioc-architecture-old-and-new.html
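As a hedged illustration of that suggestion (not an official example), here is a pyVmomi sketch that creates a VM Network port group for in-guest iSCSI on an existing vDS, tags it with a VLAN and enables ingress/egress traffic shaping. The vCenter host, credentials, vDS name, VLAN ID and bandwidth figures are all placeholders.

```python
# Sketch: add a distributed port group for in-guest iSCSI with a VLAN tag and
# ingress/egress traffic shaping. All names and numbers are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def shaping(bps):
    """Enabled traffic-shaping policy; caps are illustrative values."""
    return vim.dvs.DistributedVirtualPort.TrafficShapingPolicy(
        inherited=False,
        enabled=vim.BoolPolicy(inherited=False, value=True),
        averageBandwidth=vim.LongPolicy(inherited=False, value=bps),
        peakBandwidth=vim.LongPolicy(inherited=False, value=bps * 2),
        burstSize=vim.LongPolicy(inherited=False, value=100 * 1024 * 1024))

si = SmartConnect(host="vcenter.example.local", user="user", pwd="pass",
                  sslContext=ssl._create_unverified_context())  # lab use only
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch-Converged")
    view.Destroy()

    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            inherited=False, vlanId=20),      # placeholder iSCSI VLAN
        inShapingPolicy=shaping(4 * 10**9),   # illustrative ingress cap
        outShapingPolicy=shaping(4 * 10**9))  # illustrative egress cap

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name="PG-Guest-iSCSI", type="earlyBinding", numPorts=32,
        defaultPortConfig=port_cfg)
    dvs.AddDVPortgroup_Task([spec])           # returns a vCenter task
finally:
    Disconnect(si)
```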

We have both kinds of setups, which means storage over either a vDS or a vSS. If possible, place your SAN traffic on a vSS, because I have seen weird vDS behaviour in the past. If you use a vDS, please be sure that your vCenter isn't running in the same vSphere cluster it controls.

Maybe your pNICs support nPAR or something similar (this depends on the vendor), which is a very smart solution for your problem.

Regards

Joerg
