We have a lot of VMs with LUNs connected directly to the VMs. All of that is on a group of EQLs in one pool.
2x PS4000, 500 GB drives, RAID 10
1x PS6100XV, 600 GB drives, RAID 50
Firmware 5.1.2 (and no, please don't tell me to go to 5.2 EPA; we all know what happens with new major EQL releases, they end up getting pulled!! 🙂 )...
C: = VMDK on EQL
D: = iSCSI LUN connected directly through the MS iSCSI initiator (MPIO across 2 NICs).
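For reference, claiming MS-initiator-attached disks for MPIO inside the guest is typically done along these lines (a sketch; the device string is the standard ID Windows uses for iSCSI-attached disks, and a reboot is required):

```shell
:: Enable MPIO support for MS iSCSI-attached disks (Windows Server; reboots the claim service).
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: After reboot, verify each guest-attached LUN shows both paths:
mpclaim -s -d
```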
So what are the best settings within Windows?
Jumbo frames (I just heard there is a bug with vmxnet3...)?
Disable on the NIC: Client for MS Networks, QoS Packet Scheduler, File and Printer Sharing for MS Networks, IPv6, and Link-Layer Topology Discovery (both entries).
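On newer Windows guests that unbinding can be scripted instead of clicked through; a sketch, assuming the Server 2012+ `*-NetAdapterBinding` cmdlets (on 2008/R2 you uncheck these in the NIC properties GUI), with placeholder adapter names:

```powershell
# Adapter names "iSCSI1"/"iSCSI2" are placeholders for your two iSCSI NICs.
$nics = "iSCSI1", "iSCSI2"
$bindings = @(
    "ms_msclient"   # Client for Microsoft Networks
    "ms_pacer"      # QoS Packet Scheduler
    "ms_server"     # File and Printer Sharing for Microsoft Networks
    "ms_tcpip6"     # Internet Protocol Version 6 (TCP/IPv6)
    "ms_lltdio"     # Link-Layer Topology Discovery Mapper I/O Driver
    "ms_rspndr"     # Link-Layer Topology Discovery Responder
)
foreach ($nic in $nics) {
    foreach ($b in $bindings) {
        Disable-NetAdapterBinding -Name $nic -ComponentID $b
    }
}
```

That leaves only IPv4 bound on the iSCSI NICs, which matches the list above.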
What advanced settings do you guys use?
Disable Delayed ACKs per MS KB http://support.microsoft.com/kb/328890
Disable Windows Auto-tuning per KB (netsh int tcp set global autotuninglevel=disabled).
Disable Nagle on iSCSI NICs.
TCP Chimney Offload set to 'automatic'.
For hosts with multiple vCPUs, enable RSS on the virtual adapter and in the guest (netsh int tcp set global rss=enabled).
Enable interrupt moderation on the virtual adapter.
Disable flow control in the guest; it must be enabled on the physical NICs.
MPIO setting: Least Queue Depth
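Pulled together, the list above maps to roughly these guest-side commands (a sketch: `<Interface-GUID>` is a placeholder for each iSCSI NIC's interface GUID, the exact registry location for TcpNoDelay varies between guidance documents, and the MPIO policy code assumes the Microsoft DSM, so check the EQL docs for your OS version):

```shell
:: Delayed ACK off, per KB 328890: set per interface under the iSCSI NIC's GUID.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface-GUID>" /v TcpAckFrequency /t REG_DWORD /d 1 /f

:: Nagle off on the same interface:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface-GUID>" /v TcpNoDelay /t REG_DWORD /d 1 /f

:: Auto-tuning off, chimney automatic, RSS on:
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global chimney=automatic
netsh int tcp set global rss=enabled

:: MPIO default load-balance policy: 4 = Least Queue Depth (MS DSM policy code):
mpclaim -l -m 4
```

A reboot is needed for the registry values to take effect.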
is that all?
Hdejongh - the EQL HIT Kit? I assume it is installed. Also, on IPv4, disable DNS.
Just one question though: what prompted you to use pass-through / RDM to the VM rather than letting the VMware host own it and present it to the VM as a VMDK? Size? Performance?
Yeah, DNS is also disabled, sorry, forgot to mention that, but that's more a best practice in general, not only for EQL?
Pass-through is better than RDM because then I can use the HIT Kit with EQL hardware-level snapshots...
I keep mine pretty simple, as in the defaults that are set with the installation of the HIT/ME. I've seen no need or indication to tweak any of the settings, and have good performance. It's cool to tune, but beware of unintended consequences.
Personally, I still use standard frames. If you go with jumbos, you will want to make sure that the vSwitch is fully tested at that frame size. Some have suggested making a separate vSwitch for the guest-attached volumes to ride over, as opposed to having them connect over the same vSwitch that has the iSCSI vmkernel ports. Up to you.
I also still use e1000 vNICs on most of my systems running guest-attached volumes. And yes, turn off everything in the properties of that vNIC except IPv4. And ensure your binding order is correct (LAN interface first, then your two iSCSI NICs). Remember that upgrading virtual hardware can reset this, as it will often create new vNICs for you, so beware.
I keep close tabs on my guest-attached volumes with SAN HQ, which is one of the reasons I'm such a big fan of them for SQL, Exchange, and even flat-file storage servers.