Zewwy
13 Posts
0
November 2nd, 2016 09:00
Sorry, I should have mentioned that my NFS connections to my hosts run over dedicated bonded NICs (LACP) to my 3750s. I originally had the ESXi connections to the 3750s for the NFS subnet bonded (EtherChannel group, mode on) with a load-balancing policy of "Route based on IP hash". I changed those connections to be unbonded and set the VMware load-balancing policy back to the default, "Route based on originating virtual port ID". NFS performance didn't change.
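For anyone wanting to reproduce the teaming change from the host CLI, it looks roughly like this; a minimal sketch only, assuming the NFS vmkernel portgroup sits on a standard vSwitch named vSwitch1 with a portgroup named "NFS-Network" (your names will differ):

    # show the current teaming policy on the NFS vSwitch (assumed name vSwitch1)
    esxcli network vswitch standard policy failover get -v vSwitch1
    # switch load balancing from "iphash" back to the default "portid"
    esxcli network vswitch standard policy failover set -v vSwitch1 -l portid
    # if the portgroup overrides the vSwitch policy, set it there as well (assumed portgroup name)
    esxcli network vswitch standard portgroup policy failover set -p "NFS-Network" -l portid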
The iSCSI networks are still as specified: separate subnets, each on a different switch (separated via VLAN tagging). No jumbo frames enabled yet (still on my to-do list; rough sketch of the host-side change below).
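When I get to the jumbo frames item, the host-side part would be roughly the following; just a sketch, assuming the iSCSI vSwitch is vSwitch2 with vmkernel ports vmk2 and vmk3 (actual names differ), and MTU 9000 also has to be set end to end on the 3750s and the VNXe for it to do anything:

    # raise the MTU on the iSCSI vSwitch (assumed name vSwitch2)
    esxcli network vswitch standard set -v vSwitch2 -m 9000
    # raise the MTU on each iSCSI vmkernel interface (assumed vmk2/vmk3)
    esxcli network ip interface set -i vmk2 -m 9000
    esxcli network ip interface set -i vmk3 -m 9000
    # verify the new MTU values
    esxcli network ip interface list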
Also, I noticed I forgot to mention why I was blown away by changing the round-robin IOPS setting from 1000 to 1: my svMotion speed over iSCSI went from 140 MB/s to over 230 MB/s. That increase in performance was mind blowing, which is why I was sad to see VM I/O performance still being as poor as it was. See attached pastebin for test results.
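For reference, the IOPS change is a per-device setting from the ESXi shell; a sketch, with a placeholder naa ID (use the VNXe LUN's real one):

    # list devices and their current path selection policy
    esxcli storage nmp device list
    # make sure the VNXe LUN uses round robin (placeholder device ID)
    esxcli storage nmp device set -d naa.60060160xxxxxxxx --psp VMW_PSP_RR
    # switch paths after every I/O instead of the default 1000
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.60060160xxxxxxxx --type=iops --iops=1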
Zewwy
13 Posts
0
November 7th, 2016 09:00
I'd appreciate some input from others. After more testing and googling, I found my results are very similar to this person's results. The only difference is that I'm running all my tests on Windows Server 2012 with no A/V installed. I even tried skipping the hypervisor layer by presenting the VNXe iSCSI disk directly to the Windows VM (applying proper MPIO; see the sketch after the link), and I get the same poor performance on random I/O.
https://www.reddit.com/r/sysadmin/comments/2zwqpy/slow_iscsi_randomwrites_on_esx_55_to_netapp/
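For completeness, "proper MPIO" in the guest was along these lines; a rough sketch only, double-check the mpclaim switches against the Windows documentation before running them:

    rem claim all eligible multipath devices for MPIO (reboot required)
    mpclaim -r -i -a ""
    rem show the MPIO disks and their paths
    mpclaim -s -d
    rem set the MSDSM default load-balance policy to round robin (policy 2)
    mpclaim -l -m 2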
Zewwy
13 Posts
0
November 9th, 2016 10:00
Bump, bump, bump... Anyone? Someone at EMC has to be a storage expert...