
ESXi Host - Number of NICs / Best Practice

September 9th, 2013 12:00

Hey folks,

Just a quick question regarding the number of NICs one should have on an ESXi host. I currently have two R910s with 8 NICs each, and I have dedicated 2 of those NICs to iSCSI for my current PS4000. This has worked well.

I will be adding a PS6100 to the group in the coming weeks, so I was planning on expanding the number of iSCSI NICs on each host to 4. My question is: is there any benefit to adding more (e.g., 6)? I know the network will probably not be the bottleneck here, but is there any disadvantage otherwise? I have NICs to spare, so I'm just curious what others are doing...
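For what it's worth, this is roughly how I was planning to bring the extra ports into the software iSCSI adapter on each host (the portgroup, vmk, vmnic, and vmhba names below are just placeholders from my setup):

# add a third iSCSI port group and VMkernel port on the existing iSCSI vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI3 --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk4 --portgroup-name=iSCSI3
esxcli network ip interface ipv4 set --interface-name=vmk4 --ipv4=10.10.10.23 --netmask=255.255.255.0 --type=static
# override the port group so it has exactly one active uplink and no standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI3 --active-uplinks=vmnic4
# bind the new VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba38 --nic=vmk4
# then repeat for iSCSI4 / vmk5 / vmnic5 and rescan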

 

7 Technologist • 729 Posts

September 10th, 2013 05:00

At least two iSCSI SAN ports per host are required for fully redundant SAN connectivity; as your group grows, you can increase this number. SANHQ can give you a good idea of when you need to add more ports to the host.

Also, keep in mind the trunk or LAG between switches: if you increase the group member count, you will also need to increase the bandwidth between the redundant switches if you are using a trunk or LAG. The rule of thumb here is a minimum of two lagged ports per member (add more as needed if performance is less than expected; again, SANHQ can help determine this). If you are using a dedicated stacking cable (usually 10Gb or higher), adding another member to your group shouldn't pose any issue for your interswitch connection at this time.
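If you do end up with a trunk/LAG between the switches rather than a stack, the interswitch link would look something like this on a typical Cisco IOS switch (the channel number and port range below are just an example; size it per the rule of thumb above):

! example only - two-port LACP channel carrying the SAN VLAN between switches
interface Port-channel1
 description ISL to SAN switch 2
 switchport mode trunk
!
interface range GigabitEthernet1/0/23 - 24
 switchport mode trunk
 channel-group 1 mode active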

This link has other useful information: http://en.community.dell.com/techcenter/storage/w/wiki/3615.rapid-equallogic-configuration-portal-by-sis.aspx

Also ensure you have followed our VMware best practices: http://en.community.dell.com/techcenter/extras/m/white_papers/20434601.aspx
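As an example of the kind of settings that paper covers, EQL volumes are usually set to Round Robin with a lower IOPS-per-path value on the ESXi side (the naa ID below is just a placeholder; repeat per volume or script it, and the paper also covers settings such as Delayed ACK):

# example only - set an EQL volume to Round Robin, switching paths every 3 I/Os
esxcli storage nmp device set --device naa.6090a0xxxxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.6090a0xxxxxxxxxxxxxxxxxxxxxxxxxx --type iops --iops 3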

-joe

5 Practitioner • 274.2K Posts

September 10th, 2013 08:00

Hello,

To add to what Joe said, iSCSI, especially with ESX, rarely saturates 2x GbE interfaces. Since ESX typically generates more random I/O, you will usually max out the storage's I/O capacity before you max out the GbE interfaces on the servers. Since you have more GbE links than storage ports, this becomes even more true.

As Joe covered, it's the SAN switches that typically become the issue when you add more EQL storage arrays, especially the interswitch trunks.

I would also suggest that you review the best practices guide for ESX with EQL; there are default ESX settings that can result in reduced performance with EQL storage.
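A quick way to check where you stand is to list the current path policy and path count per device (output will vary by environment):

# show each device's current path selection policy and round robin settings
esxcli storage nmp device list
# show how many iSCSI paths each volume actually has
esxcli storage core path list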

Regards,

 

4 Posts

September 10th, 2013 08:00

Thanks for the guidance. We're stacking two Cisco 3850 switches using their 40G stacking ports, so the interswitch connectivity should be fine there. As others have mentioned, chances are we're not going to max out 4x GbE, so anything more than that is probably a waste.
