Jstorgaard
4 Posts
0
February 24th, 2014 05:00
When searching for the Dell customized VMware images on the website, only the M610 and M620 were on the compatibility list for ESXi 5.x. Which image did you use?
The blades only have 8GB each as of now. I know it's nothing, but they will get upgraded if I decide to run more VMs. Firmware is 6.0.8, so that should be okay.
How can I add more NICs to the blades?
Dev Mgr
4 Operator
9.3K Posts
0
February 24th, 2014 07:00
Each blade has 2 PCIe mezzanine slots. You can put network cards in here, but they will only be able to connect to the B or C fabric, depending on which PCIe slot you put the card in. This means that you'll need to populate the B and/or C fabric on the M1000e with switches or passthrough modules.
I don't remember exactly what NICs the M600's are compatible with, but if you get any quad port NICs for those servers, the only way to get all 4 ports to work is to use either M6348 or MXL switches (not sure if there's a compatible Cisco switch option) on both sides of the fabric (e.g. B1 + B2 or C1 + C2). If you get passthrough modules, M6220's or M8024's, you will only be able to use 2 of the 4 ports of that NIC (the other 2 will show disconnected in VMware).
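To verify which ports actually come up once the fabric modules are installed, you can list the NICs from the ESXi shell. This is just a sketch; the output details depend on your build, and it assumes ESXi 5.x:
esxcli network nic list
# Shows every vmnic with its driver, link state and speed;
# mezzanine ports facing an empty fabric slot will report their link as Down.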
Origin3k
4 Operator
2.4K Posts
1
February 24th, 2014 08:00
The M600 is on the VMware HCL and it's certified for 5.1 and 5.5.
Regards,
Joerg
wadet5k
24 Posts
1
February 25th, 2014 03:00
You typically do not want to mix public (LAN) traffic with iSCSI traffic.
Thus a host will need at least 2 NICs.
The same should be true for VMs: they should not use the physical NICs that ESX uses for its iSCSI connections to the iSCSI volumes. With the limited NICs available in small environments it can be done, but note that iSCSI traffic is typically always active, because this is the connection that attaches a hard drive to the host; instead of a fast SCSI, Fibre Channel, IDE, or ATA connection, it is using the NIC(s).
VMs can also make iSCSI connections to the array; again, if you have the hardware resources, they should have their own NICs for iSCSI traffic. Think of VMs as physical machines and apply the same rules.
With the many advances in drivers and OS features (excluding teaming), you can share NIC bandwidth in virtual environments.
With respect to the switches, you should set up VLANs. Your iSCSI traffic will be on a different VLAN, which again means the host needs multiple NICs.
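As a rough sketch of what that separation looks like on an ESXi 5.x host (the vSwitch, portgroup name, VLAN ID, IP address, and adapter/vmk names below are made-up examples, not values from this thread):
esxcfg-vswitch -A iSCSI vSwitch1
# add a dedicated portgroup for storage traffic on an existing vSwitch
esxcli network vswitch standard portgroup set -p iSCSI -v 20
# tag that portgroup with the iSCSI VLAN
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 iSCSI
# create the VMkernel port that will carry iSCSI
esxcli iscsi networkportal add -A vmhba33 -n vmk1
# bind the new vmknic to the software iSCSI adapter
The public and VM networks then stay on their own portgroups and VLANs using the other NICs.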
You should also make sure that your switches and blades have the latest drivers and firmware.
Note that VMware has had issues for a long time with the heartbeat and how the heartbeat is used.
kb.vmware.com/.../search.do
Data corruption issues with VMware date back to at least 2009 and other versions of ESX.
Dell and VMware have worked together to reduce these issues, and you should also review:
kb.vmware.com/.../search.do
Jstorgaard
4 Posts
0
February 26th, 2014 03:00
I was not aware I could insert extra NICs in the blades. I will see if I can pick up something affordable, plus switch or passthrough modules for the fabrics in the M1000e.
Thanks everyone!
Jstorgaard
4 Posts
0
February 26th, 2014 03:00
Would it be recommended to just use the official VMware ESXi image, or the Dell-customized one even though it doesn't mention the M600 in the compatibility list?