Unsolved
6 Posts
September 5th, 2008 04:00
M1000e/M600 with VMware ESX 3.5
Hi All,
We have now taken delivery of our shiny new M1000e and I was wondering if anyone out there has set these up with ESX 3.5. We have purchased 4 x M6220 switches and are hoping to get the leads to allow stacking them.
With VMware you are recommended to use separate physical NICs for the virtual machines, the service console and the VMotion interface, but that configuration doesn't really allow for resilience when you have 4 NICs.
With the 1955 chassis we were even more limited, having only 2 NICs. In that case we created one vSwitch, added the VMotion, service console and VM port groups to it, and assigned both physical adapters. The virtual switch did some very basic load balancing across the pNICs and we had a solution. Then on the Dell switch we configured notification of uplink failure and used EtherChannel to team the uplinks to our Cisco core switch.
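For reference, this is roughly how that two-NIC setup can be built from the service console (a minimal sketch only; the vSwitch, vmnic and port group names and the IP addresses are examples, assuming networking is being built from scratch):

    # One vSwitch carrying everything, with both pNICs as uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic0 vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    # Port groups for VM traffic, VMotion and the service console
    esxcfg-vswitch -A "VM Network" vSwitch1
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vswitch -A "Service Console" vSwitch1
    # Service console and VMkernel interfaces (example addresses)
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0
    esxcfg-vmknic -a "VMotion" -i 192.168.2.10 -n 255.255.255.0

One caveat with the EtherChannel approach: a static channel on the physical switch requires the vSwitch load-balancing policy to be "Route based on ip hash" (set per vSwitch in the VI Client); with the default port-ID policy the upstream ports should not be channelled.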
The big question is how to configure the system for both performance and reliability now that we have more options. What configurations have people used?
Thanks in advance
Hywel

Bala Chandrasek
57 Posts
September 24th, 2008 10:00
First, why do you have only 4 M6220 switches? The M1000e has six I/O module slots across three fabrics, so I am assuming the remaining slots carry Fibre Channel.
If you do not use HA, here is the recommended configuration (a command sketch follows the list):
NIC 0 - Fabric A1 - Service console
NIC 1 - Fabric A2 - VMotion
NIC 2 and NIC 3 will be teamed - Fabrics B1 and B2 for VMs
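A minimal service-console sketch of that layout (the vmnic numbering, port group names and IP addresses are assumptions; vmnic0-vmnic3 are taken to map to fabrics A1, A2, B1 and B2):

    # Service console on fabric A1 (vmnic0), using the default vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0
    # VMotion on fabric A2 (vmnic1)
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vmknic -a "VMotion" -i 192.168.2.10 -n 255.255.255.0
    # VM traffic teamed across fabrics B1 and B2 (vmnic2 + vmnic3)
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    esxcfg-vswitch -A "VM Network" vSwitch2

VMotion itself still has to be enabled on that VMkernel interface through the VI Client.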
If you want redundancy or have VMware HA (again, sketched below the list):
NIC 0 and NIC 1 will be teamed - Fabric A1 and Fabric A2 - shared between the service console and VMotion. It is recommended to have VMotion on a separate VLAN. You can make NIC 0 the primary NIC for the service console and NIC 1 the primary NIC for VMotion; that way they will use different NICs unless there is a failure.
NIC 2 and NIC 3 will be teamed - Fabrics B1 and B2 for VMs
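A sketch of the teamed variant (same assumptions as above; the per-port-group active/standby order cannot easily be set from the ESX 3.5 command line, so that part is done in the VI Client):

    # Service console + VMotion share vSwitch0, uplinked to A1 and A2
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -A "VMotion" vSwitch0
    esxcfg-vswitch -v 20 -p "VMotion" vSwitch0   # example VLAN ID for VMotion
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0
    esxcfg-vmknic -a "VMotion" -i 192.168.2.10 -n 255.255.255.0
    # In the VI Client: Service Console -> vmnic0 active / vmnic1 standby,
    # VMotion -> vmnic1 active / vmnic0 standby
    # VM traffic teamed across B1 and B2 as in the non-HA layout
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "VM Network" vSwitch1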
Hope this helps.
Bala.
Dell Inc
JohanHoeke
2 Posts
October 9th, 2008 15:00
Thanks in advance,
Johan