
July 30th, 2008 08:00

VMware ESX 3.5 on M600 blades: configuration help needed

My company is about to order 8 M600 blades in an M1000e chassis for an ESX 3.5 host farm.
We currently have a Dell/EMC FC SAN and are going to add an iSCSI SAN such as EqualLogic.

M600 blades have 2 NICs onboard. The recommended ESX configuration calls for 8 NICs (2 iSCSI, 2 production network, 2 service console, 2 VMotion/HA/DRS).
There are 2 expansion slots: 1 for an FC HBA, 1 for a dual-port NIC. That gives me 4 NIC ports, rather than the 8 recommended by VMware.

1) Will 4 NIC ports suffice for a full-blown ESX configuration running 50+ VMs?
2) Will we need a second set of Ethernet switches? The current config has 2 redundant Ethernet and FC switches.
3) Is it necessary to have a Cisco switch for EqualLogic? I heard Cisco is certified, but I'm not sure about Dell switches.

Any help appreciated.


July 30th, 2008 10:00

Six or eight NICs for ESX as you have described here is a best-practice / high-availability configuration; it is not a requirement. I would agree that you need 2 for the iSCSI network and 2 for the production network. Depending on what you are going to do through the service console, it is most likely going to carry low network traffic volume, so you could have the service console share the same NICs as your production network. The same can be true for VMotion - do you anticipate doing lots of VMotion?
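
For example, a consolidated 4-NIC layout on ESX 3.5 might look something like this from the service console (a rough sketch only - the vSwitch names, vmnic numbers, port group names, and IP addresses below are placeholders, not your actual values):

    # vSwitch0: production VMs, service console, and VMotion sharing two uplinks
    esxcfg-vswitch -a vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0
    esxcfg-vswitch -A "Production" vSwitch0
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -A "VMotion" vSwitch0
    esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0
    esxcfg-vmknic -a -i 192.168.2.10 -n 255.255.255.0 "VMotion"

    # vSwitch1: two uplinks dedicated to iSCSI
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "iSCSI" vSwitch1
    esxcfg-vmknic -a -i 192.168.3.10 -n 255.255.255.0 "iSCSI"

VMotion itself still gets enabled on that VMkernel port group through the VI Client. With per-port-group NIC teaming policies you can keep each traffic type on its own preferred uplink and still fail over to the other NIC if a link drops.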

The sticky part is that you are going to have Fibre Channel as well as iSCSI needs on the same blade servers. Things would be simpler if you could get everything onto one type of storage. Are you using the Dell|EMC combo arrays that can do FC or iSCSI?
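
If you do end up on the ESX software iSCSI initiator for the EqualLogic side, the service console steps are roughly as follows (again a sketch - the group IP and vmhba number are placeholders, and note that ESX 3.x also wants a service console connection on the iSCSI network for the software initiator):

    # Enable the software iSCSI initiator and open its firewall port
    esxcfg-swiscsi -e
    esxcfg-firewall -e swISCSIClient
    # Point send-target discovery at the EqualLogic group address, then rescan
    vmkiscsi-tool -D -a 192.168.3.100 vmhba32
    esxcfg-rescan vmhba32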

50 VMs per server or 50 across a number of blades?

You will need I/O module switches to correspond with each expansion slot - to use a mezzanine card in a blade you must have a matching I/O module in the chassis. On the M1000e the onboard NICs map to fabric A (modules A1/A2), and the two mezzanine slots map to fabrics B and C (B1/B2 and C1/C2), so your FC HBA and your dual-port NIC mezzanine each need their own pair of matching I/O modules.

I'm not aware of any switch certification for EqualLogic.

Todd

July 30th, 2008 11:00

Thanks Todd, very helpful.

We currently have 50 VMs on 8 PowerEdge 1955 blades, migrating to 8 quad-core M600s and thinking of re-using the current blades at the DR site.
The current setup is AX100/AX150/EMC Celerra SANs, all Fibre Channel.
Going EMC at the DR site is cost-prohibitive for us, so iSCSI like EqualLogic at both the primary site and the DR site would fit the bill perfectly.

The major decision is whether to go all 15K SAS iSCSI both here and at DR, or somehow keep the existing EMC FC SAN for the critical workloads.

