I’m building a highly available solution using Hyper-V and Microsoft Failover Clustering. My goal is to double all hardware to eliminate single points of failure. I know how to provide iSCSI network redundancy and heartbeat network redundancy, and I also know how to set up firewall redundancy.
Unfortunately, the weak point in my plan is public network redundancy. I tried to find a solution for eliminating the switch and the NIC as single points of failure (as far as I know, I cannot use NIC teaming due to a compatibility issue with Hyper-V). In my opinion it is quite probable that a NIC or a switch will go down (I have even seen a Cisco switch stop working completely). The only advice I’ve received is to provide a second, standby network, so that when I find a NIC or switch is down I can manually move the virtual switch to another adapter. Unfortunately, in my case this has to be an automatic process.
However, I still hope someone can give me an answer to this question.
It’s a nice idea to use MS Hyper-V, although I’m not sure Microsoft has released its failover feature yet (called Live Migration, which moves a VM without powering it down; I’ve only seen a web presentation and the info that VM failover will be released with the next version). So why not take a closer look at VMware Infrastructure? Xen is also a price alternative. VMware is certainly expensive, but it provides everything you need: VMotion in case one ESX server is out of the game, full cluster support, NIC teaming with failover, very good options to harden your network, and very good iSCSI support (with a software and/or hardware initiator).

To provide full failover and multipathing, use 6 NICs per ESX host: 2 for iSCSI, 2 for a redundant VMotion/console connection, and 2 for the VMs (or only 1 for management — yes, I agree with Scott’s recommendation in the other thread). Use 2 switches with good ASICs (I often offer the PowerConnect 5424 or 62xx series for mid-sized environments, but for smaller ones a PowerConnect 27xx series will handle the job as well). Connect the servers crosswise to the 2 switches, connect the two switches via a stack (PC 62xx series) or uplink them via optical fiber, and connect the iSCSI storage crosswise to the switches. For 2 iSCSI storage arrays, make sure you have storage with a replication feature, e.g. the AX 4/5i or EqualLogic; the EqualLogic in particular has very good features for failover scenarios and asynchronous replication (very suitable for separate rooms). Separate the storage network from the production network and you are good to go. A clear backup strategy is also a good idea. If you look into EqualLogic you will find it expensive, but it’s worth a look: many, many useful features, and easy to use as well.

Regardless of the hardware used, I’m really not sure you can provide a fully redundant virtualized environment with MS Hyper-V, but please post your results — your intentions are very interesting. Good luck and keep posting results. Thanks a lot, Marcus
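The 6-NIC layout above could be sketched on an ESX host roughly as follows, using the classic ESX 3.x service-console CLI. This is only an illustration of the intent, not a tested configuration: the vSwitch names, port group names, vmnic numbering, and the IP address are all assumptions, and the exact commands would need to be checked against the ESX version in use.

```shell
# Sketch of the 6-NIC redundancy layout (assumed names/addresses).
# Two physical uplinks per vSwitch, each cabled to a different switch.

# vSwitch0: iSCSI storage network (2 uplinks, crossed to both switches)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "iSCSI" vSwitch0
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 "iSCSI"

# vSwitch1: service console + VMotion (2 uplinks)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1

# vSwitch2: virtual machine traffic (2 uplinks)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2
```

With both uplinks of each vSwitch cabled to different physical switches, the loss of one NIC or one switch leaves every traffic class with a surviving path, which is exactly the public-network redundancy the original question is after.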
Sorry, I forgot to mention: if you use ESX, a Microsoft failover cluster no longer makes sense — HA and VMotion from VMware will handle the failover scenario, though it is more expensive to implement. Maybe Xen provides similar features, but I’m not sure; I’m too "VMware-ized" *g*
As mentioned before, please post your results — it looks like a very, very interesting project.
I can see you are very "VMware-ized" 🙂 The problem is price. Hyper-V is essentially free, while VMware costs a lot. I don’t need live migration, and Microsoft Failover Clustering gives me high availability similar to VMware HA. The only problem I can find is high availability of the network adapters: it is native in VMware and it is not native in Windows Server. Microsoft leaves NIC teaming to the OEM manufacturers, and unfortunately, for some reason, neither Dell nor HP supports it with Hyper-V. I found answers from some people who installed NIC teaming and it works fine, but others had problems as well. I would like to find out what I should do to fix this problem…
I do not believe there is currently a workaround. The following Dell document has some details about the inability to team either Broadcom or Intel NICs and then use them with Hyper-V virtual networking. The info is on page 15: http://support.dell.com/support/edocs/software/ws2k8/en/Hyper-V/IIG.pdf