PowerFlex: vMotion to specific ESXi hosts fails with error "Failed to receive migration"

Summary: vMotion between ESXi hosts fails due to configuration mismatch of VHV setting.

This article is not tied to any specific product. Not all product versions are identified in this article.

Symptoms

When attempting to migrate VMs to a specific ESXi host using vMotion, the migration fails with the error "Failed to receive migration."

Example of the configuration difference between two ESXi hosts (vhv.enable is present on esxi01 but absent on esxi02):

[root@esxi01:/etc] cat /etc/vmware/config
libdir = "/usr/lib/vmware"
authd.proxy.nfc = "vmware-hostd:ha-nfc"
authd.proxy.nfcssl = "vmware-hostd:ha-nfcssl"
authd.proxy.vpxa-nfcssl = "vmware-vpxa:vpxa-nfcssl"
authd.proxy.vpxa-nfc = "vmware-vpxa:vpxa-nfc"
authd.fullpath = "/sbin/authd"
vhv.enable = "TRUE"

[root@esxi02:/etc] cat /etc/vmware/config
libdir = "/usr/lib/vmware"
authd.proxy.nfc = "vmware-hostd:ha-nfc"
authd.proxy.nfcssl = "vmware-hostd:ha-nfcssl"
authd.proxy.vpxa-nfcssl = "vmware-vpxa:vpxa-nfcssl"
authd.proxy.vpxa-nfc = "vmware-vpxa:vpxa-nfc"
authd.fullpath = "/sbin/authd"

Cause

vMotion fails because of a configuration mismatch between ESXi hosts in the environment: Virtual Hardware-Assisted Virtualization (VHV) is enabled on some hosts and disabled on others, and vMotion fails between the two groups of hosts. The issue can be confirmed by reviewing the "vmware.log" file associated with the running VM and the "/var/log/hostd.log" file on the source and destination ESXi hosts.

 
HOSTD.LOG:
YYYY-MM-DDTHH:MM:SS.707Z warning hostd[3E9C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52564a82ba326e84-801d1a338d7d6fbc/7cb5cb5a-3b00-46cf-40eb-3cfdfe0f1d40/VIRTUAL_MACHINE.vmx] Failed to find activation record, event user unknown.
YYYY-MM-DDTHH:MM:SS.708Z info hostd[3E9C2B70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 131 : Error message on VIRTUAL_MACHINE on target_esxi.fqdn.com in ha-datacenter: Configuration mismatch: The virtual machine cannot be restored because the snapshot was taken with VHV enabled. To restore, set vhv.enable to true.
YYYY-MM-DDTHH:MM:SS.709Z info hostd[40040B70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 132 : Deleted ports in the vSphere Distributed Switch  in ha-datacenter.
YYYY-MM-DDTHH:MM:SS.710Z info hostd[2BDE2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52564a82ba326e84-801d1a338d7d6fbc/7cb5cb5a-3b00-46cf-40eb-3cfdfe0f1d40/VIRTUAL_MACHINE.vmx] Answered question 1938160
YYYY-MM-DDTHH:MM:SS.710Z warning hostd[2BDE2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:52564a82ba326e84-801d1a338d7d6fbc/7cb5cb5a-3b00-46cf-40eb-3cfdfe0f1d40/VIRTUAL_MACHINE.vmx] Failed to find activation record, event user unknown.
YYYY-MM-DDTHH:MM:SS.710Z info hostd[2BDE2B70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 133 : Error message on VIRTUAL_MACHINE on target_esxi.fqdn.com in ha-datacenter: Failed to receive migration.

VMWARE.LOG:
YYYY-MM-DDTHH:MM:SS.407Z| vmx| I125: Msg_Post: Error
YYYY-MM-DDTHH:MM:SS.407Z| vmx| I125: [msg.cpuid.vhv.enablemismatch] Configuration mismatch: The virtual machine cannot be restored because the snapshot was taken with VHV enabled. To restore, set vhv.enable to true.
YYYY-MM-DDTHH:MM:SS.407Z| vmx| I125: ----------------------------------------
YYYY-MM-DDTHH:MM:SS.409Z| vmx| I125: Vigor_MessageRevoke: message 'msg.cpuid.vhv.enablemismatch' (seq 1946687) is revoked
YYYY-MM-DDTHH:MM:SS.409Z| vmx| I125: MigrateSetStateFinished: type=2 new state=12
YYYY-MM-DDTHH:MM:SS.409Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
YYYY-MM-DDTHH:MM:SS.409Z| vmx| I125: Migrate: Caching migration error message list:
YYYY-MM-DDTHH:MM:SS.409Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
YYYY-MM-DDTHH:MM:SS.410Z| vmx| I125: Msg_Post: Error
YYYY-MM-DDTHH:MM:SS.410Z| vmx| I125: [msg.checkpoint.migration.failedReceive] Failed to receive migration.
YYYY-MM-DDTHH:MM:SS.410Z| vmx| I125: ----------------------------------------
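
In addition to the log entries above, the VHV setting can be checked directly on each host. Below is a minimal check over SSH (the host names are illustrative); no output from grep means the line is absent and VHV is disabled on that host:

[root@esxi01:~] grep vhv.enable /etc/vmware/config
vhv.enable = "TRUE"

[root@esxi02:~] grep vhv.enable /etc/vmware/config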

Resolution

Based on the VMware KB article below, the suggested fix is to disable VHV across all ESXi hosts. If any form of nested virtualization is in use in the environment (for example, running ESXi as a VM), this configuration change will impact those nested VMs.

Support for running ESXi as a nested virtualization solution


To disable VHV, perform the following steps (a command sketch follows this list):

  1. Place the ESXi host into Maintenance Mode.
  2. SSH to the ESXi host.
  3. Navigate to the path /etc/vmware/.
  4. Back up the existing configuration file by running the command "cp config config.bak".
  5. Edit the configuration file "config" and remove the line vhv.enable = "TRUE".
  6. Reboot the ESXi host.
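
For reference, steps 3 through 6 can look as follows from the SSH session. This is a minimal sketch: grep -v is one way to strip the line (it writes every line except the match), and editing the file with vi works equally well. The second grep should return no output, confirming the line is gone before the reboot:

[root@esxi01:~] cd /etc/vmware
[root@esxi01:/etc/vmware] cp config config.bak
[root@esxi01:/etc/vmware] grep -v vhv.enable config.bak > config
[root@esxi01:/etc/vmware] grep vhv.enable config
[root@esxi01:/etc/vmware] reboot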


NOTE: While VHV is being disabled across an ESXi environment, vMotion will only migrate VMs between hosts that have the same VHV setting. Downtime will most likely be required for VMs running on the ESXi hosts that still have VHV enabled.

Example:

  • ESXi 1/2/3 have VHV enabled; ESXi 4/5/6 have VHV disabled. VMs are running on each of ESXi 1/2/3.
  • ESXi 3 enters Maintenance Mode and migrates its VMs to ESXi 2. VHV is then disabled on ESXi 3.
  • ESXi 2 enters Maintenance Mode and migrates its VMs to ESXi 1. VHV is then disabled on ESXi 2.
  • ESXi 1 cannot enter Maintenance Mode because its running VMs cannot vMotion to the remaining hosts due to the VHV configuration difference. The VMs must be powered off temporarily at this point (a command sketch follows this list).
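
If the remaining VMs need to be shut down from the host shell (ESXi 1 in this example), vim-cmd can be used. This is a sketch only: the VM ID 10 is illustrative, and shutting down through vCenter or the guest OS works just as well. "vim-cmd vmsvc/getallvms" lists each VM with its Vmid, and "vim-cmd vmsvc/power.shutdown" issues a graceful guest shutdown through VMware Tools (use "power.off" for a hard power-off if Tools is not running):

[root@esxi01:~] vim-cmd vmsvc/getallvms
[root@esxi01:~] vim-cmd vmsvc/power.shutdown 10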

Affected Products

PowerFlex rack, ScaleIO
Article Properties
Article Number: 000032908
Article Type: Solution
Last Modified: 30 Sept 2025
Version:  4