NPIV, VNX, Hyper-V, virtual HBA
Rafa is correct, and your assumptions are exactly right. Supported guests are Windows Server 2008, 2008 R2, and 2012.
“Proper zoning” is very important. There are two WWPNs associated with each vHBA, and both must be zoned to the same target. If that is not done, the VM will run normally in production, but Live Migration will fail.
More info here.
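To make the "both WWPNs zoned to the same target" point concrete, here is a rough sketch of what the zoning could look like on a Brocade B-Series switch. All alias names and WWPNs below are made-up placeholders; substitute the A and B address set WWPNs of your vHBA and your actual SP port WWPN:

```text
alicreate "VM1_vHBA0_SetA", "c0:03:ff:00:00:ff:00:01"
alicreate "VM1_vHBA0_SetB", "c0:03:ff:00:00:ff:00:02"
alicreate "VNX_SPA0", "50:06:01:60:3b:20:11:22"
zonecreate "VM1_vHBA0_VNX_SPA0", "VM1_vHBA0_SetA; VM1_vHBA0_SetB; VNX_SPA0"
cfgadd "PROD_CFG", "VM1_vHBA0_VNX_SPA0"
cfgenable "PROD_CFG"
```

If only one of the two sets is zoned, the VM runs fine on its current host, but the handover to the inactive WWPN during Live Migration fails.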
From: Novo, Rafael
Sent: Friday, February 15, 2013 9:18 AM
To: ElAsir, Zahi; mSpecialists
Subject: Re: Hyper-V - NPIV - VNX
We support it.
Just a naming note:
- The HBA should support NPIV
- The FC switch should also support NPIV
And I don't have all the supported guest OSes off the top of my head.
Sent from mobile device
From: ElAsir, Zahi
Sent: Friday, February 15, 2013 09:03 AM
To: mSpecialists
Subject: Hyper-V - NPIV - VNX
Hi All,
My customer is asking if we support “Virtual Fibre Channel” on Windows Server 2012 Hyper-V.
If I’m not mistaken, this is the same feature that allows a virtual server on VMware to get direct access to the LUN (RDM).
For this technology to function, the following pre-requisites must be met:
- The Physical host must be running Windows Server 2012
- The Physical HBA on the server must support virtual Fibre Channel (vFC) and it must be running the latest driver/firmware version
- The Storage network must support NPIV (N-Port ID Virtualization). This is usually a configuration performed on the storage switch(es).
- Proper targeting/zoning must be performed, to allow a LUN on the SAN to be accessed by the VM’s WWPN
- The Virtual Machine must be running Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012.
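For reference, the Hyper-V side of the prerequisites above is configured with the Hyper-V PowerShell module. This is only a configuration sketch; the Virtual SAN name, VM name, and physical HBA WWN values below are placeholders for illustration:

```powershell
# Create a Hyper-V Virtual SAN bound to a physical NPIV-capable HBA port.
# The WWNN/WWPN values identify the physical HBA port (placeholders here).
New-VMSan -Name "FabricA" `
    -WorldWideNodeName "C003FF0000FF0000" `
    -WorldWidePortName "C003FF0000FF0001"

# Give the VM a virtual FC adapter on that Virtual SAN; Hyper-V generates
# the A and B WWPN address sets for the vHBA automatically.
Add-VMFibreChannelAdapter -VMName "MyVM" -SanName "FabricA"

# List the generated WWPN sets so they can be zoned and registered on the VNX.
Get-VMFibreChannelAdapter -VMName "MyVM" |
    Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB
```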
I have Brocade B-Series switches, which already support NPIV.
I just wanted to check whether there are any considerations we have to keep in mind from a storage perspective (VNX5300).
Thank you
Regards,
Zahi El Asir | Senior Systems Engineer
EMC2 Middle East
EMC2 Bldg - Dubai Internet City
PO Box 500166 - Dubai, UAE
Mobile: +971 55 4700879(UAE) / +961 3 952804(LEB)
Office Direct: +971 4 4240378
admingirl
October 28th, 2013 12:00
Hi,
I am having the hardest time getting this to work. My A address set shows up, but after I manually add the B set, nothing happens.
In each VSAN I have four SP connections (2 for SPA, 2 for SPB).
My question is: do the vHBAs have to be zoned to all SPs?
Thanks in advance,
Admingirl
sddc_guy
October 28th, 2013 21:00
Every SP (not every SP port, only the ones you want to connect to) needs to be zoned.
For instance:
You have two NPIV vHBAs, NPIV0 and NPIV1, and you use NPIV0 in Fabric A.
You have two SPs and use one port per SP in Fabric A: SPA0 and SPB1.
You need to have the following zones created in Fabric A:
- NPIV0 (both WWPN sets) + SPA0
- NPIV0 (both WWPN sets) + SPB1
Since the B WWPN is inactive, it will not show up at the VNX SPs, so you have to register it manually or do a Live Migration to register it!
Please note that NPIV is not supported on 2012 HA VMs...
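The zoning described above could look roughly like this on a Cisco MDS switch (relevant since this fabric uses VSANs). The VSAN number, zone names, and all WWPNs are made-up placeholders for illustration:

```text
! Fabric A (VSAN 10): zone NPIV0's two WWPN address sets to one port per SP.
! The first two pwwn members are the vHBA's A and B sets;
! the third is the VNX SP port (SPA0 or SPB1).
zone name NPIV0_SPA0 vsan 10
  member pwwn c0:03:ff:00:00:ff:00:01
  member pwwn c0:03:ff:00:00:ff:00:02
  member pwwn 50:06:01:60:3b:20:11:22
zone name NPIV0_SPB1 vsan 10
  member pwwn c0:03:ff:00:00:ff:00:01
  member pwwn c0:03:ff:00:00:ff:00:02
  member pwwn 50:06:01:68:3b:20:11:22
zoneset name FABRIC_A vsan 10
  member NPIV0_SPA0
  member NPIV0_SPB1
zoneset activate name FABRIC_A vsan 10
```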
dynamox
October 29th, 2013 05:00
Karsten,
how can you do a live migration before the B side is registered? Can you also provide more details regarding "NPIV is not supported on 2012 HA VMs"?
Thank you
mmcghee1
October 29th, 2013 06:00
I apologize if this is obvious, but just to be clear: the B-side virtual HBAs were manually registered and zoned. Were they also connected to the same storage group as the A side?
Thanks,
Mike
admingirl
October 29th, 2013 06:00
Thank you Karsten and Dynamox.
I already did that (manually registered the vHBAs) and then zoned them on the B side. It still did not work when we tried failover.
That is why I reached out again.
Benita
admingirl
October 29th, 2013 07:00
Yep, they are all connected to the same storage group.
Here is the question I have. We have two separate VSANs. I manually added all of the vHBAs to each VSAN, regardless of whether or not the A address set was originally zoned there. This was per one tech here who said he had done this before in a past life on a Brocade, and that is what he did. Is this a mistake?
Thanks again.
Benita
mmcghee1
October 29th, 2013 09:00
One thing you could try in order to troubleshoot is to swap the A and B address set WWPNs. With the VM powered off, go to the settings for each adapter and swap the WWPNs as shown in the image below. The next time the VM is powered up, it will use the other WWPN. Hopefully this will tell you whether it's a zoning issue for the VM addresses, or perhaps a problem with a specific cluster node.
I also wrote a script that does the same thing. Replace the $vmname variable with your VM. Use at your own risk:
$vmname = "FCPTSMIS"
# Swap the A and B WWPN address sets on every virtual FC adapter of the VM.
# Run only while the VM is powered off.
function SwapWWPN {
    $vmhba = Get-VMFibreChannelAdapter -VMName $vmname
    foreach ($hba in $vmhba) {
        $hba | Set-VMFibreChannelAdapter `
            -NewWorldWidePortNameSetA $hba.WorldWidePortNameSetB `
            -NewWorldWidePortNameSetB $hba.WorldWidePortNameSetA
    }
}
# Print the current WWPN address sets for every virtual FC adapter of the VM.
function CurrentWWPN {
    $vmhba = Get-VMFibreChannelAdapter -VMName $vmname
    foreach ($hba in $vmhba) {
        Write-Host -BackgroundColor Blue "Current WWPN"
        $hba.SanName
        Write-Host "Address Set A" $hba.WorldWidePortNameSetA
        Write-Host "Address Set B" $hba.WorldWidePortNameSetB
    }
}
# Uncomment the next line to perform the swap:
#SwapWWPN
CurrentWWPN
mmcghee1
October 29th, 2013 09:00
If I understand correctly, you'll end up with addresses that will never log in to a given fabric because of their physical connectivity, but I don't think that will cause your current issue. To make sure we're on the same page, please see the image below. Both the A address set and the B address set for a given virtual adapter go through the same Hyper-V Virtual SAN, and therefore the same physical adapter(s) and the same fabric. For example, address sets A and B for Virtual Adapter A would be zoned through Fabric A (or VSAN A in your case) to the desired SP ports. If you added address sets A and B from Virtual Adapter B to Fabric (VSAN) A, that wouldn't hurt anything, but it also wouldn't help.
sknair89
October 30th, 2014 00:00
Thanks all, this helps me a lot.
thomas_texier
February 18th, 2016 01:00
Hi,
I did all zoning, registered vHBAs, etc.
LUN access from guest OS is working and live migration as well.
Just a cosmetic issue: since some vHBAs are used only for live migration, they are almost never in use (not logged in to the storage array except during a live migration). The VNX array shows the following warning:
"The following hosts have initiators that no longer have an active connection to the storage system: (VMNAME).
Troubleshoot any inactive connections or deregister all connections that are no longer in use.
0x724b"
Is there a way to avoid this warning, since the situation is completely normal?
Thank you for your replies.
Regards,
dynamox
February 21st, 2016 19:00
Not that I know of. They are not logged in on mine either until the guest fails over.