We are trying to set up an AIX LPAR to boot from EMC storage that it reaches via NPIV. The virtualization layer is handled by redundant Virtual I/O (VIO) servers, each with a virtual FC (fcs) adapter mapped to a local physical port. Each virtual FC adapter has two WWPNs so we can fail these LPARs over to a different physical server using Live Partition Mobility (LPM).
Once everything was set up we forced the WWPNs to log in to the fabric using the chnportlogin command on the Hardware Management Console (HMC). At this point we should (and can) see these WWPNs logged into the fabric, and we need to be able to assign LUNs to them. However, I am not seeing the WWPNs logged into the array. Do they have to be created manually?
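For anyone searching later, the HMC invocation looks roughly like the sketch below. The managed-system and partition names are made-up placeholders; substitute your own:

```shell
# Placeholder names -- replace with your own managed system and partition.
MANAGED_SYS="Server-8233-E8B-SN061AA6P"
LPAR_NAME="aixlpar01"

# Build the HMC command that logs in every virtual FC WWPN for the
# partition, including the second (LPM) WWPN of each pair.
CMD="chnportlogin -o login -m $MANAGED_SYS -p $LPAR_NAME"
echo "$CMD"
```

Run the resulting command on the HMC itself; lsnportlogin should then show the login status of each WWPN.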
So when you look at the zones you created, you see both the WWPNs of the EMC storage and the AIX WWPNs logged in, correct?
Make sure you run the emc_cfgmgr script once you finish the zoning and then check the connectivity status.
We are experiencing the same problem: we also see the WWPNs on the SAN switch and they are zoned, but they do not show up on the CLARiiON array. To the OP, did you ever get a resolution?
And everyone keep in mind: this is for a new LPAR with no AIX OS installed yet, so there is nowhere to run the cfgmgr and emc_cfgmgr commands.
Does the AIX installer itself have some option to scan for disks, which should then force an array login?
I am more familiar with Solaris and HP-UX, and I know they both do.
There are two ways to scan for disks on an AIX partition before the LPAR boots. One is to go into SMS mode (maintenance mode) and scan on the adapters; the other is to go into firmware maintenance mode, issue the ioinfo command, and select the adapter.
SMS mode doesn't push the WWPNs to the CLARiiON, whereas the firmware method does. However, it only does so for the one adapter you're currently scanning on. If you have a separate SAN team, you have to scan one adapter, have them register it, then move on to the next. This is painful for both the AIX admin and the SAN team. It also does nothing for the other WWPN in the pair (the one for LPM), which is never recognized by the CLARiiON using either method.
That leaves us having to manually register the WWPNs on the CLARiiON, which is error-prone, and there is also no indication of which SP port should be assigned.
Very frustrating, to say the least.
We have the exact same issue with IBM P770 virtual WWPNs: after zoning, we are not able to see the WWPNs on the VNX array.
Scanning using the firmware method on the IBM side fixed it; we are able to see the WWPNs on the storage array for manually registering and provisioning the boot-from-SAN LUN (OS LUN).
We've had similar issues and, in general, a tough time keeping our NPIV-based LPARs logged into our VNX. Did you ever find a better way to perform the zoning/masking? How has the stability been?
Over the next 6 months, we'll be moving to a VNX as well - so I'll let you know what we find out.
On the CLARiiON side of things, I've written a Korn shell script that I run on the VIO servers; it produces a NAVICLI script to register all the WWPNs on the CLARiiON, including the LPM WWPNs. The Korn shell script also runs chnportlogin on our HMC to log all the ports into the fabric so they can be easily zoned. I hand the NAVICLI script off to our storage administrators.
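This is not the poster's actual script, but a minimal sketch of the generation step, assuming naviseccli-style initiator registration. The hostname, IP, SP, port, and failover-mode values are illustrative assumptions; verify them against your array's documentation before use:

```shell
#!/bin/ksh
# Sketch: given a WWNN:WWPN pair and host details, emit one naviseccli
# registration command for the CLARiiON. All values are examples.
emit_register_cmd() {
  hbauid=$1    # initiator ID as WWNN:WWPN
  host=$2      # AIX LPAR hostname to register under
  ip=$3        # LPAR IP address
  sp=$4        # SP name or address, e.g. spa
  port=$5      # SP front-end port number
  echo "naviseccli -h $sp storagegroup -setpath -o" \
       "-hbauid $hbauid -sp a -spport $port" \
       "-host $host -ip $ip -arraycommpath 1 -failovermode 1"
}

# Example: emit the registration line for one WWPN of an NPIV pair;
# repeat for the LPM WWPN so failover targets are pre-registered.
emit_register_cmd "c0:50:76:01:02:03:00:00:c0:50:76:01:02:03:00:01" \
                  aixlpar01 10.1.2.3 spa 0
```

In practice the WWPN pairs could be harvested on the VIO server (e.g. from the NPIV mappings) and the script's output handed to the storage team as a batch, which is the workflow described above.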
We've had no issues and no stability problems with NPIV, and we've used this method for existing as well as new LPARs.
We are using the internal drives to boot the physical servers, and the virtual LPARs boot from the SAN. This configuration seems to work fine, and we are leaving it this way.