
7 Posts


September 6th, 2010 05:00

NPIV / IBM VIO / Clariion ???

Hi

IBM P770 Server

     2 x VIO Servers, each with 2 x IBM 5735 HBAs

2 x Brocade 5300 Switches (2 fabrics) running firmware v6.3.0c

EMC Clariion CX3-80

Brocade DCFM v10.4.2

The 4 HBAs are connected to the 2 switches and their physical addresses appear on the fabrics as initiators.

A virtual wwpn (vwwpn) has been created on each HBA (using NPIV). These vwwpn addresses have been assigned to an LPAR.

The vwwpn addresses do appear on the fabrics (using portshow); however, by default DCFM shows them as storage (targets) rather than as hosts (initiators).

portWwn of device(s) connected:
c0:50:76:02:d5:f8:00:06    (this is the configured virtual address)
10:00:00:00:c9:b0:10:58    (this is the HBA physical address)
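For reference, this is roughly how I checked things on the switch side; a sketch only, and port 5 below is just a placeholder for the actual switch port the HBA is plugged into:

    portcfgshow            (the "NPIV capability" row should show ON for the port)
    portcfgnpivport 5 1    (enable NPIV on port 5 if it is OFF)
    portshow 5             (lists the physical WWPN plus any virtual WWPNs logged in)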

I changed the device types for these virtual addresses to be initiators (within DCFM).

A zone has been created on one of the switches and includes a single vwwpn address and a single port on the EMC storage array.
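Roughly, the zoning was done along these lines (a sketch; the zone and config names plus the array port WWPN are placeholders, not our real values):

    zonecreate "lpar1_cx380_spa0", "c0:50:76:02:d5:f8:00:06; 50:06:01:60:41:e0:12:34"
    cfgadd "fabric_a_cfg", "lpar1_cx380_spa0"
    cfgenable "fabric_a_cfg"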

At this point I am unable to see the vwwpn on the Clariion in Navisphere Connectivity Status. My hunch is that this has something to do with it being treated as a target rather than an initiator.

Can anyone shed any light please?

Thanks

4 Operator • 2.1K Posts

September 7th, 2010 11:00

I'm afraid I can't shed any light on the NPIV issues, but I can offer an alternative way of handling your storage which may be easier to manage overall. We looked at the options when initially implementing VIO and decided on something a bit simpler to configure.

Each VIO LPAR on the physical server has its own HBAs. Only the VIO LPARs actually see the SAN storage as such, so only the physical WWNs are ever used. The storage for the client LPARs is presented to the VIO LPARs, which handle the SAN connectivity and multipathing at that level. They then treat the presented devices/LUNs (we do this for CLARiiON and Symm) as local storage and pass them through to the client LPARs, which just use native MPIO for the multipathing (since the devices are presented through two VIO LPARs).
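As a rough sketch of the plumbing on one VIO LPAR (the hdisk/vhost/device names below are just examples, and the reserve attribute name can differ by multipathing driver):

    lsdev -type disk                                         (the presented CLARiiON/Symm LUNs appear as hdisks)
    chdev -dev hdisk4 -attr reserve_policy=no_reserve        (release the reserve so the second VIO LPAR can map the same LUN)
    mkvdev -vdev hdisk4 -vadapter vhost0 -dev lpar1_rootvg   (pass hdisk4 through to the client behind vhost0)
    lsmap -vadapter vhost0                                   (verify the mapping)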

Everyone gets what they need and we remove a layer of management difficulty. So far we have not run into any issues with this approach and we dodged the bullet on the NPIV configuration.

7 Posts

September 8th, 2010 00:00

Hi Allen

Couldn't agree with you more!

We have been using VIO servers for a number of years and traditionally have always configured them exactly in the way you describe. Nice and simple.

Just recently we purchased a shiny new IBM P770 and the reseller advised us that "everybody now uses NPIV and boots from SAN" so that is what we have been trying to do (for 4 weeks!).

I appreciate your answer; it confirms that not everybody uses NPIV.

Many thanks

Tony

4 Operator • 4.5K Posts

September 8th, 2010 09:00

Tony,

Have you had any luck getting boot from SAN working? I noticed another thread here that seemed to indicate that someone has this working.

https://community.emc.com/message/498151#498151

glen

7 Posts

September 9th, 2010 00:00

We have abandoned NPIV for now and gone back to the old style physical HBA addresses.

Boot from SAN works a treat!

Thanks

T

5 Posts

September 9th, 2010 02:00

Too bad, you were so close to the end!

You will not see the NPIV WWPN in Navisphere with a CX3-80.

You must manually create the initiator record in Navisphere and specify the NPIV address by hand:

c0:50:76:02:d5:f8:00:06:c0:50:76:02:d5:f8:00:06    ("npiv_address":"npiv_address", i.e. the WWNN:WWPN format of a Clariion initiator name)

The other parameters remain as usual.
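If you prefer the CLI, the same manual registration can be done with naviseccli. This is from memory, so treat the exact flags as an assumption and check them against your FLARE/Navisphere release; the host name and IPs are placeholders:

    naviseccli -h <sp_a_ip> storagegroup -setpath -o -hbauid c0:50:76:02:d5:f8:00:06:c0:50:76:02:d5:f8:00:06 -sp a -spport 0 -failovermode 3 -arraycommpath 1 -host <lpar_name> -ip <lpar_ip>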

Good Luck !

4 Operator • 4.5K Posts

September 9th, 2010 07:00

Is that NPIV WWN the address that you configure for boot from SAN? Did you get boot from SAN working with NPIV? Did you also use this with VIO and LPARs?

You can probably tell that I'm not AIX aware - just trying to get an idea of the amount of work involved in getting this configured and working correctly.

glen

4 Operator • 4.5K Posts

September 9th, 2010 09:00

Andre,

Thanks for the information.

Is VIO involved in your setup - i.e., is the VIO sitting between the LPAR and the array?

glen

5 Posts

September 9th, 2010 09:00

Yes, I have just set up a test machine and validated the following:

Boot from SAN, with PowerPath 5.3 SP1

Live Partition Mobility with that LPAR.

We have a 750 with the same 5735 HBAs, an old McData DS4700, and a CX3-20.

I was obliged to create the fabric zoning on the switch manually and to register the HBAs manually on the CX3-20, as the NPIV WWPNs were not visible on our switches and CX.

To prevent problems, use a single path to the system LUN for the AIX installation, i.e. a single SAN attachment and only one zone to the boot LUN's default SP.

I will send you the steps later, as I must leave the office now...

andre  

5 Posts

September 10th, 2010 02:00

Sure, they are involved for Network and FC connectivity.

The NPIV creation process needs the following steps:

1- Create a virtual server Fibre Channel adapter on both VIOS.

2- Create a virtual client Fibre Channel adapter on the LPAR (client).

3- Map the server virtual FC to the virtual FC on the LPAR (client), on only one VIOS:

    lsmap -all -npiv                        (list the VFC host adapters and their current mappings)

    lsdev -type adapter                     (list adapters before the change)

    cfgmgr                                  (to detect new VFC adapters)

    lsdev -type adapter                     (the new vfchost adapters should now appear)

    lsmap -all -npiv

    vfcmap -vadapter vfchost2 -fcp fcs0     (map vfchost2 to physical port fcs0)

    vfcmap -vadapter vfchost3 -fcp fcs1     (map vfchost3 to physical port fcs1)

    lsmap -all -npiv                        (verify the mappings)

4- Start the LPAR in SMS and scan devices; the NPIV WWPNs can then be seen in the LPAR's VFC properties (see also the sketch after this list).

    You will see 2 WWPNs for each VFC:

          the first one is for normal operation

          the second one is used only at the end of LPM (Live Partition Mobility)

          Collect those WWPNs in a file.

5- Then look in your FC switch logs; you will see events specifying the ports and WWNs.

      Create a single zone on your FC switch to the boot LUN's default SP on your CX3.

6- Using Connectivity Status in Navisphere, use New to register manually with the following fields:

    Initiator Name = c0:50:76:02:d5:f8:00:06:c0:50:76:02:d5:f8:00:06   (as in my previous post)

    SP-Port: same as your zoning (in step 5)

    Type: CLARiiON Open

    I'm using Failover Mode 3

    Specify the host and IP as required

7- Create a Storage Group and add the host (from step 6) and the boot LUN.

8- Restart your LPAR and begin the AIX install (I'm using a NIM server)...

9- After the OS install, the additional MPIO/PowerPath setup, and the Navisphere Agent install, you will be able to map, zone, and connect the remaining paths to the storage group.
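As mentioned in step 4, once AIX is up you can also read the virtual WWPNs from inside the LPAR itself; a small sketch, assuming fcs0 is the first virtual FC adapter:

    lsdev -Cc adapter | grep fcs                 (list the FC adapters the client LPAR sees)
    lscfg -vpl fcs0 | grep "Network Address"     (the Network Address field is the active WWPN)
    fcstat fcs0 | grep "World Wide Port Name"    (another way to read the same WWPN)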

Hope this will be useful!

andre

5 Posts

September 10th, 2010 03:00

Erratum,

There was an error in my previous post.

For step 3, the idea is to map the server VFCs to the physical HBA ports, not the server VFC to the LPAR VFC.

You may choose to map just one port at this time.

To prevent errors, I have used rules for numbering (illustrated after the list below):

All LPARs:

Virtual adapter numbers 1x connect to VIOS1

Virtual adapter numbers 2x connect to VIOS2

VIOS:

Virtual adapter numbers 2x refer to VSCSI adapters

Virtual adapter numbers 4x refer to VFC adapters
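To illustrate (the machine type/serial and names below are made up, illustrative output only), the slot number is the Cxx part of the location code, so with these rules a VFC host adapter in slot 41 of VIOS1 shows up in lsmap like this:

    $ lsmap -all -npiv
    Name        Physloc                          ClntID ClntName  ClntOS
    ----------- -------------------------------- ------ --------- -------
    vfchost0    U8233.E8B.10ABCDE-V1-C41              3 lpar1     AIX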

4 Operator • 4.5K Posts

September 10th, 2010 08:00

Andre - thanks for the detailed post - much appreciated.

glen

4 Operator • 2.1K Posts

September 15th, 2010 13:00

I understand the overall advantages of NPIV in the right situation, but in this case I'm not sure I can really see the value outweighing the additional effort.

Can anyone put in simple terms how NPIV is better than the physical way of handling SAN storage for VIO?

5 Posts

September 15th, 2010 23:00

At the SAN level, HBA WWPNs are used to address hosts.

When you are using virtualisation technologies such as LPARs on Power (or VMware, or others), all traffic from all LPARs (or VMs) uses the same HBAs.

So you cannot differentiate the traffic of a particular LPAR, as all traffic is related to the VIOS's HBAs, not the LPAR's HBAs.

This is not a problem until you want to use technologies such as:

- SAN QoS

- some CDP products

- VSANs

As far as I know, the only 2 ways to have a unique WWPN for an LPAR are:

1- Use dedicated HBAs for each LPAR

2- Use NPIV (http://en.wikipedia.org/wiki/NPIV)

As you can see, it's just a question of money.

andre
