Map powerpath devices back to vplex devices

February 28th, 2012 03:00

Is it possible to map the Logical device ID shown in PowerPath back to a VPLEX device?

For example:

[root@hl-testdb-slo-01 ~]# powermt display dev=emcpowerq

Pseudo name=emcpowerq

Invista ID=CKM00112500392

Logical device ID=6000144000000010E049C870716CB4D0

state=alive; policy=ADaptive; priority=0; queued-IOs=0;

VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll

Name                                      VPD83 ID                                  Capacity  Use        Vendor  IO      Type         Thin

----------------------------------------  ----------------------------------------  --------  ---------  ------  Status  -----------  Rebuild

----------------------------------------  ----------------------------------------  --------  ---------  ------  ------  -----------  -------

Clar0364_lun_502                          VPD83T3:60060160246021009480241ee350e111  560M      used       DGC     alive   normal       false

[snip]

27 Posts

February 28th, 2012 07:00

Hi Scott,

The WWN seen in PowerPath actually maps to the VPLEX virtual volume (the one presented in the storage view for that host). From there you can use the GUI to map back to the storage volume, or use the CLI command "drill-down" to find out what the virtual volume (VV) is composed of:

     drill-down -o <virtual-volume>

             will list the components of the specified VV

     drill-down -v <storage-view>

             will list the components of the storage view and all the components of each VV in the storage view
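
For example (a sketch only; the volume name below is hypothetical, and depending on your current context you may need the full path, e.g. /clusters/cluster-1/virtual-volumes/example_vol_1):

     VPlexcli:/> drill-down -o example_vol_1

This lists the device, extents, and storage volumes that the VV is built from.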

4 Posts

February 28th, 2012 05:00

Thanks for the reply, Ankur.

However, in our environment the Logical device ID does not map to the back-end CLARiiON LUN. What you are describing is what I was expecting, but it is not what we are seeing.

286 Posts

February 28th, 2012 05:00

Yes, that number 6000144000000010E049C870716CB4D0 is actually the WWN of the back-end LUN. I'll give you an example in the GUI and Unisphere. Below is a storage volume similar to what you have; notice the digits after the VPD83T3.

[screenshot: storage volume in the VPLEX GUI, showing the digits after VPD83T3]

If I look at the VNX storage group that I created for VPLEX:

[screenshot: VNX storage group in Unisphere, showing the same ID]

One can see the exact same ID.

Let me know if this is what you are looking for, thanks

4 Posts

February 28th, 2012 07:00

Andrew,

In the GUI, under the components of the selected storage view, you have to tick the VPD ID checkbox, and then it shows the WWNs.

Thanks for pointing me in the right direction!


Scott

February 28th, 2012 08:00

That’s how I do it. Very quick, and I have never had an issue with mapping back accurately.

Jeff Wade

7 Posts

May 23rd, 2012 09:00

Is it possible to get the VPD ID for a virtual volume within the VPLEX CLI?

It is viewable in the GUI via the VPD ID checkbox, but I was curious whether we can get that info from the VPLEX CLI.

7 Posts

May 26th, 2012 06:00

Found the command to see the VPD ID for virtual volumes:

VPlexcli:/> export storage-view map SV_nj1pmer07                                                                             

VPD83T3:6000144000000010e004539f294af480 vvol_nj1pvnx1_LUN2_BLK01_nj1pmer07

VPD83T3:6000144000000010e004539f294af4be vvol_cx4_LUN139_RG17_nj1pmer07

VPD83T3:6000144000000010e004539f294af4bc vvol_cx4_LUN138_RG39_nj1pmer07
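
To tie this back to the host side, one rough approach (a sketch only; "storage_view_map.txt" is a hypothetical file holding the output of "export storage-view map" for the host's view): take the Logical device ID from powermt, lower-case it, and grep the map output for it.

     powermt display dev=emcpowerq | awk -F= '/Logical device ID/ {print tolower($2)}'

     grep -i 6000144000000010e049c870716cb4d0 storage_view_map.txt

The first command prints the WWN in lower case (using the device from the original post as an example), and the second pulls out the matching VPD83T3 line and virtual-volume name.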

7 Posts

May 31st, 2012 13:00

Similarly, this also works:

VPlexcli:/clusters/cluster-1/exports/storage-views> ll

Name          Operational  initiator-ports  virtual-volumes                                                                        port name, enabled, export status

------------  Status       ---------------  -------------------------------------------------------------------------------------  ----------------------------------

------------  -----------  ---------------  -------------------------------------------------------------------------------------  ----------------------------------

SV_nj1pmer07  ok           nj1pmer07_hba1,  (0,vvol_nj1pvnx1_LUN2_BLK01_nj1pmer07,VPD83T3:6000144000000010e004539f294af480,100G),  P0000000046E07DDB-A0-FC01,true,ok,

                           nj1pmer07_hba2   (1,vvol_cx4_LUN139_RG17_nj1pmer07,VPD83T3:6000144000000010e004539f294af4be,100G),      P0000000046E07F06-A0-FC00,true,ok,

                                            (2,vvol_cx4_LUN138_RG39_nj1pmer07,VPD83T3:6000144000000010e004539f294af4bc,100G)       P0000000046F07DDB-B0-FC00,true,ok,

                                                                                                                                   P0000000046F07F06-B0-FC01,true,ok

SV_nj1pmer99  ok           nj1pmer99_hba1,  (0,vvol_nj1pvnx1_LUN3_BLK01_nj1pmer99,VPD83T3:6000144000000010e004539f294af4a1,200G)   P0000000046E07DDB-A0-FC01,true,ok,

                           nj1pmer99_hba2                                                                                          P0000000046E07F06-A0-FC00,true,ok,

                                                                                                                                   P0000000046F07DDB-B0-FC00,true,ok,

                                                                                                                                   P0000000046F07F06-B0-FC01,true,ok

4 Posts

June 4th, 2012 14:00

The ll command will list the details; however, if a storage view has more than 7 LUNs, ll shows only the first 7 devices followed by a count of the rest. The command below lists all the volumes, with the LUN position, virtual volume name, VPD ID, and size of each device.

VPlexcli:/clusters/cluster-1/exports/storage-views> ls -f njcAVLPDB01_SV

/clusters/cluster-1/exports/storage-views/njcAVLPDB01_SV:
Name                      Value
------------------------  -------------------------------------------------------------------------------
controller-tag            -
initiators                [njcAVLPDB01_PCI20_27C8_SAN_B, njcAVLPDB01_PCI4_2948_SAN_A]
operational-status        ok
port-name-enabled-status  [P000000003CA00E13-A0-FC01,true,ok, P000000003CA00E8D-A1-FC00,true,ok,
                          P000000003CB00E13-B0-FC00,true,ok, P000000003CB00E8D-B1-FC01,true,ok]
ports                     [P000000003CA00E13-A0-FC01, P000000003CA00E8D-A1-FC00,
                          P000000003CB00E13-B0-FC00, P000000003CB00E8D-B1-FC01]
virtual-volumes           [(0,device_Symm3111_0ACD_1_vol,VPD83T3:6000144000000010a00e132a968b42cb,149G),
                          (1,device_Symm3111_0ACF_1_vol,VPD83T3:6000144000000010a00e132a968b410c,248G),
                          (2,device_Symm3111_0AE1_1_vol,VPD83T3:6000144000000010a00e132a968b4b03,49.7G),
                          (3,device_Symm3111_0AE2_1_vol,VPD83T3:6000144000000010a00e132a968b4b05,49.7G),
                          (4,device_Symm3111_0AE3_1_vol,VPD83T3:6000144000000010a00e132a968b4b07,49.7G),
                          (5,device_Symm3111_0AE4_1_vol,VPD83T3:6000144000000010a00e132a968b4b09,49.7G),

61 Posts

June 14th, 2013 01:00

Hi,

Is there a way to find the VV name from the VPD ID shown in PowerPath?

I have removed the VV from the storage view, so it no longer shows up there.

drill-down will accept the name of the VV, but not the VPD ID.

Right now I have only the VPD ID of a device which is not part of any storage view.

Please help me with this.

89 Posts

January 24th, 2017 09:00

Alternatively, for a virtual-volume that isn't part of a view (yet), you can use the rather cryptic searching capabilities of the "ls" command. For example, the command below searches the "virtual-volumes" context of cluster-1 for a match on the volume "VPD83T3:6000144000000010f001ba1ae27bf219".

VPlexcli:/clusters/cluster-1/virtual-volumes> ls /clusters/cluster-1/virtual-volumes/$d\=* where $d::vpd-id \== VPD83T3:6000144000000010f001ba1ae27bf219

/clusters/cluster-1/virtual-volumes/sb1_lr0_0000_vol:

Name                        Value

--------------------------  ----------------------------------------

block-count                 21495808

block-size                  4K

cache-mode                  synchronous

capacity                    82G

consistency-group           -

expandable                  true

expandable-capacity         0B

expansion-method            storage-volume

expansion-status            -

health-indications          []

health-state                ok

locality                    local

operational-status          ok

recoverpoint-protection-at  []

recoverpoint-usage          -

scsi-release-delay          0

service-status              running

storage-array-family        symmetrix

storage-tier                -

supporting-device           sb1_lr0_0000

system-id                   sb1_lr0_0000_vol

thin-capable                false

thin-enabled                unavailable

volume-type                 virtual-volume

vpd-id                      VPD83T3:6000144000000010f001ba1ae27bf219

It will print any matches on the string. If it can't find the volume, the output looks like this (I changed one digit of the ID):

VPlexcli:/clusters/cluster-1/virtual-volumes> ls /clusters/cluster-1/virtual-volumes/$d\=* where $d::vpd-id \== VPD83T3:6000144000000010f001ba1ae27bf210

ls:  No context found for '/clusters/cluster-1/virtual-volumes/$d=* where $d::vpd-id \== VPD83T3:6000144000000010f001ba1ae27bf210'
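
One thing to watch when searching by ID (based on the outputs in this thread): PowerPath shows the WWN in upper case and without the VPD83T3: prefix (Logical device ID=6000144000000010E049C870716CB4D0 in the original post), while VPLEX stores the vpd-id as VPD83T3: followed by the same digits in lower case. The comparison appears to be a literal string match, so lower-case the PowerPath value and prepend VPD83T3: before plugging it into the search above, e.g. (using the ID from the original post; substitute your own):

VPlexcli:/clusters/cluster-1/virtual-volumes> ls /clusters/cluster-1/virtual-volumes/$d\=* where $d::vpd-id \== VPD83T3:6000144000000010e049c870716cb4d0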

Hope that helps,

Gary

89 Posts

January 24th, 2017 09:00

It appears that the command "export storage-view find -v <volume>" will provide what you need.

Example.  Here's my storage-view:

/clusters/cluster-1/exports/storage-views/cluster-1-sb1-view-0:

Name                      Value

------------------------  ------------------------------------------------------------------

caw-enabled               true

controller-tag            -

initiators                [perf-dmbuj001-port0, perf-dmbuj001-port1, perf-dmbuj001-port2,

                          perf-dmbuj001-port3]

operational-status        ok

port-name-enabled-status  [P0000000043E001BA-A0-FC00,true,ok,

                          P0000000043E001BA-A0-FC01,true,ok,

                          P0000000043F001BA-B0-FC00,true,ok,

                          P0000000043F001BA-B0-FC01,true,ok]

ports                     [P0000000043E001BA-A0-FC00, P0000000043E001BA-A0-FC01,

                          P0000000043F001BA-B0-FC00, P0000000043F001BA-B0-FC01]

virtual-volumes           (0,sb1_lr0_0000_vol,VPD83T3:6000144000000010f001ba1ae27bf219,82G),

                          (1,sb1_lr0_0001_vol,VPD83T3:6000144000000010f001ba1ae27bf21a,82G),

                          (2,sb1_lr0_0002_vol,VPD83T3:6000144000000010f001ba1ae27bf21b,82G),

                          (3,sb1_lr0_0003_vol,VPD83T3:6000144000000010f001ba1ae27bf21c,82G),

                          (4,sb1_lr0_0004_vol,VPD83T3:6000144000000010f001ba1ae27bf21d,82G),

                          (5,sb1_lr0_0005_vol,VPD83T3:6000144000000010f001ba1ae27bf21e,82G),

                          (6,sb1_lr0_0006_vol,VPD83T3:6000144000000010f001ba1ae27bf21f,82G),

                          (7,sb1_lr0_0007_vol,VPD83T3:6000144000000010f001ba1ae27bf220,82G),

                          (8,sb1_lr0_0008_vol,VPD83T3:6000144000000010f001ba1ae27bf221,82G),

                          (9,sb1_lr0_0009_vol,VPD83T3:6000144000000010f001ba1ae27bf222,82G),

                          ... (50 total)

write-same-16-enabled     true

xcopy-enabled             true

This gives me the whole mapping:

VPlexcli:/> export storage-view map cluster-1-sb1-view-0

VPD83T3:6000144000000010f001ba1ae27bf219 sb1_lr0_0000_vol

VPD83T3:6000144000000010f001ba1ae27bf21a sb1_lr0_0001_vol

VPD83T3:6000144000000010f001ba1ae27bf21b sb1_lr0_0002_vol

VPD83T3:6000144000000010f001ba1ae27bf21c sb1_lr0_0003_vol

VPD83T3:6000144000000010f001ba1ae27bf21d sb1_lr0_0004_vol

VPD83T3:6000144000000010f001ba1ae27bf21e sb1_lr0_0005_vol

VPD83T3:6000144000000010f001ba1ae27bf21f sb1_lr0_0006_vol

VPD83T3:6000144000000010f001ba1ae27bf220 sb1_lr0_0007_vol

VPD83T3:6000144000000010f001ba1ae27bf221 sb1_lr0_0008_vol

VPD83T3:6000144000000010f001ba1ae27bf222 sb1_lr0_0009_vol

VPD83T3:6000144000000010f001ba1ae27bf223 sb1_lr0_0010_vol

VPD83T3:6000144000000010f001ba1ae27bf224 sb1_lr0_0011_vol

VPD83T3:6000144000000010f001ba1ae27bf225 sb1_lr0_0012_vol

VPD83T3:6000144000000010f001ba1ae27bf226 sb1_lr0_0013_vol

VPD83T3:6000144000000010f001ba1ae27bf227 sb1_lr0_0014_vol

VPD83T3:6000144000000010f001ba1ae27bf228 sb1_lr0_0015_vol

VPD83T3:6000144000000010f001ba1ae27bf229 sb1_lr0_0016_vol

VPD83T3:6000144000000010f001ba1ae27bf22a sb1_lr0_0017_vol

VPD83T3:6000144000000010f001ba1ae27bf22b sb1_lr0_0018_vol

VPD83T3:6000144000000010f001ba1ae27bf22c sb1_lr0_0019_vol

VPD83T3:6000144000000010f001ba1ae27bf22d sb1_lr0_0020_vol

VPD83T3:6000144000000010f001ba1ae27bf22e sb1_lr0_0021_vol

VPD83T3:6000144000000010f001ba1ae27bf22f sb1_lr0_0022_vol

VPD83T3:6000144000000010f001ba1ae27bf230 sb1_lr0_0023_vol

VPD83T3:6000144000000010f001ba1ae27bf231 sb1_lr0_0024_vol

VPD83T3:6000144000000010f001ba1ae27bf232 sb1_lr0_0025_vol

VPD83T3:6000144000000010f001ba1ae27bf233 sb1_lr0_0026_vol

VPD83T3:6000144000000010f001ba1ae27bf234 sb1_lr0_0027_vol

VPD83T3:6000144000000010f001ba1ae27bf235 sb1_lr0_0028_vol

VPD83T3:6000144000000010f001ba1ae27bf236 sb1_lr0_0029_vol

VPD83T3:6000144000000010f001ba1ae27bf237 sb1_lr0_0030_vol

VPD83T3:6000144000000010f001ba1ae27bf238 sb1_lr0_0031_vol

VPD83T3:6000144000000010f001ba1ae27bf239 sb1_lr0_0032_vol

VPD83T3:6000144000000010f001ba1ae27bf23a sb1_lr0_0033_vol

VPD83T3:6000144000000010f001ba1ae27bf23b sb1_lr0_0034_vol

VPD83T3:6000144000000010f001ba1ae27bf23c sb1_lr0_0035_vol

VPD83T3:6000144000000010f001ba1ae27bf23d sb1_lr0_0036_vol

VPD83T3:6000144000000010f001ba1ae27bf23e sb1_lr0_0037_vol

VPD83T3:6000144000000010f001ba1ae27bf23f sb1_lr0_0038_vol

VPD83T3:6000144000000010f001ba1ae27bf240 sb1_lr0_0039_vol

VPD83T3:6000144000000010f001ba1ae27bf241 sb1_lr0_0040_vol

VPD83T3:6000144000000010f001ba1ae27bf242 sb1_lr0_0041_vol

VPD83T3:6000144000000010f001ba1ae27bf243 sb1_lr0_0042_vol

VPD83T3:6000144000000010f001ba1ae27bf244 sb1_lr0_0043_vol

VPD83T3:6000144000000010f001ba1ae27bf245 sb1_lr0_0044_vol

VPD83T3:6000144000000010f001ba1ae27bf246 sb1_lr0_0045_vol

VPD83T3:6000144000000010f001ba1ae27bf247 sb1_lr0_0046_vol

VPD83T3:6000144000000010f001ba1ae27bf248 sb1_lr0_0047_vol

VPD83T3:6000144000000010f001ba1ae27bf249 sb1_lr0_0048_vol

VPD83T3:6000144000000010f001ba1ae27bf24a sb1_lr0_0049_vol

VPlexcli:/> export storage-view find --help

synopsis: find [<options>]

Find shows only the views satisfying your criteria. This is mostly useful for mega systems with many views and many exported volumes.

options (* = required):

  -h | --help

          Displays the usage for this command.

  --verbose

          Provides more output during command execution.  This may not have any effect for some commands.

  -v | --volume=

          Find the views exporting a given volume, by name, VPD83 identifier, or a name pattern with wildcards.

  -l | --lun=

          Find the views exporting given LUN number.

  -i | --initiator=

          Find the views including a given initiator. May contain wildcards.

  -f | --free-lun

          Find the next free LUN number for all views.

  -c | --cluster=

          Cluster to search for views

Note that I do need to run it from a context that resolves to the right cluster:

VPlexcli:/> export storage-view find -v VPD83T3:6000144000000010f001ba1ae27bf219

export storage-view find:  Evaluation of < > failed.

cause:                     Command execution failed.

cause:                     Current context does not resolve to a cluster.

CD into the cluster I want to search on:

VPlexcli:/> cd /clusters/cluster-1/

VPlexcli:/clusters/cluster-1> export storage-view find -v VPD83T3:6000144000000010f001ba1ae27bf219

Views exporting volume VPD83T3:6000144000000010f001ba1ae27bf219:

        View cluster-1-sb1-view-0 exports (0,sb1_lr0_0000_vol,VPD83T3:6000144000000010f001ba1ae27bf219,82G).

Then from there, if I wanted additional info, I could use "show-use-hierarchy". Note: you have to provide the context for it to search on:

VPlexcli:/clusters/cluster-1> show-use-hierarchy sb1_lr0_0000_vol

show-use-hierarchy:  Evaluation of < > failed.

cause:               Could not find appropriate contexts matching '[sb1_lr0_0000_vol]'.

Prepend with the path to the volume:

VPlexcli:/clusters/cluster-1> show-use-hierarchy virtual-volumes/sb1_lr0_0000_vol

storage-view: cluster-1-sb1-view-0 (cluster-1)

  virtual-volume: sb1_lr0_0000_vol (82G, local @ cluster-1, running)

    local-device: sb1_lr0_0000 (82G, raid-0, cluster-1)

      extent: extent_sb1_0000_1 (82G)

        storage-volume: sb1_0000 (82G)

          logical-unit: VPD83T3:600110d000bcb800011100005e5cf033

            storage-array: EMC-SYMMETRIX-0

This tells me a bit more about the volume: which device, extent, and storage-volume it is built on, the array LUN's VPD ID, and which array the virtual-volume is part of.

Gary
