Unsolved


1 Rookie

 • 

15 Posts


July 18th, 2013 11:00

VNX Oracle Linux 6 multipath.conf settings in latest July document not working?

Hi All,

As you know, logging a support ticket with EMC hits a brick wall when you ask about stuff like this; they handle break/fix only. So I am coming here for advice on a very simple issue. We have all the Linux Host Connectivity documents on hand. The latest two are the April 2013 (A34) and July 2013 (A35) versions. Both have the same information on pages 262-268 covering OEL6 multipath.conf settings for the VNX family storage. These look different from the ones that work on OEL5, which give us no problems. But these new ones DO NOT WORK on OEL6 and spit out syntax errors when you run the multipath commands.

I am hoping someone here who has successfully connected a VNX array to OEL6 will paste real settings that actually work. I tried to resolve this problem months ago with both RHEL6 and OEL6 and failed. We are now trying again on a newly built system and a VNX5500 with the latest FLARE code (32.x). The array itself is working great with every other host and host type except this OEL6 server.

Thanks in advance for your help. Please don't post any settings you have not tested or don't know to work, and please don't tell me to log a support ticket. I have attached an image of the settings and am also pasting them below from the EMC host guide.

SEE ATTACHMENTS PLEASE

#}

# Device attributed for EMC CLARiiON arrays and VNX storage systems ALUA
device {
    vendor "DGC"
    product "*"
    product_blacklist "LUNZ"
    path_grouping_policy group_by_prio
    getuid_callout "/lib/udev/scsi_id id --whitelisted
    --device=/dev/%n"
    path_selector "round-robin 0"
    features "1 queue_if_no_path"
    hardware_handler "1 alua"
    prio alua
    no_path_retry 60
    failback immediate
    rr_weight uniform
    rr_min_io 1000
}
}
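
For what it's worth, two things in that excerpt look suspect to me, though this is only a guess on my part: the getuid_callout value is printed wrapped across two lines, and it reads "scsi_id id" rather than "scsi_id". Since multipath expects that value as a single quoted string, I would have expected something more like the line below (untested, just the form I have seen used on EL6 elsewhere):

getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"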

2 Attachments

1 Rookie

 • 

15 Posts

July 18th, 2013 14:00

Here's what we get when running the settings documented in EMC's July 2013 guide.

As soon as I reload the configuration and restart the multipathd service, my storage devices fail:

[root@lnx2211 /]# multipath -l

ocr2 (36006016054e02c00f05d476b80e5e211) dm-15 DGC,RAID 5

size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=0 status=enabled

| |- 0:0:0:3 sdd 8:48   failed undef running

| `- 1:0:1:3 sdw 65:96  failed undef running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 0:0:1:3 sdj 8:144  failed undef running

  `- 1:0:0:3 sdq 65:0   failed undef running

ocr1 (36006016054e02c007035b04b80e5e211) dm-14 DGC,RAID 5

size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=0 status=enabled

| |- 0:0:0:1 sdb 8:16   failed undef running

| `- 1:0:1:1 sdu 65:64  failed undef running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 0:0:1:1 sdh 8:112  failed undef running

  `- 1:0:0:1 sdo 8:224  failed undef running

voting2 (36006016054e02c00c43b295a80e5e211) dm-13 DGC,RAID 5

size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=0 status=enabled

| |- 0:0:0:2 sdc 8:32   failed undef running

| `- 1:0:1:2 sdv 65:80  failed undef running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 0:0:1:2 sdi 8:128  failed undef running

  `- 1:0:0:2 sdp 8:240  failed undef running

voting1 (36006016054e02c0014171f516386e211) dm-12 DGC,RAID 5

size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=0 status=enabled

| |- 0:0:0:0 sda 8:0    failed undef running

| `- 1:0:1:0 sdt 65:48  failed undef running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 1:0:0:0 sdn 8:208  failed undef running

  `- 0:0:1:0 sdg 8:96   failed undef running

star2 (36006016054e02c008aabbb9180e5e211) dm-17 DGC,RAID 5

size=20G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=0 status=enabled

| |- 0:0:0:5 sdf 8:80   failed undef running

| `- 1:0:1:5 sdy 65:128 failed undef running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 0:0:1:5 sdl 8:176  failed undef running

  `- 1:0:0:5 sds 65:32  failed undef running

star1 (36006016054e02c008ae9cf8580e5e211) dm-16 DGC,RAID 5

size=20G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=0 status=enabled

| |- 0:0:0:4 sde 8:64   failed undef running

| `- 1:0:1:4 sdx 65:112 failed undef running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 0:0:1:4 sdk 8:160  failed undef running

  `- 1:0:0:4 sdr 65:16  failed undef running
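
For anyone who wants to compare against their own box: the configuration multipathd actually parsed can be dumped from its interactive shell (this assumes the stock EL6 device-mapper-multipath package), for example:

multipathd -k"show config" | grep -A 20 DGC

which at least shows whether edits to /etc/multipath.conf are really being picked up for the DGC devices.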

2 Intern

 • 

1.3K Posts

July 19th, 2013 05:00


Can you post the recommended part you are referring to?

1 Rookie

 • 

15 Posts

July 23rd, 2013 05:00

What are you asking for, please? Yes, the settings documented in both 2013 documents do not work on Red Hat 6 or Oracle Linux 6, and the result is also posted. An excerpt from the document is posted, and everything you might want to see is either in my two posts or in the attachments. I even extracted the specific PDF pages to make it easier. Please look through both of my posts and the attachments thoroughly, thanks. If you need anything else, please be VERY specific.

It would be very nice if someone who has gotten it working could cut and paste the text from a properly functioning multipath.conf file on RH6 or OL6.


9 Legend

 • 

20.4K Posts

August 3rd, 2013 20:00

Doug,

I am no dm-mpio expert, but I was building a box to test VNX snapshots and decided to give MPIO a shot. This seems to be working OK: I ran fdisk/pvcreate on the devices, created a volume group and logical volume, and mounted it (rough sketch of the LVM commands at the end of this post). I can restart the service and the devices stay online. My config:

VNX 5700 - Block OE 05.32.000.5.206

[root@localhost ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 6.4 (Santiago)

[root@localhost ~]# cat /etc/multipath.conf | grep -v "#"
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "DGC"
                product "*"
                prio "emc"
                path_grouping_policy group_by_prio
                features "1 queue_if_no_path"
                failback immediate
                hardware_handler "1 alua"
        }
}

[root@localhost ~]# multipath -ll

mpathc (360060160131c3200d263b55218f7e211) dm-3 DGC,VRAID

size=10G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| |- 1:0:0:1 sdc 8:32  active ready running

| |- 1:0:1:1 sde 8:64  active ready running

| |- 2:0:0:1 sdk 8:160 active ready running

| `- 2:0:1:1 sdm 8:192 active ready running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 1:0:2:1 sdg 8:96  active ready running

  |- 1:0:3:1 sdi 8:128 active ready running

  |- 2:0:2:1 sdo 8:224 active ready running

  `- 2:0:3:1 sdq 65:0  active ready running

mpathb (360060160131c320092a5a2b8ecf0e211) dm-2 DGC,VRAID

size=10G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| |- 1:0:0:0 sdb 8:16  active ready running

| |- 1:0:1:0 sdd 8:48  active ready running

| |- 2:0:0:0 sdj 8:144 active ready running

| `- 2:0:1:0 sdl 8:176 active ready running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 1:0:2:0 sdf 8:80  active ready running

  |- 1:0:3:0 sdh 8:112 active ready running

  |- 2:0:2:0 sdn 8:208 active ready running

  `- 2:0:3:0 sdp 8:240 active ready running

 

[root@localhost ~]# service multipathd restart

ok

Stopping multipathd daemon:                                [  OK  ]

Starting multipathd daemon:                                [  OK  ]

[root@localhost ~]# multipath -ll

mpathc (360060160131c3200d263b55218f7e211) dm-3 DGC,VRAID

size=10G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| |- 1:0:0:1 sdc 8:32  active ready running

| |- 1:0:1:1 sde 8:64  active ready running

| |- 2:0:0:1 sdk 8:160 active ready running

| `- 2:0:1:1 sdm 8:192 active ready running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 1:0:2:1 sdg 8:96  active ready running

  |- 1:0:3:1 sdi 8:128 active ready running

  |- 2:0:2:1 sdo 8:224 active ready running

  `- 2:0:3:1 sdq 65:0  active ready running

mpathb (360060160131c320092a5a2b8ecf0e211) dm-2 DGC,VRAID

size=10G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw

|-+- policy='round-robin 0' prio=1 status=active

| |- 1:0:0:0 sdb 8:16  active ready running

| |- 1:0:1:0 sdd 8:48  active ready running

| |- 2:0:0:0 sdj 8:144 active ready running

| `- 2:0:1:0 sdl 8:176 active ready running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 1:0:2:0 sdf 8:80  active ready running

  |- 1:0:3:0 sdh 8:112 active ready running

  |- 2:0:2:0 sdn 8:208 active ready running

  `- 2:0:3:0 sdp 8:240 active ready running

 

 

[root@localhost ~]# vgdisplay VG_VNX -v

    Using volume group(s) on command line

    Finding volume group "VG_VNX"

  --- Volume group ---

  VG Name               VG_VNX

  System ID

  Format                lvm2

  Metadata Areas        2

  Metadata Sequence No  2

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                1

  Open LV               1

  Max PV                0

  Cur PV                2

  Act PV                2

  VG Size               19.99 GiB

  PE Size               4.00 MiB

  Total PE              5118

  Alloc PE / Size       4864 / 19.00 GiB

  Free  PE / Size       254 / 1016.00 MiB

  VG UUID               L39aU9-KM1j-lzcE-aOqo-RAhI-mEJF-HqUV3n

  --- Logical volume ---

  LV Path                /dev/VG_VNX/vnx_lv

  LV Name                vnx_lv

  VG Name                VG_VNX

  LV UUID                hJ8Z0P-8lio-x8P1-ZA7j-4WQH-iDhy-ti50Gq

  LV Write Access        read/write

  LV Creation host, time localhost.localdomain, 2013-08-03 18:52:47 -0400

  LV Status              available

  # open                 1

  LV Size                19.00 GiB

  Current LE             4864

  Segments               2

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     256

  Block device           253:7

  --- Physical volumes ---

  PV Name               /dev/mapper/mpathbp1

  PV UUID               f2e820-lBy5-C2M0-gAJT-ulex-7Jvc-PJdonT

  PV Status             allocatable

  Total PE / Free PE    2559 / 0

  PV Name               /dev/mapper/mpathcp1

  PV UUID               AThNzW-MqYW-EyaF-327R-rmaK-Liuq-e663ZT

  PV Status             allocatable

  Total PE / Free PE    2559 / 254
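
For completeness, the LVM part was nothing special. Roughly what I ran is below; this is from memory, so treat it as a sketch (the filesystem type and mount point are just placeholders):

# partition each multipath device, then expose the partition mappings
fdisk /dev/mapper/mpathb      # one primary partition
fdisk /dev/mapper/mpathc
kpartx -a /dev/mapper/mpathb  # creates /dev/mapper/mpathbp1
kpartx -a /dev/mapper/mpathc  # creates /dev/mapper/mpathcp1

# standard LVM stack on top of the multipath partitions
pvcreate /dev/mapper/mpathbp1 /dev/mapper/mpathcp1
vgcreate VG_VNX /dev/mapper/mpathbp1 /dev/mapper/mpathcp1
lvcreate -n vnx_lv -L 19G VG_VNX
mkfs.ext4 /dev/VG_VNX/vnx_lv
mount /dev/VG_VNX/vnx_lv /mnt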

1 Message

February 7th, 2015 00:00

Hey DougStorage,

We have bought two VNX 5200 arrays, and we also have multipath problems running under Oracle Linux 6.6.

Do you have a working multipath.conf file?

Thanks,

Daniel

4 Operator

 • 

4.5K Posts

February 9th, 2015 09:00

The latest Oracle/Solaris Host Connectivity Guide (A47) is available here:

https://support.emc.com/docu5132_Host-Connectivity-Guide-for-Oracle-Solaris.pdf

You can also use the mydocuments.emc.com site to generate a host connectivity guide; that should have the latest information.

glen

3 Posts

March 4th, 2015 07:00

Here is the full working multipath.conf file I saved from a very recent setup I completed. I set up DM-Multipath on 3 Oracle Linux 6.5 servers. I don't have a screenshot of the multipath -ll command, but I fully verified that DM-Multipath was working by pulling physical connections in my UCS environment and watching the connections fail over on the OS side, with a loss of maybe one or two pings at most. Hope this helps.

multipath.conf.jpg
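
For anyone repeating that failover test, a simple way to watch it while pulling cables is something along these lines (the exact command here is only a suggestion, not from my notes):

watch -n 1 "multipath -ll | egrep 'DGC|failed|active'"

kept in one terminal, with a continuous ping to the server in another; the affected paths should flip to failed while the multipath device itself stays online.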
