Unsolved
1 Rookie
•
15 Posts
1
15399
July 18th, 2013 11:00
VNX Oracle Linux 6 multipath.conf settings in latest July document not working?
Hi All,
As you know, logging a support ticket with EMC hits a brick wall when you ask for support on stuff like this; they handle break/fix only. So I am coming here for advice on a very simple issue. We have all the Linux Host Connectivity documents on hand. The latest two are April 2013 (A34) and July 2013 (A35). Both have the same information on pages 262-268 covering OEL6 multipath.conf settings for the VNX family storage. These look different from the ones that work on version 5, which give us no problems. But these new ones DO NOT WORK on OEL6 and spit out syntax errors when you try to run the multipath commands.
I am hoping someone here who has successfully installed a VNX array on OEL6 will paste real settings that actually work. I tried to resolve this problem months ago with RHEL6 and OEL6 and failed. We are now trying again on a newly built system and a VNX5500 with the latest FLARE code, 32.x. The array itself is working great with every host and host type except for this OEL6 server.
Thanks for your help in advance. Please don't post any settings you have not tested or don't know to work, and please don't tell me to log a support ticket. I tried to upload an image of the settings and am also pasting them below from the EMC host guide.
SEE ATTACHMENTS PLEASE
# Device attributes for EMC CLARiiON arrays and VNX storage systems ALUA
devices {
    device {
        vendor "DGC"
        product "*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        getuid_callout "/lib/udev/scsi_id id --whitelisted
        --device=/dev/%n"
        path_selector "round-robin 0"
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        no_path_retry 60
        failback immediate
        rr_weight uniform
        rr_min_io 1000
    }
}
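One detail worth noting in the pasted text: the getuid_callout value is wrapped across two lines, so its opening quote never closes on the same line, and multipath.conf does not allow that. That alone would explain "syntax errors" from the multipath commands. A quick illustrative check for such wrapped quoted values (my own sketch, not an EMC or device-mapper tool):

```python
# Illustrative check: multipath.conf values must open and close their
# double quotes on a single line. A getuid_callout wrapped across two
# lines (as in the pasted guide text) leaves unbalanced quotes, which
# is one way the parser ends up reporting syntax errors.
# This is my own sketch, not part of any EMC or multipath tooling.

def unbalanced_quote_lines(conf_text):
    """Return 1-based line numbers whose double quotes don't pair up."""
    bad = []
    for num, line in enumerate(conf_text.splitlines(), start=1):
        stripped = line.split("#", 1)[0]  # ignore trailing comments
        if stripped.count('"') % 2 != 0:
            bad.append(num)
    return bad

# The wrapped form, as it appears in the pasted guide text:
wrapped = '''device {
    getuid_callout "/lib/udev/scsi_id id --whitelisted
    --device=/dev/%n"
}'''

# The same value collapsed onto one line:
single_line = '''device {
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
}'''

print(unbalanced_quote_lines(wrapped))      # → [2, 3]
print(unbalanced_quote_lines(single_line))  # → []
```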



DougStorage
1 Rookie
•
15 Posts
0
July 18th, 2013 14:00
Here's what we get when running EMC's July 2013 documented settings. As soon as I reload and restart the multipathd service, all of my storage devices fail:
[root@lnx2211 /]# multipath -l
ocr2 (36006016054e02c00f05d476b80e5e211) dm-15 DGC,RAID 5
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 0:0:0:3 sdd 8:48 failed undef running
| `- 1:0:1:3 sdw 65:96 failed undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 0:0:1:3 sdj 8:144 failed undef running
`- 1:0:0:3 sdq 65:0 failed undef running
ocr1 (36006016054e02c007035b04b80e5e211) dm-14 DGC,RAID 5
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 0:0:0:1 sdb 8:16 failed undef running
| `- 1:0:1:1 sdu 65:64 failed undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 0:0:1:1 sdh 8:112 failed undef running
`- 1:0:0:1 sdo 8:224 failed undef running
voting2 (36006016054e02c00c43b295a80e5e211) dm-13 DGC,RAID 5
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 0:0:0:2 sdc 8:32 failed undef running
| `- 1:0:1:2 sdv 65:80 failed undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 0:0:1:2 sdi 8:128 failed undef running
`- 1:0:0:2 sdp 8:240 failed undef running
voting1 (36006016054e02c0014171f516386e211) dm-12 DGC,RAID 5
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 0:0:0:0 sda 8:0 failed undef running
| `- 1:0:1:0 sdt 65:48 failed undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 1:0:0:0 sdn 8:208 failed undef running
`- 0:0:1:0 sdg 8:96 failed undef running
star2 (36006016054e02c008aabbb9180e5e211) dm-17 DGC,RAID 5
size=20G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 0:0:0:5 sdf 8:80 failed undef running
| `- 1:0:1:5 sdy 65:128 failed undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 0:0:1:5 sdl 8:176 failed undef running
`- 1:0:0:5 sds 65:32 failed undef running
star1 (36006016054e02c008ae9cf8580e5e211) dm-16 DGC,RAID 5
size=20G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=enabled
| |- 0:0:0:4 sde 8:64 failed undef running
| `- 1:0:1:4 sdx 65:112 failed undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 0:0:1:4 sdk 8:160 failed undef running
`- 1:0:0:4 sdr 65:16 failed undef running
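For reference, here is the shape of a device stanza that RHEL 6-era device-mapper-multipath generally accepts. This is an untested sketch rather than anything EMC-blessed: the getuid_callout is collapsed onto one line (the line wrap in the guide's text breaks the parser), and the doubled token in the guide's callout (`scsi_id id`) is dropped on the assumption it is a typo or PDF copy artifact:

```
devices {
    device {
        vendor "DGC"
        product "*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        # Must be a single line; the wrap in the guide is not valid syntax.
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector "round-robin 0"
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        no_path_retry 60
        failback immediate
        rr_weight uniform
        rr_min_io 1000
    }
}
```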
SKT2
2 Intern
•
1.3K Posts
0
July 19th, 2013 05:00
Can you post the recommended part you are referring to?
DougStorage
1 Rookie
•
15 Posts
0
July 23rd, 2013 05:00
What are you asking for, please? Yes, the settings documented in both 2013 documents do not work on Red Hat 6 or Oracle Linux 6, and the result is also posted. An example of the document is posted, and everything you might want to see is either in my two posts or the attachments. I even extracted the specific PDF pages to make it easier. Please look at both of my posts and the attachments thoroughly, thanks. If you need anything else, please be VERY specific.
It would be very nice if someone who has gotten it working could cut and paste the text from a properly functioning multipath.conf file on RH6 or OL6.
dynamox
9 Legend
•
20.4K Posts
0
August 3rd, 2013 20:00
Doug,
I am no dm-mpio expert, but while building a box to test VNX snapshots I decided to give mpio a shot. This seems to be working OK: I ran fdisk/pvcreate on the devices, created a volume group and logical volume, and mounted it. I can restart the service and the devices stay online. My config:
VNX 5700 - Block OE 05.32.000.5.206
Daniel_VNX
1 Message
0
February 7th, 2015 00:00
Hey DougStorage,
We have bought two VNX 5200 arrays, and we also have multipath problems running under Oracle Linux 6.6.
Do you have a working multipath.conf file?
Thanks,
Daniel
kelleg
4 Operator
•
4.5K Posts
0
February 9th, 2015 09:00
The latest Oracle/Solaris Host Connectivity Guide (A47) is available here:
https://support.emc.com/docu5132_Host-Connectivity-Guide-for-Oracle-Solaris.pdf
You can also use the mydocuments.emc.com site to create a Host guide - that should have the latest information.
glen
fvo
3 Posts
0
March 4th, 2015 07:00
Here is the full working multipath.conf file I saved from a very recent setup I completed. I set up DM-Multipath on 3 Oracle Linux 6.5 servers. I don't have a screenshot of the multipath -ll command, but I fully verified that DM-Multipath was working by pulling physical connections in my UCS environment and watching the connections fail over on the OS side with maybe a loss of one ping or two at most. Hope this helps.