Unsolved
43 Posts
iSCSI portal duplicate
Hello forum,
I can successfully discover and connect to a volume provided by an EqualLogic PS6110 from a CentOS 7 client, but every time I log in to it, the client gets connected twice:
first to the target by its IP, and
second to the same target, using its FQDN.
Rescanning session [sid: 5, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test, portal: 80.51.3.2,3260]
Rescanning session [sid: 6, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-84202e559fd-test, portal: host.sld.tld,3260]
Each of the them connects to a separate drive:
device                    fs_type      label  mount point    UUID
-----------------------------------------------------------------------------------------------
/dev/mapper/centos-root   xfs                 /              591d911d-886c-43bc-a8d1-8ed4befa1dcd
/dev/sdb1                 xfs                 (not mounted)  5b05c9f6-1990-44df-91b5-5cee70522282
/dev/sdb2                 LVM2_member         (not mounted)  uk1teQ-9i0z-IxyZ-u6Tv-jFmC-EZny-7clEY0
/dev/sdc1                 xfs                 (not mounted)  5b05c9f6-1990-44df-91b5-5cee70522282
/dev/sdc2                 LVM2_member         (not mounted)  uk1teQ-9i0z-IxyZ-u6Tv-jFmC-EZny-7clEY0
Is that o.k./healthy/sensible, or should I be concerned?
Personally I don't like this config, as it looks like the same content is stored on 2 different disks.
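For reference, this is how the duplicate login can be made visible with the stock open-iscsi tooling on CentOS 7 (a sketch; the portal addresses are the ones from the log above):

```shell
# Show all active sessions; two sessions for the same target IQN
# (one per portal record) indicate the duplicate login
iscsiadm -m session

# Show the node records the initiator knows about; here both an
# IP-based and an FQDN-based portal record appear for the same target
iscsiadm -m node
```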
Best
dwilliam62
1 Rookie • 1.5K Posts
October 2nd, 2018 06:00
Hello,
It sounds like you might not have multipathing configured correctly.
This PDF will guide you through the process of configuring iSCSI and MPIO with EQL for RHEL / OEL which works the same as CentOS.
http://downloads.dell.com/solutions/storage-solution-resources/%283199-CD-L%29RHEL-PSseries-Configuration.pdf
You do NOT want to mount both /dev/sdXX devices at the same time. Corruption will occur if you do.
You can do a quick check of multipathing with: # multipath -ll
If it's working correctly, you will see a /dev/mapper/mpathX device with the two /dev/sdX devices underneath it.
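A sketch of that check (the mount point name is a hypothetical example):

```shell
# List the multipath topology; a healthy setup shows one
# /dev/mapper/mpathX device per EQL volume, with both
# /dev/sdX paths grouped beneath it
multipath -ll

# Mount only the multipath device, never a raw /dev/sdX path
# (/mnt/eqlvol is a hypothetical mount point)
mount /dev/mapper/mpatha /mnt/eqlvol
```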
Regards,
Don
HifDelCo
43 Posts
October 4th, 2018 05:00
Hello,
thanks a lot for your fast help.
That sounds very promising and perfectly reasonable. The deployment is a test case in an effort to explore the fundamentals first, before dealing with the advanced stuff I considered multipathing to be (KVM newbie here). Thus I skipped that section entirely for "simplicity".
The reason I noticed those obscure multiple connections in the first place was... filesystem corruption ;-)
I'll return and tell how it worked for me.
Thanks again and
best
F
dwilliam62
1 Rookie • 1.5K Posts
October 4th, 2018 10:00
Hello,
You are very welcome. If you follow that guide, you should have no more issues, and better performance once you layer KVM on top.
Good luck.
Regards,
Don
HifDelCo
43 Posts
October 5th, 2018 06:00
Hello Don,
that guide is great!
I have now configured multipathing strictly according to the "DELL RHEL config guide for native multipathing", step by step, and as far as I can see the multipathing config is o.k., as multiple paths are established per volume:
I left the config at the guide's recommendations, except for blacklisting the local drive:
But those paths still map to different drives...
What started all this in the first place still holds: on discovery and login I always see 2 portals, at least that's how I understand it:
I still wonder why the same portal gets logged in to via IP and FQDN separately. Maybe the initiator regards both targets as different because they have different names?
I double-checked the EQL config, looking for options that control how the EQL announces its portal (preferably either IP or FQDN, but not both), but apart from the volume and access-control settings (I tried both basic access points and ACLs), everything happens in the group configuration, if I see this right.
I also checked the RHEL_7_DM_Multipath_Config-Admin guide for clues, but could not find any hint...
Is there possibly something I fundamentally misunderstand?
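For completeness, the unwanted FQDN-based record could presumably also be dropped by hand; a sketch, assuming standard iscsiadm syntax (IQN and portal taken from the log earlier in the thread):

```shell
# Log out of the FQDN-based session and delete its node record,
# so only the IP-based portal remains for this target
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test \
    -p host.sld.tld:3260 --logout
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test \
    -p host.sld.tld:3260 --op=delete
```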
Any hint is highly appreciated!
Best,
F
dwilliam62
1 Rookie • 1.5K Posts
October 5th, 2018 08:00
Hello,
I think you have missed some steps in the process.
What determines the connection from each NIC to the storage are the interface (IFACE) files.
This way you tell Linux to use ETH ports X and Y for iSCSI traffic. You have to log in to the EQL storage on both interfaces in order to create the device files needed for MPIO. E.g. /dev/sdc and /dev/sdd get combined to form /dev/mapper/mpatha.
That way there are now two paths to that one device. If one path fails, the other is still available.
So you only ever want to mount the /dev/mapper device.
/var/lib/iscsi/ifaces is the directory where the interface files are stored.
cat eql.eth2
# BEGIN RECORD 2.0-872.16.el5
iface.iscsi_ifacename = eth2
iface.net_ifacename = eth2
iface.transport_name = tcp
iface.vlan_id = 0
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
# END RECORD
This is an older RHEL OS but the principle is the same.
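On CentOS 7 such a record can also be created with iscsiadm instead of editing the file by hand; a sketch (the iface and NIC names are examples):

```shell
# Create an iface record for each physical NIC used for iSCSI,
# and bind it to the network interface
iscsiadm -m iface -I eql.eth2 --op=new
iscsiadm -m iface -I eql.eth2 --op=update -n iface.net_ifacename -v eth2

# Verify the resulting record
iscsiadm -m iface -I eql.eth2
```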
I would remove any other iface files and make sure the NICs you want to use are there.
I also would edit the multipath.conf file to match the document, i.e. lowering rr_min_io, using "tur" vs. "readsector0", etc.
You probably want to clean out all the old existing entries and rediscover once all the files are set correctly, especially the defaults in iscsid.conf.
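The clean-out and rediscovery described above might look like this (a sketch; the portal IP is the one from the log earlier in the thread):

```shell
# Log out of everything and drop the stale node records
iscsiadm -m node --logoutall=all
iscsiadm -m node --op=delete

# Rediscover through the configured ifaces only, then log back in
iscsiadm -m discovery -t sendtargets -p 80.51.3.2:3260
iscsiadm -m node --login
```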
Regards,
Don
HifDelCo
43 Posts
October 12th, 2018 06:00
Hello Don,
thank you for helping again.
Sorry for the late reply, but I wanted to make sure I had not overlooked anything that could be gleaned from the manuals.
You were right, I had missed the explicit creation of an iSCSI IFACE and have now caught up with it.
With that IFACE in place, I effectively gained another path/session, ultimately running 3 sessions connecting to one target, because the "default" interface is still around, establishing a connection of its own.
Shouldn't it be best practice to have all iSCSI sessions established from that dedicated NIC?
In other words: shouldn't the session established from the "default" IFACE be removed?
Currently:
But:
I believe I have left no stone unturned, but I can't find how to get rid of "iser" and "default" explicitly...
Removing all sessions and establishing them anew did not change the situation either...
And: should the "default" IFACE be removed at all?!
Would be great if you could provide me with another hint!
Best
F
dwilliam62
1 Rookie • 1.5K Posts
October 14th, 2018 23:00
Hello,
The guide shows how to create an IFACE file for EACH physical network interface you want to use for iSCSI. "default" will use whatever interface Linux determines is the default for that subnet.
So there should ONLY be iface files for the interfaces you want iSCSI to use.
Also, NEVER mount the /dev/sdX devices that sit underneath a /dev/mpathX device.
Also, you can create a friendly name for each volume to make it much easier to determine which EQL volume is which device.
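Such friendly names are typically set in /etc/multipath.conf; an illustrative fragment (the WWID shown is a placeholder, and "eql-testvol" is a hypothetical alias):

```
multipaths {
    multipath {
        # WWID as reported by e.g. /lib/udev/scsi_id -g -u /dev/sdb
        wwid  36090a0xxxxxxxxxxxxxxxxxxxxxxxxxx
        alias eql-testvol
    }
}
```

After editing, the config has to be reloaded, e.g. with systemctl reload multipathd; the volume then appears as /dev/mapper/eql-testvol.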
Regards,
Don
HifDelCo
43 Posts
October 15th, 2018 04:00
Hello Don,
that's how I believed I had understood it, and the newly, explicitly configured IFACE works well so far.
And thank you for stressing the proper handling of the mpathX device!
It's just that I feel it would complete the job to get rid of all the other IFACEs which are not explicitly configured (iser, default), unless there are concepts I don't know about yet which make it o.k., or even required, to have "default" and "iser" (InfiniBand, from what I see) around in the background, because they would not do any harm.
To pinpoint it, the following divergence leaves me wondering:
As in this case the "default" IFACE and the explicitly set one (enp130s0f0) coincide, it doesn't actually add another path.
Nevertheless, it's still in use:
At least it's clear now where the FQDN originates from:
From man iscsiadm:
That would mean that the one path provided by "default" would NOT be bound to specific hardware.
And I'm currently not sure whether that is a good thing or not...
Maybe I am reading too much into it, but I'd like to make doubly sure at this critical point ;-).
It would be great if you could put that into perspective from your insight and experience!
Best
F
dwilliam62
1 Rookie • 1.5K Posts
October 15th, 2018 08:00
Hello,
You do not want nor need the 'default' binding, as it will simply use one of the existing NICs, giving you an additional connection without any value.
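A sketch of how that 'default'-bound session is commonly removed (IQN and portal taken from earlier in the thread; exact syntax may vary by open-iscsi version):

```shell
# Log out of the session that was established through the 'default'
# iface, then delete the corresponding node record
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test \
    -p 80.51.3.2:3260 -I default --logout
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test \
    -p 80.51.3.2:3260 -I default --op=delete
```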
Regards,
Don
dwilliam62
1 Rookie • 1.5K Posts
October 29th, 2018 07:00
Hello,
You are very welcome!! Glad I could help out.
Regards,
Don
HifDelCo
43 Posts
October 29th, 2018 07:00
Hello Don,
thank you for your advice.
I'll remove the "default" IFACE binding from the iSCSI interfaces via the iscsiadm --op=delete option then, in order to exclude it.
And thank you again for all your help!!
Best
F