43 Posts

October 2nd, 2018 05:00

iSCSI portal duplicate

Hello forum,

I can successfully discover and connect to a volume provided by an EqualLogic PS6110 from a CentOS 7 client, but every time I log in to it, the client gets connected twice:

first to the target by its IP, and a

second connection to the same target, using its FQDN.

Rescanning session [sid: 5, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test, portal: 80.51.3.2,3260]
Rescanning session [sid: 6, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test, portal: host.sld.tld,3260]

Each of them connects to a separate drive:

device                    fs_type       label   mount point     UUID
----------------------------------------------------------------------------------------------------
/dev/mapper/centos-root   xfs                   /               591d911d-886c-43bc-a8d1-8ed4befa1dcd
/dev/sdb1                 xfs                   (not mounted)   5b05c9f6-1990-44df-91b5-5cee70522282
/dev/sdb2                 LVM2_member           (not mounted)   uk1teQ-9i0z-IxyZ-u6Tv-jFmC-EZny-7clEY0
/dev/sdc1                 xfs                   (not mounted)   5b05c9f6-1990-44df-91b5-5cee70522282
/dev/sdc2                 LVM2_member           (not mounted)   uk1teQ-9i0z-IxyZ-u6Tv-jFmC-EZny-7clEY0

Is that OK/healthy/sensible, or should I be concerned?

Personally I don't like this config, as it looks like the same content is stored on two different disks.

Best

1 Rookie • 1.5K Posts

October 2nd, 2018 06:00

Hello, 

 It sounds like you might not have multipathing configured correctly.  

This PDF will guide you through the process of configuring iSCSI and MPIO with EQL for RHEL / OEL, which works the same as CentOS. 

http://downloads.dell.com/solutions/storage-solution-resources/%283199-CD-L%29RHEL-PSseries-Configuration.pdf

 You do NOT want to mount both /dev/sdXX devices at the same time.  Corruption will occur if you do. 

 You can do a quick check of multipathing with:   #multipath -ll

If it's working correctly you will see a /dev/mapper/mpathX device with the two /dev/sdX devices underneath it. 
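
As a quick sketch (the device name and mount point here are made up for illustration), once the mpath device exists you would only ever format and mount the mapper device:

# format and mount the aggregated multipath device -- never the underlying /dev/sdX paths
mkfs.xfs /dev/mapper/mpatha
mount /dev/mapper/mpatha /mnt/eqlvol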

  Regards,

Don 

43 Posts

October 4th, 2018 05:00

Hello,

thanks a lot for your fast help.

That sounds very promising and perfectly reasonable. The deployment is a test case in an effort to explore the fundamentals first, before dealing with the advanced stuff I considered multipathing to be (KVM newbie here). Thus I skipped that section entirely for "simplicity".

The way I first noticed those obscure multiple connections was... filesystem corruption ;-)

I'll return and tell how it worked for me.

Thanks again and

best

F

1 Rookie • 1.5K Posts

October 4th, 2018 10:00

Hello, 

 You are very welcome.  If you follow that guide you should have no more issues and better performance when you layer on KVM.  

 Good luck. 

  Regards,

Don 

 

43 Posts

October 5th, 2018 06:00

Hello Don,

that guide is great!

I have now configured multipathing strictly according to the "DELL RHEL config guide for native multipathing", step by step, and as far as I can see the multipathing config is OK, as multiple paths are established per volume:

[root ~]# dmsetup ls
mpathb	(253:4)
mpatha	(253:3)
centos-home	(253:2)
mpathb1	(253:5)
centos-swap	(253:1)
centos-root	(253:0)

 

multipathd> show paths
hcil     dev dev_t pri dm_st  chk_st dev_st  next_check     
10:0:0:0 sdb 8:16  50  active ready  running XX........ 9/40
11:0:0:0 sdc 8:32  50  active ready  running XX........ 9/40
13:0:0:0 sdd 8:48  50  active ready  running XX........ 9/40
12:0:0:0 sde 8:64  50  active ready  running XX........ 9/40
[root ~]# multipath -ll
mpathb (360fff1bafde55691e7b8d53cbfe63958) dm-4 EQLOGIC ,100E-00         
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 13:0:0:0 sdd 8:48 active ready running
  `- 12:0:0:0 sde 8:64 active ready running
mpatha (360fff1bafde586fefd59e50200002054) dm-3 EQLOGIC ,100E-00         
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 10:0:0:0 sdb 8:16 active ready running
  `- 11:0:0:0 sdc 8:32 active ready running

 


I left the config at the recommendations of that guide, except for additionally blacklisting the local drive:

defaults {
	polling_interval 	10
	path_selector		"round-robin 0"
	path_grouping_policy	multibus
	uid_attribute		ID_SERIAL
	prio			alua
	path_checker		readsector0
	rr_min_io		100
	max_fds			8192
	rr_weight		priorities
	failback		immediate
	no_path_retry		fail
	user_friendly_names	yes
        find_multipaths 	yes
}

blacklist {
       wwid 26353900f02796769
	devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
	devnode "^hd[a-z]"
        devnode "sda"
}

But those paths still map to different drives...
What got me started in the first place still holds: when discovering and logging in, I always see two portals, at least that's how I understand it:

[root ~]# iscsiadm -m session --rescan
Rescanning session [sid: 1, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test, portal: host.sld.tld,3260]
Rescanning session [sid: 2, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test, portal: 80.51.3.2,3260]
Rescanning session [sid: 3, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-swdistribution, portal: host.sld.tld,3260]
Rescanning session [sid: 4, target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-swdistribution, portal: 80.51.3.2,3260]
[root ~]# 
[root ~]# /sbin/multipath -v4
Oct 05 13:43:34 | loading /lib64/multipath/libcheckdirectio.so checker
Oct 05 13:43:34 | loading /lib64/multipath/libprioconst.so prioritizer
Oct 05 13:43:34 | Discover device /sys/devices/platform/host10/session1/target10:0:0/10:0:0:0/block/sdb
Oct 05 13:43:34 | sdb: not found in pathvec
Oct 05 13:43:34 | sdb: mask = 0x3f
Oct 05 13:43:34 | sdb: dev_t = 8:16
Oct 05 13:43:34 | open '/sys/devices/platform/host10/session1/target10:0:0/10:0:0:0/block/sdb/size'
Oct 05 13:43:34 | sdb: size = 209725440
Oct 05 13:43:34 | sdb: vendor = EQLOGIC 
Oct 05 13:43:34 | sdb: product = 100E-00         
Oct 05 13:43:34 | sdb: rev = 9.1 
Oct 05 13:43:34 | sdb: h:b:t:l = 10:0:0:0
Oct 05 13:43:34 | sdb: tgt_node_name = iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test
Oct 05 13:43:34 | open '/sys/devices/platform/host10/session1/target10:0:0/10:0:0:0/state'
Oct 05 13:43:34 | sdb: path state = running

Oct 05 13:43:34 | sdb: 13054 cyl, 255 heads, 63 sectors/track, start at 0
Oct 05 13:43:34 | sdb: serial = 60FFF1BAFDE586FEFD59E50200002054
Oct 05 13:43:34 | sdb: get_state
Oct 05 13:43:34 | sdb: detect_checker = 1 (config file default)
Oct 05 13:43:34 | loading /lib64/multipath/libcheckreadsector0.so checker
Oct 05 13:43:34 | sdb: path checker = readsector0 (config file default)
Oct 05 13:43:34 | sdb: checker timeout = 30000 ms (sysfs setting)
Oct 05 13:43:34 | sdb: readsector0 state = up
Oct 05 13:43:34 | sdb: uid_attribute = ID_SERIAL (config file default)
Oct 05 13:43:34 | sdb: got wwid of '360fff1bafde586fefd59e50200002054'
Oct 05 13:43:34 | sdb: uid = 360fff1bafde586fefd59e50200002054 (udev)
Oct 05 13:43:34 | sdb: detect_prio = 1 (config file default)
Oct 05 13:43:34 | loading /lib64/multipath/libprioalua.so prioritizer
Oct 05 13:43:34 | sdb: prio = alua (config file default)
Oct 05 13:43:34 | sdb: prio args = (null) (config file default)
Oct 05 13:43:34 | reported target port group is 1
Oct 05 13:43:34 | aas = 80 [active/optimized] [preferred]
Oct 05 13:43:34 | sdb: alua prio = 50
Oct 05 13:43:34 | Discover device /sys/devices/platform/host11/session2/target11:0:0/11:0:0:0/block/sdc
Oct 05 13:43:34 | sdc: not found in pathvec
Oct 05 13:43:34 | sdc: mask = 0x3f
Oct 05 13:43:34 | sdc: dev_t = 8:32
Oct 05 13:43:34 | open '/sys/devices/platform/host11/session2/target11:0:0/11:0:0:0/block/sdc/size'
Oct 05 13:43:34 | sdc: size = 209725440
Oct 05 13:43:34 | sdc: vendor = EQLOGIC 
Oct 05 13:43:34 | sdc: product = 100E-00         
Oct 05 13:43:34 | sdc: rev = 9.1 
Oct 05 13:43:34 | sdc: h:b:t:l = 11:0:0:0
Oct 05 13:43:34 | sdc: tgt_node_name = iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-842002e559fd-test
[root ~]# sudo blkid -o list
device                    fs_type       label   mount point     UUID
----------------------------------------------------------------------------------------------------
/dev/mapper/centos-root   xfs                   /               165f8ebe-8d7c-4f35-8022-26e5b42b7816
/dev/sda2                 LVM2_member           (in use)        Id2WBG-cUu5-sLiu-nF2Q-DbdM-Hm6o-hxlHMI
/dev/sda1                 xfs                   /boot           df74aa83-0e6c-457f-8d07-78d694c0d84b
/dev/mapper/centos-swap   swap                                  50616190-cad5-4cb7-b802-6203f6ec516c
/dev/mapper/centos-home   xfs                   /home           591d911d-886c-43bc-a8d1-8ed4befa1dcd
/dev/sdb                                        (in use)
/dev/mapper/mpatha                              (not mounted)
/dev/sdc                                        (in use)


 

I still wonder why the same portal gets logged in to via IP and FQDN separately. Maybe the initiator regards both targets as different, as they have different names?

I double-checked the EQL config looking for options to control how the EQL announces its portal, preferably either IP or FQDN but not both; however, everything besides the group config and the access control (tried both: basic access policy and ACL) happens in the group configuration, if I see this right.

I also checked the RHEL_7_DM_Multipath_Config-Admin guide for clues, but could not find any hint...

Is there possibly something I fundamentally misunderstand?

Any hint is highly appreciated!

Best,

F

1 Rookie • 1.5K Posts

October 5th, 2018 08:00

Hello, 

 I think you have missed some steps in the process.

What determines the connection from each NIC to the storage is the interfaces (IFACE) file. 

This way you tell Linux to use ETH ports X and Y for iSCSI traffic.  You have to log into the EQL storage from both interfaces in order to create the device files needed for MPIO.   E.g. /dev/sdc and /dev/sdd get combined to form /dev/mapper/mpatha.

That way there are now two paths to that one device.  If one path fails, the other is still available.  

 So you only ever want to mount the /dev/mapper device. 

/var/lib/iscsi/ifaces  is the directory where the interface files are stored. 

cat eql.eth2
# BEGIN RECORD 2.0-872.16.el5
iface.iscsi_ifacename = eth2
iface.net_ifacename = eth2
iface.transport_name = tcp
iface.vlan_id = 0
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
# END RECORD

This is an older RHEL OS but the principle is the same. 
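
On RHEL/CentOS 7 you would typically create and bind such an iface file with iscsiadm itself, along these lines (the interface name here is just an example):

# create a new iface record and bind it to a physical NIC
iscsiadm -m iface -o new -I eql.eth2
iscsiadm -m iface -o update -I eql.eth2 -n iface.net_ifacename -v eth2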

I would remove any other files and make sure only the NICs you want to use are there. 

 I also would edit the MULTIPATH.CONF file to match the document.  I.e. lowering the RR_MIN_IO,  using TUR vs. READSECTOR0,  etc... 
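
As a sketch of the kind of device section that document describes (double-check the exact values against the PDF, these are from memory):

devices {
        device {
                vendor                  "EQLOGIC"
                product                 "100E-00"
                path_grouping_policy    multibus
                path_selector           "round-robin 0"
                path_checker            tur
                rr_min_io               10
                failback                immediate
        }
}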

 You probably want to clean out all the old existing entries and rediscover once all the files are set correctly. 

Especially the defaults in ISCSID.CONF 
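
A minimal sequence for that cleanup could look like this (the group portal IP is taken from your session output, adjust as needed):

# log out of all targets and delete the stale node records
iscsiadm -m node -u
iscsiadm -m node -o delete
# rediscover through the group portal and log back in
iscsiadm -m discovery -t st -p 80.51.3.2:3260
iscsiadm -m node -l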

 Regards,

Don 


43 Posts

October 12th, 2018 06:00

Hello Don,

thank you for helping again.
Sorry for the late reply, but I wanted to make sure I had not overlooked anything that should be discoverable from the manuals.

You were right, I had missed the explicit creation of an iSCSI IFACE and have now caught up on it.
Thus, now having an IFACE in

[root ~]# ls -al /var/lib/iscsi/ifaces
total 8
drwxr-xr-x. 2 root root  54 Oct 11 14:29 .
drwxr-xr-x. 8 root root  90 Apr 11  2018 ..
-rw-------. 1 root root 458 Oct 11 14:29 enp130s0f0
-rw-------. 1 root root 488 Aug 30 13:58 libvirt-iface-34abd840

I effectively gained another path/session, ultimately running three sessions connecting to one target, because the "default" interface is still around establishing a connection.

Shouldn't it be best practice to have all iSCSI sessions established from that dedicated NIC?
In other words: shouldn't the session established from the "default" IFACE be removed?

Currently:

[root ~]# iscsiadm -m iface -P 1
Iface: default
	Target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test
		Portal: strgrp.host.sld.tld:3260,1
	Target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution
		Portal: strgrp.host.sld.tld:3260,1
Iface: iser
Iface: libvirt-iface-34abd840
	Target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test
		Portal: 80.51.3.2:3260,1
	Target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution
		Portal: 80.51.3.2:3260,1
Iface: enp130s0f0
	Target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test
		Portal: 80.51.3.2:3260,1
	Target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution
		Portal: 80.51.3.2:3260,1

But:

[root ~]# ls -al /var/lib/iscsi/ifaces
total 8
drwxr-xr-x. 2 root root  54 Oct 11 14:29 .
drwxr-xr-x. 8 root root  90 Apr 11  2018 ..
-rw-------. 1 root root 458 Oct 11 14:29 enp130s0f0
-rw-------. 1 root root 488 Aug 30 13:58 libvirt-iface-34abd840
[root ~]# 

I believe I have turned every stone, but I can't find out how to get rid of "iser" and "default" explicitly...

[root ~]# iscsiadm -m node -I iser --op=delete
iscsiadm: No records found
[root ~]# iscsiadm -m node -p 80.51.3.2:3260,1 -I iser --op=delete
iscsiadm: No records found

Removing all sessions and establishing them anew did not change the situation either...

And: should the "default" IFACE be removed at all?!?

Would be great if you could provide me with another hint!

Best

F


1 Rookie • 1.5K Posts

October 14th, 2018 23:00

Hello, 

The guide shows you how to create an IFACE file for EACH physical network interface you want to use for iSCSI.  "default" will use whatever interface Linux determines is the default for that subnet. 

 So there should ONLY be iface files for interfaces you want iSCSI to use. 

Also, NEVER mount the /dev/sdX devices that sit inside a /dev/mpathX device. 

Also, you can create a friendly name for each volume to make it much easier to determine which EQL volume is which device. 
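
For example, via a multipaths section in multipath.conf (the WWID is taken from your multipath -ll output, the alias is freely chosen):

multipaths {
        multipath {
                wwid    360fff1bafde586fefd59e50200002054
                alias   eql-test
        }
}

The volume then shows up as /dev/mapper/eql-test instead of an anonymous mpathX name.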

 Regards, 

Don 

 

43 Posts

October 15th, 2018 04:00

Hello Don,

that's how I believed I had understood it, and the new, explicitly configured IFACE works well so far.
And thank you for stressing the proper handling of the mpathX device!

It's just that I feel it would complete the job to get rid of all the other IFACEs which are not explicitly configured (iser, default), unless concepts I don't know about yet indicate that it's actually OK, or even required, to have "default" and "iser" (InfiniBand, from what I see) around in the background, and that they wouldn't do any harm.

To pinpoint it, the following divergence leaves me wondering:

 

ls -al /var/lib/iscsi/ifaces
total 8
drwxr-xr-x. 2 root root  54 Oct 11 14:29 .
drwxr-xr-x. 8 root root  90 Apr 11  2018 ..
-rw-------. 1 root root 458 Oct 11 14:29 enp130s0f0
-rw-------. 1 root root 488 Aug 30 13:58 libvirt-iface-34abd840

 

iscsiadm -m iface -P 1
Iface: default
	Target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test
		Portal: host.sld.tld:3260,1
	Target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution
		Portal: host.sld.tld:3260,1
Iface: iser
Iface: libvirt-iface-34abd840
	Target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test
		Portal: 80.51.3.2:3260,1
	Target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution
		Portal: 80.51.3.2:3260,1
Iface: enp130s0f0
	Target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test
		Portal: 80.51.3.2:3260,1
	Target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution
		Portal: 80.51.3.2:3260,1
[root@host ~]# 

As in this case the "default" IFACE and the explicitly set one (enp130s0f0) fall in line, it doesn't actually add another path.

Nevertheless, it's still in use:


multipath -v3 -d -ll
Oct 12 14:54:30 | mpatha: disassemble status [2 0 0 0 1 1 A 0 3 0 8:16 A 0 8:32 A 0 8:48 A 0 ]
mpatha (360fff1bafde586fefd59e50200002054) dm-3 EQLOGIC ,100E-00         
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 10:0:0:0 sdb 8:16 active ready running
  |- 11:0:0:0 sdc 8:32 active ready running
  `- 12:0:0:0 sdd 8:48 active ready running
[root@host ~]# iscsiadm -m session -P 1
Target: iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test (non-flash)
	Current Portal: 80.51.3.3:3260,1
	Persistent Portal: host.sld.tld:3260,1
		**********
		Interface:
		**********
		Iface Name: default
		Iface Transport: tcp
		Iface Initiatorname: iqn.1994-05.tld.sld:host
		Iface IPaddress: 80.51.3.11
		Iface HWaddress: 
		Iface Netdev: 
  
		SID: 1
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE

		**********
		Interface:
		**********
		Iface Name: enp130s0f0
		Iface Transport: tcp
		Iface Initiatorname: iqn.1994-05.tld.sld:host
		Iface IPaddress: 80.51.3.11
		Iface HWaddress: 4c:fe:ec:b1:93:15
		Iface Netdev: 
   
		SID: 3
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE

		**********
		Interface:
		**********
		Iface Name: libvirt-iface-34abd840
		Iface Transport: tcp
		Iface Initiatorname: iqn.1994-05.tld.sld:host
		Iface IPaddress: 80.51.3.11
		Iface HWaddress: 
    
		Iface Netdev: 
     
		SID: 2
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE

At least it's clear now where the FQDN originates from:

[root@host ~]# service iscsi status
Redirecting to /bin/systemctl status iscsi.service
● iscsi.service - Login and scanning of iSCSI devices
   Loaded: loaded (/usr/lib/systemd/system/iscsi.service; enabled; vendor preset: disabled)
   Active: active (exited) since Fri 2018-10-12 11:35:51 CEST; 3 days ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 2325 ExecReload=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=0/SUCCESS)
  Process: 1696 ExecStart=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=0/SUCCESS)
  Process: 1682 ExecStart=/usr/libexec/iscsi-mark-root-nodes (code=exited, status=0/SUCCESS)
 Main PID: 1696 (code=exited, status=0/SUCCESS)
    Tasks: 0
   CGroup: /system.slice/iscsi.service

Oct 12 11:35:51 host iscsiadm[1696]: Logging in to [iface: default, target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution, portal: host.sld.tld,3260] (multiple)
Oct 12 11:35:51 host iscsiadm[1696]: Logging in to [iface: libvirt-iface-34abd840, target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution, portal: 80.51.3.2,3260] (multiple)
Oct 12 11:35:51 host iscsiadm[1696]: Logging in to [iface: enp130s0f0, target: iqn.2001-05.com.equallogic:0-af1ff6-9156e5fdb-5839e6bf3cd5b8e7-swdistribution, portal: 80.51.3.2,3260] (multiple)

From man iscsiadm:

-I, --interface=[iface]
[...] The hwaddress is the MAC address or for software iSCSI it may be the special value "default" 
which directs the initiator to not bind the session to a specific hardware resource and 
instead allow the network or InfiniBand layer to decide what to do. 
There is no need to  create  an  iface  config  with  the default behavior. 
If you do not specify an iface, then the default behavior is used.

That would mean that the one path provided by "default" would NOT be bound to specific hardware.
And I'm currently not sure whether that is a good thing or not...

Maybe I am reading too much into it, but I'd like to make doubly sure at this critical point ;-).

It would be great if you could put that into perspective from your insight and experience!

Best

F

 

1 Rookie • 1.5K Posts

October 15th, 2018 08:00

Hello, 

You do not want nor need the 'default' binding, as it will simply use one of the existing NICs, giving you an additional connection but without value. 
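
To clean them out, something along these lines should work (shown for the test volume as an example; repeat for swdistribution):

# log out the session that runs through the 'default' iface, then delete its node record
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test -I default -u
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-af1ff6-fe86e5fdb-5420000002e559fd-test -I default -o delete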

Regards,

Don

 

1 Rookie • 1.5K Posts

October 29th, 2018 07:00

Hello, 

 You are very welcome!!   Glad I could help out. 

 Regards, 

Don 

43 Posts

October 29th, 2018 07:00

Hello Don,

thank you for your advice.

I'll then remove the "default" IFACE from the iSCSI interfaces via the iscsiadm --op=delete option in order to exclude it.

And thank you again for all your help!!

Best
F
