November 3rd, 2009 21:00

MD3000i discovering new block device in Linux

I added three new disks (SATA) to my MD3000i, created a new disk group and a new virtual disk, and set up host access.

 

Now, if I reboot the server (Red Hat 5), the new drive/block device (e.g. /dev/sdg) shows up on the host machine without my doing anything. How can I scan for this LUN/virtual disk so that it shows up without rebooting the host server?

 

I have found references to a command called hot_add that needed to be run, but that seems to be for older versions of Linux. How is this accomplished now? Do I need to use iscsiadm or something like that?
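The generic approaches I've seen mentioned elsewhere (not specific to the MD3000i, so treat these as untested guesses on my part) are an iscsiadm session rescan or a SCSI-host rescan through sysfs:

# rescan existing iSCSI sessions for new LUNs (open-iscsi)
iscsiadm -m session --rescan

# or force a rescan of a particular SCSI host (replace hostN with the iSCSI host number)
echo "- - -" > /sys/class/scsi_host/hostN/scan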

104 Posts

November 4th, 2009 04:00

On the Dell Linux wiki at http://linux.dell.com/wiki there is a walkthrough for setting up a Linux host with the MD3000i. The article at the following link explains how to set up a Linux cluster; however, the commands are accurate for a non-clustered environment as well. http://linux.dell.com/wiki/index.php/Products/HA/DellRedHatHALinuxCluster/Storage/PowerVault_MD3000i/Storage_Configuration#Connect_Host_to_Virtual_Disk

The two commands of interest are mppBusRescan and SMdevices. mppBusRescan will scan for new LUNs, and SMdevices will show the device mapping of the new LUN. See the above link for sample output.
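Assuming the RDAC/MPP driver and the host utilities are installed (exact install paths vary, so take this as a sketch rather than the exact procedure), the sequence looks like:

# scan the bus for newly mapped virtual disks (provided by the RDAC/MPP package)
mppBusRescan

# list the virtual disks and the /dev/sdX devices they map to
SMdevices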

Regards,
-cjtompsett

41 Posts

November 4th, 2009 09:00

Thanks. I couldn't find the mppBusRescan command, so I assume it's not installed.

 

I did, however, restart iscsi, and there you have it. Everything is there now!
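For anyone else landing here, this is roughly the sequence; the check command is just a generic one, nothing MD3000i-specific:

service iscsi restart    # log out of and back into the MD3000i targets
cat /proc/partitions     # the new /dev/sdX device should now be listed (fdisk -l works too)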

 

Thanks

9.3K Posts

November 4th, 2009 10:00

If mppBusRescan isn't there, it could indicate that the multipathing driver (RDAC) isn't installed. That could mean that if you ever have a failure (NIC, switch, or RAID controller on the MD3000i), your server loses disk access.

 

Check the RDAC status with "dkms status" and look for the rdac entry. Check whether it's installed, built, or just not there.

41 Posts

November 4th, 2009 10:00

Results:

linuxrdac, 09.03.0C06.0030, 2.6.18-92.el5, i686: built
sg, 3.5.34dell, 2.6.18-8.el5, x86_64: built

104 Posts

November 6th, 2009 09:00

The results should show the modules as installed. Did you see any errors when installing the RDAC driver?

-cjtompsett

41 Posts

November 9th, 2009 14:00

I do not recall any errors.

 

Now I am having serious problems. I rebooted the machine because I got a read-only filesystem on VolGroup01 (located on the MD3000i), and here is what I am dealing with now.

 

 

[root@vanhalen ~]# vgscan
  Reading all physical volumes.  This may take a while...
  /dev/sdb: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdb: read failed after 0 of 4096 at 1498712834048: Input/output error
  /dev/sdb: read failed after 0 of 4096 at 1498712891392: Input/output error
  /dev/sdb: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdb: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdb: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdd: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdd: read failed after 0 of 4096 at 1999332376576: Input/output error
  /dev/sdd: read failed after 0 of 4096 at 1999332433920: Input/output error
  /dev/sdd: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdd: read failed after 0 of 4096 at 4096: Input/output error
  /dev/sdd: read failed after 0 of 4096 at 0: Input/output error
  Found volume group "VolGroup01" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2
[root@vanhalen ~]#

 

[root@vanhalen ~]# dkms status
linuxrdac, 09.03.0C06.0030, 2.6.18-92.el5, i686: built
sg, 3.5.34dell, 2.6.18-8.el5, x86_64: built

 

 

[root@vanhalen mapper]# service iscsi restart
Logging out of session [sid: 1, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.100,3260]
Logging out of session [sid: 2, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.102,3260]
Logout of [sid: 1, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.100,3260]: successful
Logout of [sid: 2, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.102,3260]: successful
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.103,3260]
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.101,3260]
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.100,3260]

Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.102,3260]

iscsiadm: Could not login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.103,3260]:
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.101,3260]:
iscsiadm: initiator reported error (8 - connection timed out)
Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.100,3260]: successful
Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c, portal: 10.10.12.102,3260]: successful
iscsiadm: Could not log into all portals. Err 8.

I am only using two of the four controller ports, 0/0 and 1/0; 0/1 and 1/1 are not connected to the switch.

 

For some reason, system-config-lvm sees my VolGroup01 (except that LogVol01 and LogVol02 are unmounted and have no filesystem), but there's nothing in /dev/mapper/ for VolGroup01.

154 Posts

November 9th, 2009 15:00

Looking at the above info, it looks like your iSCSI configuration is trying to log into 4 portals but only logs into 2. However, that is consistent with your having only 2 ports connected.

Your dkms status shows that you have the MPP driver built but not installed. That would explain why hot_add does not work for you, as hot_add is installed with MPP (linuxrdac). I would recommend using the CD to reinstall the MPP driver.
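Alternatively, you can try rebuilding and installing the module from the DKMS tree that is already on the system. This is just a sketch using the version string from your dkms status output, not the official install procedure:

# build and install the RDAC/MPP kernel module for the running kernel via DKMS
dkms build -m linuxrdac -v 09.03.0C06.0030
dkms install -m linuxrdac -v 09.03.0C06.0030
dkms status | grep linuxrdac    # should now report "installed" for the running kernel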

I'm not sure if this is what you are seeing regarding available partitions, but if you don't have MPP installed, you will have additional devices showing up that are read-only (these are the same virtual disks seen through the alternate path). Maybe that is causing the errors you are seeing.

Good Luck!

-Mohan

41 Posts

November 9th, 2009 15:00

Phew! I was able to do a vgchange -ay VolGroup01, and that added VolGroup01 back in /dev/mapper and /dev/VolGroup01.

 

From there I was able to mount both LogVol00 and LogVol01, and I am back in business. Now why the heck did everything go into read-only mode?
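For the record, the recovery was roughly the following; the mount points are just placeholders, since they come from my own fstab:

vgchange -ay VolGroup01                     # activate the volume group
ls /dev/mapper/                             # VolGroup01-LogVol00 etc. reappear here
mount /dev/VolGroup01/LogVol00 /mnt/vol00   # example mount points
mount /dev/VolGroup01/LogVol01 /mnt/vol01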

 

 

41 Posts

November 10th, 2009 08:00

 

I have no ports left on the switch, so is it OK to leave these two controller ports disconnected?

 

I'll rerun the install for the driver.

41 Posts

November 10th, 2009 09:00

Ok, I will see if I can add the extra ports.

 

I reinstalled the driver and rebooted the machine, and it appears I have lost my block devices in /dev/mapper again, because Linux now hangs at a maintenance prompt during boot.

This may be a stupid question, but does this driver need to be reinstalled every time the kernel is upgraded and the system is rebooted?

154 Posts

November 10th, 2009 09:00

Well, yes :). Using only 1 port on each controller reduces port redundancy (in case one of the ports fails), and you might see some drop in performance depending on the type of I/O, but in general you should be fine. If you don't anticipate connecting them anytime soon, it might make sense to update the iSCSI configuration to delete those portals so that you don't see the errors each time the iSCSI initiator tries to connect.
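Something along these lines should remove the unused portal records; the IPs here are just the two that failed to log in in your earlier output, so double-check against your own node records before deleting anything:

# list the recorded target/portal entries
iscsiadm -m node

# delete the records for the portals that aren't cabled (example IPs)
iscsiadm -m node -p 10.10.12.101:3260 -o delete
iscsiadm -m node -p 10.10.12.103:3260 -o delete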

-Mohan

41 Posts

November 10th, 2009 10:00

OK, I got all four ports connected to the switch on the same VLAN, but I am still having issues with the block device going missing after a reboot.

 

Linux hostname  2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

I also reinstalled the driver, and here's the output:

 

[root@vanhalen ~]# dkms status
linuxrdac, 09.03.0C06.0030, 2.6.18-92.el5, i686: built
sg, 3.5.34dell, 2.6.18-8.el5, x86_64: built

 

[root@vanhalen ~]# iscsiadm -m session
tcp: [1] 10.10.12.103:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
tcp: [2] 10.10.12.101:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
tcp: [3] 10.10.12.100:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
tcp: [4] 10.10.12.102:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c

 

[root@vanhalen ~]# iscsiadm -m discovery -t sendtargets -p 10.10.12.100
10.10.12.100:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.101:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.102:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.103:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
[root@vanhalen ~]# iscsiadm -m discovery -t sendtargets -p 10.10.12.101
10.10.12.100:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.101:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.102:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.103:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
[root@vanhalen ~]# iscsiadm -m discovery -t sendtargets -p 10.10.12.102
10.10.12.100:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.101:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.102:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.103:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
[root@vanhalen ~]# iscsiadm -m discovery -t sendtargets -p 10.10.12.103
10.10.12.100:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.101:3260,1 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.102:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c
10.10.12.103:3260,2 iqn.1984-05.com.dell:powervault.md3000i.60026b900031260e000000004ab1af8c

 

 

41 Posts

November 10th, 2009 11:00

Note: I have this same problem on a different server running exactly the same setup (MD3000i, R610): same kernel, same OS (CentOS 5.4), same dkms status output, and tons of I/O errors in dmesg.

41 Posts

November 10th, 2009 12:00

I talked to Enterprise support, and it looks like a package might have been missing. I will let you know.

kernel-devel

41 Posts

November 10th, 2009 14:00

Fixed!

 

[root@vanhalen /]# dkms status
linuxrdac, 09.03.0C06.0030, 2.6.18-92.el5, i686: built
linuxrdac, 09.03.0C06.0030, 2.6.18-164.6.1.el5, x86_64: installed
sg, 3.5.34dell, 2.6.18-8.el5, x86_64: built
linuxrdac, 09.03.0C06.0030, 2.6.18-164.el5, x86_64: installed-weak from 2.6.18-164.6.1.el5
linuxrdac, 09.03.0C06.0030, 2.6.18-128.el5, x86_64: installed-weak from 2.6.18-164.6.1.el5

 

It helps when you install the driver correctly. I never noticed that I didn't have the kernel-devel package installed.

 

Anyhow, a word of advice to anyone who also runs into this issue.

 

Make sure you have all the correct packages (same architecture, release, etc.):

yum install kernel-devel kernel-headers kernel

Then run yum update to make sure they are all up to date and at matching versions, and then install the MPIO driver.
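In other words, roughly this sequence; the driver install step is whatever installer ships on your MD3000i resource CD, so that line is just a placeholder:

yum install kernel-devel kernel-headers kernel
yum update      # keep kernel, kernel-devel, and kernel-headers at matching versions
# ...run the MPIO/RDAC installer from the MD3000i resource CD...
dkms status     # linuxrdac should now show "installed" for the running kernel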
