14 Posts

January 23rd, 2021 08:00

ErrCd 255 Enclosure not found

I have a PowerEdge R730 with 2 controllers:

0  PERC H730 Mini
1  PERC H830 Adapter


If I query a physical disk on controller 1 it fails like so:

[root@ROK-VmWare:/opt/lsi/perccli] ./perccli /c1/e30/s2 show
CLI Version = 007.1327.0000.0000 July 27, 2020
Operating system = VMkernel 6.7.0
Controller = 1
Status = Failure
Description = No drive found!

Detailed Status :
===============

------------------------------------------------
Drive      Status  ErrCd ErrMsg
------------------------------------------------
/c1/e30/s2 Failure   255 Enclosure 30 not found
------------------------------------------------

I get the same error if I query a disk in enclosure 31,

but when I query the controller, we see that enclosures 30 and 31 both exist:

[root@ROK-VmWare:/opt/lsi/perccli] ./perccli /c1 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.1327.0000.0000 July 27, 2020
Operating system = VMkernel 6.7.0
Controller = 1
Status = Success
Description = None

Product Name = PERC H830 Adapter
Serial Number = 56G002S
SAS Address =  544a84203b95a800
PCI Address = 00:84:00:00
System Time = 01/23/2021 16:22:19
Mfg. Date = 06/18/15
Controller Time = 01/23/2021 16:22:17
FW Package Build = 25.5.8.0001
BIOS Version = 6.33.01.0_4.19.08.00_0x06120304
FW Version = 4.300.00-8366
Driver Name = lsi_mr3
Driver Version = 7.713.07.00
Current Personality = RAID-Mode
Vendor Id = 0x1000
Device Id = 0x5D
SubVendor Id = 0x1028
SubDevice Id = 0x1F41
Host Interface = PCI-E
Device Interface = SAS-12G
Bus Number = 132
Device Number = 0
Function Number = 0
Domain ID = 0
Drive Groups = 2

TOPOLOGY :
========

-----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type   State BT      Size PDC  PI SED DS3  FSpace TR
-----------------------------------------------------------------------------
 0 -   -   -        -   RAID10 Optl  Y  18.190 TB dflt N  N   dflt N      N
 0 0   -   -        -   RAID1  Optl  Y  18.190 TB dflt N  N   dflt N      N
 0 0   0   30:0     47  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   1   30:1     49  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   2   30:2     45  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   3   30:3     43  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   4   30:4     53  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   5   30:5     37  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   6   30:6     52  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   7   30:7     54  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   8   30:8     39  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   9   30:9     35  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 1 -   -   -        -   RAID10 Optl  Y  43.661 TB dflt N  N   dflt N      N
 1 0   -   -        -   RAID1  Optl  Y  43.661 TB dflt N  N   dflt N      N
 1 0   0   31:0     32  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   1   31:1     44  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   2   31:2     42  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   3   31:3     51  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   4   31:4     41  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   5   31:5     48  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   6   31:6     34  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   7   31:7     46  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   8   31:8     50  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   9   31:9     33  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   10  31:10    40  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   11  31:11    36  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
---------------------------------------------------------------------------
...

I have 2 virtual disks set up on that controller, created using these perccli commands:

./perccli /c1 add vd type=raid10 name=XCHANGE drives=30:0,1,2,3,4,5,6,7,8,9 strip=64

and 

./perccli /c1 add vd type=raid10 name=REPO drives=31:0,1,2,3,4,5,6,7,8,9,10,11 strip=64

so clearly the perccli "add vd" command knows how to work with enclosures 30 and 31.
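(For anyone following along, the usual way to confirm the resulting virtual disks is the standard StorCLI-style syntax below; I'm quoting the common form rather than verified output from this box, and the VD number in the second command is just an example:)

./perccli /c1/vall show     # list all virtual disks on controller 1
./perccli /c1/v0 show all   # detailed view of a single VD (VD 0 assumed)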

What is going on, and how can I make perccli show information for an individual disk on controller 1, enclosures 30 and 31?
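I may also try the wildcard query forms, which are standard StorCLI syntax that perccli normally shares (unverified on this firmware):

./perccli /c1/eall/sall show    # list every drive the controller can see, regardless of enclosure ID
./perccli /c1 show all          # full controller dump, including the enclosure list
./perccli /c1/e30/s2 show all   # the detailed per-drive form, if enclosure 30 ever resolves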

I also have another R730 with the same 2 controllers, and on that machine I can show information for an individual disk on controller 1 using its enclosure number and query the disks on the external enclosure. By the way, the enclosures I am using with the PERC H830 controllers are PowerVault MD1200s; the only difference is that my problem server has 2 MD1200 boxes and the good one has only one.

I suppose I can try disconnecting one controller at a time and see if I can query the disks then, but I am waiting for the new virtual disks to finish initializing before trying something like that.
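(Before pulling cables, I can at least dump what the controller thinks its enclosures are; these are the standard StorCLI-style enclosure queries, which perccli normally accepts:)

./perccli /c1/eall show      # list every enclosure attached to controller 1
./perccli /c1/e30 show all   # detailed view of one enclosure, if the ID resolves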

 

Moderator


3.1K Posts

February 22nd, 2021 19:00

Hi,

 

Thanks for the update. Hopefully your post can help others in need.

 

I think when writing the reply, the text formatting added a spoiler tag to the whole paragraph. I have untagged it.

14 Posts

February 28th, 2021 16:00

I have an update on what I thought was a cabling issue. It's not; it's a reboot issue.

As for the symptom of racadm not finding the disks in the MD1200s and identifying one internal enclosure rather than the 2 actual external enclosures, that symptom goes away after a complete cold reboot of the enclosures. By complete cold reboot, I mean shutting down the server hosting the PERC H830 controller, turning off the power switches on all enclosure power supplies, then turning them all back on, and once the disks stop flashing, powering up the server. But if I reboot the server at some later time (without also cold rebooting both MD1200s), the symptom comes back.
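(For anyone hitting the same thing, this is how I check whether the symptom is present after a reboot. These are the iDRAC8 racadm storage subcommands; I'm assuming your iDRAC firmware exposes them:)

racadm storage get enclosures   # should list both MD1200s when the system is healthy
racadm storage get pdisks       # should list the external physical disks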

So an upgrade to MD1400 enclosures is in order some day. Since the disks in the cabinets are mostly new, though, I won't worry about upgrading right away.

Hot spares do work, although the copyback function is a bit crippled. If I pull a disk from a virtual disk array, the hot spare kicks in immediately, as it should. I then wait until the hot spare is fully rebuilt and insert a replacement disk. Since the controller is configured for copyback, it should start copying back to this replacement disk right away, but it doesn't. racadm also doesn't see the newly inserted replacement disk, so the racadm raid replacephysicaldisk command does not work either.

But if I then reboot the server hosting the PERC H830, the copyback operation starts immediately after the reboot. Just a reboot of the server is all that is required for the copyback to start (a full cold reboot of the MD1200 enclosures is not required). Also, if the last reboot of the server left the system in the state where racadm and iDRAC don't see the physical disks, that does not stop the hot spare from kicking in when a disk fails, but as above, a reboot is still required before copyback works.
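(A note in case it saves someone a reboot: StorCLI, and presumably perccli, has verbs for driving copyback manually. I have not confirmed these on the H830, and the enclosure/slot numbers below are made up for illustration:)

./perccli /c1 show copyback                          # confirm the controller's copyback setting
./perccli /c1/e31/s5 start copyback target=e31:s6    # manually start copyback from the spare (slots hypothetical)
./perccli /c1/eall/sall show copyback                # check copyback progress across drives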

 
