PowerEdge HDD/SCSI/RAID

Last reply 02-28-2021 · Unsolved

ErrCd 255 Enclosure not found

I have a PowerEdge R730 with 2 controllers:

0 PERC H730 Mini
1 PERC H830 Adapter


If I query a physical disk on controller 1 it fails like so:

[root@ROK-VmWare:/opt/lsi/perccli] ./perccli /c1/e30/s2 show
CLI Version = 007.1327.0000.0000 July 27, 2020
Operating system = VMkernel 6.7.0
Controller = 1
Status = Failure
Description = No drive found!

Detailed Status :
===============

------------------------------------------------
Drive      Status  ErrCd ErrMsg
------------------------------------------------
/c1/e30/s2 Failure   255 Enclosure 30 not found
------------------------------------------------

I get the same error if I query a disk in enclosure 31,

but when I query the controller, we see that enclosures 30 and 31 both exist:

[root@ROK-VmWare:/opt/lsi/perccli] ./perccli /c1 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.1327.0000.0000 July 27, 2020
Operating system = VMkernel 6.7.0
Controller = 1
Status = Success
Description = None

Product Name = PERC H830 Adapter
Serial Number = 56G002S
SAS Address =  544a84203b95a800
PCI Address = 00:84:00:00
System Time = 01/23/2021 16:22:19
Mfg. Date = 06/18/15
Controller Time = 01/23/2021 16:22:17
FW Package Build = 25.5.8.0001
BIOS Version = 6.33.01.0_4.19.08.00_0x06120304
FW Version = 4.300.00-8366
Driver Name = lsi_mr3
Driver Version = 7.713.07.00
Current Personality = RAID-Mode
Vendor Id = 0x1000
Device Id = 0x5D
SubVendor Id = 0x1028
SubDevice Id = 0x1F41
Host Interface = PCI-E
Device Interface = SAS-12G
Bus Number = 132
Device Number = 0
Function Number = 0
Domain ID = 0
Drive Groups = 2

TOPOLOGY :
========

-----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type   State BT      Size PDC  PI SED DS3  FSpace TR
-----------------------------------------------------------------------------
 0 -   -   -        -   RAID10 Optl  Y  18.190 TB dflt N  N   dflt N      N
 0 0   -   -        -   RAID1  Optl  Y  18.190 TB dflt N  N   dflt N      N
 0 0   0   30:0     47  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   1   30:1     49  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   2   30:2     45  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   3   30:3     43  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   4   30:4     53  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   5   30:5     37  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   6   30:6     52  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   7   30:7     54  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   8   30:8     39  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   9   30:9     35  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 1 -   -   -        -   RAID10 Optl  Y  43.661 TB dflt N  N   dflt N      N
 1 0   -   -        -   RAID1  Optl  Y  43.661 TB dflt N  N   dflt N      N
 1 0   0   31:0     32  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   1   31:1     44  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   2   31:2     42  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   3   31:3     51  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   4   31:4     41  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   5   31:5     48  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   6   31:6     34  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   7   31:7     46  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   8   31:8     50  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   9   31:9     33  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   10  31:10    40  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   11  31:11    36  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
-----------------------------------------------------------------------------
...

I have 2 virtual disks set up on that controller, created using the perccli command:

./perccli /c1 add vd type=raid10 name=XCHANGE drives=30:0,1,2,3,4,5,6,7,8,9 strip=64

and 

./perccli /c1 add vd type=raid10 name=REPO drives=31:0,1,2,3,4,5,6,7,8,9,10,11 strip=64

so clearly the perccli "add vd" command knows how to work with enclosures 30 and 31.

What is going on, and how can I make the perccli show command display information for an individual disk on controller 1, enclosures 30 and 31?

I can show information on an individual disk on controller 0 using its enclosure number. I also have another R730 with the same 2 controllers, and on it I can query the disks on the external enclosure. By the way, the enclosures I am using with the PERC H830 controllers are PowerVault MD1200s; the only difference is that my problem server has 2 MD1200 boxes and the good one has only one.

I suppose I can try disconnecting one enclosure at a time and see if I can query disks then, but I am waiting for the new virtual disks to finish initializing before doing something like that.
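As a diagnostic, one thing worth trying (a sketch; run from the directory holding perccli, as in the sessions above) is the wildcard addressing, to see which enclosure IDs the CLI itself enumerates on controller 1:

```shell
# List every enclosure the CLI sees on controller 1
./perccli /c1/eall show

# List every drive under every enclosure on controller 1;
# if this works while /c1/e30/s2 fails, the per-slot addressing
# (not the topology itself) is what's broken
./perccli /c1/eall/sall show
```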

 

Replies (17)

Hi,

 

I'm unsure why there is an error when you run show against a specific disk in the enclosure. Let's try to identify the root cause.

 

Could you let us know how you have connected both MD1200s to the R730? Are you using the PERCCLI utility downloaded from our support site? I see that the RAID controller firmware is up to date, 25.5.8. Is the server BIOS up to date?

 

What's the outcome after you disconnect one of the MDs and run the command?


DELL-Joey C
Social Media and Communities Professional
Dell Technologies | Enterprise Support Services
#IWork4Dell

Did I answer your query? Please click on ‘Accept as Solution’. ‘Kudo’ the posts you like!


Hi @DELL-Joey C 

Thanks for responding.

In response to your questions:

-1) "Could you let us know how you connect both the MD1200 to the R730?" I have it connected in the "Fault-tolerant Asymmetric Cabling Scheme" as per figure 14 of https://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200-m3200i-cabling-guide.pdf (except, of course, using ports 0 and 1 of the PERC H830 on the R730 rather than RC0 and RC1 of an MD3200). I also tried the "Simple Cascade Cabling Scheme" as per figure 6, but the result is the same.

-2) "Are you using the PERCCLI utility downloaded from our support site?" Yes, and it is version 007.1327 (PERCCLI_4H10X_7.1327.0_A09_VMware), which is the most recent I can find on the Dell support website. On my other R730 and on my T630 I am running version 007.0529 (a two-year-old version).

-3) "Is the server BIOS up to date?" Yes, it is version 2.11.0 (at least this was the most recent available 2 weeks ago).

-4) "What's the outcome after you have disconnect 1 of the MD and run the command?" The outcome was the same. I tried each enclosure, one at a time. For each test I shut down the R730, switched off the power supplies of the MD1200s, reconnected the cables in the new configuration, powered on the MD1200s, then powered on the R730.

New developments:

-1) It is not just the show command that fails; for instance, the following commands all fail with the same error:
./perccli /c1/e30/s10 spindown
./perccli /c1/e30/s10 add hotsparedrive dgs=0
But strangely, I can still create new virtual drives, specifying the disks in each enclosure as described in my original post. For now, I am OK setting up and deleting virtual drives. But when the time comes that disks fail and I have to replace individual drives, I will need the commands that control individual disks, like: offline, spindown, set, copyback, hotsparedrive. Some of these, but not all, I can do in the iDRAC interface. Some, but not all, I can do with [Ctrl]+[R] at boot time, but once I get this server into production, I won't want to reboot it just to do disk maintenance.

-2) I have since added a second MD1200 to my other R730, and it has a slightly different outcome. On that server I am able to query individual disks on the first enclosure but not the second. While I had that server shut down, I tried its cables (the ones from the R730 to the first MD1200) on the server that is the subject of this thread. Well, it worked (as far as I remember) to connect to the first enclosure. I have reason to question this now; I will have to wait until next weekend to test again. When I saw this, I started suspecting that the cables were the issue, because the working cables are genuine Dell but the non-working ones are generic from Amazon: https://www.amazon.ca/gp/product/B08PCXGRFW/

-3) I have since added a single MD1200 to my T630 server using the same model PERC H830 adapter. This is using genuine Dell cables, part# 0FDPNX, and it works (I can query individual disks on the enclosure). But when I try these genuine cables on my problem server, they do not work (I cannot query individual disks on the enclosure). Also, when I use the generic Amazon cables on my T630 server, they work too. So that argues against my theory of the cables being the issue. Next weekend I will try swapping cables between the 2 R730s again to double-check my findings.
Meanwhile, Joey, I would be interested if you can tell me what would be a genuine Dell part number for the cables from one MD1200 to the other. I am currently using: https://www.amazon.ca/gp/product/B00S7KTXW6/
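For the record, the per-disk maintenance sequence I would expect to need at replacement time is along these lines (a sketch, assuming enclosure addressing worked; slot 10 is just an example):

```shell
# Blink the LED so the right drive gets pulled
./perccli /c1/e30/s10 start locate

# Take the failing drive offline and spin it down before removal
./perccli /c1/e30/s10 set offline
./perccli /c1/e30/s10 spindown

# After inserting the replacement: stop the LED and assign the new
# drive as a hot spare for drive group 0, which kicks off the rebuild
./perccli /c1/e30/s10 stop locate
./perccli /c1/e30/s10 add hotsparedrive dgs=0
```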


Hi,

 

I may have found the root cause of the issue. I don't think the MD1200 supports the H830; that might be the reason you're facing a communication issue. https://dell.to/3a1kmTD Page 7 

 

If you're looking at Mini-SAS to Mini-SAS, the DPN# would be 171C5 (1 meter) or W390D (2 meters). But I am rather confused that you said you're using DPN#FDPNX on the T630; that's SAS HD to Mini-SAS. 

 

I would say, comparing the MD3200 cabling document with the MD1200, the connection should be relatively the same, but I would like to confirm it by giving you the MD1200 documentation: https://dell.to/3a5lHJa Page 22.

 

You could try installing OMSA in ESXi; that could probably help to bring the disk offline or set any properties: https://dell.to/36u6IYn. That's something we can try, to find out if the system can communicate with a single disk.


DELL-Joey C


Hi Joey

Thanks again. Yes, the MD1200 does not support the H830, but the R730 also does not support the H810 controller. I have previously seen it written that, although not supported, the best working configuration for using the MD1200 with the R730 server is the H830 controller. There is no fully supported way to use the MD1200 on a 13th-gen server, but in practice it works. The article you linked is the first I've seen describing the use of the H810 on a 13th-gen server to migrate an MD1200 from 12th gen. Anyway, if I have to switch to using H810 controllers, I might do it. But I may not have to.

Meanwhile, I discovered something else. Each of the three MD1200s that do not work for querying individual disks is at firmware version 1.01, and the two that do work are at versions 1.05 and 1.06. So I am hoping that if I upgrade the firmware on the bad three, they will become good. The question is how. If I have to buy a used H810 to do the firmware update, I'll do that, but I wonder if it is possible to boot into Linux (Dell support Live Image 3.0), run the firmware 1.06 .bin file, and have it update through the H830. I will try, but I want to wait 2 more days. I have a big array on one enclosure that is 80% initialized after a week; I will wait until initialization is complete before attempting to flash firmware.

Now, in answer to your questions and suggestion:

-1) "you said you're using DPN#FDPNX on the T630, that's SAS HD to Mini-SAS". Yes, the H830 has SAS HD ports.

-2) "You could try installing OMSA in ESXi". No way. I did that a couple of years ago and had some horrible issues with VMs and Veeam backup. I finally found this article, https://kb.vmware.com/s/article/74696 , uninstalled OMSA, and all was good again. 


Hi,

 

Do try to update the EMM firmware to the latest. The firmware must be updated at the OS level. I'm unsure whether the Live Image would work, but it's worth a try. As the MD1200 does not fully support the H830, I am unsure whether the firmware update can proceed. Let us know here, to share the knowledge with other users. 

 

Thanks for sharing about OMSA 9.3 on ESXi; yes, it's because it's incompatible. Here is OMSA 9.4: https://dell.to/3qU03yb. That's for reference, just in case.


DELL-Joey C


Yes, it works to update the MD1200 firmware using the .bin file, booting Dell Live Image 3.0 from a USB stick. And yes, it works when the MD1200 is connected via a PERC H830. I ran the update; it said it was successful on both EMMs on both enclosures, and now when I look at the enclosures through the iDRAC interface, it says they are both at firmware version 1.06.
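For anyone repeating this: the Dell update package .bin is a self-extracting installer, so from the Live Image it was just a matter of making it executable and running it (the filename below is an example, not the exact package name):

```shell
# Example only: substitute the actual MD1200 EMM firmware package filename
chmod +x MD1200_EMM_1.06.BIN
./MD1200_EMM_1.06.BIN
```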

 

But unfortunately, that did not solve the problem. I have ordered an H810; we'll see when it arrives, hopefully next week, whether that is what makes the difference. It's really strange, though, that of my 5 MD1200s, perccli can query the individual enclosure (and disks) of only 2 of them. They are now all at the same firmware revision, and they are all using the same model controller (PERC H830).


Hi,

 

Something I would like to bring to your attention: the H810 is for 12G servers, so it may also experience compatibility issues: https://dell.to/3ckifgh.


DELL-Joey C


I tried the PERC H810 controller. Yeah, that works: using it I can query individual disks and individual enclosures.

But one enclosure is full of 4Kn disks, and the H810 doesn't support 4Kn.

So I put the H830 controller back in. I suppose I'll try OMSA next.


You can also try the racadm command with iDRAC to get storage details. You can refer to the link below for more details on this command:

https://www.dell.com/support/manuals/en-us/idrac8-lifecycle-controller-v2.70.70.70/idrac8_2.70.70.70... 
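For example, something along these lines (a sketch; the IP, user, and password are placeholders for your iDRAC's values):

```shell
# Enclosure status as seen by the iDRAC
racadm -r 192.168.0.120 -u root -p <password> storage get enclosures -o

# Per-physical-disk details (state, media type, firmware)
racadm -r 192.168.0.120 -u root -p <password> storage get pdisks -o
```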


Thanks,
DELL-Shine K
#IWork4Dell
