14 Posts

January 23rd, 2021 08:00

ErrCd 255 Enclosure not found

I have a PowerEdge R730 with 2 controllers:

0 PERC H730 Mini
1 PERC H830 Adapter


If I query a physical disk on controller 1 it fails like so:

[root@ROK-VmWare:/opt/lsi/perccli] ./perccli /c1/e30/s2 show
CLI Version = 007.1327.0000.0000 July 27, 2020
Operating system = VMkernel 6.7.0
Controller = 1
Status = Failure
Description = No drive found!

Detailed Status :
===============

------------------------------------------------
Drive      Status  ErrCd ErrMsg
------------------------------------------------
/c1/e30/s2 Failure   255 Enclosure 30 not found
------------------------------------------------

I get the same error if I query a disk in enclosure 31,

but when I query the controller, we see that enclosures 30 and 31 both exist:

[root@ROK-VmWare:/opt/lsi/perccli] ./perccli /c1 show
Generating detailed summary of the adapter, it may take a while to complete.

CLI Version = 007.1327.0000.0000 July 27, 2020
Operating system = VMkernel 6.7.0
Controller = 1
Status = Success
Description = None

Product Name = PERC H830 Adapter
Serial Number = 56G002S
SAS Address =  544a84203b95a800
PCI Address = 00:84:00:00
System Time = 01/23/2021 16:22:19
Mfg. Date = 06/18/15
Controller Time = 01/23/2021 16:22:17
FW Package Build = 25.5.8.0001
BIOS Version = 6.33.01.0_4.19.08.00_0x06120304
FW Version = 4.300.00-8366
Driver Name = lsi_mr3
Driver Version = 7.713.07.00
Current Personality = RAID-Mode
Vendor Id = 0x1000
Device Id = 0x5D
SubVendor Id = 0x1028
SubDevice Id = 0x1F41
Host Interface = PCI-E
Device Interface = SAS-12G
Bus Number = 132
Device Number = 0
Function Number = 0
Domain ID = 0
Drive Groups = 2

TOPOLOGY :
========

-----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type   State BT      Size PDC  PI SED DS3  FSpace TR
-----------------------------------------------------------------------------
 0 -   -   -        -   RAID10 Optl  Y  18.190 TB dflt N  N   dflt N      N
 0 0   -   -        -   RAID1  Optl  Y  18.190 TB dflt N  N   dflt N      N
 0 0   0   30:0     47  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   1   30:1     49  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   2   30:2     45  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   3   30:3     43  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   4   30:4     53  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   5   30:5     37  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   6   30:6     52  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   7   30:7     54  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   8   30:8     39  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 0 0   9   30:9     35  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 1 -   -   -        -   RAID10 Optl  Y  43.661 TB dflt N  N   dflt N      N
 1 0   -   -        -   RAID1  Optl  Y  43.661 TB dflt N  N   dflt N      N
 1 0   0   31:0     32  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   1   31:1     44  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   2   31:2     42  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   3   31:3     51  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   4   31:4     41  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   5   31:5     48  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   6   31:6     34  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   7   31:7     46  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   8   31:8     50  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   9   31:9     33  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   10  31:10    40  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
 1 0   11  31:11    36  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
---------------------------------------------------------------------------
...
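As a side note, the enclosure IDs the controller actually reports can be scraped from the EID:Slot column of that TOPOLOGY table when scripting around perccli. A rough Python sketch (the parsing is my own approximation of the table layout, not anything perccli itself provides):

```python
import re

def enclosures_in_topology(topology_text):
    """Collect distinct enclosure IDs (EID) from perccli TOPOLOGY rows.

    Physical-drive rows carry an 'EID:Slot' token (e.g. '30:2') followed
    by a device ID and the word DRIVE; array/volume summary rows show
    '-' in that column and are skipped.
    """
    eids = set()
    for line in topology_text.splitlines():
        m = re.search(r"\b(\d+):\d+\s+\d+\s+DRIVE\b", line)
        if m:
            eids.add(int(m.group(1)))
    return sorted(eids)

sample = """\
 0 0   0   30:0     47  DRIVE  Onln  N   3.637 TB dflt N  N   dflt -      N
 1 0   0   31:0     32  DRIVE  Onln  N   7.276 TB dflt N  N   dflt -      N
"""
print(enclosures_in_topology(sample))  # [30, 31]
```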

I have 2 virtual disks set up on that controller, created using these perccli commands:

./perccli /c1 add vd type=raid10 name=XCHANGE drives=30:0,1,2,3,4,5,6,7,8,9 strip=64

and 

./perccli /c1 add vd type=raid10 name=REPO drives=31:0,1,2,3,4,5,6,7,8,9,10,11 strip=64
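For reference, the drives= argument is just the enclosure ID, a colon, and a comma-separated slot list, so it is easy to generate when scripting bulk VD creation. A small hypothetical helper, just to illustrate the format:

```python
def drives_spec(enclosure, slots):
    """Build the perccli 'drives=' argument for one enclosure: the
    enclosure ID, a colon, then comma-separated slot numbers."""
    slots = list(slots)
    if not slots:
        raise ValueError("at least one slot is required")
    return f"{enclosure}:" + ",".join(str(s) for s in slots)

print(drives_spec(30, range(10)))   # 30:0,1,2,3,4,5,6,7,8,9
print(drives_spec(31, range(12)))   # 31:0,1,2,3,4,5,6,7,8,9,10,11
```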

so clearly the perccli "add vd" command knows how to work with enclosures 30 and 31.

What is going on, and how can I make perccli show information for an individual disk on controller 1, enclosures 30 and 31?

I can show information on an individual disk on controller 0 using its enclosure number. I also have another R730 with the same 2 controllers, and on it I can query the disks on the external enclosure. By the way, the enclosures I am using with the PERC H830 controllers are PowerVault MD1200s; the only difference is that my problem server has 2 MD1200 boxes while the good one has only one.

I suppose I can try disconnecting one enclosure at a time and see if I can query disks then, but I am waiting for the new virtual disks to finish initializing before doing something like that.

 

Moderator


3.1K Posts

January 24th, 2021 20:00

Hi,

 

I'm unsure why there is an error when you run show on a specific disk on the enclosure. Let's try to identify the root cause.

 

Could you let us know how you have connected both MD1200s to the R730? Are you using the PERCCLI utility downloaded from our support site? I see that the RAID controller firmware is up to date (25.5.8). Is the server BIOS up to date?

 

What's the outcome after you disconnect one of the MDs and run the command?

14 Posts

January 25th, 2021 15:00

Hi @DELL-Joey C 

Thanks for responding.

In response to your questions:

-1) "Could you let us know how you connect both the MD1200 to the R730?" I have it connected in the "Fault-tolerant Asymmetric Cabling Scheme" as per figure 14 of https://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200-m3200i-cabling-guide.pdf (except, of course, using ports 0 and 1 of the PERC H830 on the R730 rather than RC0 and RC1 of an MD3200). I also tried the "Simple Cascade Cabling Scheme" as per figure 6, but the result is the same.

-2) "Are you using the PERCCLi utility downloaded from our support site?" Yes, and it is version 007.1327 (PERCCLI_4H10X_7.1327.0_A09_VMware), which is the most recent I can find on the Dell support website. On my other R730 and on my T630 I am running version 007.0529 (a 2-year-old version).

-3) "Is the server BIOS up to date?" Yes, it is version 2.11.0 (at least this was the most recent available 2 weeks ago).

-4) "What's the outcome after you have disconnect 1 of the MD and run the command?" The outcome was the same. I tried each enclosure, one at a time. For each test I shut down the R730, switched off the power supplies of the MD1200s, reconnected the cables in the new configuration, powered on the MD1200s, then powered on the R730.

New developments:

-1) It is not just the show commands that don't work; for instance, the following commands all fail with the same error:
./perccli /c1/e30/s10 spindown
./perccli /c1/e30/s10 add hotsparedrive dgs=0

Strangely, I can still create new virtual drives specifying the disks in each enclosure as described in my original post. For now, I am OK setting up and deleting virtual drives. But when the time comes that disks fail and I have to replace individual drives, I will need the commands that control individual disks: offline, spindown, set, copyback, hotsparedrive. Some of these, but not all, I can do in the iDRAC interface. Some, but not all, I can do with [Ctrl]+[R] at boot time, but once I get this server into production, I won't want to reboot it just to do disk maintenance.

-2) I have since added a second MD1200 to my other R730, and it has a slightly different outcome. On that server I am able to query individual disks on the first enclosure but not on the second. While I had that server shut down, I tried its cables (the ones from the R730 to the first MD1200) on the server that is the subject of this thread. It worked (as far as I remember) to connect to the first enclosure. I have reason to question this now, so I will have to wait until next weekend to test again. When I saw this I started suspecting that the cables were the issue, because the working cables are genuine Dell but the non-working ones are generics from Amazon: https://www.amazon.ca/gp/product/B08PCXGRFW/

-3) I have since added a single MD1200 to my T630 server using the same model of PERC H830 adapter. This one uses genuine Dell cables, part# 0FDPNX, and it works (I can query individual disks on the enclosure). But when I try those genuine cables on my problem server they do not work (I cannot query individual disks on the enclosure). Also, when I use the generic Amazon cables on my T630 server, they work too. So that works against my theory of the cables being the issue. Next weekend I will try again swapping cables between the 2 R730s to double-check my findings.

Meanwhile, Joey, I am interested if you can tell me the genuine Dell part number for the cables from one MD1200 to the other. I am currently using: https://www.amazon.ca/gp/product/B00S7KTXW6/

Moderator


3.1K Posts

January 25th, 2021 18:00

Hi,

 

I may have found the root cause of the issue. I don't think the MD1200 supports the H830; that might be why you're facing a communication issue. https://dell.to/3a1kmTD Page 7

 

If you're looking at Mini-SAS to Mini-SAS, the DPN# would be 171C5 (1 meter) or W390D (2 meters). But I am rather confused that you said you're using DPN# FDPNX on the T630; that's SAS HD to Mini-SAS.

 

Comparing the MD3200 cabling document with the MD1200's, the connections should be relatively the same, but I would like to confirm it by giving you the MD1200 documentation: https://dell.to/3a5lHJa Page 22.

 

You could try installing OMSA in ESXi; that could probably help to bring a disk offline or set other properties: https://dell.to/36u6IYn. That's something we can try, to find out if the system can communicate with a single disk.

14 Posts

January 26th, 2021 14:00

Hi Joey

Thanks again. Yes, the MD1200 does not support the H830, but the R730 also does not support the H810 controller. I have previously seen it written that, although not supported, the best working configuration for using the MD1200 with the R730 server is the H830 controller. There is no fully supported way to use the MD1200 on a 13th-gen server, but in practice it works. The article you linked is the first I've seen describing the use of the H810 on a 13th-gen server to migrate MD1200s from 12th gen. Anyway, if I have to switch to H810 controllers I might do it. But I may not have to.

Meanwhile, I discovered something else. Each of the three MD1200s that do not work for querying individual disks is at firmware version 1.01, and the 2 that do work are at versions 1.05 and 1.06. So I am hoping that if I upgrade the firmware on the bad three they will become good. The question is how. If I have to buy a used H810 to do the firmware update I'll do that, but I wonder whether it is possible to boot into Linux (Dell support Live Image 3.0), run the firmware 1.06 .bin file, and have it update through the H830. I will try, but I want to wait 2 more days: I have a big array on one enclosure that is 80% initialized after a week, and I will wait till initialization is complete before attempting to flash firmware.

Now, in answer to your questions and suggestion:

-1) "you said you're using DPN#FDPNX on the T630, that's SAS HD to Mini-SAS". Yes, the H830 has SAS HD ports.

-2) "You could try installing OMSA in ESXi". No way; I did that a couple of years ago and had some horrible issues with VMs and Veeam backup. I finally found this article, https://kb.vmware.com/s/article/74696 , uninstalled OMSA, and all was good again.

Moderator


3.1K Posts

January 26th, 2021 17:00

Hi,

 

Do try updating the EMM firmware to the latest. The firmware must be updated at the OS level; I'm unsure if the Live Image will work, but it's worth a try. As the MD1200 does not fully support the H830, I am unsure the firmware update can proceed. Let us know here, to share the knowledge with other users.

 

Thanks for sharing about OMSA 9.3 on ESXi; yes, it's because of the incompatibility. Here is OMSA 9.4: https://dell.to/3qU03yb, for reference just in case.

14 Posts

January 27th, 2021 17:00

Yes, it works to update the MD1200 firmware using the .bin file after booting Dell Live Image 3.0 from a USB stick, and yes, it works when the MD1200 is connected via a PERC H830. I ran the update; it said it was successful on both EMMs on both enclosures, and now when I look at the enclosures through the iDRAC interface, it says they are both at firmware version 1.06.

 

But unfortunately, that did not solve the problem. I have ordered an H810; we'll see when it arrives, hopefully next week, whether that is what makes the difference. It is really strange, though, that of my 5 MD1200s, perccli can query the individual enclosures (and disks) of only 2 of them; they are now all at the same firmware revision, and they are all using the same model of controller (PERC H830).

Moderator


3.1K Posts

January 28th, 2021 16:00

Hi,

 

There is something I would like to bring to your attention: the H810 is for 12G servers, so it may also experience compatibility issues: https://dell.to/3ckifgh.

14 Posts

February 17th, 2021 17:00

I tried the PERC H810 controller. Yes, that works: using it I can query individual disks and individual enclosures.

But one enclosure is full of 4Kn disks, and the H810 doesn't support 4Kn.

So I put the H830 controller back in. I suppose I'll try OMSA next.

14 Posts

February 17th, 2021 18:00

Hmm, that racadm storage command might be just what I need. Thanks Shine K; I'm just heading out the door now. I'll give it a try when I am back next week.

4 Operator


3K Posts

February 17th, 2021 18:00

You can try the racadm command with iDRAC to get storage details. You can refer to the link below for more details on this command:

https://www.dell.com/support/manuals/en-us/idrac8-lifecycle-controller-v2.70.70.70/idrac8_2.70.70.70_racadm/storage?guid=guid-9e3676cb-b71d-420b-8c48-c80add258e03&lang=en-us 

14 Posts

February 21st, 2021 17:00

So now my question to Joey C and Shine K: if I invest in replacing my four MD1200s with four MD1400s (2 for each server), will it work? That should be all supported hardware: 13th-gen servers with PERC H830 and MD1400. (I'll take my chances with any non-Dell disks in the enclosures; right now it is about 50% Dell and 50% non-Dell.)

14 Posts

February 21st, 2021 17:00

Unfortunately, racadm has the same problem. Like perccli, it all seems to work if there is only one MD1200 enclosure connected to a controller, but with more than one, it fails.

Following is on a system with only one MD1200:

 

 

[root@ccv-vmware-stone:/opt/lsi/perccli64] racadm storage get enclosures
Enclosure.Internal.0-1:RAID.Slot.8-1
Enclosure.External.0-0:RAID.Slot.6-1

[root@ccv-vmware-stone:/opt/lsi/perccli64] racadm storage get enclosures -o
Enclosure.Internal.0-1:RAID.Slot.8-1
   State                            = Ready
   Status                           = Ok
   DeviceDescription                = Backplane 1 on Connector 0 of RAID Controller in Slot 8
   RollupStatus                     = Ok
   Name                             = BP13G+EXP 0:1
   BayId                            = 1
   FirmwareVersion                  = 3.35
   SasAddress                       = 0x500056B37E7BAAFD
   SlotCount                        = 18
Enclosure.External.0-0:RAID.Slot.6-1
   State                            = Unknown
   Status                           = Ok
   DeviceDescription                = Enclosure 0 on Connector 0 of RAID Controller in Slot 6
   RollupStatus                     = Ok
   Name                             = MD1200 0:0
   EnclosurePosition                = 0
   ConnectedPort                    = 0
   FirmwareVersion                  = 1.05
   ServiceTag                       =
   AssetTag                         =
   RedundantPath                    = Present
   SasAddress                       = 0x500C04F20B41F73D
   SlotCount                        = 0

 

 

The first enclosure found is the internal backplane of the T630 server, on the PERC H730 controller.

The second enclosure found is the single MD1200 connected to the PERC H830 controller. The details for that enclosure correctly identify it as an MD1200.

Also, if I do a racadm storage get pdisks, it lists all disks on the server, including those on the external MD1200.
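Since I ended up comparing this output across servers, here is a rough Python sketch of how the -o listing can be turned into per-enclosure property dicts. The parsing rule is my own assumption about the layout: unindented lines are FQDDs, indented "key = value" lines are that enclosure's properties.

```python
def parse_enclosures(output):
    """Parse 'racadm storage get enclosures -o' text into a dict of
    {FQDD: {property: value}}. Unindented lines start a new enclosure;
    indented 'key = value' lines are its properties."""
    enclosures, current = {}, None
    for line in output.splitlines():
        if not line.strip():
            continue
        if not line[0].isspace():
            current = line.strip()
            enclosures[current] = {}
        elif "=" in line and current is not None:
            key, _, value = line.partition("=")
            enclosures[current][key.strip()] = value.strip()
    return enclosures

sample = """\
Enclosure.External.0-0:RAID.Slot.6-1
   State                            = Unknown
   Name                             = MD1200 0:0
   FirmwareVersion                  = 1.05
"""
encs = parse_enclosures(sample)
print(encs["Enclosure.External.0-0:RAID.Slot.6-1"]["Name"])  # MD1200 0:0
```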

 

Now on a system with 2 MD1200s:

 

 

[root@ROK-VmWare:/opt/lsi/perccli64] racadm storage get enclosures
Enclosure.Internal.0-1:RAID.Integrated.1-1
Enclosure.Internal.1-0:RAID.Slot.3-1

[root@ROK-VmWare:/opt/lsi/perccli64] racadm storage get enclosures -o
Enclosure.Internal.0-1:RAID.Integrated.1-1
   State                            = Ready
   Status                           = Ok
   DeviceDescription                = Backplane 1 on Connector 0 of Integrated RAID Controller 1
   RollupStatus                     = Ok
   Name                             = BP13G+EXP 0:1
   BayId                            = 1
   FirmwareVersion                  = 3.35
   SasAddress                       = 0x500056B36B5133FD
   SlotCount                        = 16
Enclosure.Internal.1-0:RAID.Slot.3-1
   State                            = Unknown
   Status                           = Unknown
   DeviceDescription                = Backplane 0 on Connector 1 of RAID Controller in Slot 3
   RollupStatus                     = Unknown
   Name                             = Enclosure 1:0
   BayId                            = 0
   FirmwareVersion                  =
   SasAddress                       = Not applicable
   SlotCount                        = 0

 

 

The first enclosure found is the internal backplane of the R730 server, on the PERC H730 controller.

The second enclosure found is all wrong: it is listed as Enclosure.Internal.1-0:RAID.Slot.3-1, but there is no internal enclosure on that controller, and it should show 2 external enclosures instead. Also, the details for this phantom internal enclosure on the PERC H830 controller list certain properties as "Unknown" and "Not applicable".

Also, if I do a racadm storage get pdisks, it lists only the internal disks, and none of the disks on the PERC H830.

 

So it looks like it just doesn't work with 2 enclosures, although most things do work. In perccli I can list all disks on both MD1200s, and I can create and destroy virtual disks using those disks; I just can't assign individual disks as hot spares for specific virtual disks, can't spin down an individual disk, or do any other operation on an individual disk.

I have another R730 server that has had a single MD1200 for 2 years. The virtual disks on it are in production use, every day, all day. I have now added a second MD1200 to this server, and it has the same problem as above: I can't do any operations on individual disks. Before adding the second MD1200 I could address individual disks on the single MD1200. I haven't yet created any virtual disks on the second MD1200, but the existing virtual disks are still working, all day, every day.

Moderator


3.1K Posts

February 21st, 2021 18:00

Hi,

 

Well, per the MD1400 guides, it should work: the MD1400 supports 13G and 14G servers (https://dell.to/3bxVgN4), and it supports the H830 (https://dell.to/3uke7nd).

14 Posts

February 22nd, 2021 18:00

< Admin removed the spoiler tag>

Update: racadm does work!

I have not yet tested all required functions but I am more hopeful now. 

Before giving up I tried one more thing: I changed the cabling. I had it set up in the "Fault-tolerant Asymmetric Cabling Scheme" as per figure 14 of https://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200-m3200i-cabling-guide.pdf , so I changed it to the "Simple Cascade Cabling Scheme" as per figure 6. This change makes no difference whatsoever to perccli, but it does make a difference to racadm: racadm now shows me all of the disks on all enclosures.

I found that blink does not work:

 

[root@ROK-VmWare:/opt/lsi/perccli64] racadm storage blink:Disk.Bay.10:Enclosure.External.0-0:RAID.Slot.3-1
ERROR: STOR006 : Unable to complete the operation.
Retry the operation. If the issue persists, contact your service provider.

 

although it does work on disks on the internal backplane of the R730 server.

The important thing for me is that turning a disk into a dedicated hot spare does work. It is convoluted, because I then have to create the jobqueue task and run it, but yes, it works!
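For anyone following along, the two-step flow (stage the hot-spare assignment, then queue the job that applies it) can be scripted as command strings. The FQDDs below are examples patterned after the ones shown earlier in this thread (the virtual-disk FQDD is hypothetical), and the exact hotspare/jobqueue syntax should be double-checked against the RACADM guide for your iDRAC version:

```python
def dedicated_hotspare_commands(pd_fqdd, vd_fqdd, controller_fqdd):
    """Return the two racadm invocations that make pd_fqdd a dedicated
    hot spare for vd_fqdd: stage the change, then create the job that
    commits it. Syntax modeled on the iDRAC8 RACADM storage docs;
    verify against your own iDRAC version before running."""
    return [
        f"racadm storage hotspare:{pd_fqdd} -assign yes -type dhs -vdkey:{vd_fqdd}",
        f"racadm jobqueue create {controller_fqdd}",
    ]

# Example FQDDs based on this thread; Disk.Virtual.0 is an assumed VD name.
cmds = dedicated_hotspare_commands(
    "Disk.Bay.10:Enclosure.External.0-0:RAID.Slot.3-1",
    "Disk.Virtual.0:RAID.Slot.3-1",
    "RAID.Slot.3-1",
)
for c in cmds:
    print(c)
```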

14 Posts

February 22nd, 2021 18:00

Why does my post above have a title of "Spoiler" that I have to click to view the content? Weird.
