
1 Rookie

 • 

17 Posts

February 10th, 2024 02:32

PowerEdge R610 iDRAC6 IPMI Drive Status

I have a Dell PowerEdge R610 with a PERC 6/i RAID controller and iDRAC6 enabled for IPMI, and I am monitoring the server status remotely. Specifically, I am monitoring the drive status via ipmitool and am having trouble getting the proper status when the first drive in the RAID array is faulted.

The utility I am using is ipmitool, and I am running the following command:

ipmitool -I lanplus -H <ip address> -U <user> -P <password> sdr elist

After setting up a RAID 1 array with two drives and completing an initialization, the above command shows the value I would expect, 'Drive Present':

Drive            | 80h | ok  | 26.1 | Drive Present

When I fault the second drive by unplugging it, I get the following message, which reports the correct status:

Drive            | 80h | ok  | 26.1 | Drive Present, In Critical Array

Then, after restoring the drive and letting the array rebuild (or recreating the array) and faulting the first drive by unplugging it, I get the following result, where the status part of the message is blank. I would expect the same 'In Critical Array' message as above:

Drive            | 80h | ok  | 26.1 |

In this last case, the PERC6/i utility properly shows that the array is in a degraded state. 
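If it would help with troubleshooting, I can also dump the full record for that sensor over the same interface, which should include the discrete state bits behind the status string (I am assuming 'Drive' is accepted as the sensor name, since that is how it appears in the listing above):

ipmitool -I lanplus -H <ip address> -U <user> -P <password> sdr get 'Drive'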

I have tried multiple drives, different bays, and different RAID levels (RAID 1 with 2 drives, RAID 10 with 4 drives), as well as leaving the system sitting at the PERC 6/i configuration screen, booting into ESXi, and booting into Ubuntu, and have seen the same result each time.

Why is the drive status reported via IPMI not reported correctly when the first drive in the array is faulted?

Any help or clarification on this issue would be greatly appreciated!
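For reference, the check on the monitoring side boils down to something like this (a rough sketch rather than my exact script; the awk field index is just based on the pipe-delimited output above):

status=$(ipmitool -I lanplus -H <ip address> -U <user> -P <password> sdr elist | awk -F'|' '/^Drive/ {print $5}' | xargs)
# Alert only when the status explicitly reports a critical state;
# the blank field in the first-drive case matches nothing, so no alert is raised
if echo "$status" | grep -qi 'critical'; then
    echo "ALERT: drive status: $status"
fi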

1 Rookie

 • 

17 Posts

February 13th, 2024 23:24

Hi Erman,

The issue does not seem to follow bay 0, as I tried creating a RAID 1 array using bay 1 and bay 2. Additionally, I tried switching drives and using bay 2 and bay 3. In each case the issue occurred in the first populated bay.

Since I have free drive bays, I plan to use a hot spare in bay 0 and create the array in subsequent bays.
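Once that is in place, I will fault each array member in turn and re-run the same check to confirm the status column is populated in both cases, e.g.:

ipmitool -I lanplus -H <ip address> -U <user> -P <password> sdr elist | grep -i drive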

Moderator

 • 

2.3K Posts

February 12th, 2024 10:04

Hello, this is indeed an odd issue. Ideally, the iDRAC should report the status of all drives in the array correctly. It seems most likely related to the iDRAC firmware version: since ipmitool talks to the iDRAC, a firmware-specific issue should be fixed in a newer version. Another possible cause is the firmware of the PERC 6/i RAID controller. It might be beneficial to check whether there are any updates available for these components.
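If you want a quick way to confirm the iDRAC firmware revision remotely before updating, ipmitool can read it from the management controller over the same interface you are already using (this is a generic ipmitool command, not Dell-specific):

ipmitool -I lanplus -H <ip address> -U <user> -P <password> mc info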

Please also try the steps below:

  1. Clear flea power (unplug power and hold the power button for 20 seconds).
  2. Boot to F2 BIOS Configuration Utility and perform a soft reset (press Alt+E, Alt+F, Alt+B, and the system will reboot).
  3. Boot into the Ctrl+E iDRAC configuration utility and reset it to default (an alternative you can run from the OS is sketched just below this list).
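For step 3, if you would rather not go through the Ctrl+E screen, the same reset can be done with the racadm utility, assuming Dell OpenManage / DRAC Tools is installed on the host OS (note that this wipes the iDRAC configuration back to factory defaults):

racadm racresetcfg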

 

Hope that helps!

1 Rookie

 • 

17 Posts

February 13th, 2024 11:43

Hi Erman,

Thank you for your reply and suggestions. I have tried clearing the flea power and performing a soft reset, but I would prefer not to reset my iDRAC configuration.

Both the iDRAC firmware and the PERC 6/i firmware are the latest versions I can find on the support page: v2.92 and v6.3.3-0002, respectively.

It seems that if I put a hot spare in bay 0 and my array drives in the subsequent bays, the array status is reported correctly when either drive fails.

Moderator

 • 

2.3K Posts

February 13th, 2024 12:48

Thanks for your feedback. It’s interesting that the issue seems to be resolved when you put a hot spare in bay 0 and your array drives in the subsequent bays. This suggests the issue may be related to how the iDRAC or the PERC 6/i controller handles drive status reporting for the first bay. If the workaround of using a hot spare in bay 0 is not causing any other issues and the array status is reported correctly, you might want to continue with this setup.

Moderator

 • 

3.9K Posts

February 14th, 2024 04:06

Hello, from what I see, the disk might not be the issue; I suspect it could be the backplane or the PERC.
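If you want to check whether the iDRAC can still see the backplane, the FRU inventory can be pulled over the same interface (on these systems the backplane often shows up as one of the FRU records, though the exact naming varies):

ipmitool -I lanplus -H <ip address> -U <user> -P <password> fru print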
Respectfully,
