Start a Conversation

April 5th, 2014 04:00

Failed and degraded virtual disks on Dell T110 II RAID 1 PERC S300

We have a Dell T110 II server, factory installed with Microsoft Windows Server 2012, Standard x64 Edition, running PERC S300 RAID Controller (1068E HBA).

It came installed with RAID 1 on the 500GB operating system disk. We've since added another 1TB data disk.

OMSA is reporting both a virtual disk failure and a degraded disk (see screen shots).

So my questions are:

  • Should I replace the physical disks? If so, which one(s) are at fault?
  • How can I fix the virtual disk failure / degraded issues?

In anticipation of a few questions, I know an S300 isn't a great RAID solution (nothing I can do about that right now), and I have several full backups!


3 Posts

April 5th, 2014 12:00

Hi Daniel,

I get to the command line, but after entering 'racadm racreset' I get an error:

"A firmware update is currently in progress. Unable to reset the RAC at this time."

All services are stopped, and I've waited a few minutes. I didn't start a firmware update myself, so I assume this was triggered automatically?

Can you advise?

Thanks

Moderator

6.2K Posts

April 5th, 2014 12:00

Hello James,

Before we start rebuilding, deleting virtual disks, or replacing drives, let's refresh OMSA. Since the S300 does not have a controller log, we cannot be sure exactly what is going on, and it is not uncommon for OMSA to report the status of an array incorrectly. To get OMSA to pull current data, perform these steps:

  1. Close OMSA
  2. Open services.msc and stop the OMSA services. There should be about four, and they all start with "DSM"
  3. Open a command line and run: racadm racreset
  4. Wait about 2 minutes for the iDRAC/BMC to restart
  5. Start the OMSA services
  6. Open OMSA again
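For reference, steps 2–5 can also be run from an elevated command prompt. This is only a sketch: the DSM service names below are the usual OMSA defaults and can differ between OMSA versions, so confirm them in services.msc first.

```shell
:: Stop the OpenManage (DSM SA) services. The names below are the
:: common defaults and may vary by OMSA version -- check services.msc.
net stop "DSM SA Data Manager"
net stop "DSM SA Event Manager"
net stop "DSM SA Connection Service"
net stop "DSM SA Shared Services"

:: Reset the BMC/iDRAC so OMSA re-reads hardware state on restart
racadm racreset

:: Give the BMC about two minutes to come back up
timeout /t 120

:: Restart the services
net start "DSM SA Shared Services"
net start "DSM SA Connection Service"
net start "DSM SA Event Manager"
net start "DSM SA Data Manager"
```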

If it still reports the same status of the drives then let us know. I suspect it is a reporting issue since the status of all of the drives in the physical disk list is online.

Thanks

Moderator

6.2K Posts

April 5th, 2014 12:00

Go ahead and start the services back up, then open OMSA and see if it is reporting correctly.

There may have been a firmware update run previously that is pending a restart. If OMSA still does not report correctly then restart the server during your next maintenance window.

Thanks

3 Posts

April 5th, 2014 14:00

Thanks for your help.

I did as you said but OMSA still reports it incorrectly.

I've restarted and followed the original instructions again with the same outcome. Then restarted and tried a third time, but again with the same outcome.

What would you suggest now?

Thanks

Moderator

6.2K Posts

April 5th, 2014 15:00

I was hoping it would correct itself by restarting, but that does not appear to be the case. Let me explain what the controller has done, and then I'll suggest a way of fixing it.

You have two 500GB drives in a RAID 1, plus a 1TB drive in non-RAID. The controller has also put one of the 500GB drives into a non-RAID status, so that drive is a member of two arrays. This is causing a conflict in the controller.

To correct the problem, delete the 500GB non-RAID virtual disk. I don't think you will lose data by doing this, but back up anything important just in case. You may need to restart afterward. If the RAID 1 is still degraded, take disk 0 offline, then set it as a hot spare and let it rebuild into the RAID 1.

Drive 0 appears to be the problem disk. I don't see anything to suggest a physical failure. It looks like the controller has just duplicated it.
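If you prefer the OMSA command line to the web interface, the sequence above can be sketched with omreport/omconfig. The controller, vdisk, and pdisk IDs used here (0, 1, and 0:0) are assumptions for illustration — confirm the real IDs with omreport before running anything:

```shell
:: List virtual and physical disk IDs first -- the IDs below are assumed
omreport storage vdisk controller=0
omreport storage pdisk controller=0

:: Delete the stray 500GB non-RAID virtual disk (vdisk ID assumed to be 1;
:: verify with omreport first -- this destroys that virtual disk)
omconfig storage vdisk action=deletevdisk controller=0 vdisk=1

:: If the RAID 1 is still degraded, take disk 0 offline...
omconfig storage pdisk action=offline controller=0 pdisk=0:0

:: ...then assign it as a global hot spare so it rebuilds into the RAID 1
omconfig storage pdisk action=assignglobalhotspare controller=0 pdisk=0:0 assign=yes
```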

Thanks
