I have a PowerEdge T110 II with a PERC S100 controller and a RAID 1 array consisting of 2 hard disks.
Every couple of months, Dell OpenManage will log that disk 0 or disk 1 has 'failed'. The array becomes degraded and the 'failed' disk is shown as 'removed' in OpenManage.
After rebooting the server, Dell OpenManage shows the 'failed' disk with a status of 'ready'. I can then assign the disk as a hot spare and the array rebuilds automatically.
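For reference, the same check-and-rebuild steps can be done from the OpenManage Server Administrator command line. The controller and disk IDs below are examples only, and the exact syntax can vary between OMSA versions, so verify against your installed OMSA CLI guide:

```shell
# List physical disks on controller 0 and their current states
# (e.g. Online, Ready, Degraded, Failed)
omreport storage pdisk controller=0

# Assign the recovered disk (example ID 0:0:1) as a global hot spare,
# which should trigger the automatic rebuild of the degraded RAID 1
omconfig storage pdisk controller=0 pdisk=0:0:1 action=assignglobalhotspare assign=yes
```

This avoids clicking through the OpenManage web UI each time, and `omreport` output can be captured to a file for comparison across failure events.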
When this first happened (with disk 1), the array was rebuilt and the motherboard (with the onboard RAID controller) was replaced. Dell Support reviewed the DSET logs and found no disk errors.
The second time it happened, it was disk 0. The RAID array was deleted and recreated and the OS reinstalled - a major inconvenience (Dell Support suggested it could have been caused by a bad stripe).
The third time, it was disk 0 again. That disk was replaced, and so was the motherboard, again. Since then disk 1 has 'failed', and this week disk 0 'failed'.
I'm not convinced this really is a disk issue. Is this a known problem, or has anyone experienced the same thing and managed to resolve it?
The first thing I would do is get the array back to optimal, then update the controller to the current release. Here is the latest update available.
Let me know how it goes.
The array is up and running again. The controller was already up-to-date.
OpenManage shows driver version 2.0.0.0162
Device Manager shows the Storage Controllers as: "Dell PERC S100 S300 Configuration Device [storport]" and "Dell PERC S100 S300 Controller [storport]" with driver version 220.127.116.11
Device Manager also shows Disk Drives as: "Dell PERC S100/S300 SCSI Disk Device" with driver 6.1.7600.16385
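One way to cross-check those driver versions from a command prompt, rather than Device Manager, is a WMI query (run on the Windows server itself; the `%PERC%` filter is an example pattern):

```shell
REM List the installed driver versions for any device whose name contains "PERC"
wmic path Win32_PnPSignedDriver where "DeviceName like '%PERC%'" get DeviceName,DriverVersion
```

The versions reported there should match what Device Manager shows, which makes it easy to paste the results into a support ticket instead of screenshots.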
Is all that as expected?