We have a Dell R900 with two 1 GB SAS drives in a mirror. One of the two drives failed, so we purchased a new drive, installed it, and started a rebuild. After several days OpenManage was still showing the status of the new drive as rebuilding, 0% complete. Curious, I exported the RAID controller's log file, and it shows that the drive actually completed the rebuild successfully and is in optimal status. I tried restarting OpenManage to see if that would refresh the status, but it had no effect.

Has anyone seen something like this, where incorrect information is persistently displayed? The rebuild completed three days ago, but according to OpenManage nothing has happened. Is there anything that can be done, other than restarting OpenManage, to get it to re-poll the hardware? I'm pasting the portion of the log file showing the rebuild completing below.
01/14/14 0:32:52: EVT#42176-01/14/14 0:32:52: 103=Rebuild progress on PD 01(e0x20/s1) is 95.94%(11314s)
01/14/14 0:35:58: EVT#42177-01/14/14 0:35:58: 103=Rebuild progress on PD 01(e0x20/s1) is 96.94%(11500s)
01/14/14 0:39:06: EVT#42178-01/14/14 0:39:06: 103=Rebuild progress on PD 01(e0x20/s1) is 97.94%(11688s)
01/14/14 0:42:22: EVT#42179-01/14/14 0:42:22: 103=Rebuild progress on PD 01(e0x20/s1) is 98.94%(11884s)
01/14/14 0:45:40: EVT#42180-01/14/14 0:45:40: 103=Rebuild progress on PD 01(e0x20/s1) is 99.94%(12082s)
01/14/14 0:45:50: EVT#42181-01/14/14 0:45:50: 99=Rebuild complete on VD 00/0
01/14/14 0:45:50: EVT#42182-01/14/14 0:45:50: 100=Rebuild complete on PD 01(e0x20/s1)
01/14/14 0:45:52: EVT#42183-01/14/14 0:45:52: 114=State change on PD 01(e0x20/s1) from REBUILD(14) to ONLINE(18)
01/14/14 0:45:52: EVT#42184-01/14/14 0:45:52: 81=State change on VD 00/0 from DEGRADED(2) to OPTIMAL(3)
01/14/14 0:45:52: modify_log_drv_state: oldState: 2 newState: 3 pinned_cache_present: 0 targetId: 0
01/14/14 0:45:52: EVT#42185-01/14/14 0:45:52: 249=VD 00/0 is now OPTIMAL
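For anyone comparing notes: the controller's own view can also be queried from the command line with the OMSA CLI, bypassing whatever the web interface is displaying. This is a sketch, not a definitive procedure; the controller ID 0 is an assumption, so substitute whatever ID `omreport storage controller` lists for your PERC 6/i.

```shell
# List controllers to find the right ID (assumed to be 0 below)
omreport storage controller

# Physical-disk state as reported by a fresh query, e.g. whether
# PD 01 shows Online rather than Rebuilding
omreport storage pdisk controller=0

# Virtual-disk state; per the exported controller log this should be OPTIMAL
omreport storage vdisk controller=0
```

If the CLI agrees with the controller log but the web interface does not, that points at stale state in the OpenManage services rather than at the controller itself.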
It's a Seagate enterprise drive. The part number is the same as that of the drive it replaced, but it does not have a Dell label.
We were able to resolve the issue by rebooting the machine. (The server hosts numerous live VMs, so the reboot had to be scheduled carefully.) It looks like what happened is that even though the RAID controller was working fine, OpenManage lost the ability to see the second connector on the PERC 6/i controller. What's odd is that there was nothing wrong with the connector or the drives attached to it (this was verified from the controller log files), and OpenManage never showed an error about it: according to the web interface, the status of the controller was green.
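If a full reboot is hard to schedule, two lighter-weight steps may be worth trying first; whether they would have refreshed a stale connector view like this one is not guaranteed. The controller ID and the service-script path are assumptions for a typical Linux OMSA install.

```shell
# Ask OMSA to rescan the controller for configuration changes
# (controller ID 0 is an assumption; check with: omreport storage controller)
omconfig storage controller action=rescan controller=0

# Restart the entire OMSA service stack, not just the web GUI --
# the back-end data-manager services are what cache hardware state
/opt/dell/srvadmin/sbin/srvadmin-services.sh restart
```

Restarting only the web service leaves the back-end services (and their cached view of the hardware) untouched, which may explain why restarting OpenManage alone had no effect.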