slimmons
11 Posts
March 29th, 2017 22:00
Actually, that's what's making me nervous. This is a 6-disk RAID. The one labeled Hotspare is the new drive I just put in and then assigned as a hot spare.
The one labeled Rebuild... I have no idea why it's in that state. It's acting fine. Since there's no documentation on what the states mean (that I can find), I was assuming that the one labeled Rebuild was rebuilding onto the hot spare? No idea, but this is a 6-disk RAID 5.
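For reference, the physical-disk states that come up in this thread can be summarized roughly as follows. This is a sketch based on general PERC controller documentation, not an exhaustive or authoritative list:

```python
# Rough meanings of common PERC physical-disk states (summary sketch,
# not official documentation; consult the PERC user's guide for detail).
PD_STATES = {
    "Online":   "member of a virtual disk and functioning normally",
    "Ready":    "healthy but unconfigured; not part of any virtual disk",
    "Hotspare": "standing by to take over automatically if an array member fails",
    "Rebuild":  "data is being reconstructed onto this disk from the rest of the array",
    "Failed":   "the controller has taken the disk offline",
}

for state, meaning in PD_STATES.items():
    print(f"{state}: {meaning}")
```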
theflash1932
9 Legend
16.3K Posts
March 29th, 2017 22:00
Yes, it is rebuilding. Looks like you had a hot-spare that stepped in to do its job.
Not "super" critical with drives that are not really big, but I would still recommend unassigning your hot-spare and converting the 5-disk RAID 5 to a 6-disk RAID 6.
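The reasoning behind that recommendation can be sketched as a simple fault-tolerance comparison (a minimal sketch; the layout names are illustrative, not controller terminology):

```python
# Why RAID 6 over RAID 5 + hot spare: how many *simultaneous* drive
# failures each layout survives without data loss. A hot spare only
# helps after a rebuild completes, so it doesn't add tolerance during
# the failure window itself.
TOLERANCE = {"raid5+hotspare": 1, "raid6": 2}

def survives(layout, failures):
    # True if the array keeps its data with this many concurrent failures
    return failures <= TOLERANCE[layout]

print(survives("raid5+hotspare", 2))  # False: a second failure mid-rebuild loses the array
print(survives("raid6", 2))           # True
```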
slimmons
11 Posts
March 29th, 2017 23:00
To try and be clear, I'll step through exactly what I've done.
I have an R710 with a 6-disk RAID 5. I noticed that two of the drives were blinking amber/green (drive 3 and drive 5), which means they are approaching failure. I know I can only replace one at a time, so I went ahead and powered off the server (it's not mission critical, and it's a hypervisor with only a few VMs on it). I pulled out disk 5 (the one currently in the "Hotspare" state) and put in a new hard drive of the exact same size and model. When I rebooted the server, I noticed nothing was happening, and the drive was labeled "Ready". I read online that I needed to change it to "Hotspare" and that it would then begin to rebuild. When I rebooted the machine and went into the RAID controller's Physical Disk Management, I noticed that drive 8, which had no warning lights and was functioning fine, was in a "Rebuild" state. I moved drive 5 into the Hotspare state and restarted. Now drive 5 (the new drive, in the Hotspare state) has two solid green lights.
To be clear, drive 3 is still amber/green, and I plan on replacing it next. Drive 8, which doesn't have any warning lights and appears to be acting normally (but is for some reason in a "Rebuild" state), is what's confusing me.
theflash1932
9 Legend
16.3K Posts
March 29th, 2017 23:00
Rebuild means it is rebuilding.
You say it is a 6-disk RAID 5 ... but with one already offline (hot-spare), it would not be possible for one to be rebuilding. Are you sure it is not a RAID 6 already?
Can you screenshot the VD MGMT screen? You might confirm the status from the OS using OMSA, particularly if the firmware is out of date.
Just a friendly piece of advice ... you should never power down to replace or introduce a hot-swappable drive.
slimmons
11 Posts
March 29th, 2017 23:00
I can't send a screenshot now, but I will tomorrow. That's why I'm so confused... this doesn't seem possible. It's a 6-disk RAID 5, unless I looked at it incorrectly today, but I really don't think I did, as I looked at it several times to make sure I wasn't going crazy.
theflash1932
9 Legend
16.3K Posts
March 30th, 2017 00:00
Another way to know is that a 5-disk RAID 5 with a hot-spare and a 6-disk RAID 6 would both give you about 2.2TB of disk space, versus 2.7TB in a 6-disk RAID 5.
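Those capacity figures can be reproduced with quick arithmetic. The 600 GB drive size below is an assumption (the thread never states it), chosen because it matches the ~2.2TB and ~2.7TB figures:

```python
# Usable capacity of the layouts being compared, assuming 600 GB drives
# (an assumption; the thread never gives the drive size).
DRIVE_TB = 600e9 / 2**40  # 600 GB expressed in TiB, ~0.546

def raid5_usable(n):
    return (n - 1) * DRIVE_TB  # one disk's worth of parity

def raid6_usable(n):
    return (n - 2) * DRIVE_TB  # two disks' worth of parity

print(f"5-disk RAID 5 + hot spare: {raid5_usable(5):.1f} TiB")  # ~2.2
print(f"6-disk RAID 6:             {raid6_usable(6):.1f} TiB")  # ~2.2
print(f"6-disk RAID 5:             {raid5_usable(6):.1f} TiB")  # ~2.7
```

The hot spare contributes nothing to capacity, which is why the two safer layouts come out identical and the 6-disk RAID 5 gains roughly half a terabyte at the cost of redundancy.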
slimmons
11 Posts
March 30th, 2017 07:00
Turns out I was freaking out over nothing. It was a 5-disk RAID 5, and it already had a hot spare. I got confused when I restarted the server, because all 6 drives had a state that wasn't Hotspare. I guess I have learned my lesson not to restart the server while repairing a RAID. Thanks for the help.