RAID1 physical drive 0:1:1 replaced, new drive comes up as "Ready" - now drive 0:1:0 has an error
I have a customer site where a RAID1 mirror has been running for a long time. Recently the second physical drive in the RAID1 pair (0:1:1) failed and was replaced with a new drive. The new drive did not auto-rebuild but was flagged as "Ready".
Now the first physical drive (0:1:0) has reported a single error, at one block location, in /var/log/messages. The server runs CentOS 7, and the iDRAC interface also shows the drive as having the error (screenshots attached below this text). The server is running happily (for now), so what I propose is to change the 0:1:1 drive from "Ready" to "Global Online Spare", in the hope that the array rebuilds and we at least have a healthy new drive running in 0:1:1.
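For reference, the kernel's read-error entry in /var/log/messages usually looks something like the hypothetical line below (the device name, sector, and exact wording vary by kernel and driver). Grepping the real log for it is a quick way to confirm whether the error is a one-off or recurring:

```shell
# Hypothetical log line -- the real entry in /var/log/messages will differ
# in device name, sector number, and exact wording.
line='kernel: sd 0:1:0:0: [sda] Unrecovered read error - auto reallocate failed'

# Against the real log you would run something like:
#   grep -cE 'Unrecovered read error|medium error' /var/log/messages
# A count of 1 matches a single-block error; a growing count over time
# means the drive is deteriorating.
echo "$line" | grep -cE 'Unrecovered read error|medium error'   # -> 1
```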
The customer has purchased a second drive, which I will then have them swap in for 0:1:0; hopefully the RAID1 will auto-rebuild and we will be back to running correctly with two healthy drives. If this second replacement also comes up as "Ready", as the first one did, I am assuming I can likewise set it as a Global Online Spare for the other drive?
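For what it's worth, the "Ready, then hot spare, then rebuild" step can also be driven from the OS with the PERC CLI rather than through iDRAC. A sketch below, written as a dry run (the commands are echoed, not executed), because the controller number and enclosure:slot IDs used here (/c0 and e32/s1) are assumptions you would confirm first with `perccli /c0/eall/sall show`:

```shell
# Dry run: echo the commands instead of executing them, since the
# controller (/c0) and enclosure/slot (e32/s1) are assumed values.
ctrl=/c0
spare=$ctrl/e32/s1     # hypothetical location of the replaced ("Ready") drive

echo "perccli $ctrl show"                  # VD/PD overview: array state, per-drive states
echo "perccli $spare add hotsparedrive"    # Ready -> global hot spare; degraded RAID1 should start rebuilding
echo "perccli $spare show rebuild"         # check rebuild progress afterwards
```

Once the IDs are confirmed, the same three commands can be run without the `echo` wrappers.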
I need to wait until I have full rsync backups of the hosted servers (this CentOS 7 server runs VirtualBox, hosting several Linux VMs and one Windows Server 2016 VM) before I risk any potential problems.
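One caution on those backups: rsync of a live VirtualBox disk image can give an inconsistent copy, so each VM should be saved or shut down first. A dry-run sketch (the VM name, storage path, and backup target below are all placeholders):

```shell
# Dry run: print the commands rather than run them. The VM name
# ("web-vm"), storage path, and backup destination are placeholders.
vm="web-vm"
src=/var/vbox/
dst=backup-host:/srv/vbox-backup/

echo "VBoxManage controlvm $vm savestate"       # pause the VM and flush its disk state
echo "rsync -aAXH $src $dst"                    # consistent copy of the .vdi/.vbox files
echo "VBoxManage startvm $vm --type headless"   # resume the VM afterwards
```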
I would sincerely value any input on whether this is a safe way to go. At present a fresh build and restore of the huge amount of 'stuff' is off the table: the customer doesn't want downtime, as the site is moving premises shortly, and we don't want to power off the box before the move, relocate it, and then boot into RAID/HD issues at the new location.
Screenshots attached for your perusal.
Many thanks in advance.