A little about what I've already tried today: the server is a PowerEdge T420. Nothing has changed with the config, and the H710 is on the current firmware version. The BIOS was one version behind, so I upgraded it to 2.7.0 to see if that corrected the issue; it didn't.
I have a RAID1 with 2 Dell-branded Seagate Savvio 10k drives (part #9FK066-150) functioning as the system/boot volume for our Hyper-V host, running Server 2016.
Four other drives form a RAID10 that hosts the actual VMs. The server and the office are 100% down right now because one of the RAID1 drives listed above crashed. Normally that wouldn't be a problem, but for some reason it has also put the second drive in the RAID1 array into a "blocked" state.
Two-part question:
1) How can I bring the one functioning RAID1 drive back up so I can at least get things running over the holiday weekend?
2) I have never had to rebuild a Hyper-V host because of a failure, only for upgrades. In theory I should be able to replace both drives with like-for-like spares I have on the shelf, rebuild the RAID1 array with Server 2019 and the Hyper-V role, and the VM configs and VHDs should still be intact over on the RAID10 array. I also have 2 600GB Dell drives in a newer server I just bought, and I'd be willing to go that route if it would be easier.
Any information/help is MUCH appreciated! Thank you.
Just to confirm a couple things.
Is the failed drive still in the server, or has it been removed?
What firmware version are the drives themselves on? You mentioned the server firmware is up to date.
Lastly, where exactly are you seeing the "blocked" message?
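If you can get into the OS, Dell's PERCCLI utility (the Dell build of LSI/Broadcom StorCLI) can show the exact virtual-drive and physical-drive states, and clear a Blocked access policy on the virtual disk if that's what's actually happening. A minimal sketch, assuming the H710 enumerates as controller 0 and the RAID1 boot volume is v0 — check the `show` output first and adjust the IDs to match your box:

```shell
# Hedged sketch: the controller/VD IDs (/c0, /v0) are assumptions -- confirm
# them from the 'show' output before running the unblock command.
STATUS_CMD="perccli64 /c0/vall show"                     # list virtual drives and their state
PD_CMD="perccli64 /c0/eall/sall show"                    # list physical drives on all enclosures
UNBLOCK_CMD="perccli64 /c0/v0 set accesspolicy=rmvblkd"  # clear a Blocked access policy on VD 0

# Only run against real hardware; on any other machine just print the plan.
for c in "$STATUS_CMD" "$PD_CMD" "$UNBLOCK_CMD"; do
    if command -v perccli64 >/dev/null 2>&1; then
        $c
    else
        echo "would run: $c"
    fi
done
```

If the surviving physical drive instead shows a Foreign state, `perccli64 /c0/fall show` will list the foreign config; don't clear or import anything until the output confirms what is actually blocked.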