Raid 5 rebuild failing at 70%
I have six 2 TB drives in a RAID 5 on a PowerEdge 2950 III with a PERC 5/i.
Drives are numbered 0-5.
Drive 1 dies, and drive 4 immediately turns foreign.
Using Import/Recover Foreign Config, the array is available but degraded, and the data is accessible.
I replaced drive 1 with a brand-new 2 TB enterprise drive; the rebuild begins, gets to 70%, and fails.
At that point drive 4 shows as failed and drive 1 as ready.
I've repeated this twice with this drive and once with another drive, with the same result.
It looks like drive 4 has bad sectors in a region needed for the rebuild, causing the rebuild to fail. What's my next move?
A vdisk consistency check while degraded? Windows chkdsk? I'm trying to pick the next-best option that doesn't put too much stress on the other disks, since the array is degraded and has no redundancy left if another drive fails.
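The failure mode described above can be sketched in a few lines: rebuilding the replaced drive means reading the same stripe's block from every surviving drive and XOR-ing them together, so a single unreadable sector on any survivor (here, drive 4) aborts the whole rebuild. A minimal sketch with made-up two-byte blocks, not the PERC's actual firmware logic:

```python
from functools import reduce

def rebuild_block(surviving_blocks):
    """Reconstruct the missing drive's block for one stripe by XOR-ing
    the corresponding blocks from every surviving drive (RAID 5 parity)."""
    if any(b is None for b in surviving_blocks):  # None models an unreadable sector
        raise IOError("rebuild aborted: bad sector on a surviving drive")
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*surviving_blocks))

# Six-drive stripe: five blocks survive, drive 1's block is being rebuilt.
stripe = [b"\x01\x02", b"\x0a\x0b", b"\x10\x20", b"\x03\x04", b"\xff\x00"]
missing = rebuild_block(stripe)   # succeeds while every survivor reads cleanly

stripe[3] = None                  # drive 4 hits a bad sector mid-rebuild...
try:
    rebuild_block(stripe)
except IOError as e:
    print(e)                      # ...and the whole rebuild fails
```

This is why the rebuild dies at the same point each time: the bad sector on drive 4 sits at a fixed offset, and every rebuild attempt must read through it.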
DELL-Joey C
Moderator
June 13th, 2017 23:00
Hi,
I would suggest you back up the data first, since you mention it is still accessible. With a six-drive RAID 5 of 2 TB disks, that is 10 TB of data, and recomputing parity across it makes for an enormous rebuild.
From what you have described, you may be facing a double fault, i.e. errors on two drives at once. But I need to understand the status of drive 1. Could you clarify what happens when drive 1 fails and drive 4 turns foreign: is drive 1 showing as failed or as foreign when you import the foreign config?
dasheeown
June 14th, 2017 07:00
I'm having trouble finding somewhere to back the data up to right now, but it's at the top of my priority list.
The known-good new drives, when plugged into bay 1, show as ready after importing the foreign drive 4. Are there any functions I can run on drive 4 to remap the bad sectors, or even just identify them?
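One way to identify bad sectors on a disk behind a PERC, assuming you have OS-level access with smartmontools installed: smartctl can usually reach individual disks through the controller via its megaraid passthrough. A sketch, assuming the controller is exposed as /dev/sda; note the number after `megaraid,` is the controller's device ID for that disk, which may not match the bay label, so check `smartctl --scan` or the PDList first:

```shell
# Read SMART health, attributes, and the error log for the disk
# with megaraid device ID 4 behind the controller at /dev/sda
smartctl -a -d megaraid,4 /dev/sda

# Start an extended (long) self-test, which reads every sector and
# records the first failing LBA -- a way to confirm the bad-sector theory
smartctl -t long -d megaraid,4 /dev/sda
```

The reallocated-sector and pending-sector SMART attributes in the first command's output are the quickest indicators that drive 4 has unreadable sectors.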
dasheeown
June 14th, 2017 10:00
DELL-Joey C
Moderator
June 14th, 2017 19:00
Hi,
That could work, but I'm not sure a block copy carries the RAID parity over correctly; I've not done this before.
There's another step I thought of: have you tried forcing the failed drive 4 back online and then rebuilding drive 1?
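If the controller is accessible from the OS, the force-online-then-rebuild sequence can be driven with the LSI MegaCli utility, which works with the LSI-based PERC family. A sketch only; the `[enclosure:slot]` IDs below (32:4 and 32:1) are assumptions to be confirmed against the PDList output, and forcing a drive with bad sectors online carries real risk of data loss, so back up first:

```shell
# List physical drives to confirm the real enclosure:slot IDs
MegaCli -PDList -a0

# Force the failed drive 4 back online (assumed to be at 32:4)
MegaCli -PDOnline -PhysDrv [32:4] -a0

# Start the rebuild on the replacement in bay 1, then monitor progress
MegaCli -PDRbld -Start -PhysDrv [32:1] -a0
MegaCli -PDRbld -ShowProg -PhysDrv [32:1] -a0
```

The same steps can also be done from the controller BIOS (Ctrl+R) by marking drive 4 online and starting the rebuild on drive 1.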