Hi there people,
I come here with a problem and many easy solutions that... well, that I don't like or want. If you are inclined to participate, please keep the thread clean of the obvious workarounds. Where would the fun be in that?
The problem arises from a simple philosophy that I like to embrace: "If it works in Windows, it will surely work in Linux too."
First of all my configuration:
I have an Alienware Aurora ALX with the following configuration (in the aspects I think are relevant to the problem):
Intel ICH10R/D0 SATA Raid Controller:
Firmware revision 126.96.36.1990 ICH10R/D0 wRAID 5
Windows 7 x64 Ultimate.
ICH10R drivers installed in Windows 7: 188.8.131.524 (newer than the one included in the controller firmware). I also found a newer version, 10.8.0.1003, but the one I used when I made the thing work was the 9.6 one.
6 SATA disks configured as follows:
Creating "jurR04TB" wasn't a piece of cake. The old RAID ROM included in the latest Aurora BIOS (A11 as I type this) doesn't support creating volumes larger than 2 TB on RAID disks. When I tried to do it in the firmware configuration I could only get a 2 TB volume. But with the Intel Storage Utility, which was newer, I could do it. I must mention, though, that during creation the utility warned that the volume was not supported by the ROM installed in the chipset. And it's true:
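For what it's worth, the usual explanation for that 2 TB ceiling in older option ROMs is 32-bit LBA addressing: a 32-bit sector count times the classic 512-byte sector size. I can't confirm that's exactly what this ROM does, but the arithmetic matches the limit I hit:

```shell
# Assumed cause of the 2 TB limit: 32-bit sector count x 512-byte sectors.
MAX_SECTORS=$((2**32))
MAX_BYTES=$((MAX_SECTORS * 512))
echo "$MAX_BYTES bytes"               # 2199023255552
echo "$((MAX_BYTES / 1024**4)) TiB"   # 2
```

Anything above that needs firmware (and software) that handles 64-bit sector counts, which would explain why only the newer utility could create the volume.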
The jurR04TB works fine within the Windows 7 scope, even though during the boot phase the RAID chipset says those disks are "Incompatible". But since it works, I kept it.
The problem comes now that I want to replace the Windows install with a new Linux partition. Actually I plan to keep both: Windows for gaming and Linux to act as the backbone of my home storage and media server. And also because I like to make Linux work everywhere.
I installed Ubuntu with no problem at all, with the exception that the jurR04TB volume wasn't recognized. I used only the drivers provided by the Ubuntu installation (I didn't find any Intel Linux-specific drivers anyway).
I just started the investigation, and since this is my first time dealing with RAID drives in Linux, I guess it will take me a while to gather the knowledge needed to figure out whether it will work or I should give up.
I know, as I said at the beginning, that several workarounds are easy (like splitting the 4 TB RAID into two 2 TB volumes compatible with the ROM), but as I also said, I like to think that if it works with Windows it will work with Linux...
I just wanted to write here in case someone has fought this fight before and is as stubborn as me... and also to have a place to log my progress...
I will do that, if nobody opposes...
Of course, any constructive help will be appreciated.
I leave several open questions that maybe someone can answer:
As I understand it (from reading the Intel forums), the Matrix Storage ROM can only be updated via a motherboard BIOS update. I know for a fact that my current version (184.108.40.2060) didn't change when I updated the Aurora ALX BIOS from A10 to A11. Is there any other way to upgrade this ROM?
Does anybody know of Intel RAID drivers for Linux that will recognize volumes larger than 2 TB the way the Windows driver does?
Thanks for your time and help in advance.
While I am unable to answer your questions, as I'm an expert on neither RAID (used it for the first time with my AW Area 51 ALX) nor Linux, I did run into a drive/array problem; I'm trying to fix it by removing RAID as soon as I find the time. I have posted my problems and observations starting on Mar 6, 2012 here.
However, as you can see if you read my various postings, I also observed the following puzzle, which I would like to solve even if I end up dismantling RAID; initially it concerned that "problem" RAID with the two HDDs, as one of them appears to be a problem drive (unless it's related to a potential Windows conflict).
My question to you: does your RAID 0 array list both the array and the two Seagate drives in the registry, or only the array?
Thanks in advance.
I have not reviewed your thread yet (sorry, I'll do it when I have some spare time), but I can tell you that within Windows, Disk Manager only sees my RAID drive (not its component disks). I don't know if that answers your question.
I'll take more time to review your problem and try to help asap.
And coming back to my thread: I'm making great advances. I even think I can get the thing solved pretty soon. I'll find the time to properly document it here for future reference, but here is a preview.
According to an Intel note I was able to find, for kernels 2.6.27 and above, mdadm (and not dmraid) must be used for Linux to correctly recognize RAID volumes above 2 TB.
The Linux distribution I was using installed dmraid, which was able to see the 2 TB RAID volume but not the 4 TB one.
I manually uninstalled dmraid and installed mdadm. I followed all the steps to assemble the RAID volumes, only to find that I wasn't able to mount the 4 TB volume... This time the problem appears to be that volumes above 2 TB are not supported on the x86 (32-bit) build of Linux I was using.
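For the log, a sketch of the swap and assembly roughly as I did it (Ubuntu package names; the md device name is whatever mdadm picks, /dev/md126 is just a typical one for Intel Matrix metadata):

```shell
# Swap dmraid for mdadm (Ubuntu package names).
sudo apt-get remove dmraid
sudo apt-get install mdadm

# Let mdadm find the Intel Matrix (IMSM) metadata on the member disks
# and assemble the container plus the volumes inside it.
sudo mdadm --examine --scan    # lists the IMSM container and volumes
sudo mdadm --assemble --scan   # assembles them (e.g. as /dev/md126)
cat /proc/mdstat               # verify the arrays came up
```

This requires root and the actual array, so treat it as a transcript of the steps rather than something to copy blindly.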
I was led to believe that, in Linux, this was not a problem, but maybe that statement only refers to ext filesystems, and the one I'm trying to use is NTFS (for compatibility with the Windows installation).
So I reinstalled with another version of Ubuntu, for amd64, and will repeat the steps...
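A quick sanity check before repeating the steps, so I don't trip over the 32-bit issue again:

```shell
# Confirm the running kernel is 64-bit before assembling the 4 TB volume:
# "x86_64" means amd64; "i686"/"i386" is the 32-bit build that bit me.
ARCH=$(uname -m)
echo "Kernel architecture: $ARCH"
```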
I'll keep you informed.
Thanks for your reply. Glad to see that you are on the way to solve your problems.
As to my PUZZLE: in Disk Management, mine also only shows the partitions of those two arrays. In Device Manager, under Disk Drives, it only shows the arrays, etc., but not the individual drives associated with the arrays. So both of those functions behave as expected.
The "head scratcher" issue is that while the Registry lists both arrays, as it should, it has no info at all on the two SSD drives of one array, yet full info on the two HDDs of the other array, hence a potential conflict for this second RAID volume. It's obviously an inconsistency, and I am trying to find out what other users running RAID 0 see listed in their registry. For ready reference, mine reads as follows (my explanations in RED) - for more information see my previously posted link.
(1) & (2) being the two HDDs for DiskVolume_00001
(2) The disk which came with the PC
(1) The disk subsequently purchased from DELL, which based on its S/N seems to be older than (2) and is perhaps listed first for that reason (it could be a refurbished drive - neither I nor Dell personnel were able to find such a drive on Dell's site until another person eventually found one). This is also the disk which may be going bad, as a few times the RAID controller reported "Volume_0000 Failed" with a red "x" shown for this disk. Each time, clicking on "Reset to Normal" immediately solved the problem. (Both partitions of this RAID volume are backed up just in case, pending deletion of this RAID volume and separate monitoring of this particular disk.)
I was finally able to mount the RAID volumes within Linux... but only read-only for the moment.
mdadm was a good hint. As I promised, I will detail the solution... but not now. This was just a small update.
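For the log, the mount itself looks like this. The device and mount-point names are from my setup, yours will differ; check /proc/mdstat for the actual md device. The in-kernel ntfs driver is read-only anyway, so ntfs-3g is what should eventually give read-write access (still testing that part):

```shell
# Mount the first partition of the assembled volume read-only.
# /dev/md126p1 is the name on my machine; check /proc/mdstat for yours.
sudo mkdir -p /mnt/jurR04TB
sudo mount -t ntfs -o ro /dev/md126p1 /mnt/jurR04TB

# For read-write later, ntfs-3g should do it (not verified yet):
# sudo mount -t ntfs-3g /dev/md126p1 /mnt/jurR04TB
```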