I have an MD1000 SAS chassis which we were hoping to use to expand our ZFS pool.
The chassis is second-hand, and doesn't contain any drives. When powered on it will run normally for about 30 seconds and then enter a fault state with the system status LED at the front blinking amber, on for 1 second, off for 1 second.
The same behavior occurs whether the chassis is running disconnected from a host or connected to a 1950-III via a Dell 6Gbps SAS HBA. When disks are inserted, the controller and operating system recognize them fully and don't appear to generate any obvious warnings.
My question is how to diagnose the amber status condition. Will I need to purchase a PERC 5/E and/or a debug cable just to get any indication of the cause?
So, the MD1000 is designed to work with either the PERC 5/E or the PERC 6/E. Here's a link to the Service Manual for the MD1000: downloads.dell.com/.../powervault-md1000_Service%20Manual_en-us.pdf
I'm not saying it won't work with another RAID controller, just that it's designed to work with these. That said, the modules in the MD1000 are called EMMs (Enclosure Management Modules). They're not RAID controllers; they're simply pass-through modules. You'll need a RAID controller to build a RAID set and carve a virtual disk out of the space provided by the drives. The RAID controller also plays a part in communicating alerts.
This box specifically is designed to use OMSA (OpenManage Server Administrator). That's the software you'd use to build the RAID set, carve out the space, manage your alerts, and so on.
So, without these items, it's impossible to know for certain what the blinking amber light is telling us. Although, if it blinks when the drives are NOT in and doesn't when the drives ARE in, that's probably expected behavior: the EMMs are likely saying, "hey, there's nothing here, and that's not normal."
But again, without the aforementioned tools, we won't know for sure.
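One avenue worth trying before buying a PERC: the EMMs present themselves as SCSI Enclosure Services (SES) devices, so if the shelf is cabled to a Linux host you may be able to read the enclosure's fault status directly with sg_ses from the sg3_utils package. This is a sketch, not a guaranteed procedure; the device node /dev/sg3 below is a placeholder, and what the EMM actually reports will vary.

```shell
# Identify the enclosure's generic SCSI node (/dev/sgN). The EMM should
# show up with a device type of "enclosu" in lsscsi output.
lsscsi -g | grep -i enclos

# Dump the Enclosure Status diagnostic page; elements in a non-OK state
# (fans, power supplies, temperature sensors, etc.) are flagged here.
sg_ses --page=es /dev/sg3

# Join element status with the element descriptors for a more readable
# per-slot / per-sensor report.
sg_ses --join /dev/sg3
```

If a power supply, fan, or temperature element shows a "Critical" or "Non-critical" state here, that would likely explain the amber LED without needing OMSA at all.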
I hope that helps.