I have an MD1000 connected to an H800 controller installed in an R720. All works well with 600GB SAS disks.
I have another MD1000 that is not in use, but I'd like to put it to use for CCTV storage, so I thought I'd put 4TB or 6TB SATA disks in it, again attached to an H800 controller in a Dell rack server (Xeon-based).
Will this work? Any particular pitfalls besides making sure we have current firmware on the PowerVault and the H800 controller?
With the MD1000, what you have to be aware of is that the enclosure only supports 3Gbps SAS/SATA drives and not 6Gbps drives, because the backplane in the MD1000 is a 3Gbps backplane. With that being said, you may be able to get the 4TB or 6TB drives seen in the MD1000, but we haven't tested to be sure.
Please let us know if you have any other questions.
Social Media Support Enterprise
There are multiple levels of unsupported things here, so I will elaborate on the previous answer.
The MD1000 is an older 3Gbps SAS enclosure that was released to be used with the PERC 5/E and PERC 6/E controllers. At that time many systems (including these controllers) were limited to a 32-bit addressing scheme, which could only address physical disks up to 2TB. Your PERC H800 is from a newer generation than the MD1000, so it has connectors for 6Gbps SAS cables (which should connect to an MD1200) and did support some disks larger than 2TB. To make the MD1000 physically interface with the H800 you would have had to obtain a special cable with a 3Gbps SAS connector on one end and a 6Gbps connector on the other to force them to physically connect 3G <-> 6G. This is not something that Dell would validate, since these devices are from two different generations, and it's more appropriate to connect 3G <-> 3G or 6G <-> 6G.
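As an aside, that 2TB ceiling falls straight out of the arithmetic: a 32-bit LBA with 512-byte sectors can only address 2 TiB. A quick sketch:

```python
# 32-bit LBA with 512-byte sectors caps addressable capacity at 2 TiB,
# which is why PERC 5/6-era controllers top out at "2TB" drives.
SECTOR_BYTES = 512
max_bytes = (2**32) * SECTOR_BYTES   # 4,294,967,296 addressable sectors
print(max_bytes)                     # → 2199023255552 bytes
print(max_bytes / 2**40)             # → 2.0 (TiB)
```

Anything bigger than that simply can't be addressed by a 32-bit controller, no matter what the enclosure supports.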
It is possible you actually have an MD1200 instead of an MD1000. Google an image of the back of the devices to compare, and you will easily verify which you have. The MD1000 has its EMMs next to each other; the MD1200 has them stacked on top of each other. If you are using an MD1200, that is fully supported with the H800, and your next obstacle will be verifying which drives are supported.
Instead of SATA you would want 7.2K RPM NL-SAS drives. Without these you would have some pathing issues, since SATA will only communicate down one of the two data channels on a SAS connector. Likely symptoms would be things like the server only seeing data when plugged into one of the EMM modules but not the other. Purchasing a SATA/SAS interposer board for every drive may help with this, but then you're really just throwing a lot of money at a Frankenstein and hoping it works.
Even though the MD1200/H800 is newer than the MD1000/PERC 6/E, it is still years behind the newer controllers (H830). 6TB enterprise NL-SAS drives are still fairly new. I am not sure if Dell still validates new drive models on the H800 or if that product has reached its end of life. If they are still validating disks, maybe the drives you want are supported; otherwise, the last time new drive models were tested with that adapter could have been when 4TB drives were the latest. If so, you're basically on your own testing whether or not 6TB drives work on the H800. Using non-Dell drives, or even Dell drives that are not validated for your controller, may cause them to come up as unusable, or in the best case usable but labeled as unsupported. In that case you are the one validating whether or not there are any nasty bugs in this combination of hardware.
So to sort of shorten it down...
-If you're using the MD1000 w/ H800, you are sailing in untested waters. It may work fine or there may be bugs that nobody has done testing to identify.
-If you're using the MD1200 w/ H800, this is a working combo, but your next obstacle is to find which drive models and sizes are compatible with this setup.
-Using SATA instead of SAS means you will likely have some pathing issues.
-If you end up using drives that are not validated, you are back to sailing in untested waters. It may not work at all, may work fine, or may work but with random bugs that nobody has done testing to identify.
-If you want to buy something that's really supported, you would need to call your sales rep and see what they can offer you.
I know this is an old thread, but it shows up on Google when searching for MD1000 (or MD1200) SATA. This equipment is now old enough (and cheap enough) that people are using it for relatively inexpensive home mass-storage solutions. Personally I use mine to store media.
The reason I'm updating this is to clarify some stuff that most certainly works with the MD1000. I have a couple of R710s that I use for ESXi hosts. Here are the things that I know work.
-The MD1000 is connected to an H800. I bought a Chinese-made cable with an SFF-8470 connector (MD1000 side) on one end and an SFF-8088 (H800 side) on the other.
-I have populated 10 of the drive bays with 4TB drives (a combination of WD Red and Seagate NAS/IronWolf). This works amazingly well for slow storage.
-In my experience you can use the drives without interposers; however, the RAID controller will show the link speed to the drive as 1.5Gbps. If you use interposers, the link speed will be the full 3Gbps (for the MD1000).
This is the information I can personally confirm as 100% working. It is most certainly not a setup that is approved by Dell but it works fine.
I've read in quite a few places that larger drives (8TB+) will also work fine but I have not been able to confirm this personally. The size restriction is really based on the controller you're using. The PERC 5/6 were 32-bit, so they were limited to 2TB drives. The H800 SHOULD be able to do any size drive that's available today.
Hopefully this will clear up some confusion for people looking to do what I and many others have done.
I know this is a very old thread, but I landed here when searching for this very topic on Google.
Just wanted to share my experience with the question asked:
I did manage to get a pair of 6TB hard drives working in an MD1000, with an H800 controller in an R730. I did use interposers on the drives; the H800 sees the drives and I was able to create a RAID1 volume. OS is Server 2019.
To get the volume created in Windows, I had to use a GPT partition table instead of MBR (MBR can't address volumes larger than 2TiB).
Updating again, as I've purchased new drives since my last update. I was able to use 12TB drives in the same setup that I had before. I can also say that the MD1000 works just as well with a Dell H310 HBA and these 12TB drives.
@pabohoney1 - Are you getting decent speeds out of that setup now with the H310? Or are you still using the H800? I'm looking at going this route instead of buying another (or larger) NAS.
I don't need crazy speeds, this is going to be mostly for a media/Plex server, but I'd like to be able to saturate a 1GbE connection.
@LordAthens I'm using the H310 flashed to IT mode and I can easily saturate my 1GbE connection. I have no need to move to 10GbE, so I have not tested that out. I would assume the MD1000 SAS/SATA connection would be the bottleneck at that point, since it's only SATA II speed (3Gbps, which I was only able to achieve using interposers, according to the MegaCLI output).
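For anyone wanting to do the same check: the negotiated speed shows up as a `Link Speed` line per drive in `MegaCli -PDList -aALL` output. A minimal sketch of pulling those lines out (the sample text below is illustrative, not captured from a real controller):

```python
import re

# Illustrative fragment of MegaCLI physical-drive output; real output
# comes from `MegaCli -PDList -aALL` (line format assumed from typical
# LSI controller output).
sample = """\
Enclosure Device ID: 32
Slot Number: 0
Device Speed: 3.0Gb/s
Link Speed: 1.5Gb/s
Slot Number: 1
Device Speed: 3.0Gb/s
Link Speed: 3.0Gb/s
"""

def link_speeds(pdlist_output):
    """Return the negotiated link speed for each drive, in listing order."""
    return re.findall(r"^Link Speed:\s*(\S+)", pdlist_output, re.MULTILINE)

print(link_speeds(sample))  # → ['1.5Gb/s', '3.0Gb/s']
```

A drive stuck at 1.5Gb/s here is the telltale sign of SATA without an interposer, as described above.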
@pabohoney1 Thanks, I appreciate the info.
If I'm understanding correctly, this is what I will need:
12x PN939 interposers
H800 (or H310?)
Then whatever cables I need from the H800/H310 to the MD1000?
It seems as though an H800 is going to work out better, since the H310 doesn't have an external interface?
@LordAthens sorry for the confusion, I've gotten a few things mixed up with my setup as I've changed a bunch of stuff recently. For the MD1000 I originally used the H800 to do a large RAID6 array and was able to saturate my GbE easily. I then switched to an LSI 9200-8E as I needed an HBA with no RAID. That also was able to saturate the network.
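If anyone wants a rough number for their own array before trusting it with media traffic, a crude sequential-write timer like the sketch below gives a ballpark (this is not a real benchmark; the OS page cache makes it optimistic, so something like fio or `dd` with `oflag=direct` is more rigorous). Saturating 1GbE needs roughly 118 MB/s sustained.

```python
import os
import tempfile
import time

def seq_write_mb_per_s(path, total_mb=256, chunk_mb=4):
    """Write total_mb of zeros to path and return the observed MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # push data to the disk so the timing is honest
    return total_mb / (time.perf_counter() - start)

# Point this at a file on the array you want to test; a temp file is
# used here only so the sketch runs anywhere.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
try:
    print(f"~{seq_write_mb_per_s(target):.0f} MB/s sequential write")
finally:
    os.remove(target)
```

If the number comes back well above ~118 MB/s, the array won't be what's holding back a 1GbE link.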
The H310 is for internal SAS/SATA (like the backplane on an R710). I used something similar (H200 in IT mode) to get HBA support for my internal drives on an R710.
The cable that goes from the MD1000 to the H800 is SFF-8470 to SFF-8088. These can be found on Monoprice or eBay or really anywhere. Of course these are not supported by Dell but they work just fine.