
1 Rookie • 11 Posts

December 1st, 2019 00:00

R820 NVMe -- Intel VROC Compatibility Questions

So I have an R820 with the 4x NVMe SSD slots on the front: 768 GB RAM, four Intel Xeon CPUs, etc.

I am thinking of adding a 4x NVMe M.2 PCIe Card inside the system.

The best performance these days seems to come from VROC-enabled systems, with up to 128 Gbps quoted for the ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2.
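
Just as a sanity check on that 128 Gbps number (this is only back-of-envelope math on the PCIe 3.0 x16 link itself, not a benchmark of the card):

```python
# Back-of-envelope check of the ~128 Gbps figure. This is just the raw
# PCIe 3.0 x16 link rate after 128b/130b encoding -- not a measurement of
# the ASUS card itself.
lanes = 16
gt_per_lane = 8.0            # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130         # 128b/130b line coding overhead
gbps = lanes * gt_per_lane * encoding
print(f"PCIe 3.0 x16 ~ {gbps:.0f} Gbps ({gbps / 8:.1f} GB/s)")
# -> about 126 Gbps (~15.8 GB/s), which is where the "128 Gbps" number comes from
```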

But the R820's manuals don't mention VROC, so I probably can't take advantage of it -- unless someone here is familiar with what the R820 offers in terms of VROC and can say otherwise? If the R820 actually supports VROC, it would sure simplify things.

There is also the Dell Ultra-Speed Drive Quad-NVMe M.2 PCIe x16 Card, of course. But I am not sure the R820 can use that either, since it says it is only for the Dell Precision Tower 5810, 7810, 7910 workstations and the Precision Rack 7910 workstation. "Other systems are not supported." Is that due to a lack of VROC support? That card doesn't mention VROC at all, but those systems do, in their specs.

Then there's the Dell HUSMR7676BHP3Y1, which is only PCIe x8... but doesn't have any listed restrictions.

In short, my goal is to get the fastest card, with the highest IOPS, that works with the server.

There seems to be no clear guidance out there when it comes to the R820 and NVMe... so I thought to reach out here.

Please, oh Storage Gurus, grace me with your wisdom.

1 Rookie • 11 Posts

December 1st, 2019 17:00

I have a bit more info. Both the Dell Ultra-Speed Drive Quad-NVMe M.2 PCIe x16 Card and the ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 require PCIe slot bifurcation, where the x16 slot is essentially split into four x4 links, one for each M.2 drive on the card. But -- the R820 can't do it.

In fact, no Dell servers earlier than the 14th generation support slot bifurcation... so it seems that adding one or more of the single, large x8 Dell / HGST Ultrastar SN260 (HUSMR7676BHP3Y1) cards is the only solution.
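
For what it's worth, here's a rough sketch of how I plan to verify whatever card ends up in the slot (assumes a Linux host with lspci available; the parsing is naive and lspci output formats vary, so treat it as a sketch only). A bifurcating carrier should enumerate as four separate x4 NVMe endpoints, while a single-controller card like the SN260 should show up as one x8 endpoint:

```python
#!/usr/bin/env python3
# Rough sketch: list NVMe endpoints and their negotiated PCIe link width.
# A bifurcating carrier card should show four separate x4 endpoints; a
# single-controller card like the SN260 should show one x8 endpoint.
# Needs lspci on the box; full -vv output may require root.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

header = ""
for line in out.splitlines():
    if line and not line[0].isspace():
        header = line                     # new PCI device header
    elif "Non-Volatile memory controller" in header:
        m = re.search(r"LnkSta:\s+Speed\s+([\d.]+GT/s)[^,]*,\s+Width\s+(x\d+)", line)
        if m:
            print(f"{header.split()[0]}  speed={m.group(1)}  width={m.group(2)}")
```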

The challenge, of course, is that there is no RAID, no redundancy, and no hot-swap of failed drives when it comes to PCIe card drives. They're fast -- yes -- with 1.2M+ IOPS... but deadly when the card dies.

[Attached chart: h710p_chart_1.png]

My fallback for retaining redundancy is to put 8x Dell 12G SAS SSDs in the 8x available front bay slots and use the onboard H710P... but that solution comes with its own simply awful throughput (and IOPS) limitations due to the PERC H710P controller's host interface (PCIe 2.0 x8, which tops out at 4 GB/s: 8 lanes * 500 MB/s per lane). As if that wasn't bad enough, the controller itself doesn't get anywhere near its bus capacity: it only puts out <400 MB/s and <50,000 IOPS with 8x 12G SAS SSDs.
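
To put some numbers on that (back-of-envelope only; the observed figures are my own rough measurements, not official specs):

```python
# Quick numbers on the H710P bottleneck. The 500 MB/s-per-lane figure is the
# usual PCIe 2.0 effective rate; the "observed" values are just my own rough
# measurements with 8x 12G SAS SSDs, not official specs.
lanes = 8
mb_per_lane = 500                       # PCIe 2.0, effective, per lane
bus_ceiling = lanes * mb_per_lane       # 4000 MB/s host-link ceiling

observed_mb_s = 400                     # ~ what I actually see
observed_iops = 50_000                  # ~ what I actually see

print(f"H710P host-link ceiling: {bus_ceiling} MB/s")
print(f"Observed throughput:     ~{observed_mb_s} MB/s "
      f"({observed_mb_s / bus_ceiling:.0%} of the ceiling), ~{observed_iops:,} IOPS")
```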

So, to rephrase and expand my earlier question to the grim-eyed and steely-souled Storage Gurus who frequent this place: what's the best way either to get the full speed out of a full spread of 8x Dell 12G SAS SSDs in RAID, or to establish some sort of redundancy across multiple (vastly faster) internal PCIe drive cards?
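
One option I'm weighing for the second case is plain Linux software RAID (mdadm) mirrored across two of the PCIe card drives. A very rough sketch, with placeholder device names that would have to be swapped for whatever the cards actually enumerate as:

```python
#!/usr/bin/env python3
# Very rough sketch: mirror two of the PCIe NVMe drives with Linux software
# RAID (mdadm), since the cards themselves give no redundancy.
# /dev/nvme0n1 and /dev/nvme1n1 are placeholders -- replace with whatever the
# SN260 cards actually enumerate as. Needs root and WILL destroy existing data.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# RAID1 across the two card-based NVMe drives
run(["mdadm", "--create", "/dev/md0",
     "--level=1", "--raid-devices=2",
     "/dev/nvme0n1", "/dev/nvme1n1"])

# Filesystem on top of the mirror
run(["mkfs.xfs", "/dev/md0"])
```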
