July 30th, 2018 05:00

R620 with H710p Mini low IOPS and random drive offline

Hello,

we are seeing very strange behavior on one of our R620 servers: when benchmarking an SSD with fio under Linux, we get very low IOPS on this particular machine. The SSD is configured as a single-drive RAID0 with "No Read-Ahead" and "Write-Through", and the benchmark only reaches 2,000-4,000 IOPS. In addition, the RAID0 array randomly goes offline; after going offline the SSD shows up as a "foreign" drive and has to be re-imported through the RAID controller's management menu after a reboot.
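For reference, the foreign configuration can also be scanned and imported from within Linux using the LSI MegaCLI utility (the H710P Mini is LSI-based). This is only a rough sketch; the binary name/path and the adapter index 0 are assumptions and may differ on your system:

MegaCli64 -CfgForeign -Scan -a0
MegaCli64 -CfgForeign -Import -a0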

While inspecting the hardware inventory in the Lifecycle Controller we noticed that the RAID controller's attributes "MaxAvailablePCILinkSpeed" and "MaxPossiblePCILinkSpeed" were both "Generation 2", while on another R620 with similar specs both attributes are "Generation 3".
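These attributes can also be read out with racadm; a rough sketch, assuming command-line access to the iDRAC (the grep just filters the long inventory output):

racadm hwinventory | grep -i PCILinkSpeed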

With the second R620 we get around 45K IOPS with the same SSD.



Specs of the "low-IOPS" R620:
PowerEdge R620
PERC H710p Mini
2x Xeon E5-2670
2x 8GB 1066MHz RAM



Specs of the "high-IOPS" R620:
PERC H710p Mini
2x Xeon E5-2660 v2
16x 8GB 1333MHz RAM



Benchmarking conditions:

- GRML 17.05 Linux
- GPT partition-table with EXT4 partition
- Benchmarking command: fio --name=/hdbench/sda/randwrite --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --direct=1 --size=4G --numjobs=4 --runtime=600 --group_reporting
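To cross-check the Lifecycle Controller values, the link speed the PERC actually negotiated can be read from Linux with lspci. A rough sketch; the PCI address 03:00.0 is only an example and has to be taken from the output of the first command (run as root for the full capability dump):

lspci | grep -i lsi
lspci -vv -s 03:00.0 | grep -iE 'lnkcap|lnksta'

In the LnkSta line, 5GT/s corresponds to PCIe Generation 2 and 8GT/s to Generation 3.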



What we have tried to solve the problem:

- Change SAS BP Cable
- Tried all backplane ports
- Replace RAID-Controller with the one from the working server
- Replace RAM (also took from working server)
- Remove all PCIe-cards
- Remove all risers
- Remove integrated NIC
- Replace SSD
- Reset/Clear: BIOS, iDRAC, OS Driver Pack [whole system reset/repurpose]
- BIOS SATA mode: Off, ATA, AHCI, RAID
- BIOS boot mode: BIOS, UEFI
- Mirror BIOS settings and firmware versions from the working server (comparable e.g. with the racadm command after this list). After that we tried the following firmware versions:
- BIOS FW: 2.5.4, 2.6.1, 2.7.0
- iDRAC FW: 2.50.50.50, 2.52.52.52, 2.60.60.60
- PERC FW: 21.3.5-0002, 21.3.4-0001, 21.3.2-0005
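For comparing the two machines, the installed firmware versions can be read out with racadm as well; a rough sketch, again assuming command-line access to the iDRAC:

racadm getversion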

With an NVMe SSD (connected over PCIe) we got the expected IOPS.


We would be really happy if you could help us with our problem.

Moderator

July 30th, 2018 09:00

Hi,

What model drive are you using? Is it a Dell drive? Is FastPath I/O enabled? Have you only tested with synthetic benchmarks, or with file copies as well?

July 31st, 2018 00:00

Hi Josh,

thanks for your reply. We tested with a Samsung 850 EVO and a SanDisk Ultra 3D SSD. FastPath I/O should be enabled, as we set the array policies to "Write-Through" and "No Read-Ahead". We only ran synthetic benchmarks with random reads/writes.
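The effective cache policy (and with it whether the FastPath conditions of Write-Through, No Read-Ahead and Direct I/O are met) can be double-checked from the OS; a rough sketch, again assuming MegaCLI and adapter 0:

MegaCli64 -LDInfo -LAll -aAll

The "Current Cache Policy" line of the output should then read something like "WriteThrough, ReadAheadNone, Direct".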

Moderator

July 31st, 2018 08:00

We have not validated any consumer-level drives with the controller.

August 1st, 2018 00:00

Hi Josh,

nonetheless, don't you find it strange that on one server it works as expected and on the other it doesn't? Could that have something to do with the SSDs? Which SSDs are validated, then?

Moderator

August 1st, 2018 08:00

We have validated SSDs from Intel, Micron, SanDisk, Samsung, Toshiba, and HGST, but those are the enterprise lines of drives, not the consumer drives.

September 27th, 2018 00:00

Hi Josh,

we have since tried several Dell-certified/enterprise SSDs and still did not get good results. The IOPS remained very low with every SSD.

Do you or anyone else have any idea what the problem could be? I am out of ideas at this point...
