Slow disk performance with xx2x2 card in a C6100 with 10TB disks - best way to resolve?

I run a small fleet of C6100 hosts operating as hypervisors, running a mix of mostly VMware plus some XenServer. Most have directly connected (non-RAID) disks, but I'm starting to move towards RAID hosts.

A couple of years ago I enquired here about the largest disks that can be used with an xx2x2 RAID card, and the answer was that a controller able to address over 2TB (as SATA III can) should handle almost any size of disk.  https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/How-do-I-find-the-maximum-disk-size-I-can-use-with-an-xx2x2-raid/td-p/5166077
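
For background, that classic 2TB ceiling comes from 32-bit LBA addressing of 512-byte sectors; a quick check of the arithmetic in Python:

    # 32-bit LBA x 512-byte sectors = the old 2TB (exactly 2 TiB) ceiling
    max_bytes = (2 ** 32) * 512
    print(max_bytes, "bytes =", max_bytes / 2 ** 40, "TiB")  # 2199023255552 bytes = 2.0 TiB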

Following on from that, I deployed a blade with an xx2x2 (no battery, write-through, RAID-1E) and 3 x 4TB disks, and the performance of that blade has been very good.

I've recently deployed a new C6100 with hexacore processors and 48GB of RAM per blade. Two blades are fitted with Dell xx2x2 RAID cards and the other two with standard LSI PCI RAID cards obtained from eBay. Currently one xx2x2 blade and one PCI blade are active, both connected to 3 x Seagate Exos 10TB disks: the xx2x2 in RAID-1E and the PCI card in RAID-5. All blades run VMware.

While both blades operate correctly and see the full size of the array, the xx2x2-fitted one shows very poor disk performance, including very significant I/O wait on Linux VMs, console errors due to write timeouts, etc. As there is no battery installed, the RAID is configured as write-through, so every write has to hit the platters before being acknowledged rather than completing from the controller cache.

The PCI-fitted one performs better, but still not ideally, and significantly worse than the blades in the other systems with directly connected (non-RAID) disks.
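
In case it helps anyone reproduce the comparison, this is roughly the quick sequential-write test I can run inside a Linux VM on each blade (a minimal sketch - the file path, total size, and block size are just examples):

    import os
    import time

    # Quick-and-dirty sequential write test with a final fsync.
    # PATH, BLOCK, and COUNT are examples - adjust for your VM.
    PATH = "/tmp/disktest.bin"
    BLOCK = 1024 * 1024              # 1 MiB per write
    COUNT = 512                      # 512 MiB total

    buf = os.urandom(BLOCK)
    start = time.monotonic()
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    for _ in range(COUNT):
        os.write(fd, buf)
    os.fsync(fd)                     # force the data to stable storage
    os.close(fd)
    elapsed = time.monotonic() - start
    print(f"{COUNT} MiB in {elapsed:.2f}s = {COUNT / elapsed:.1f} MiB/s")
    os.unlink(PATH)

If the cache policy is the culprit, I'd expect the write-through xx2x2 blade to come out well below the direct-attached blades on a test like this.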

I'm wondering how best to address the performance problems, both on the existing blades and when bringing up the other two which aren't currently live, and am considering one or more of:

1. Abolishing the RAID altogether and going for direct connection, giving 30TB per blade, with scripted backup processes to protect the VMs against disk failure. We already have a proven VM backup system in operation.

2. Bringing up the second xx2x2-fitted blade with just 4 or 5TB disks - will this make a difference?

3. Adding the battery module, which should allow write-back caching to be enabled safely - does anyone have links to information on the procedure?


I'm guessing the cards (particularly the xx2x2) are struggling to handle writes to these large disks?

If I connect 10TB disks directly to the SATA ports on each blade (removing the RAID cards altogether), are they likely to have any performance issues, or is it mainly a RAID card issue?

Thanks for reading.
