December 11th, 2012 19:00
H310 performance problems
I posted a message in the storage community forum, but I'm not sure that was the best place for it: http://en.community.dell.com/techcenter/storage/f/4466/p/19480687/20249968.aspx#20249968
Anyhow, I have done some additional testing and I think I need to have my drives hooked up in a different way or have a different/additional drive controller.
I have an R720xd with 12 x 2TB SAS drives in the front and a pair of SSDs in the back. They are all hooked up to an H310 and are set up as non-RAID in the BIOS.
In my sequential-throughput testing with dd, writing to one of the SSDs alone gives the expected performance (almost 400MB/s), but as soon as I also write to one of the spinning drives, the SSD's throughput drops to 140MB/s to match that of the spinning disk. There is still headroom on the CPU, so I think this is different from the issue in my other post referenced above. If I write to both SSDs, they average over 370MB/s each. So basically I am losing some of the benefit of having an SSD when writing concurrently to one of the spinning disks.
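For reference, the write test looks roughly like this (the mount points are illustrative, not my exact paths):

```shell
# Rough sketch of the sequential write test (paths are examples).
# conv=fsync makes dd flush to disk so the rate isn't just page cache.
write_test() {
    # $1 = output file, $2 = size in MB
    dd if=/dev/zero of="$1" bs=1M count="$2" conv=fsync 2>&1 | tail -n 1
}

# SSD alone (~400MB/s in my case):
#   write_test /mnt/ssd0/test.bin 20480
# SSD plus one spinning disk concurrently (SSD drops to ~140MB/s):
#   write_test /mnt/ssd0/test.bin 20480 &
#   write_test /mnt/disk0/test.bin 20480 &
#   wait
```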
I am guessing I would benefit from a separate controller for the SSDs. Thoughts on this? I am thinking I would keep the SSDs on the current H310 and then move the others (front backplane) to a different controller. Any suggestions on a controller that might help me get around the issue from my other post (or is that simply a not-enough-CPU problem)? I hear there is an IT firmware that can be applied to LSI-based cards; would that benefit me for the 12 drives (I am not going to use hardware RAID)? Should I go for a PERC H200, since that supposedly ships in IT mode already?
thanks.


create123
December 11th, 2012 19:00
I have an R720xd with 1 CPU (4-core E5-2609), 16GB RAM, an H310, and 12 x 2TB SAS drives in the front. The OS is CentOS 6.
My write performance seems to be stuck around 500MB/s, so I decided to experiment and observed the following.
I put all 12 drives in non-RAID mode, then proceeded to format them as ext4 and mount them all individually. Then I created a script using dd to create a 20GB file on each drive, sleeping for 20 seconds between each start of dd in the background. In a separate window I had iostat running at a 2-second interval.
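The script was along these lines (the mount points are placeholders for wherever the 12 drives are mounted):

```shell
# Staggered-writes sketch: start a background dd on each target in turn,
# sleeping between starts, while `iostat -m 2` runs in another terminal.
staggered_writes() {
    # $1 = base dir, $2 = number of targets, $3 = MB per file, $4 = stagger secs
    base=$1; n=$2; mb=$3; gap=$4
    i=1
    while [ "$i" -le "$n" ]; do
        mkdir -p "$base/disk$i"
        dd if=/dev/zero of="$base/disk$i/test.bin" bs=1M count="$mb" 2>/dev/null &
        sleep "$gap"
        i=$((i + 1))
    done
    wait
}

# What I actually ran, roughly: staggered_writes /mnt 12 20480 20
```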
What I observed is that up to 4 drives I appeared to be limited by per-drive bandwidth: iostat was showing about 140MB/s of writes to each drive. Beyond that, as each successive dd kicked in on another drive, the per-drive average dropped: with 5 drives avg 103MB/s, with 6 drives avg ~83MB/s, and so forth, with the aggregate staying somewhere between 500 and 550MB/s.
Is this what I should expect? iowait and system time seem to be pegged during the test. Would having an extra CPU help? Or how about switching to a different drive controller (a plain HBA?)? OS tuning?
On the read side there is no degradation in performance from 1 to 12 drives; each drive stays between 140 and 150MB/s (over 1.5GB/s aggregate!).
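The read side was the same idea in reverse (again, paths are illustrative; the page cache should be dropped between runs so the numbers reflect the drives):

```shell
# Sequential read sketch. As root, drop caches first so reads hit the
# drives rather than RAM:  echo 3 > /proc/sys/vm/drop_caches
read_test() {
    # $1 = file to read; dd reports throughput on its final status line
    dd if="$1" of=/dev/null bs=1M 2>&1 | tail -n 1
}

# One per drive, in the background:
#   for i in $(seq 1 12); do read_test /mnt/disk$i/test.bin & done; wait
```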
Daniel My
December 12th, 2012 14:00
Hello create123
I combined the two threads for you.
This is a known issue. There is currently no ETA on when, or if, it will be addressed in a future firmware update; if it is possible to correct it with firmware, then I suspect it will be fixed eventually. The issue is only present when the H310 is in JBOD/non-RAID mode. It is not present on the H200, and it is noted that in a JBOD configuration an H200 performs better than an H310 because of this issue.
You might try putting the two SSDs in a RAID 0. I am not sure whether the issue affects RAID drives while JBOD is also in use, but it is noted that the issue is not present in RAID mode. Based on your testing, it sounds like the controller is treating the drives like a shared SCSI bus rather than a point-to-point connection. It appears to slow down to the slowest member disk because it is waiting on confirmation from the HDD before performing another request, but that is just conjecture on my part.
Thanks
create123
December 12th, 2012 14:00
Thanks for the info. Is there somewhere I can sign up for updates on this issue?
I'm actually going to partition the SSDs into separate RAID 1 and RAID 0 slices for the ZFS ZIL and L2ARC once I can make sure it all performs well in a supported setup. So I need to keep the SSDs in JBOD, as I don't think the H310 supports slicing them like that. Can I fit an H200 in my R720xd? What parts would I need to connect the front backplane to an H200 (assuming it works) and the back SSDs to the existing H310?
thank you.
Daniel My
December 12th, 2012 16:00
The H200 is not a supported controller on the R720. The system is only designed to support the functionality of a single integrated RAID controller (H310, H710, or H710P). The two add-on slots in the rear are cabled off the backplane; those slots sit on backplane extensions. I don't think it would work to cable one of our controllers directly to those slots.
Are you talking about slicing arrays on the SSDs, putting a RAID 0 across both of them on part of the available space and then a RAID 1 across the remaining space? I thought you weren't supposed to mirror the cache drives. I'm not very familiar with ZFS or caching with it via ARC, but I think you should be able to configure the two SSDs as single-drive RAID 0s.
create123
December 12th, 2012 18:00
Thank you again for your helpful info.
Re the H200: I was thinking of putting an H200 in one of the standard PCIe slots, like this person did: en.community.dell.com/.../19882873.aspx
Anyhow, if I understand you correctly, the connection between the backplane and the two add-on slots is not some standard SAS variant, so I cannot plug them into a SAS HBA using appropriate cables.
So until the H310 is fixed (if that's possible), it sounds like my best option for JBOD is a SAS HBA such as the LSI 9211-8i in IT mode in one of the PCIe slots. Is that card compatible with the backplane and add-on slots in the R720xd?
Regarding ZFS, yes, I was referring to slicing the arrays on the SSDs. The ZIL is a write log (the ZFS intent log), and best practice is to mirror the SSDs backing it for data integrity. The L2ARC is a level-2 read cache, and striping is usually preferred there for size and performance.
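Concretely, the layout I'm aiming for would be something like this (the pool and device names are placeholders, and it assumes the SSDs are partitioned as described, so treat it as a sketch rather than a tested config):

```shell
# Hypothetical ZFS layout: 12 HDDs in the pool, two partitioned SSDs
# providing a mirrored ZIL and a striped L2ARC. Device names are examples.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl

# Mirrored intent log (ZIL) on one partition of each SSD:
zpool add tank log mirror sdm1 sdn1

# L2ARC read cache on the other partitions (cache devices are striped):
zpool add tank cache sdm2 sdn2
```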
Daniel My
December 12th, 2012 18:00
Correct. It may be possible somehow, but they are not designed to be attached directly to a controller.
The only controllers listed as compatible with the backplane are the H310, H710, and H710P. I'm not sure why you are restricted to JBOD mode, but you might consider single-drive RAID 0s on any of the above controllers as an alternative.
Yes, the H310 supports up to 16 virtual disks per disk group, so you could have 16 RAID 0s or 1s across those two SSDs. If you did that, though, you wouldn't be able to create arrays across any of the other drives, as the controller's maximum virtual disk count is also 16.
Thanks
Daniel My
December 13th, 2012 09:00
I got an update on the firmware. I was informed that the issue is not correctable with firmware, so this problem will not be resolved.
Daniel My
December 14th, 2012 10:00
Yes, and I apologize about the issue. If you would like to reach out to your account team about switching to an H710, I will explain the issue to them and see if we can do a swap where you only pay the difference.
create123
December 14th, 2012 10:00
One other question: does this problem apply to the H310 adapter as well, or only the integrated one?
create123
December 14th, 2012 10:00
Thanks I'll do that.
What kind of throughput should I expect with a RAID 10 (same drives as above) using that card? My drives are 12 x 2TB 7200rpm SAS.
create123
December 14th, 2012 10:00
Hmm... I don't like the sound of that.
Anyhow, I decided to check the performance of the H310 with RAID 10 (6 spans of 2 physical disks per span), and dd maxes out at just under 400MB/s. I was doing slightly better with JBOD and software RAID. I am now creating three RAID 0s of 6, 4, and 2 drives to see how those perform. So far I can't say I'm happy with this setup.
create123
December 14th, 2012 11:00
I don't mean to be snarky, but those are the stated specs for the H310 too, and look where that got me. I originally selected it because it was the only one supporting pass-through.
Anyhow, I have requested a quote for the H710P.
Daniel My
December 14th, 2012 11:00
Correct, there is a known issue with the H310 not being able to achieve rated specifications.
Daniel My
December 14th, 2012 11:00
All versions of the H310, to my knowledge.
It will easily max out what your drives are capable of. It has 8 ports, each capable of 6Gb/s, for a total throughput of 48Gb/s (~6GB/s). If your account team has any questions for me, have them search the corporate directory for Daniel_My; I'm the only one who will show up.
http://www.dell.com/downloads/global/products/pvaul/en/dell-perc-h710-spec-sheet.pdf
Thanks
create123
January 3rd, 2013 17:00
So I finally had a look inside, and the rear backplane uses a standard SAS connector. Do you know if it is possible to connect it to the SW RAID SAS connector on the motherboard? Is that connector active if an integrated RAID card is installed?
This would mostly solve my problem of the performance degradation the SSDs suffer while connected to the main backplane. If they are separate, I am OK with my HDD RAID maxing out at 400-500MB/s, as long as the SSDs are not impacted.