PowerVault MD3260 - Performance Impact of Initialization?
rvandolson
March 11th, 2014 09:00
Have two PowerVault MD3260's (dual controller), each with 30 7.2K drives configured as a single RAID10 virtual disk (so 15 RAID groups presented as a single virtual disk on each controller). These virtual disks are presented to an attached R series server running RHEL6+XFS via 6Gbps SAS. The two virtual disks are striped via LVM on top of the Linux multipath layer. So in a way I guess this is RAID100 (a stripe of two RAID10's).
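For reference, the host-side layering looks roughly like this (a sketch only -- the multipath device names, VG name, and stripe size below are placeholders, not our actual values):

# One multipath device per MD3260 virtual disk (names are examples)
pvcreate /dev/mapper/mpatha /dev/mapper/mpathb
vgcreate vg_md /dev/mapper/mpatha /dev/mapper/mpathb
# Stripe across both PVs (-i 2), then put XFS on top
lvcreate -i 2 -I 256 -l 100%FREE -n lv_data vg_md
mkfs.xfs /dev/vg_md/lv_data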
In performing some basic speed tests, I'm not getting anywhere near the performance I expected (maybe 400-500MB/sec writes and under 200MB/sec reads for "sequential" -- just via bonnie or dd).
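For the curious, the tests were nothing fancy -- roughly this shape (file path and sizes are just examples):

# Sequential write test
dd if=/dev/zero of=/mnt/data/testfile bs=1M count=16384 conv=fdatasync
# Sequential read test (dropping caches first so reads actually hit the array)
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/data/testfile of=/dev/null bs=1M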
Realized that the initialization process is still running on the PowerVault side. Obviously this has a performance impact, but I'm unsure how much. Setting the rebuild priority to Low improves my numbers a bit, but not by much.
I feel I should be able to easily get 1GB/sec across these two controllers.


DELL-Sam L
Moderator
March 11th, 2014 11:00
Hello rvandolson,
First off, to make sure that you are getting true test results when doing a performance test, you should wait until the initialization has completed. Second, you stated that you have 2 MD3260i's connected to a single PowerEdge R series server, & then striped them together using LVM on the host. By adding a second stripe on top of the raid 10's the MD's already created, you have added extra CPU overhead, which will slow down the overall performance of the array. Also, the speeds that you are getting for your writes are pretty good, as that is 0.5GB/s. Now, to increase your read speed, what you can do is change your block size for reads from 4k to 16k.
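For example, on the read side that just means using a larger transfer size in your test tool (the path below is only an example; point it at your own file or device):

# Read test with a 16k block size instead of 4k
dd if=/path/to/testfile of=/dev/null bs=16k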
Last thing is that you can look at the performance tab in MDSM (Modular Disk Storage Manager) & check out your performance there & compare the results to the performance test that you ran earlier.
Please let us know if you have any other questions.
rvandolson
March 13th, 2014 07:00
Hi Sam, thanks for your response.
Initialization completed and although I didn't see a huge bump in performance, we did make some additional changes that are likely adequate.
First, the devices are actually MD3260E's (I think) -- SAS based and directly attached to our R720 host.
We ended up creating a 30-disk RAID10 on each MD3260, and within those disk groups created two equally sized virtual disks, assigning one to each controller in the MD3260. So in the end we present four different LUNs to our RHEL host. I'm then striping over those LUNs with either LVM or mdraid (performance seems similar -- maybe a bit better with mdraid, but we'll likely prefer the flexibility of LVM). Really basic testing with dd (single thread) and without the "direct" flag (so OS caches are used) gives me around 880MB/sec writes and 1GB/sec reads.
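Concretely, the two host-side variants look roughly like this (a sketch; the /dev/mapper names stand in for our four LUNs):

# LVM variant: stripe a logical volume across all four LUNs
pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
vgcreate vg_md /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
lvcreate -i 4 -l 100%FREE -n lv_data vg_md

# mdraid variant: RAID0 across the same four LUNs
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd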
Where I'm still a little confused is with the segment size. In PowerVault terms is this the same as the stripe size? Or is my stripe size segment size * number of RAID groups (in my case 15 of them per MD3260)?
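To make the question concrete (numbers purely for illustration): with a hypothetical 128KB segment size, is a full stripe just 128KB, or is it

128KB segment x 15 RAID groups = 1920KB per full stripe

on each MD3260?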
Dev Mgr
Operator
March 13th, 2014 09:00
General rule in storage: use either software raid or hardware raid, but not both (on top of each other). LVM is fine, but I'd suggest skipping the mdraid.
Also, if you carve a single raid set into 2 or more virtual disks, don't turn around and span them together in the host OS (LVM or otherwise).
When looking for performance, keep things simple. If you want to span things together in the OS, use 2 separate disk groups (raid sets) with a single virtual disk each (max size in the group).
If you want to test pure mdraid performance, you'd want to make multiple single-disk raid 0's, present each virtual disk to the host, and let the host use LVM and/or mdraid to make it a software raid solution. This option will definitely be more involved when you have to replace a faulted disk.
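As a rough sketch of that last option (device names invented; you'd have one per-disk LUN for each of the 30 drives):

# Each MD virtual disk is a single-drive raid 0; the host builds the raid 10
# (assumes the glob expands to exactly the 30 per-disk LUNs)
mdadm --create /dev/md0 --level=10 --raid-devices=30 /dev/mapper/mpath*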
Also, if you're going to go software raid to a single host, you could have saved yourself a lot of money by going with a simple JBOD and a compatible HBA (though this may not let you assign the storage to multiple hosts if you want to cluster).