Adding to my other discussion, where I described my testing/configuration comparison between VMware native multipathing and PowerPath/VE: I've completed more comprehensive testing between the two products and wanted to highlight it here (and see if others are seeing similar results).
I performed this latest round of testing in another of our data centers, with both EMC (DMX and CLARiiON) and HP storage. We run between 2 and 30-35 VMs per host, and keep the hosts between 40 and 90% utilization (average ~70%) pretty much all the time. When I installed NMP and configured it as best I could, we didn't really see a performance improvement over no multipathing, since we had to keep updating the configuration whenever VMs moved or changed - that was probably the biggest headache. But when we pulled native MP out and put in PowerPath, we immediately saw at least a 20% jump, and we hadn't even tuned it yet. Now that we've tweaked it, I think we're seeing 33-34% better performance. We're even thinking that when we upgrade to VMAXs later this year, we'll be able to use fewer HBAs since PowerPath will take care of load spreading. Now if we could just get EMC and VMware to change the licensing of PowerPath/VE from vSphere Enterprise Plus to include Enterprise too, I'd be much happier!
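For anyone wondering what the per-device NMP maintenance looks like, this is roughly what we were doing by hand. A sketch using the vSphere 4.x-era `esxcli` syntax (later releases moved these under `esxcli storage nmp`); the `naa.` device ID below is a placeholder - list your own first:

```shell
# List claimed devices and their current path selection policy (PSP)
esxcli nmp device list

# Manually switch one device to round-robin. This has to be repeated
# for every new LUN, which is the maintenance headache described above.
# (The naa ID is a placeholder, not a real device.)
esxcli nmp device setpolicy --device naa.60060160a0b10000 --psp VMW_PSP_RR
```

PowerPath removes this step because it claims the devices itself and applies its own load-balancing policy, rather than leaving path selection to a per-device NMP setting.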
Has anybody else seen this type of improvement?
Those are the numbers that others I know have talked about (not sure if that was their own experience or something they heard somewhere).
I would love to test it and find out, but sadly I'm running Enterprise, which apparently isn't expensive enough to let me purchase PP/VE (not that I'm bitter or anything).
A longer-term review has shown that our performance (I/O throughput) with PowerPath/VE has peaked at about 32-35% better than with NMP or no multipathing at all! We haven't measured before-and-after CPU use changes. We might even be able to reduce some hardware costs during an upcoming upgrade!
Does anyone have ideas on how to tweak a PowerPath/VE config to get even better gains?
Did the performance jump also apply to the HP storage (what was it, by the way)?
What have you tweaked to improve performance from 20% to 33%?
Could you please share the configuration and test scenario details that showed such a performance gain?
Not 100% sure if you would see the same performance gain with HP. I'm using CLAROpt and SymmOpt on the EMC storage.
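In case it helps anyone reading along, setting those policies is just a `powermt` call on the host. A sketch, assuming the standard policy abbreviations (`co` = CLAROpt, `so` = SymmOpt) - check `powermt help set` on your release:

```shell
# Show every device PowerPath has claimed, with its current policy
powermt display dev=all

# Apply the CLARiiON-optimized policy (use policy=so for the
# Symmetrix/DMX devices); scope dev= to specific devices as needed
powermt set policy=co dev=all
```

CLAROpt and SymmOpt are the array-optimized policies, so on mixed EMC storage you'd apply each to the matching array's devices rather than blanket `dev=all`.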
We actually haven't had to tweak much. Although we initially had some NMP-owned paths mixed in with the PowerPath/VE-owned paths, once we removed the NMP ones and made sure PowerPath/VE could see all channels, we haven't done anything else - definitely a plus in PowerPath's corner. For testing with Iometer, we ran it on 8 ESX servers running Exchange VMs, ~30 VMs total, with I think ~2,500 users per VM and each file at ~500 MB. PowerPath/VE showed significant improvements in throughput and lower CPU use (avg 31% less than NMP). Actually, the more load we put on the test, the better the results. Our first test was 4 servers with 10 VMs and 1,000 users each.
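On the mixed-ownership issue: the way we sanity-checked it was to look at the claim rules and then confirm every path showed up under PowerPath. A rough sketch using the vSphere 4.x command names (claim-rule numbers vary by install, so the output will differ):

```shell
# List claim rules - the PowerPath/VE rules should appear alongside
# the default NMP rules and should claim the EMC devices
esxcli corestorage claimrule list

# Confirm every path to every device is owned by PowerPath; any path
# still showing up only under NMP is a candidate to reclaim
powermt display dev=all
```

If a path was claimed by NMP before PowerPath's rules loaded, it stays with NMP until it's unclaimed or the host is rebooted, which is how we ended up with the mixed ownership in the first place.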
Have you done any testing on HP yet?