
February 1st, 2011 06:00

Has anyone been able to actually use the Uber VSA 3.2?

After getting everything set up and replicating, I created an NFS datastore. I'm seeing average write latency of around 800ms, with spikes to 3600ms.

It's living on an HP DL380 G6 connected via FC to an HP EVA 6000 with 15k FC disks. Even when I move it to local SAS spindles, it's still cripplingly slow.

It doesn't like VLANs, so the Data Mover and the management interface are on the same VLAN. I'm connected to Cisco 2960s at 1Gbps, and my VLANs are trunked there.

What am I doing wrong?  This is unusable even in a lab.

February 4th, 2011 19:00

Keep in mind that the VSA is built on top of Red Hat. This means that the Data Movers (usually dedicated hardware running DART software) are all running as services inside the Red Hat Linux VM. The important part of this is that their IO and performance are dictated by Red Hat's own caching/flushing/dirty-page algorithms and by the backend disk. The VSA itself can be used for many different purposes (mainly demos of the interface and capabilities), and no tuning of the Red Hat OS has been done for memory optimization etc., since that could lead to less predictable behavior in other areas. If you are looking to get the best performance from the VSA, you can look into making Red Hat cache more aggressively and be as "gentle" as possible on your backend disk. That means changing the dirty cache settings, the default swapping behavior, and so on.
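If you want to see where those knobs currently sit before touching anything, here is a quick sketch (assuming you can get a root shell inside the VSA's Linux VM; these are standard Linux /proc paths, nothing VSA-specific):

# Current dirty-page thresholds and swap aggressiveness
cat /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/swappiness

# How much dirty data is sitting in RAM waiting to be flushed to the backend disk
grep -E 'Dirty|Writeback' /proc/meminfo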

There is plenty of information out there on how to optimize Red Hat for IO that applies directly to the VSA (see Oracle on Red Hat, etc.). There are a few things I have done that improved IO performance which you can try; if there is an overall consensus that they help and don't materially add risk to standard operation, then we could bring them into the VSA's build. However, it is important to mention that the tweaks shown below could carry a higher risk of data loss, since more caching in RAM is in effect, which means less backend disk usage and better response times.

/sbin/swapoff -a

In /etc/sysctl.conf, change the dirty page settings:

vm.dirty_background_ratio = 50

vm.dirty_ratio = 80

vm.swappiness = 10 (instead of swapoff, just turn swapping down)
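For reference, a minimal sketch of applying these on a running VSA (assuming root shell access; the values are the ones above, and they won't survive a reboot unless the vm.* lines are also added to /etc/sysctl.conf):

# Disable swap entirely (or lower vm.swappiness instead, as noted above)
/sbin/swapoff -a

# Apply the dirty-page settings immediately
/sbin/sysctl -w vm.dirty_background_ratio=50
/sbin/sysctl -w vm.dirty_ratio=80
/sbin/sysctl -w vm.swappiness=10

# After adding the same vm.* lines to /etc/sysctl.conf, reload them so they persist
/sbin/sysctl -p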

1 Message

February 1st, 2011 07:00

Hi,

My lab is pretty modest: I have virtualized ESXi hosts running on top of a physical one. I get around 600ms latency when cloning from another datastore; when cloning within the same datastore (VSA NFS), I see around 2000ms...

Have you tried adding more memory to the VSA?

53 Posts

February 1st, 2011 13:00

Just doubled the RAM to 4GB, but that made no difference. I changed over to iSCSI to see if that was any faster; it is about 2x the speed. I'm deploying a VM from a template that lives on the EVA, and I'm seeing average network traffic of about 80 Mbps inbound to the VSA.

Guess I'll try another data mover and see if that helps.  30 mins is too long to deploy a 14GB VM from a template.
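(Back-of-the-envelope: 14 GB at a sustained 80 Mbps is roughly 14 × 8 × 1024 / 80 ≈ 1,430 seconds, or about 24 minutes, so a ~30 minute deploy is consistent with the VSA only ingesting data at that rate.)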

53 Posts

February 7th, 2011 09:00

Changed those settings and, as a result, the VSA is much faster. So dramatic, in fact, that it deserves its own blog post:

http://www.virtualinsanity.com/index.php/2011/02/08/emc-celerra-uber-vsa-3-2-performance-tweak/

Thanks Clinton!

58 Posts

August 17th, 2020 07:00

Does anybody have a working URL to download this Uber VNX OVF?

Tia

Carl
