November 1st, 2011 10:00

Optimizing the Celerra VSA

For all of you out there running the Celerra VSA, this post is long overdue: it can help you get quicker administration, faster file-based operations, and better VSI demonstration response times out of the VSA.  If you are using the Celerra VSA for demonstration purposes against the VSI plugin, this is a must-read.  Some of the items presented below may not be new to everyone, but I have seen cases where the right settings were in place yet the amount of memory available to the VM made those settings irrelevant.

It is important to note that the heavy lifting that makes the Celerra what it is gets encapsulated differently on the VSA than on its native hardware platform.  For this reason I have put together a bit of a HOWTO for optimizing it under Linux (which is what the VSA runs on).  Some of the items listed below are taken straight from internet sources on tuning databases on Linux.

Important note:  Changes #4 and #5 can lead to services crashing in the Linux OS if there is not enough memory available, and they leave more data sitting in memory, which can cause data corruption in the event of an unplanned VM outage.

The numbered items below can be considered the quick and dirty approach to getting the best throughput and administrative response out of the Celerra VSA.

SSHD Service Changes

If you are using the VSI plugin against the VSA, this can be very important if it is not set correctly.  For certain operations the VSI performs an SSH login to the Celerra VSA, and SSH by default does a reverse DNS lookup on the connecting IP address.  If the Linux OS does not have proper DNS servers set (the Celerra services are configured separately), that lookup introduces a delay when logging into the VSA, which makes the VSI hang unnecessarily for certain operations.  Either instruct the SSHD service not to do reverse lookups on logins, or set the DNS servers correctly under /etc/resolv.conf.  A minimal example of the commands follows the steps below.

  1. Edit /etc/ssh/sshd_config and find the line “#UseDNS yes”.  Remove the “#” character and change it to “UseDNS no”.
  2. Enter "/etc/rc.d/init.d/sshd restart"
  3. Check the login times with SSH
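
For reference, here is a minimal sketch of the change as shell commands (the paths are the ones from the steps above; the user and IP in the login test are placeholders):

  # Disable reverse DNS lookups in sshd_config and restart the service
  sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
  /etc/rc.d/init.d/sshd restart
  # Time a login from another machine; it should now return almost immediately
  time ssh <user>@<vsa-ip> exit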

The above change alone can mean a lot for VSI, so check whether it gets you what you're looking for before moving on.  It is worth mentioning that the changes below will get you better performance from the VSA.  However, the VSA up to this point has been packaged for minimal resource usage and demonstration purposes rather than its true potential with the right resources underneath it.  The future will tell; for now this should help.

Virtual Hardware Changes

The current version of the VSA is capable of running more than one datamover (i.e. NAS head).  This is useful only if you are looking for a fully functioning Celerra management environment with minimal alerts, aligned as closely as possible with the hardware.  If performance and minimal overhead are what you are after, then you do not want to run the second datamover.  By default, the VSA has enough resources assigned out of the gate to signal to its first-time start scripts that only one datamover should be configured.  If you modify the virtual hardware before starting the VSA for the first time, it may add another datamover.  A quick in-guest check of the new resources is shown after the steps below.

  1. Do not modify the Virtual Hardware resources prior to the Celerra VSA running for the first time
  2. After a successful initial configuration, shut down the VSA and increase the virtual hardware resources (shutdown -a now, and poweroff)
    1. Increase VM memory (RAM) to 4GB
    2. Increase vCPU count to 2
    3. Future – consider swapping the IDE disk controller out for SCSI
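
Once the VSA is powered back on, a quick way to confirm the new resources from inside the guest (standard Linux commands, nothing VSA-specific):

  # Should report roughly 4GB of memory and 2 processors
  grep MemTotal /proc/meminfo
  grep -c ^processor /proc/cpuinfo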

VM Guest Changes

Note:  The changes below assume that you have at least increased the VM memory beyond the standard amount.  Without that change, some of the items listed below will have minimal effect.  Also note that the changes in the following paragraphs increase the risk of data corruption for data living on the VSA, since we are telling Linux to “sync” data to disk less often in order to minimize ingress/egress disk IO to the virtual hardware.

As mentioned previously, the datamovers are encapsulated under Linux within an “EMC Blackbird” service.  This service uses memory and makes calls to the Linux OS no differently than any other process.  For this reason it is important to follow Linux platform best practices if you are looking for optimal operation.  The Celerra VSA is currently packaged to keep a minimal footprint, not for optimal operation.

Linux can do strange things with the swapfile even when memory is not under contention.  To minimize disk IO, we suggest disabling swap on all drives of the VSA and making the change persist across reboots.  The commands after the step below show how to apply and verify the change immediately.

  1. Edit the /etc/rc.d/rc.local file, add “/sbin/swapoff -a” at the bottom
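
To apply it without waiting for a reboot, the same command can be run by hand and the result verified (the rc.local entry above is what keeps it persistent):

  # Turn swap off now and confirm no swap devices remain active
  /sbin/swapoff -a
  free -m          # the Swap line should show 0 total
  cat /proc/swaps  # should list no active swap devices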

Another disk-access optimization is around how often Linux “syncs” data from memory to the backend disk.  If there is enough memory available (at least 4GB of VM hardware RAM), we can use the vm.dirty_ratio and vm.dirty_background_ratio options to tell the kernel to sync data only when certain thresholds are met.  This helps with random disk reads/writes and even with smaller sets of sequential reads/writes (“smaller” depends on how much RAM the OS has free above what is in use).  There are plenty of resources online that demonstrate how to test this behavior with a simple “dd” command and by echoing values into these settings; a sketch of that test follows the step below.  We will set the options permanently in the sysctl.conf file.

  1. Edit /etc/sysctl.conf and add “vm.dirty_ratio = 80” and “vm.dirty_background_ratio = 50” to the bottom
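
Here is a rough sketch of the echo/dd test mentioned above, assuming a few GB of free RAM in the guest; the test file name and size are arbitrary:

  # Set the thresholds for the running kernel (sysctl.conf makes them permanent)
  echo 80 > /proc/sys/vm/dirty_ratio
  echo 50 > /proc/sys/vm/dirty_background_ratio
  # Write 1GB; with enough free RAM the write completes from the page cache before it is flushed
  dd if=/dev/zero of=/tmp/dd_test bs=1M count=1024
  rm -f /tmp/dd_test
  # Reload the permanent settings from /etc/sysctl.conf
  sysctl -p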

Blackbird Service Changes

Previously we increased the amount of memory assigned to the virtual machine and told the OS to use more of it for disk IO.  We can also increase the amount of memory that Blackbird has for the Celerra services, which helps with administrative interface response times and IO operations.  There is a special blackbird_setup file we can edit to change this setting; a one-line sketch of the edit follows the step below.  Again, this setting only helps if the virtual hardware RAM has been increased and there is free RAM to spare.

  1. Edit /opt/blackbird/blackbird_setup and change the “export BB_MEMSIZE=” line to be “export BB_MEMSIZE=2000”.
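
A minimal sketch of that edit as a one-liner, using the path and variable name from the step above; a restart of the VSA afterwards is presumably needed for Blackbird to pick up the new value:

  # Bump the Blackbird memory size and confirm the edit took
  sed -i 's/^export BB_MEMSIZE=.*/export BB_MEMSIZE=2000/' /opt/blackbird/blackbird_setup
  grep BB_MEMSIZE /opt/blackbird/blackbird_setup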

Celerra VSA 3.2 download http://nickapedia.com/2010/10/04/play-it-again-sam-celerra-uber-v3-2/

Celerra VSA Performance Tuning http://www.virtualinsanity.com/index.php/2011/02/08/emc-celerra-uber-vsa-3-2-performance-tweak/


November 3rd, 2011 14:00

Clinton,

Thank you for sharing these tips, very helpful. I remember trying VSA 5.6 a couple of years ago and it was dog slow; the new one is actually very snappy. Do you know if/when Nick is going to release an updated version that has the new Unisphere? I would like to tie my two VSAs into one domain.

Thanks

November 9th, 2011 23:00

As I understand it (don't quote me), many of the enhancements that Nick made in the Uber edition, and all or some of what Clinton shared above for the Celerra VSAs, were rolled into the latest VNX versions available from PowerLink.  The latest VNX simulator (VSA) is running:

VNX OE v7.0.35.3

Home > Support > Product and Diagnostic Tools > VNX Tools > VNX Simulator

(screenshot attached: vnxvsa.JPG)

On the other hand, as for any new releases of the Celerra VSAs (DART 6.0), I would need someone else to comment.


November 10th, 2011 03:00

Arghh... I should have checked under VNX before deploying the latest Celerra one. Thanks, Christopher.


December 3rd, 2011 19:00

I downloaded the latest VNX simulator and have spent a couple of hours trying to get network connectivity to work. Nick's Uber VSA is in OVF format, so it is easy to import into ESXi and everything just works.  Has anybody managed to create an OVF package of the latest simulator?

Thanks
