
5 Practitioner • 274.2K Posts


September 14th, 2012 10:00

Why choose a VNX over VMAX for VDI

A customer is looking to use their existing VMAX to also run their VDI workload. They want to run 3,000 desktops concurrently, and this may scale up. We are recommending a separate VNX, but they want to know why they can't use their existing VMAX instead.

We've spoken to them about all the different reasons:

-     The VDI workload is very different from the server workload and should be isolated.

-     The ability of FAST Cache to let them size for average IOPS while still absorbing I/O storms, instead of sizing for peak IOPS.

-     The Unified Storage aspect of the VNX to host user data, profiles, etc.

-     And so on....

They are asking for specific data to back this up. Please take a look at what I copied and pasted from Itzik's blog. I need something similar for the VNX.

"

  1. This came from one of our team members, a few months back.
  2. A large financial firm did this for a while on a DMX and then realized what a bad idea it was.  Their storage architect’s quote to me was “VDI is a nasty workload, I don’t want it on my enterprise storage.”  What he was saying: “we had VDI on our DMX along with other workloads, and the spikes affected them, so now we run it on its own NS.”
  3. We can't lose sight of the $$ / user that is always factored into VDI discussions.  It's a metric that works in the server world, but in the desktop world it's not so clear cut.  Putting the workload on a VMAX, even if the customer already owns the frame, will cost more in disks and licenses than purchasing a VNX.

The other area customers have been speaking to me about lately is the failure domain.  Hyper-consolidation of the desktop environment carries a very different risk profile than the server world.  Lose an infrastructure running servers and your users can still function to some degree, but lose one running the desktops and you've cut them off at the knees.  That's why I've seen some customers lately opt for smaller configs and scale out: for price, performance isolation, and risk mitigation.

Anyway, if they still want to pursue the VMAX route…

Here is something that you can forward them and have them take a look at. I can’t stress enough to get the assessment done for their particular workloads, as their results could vary. But this should be a good starting point for them.

http://itzikr.wordpress.com/2011/05/25/designing-a-really-large-vdi-deployment-on-a-vblock-700mx/

Another POC we did for a customer resulted in this:

  • Workload Profile:
      • Average: 10 IOPS per desktop
      • Peak: 30 IOPS per desktop
      • Total number of virtual desktops: 1000
      • 100% of the sessions running with 60% of the sessions in use
      • 20 GB C: drive per desktop
      • Expected response time: 10 ms
  • Hardware:
      • Storage array: VMAX
      • Servers: Dell PowerEdge R815 (2 x 12-core AMD processors, 128 GB RAM) running ESXi
  • Software:
      • ESXi 4.0 Update 2
      • VMware vCenter 4.0 Update 2
      • VMware View 4.5 (Beta)
      • Windows XP Professional: desktop clients
  • View Storage Layout:
      • Parent VM: 1 x 122 GB EFD disk
      • Replicas: 8 x 122 GB EFD disks
      • Linked clones (1 replica : 128 linked clones): 8 x 600 GB FC disks

Based on the above config, 1,000 users (RAID 1) consumed around 35% of the cache (128 GB) across two engines. Following this logic, 100% / 35% ≈ 3, so you can probably put up to 3,000 users per two engines and 128 GB of cache. On a fully specced VMAX 20K (8 engines), that's around 12,000 users (3,000 x 4) with no FAST VP."
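The cache-based scaling arithmetic quoted above can be written out as a quick sketch in Python. The 35% utilization figure and the linear-scaling assumption come straight from the POC text; the rounding to 3,000 per building block is the original author's, and as the post says, a real design should be validated with an assessment.

```python
# Back-of-the-envelope scaling from the POC figures quoted above.
# Assumes cache utilization scales linearly with user count, as the
# original post does; validate against the actual workload.

POC_USERS = 1000          # users measured in the POC
POC_CACHE_UTIL = 0.35     # fraction of 128 GB cache consumed (2 engines)
ENGINES_IN_POC = 2
FULL_FRAME_ENGINES = 8    # fully populated VMAX 20K

# Users one 2-engine / 128 GB cache building block could host
users_per_block = POC_USERS / POC_CACHE_UTIL          # ~2857; the post rounds to 3000

# Linear scale-out to a full frame, no FAST VP
total_users = round(users_per_block / 1000) * 1000 * (FULL_FRAME_ENGINES // ENGINES_IN_POC)

print(f"~{users_per_block:.0f} users per 2 engines; ~{total_users} users on 8 engines")
# prints: ~2857 users per 2 engines; ~12000 users on 8 engines
```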

7 Posts

September 14th, 2012 11:00

The biggest two reasons I would give for VNX over a shared VMAX are:

1. Isolation

2. Cost

I think the first reason is the most important.  Any significant VDI implementation should be on its own infrastructure, period.  Could VDI run on a VMAX alongside other workloads?  Yes.  But it is just not a best practice to mix these workloads.  Spikes, management domains, failure domains, capacity planning, scaling... these are all reasons why VDI needs to be on its own infrastructure.  This is one reason why Vblock is so popular for VDI: it's a net-new architecture on net-new infrastructure.  The vast majority of VDI owners today will agree that you should design blocks, pods, whatever you want to call them, that run 500, 1,000, 2,000, or more desktops each.  This makes it very easy to scale.

Also, everyone who deploys VDI goes through a learning period.  Don't count on your design today at desktop 0 being exactly the same a year from now when you have 1,000 desktops running.  Expectations and reality are almost always different, no matter how much due diligence and planning you do.

The second reason is important as well, but maybe less so for customers who have already deployed VMAX.  They may look at just adding more disk vs. investing in another array.  The VMAX is very capable of VDI, but it's not the most cost-efficient choice... unless the requirements dictate the extreme reliability that VMAX delivers. Starting a VDI deployment on VMAX to learn and test things is one thing, but creating a large production environment that's mixed with your virtualized servers, databases, etc., is just asking for trouble.

For specs and designs of VNX for VDI I'd recommend looking at the VSPEX reference architectures here:  https://community.emc.com/docs/DOC-16196

16 Posts

September 14th, 2012 12:00

These two posts are definitely on the mark.  In the end, most customers will end up paying more per user when running on VMAX than on VNX.  In addition, the isolation case is totally true.  There are limits in both vCenter and View that will force a logical break in the design, which makes the "Pod" design even more attractive.  The Vblock is a great solution to that, but even in the case of a VSPEX design, breaking out the pods into smaller performance and failure domains makes sense.

Also, FAST Cache is KING when it comes to VDI and Linked-Clone designs, which further reduces cost and increases the predictability of performance along the way.

Will VMAX work for VDI?  You bet.  Is there perhaps a better choice?  VNX is currently the right choice for our customers.

9 Posts

September 14th, 2012 12:00

What would also help serve the cause is some performance numbers like the ones that came out of Itzik's analysis from the POC he did for one of our customers on the VMAX side.

Does anyone have similar information for VNX? For example: a VNX 7500 with "X" amount and "Y" types of drives can accommodate up to "Z" desktops, based on an average desktop user profile with 30 IOPS, 20 GB of space per desktop, etc.
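Pending real VNX POC numbers, here is a rough sketch of the kind of formula being asked for, using the profile above (3,000 desktops at 30 IOPS peak). The per-drive IOPS figure, the 20/80 read/write mix, and the RAID 5 write penalty are generic rules of thumb, not measured VNX data, and FAST Cache would absorb much of the write burst, so treat this as an upper bound on raw spindles rather than a design:

```python
import math

# Rule-of-thumb spindle estimate for the profile asked about above.
# All constants below are assumptions, not EMC-published figures.
PEAK_IOPS_PER_DESKTOP = 30
DESKTOPS = 3000
READ_RATIO = 0.2              # steady-state VDI is write-heavy (assumed 20/80 R/W)
RAID5_WRITE_PENALTY = 4       # 4 back-end I/Os per front-end write on RAID 5
IOPS_PER_15K_DRIVE = 180      # rough figure for one 15k rpm FC/SAS drive

front_end = DESKTOPS * PEAK_IOPS_PER_DESKTOP
back_end = front_end * READ_RATIO + front_end * (1 - READ_RATIO) * RAID5_WRITE_PENALTY
drives = math.ceil(back_end / IOPS_PER_15K_DRIVE)

print(f"{front_end} front-end IOPS -> {back_end:.0f} back-end IOPS -> ~{drives} drives")
```

The huge drive count that falls out of sizing raw spindles for peak is exactly the argument made earlier in this thread for FAST Cache and EFDs: absorb the I/O storms in cache and size the spindles for the average instead.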

5 Practitioner • 274.2K Posts

September 14th, 2012 12:00

I also like the fact that VNX is Unified, so you have a better solution for all dimensions of VDI: boot image (SAN or NAS), user files (NAS), and application virtualization (NAS).

My 2 cents.

5 Practitioner • 274.2K Posts

September 14th, 2012 13:00

What Ashok said.

I agree with everything that the helpful folk have posted. And most of this has been communicated to the customer. They are looking for specific data as Ashok mentioned. Any such data would really help.
