
June 20th, 2010 10:00

Which RAID protection to use with VDI

Which RAID protection to use with VMware View.

We intend to build 25 VMware VDI desktops for videoconferencing.

Which is better to use on storage for 25 VMware VDI desktops: RAID 3 or RAID 5, when running videoconferencing on them? Because of the streaming I would say RAID 3.

I know everything depends on the IOPS profile, but this one is difficult to foresee.

5 Practitioner

 • 

274.2K Posts

June 24th, 2010 08:00

Hi

In traditional terms you are right that, given the large sequential nature of video streaming, you would use RAID 3. These days, though, with the improvements made to RAID 5 via Morley Parity enabling double full-stripe writes, if the VDI guests are created aligned then you should see equivalent or perhaps better overall results from RAID 5.

The question, though, is that unless you are somehow caching the stream to a separate partition (something some of the broker / image-composer technologies may offer in the future), the reads and writes will be co-mingled with other reads and writes, and the write proportion (we have seen it as high as 60-70% in some deployments) might justify looking at RAID 1/0.

Alternatively, new capabilities emerging in the mid-tier allow the provisioning of a supplementary high-speed cache based on Enterprise Flash Disk (EFD) technology, which may provide enough of a 'cache soak' to mop up the traffic spikes seen during peaks in the videoconferencing.

A lot of it will also depend on the efficiency of the codecs used by the videoconferencing software and how conservative those codecs are about transporting only the changed areas of the screen rather than doing complete video refreshes; this too has an impact on disk usage.

These days there are not too many cases where we see RAID 3. What you need to look at is whether RAID 5 will service the particular profile your videoconferencing VMs will have, given everything else they are doing as well as the videoconferencing, and whether RAID 5 or RAID 1/0 is the better fit.
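The trade-off above comes down to RAID write penalties. A minimal sketch, using the textbook penalties (RAID 1/0 costs 2 backend IOs per write, RAID 5 costs 4) and made-up per-guest load figures, not numbers from this thread:

```python
# Back-of-envelope backend IOPS sizing. Write penalties: RAID 1/0 = 2
# (data + mirror), RAID 5 = 4 (read data, read parity, write data, write parity).
# The per-guest load below is an illustrative assumption.

def backend_iops(front_end_iops, write_ratio, write_penalty):
    """Backend disk IOPS needed to serve a given front-end load."""
    reads = front_end_iops * (1 - write_ratio)
    writes = front_end_iops * write_ratio
    return reads + writes * write_penalty

load = 25 * 20  # assume the OP's 25 VDI guests at ~20 front-end IOPS each
for name, penalty in [("RAID 1/0", 2), ("RAID 5", 4)]:
    for wr in (0.3, 0.7):
        print(f"{name} @ {wr:.0%} writes: "
              f"{backend_iops(load, wr, penalty):.0f} backend IOPS")
```

At 70% writes, RAID 5 roughly doubles the backend load relative to RAID 1/0, which is why the write proportion matters so much here.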

Many thanks

Alex Tanner

7 Posts

June 24th, 2010 19:00

I'll second Alex.  When VDI was first hitting the scene, many folks thought a 90% read profile was the most common.  But over time, after analyzing many customers' VDI environments, we usually see a much higher write ratio.  The most common I've seen is 60-70% writes.  This type of workload is most efficiently served by RAID 10 (really, anything 40%+ writes).  On Alex's point about FAST Cache, we have already seen some VERY strong benefits, especially in the VDI space.  I'm attaching a great preso from EMC World that shows some of the testing we did with FAST Cache and VDI.

I would do some testing up front, though, to try to determine what the IO profile will be.  Can you run a physical or virtual desktop with your video streaming and measure the IO reads and writes?  I'd guess this is a very heavy read environment, and in that case RAID 5.  Either way, FAST Cache will help tremendously.
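The rule of thumb above (RAID 10 once writes exceed roughly 40% of the mix) can be turned into a one-liner against measured counters. The threshold is from this post; the function name and sample counters are mine:

```python
# Pick a RAID level from a measured read/write profile. The 40% write
# threshold is the rule of thumb quoted in the thread; counter values
# below are hypothetical measurements, e.g. from perfmon or esxtop.

def suggest_raid(reads_per_sec, writes_per_sec, write_threshold=0.40):
    write_ratio = writes_per_sec / (reads_per_sec + writes_per_sec)
    return "RAID 1/0" if write_ratio >= write_threshold else "RAID 5"

print(suggest_raid(180, 120))  # 40% writes -> RAID 1/0
print(suggest_raid(450, 50))   # 10% writes -> RAID 5
```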

1 Attachment

211 Posts

June 24th, 2010 23:00

Thanks guys, this helps a lot. You both make a point, and I will definitely consider looking at it.

The EFD 'disks' seem to be great, but still expensive. Here too, I guess, I will definitely consider looking at them.

7 Posts

June 25th, 2010 02:00

You're right that EFDs are much more expensive... from a per-GB standpoint.  But they are also MUCH cheaper from a per-IOPS standpoint.  The fastest FC drive handles about 180 IOPS; EFDs handle about 2,500 IOPS.  So for a workload like VDI, where many people are using some form of writeable snap like VMware View Composer, your capacity requirement is reduced greatly while your IO requirement stays the same.  This is where EFDs can make sense.  Also, using a small number of EFDs as FAST Cache in an array can give you the added performance benefit of dozens of spinning disks.  You'll see in the preso I posted that we found adding just 66 GB of EFD as FAST Cache (two 73 GB drives in RAID 1) handled as much IO as 60 FC drives in some of the use cases.  The cost differences there are easy to see; EFD can save money when you are IO bound.
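The per-GB vs per-IOPS point is easy to see numerically. A quick illustrative comparison, where the IOPS figures come from this post but the drive capacities and prices are made-up placeholders, not real EMC pricing:

```python
# Cost per GB vs cost per IOPS. IOPS figures (180 FC, 2,500 EFD) are from
# the thread; capacities and unit costs are invented for illustration only.

drives = {
    # name: (capacity_gb, iops, unit_cost_usd)
    "15k FC 450GB": (450, 180, 1_000),
    "EFD 400GB":    (400, 2_500, 8_000),
}

for name, (gb, iops, cost) in drives.items():
    print(f"{name}: ${cost / gb:.2f}/GB, ${cost / iops:.2f}/IOPS")
```

Even with EFD priced at several times the FC drive, it comes out cheaper per IOPS while remaining far more expensive per GB, which is exactly the "most expensive / least expensive drive we sell" point made later in the thread.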

211 Posts

June 25th, 2010 05:00

Hi Brian,

So for a workload like VDI, where many people are using some form of writeable snap like VMware View Composer, your capacity requirement is reduced greatly while your IO requirement stays the same.  This is where EFDs can make sense.

I'm aware of the fact that by using VMware View Composer (which we do) our capacity requirements will be reduced. IO requirements stay the same. Absolutely true.

I guess, and correct me if I'm wrong, you meant that when we use EFDs we need fewer traditional disks because of the capacity?

Thanks in advance

Roy Mikes

7 Posts

June 29th, 2010 05:00

What I mean is that for every workload you are either capacity bound or IO bound.  If you're reducing the capacity through Composer, then you are usually IO bound.  In other words, for a given workload you may need only fifteen 400 GB drives to get the capacity you need, but to support the IO you need 100 drives.  The old method of "fixing" this gap is "short-stroking": using large numbers of smaller drives to keep up with the IO.  Since EFDs come in generally the same capacities as traditional drives but handle ~10x the IO each, you can solve the short-stroking problems of the past with EFD.  So in this case, instead of buying 100 Fibre Channel drives you may be able to buy 10 EFDs.  In the end, a small number of EFD drives can cost less than the FC drives needed to serve the same IO... assuming you are not capacity bound.
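The sizing rule described above is simply "take the larger of the capacity count and the IO count." A minimal sketch using the per-drive figures from this thread (400 GB drives, ~180 IOPS for FC, ~2,500 for EFD); the workload totals are illustrative assumptions chosen to resemble the post's example:

```python
import math

# A workload needs enough drives for BOTH its capacity and its IO;
# whichever requirement is larger determines the spindle count.

def drives_needed(capacity_gb, iops, drive_gb, drive_iops):
    for_capacity = math.ceil(capacity_gb / drive_gb)
    for_iops = math.ceil(iops / drive_iops)
    return max(for_capacity, for_iops)

# Hypothetical Composer-thinned workload: modest capacity, heavy IO.
workload_gb, workload_iops = 3_000, 18_000
print("FC drives: ", drives_needed(workload_gb, workload_iops, 400, 180))
print("EFD drives:", drives_needed(workload_gb, workload_iops, 400, 2_500))
```

With these numbers the FC configuration is IO bound at 100 drives, while the EFD configuration needs only 8, in the same ballpark as the "100 FC vs ~10 EFD" example above.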

211 Posts

July 1st, 2010 02:00

It's completely clear to me. In fact, I was thinking the same way. Thanks so far for all your responses. I think I'll definitely look at the EFDs.

Two in production in a RAID 1/0, and the same in my failover site, instead of a lot of FC disks.

121 Posts

July 1st, 2010 12:00

Roy,

That's exactly the way to think about EFDs.

If you look at EFDs by the "standard" storage measurement of "cost/GB" they are, far and away, the most expensive drive we sell.  If you want "cost/GB" value, you want to be looking at SATA, especially the 2TB drives (and drive manufacturers have started to announce 3TB drives...).

But, if you look at EFDs to provide performance for your applications rather than simply capacity, and instead measure them by "cost/IOPs" they are, at approximately 2,500 IOPS per drive, far and away the least expensive drive we sell.

Use the right tool for the job; put the right workload on the right tier.  For workloads that either change or can't be easily separated, the soon-to-be-released sub-LUN capabilities of EMC's FAST (Fully Automated Storage Tiering), announced at EMC World in May, will take care of that for you automagically.

If you do end up installing the EFDs, make sure to come back here and let us know what kind of performance you're getting from them.  I think a lot of readers here would love to see some actual-customer real-world info on this.

        -Dave

5 Practitioner

 • 

274.2K Posts

July 1st, 2010 14:00

211 Posts

July 2nd, 2010 01:00

Dave and txtee, thanks for the additional information. I'm now tending toward EFDs, so we'll see what the future brings us. I will be sure to post my experience here.

1 Rookie

 • 

20.4K Posts

July 5th, 2010 18:00

Alex,

Why such a high write ratio in the VM? Page file, or just the applications that people run?

Thanks

5 Practitioner

 • 

274.2K Posts

July 7th, 2010 10:00

Hi

Initially we couldn't put our finger on it either, but as we have worked with more and more clients and helped them profile their desktop images carefully, we have begun to notice a ton of interesting things about Windows: the OS itself often has a steady state of 2-3 IOPS of routine chatter. This can be event viewer log writes, WSUS database updates, certain applications commonly installed as part of the Office suite, and a whole host of other things that add a much higher overhead than we ever anticipated onto the client load. A lot of this is writes, not reads.
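That per-guest chatter adds up quickly across a pool. A tiny sketch, using the 2-3 IOPS steady state from this post, the 60-70% write ratio quoted earlier in the thread, and the OP's pool size of 25 desktops:

```python
# Aggregate idle "chatter" for a small VDI pool, before anyone does real work.
desktops = 25               # the OP's pool size
idle_iops_each = (2, 3)     # per-guest steady state observed in this post
write_ratio = 0.7           # upper end of the write mix seen in the thread

low, high = (desktops * i for i in idle_iops_each)
print(f"Idle aggregate: {low}-{high} IOPS,"
      f" roughly {high * write_ratio:.1f} of them writes")
```

Fifty-plus mostly-write IOPS of pure background load is already a third of a FC spindle's throughput, which is why trimming the image matters.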

Trimming Windows OS builds (particularly Windows 7) is critically important in keeping IOPS low, and how this is done is relatively individual to each customer, though packages from the likes of Liquidware Labs ( http://www.liquidwarelabs.com/ ) and Lakeside ( http://portal.lakesidesoftware.com/ ) can profile desktop usage to help.

Regards

Alex Tanner
