September 23rd, 2010 18:00

Should I virtualize my vWorkspace Terminal / Remote Desktop servers?

This is a common question that has been asked many times since the advent of OS virtualization.  Usually the answer is: it depends.  Consider a few user environments:

Your apps cannot run on a 64-bit architecture and you want to increase per-host density.  In this case OS virtualization makes a strong case, because you can carve out several 32-bit TS/RDS servers that are otherwise limited by the 32-bit memory address space.  In this scenario, a client can host multiple 32-bit TS/RDS servers and take advantage of the improvements in hardware.  Typically you will give up some overhead to the hypervisor (no matter what the vendor states); assume this to be as high as 25%.  Even at that loss you still gain density.  Take this example:

500 users that can run on 10 physical terminal servers at 50 users per host (assuming dual-socket quad-core or quad-socket dual-core with 4 GB RAM).  If you virtualize this environment on the same hardware with 16 GB RAM, you may find you can now run 4 virtual 32-bit terminal servers with 3 GB RAM and 2 virtual processors each.  Assume each of these virtual servers can now only host 38 users.  Your density per physical host is now 152 users.  Not a bad gain for adding a hypervisor and some memory...

Now take that same scenario and assume the apps ARE 64-bit compatible.  Here is what could happen.

Run the environment on the same hardware with 16 GB RAM.  Where you had 50 users on the 32-bit server, you may now be pushing 200 or so (many factors contribute).  That is quite a bit better than the 152 on a hypervisor.  Could the argument then be made that if you run a 64-bit terminal server on a hypervisor, you get double the benefit?
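A back-of-the-envelope sketch of the density comparison above. All per-host user counts are the post's illustrative estimates, not benchmarks:

```python
# Density comparison from the post. Figures are illustrative assumptions.

users_physical_x86 = 50          # one 32-bit TS per physical box, 4 GB RAM
vms_per_host = 4                 # 32-bit TS VMs on one host with 16 GB RAM
users_per_vm = 38                # per-VM capacity after hypervisor overhead
users_physical_x64 = 200         # rough estimate for 64-bit TS, same host

users_virtual_x86 = vms_per_host * users_per_vm

print(f"physical x86: {users_physical_x86} users/host")
print(f"virtual  x86: {users_virtual_x86} users/host")   # 152
print(f"physical x64: {users_physical_x64} users/host")
```

Even with the assumed 25% hypervisor tax, the virtualized 32-bit case roughly triples density over physical 32-bit, while physical 64-bit (if the apps allow it) beats both.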

98 Posts

September 24th, 2010 23:00

I run a hosting environment for small businesses entirely on virtualized infrastructure.  Provided you have a good idea of the workload of the applications, plus a strong background in virtualization and storage design, RDS can be successfully deployed in a virtual environment.

So far I find that 2 vCPUs and 4 GB of RAM with about 15-20 users is a nice sweet spot between utilization and performance.  Some people recommend overloading on vCPUs to try to get higher ratios, and with virtualization that's just the wrong approach.  If user density per box is the goal, go physical.
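A quick sketch of how that sizing rule scales to a host. The host specs below are hypothetical; plug in your own hardware:

```python
# "2 vCPU / 4 GB / 15-20 users" sizing rule applied to a hypothetical host.

host_cores = 12
host_ram_gb = 64
vcpus_per_vm = 2
ram_per_vm_gb = 4
users_per_vm_low, users_per_vm_high = 15, 20

# No vCPU overcommit, per the advice above: cap the VM count by whichever
# resource (cores or RAM) runs out first.
vms = min(host_cores // vcpus_per_vm, host_ram_gb // ram_per_vm_gb)
print(f"{vms} VMs/host -> {vms * users_per_vm_low}-{vms * users_per_vm_high} users/host")
```

For this example host, cores are the binding constraint: 6 VMs, so roughly 90-120 users per host.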

Depending on your virtualization environment, Datacenter licenses are usually the best option in a 40-50 server farm: 6 sockets of Datacenter will cost you about $18k, while 10 licenses of Enterprise (4 VMs per license) come to about $28k.  Standard isn't worth it with virtualization, as you cannot legally use DRS or HA effectively: a license can only move between hosts once every 90 days.
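The licensing arithmetic above, spelled out. Prices are the post's rough 2010 figures, not current list prices:

```python
# Windows Server edition comparison for a ~40-VM farm (2010 ballpark prices).

datacenter_cost = 18_000                 # ~6 sockets of Datacenter
enterprise_licenses = 10
vms_per_enterprise_license = 4
enterprise_cost = 28_000                 # ~10 Enterprise licenses

enterprise_vm_cap = enterprise_licenses * vms_per_enterprise_license  # 40 VMs
savings = enterprise_cost - datacenter_cost

print(f"Enterprise covers {enterprise_vm_cap} VMs for ${enterprise_cost:,}")
print(f"Datacenter (unlimited VMs per licensed socket) saves ${savings:,}")
```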

As for VDI, it would be awesome for a hosting environment IF Microsoft would pull their head out of their nether regions and make it available under the SPLA.  The only workaround is to turn Windows Server 2008 into a desktop and provide each client with an RDS server of their own.  Thankfully Windows Server 2008 and Windows 7 share the same core, so it's pretty much the same thing; I just license it out as a desktop with a TS CAL and a Windows CAL.

180 Posts

September 23rd, 2010 19:00

Virtualize all the way!!!

I would rather have 30-50 users per RDS VM than 100-300 per physical server. If Windows crashes you lose 30-50 users, not 100-300.

Even if you couldn't increase user density, you know you can have a high percentage of users still working in the event of an RDS server going down. Of course, this assumes you have the right level of HA, etc. in place in the event of a physical server crash.

Dan.

22 Posts

September 23rd, 2010 23:00

Good point, assuming it is the guest OS that crashes and not the hypervisor.  Or, if you protect against host failure, you significantly upgrade the architecture, adding shared storage and redundant/spare hosts to get the HA functionality to move the failing workload to a second host.

180 Posts

September 24th, 2010 09:00

Thing is...

A crash is a crash, so the user will have to reconnect to a session anyway...

If you have layered your user "experience" in the following way:

- User Data
- User Environment

- Applications Layer
- OS

The user can log back in and pretty much pick up where they left off, thinking it's exactly the same session, minus some lost changes to data. Users have come to expect this when a physical desktop crashes anyway.

So as long as you have the server capacity on another physical host, you should be able to avoid central storage costs.

If that made any sense?

Dan.

173 Posts

September 24th, 2010 13:00

It used to be a no-go to virtualize terminal servers because the hypervisor added too much overhead. This changed over time as ESX and Hyper-V, and more importantly the hardware, started to get their act together. Now it is perfectly doable to virtualize your terminal servers. As a matter of ironic fact, in a strict x86 scenario you are MUCH better off virtualizing them: installing x86 terminal servers physically on current new hardware will not let you utilize all those cores and RAM efficiently before you hit the kernel-mode memory bottleneck (even on 2008).

So if you need to go for x64, that only leaves the choice: virtual or physical. I think the answer is (sorry): it depends. If you want to fully utilize x64 you need to give it heaps of memory and cores. If you do that, you probably won't get more than 2-3 terminal servers per host. Is that worth virtualizing? I tend to think not, because I am pretty sure there is a performance impact, no matter how small it is.

So then the answer would be x64 on physical, right? Well, not always. This comes down to what Robb talked about: applications. Even today (late 2010) I firmly believe that more apps work on x86 out of the box than on x64, and that is not even taking into account WOW64, which leaves you using more memory. So if application compatibility is an issue, then virtualized x86 terminal servers would make the most sense, right? Well, not always. At the risk of contradicting myself, the choice of virtualized x86 terminal servers has drawbacks too: it gives you fewer users per box than pure x64, and it requires you to buy more Windows Server licenses (x86 and x64 are equally priced).

I hope this adds some insights...

180 Posts

September 24th, 2010 13:00

Completely agree!

Big fan of RDS myself

180 Posts

September 24th, 2010 13:00

So you're saying... screw RDS and just use VDI then?

173 Posts

September 24th, 2010 13:00

Not at all. Use each one where it is best suited ☺

Sorry for the vWorkspace one-liner but it is true!

173 Posts

September 26th, 2010 22:00

Out of curiosity: do you care about DRS/HA for TS/RDSH?

180 Posts

September 27th, 2010 05:00

Yes, but not in the form of central storage. If a physical host goes down the users have to reconnect anyway, so connecting to another virtual RDS server using local storage on another host/hypervisor is fine.

22 Posts

September 27th, 2010 12:00

Good point, Daniel. You may lose the in-flight sessions, but reconnection is always an option.   It's funny how, when we were looking at apps/desktops being delivered via TS, nobody cared much about HA, yet as we look at VDI, most architect for HA from the get-go.

180 Posts

September 27th, 2010 13:00

Yep... it's all the storage vendors trying to get in on the action and using scare tactics so people pay for server virtualization features for their desktops!

RDS and VDI should have a whole new set of SLAs, not server ones.

Of course it doesn't help when some of the vWorkspace "alternatives" are owned by smelly storage companies.

My personal conspiracy theories of course

Dan.

173 Posts

September 28th, 2010 01:00

I was actually asking Mark since he said: 'standard isn't worth it with virtualization as you cannot legally use DRS or HA effectivly as a box can only change hosts once every 90 days'

180 Posts

September 28th, 2010 11:00

Oops

Still my answer was awesome

Dan

98 Posts

September 29th, 2010 02:00

Sorry...been away.

I use DRS/HA on the hosts and think it's a very effective load balancing mechanism.  I always load-balance my TS/RDSH VMs across the systems in the farm (you never want too many SMP VMs on a single host, for scheduling efficiency).  In the case of a single physical host kicking the bucket, I really would rather have those VMs come back online fairly quickly, as I don't want to unnecessarily overload the remaining VMs, and I don't want to have to over-allocate VMs just in case a host dies.

Typically, if a host dies, I leave enough room on the other host(s) to run everything without degradation.  This at least means people get booted, log back in, and never really realize (beyond their existing work being gone) that something went sideways.  Better they think "that was weird" rather than "uh oh, what failed?"

I recently had a very "long" conversation with a vendor who believes that to get the best performance out of their VMs, they should build an ESX host with 2 sockets of 6 cores (12 cores plus hyperthreading) and put 6 quad-vCPU XenApp servers on it, running on local storage with no DRS/HA, and they think that's the best possible design you could come up with.  Basically a 2:1 vCPU-to-core ratio.  I'd like to go over the pond and strangle them, especially since they are recommending 6 ESX hosts configured as above for 200 concurrent users.

For the love of god, why would you spend $250k to run 200 concurrent users when $25k would run 250 on physical boxes?
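The cost-per-user gap implied above; both dollar amounts come from the post:

```python
# Cost per concurrent user: vendor's virtual design vs. physical alternative.

vendor_cost, vendor_users = 250_000, 200      # proposed 6-host ESX design
physical_cost, physical_users = 25_000, 250   # physical alternative

print(f"vendor design: ${vendor_cost / vendor_users:,.0f}/user")    # $1,250
print(f"physical:      ${physical_cost / physical_users:,.0f}/user")  # $100
```

Roughly a 12x difference per seat, which is the whole point of the complaint.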

I sent them a link showing that 32-bit XenApp gets the best ROI with 2 vCPUs, and that a single server with 8 VMs can handle 550 concurrent users (running Office)...

Some people drive me crazy.
