A Quick-Start Guide to Virtual Desktops

Despite what vendors would have you believe, deploying virtual desktops is no small undertaking. There’s a surprising amount to think about and the potential panacea of massive hardware savings combined with super-easy management can easily turn into a nightmare of unexpected costs and problems. That said, if you’re heading for a desktop refresh, or if you’re looking to reduce IT overheads, you should at least have a basic understanding of virtual desktop infrastructure (VDI). Done right, it can pay dividends in the long term.

For the uninitiated, the overall premise of virtual desktops is simple - take the computing requirement away from local machines and centralise it in servers. The user then only needs a very low-cost, low-performance machine to act as a portal to their desktop in a virtual environment. This can be achieved in a variety of ways depending on the size of the deployment, the use case(s) and existing technology, but the (very) basics are these:

  1. The network
  2. The user’s local hardware
  3. The Virtual Infrastructure (servers and software)

Firstly, this guide is only designed to get you off on the right foot, not cover every eventuality. The reality is that most of this stuff goes a lot deeper and that every organisation’s use cases are different. It goes without saying that if you’re evaluating a project, you’ll need to do a lot more reading, planning, budgeting, testing, piloting, reprovisioning and generally pulling your hair out.

For brevity, we’ve made some assumptions; specifically that all your users are “normal” office users, with no intensive software (like artworking, heavy data crunching, 3D or CAD applications) and no special comms requirements like softphones, webinars or video conferencing. Also, we’re only discussing Windows in this article. There aren’t really any virtualisation options for macOS presently, and although Linux desktops aren’t inconsequential in business, they are too variable to cover here.

1. Your network

Your network is step one - if you’ve got loads of devices on busy WiFi access points, or your comms rooms are stuffed full of old (or worse, unmanaged) switches, you’re going to have problems dealing with large numbers of users. The amount of bandwidth required per user varies (sometimes drastically), but users who need sound and heavy screen refreshing as they consume media can use upwards of 500 Kbps each. If your network speeds are already constrained by a bottleneck somewhere and you have a lot of users coming through a node, it’s easy to end up with some serious latency problems.

Under normal conditions, the “viewer” software used to access virtual desktops is relatively well optimised. Essentially, users open a remote desktop connection to the server that hosts their desktop and keep it open while they are logged on - to control bandwidth consumption, typical refresh rates are around 15 frames per second (fps). Where virtual desktops begin to suffer is rich media - in some cases, if a user starts to watch a video, bandwidth consumption can jump to 30 Mbps. Even if that bandwidth is available, it’s likely that users will see degraded video (because it has essentially been crudely downsampled to 15 fps). Although newer virtual desktop infrastructure software is better at on-the-fly compression and managing bandwidth, the media issue is by no means solved. For now, the answer is that more network capacity normally means fewer problems.
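To get a feel for what this means in aggregate, a rough sizing sketch helps. The per-user figures below (150 Kbps for a light office user, 500 Kbps for a heavy one, 30 Mbps per concurrent video stream) are illustrative assumptions drawn from the ranges above, not vendor guarantees - always measure your own traffic.

```python
# Illustrative sketch: peak concurrent bandwidth for a mixed user base
# passing through one network node. All per-user figures are assumptions.

def aggregate_bandwidth_mbps(office_users, heavy_users, video_users,
                             office_kbps=150, heavy_kbps=500, video_mbps=30):
    """Estimate peak concurrent bandwidth (in Mbps) for a network node."""
    total_kbps = office_users * office_kbps + heavy_users * heavy_kbps
    return total_kbps / 1000 + video_users * video_mbps

# e.g. 100 light users, 40 heavy users and 2 concurrent video streams:
print(aggregate_bandwidth_mbps(100, 40, 2))  # → 95.0 Mbps
```

Note how just two video streams account for almost two-thirds of the total - which is exactly why rich media is the first thing to break on an under-provisioned network.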

The final major network consideration is your connection to the outside world. As workers become increasingly mobile, particularly as their desktops have suddenly become easily accessible, increased bandwidth usage is inevitable - as is the additional monthly cost.

2. User hardware

User hardware is where some of the major savings can be achieved. Put simply, because very little computing is being done by the user’s machine, its specs can be low. Everything from CPU capability and local storage to local memory and subsequent power requirements can be minimal. This means that you can opt for zero clients/thin clients (often more expensive than low-spec machines) or repurpose existing end-of-life machines. By drastically reducing locally installed software, and standardising hardware with a one-size-fits-(nearly)-all approach, you can even reduce support requirements at the same time.

Once your desktop “viewer” client machines are in place, they should have a substantially longer life than any outgoing generation of “local compute” machines. A conventional desktop machine’s average lifespan is dictated predominantly by its specifications (processor speed, RAM etc.) - but those requirements have now been outsourced to a centralised server pool, from where they can be shared and allocated dynamically on demand. This means two things. Firstly, you’ll need to invest less in unused storage, memory and processing capacity - which would otherwise sit on desks largely idle. The easy way to think about this is to ask yourself how many largely empty hard drives reside in your users’ machines now. Secondly, because the requirement of the viewer machines remains largely unchanged even if users’ requirements increase, they remain capable for longer.

Lastly, for mobile workers, most common virtualisation environments such as Citrix, Microsoft and VMware offer excellent viewer applications for Android, iOS and Windows - allowing IT managers to serve BYOD staff with ease and easily cater for hybrid users. In fact, by containing the entire work environment in the virtual appliance, you can greatly reduce risk and improve compliance.

3. Virtual Infrastructure

The virtual infrastructure that drives any virtualisation deployment is probably the most complex and variable piece of the puzzle. Essentially, we’re talking about servers and the software that runs them.


The Virtual Machine Monitor (VMM, often called a hypervisor) is the software in charge of your virtual environment - through it you create, administer and monitor your virtual machines. Hypervisors can be “hosted” or “native” (also known as “bare-metal”), which is simply a case of whether they run on top of an operating system or are the operating system, respectively.

The VMM is important, as beyond the user-controlled provisioning of machines, it manages and allocates resources such as CPU and memory to virtual machines in real-time. There is much debate over whether hosted or native VMMs are best, but as with most things, it depends on the use case. If you’re a small business or you’re deploying equipment for a satellite office, you might prefer the flexibility and multi-tasking capability of something like a Windows server rather than an all-server-consuming native system. Conversely, if you’re sure your appliance is going to be totally dedicated to virtual machines, native systems are likely to offer improved performance.

When you’re happy with your VMM(s), you need to think about the virtual disk images that will form the basis of your users’ machines. With virtual machines, it’s important to spend some time stripping out unnecessary software and cleaning up the image(s), as inefficiencies are duplicated over and over again. It’s also worth making sure you’ve installed any compatibility software (guest tools) to ensure that your guest operating system(s) behave nicely on a virtualised platform.



Unsurprisingly, if you’re going to be running a large number of virtual desktop instances from a smaller number of servers, they need to be fairly capable machines. It’s common for VDI server appliances to run multiple multi-core processors and hundreds of GB of RAM. For average use, real CPU utilisation per-instance is often around the 500 MHz mark. By this logic, you could run more than 5 desktops per server CPU core, meaning a dual-processor server running 16 cores could theoretically support 160+ desktops. Of course, in practice the number is lower, to cater for fluctuations throughout the day, system overheads and more.

Another issue often overlooked is that conventional machines-on-desks almost always have their own graphics hardware, whether integrated or dedicated. If your virtual appliance is not equipped with dedicated graphics hardware, you can quickly find even heavyweight CPUs struggling to keep up, so it’s worth considering dedicated vGPU hardware for your servers.

Lastly, we could write an entire guide just looking at storage options, but the primary consideration is latency - if read/write speeds are poor, bearing in mind there’s a whole load of data movement and switching to be done before the user sees their live desktop, the lost seconds can lead to a laggy user experience. Issues like this escalate when hundreds of people log on at the same time - like at 9am. Spinning magnetic HDDs or highly trafficked NAS can severely degrade performance; locally attached disk arrays, Fibre Channel SAN and SSDs are all worth investigating.
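The 9am “logon storm” is worth quantifying, even roughly. The figures below (10 steady-state IOPS per desktop, a 5x multiplier during logon, 30% of users logging on simultaneously) are common rules of thumb, not measurements - the point is the shape of the curve, not the exact numbers.

```python
# Illustrative "logon storm" sketch: steady-state desktop IOPS are modest,
# but simultaneous logons multiply them. All figures are assumptions.

def peak_storage_iops(desktops, steady_iops=10, logon_multiplier=5,
                      logon_fraction=0.3):
    """Estimate peak IOPS when a fraction of users log on at once (e.g. 9am)."""
    logging_on = int(desktops * logon_fraction)
    steady = desktops - logging_on
    return steady * steady_iops + logging_on * steady_iops * logon_multiplier

print(peak_storage_iops(200))  # → 4400 IOPS at peak, vs 2000 steady-state
```

More than double the steady-state load, concentrated into a few minutes - which is exactly the pattern that spinning disks and busy NAS handle worst.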



As we’ve already covered, there are drawbacks with virtualisation. Chief among them is that costs are front-loaded - if you need to buy a load of servers and upgrade your network, you can soon find a large chunk of your budget is gone before a single user has been migrated.


Failure Risks

The usual failure risks exist here, but some are magnified with virtualisation. A power or component fault used to take out a single machine - but in a virtualised environment, it can take out a whole group of users, or worse, a VMM. As such, you’ll need spare capacity in your infrastructure and the ability to respond to issues quickly.


Licensing

One of the least-favourite topics for IT leaders, licensing rears its ugly head in virtualisation. Many established software providers have very complex licensing structures when it comes to virtualised use. There are plenty of ambiguities, and some vendors simply don’t license for virtualisation at all, preferring to force users towards cloud/subscription offerings. It’s easy to think that current enterprise/volume agreements will cover you, but that’s not always the case. It pays to check, lest you get caught out.

Non-Standard Users & Software

Finally, one of the biggest drawbacks with virtualisation is catering for non-standard users and software. From softphones chewing up bandwidth to video transcoders bringing VMMs to a standstill, here be dragons! Virtualisation vendors are notoriously vague about performance and requirements when it comes to anything more than standard images with Windows and Microsoft Office. Ultimately, only by conducting your own bandwidth and general resource tests will you ever really know what’s going on, so be prepared for a reasonably long teething period, combined with some uncomfortable decisions about whether to over-provision or chance it.

So What Now?

Possibly more so than in any other area of IT, there are thousands of whitepapers, guides, videos and more devoted to virtualisation. Unsurprisingly, the vast majority of these are created by vendors, so are specific to their own offerings, but they can give some good insights nonetheless. Our advice is simply this: virtualisation is no small undertaking; it pays to do your homework and proceed with caution. A few well-placed consultant hours or pilot groups can go a long way towards preventing a disaster. Who knows - in a few years, you might be reaping the rewards.

If it all sounds a bit complicated, there are alternatives - virtualisation is not a requirement! In our next article, we’ll take a look at Desktop as a Service (DaaS) options, which take the work out of desktops… for a price.

Notable Suppliers

Notable virtualisation suppliers include Oracle, Citrix, VMware, Microsoft, Hewlett Packard Enterprise, Dell, Cisco and more.
