If you want to build a patio deck off the back door of your house, going down to the lumber store and hauling back a big load of lumber as your first step is probably not the best idea. You almost always want to start with a plan. You’ve got to take measurements and think about the deck’s relationship to the house, how it will be supported, and how it will be maintained. But the fact is, a lot of us want to just gather up a bunch of material and start cutting wood and driving nails. It’s not fun to plan, scope, gather data, and create detailed lists of materials.

Obviously, that’s a bit of satire. But planning and assessing is one of the longest and most difficult parts of any significant project, including sizing a transformation from physical to virtual infrastructure or scaling an existing virtual infrastructure. Fortunately, there are best practices and open source or off-the-shelf tools that can help you get the job done.

Taking inventory

There are many elements involved in an inventory of IT assets. The low-hanging fruit is physical devices, including servers, storage, and networking components. Once physical devices are captured, it’s time to correlate overall IP address inventory, resource utilization per system (network, CPU, RAM, disk), and actual system workload – which, simply put, refers to the applications that a server is running. There are three more key inventory items to capture: power consumption, cooling requirements, and rack space utilized by the existing hardware. These items will be used in your final assessment, when you’ll calculate a cost-based justification for the move to virtualization as you realize cost savings from reductions in these three categories.
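As a rough illustration of that cost-based justification, the before-and-after comparison across power, cooling, and rack space can be sketched as below. Every rate and quantity here is a hypothetical placeholder; substitute the figures from your own inventory and local utility and facilities costs.

```python
# Hypothetical sketch of a cost-based virtualization justification.
# All rates and server counts below are assumed placeholders, not
# measured values -- replace them with your own inventory data.

POWER_RATE = 0.12        # USD per kWh (assumed)
COOLING_FACTOR = 0.5     # cooling kWh consumed per IT kWh (assumed)
RACK_UNIT_COST = 25.0    # USD per rack unit per month (assumed)

def annual_cost(servers: int, watts_per_server: float, rack_units: int) -> float:
    """Estimate yearly power + cooling + rack-space cost for a footprint."""
    it_kwh = servers * watts_per_server / 1000 * 24 * 365
    power = it_kwh * POWER_RATE
    cooling = it_kwh * COOLING_FACTOR * POWER_RATE
    rack = rack_units * RACK_UNIT_COST * 12
    return power + cooling + rack

# Before: 40 physical servers; after: 5 virtualization hosts (hypothetical)
before = annual_cost(servers=40, watts_per_server=350, rack_units=40)
after = annual_cost(servers=5, watts_per_server=600, rack_units=10)
print(f"Estimated annual savings: ${before - after:,.0f}")
```

Even this crude model makes the point: the savings come from the three categories captured during inventory, so the quality of the justification depends directly on the quality of that data.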

Workload targets

As the application inventory is gathered, additional fields will need to be mapped to each application to determine if it is a valid target for virtualization. Is the application code compatible with a virtual platform? Can the virtual platform support the performance requirements of the application? As you get ready to map workloads to future virtual machines, a mandatory data point that is derived during the inventory will be the current workload-to-resource utilization, including average and peak usage data. This data will be used to determine the number of processors and cores that will be required to support each workload. In the end, you will have a list of in-scope workload targets and workloads that will require dedicated physical hardware.
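One way to picture the workload-to-resource mapping is a small sizing function that turns each workload's measured peak utilization into a virtual CPU count. This is only a sketch: the 20% headroom policy and the sample inventory rows are assumptions, not data from any real assessment.

```python
# Hypothetical sketch: derive a vCPU count for each workload from its
# measured peak utilization on its current physical host. The headroom
# policy and the sample inventory rows are assumptions.
import math

HEADROOM = 1.20  # size to peak usage plus 20% headroom (assumed policy)

def required_vcpus(physical_cores: int, peak_cpu_pct: float) -> int:
    """Map a workload's peak CPU use on its current host to a vCPU count."""
    needed = physical_cores * (peak_cpu_pct / 100) * HEADROOM
    return max(1, math.ceil(needed))

# Sample inventory rows: (name, cores on current host, avg %, peak %)
inventory = [
    ("web01", 8, 12.0, 35.0),
    ("db01", 16, 40.0, 85.0),
    ("file01", 4, 5.0, 10.0),
]

for name, cores, avg, peak in inventory:
    print(name, required_vcpus(cores, peak))
```

Note that average utilization still matters: sizing to peak tells you what each virtual machine needs, while averages tell you how densely those machines can share a host.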

How many servers should be virtualized?

How many virtual machines can you run in your environment? In most cases, optimization and cost savings can be realized as your assessments reveal under-utilized physical hardware. But the final answer will lie in a few more considerations: Do you want to run as many workloads as possible on the fewest physical servers? In a common scenario, that approach means deploying four-socket, six-core servers, which drives up per-server costs.
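The trade-off above can be made concrete with a quick host-count comparison. The total vCPU demand, the overcommit ratio, and both host configurations here are hypothetical inputs chosen just to show the shape of the calculation.

```python
# Hypothetical sketch of the consolidation trade-off: given the same
# total workload demand, how many hosts of each size are needed?
# Demand, overcommit ratio, and host sizes are all assumptions.
import math

TOTAL_VCPUS_NEEDED = 180   # summed vCPU demand from the inventory (assumed)
OVERCOMMIT_RATIO = 2.0     # vCPUs per physical core (assumed policy)

def hosts_needed(sockets: int, cores_per_socket: int) -> int:
    """Number of hosts of a given size to cover the total vCPU demand."""
    vcpu_capacity = sockets * cores_per_socket * OVERCOMMIT_RATIO
    return math.ceil(TOTAL_VCPUS_NEEDED / vcpu_capacity)

# Dense option: four-socket, six-core hosts (the scenario above)
print("4-socket hosts:", hosts_needed(4, 6))   # 48 vCPUs of capacity per host
# Smaller option: two-socket, four-core hosts
print("2-socket hosts:", hosts_needed(2, 4))   # 16 vCPUs of capacity per host
```

The dense option needs fewer, more expensive hosts; the smaller option needs more of them, trading hardware cost against rack space, power, and licensing. Running the numbers both ways is what turns the question into a decision.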

What else should be virtualized?

We all understand the benefit of standardizing and automating our infrastructures. Additional ease of management and predictive costing of your data center can be obtained as you look beyond typical server workloads as optimization targets. Leading suppliers of firewalls, load balancers, and WAN optimization devices are offering these technologies as virtual appliances, so be sure to investigate them as well.

There is no magic formula or calculator that will automatically discover what you have, measure it, and funnel it into a machine that produces a virtualization plan for your business. Once you put forth the effort, you will quickly find that the up-front assessment will pay back a hundred-fold with an easy-to-scale, easy-to-manage, and easy-to-predict infrastructure.

David Reoch is an enterprise technologist at Dell, specializing in cloud and virtualization strategies for SMB customers.

Recommended assessment tools
Use these free tools to help assess your server infrastructure and plan for virtualization: