The Physical-to-Virtual Cookbook Part 1

Migrating applications from a physical to a virtual environment is no easy task. This multipart series presents a real-world migration project. Part 1 looks at the client’s environment, examines the rationale for the migration, and walks through efforts to inventory the data center and map application dependencies.

October 30, 2012


While many organizations have signed on to SaaS or piloted cloud computing deployments, lots of businesses continue to run critical production enterprise services in traditional data centers with applications running on dedicated hardware.

Why? Moving to a virtualized environment for new applications is easy; moving legacy applications and services is tough. Given the unique application-by-application analysis required to move to a virtual environment, whether a hypervisor in the data center or a private or public cloud, many enterprises struggle with physical-to-virtual (P2V) migration.

In this multipart series, we present a cookbook approach to the steps and tools required to successfully migrate applications from a traditional physical data center environment to a virtual one, based on a migration we conducted for a national telecommunications company. We will also highlight some of the pitfalls along the way and how to overcome them.

The Environment

The client's migration environment consists of a Cisco network with several 7609s and a handful of 3750G switches configured with multiple VLANs to separate management, user and server traffic. The server farm is made up of more than 80 Dell servers, mostly dual-socket, eight-core systems with average memory and disk space. Most of these servers are three years old or older. More than 100 applications run on these servers. They are a mix of standard COTS applications such as SharePoint, Oracle and Citrix; several file servers; print servers; and workstations for remote access. There are also custom applications on these servers, many of which are not fully documented.

Many of the COTS applications running on the systems are multi-tiered, and the logical connections of these tiers reside in the heads of the client's IT staff (in other words, we have limited documentation to rely on as we determine a path for this P2V migration). Most storage resides on the servers, but the client does have a NetApp array connected to servers via Fibre Channel.

Why P2V?

P2V is not a slam-dunk for every organization, so it's important to analyze whether moving to a virtual environment makes sense. If so, you must also determine how the migration will affect the overall architecture of the data center. We will not discuss all the details of our client's business case, but instead focus on just a few of the key decisions.

First, the client's data center was maxing out. Although the client saw an increasing demand for services and applications, its rack space and HVAC thresholds were nearing their limits. Given these constraints, virtualization was a sensible option. For certain workloads, the ability to virtualize CPU, RAM, disk and network connections would let the client consolidate multiple physical servers into a single machine. When utilization spikes, or there is a modest progression of the workload, virtualized platforms would also let the client tune key attributes to compensate for the higher demand, instead of having to purchase a bigger server or simply accept the performance hit and deal with disgruntled users.
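To illustrate the consolidation math behind that decision, here is a minimal sketch that estimates how many virtualization hosts are needed for a set of workloads, reserving some capacity as headroom for spikes. The server specs, workload figures and the 20% headroom value are hypothetical examples, not the client's actual numbers.

```python
import math

def hosts_needed(workloads, host_cores, host_ram_gb, headroom=0.20):
    """Return the number of hosts needed to run all workloads,
    reserving a fraction of each host's capacity as headroom."""
    usable_cores = host_cores * (1 - headroom)
    usable_ram = host_ram_gb * (1 - headroom)
    total_cores = sum(w["cores"] for w in workloads)
    total_ram = sum(w["ram_gb"] for w in workloads)
    # Whichever resource is the binding constraint drives the host count.
    return max(math.ceil(total_cores / usable_cores),
               math.ceil(total_ram / usable_ram))

# Example: 80 small servers averaging 2 busy cores and 8 GB of RAM each,
# consolidated onto hosts with 32 cores and 256 GB of RAM.
workloads = [{"cores": 2, "ram_gb": 8}] * 80
print(hosts_needed(workloads, host_cores=32, host_ram_gb=256))  # → 7
```

In this made-up example CPU is the binding constraint (160 busy cores against 25.6 usable per host), which is why the answer is seven hosts rather than the four that RAM alone would require.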

The high availability (HA) and disaster recovery architecture was not ideal. The client had a number of third-party tools to enable cold-standby systems, but they didn't offer the service level the client wanted to provide to customers. Like other companies that have gone down the virtualization road, the client wanted to take advantage of the possibilities that emerge when you decouple applications from hardware.

For instance, many hypervisor vendors, including VMware and Microsoft, allow a systems or software administrator to move running virtual machines from one server to another, minimizing or even negating downtime for maintenance and outages. Capabilities like virtual machine HA protect against physical machine failures, and resource checks ensure capacity is available for a possible restart of the VM in case of a hardware failure. Centralized management services provided by the virtual platform, and by some hardware vendors, give a consolidated view into all servers and virtual machines. This can streamline administrative tasks like troubleshooting, configuration, cloning and patch management.

Drawing a Map

Before embarking on any actual migration, you must understand the applications and physical devices you have in your data center. This is easier said than done for many organizations. You must understand the complete logical and physical application topology before you touch anything, especially complex, multitier applications.

We used a set of application dependency mapping tools at our client's site, including an appliance that was connected to the network. A network administrator configured port mirroring to the appliance to collect and store transaction data for us to examine. We used the OPNET Response Xpert appliance in conjunction with the OPNET appMapper Xpert application. (OPNET was recently acquired by Riverbed.)
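For reference, a local SPAN session on a Cisco IOS switch looks roughly like the following. The session number, VLAN and interface names here are hypothetical; the exact syntax and supported options vary by platform, and where the appliance attaches determines the destination port.

```
! Mirror server-VLAN traffic to the port where the capture
! appliance is connected (VLAN and interface are examples only)
monitor session 1 source vlan 30 both
monitor session 1 destination interface GigabitEthernet1/0/48
```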

We ran the appliance for about one week and collected interconnect information across 100 applications. In addition to server-to-server connectivity, the appliance also mapped details such as application connections at the port level. This gave us a complete picture of the client's data center environment. We also discovered some custom applications that didn't show up in the client's software inventory spreadsheets.
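The kind of dependency map such a tool produces can be approximated from raw flow records. The sketch below, using made-up flows rather than the client's data, aggregates observed (source, destination, port) tuples into a per-server view of downstream dependencies:

```python
from collections import defaultdict

def build_dependency_map(flows):
    """Aggregate (src, dst, dport) flow records into a map of
    src -> {dst: set of destination ports observed}."""
    deps = defaultdict(lambda: defaultdict(set))
    for src, dst, dport in flows:
        deps[src][dst].add(dport)
    return deps

# Made-up flows resembling a three-tier application: web -> app -> db
flows = [
    ("web01", "app01", 8080),
    ("web01", "app01", 8080),   # repeats collapse into one edge
    ("app01", "db01", 1521),    # e.g., an Oracle listener port
]
deps = build_dependency_map(flows)
print(sorted(deps["web01"]["app01"]))  # → [8080]
```

Even this toy version shows why port-level detail matters: knowing that app01 talks to db01 on 1521 tells you which tier owns the database dependency, which a plain server-to-server view would not.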

With our dependency data in hand, we then needed performance metrics for the applications. This would allow us to accurately size the virtual servers that would be used in the migration. Our client already had Paessler PRTG Network Monitor installed. This tool collected CPU, memory, I/O throughput and disk statistics. Luckily, it had been running for 12 months, so we had a good baseline of metrics from which to draw.

This isn't the case with everyone; in many of our clients' environments, we need to install agents on the applications to collect performance metrics and then allow at least 30 to 45 days of data collection to capture baseline performance. Agent installation can be time-consuming and expensive, particularly if you only plan to use agents during a migration, so consider open-source options like Cacti or Helios if you do not regularly monitor your application environment.
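When turning baseline metrics into VM sizes, a common rule of thumb (our assumption here, not a figure from this project) is to size against a high percentile of observed utilization rather than the absolute peak, so a single spike does not force overprovisioning. A minimal nearest-rank sketch with made-up samples:

```python
def percentile(samples, pct):
    """Return the pct-th percentile of samples using nearest-rank."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

# Hypothetical hourly CPU% samples for one server over a busy stretch
cpu_samples = [12, 15, 10, 40, 35, 22, 18, 90, 25, 30]
p90 = percentile(cpu_samples, 90)
# Sizing to the 90th percentile (40%) covers normal load while
# ignoring the one-off 90% spike; HA or burst capacity absorbs that.
```

Which percentile to use depends on how tolerant each application is of brief contention; latency-sensitive tiers may warrant sizing closer to peak.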

In a typical P2V migration you should analyze your existing hardware, but we were confident that the client's physical inventory was solid and that it had a good record of the type of hardware, CPU with socket count, core count, amount of memory and local storage. However, even with a detailed inventory and IP addresses for servers and applications, we still had to physically find these servers in the data center.

In addition to gathering technical details, we also decided to send out application questionnaires and conduct interviews with some of the key stakeholders to determine if there were areas for improvement. IT pros rightly focus on improving efficiency and lowering costs, but it's also important to check in with users to see if their service expectations are being met and find out if they are happy with the applications provided by IT.

We found the questionnaire to be an effective way to communicate with the client's key users about the migration and to verify the information we'd collected. We also discussed future growth plans and organizational needs that would increase load on the client's infrastructure, and looked into applications that were no longer supported or being phased out. The questionnaires and interviews also helped to bring people on board by making them part of the process, which is always good when interjecting change.

In Part 2 of this series, we'll delve more deeply into application dependency mapping and gathering performance metrics.

Don Magrogan is CTO of Fusion PPT, a cloud computing strategy and technology solution firm.
