Keeping IT Simple

The first of a series of columns on how virtualization can simplify and automate data center operations

July 1, 2004


Over the next few weeks, I'll be taking a look at user concerns about virtualization and trying to dispel the myths surrounding the technology. But first, let's take a look at what virtualization is and how it works in order to clear up some of the confusion surrounding it.

As you know, data centers comprise a wide array of computing, storage, and network systems. The result is a very complex infrastructure that creates a great many management issues. These, in turn, place a strain on IT resources. There is a growing need for technologies that can simplify and automate the data center in order to make it run more intelligently and efficiently – and that's where virtualization comes in.

Virtualization is the pooling of IT resources in a way that masks the physical nature and boundaries of those resources from users. This allows companies to meet logical resource needs with fewer physical resources. Virtualized products allow you to deploy multiple instances of a variety of services – all from the same appliance. Though virtualization has just come into the limelight within the past year, it has been used for over a decade in technologies such as Frame Relay, virtual LANs, logical partitions (LPARs), and RAID.

Virtualization couples the economics and efficiencies of a shared system with the integrity, performance, and security of an independent system. Virtual devices deliver a wide range of functions on a single physical hardware platform. However, network administrators can configure, deploy, and manage these functions as if they were separate devices. So the benefits here are twofold: You save money by purchasing fewer physical appliances (capital expenditure); and the ability to electronically provision the functions remotely means you no longer have to send someone into the data center each time you need something reconfigured, which saves you a great deal of time (operating expenditure).
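To make the provisioning idea concrete, here is a minimal sketch in Python. Everything in it – the appliance object, the service names, the provision() and reconfigure() calls – is hypothetical, meant only to illustrate managing several virtual services on one box as if they were separate devices:

```python
# Illustrative sketch: one physical appliance hosting several
# independently managed virtual service instances. All names
# (VirtualAppliance, provision, etc.) are hypothetical.

class VirtualAppliance:
    def __init__(self, name):
        self.name = name
        self.instances = {}          # instance name -> (service, config)

    def provision(self, instance, service, config):
        """Create a new virtual service instance remotely --
        no trip to the data center required."""
        self.instances[instance] = (service, dict(config))
        print(f"{self.name}: provisioned {service} as '{instance}'")

    def reconfigure(self, instance, **changes):
        """Change one instance without touching any other."""
        service, config = self.instances[instance]
        config.update(changes)
        print(f"{self.name}: updated '{instance}' -> {changes}")

appliance = VirtualAppliance("edge-1")
appliance.provision("fw-sales", "firewall", {"policy": "strict"})
appliance.provision("lb-web", "load-balancer", {"algorithm": "round-robin"})
appliance.reconfigure("fw-sales", policy="permissive")
```

Because each instance carries its own configuration, a remote operator can adjust one service without touching any of the others on the same hardware.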

Some people are uncomfortable with the idea of having multiple virtualized instances of different functions – such as multiple firewalls – sharing the same resources. While virtualized gear does combine multiple instances of certain functions, it also partitions and isolates resources such as processing capacity, memory, and bandwidth into multiple sets. Users can operate these sets independently and allocate different quantities to specific applications in order to ensure isolation.

Isolation starts at the configuration level, ensuring that each set of resources within the system has a separate and dedicated configuration. This protects each set of resources so that a misconfiguration of one application's resources will not interfere with a second application. The system isolates resources all the way down to the hardware, providing each virtualized instance with separate queues, buffers, memory, and processing resources.
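That admission-control style of partitioning can be sketched in a few lines. The capacity figures and field names below are assumptions made up for the example, not any product's actual resource model:

```python
# Illustrative sketch of hard resource partitioning. The capacity
# figures and field names are assumptions for the example only.

PHYSICAL = {"cpu_shares": 100, "memory_mb": 4096, "bandwidth_mbps": 1000}

partitions = {}   # partition name -> dedicated resource set

def create_partition(name, cpu_shares, memory_mb, bandwidth_mbps):
    requested = {"cpu_shares": cpu_shares,
                 "memory_mb": memory_mb,
                 "bandwidth_mbps": bandwidth_mbps}
    # Admission control: the sum of all dedicated slices may never
    # exceed the physical resources, so no partition can starve another.
    for resource, amount in requested.items():
        allocated = sum(p[resource] for p in partitions.values())
        if allocated + amount > PHYSICAL[resource]:
            raise ValueError(f"{name}: not enough {resource} left")
    partitions[name] = requested

create_partition("firewall-a", cpu_shares=40, memory_mb=1024, bandwidth_mbps=400)
create_partition("firewall-b", cpu_shares=40, memory_mb=1024, bandwidth_mbps=400)
# An oversized third partition is rejected rather than allowed to
# encroach on the slices already dedicated to the first two:
try:
    create_partition("greedy", cpu_shares=40, memory_mb=4096, bandwidth_mbps=400)
except ValueError as err:
    print(err)
```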

The system shifts resources among the virtual instances – such as providing one with more bandwidth or another with more memory – as need dictates.
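As a rough sketch of that kind of shifting (again with invented numbers and names), a scheduler might reclaim capacity an idle partition is not using and grant it to a busy one, while never dipping below any partition's guaranteed floor:

```python
# Illustrative sketch: shift spare bandwidth from idle partitions to
# busy ones, never dipping below each partition's guaranteed floor.

partitions = {
    # name: {guaranteed floor, current allocation, current demand}
    "vpn":      {"floor": 100, "alloc": 300, "demand": 120},
    "firewall": {"floor": 100, "alloc": 300, "demand": 550},
}

def rebalance(parts):
    # Reclaim what idle partitions do not need (down to their floor)...
    spare = 0
    for p in parts.values():
        target = max(p["floor"], p["demand"])
        if p["alloc"] > target:
            spare += p["alloc"] - target
            p["alloc"] = target
    # ...and hand it to partitions whose demand exceeds their allocation.
    for p in parts.values():
        if spare and p["demand"] > p["alloc"]:
            grant = min(spare, p["demand"] - p["alloc"])
            p["alloc"] += grant
            spare -= grant

rebalance(partitions)
print(partitions)   # the firewall now holds the bandwidth vpn wasn't using
```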

Virtualized systems maintain system integrity in the face of faults. They do this by establishing protective domains that prevent the failure of any given segment from propagating through the system and affecting other virtual entities. If a service experiences a crash for any reason, the virtual system can simply clean up all the memory associated with the virtualized partition and recreate it from scratch. This allows quick service restart without compromising other partitions that are also running in the same system.
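A toy supervisor loop illustrates that recovery behavior: when one partition crashes, its memory is discarded and the partition is rebuilt from its configuration, while the other partitions and their state are left alone. The classes and the simulated crash here are hypothetical:

```python
# Illustrative sketch: a crashed partition is torn down and recreated
# from its configuration, while other partitions keep running untouched.

class Partition:
    def __init__(self, name, config):
        self.name, self.config = name, config
        self.memory = {}                 # per-partition state only
        self.healthy = True

    def serve(self):
        if not self.healthy:
            raise RuntimeError(f"{self.name} crashed")
        self.memory["requests"] = self.memory.get("requests", 0) + 1

def supervise(partitions):
    for name, part in list(partitions.items()):
        try:
            part.serve()
        except RuntimeError:
            # Protective domain: discard ALL of the failed partition's
            # memory and rebuild it from scratch from its configuration.
            # No other partition's state is touched.
            partitions[name] = Partition(name, part.config)
            print(f"restarted {name}; others unaffected")

parts = {"fw": Partition("fw", {"policy": "strict"}),
         "lb": Partition("lb", {"algorithm": "round-robin"})}
parts["fw"].healthy = False              # simulate a fault in one partition
supervise(parts)
print(parts["lb"].memory)                # lb kept its state: {'requests': 1}
```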

Virtual services cannot “see” any of the other virtual services on the system – which ensures that virtual functions operate independently and also guarantees security and integrity among virtual functions on a single hardware platform. By using hardware memory management unit (MMU) technology, the system can ensure that one virtualized instance cannot inspect the memory state of another instance, nor corrupt or consume its resources.
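The MMU guarantee can be modeled as a per-instance page table: every access is translated through the instance's own table, so physical frames belonging to another instance simply have no address there. This is a conceptual model only, not how any particular product implements it:

```python
# Conceptual sketch of MMU-style isolation: each virtual instance can
# only reach the physical frames listed in its own page table.

PHYSICAL_MEMORY = ["" for _ in range(8)]     # 8 physical frames

PAGE_TABLES = {
    # instance -> {virtual page: physical frame}
    "fw-a": {0: 0, 1: 1},
    "fw-b": {0: 4, 1: 5},
}

def write(instance, virtual_page, value):
    table = PAGE_TABLES[instance]
    if virtual_page not in table:
        # The hardware raises a fault; the access never reaches memory.
        raise MemoryError(f"{instance}: fault on page {virtual_page}")
    PHYSICAL_MEMORY[table[virtual_page]] = value

write("fw-a", 0, "fw-a secret")     # lands in frame 0
write("fw-b", 0, "fw-b state")      # lands in frame 4, not frame 0
try:
    # fw-b's table maps only frames 4 and 5, so fw-a's frames are
    # unreachable; any attempt to touch an unmapped page faults.
    write("fw-b", 3, "snoop")
except MemoryError as err:
    print(err)
```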

Original virtualization schemes were software-based. These provided a certain level of virtualization, but left users open to problems such as resource exhaustion, denial of service (DoS), and unpredictable performance. Today's virtualization solutions integrate hardware-based techniques to solve these problems and ensure the same predictable performance and integrity as dedicated solutions.

— Dave Roberts, VP Strategy and Co-Founder, Inkra Networks
