Dirty Little Secrets of Virtualization

How can you ensure the stability of your data center while at the same time taking maximum advantage of the flexibility of virtualization?

By Steve Nye, Infoblox EVP of product strategy and corporate development
Wed, April 27, 2011

Network World — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

The virtualized data center has accelerated the pace of operational change. Virtual machines are reconfigured, computing loads are moved, and applications are scaled up and down rapidly. Rapid change brings mistakes: analysts estimate that 60% to 80% of data center problems are caused by management errors. How can you ensure the stability of your data center while at the same time taking maximum advantage of the flexibility of virtualization?

Virtualization promises to improve data center operations, and indeed it does. Server consolidation reduces hardware and power costs. The ability to migrate running loads without stopping them greatly eases hardware management. And the ability to deploy a new virtual machine in a fraction of the time it takes to provision a physical server makes application development and deployment faster and more effective.

However, the advantages of virtualization come with associated costs. The hypervisor adds another layer of complexity to the software stack and imposes new requirements on the servers, the storage system and, especially, the network. And while the hypervisor offers some automation that simplifies server operations, the environment around the virtual cluster is affected without being made any simpler.

In a recent survey of Infoblox customers, 70% reported that virtualization put more pressure on their network operations. It's easy to see the source of this pressure. Every virtualization initiative is surrounded by physical resources:

• Storage systems

• Users, workstations and partner networks

• Load balancers and security devices

• Remote peer servers

• Physical unvirtualized servers

• Competing hypervisors that are not compatible with one another

• Private clouds, laboratory systems and other specialized clusters

The boundary between each of these elements and the virtualized environment is a place where operational mistakes can be made. Both sides of the boundary matter; the hypervisor's configuration may be incorrect, or the external environment may be misconfigured. When a performance problem arises, information from both sides of the boundary must be integrated to find the solution.
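
To catch boundary mismatches of this kind before they become performance problems, some teams script simple cross-checks between the hypervisor's view and the surrounding infrastructure. The following is a minimal sketch in Python that compares the IP addresses a hypervisor reports for its virtual machines against what DNS actually resolves; the CSV export (vm_inventory.csv) and its column names are hypothetical stand-ins for whatever inventory your own tooling produces.

# Minimal sketch: cross-check hypervisor-reported VM addresses against DNS.
# The CSV export and its "hostname"/"ip_address" columns are assumptions;
# adapt them to the inventory format your hypervisor tooling provides.
import csv
import socket


def dns_lookup(hostname):
    """Return the address DNS resolves for hostname, or None on failure."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None


def audit_inventory(path):
    """Report VMs whose DNS records disagree with the hypervisor's view."""
    mismatches = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hostname = row["hostname"]
            expected_ip = row["ip_address"]
            resolved_ip = dns_lookup(hostname)
            if resolved_ip is None:
                mismatches.append((hostname, expected_ip, "no DNS record"))
            elif resolved_ip != expected_ip:
                mismatches.append((hostname, expected_ip, resolved_ip))
    return mismatches


if __name__ == "__main__":
    for hostname, expected, actual in audit_inventory("vm_inventory.csv"):
        print(f"{hostname}: hypervisor says {expected}, DNS says {actual}")

A check like this is deliberately simple and cheap to run; the point is that neither the hypervisor's records nor the external DNS data alone can tell you which side is wrong, so the diagnosis has to combine information from both.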

When new applications are deployed, both sides must be validated in advance. Mistakes and inconsistencies show up in three ways: application performance problems, delays in operational procedures, and inefficient operations that eat up staff time. Each data center will have its own pattern; here are some examples:

Application performance becomes poor or inconsistent

Originally published on www.networkworld.com.