What is virtualization?
Virtualization refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them. By providing a logical view of computing resources, rather than a physical view, virtualization solutions make it possible to do two very useful things: they let you trick your operating systems into thinking that a group of servers is a single pool of computing resources, and they let you run multiple operating systems simultaneously on a single machine.
Virtualization has its roots in partitioning, which divides a single physical server into multiple logical servers. Once the physical server is divided, each logical server can run an operating system and applications independently. In the 1990s, virtualization was used primarily to re-create end-user environments on a single piece of mainframe hardware. If you were an IT administrator and you wanted to roll out new software, but you wanted to see how it would work on a Windows NT or a Linux machine, you used virtualization technologies to create the various user environments.
But with the advent of the x86 architecture and inexpensive PCs, virtualization faded and seemed to be little more than a fad of the mainframe era. It’s fair to credit the recent rebirth of virtualization on x86 to the founders of the current market leader, VMware. VMware developed the first hypervisor for the x86 architecture in the 1990s, planting the seeds for the current virtualization boom.
Why would I want virtualization?
The industry buzz around virtualization is just short of deafening. This gotta-have-it capability has fast become gonna-get-it technology, as new vendors enter the market, and enterprise software providers weave it into the latest versions of their product lines. The reason: Virtualization continues to demonstrate additional tangible benefits the more it’s used, broadening its value to the enterprise at each step.
Server consolidation is definitely the sweet spot in this market. Virtualization has become the cornerstone of every enterprise’s favorite money-saving initiative. Industry analysts report that between 60 percent and 80 percent of IT departments are pursuing server consolidation projects. It’s easy to see why: By reducing the numbers and types of servers that support their business applications, companies are looking at significant cost savings.
Less power consumption, both from the servers themselves and the facilities’ cooling systems, and fuller use of existing, underutilized computing resources translate into a longer life for the data center and a fatter bottom line. And a smaller server footprint is simpler to manage.
However, industry watchers report that most companies begin their exploration of virtualization through application testing and development. Virtualization has quickly evolved from a neat trick for running extra operating systems into a mainstream tool for software developers. Rarely are applications created today for a single operating system; virtualization allows developers working on a single workstation to write code that runs in many different environments, and perhaps more importantly, to test that code. This is a noncritical environment, generally speaking, and so it’s an ideal place to kick the tires.
Once the application developers are happy, and the server farm has been turned into a seamless pool of computing resources, storage and network consolidation start to move up the to-do list. Other virtualization-enabled features and capabilities worth considering: high availability, disaster recovery and workload balancing.
How can virtualization benefit my business?
Beyond the potentially dramatic cost savings, virtualization can greatly enhance an organization’s business agility. Companies that employ clustering, partitioning, workload management and other virtualization techniques to configure groups of servers into reusable pools of resources are better positioned to respond to the changing demands their business places on those resources.
Also, this technology offers the potential for a fundamental change in the way IT managers think about computing resources. When managing individual boxes becomes less of a challenge, the focus of IT can shift from the technology to the services the technology can provide.
What are the different types of virtualization?
There are three basic categories of virtualization: Storage virtualization melds physical storage from multiple network storage devices so that they appear to be a single storage device; network virtualization combines computing resources in a network by splitting the available bandwidth into independent channels that can be assigned to a particular server or device in real-time; and server virtualization hides the physical nature of server resources, including the number and identity of individual servers, processors and operating systems, from the software running on them.
This last category is far and away the most common application of the technology today, and it is widely considered the primary driver of the market. When most people use the term “virtualization,” they’re likely talking about server virtualization.
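Of the three categories, the storage case is the easiest to sketch. A virtualization layer can concatenate several physical devices into one logical address space and translate logical offsets back to (device, offset) pairs. The toy Python class below is a hypothetical illustration of that idea—the class name and device sizes are invented, not any vendor's API:

```python
# Hypothetical sketch: melding several physical "devices" (represented
# here only by their byte capacities) into one logical device by
# translating logical offsets to (device, physical offset) pairs.
class StoragePool:
    def __init__(self, device_sizes):
        self.devices = list(device_sizes)   # capacity of each physical device

    @property
    def capacity(self):
        return sum(self.devices)            # appears as a single device

    def locate(self, logical_offset):
        """Map a logical offset to (device_index, physical_offset)."""
        if not 0 <= logical_offset < self.capacity:
            raise ValueError("offset out of range")
        for i, size in enumerate(self.devices):
            if logical_offset < size:
                return i, logical_offset
            logical_offset -= size

pool = StoragePool([100, 50, 200])  # three physical devices
print(pool.capacity)                # 350 -- one logical device
print(pool.locate(120))             # (1, 20) -- second device, offset 20
```

Software running above the pool sees only the 350-unit logical device; the translation to physical devices happens entirely inside the virtualization layer.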
What important terminology should I know?
What is a hypervisor?
The hypervisor is the most basic virtualization component. It’s the software that decouples the operating system and applications from their physical resources. A hypervisor has its own kernel and it’s installed directly on the hardware, or “bare metal.” It is, almost literally, inserted between the hardware and the OS.
What is a virtual machine?
A virtual machine (VM) is a self-contained operating environment—software that works with, but is independent of, a host operating system. In other words, it’s a platform-independent software implementation of a CPU that runs compiled code. A Java virtual machine, for example, will run any Java-based program (more or less). A VM must be written specifically for the OS on which it runs. Virtualization technologies are sometimes called dynamic virtual machine software.
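The “software implementation of a CPU that runs compiled code” idea can be seen in miniature in a stack-based bytecode interpreter. The toy Python sketch below is purely illustrative—the instruction names and program are invented, and real VMs such as the JVM are vastly more sophisticated:

```python
# Toy process VM: a stack machine executing a tiny invented bytecode,
# in the same (vastly simplified) spirit as the JVM running
# platform-independent compiled code.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# The same bytecode runs unchanged on any host that has this interpreter.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # (2 + 3) * 4 = 20
```

Portability comes from the interpreter, not the program: port `run()` to a new platform and every existing bytecode program follows for free.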
What is paravirtualization?
Paravirtualization is a type of virtualization in which the entire OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The kernels of both the OS and the hypervisor must be modified, however, to accommodate this close interaction. A paravirtualized Linux operating system, for example, is specifically optimized to run in a virtual environment. Full virtualization, in contrast, presents an abstract layer that intercepts all calls to physical resources.
Paravirtualization relies on a virtualized subset of the x86 architecture. Recent chip-level enhancements from both Intel and AMD, however, are helping to support virtualization schemes that do not require modified operating systems. Intel’s “Vanderpool” chip-level virtualization technology (productized as Intel VT) was one of the first of these innovations; AMD’s “Pacifica” extension (now AMD-V) provides comparable support. Both are designed to allow simpler virtualization code, and the potential for better performance in fully virtualized environments.
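The difference between the two approaches can be caricatured in a few lines of Python. This is a hypothetical sketch—the class and operation names are invented—showing only the control-flow distinction: an unmodified guest's privileged operation must be intercepted (trapped) by the hypervisor, while a paravirtualized guest calls the hypervisor explicitly via a hypercall.

```python
# Hypothetical sketch (invented names) contrasting full virtualization
# and paravirtualization. In full virtualization the hypervisor traps
# privileged operations behind the guest's back; in paravirtualization
# the modified guest kernel makes an explicit "hypercall".
class Hypervisor:
    def __init__(self):
        self.log = []

    def trap(self, op):        # full-virtualization path: intercept & emulate
        self.log.append(("trapped", op))
        return f"emulated {op}"

    def hypercall(self, op):   # paravirtualization path: explicit request
        self.log.append(("hypercall", op))
        return f"handled {op}"

hv = Hypervisor()

def unmodified_guest(hv):
    # Guest "executes" a privileged instruction; hypervisor must intercept it.
    return hv.trap("write_page_table")

def paravirtualized_guest(hv):
    # Guest kernel was modified to ask the hypervisor directly.
    return hv.hypercall("write_page_table")

print(unmodified_guest(hv))       # emulated write_page_table
print(paravirtualized_guest(hv))  # handled write_page_table
```

The performance argument for paravirtualization is that the explicit path skips the expensive trap-and-emulate machinery; hardware assists like Intel VT and AMD-V cheapen the trap path instead, so unmodified guests can run well too.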
What is application virtualization?
Virtualization in the application layer isolates software programs from the hardware and the OS, essentially encapsulating them as independent, moveable objects that can be relocated without disturbing other systems. Application virtualization technologies minimize app-related alterations to the OS, and mitigate compatibility challenges with other programs.
What is a virtual appliance?
A virtual appliance (VA) is not, as its name suggests, a piece of hardware. It is, rather, a prebuilt, preconfigured application bundled with an operating system inside a virtual machine. The VA is a software distribution vehicle: VMware and others tout it as a better way to install and configure software, and to package demonstrations, proof-of-concept projects and evaluations. The VA targets the virtualization layer, so it needs a destination with a hypervisor.
What is Xen?
The Xen Project has developed and continues to evolve a free, open-source hypervisor for x86. Available since 2003 under the GNU General Public License, Xen runs directly on the hardware, and because guest operating systems must be modified to cooperate with it, it is considered paravirtualization technology. The project originated as a research project at the University of Cambridge led by Ian Pratt, who later left the school to found XenSource, the first company to implement a commercial version of the Xen hypervisor. A number of large enterprise companies now support Xen, including Microsoft, Novell and IBM. XenSource (not surprisingly) and startup Virtual Iron offer Xen-based virtualization solutions.
What are the cost benefits of virtualization?
IT departments everywhere are being asked to do more with less, and the name of the game today is resource utilization. Virtualization technologies offer a direct and readily quantifiable means of achieving that mandate by collecting disparate computing resources into shareable pools.
For example, analysts estimate that the average enterprise utilizes somewhere between 5 percent and 25 percent of its server capacity. In those companies, most of the power consumed by their hardware is just heating the room in idle cycles. Employing virtualization technology to consolidate underutilized x86 servers in the data center yields both an immediate, one-time cost saving and potentially significant ongoing savings.
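To see how the arithmetic works out, here is a back-of-envelope consolidation estimate in Python. The fleet size and target utilization are illustrative assumptions; only the 5-to-25-percent utilization range comes from the analyst figures above.

```python
import math

# Back-of-envelope consolidation math. The fleet size and target
# utilization are illustrative assumptions, not analyst figures.
servers = 100                # physical servers before consolidation (assumed)
avg_utilization = 0.10       # 10% -- within the 5-25% range cited above
target_utilization = 0.60    # conservative post-consolidation ceiling (assumed)

total_work = servers * avg_utilization          # 10 "servers" of real work
servers_needed = math.ceil(total_work / target_utilization)
print(servers_needed)                           # 17 hosts instead of 100
```

Even with a conservative 60 percent ceiling, a fleet idling at 10 percent utilization shrinks by more than 80 percent—which is where the power, cooling and rack-space savings in the next paragraphs come from.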
The most obvious immediate impact here comes from a reduction in the number of servers in the data center. Fewer machines means less daily power consumption, both from the servers themselves and the cooling systems that companies must operate and maintain to keep them from overheating.
Turning a swarm of servers into a seamless computing pool can also lessen the scope of future hardware expenditures, while putting the economies of things like utility pricing models and pay-per-use plans on the table. Moreover, a server virtualization strategy can open up valuable rack space, giving a company room to grow.
From a human resources standpoint, a sleeker server farm also makes better use of administrators’ time: with fewer boxes to mind, staff can be deployed where they add the most value.
What kinds of challenges does virtualization present?
This technology changes the way a data center is managed, administered and operated. For example, before server virtualization, you could walk into any data center, ask the admin to name the organization’s top five applications, and he or she could point to the machines those apps were running on. Virtualization breaks that traditional coupling of hardware and software.
This decoupling creates the potential for performance conflicts. For example, some applications have cyclical performance profiles. A West Coast stock-trading application and a SIMEX app running on the same machine are going to overlap at peak market hours, slowing performance. Consequently, administrators have to think through how the virtualized data center will operate. The major virtualization vendors typically provide extensive technical resources and at least some training to explain how their solutions work. But each data center operates differently, and it’s up to the administrators to know their systems.
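One way administrators can think this through is to compare workload profiles before co-locating applications. The Python sketch below is a hypothetical illustration—the profiles, names and threshold are invented—that flags two workloads whose load peaks fall in the same hours:

```python
# Hypothetical sketch (invented profiles and threshold): flag VM pairs
# whose hourly load profiles peak together, so an administrator can
# avoid placing them on the same physical host.
def peak_hours(profile, threshold=0.8):
    """Hour indices where load meets or exceeds the threshold."""
    return {h for h, load in enumerate(profile) if load >= threshold}

def conflict(profile_a, profile_b):
    """True if both workloads are near peak during the same hour."""
    return bool(peak_hours(profile_a) & peak_hours(profile_b))

# Two illustrative 6-hour load profiles that both peak in hours 2-3.
trading_app = [0.2, 0.5, 0.9, 0.95, 0.4, 0.1]
reporting_app = [0.1, 0.3, 0.85, 0.9, 0.3, 0.1]
print(conflict(trading_app, reporting_app))  # True -- keep on separate hosts
```

Real capacity-planning tools use far richer telemetry, but the underlying question is the same: do these workloads demand the machine at the same time?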
What should I look for in a virtualization solution?
In a word: management. The core hypervisor technology that decouples the application stack from the underlying hardware is well on its way to commoditization. The large enterprise software vendors (Microsoft, Sun Microsystems, BEA Systems, Hewlett-Packard, BMC and CA, for example) are including it in their product suites, and the standalone virtualization vendors are giving it away. Where they differ is in their ability to provide tools for managing, monitoring and optimizing the allocation of virtualized resources. Look for solutions that provide easy-to-use tools for gathering statistics and applying dynamic policies to better allocate your physical resources among the virtual consumers of those resources.
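A dynamic allocation policy need not be exotic. The hypothetical Python sketch below—the function name, VM names and capacity numbers are all invented—assigns each virtual machine to the first physical host with enough spare CPU capacity, a simple first-fit packing of the kind a management tool might automate:

```python
# Hypothetical sketch of a placement policy: assign each VM to the
# first host with enough spare CPU capacity (first-fit bin packing),
# placing the hungriest VMs first.
def place(vms, hosts):
    """vms: {name: cpu_demand}; hosts: {name: cpu_capacity}."""
    free = dict(hosts)
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host, spare in free.items():
            if spare >= demand:
                placement[vm] = host
                free[host] -= demand
                break
        else:
            placement[vm] = None  # no host can take it; flag for review
    return placement

vms = {"db": 4, "web": 2, "batch": 3, "cache": 1}
hosts = {"host1": 6, "host2": 5}
print(place(vms, hosts))
```

Production schedulers weigh memory, I/O, affinity rules and live-migration cost as well, but this is the kind of statistics-driven decision the management layer makes on your behalf.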
Consequently, the innovation in the virtualization space is happening up the stack. The next-generation products are all about management. VMware has seen the writing on the wall and has made the shift: its VMware Infrastructure suite, built around ESX Server, blends servers, CPUs, memory, networking, storage and applications into a single seamless pool of computing resources.
Virtualization can go a long way toward reducing the physical requirements of the data center, but it can also compound the level of management complexity of those servers. So look for solutions that provide cross-platform systems management for both the virtual and physical machines.
Also, you’ll want the ability to migrate your organization’s legacy applications and existing operating systems, without modification, onto virtual partitions. This migration should make it simpler to enhance the performance of those applications, but you’ll need a solution that supports the integration of virtualization with legacy management tools.
Virtualization is no longer just about server consolidation. Flexibility is another key benefit of the technology. In virtualized environments, it’s easier to move things around, to encapsulate, to archive and to optimize. The leading virtualization vendors are providing “live migration” capabilities that make a network administrator’s life easier and more productive.