by Laurianne McLaughlin

Virtualization at Warp Speed: How One Company Made it Fly

Dec 12, 2007 | 7 mins
Data Center | Virtualization

Want to virtualize 95 percent of your production servers within a year? Vincent Biddlecombe did. Here's how the CTO of logistics company Transplace went from having no virtualization expertise in house to running the company's mission-critical app on a VM.

Many CIOs wonder how far and how fast they can run with virtualization right now. Once you get an initial taste of the cost savings, flexibility, and speed of provisioning that server virtualization enables, you want to make a fast break for a larger victory. Vincent Biddlecombe, CTO of Transplace, doesn’t wonder anymore: He just completed an instructive sprint.


Since mid-2007, Biddlecombe has virtualized almost all the production servers at Transplace, a third-party transportation logistics provider. (The company helps customers such as retail chain stores maximize efficiency in their supply chain and shipping activities.) And he’s been running his company’s most critical application—a home-grown transportation system—on a VMware ESX environment for a month now, with no major hiccups.

By the way, Biddlecombe didn’t have any virtualization or VMware expertise in house among his 100 IT staffers when he started this project: “We were a Sun group,” he says. To address this issue, he hired a consulting partner, Catapult Systems, to bring VMware knowledge to his group.

Timing is Everything

For Transplace, the 2007 sprint toward virtualization made sense on both a business level and a technology level, Biddlecombe says. The business desire: Transplace works with its customers via Software-as-a-Service (SaaS), so the company needs the best scalability, availability and manageability it can get for hosting customer data. Virtualization appealed for both disaster recovery and scalability reasons, Biddlecombe says. “We can simply add capability as we need it.”

On the technology side, Transplace’s internal systems were due for a facelift. In early 2007, Transplace decided to move its production data center from the corporate office in Plano, Texas, to an offsite co-location facility in nearby Dallas. (Transplace also has a test/development and disaster recovery facility in Lowell, Ark.) At about this time, the company was due to upgrade its server hardware, Biddlecombe says, so it made sense to roll out the virtualization effort with that server upgrade.

For Transplace’s database applications, he switched from Sun servers (running Solaris) to IBM mid-range servers (p570 servers using the Power6 processor and running AIX). For Transplace’s middle-tier servers, he switched from Sun servers to Dell PowerEdge 2950 servers, using VMware’s ESX Server software for virtualization. (For storage, Transplace chose Network Appliance’s FAS 3070 storage systems.)

“We wanted to provide an environment where we could have maximum availability between our production and disaster recovery data centers,” Biddlecombe says. “By using a combination of VMware with the storage, we’ve effectively copied our servers out to the disaster recovery center.”

Today, Transplace’s production environment is almost completely virtualized, and Biddlecombe estimates it will be 95 percent virtualized by year’s end. That’s quite an achievement, says Burton Group research analyst Chris Wolf. “From my experience, organizations that are able to virtualize 40 percent of their servers in a year are doing really well,” Wolf says.

In total, Biddlecombe’s IT group now runs about 110 VMs. In fact, the only significant applications he’s not running on a VM right now are his Microsoft Exchange servers and SQL Server databases—both known for being extremely I/O intensive. (They hog resources on physical servers to the point that virtualizing them often doesn’t make sense.)

The Mission-Critical App Goes Virtual

The thought of running mission-critical ERP applications on a virtual machine makes many CIOs too nervous to try it, even now that ERP giant SAP has announced support for its products running on VMware. But not Biddlecombe. The first month of the virtualized run of Transplace’s mission-critical app, a transportation management system, is coming to a close now, and it has proven pretty uneventful, Biddlecombe says. He saw no major pitfalls or performance issues.

This transportation management system determines, for instance, which orders need to be shipped together for consolidation purposes, how orders should best be shipped (parcel, full truckload or other options), which shipping carrier is optimal, and so on. This system also handles freight audit and payment. Effectively serving as Transplace’s ERP system, the transportation system handles 4 million shipments per year, or about $2.75 billion in transportation spending annually. Developed in-house using Java, it runs on BEA WebLogic application servers and Oracle for database work.
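To make the mode-selection decision concrete, here is a drastically simplified, purely illustrative sketch. The thresholds and rules below are assumptions for illustration only; Transplace’s actual Java/WebLogic logic, which weighs rates, lanes, service levels and carrier contracts, is proprietary and far richer.

```python
# Illustrative only: a toy mode-selection rule of the kind a transportation
# management system applies to each order. Thresholds are made up.

def choose_mode(weight_lbs, pieces):
    """Pick a shipping mode from rough weight/piece thresholds (illustrative)."""
    if weight_lbs <= 150 and pieces <= 2:
        return "parcel"
    if weight_lbs < 10000:
        return "less-than-truckload"
    return "full-truckload"

print(choose_mode(40, 1))        # parcel
print(choose_mode(6500, 8))      # less-than-truckload
print(choose_mode(24000, 20))    # full-truckload
```

The real system layers carrier selection, consolidation and freight audit on top of decisions like this one, which is why it effectively serves as the company’s ERP.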

Biddlecombe has dedicated 50 VMs to support the components of the transportation system running on WebLogic, and another 50 to 60 VMs for the remaining components and everything else.

To determine the right number of VMs and balance workloads on the servers running those crucial VMs, the IT team did extensive prototyping. But they had an advantage that not all companies have with their ERP systems: Since the transportation system software was developed in-house, Biddlecombe’s team knew a lot of its performance quirks already. “We’re intimately familiar with what our software needs,” says Biddlecombe, who has been with Transplace for three years and served as CTO for fifteen months.

Interestingly, Biddlecombe has not found it necessary yet to invest in any new third-party management tools from any of the virtualization upstarts, though he is scoping out one emerging need. Favoring a layered monitoring approach, he currently uses HP’s Business Availability Center tools at the top level, HP’s SiteScope at the next level (measuring factors like memory utilization in every app in every VM) and then network and database monitoring tools. He’s also using VMware’s vMotion tool to move VMs around as needed.

“The one area we haven’t addressed is: Are all the VMs sized properly?” Biddlecombe says. “I think we’ve given some VMs more memory than they need. Our emphasis to date has been application performance. The last layer will be reducing VM resources so they have just enough,” he says. The IT team can get some of the memory data from the SiteScope tool, but they have to work one VM at a time, he notes. This is the need that’s making him consider finding another management tool.
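The right-sizing arithmetic itself is simple once per-VM memory samples have been collected from a monitor such as SiteScope. As a hedged sketch of the idea, not Transplace’s actual tooling, a recommended allocation could be computed from peak observed usage plus headroom:

```python
# Hypothetical right-sizing sketch: given memory-usage samples (MB) for one
# VM, recommend an allocation of peak usage plus 25% headroom, rounded up to
# the nearest 256 MB. Sample data and thresholds are illustrative only.

def recommend_memory_mb(samples_mb, headroom=0.25, granularity=256):
    """Return a recommended memory allocation for one VM's usage samples."""
    peak = max(samples_mb)
    target = peak * (1 + headroom)
    # Round up to the next multiple of the allocation granularity.
    return ((int(target) + granularity - 1) // granularity) * granularity

# Example: a VM allocated 4096 MB whose observed peak is only 1900 MB.
allocated = 4096
samples = [1200, 1450, 1900, 1750]
recommended = recommend_memory_mb(samples)
print(recommended)               # 2560
print(allocated - recommended)   # 1536 MB reclaimable
```

Run across all 110 VMs, a calculation like this would surface exactly the over-allocation Biddlecombe suspects, which is the gap a fleet-wide management tool would fill.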

For securing the virtual environment, Transplace’s IT team applies the same security tools (McAfee antivirus and others) and practices that they would with a physical server, Biddlecombe says.

Provisioning in 30 Minutes or Less

As for metrics to prove his success, Biddlecombe says he wasn’t able to do many before-and-after comparisons because so many factors changed at once: a new data center location, new hardware and all those new VMs got wrapped up in the same effort. What he can measure, however, is how quickly he can provision a new server or new computing power for the business side. It used to take him a week to provision a server; now it takes 30 minutes.

“We have gained a dramatically increased capacity to provision new servers, and more scalability,” he says.

The ability to scale to add VMs right away helps Transplace deal with any spikes in data throughput from its customers: “Because we’re SaaS, our customers benefited immediately,” he says.

And when IT wants to create a test and development VM, or a business executive needs a new customer demonstration environment, IT can do it within the half hour, he notes.

In another benefit of the highly virtualized environment, the servers at the disaster recovery site can serve double duty, Biddlecombe says. They can be test VMs one moment and disaster recovery the next. “We don’t have to have 100 servers just standing there waiting for disaster,” he says.

What’s next on Biddlecombe’s to-do list with regard to virtualization? He’ll continue to ensure that the backup strategy is solid, he says. “There’s this concept that I’m putting a lot of eggs in one basket,” he says. “We use VMware Consolidated Backup, but you also have to make sure all your OS patches are applied, backups done properly. You want to make sure you’re doing the blocking and tackling.”