by Bernard Golden

How Cloud Computing Is Forcing IT Evolution

Feb 29, 2012 | 6 mins
CIO | Cloud Computing | Developer

To say cloud computing is having a dramatic effect on IT is an understatement. The capability and agility of the cloud is forcing a rapid evolution. Just as in living ecosystems, IT professionals who fail to adapt will, inevitably, dwindle into extinction.

I had the privilege of chairing the infrastructure track at last week’s Cloud Connect conference. Three of the presentations were particularly interesting, offering a good perspective on just how dramatic an effect cloud computing is having on IT. Summed up, the capability and agility of cloud computing is forcing an extremely rapid evolution.

In a sense, these effects are akin to what would happen to an established living ecosystem were a significant change to occur within it. One could expect to see existing species stressed by new characteristics developing in the ecosystem, forcing them to adapt rapidly to survive. Those that fail to adapt will, inevitably, dwindle into extinction.

Cloud Allows for Data Center Scale

Two of the presentations at the Cloud Connect conference addressed how organizations are transforming data centers as a result of the need for scale and density. Ron Vokoun, a construction executive with Mortenson Construction, a company that builds data centers, began by noting that the projects his firm is taking on are quickly shifting toward larger data centers. Mortenson is seeing small and medium-size enterprises leaving the business of owning and operating data centers, preferring to leverage colocation and cloud environments, leaving the responsibility for managing the facility to some other entity. The typical project size his firm sees has doubled or quadrupled, with 20,000 square feet the typical minimum.

Associated with this shift are objectives only available to more sophisticated operators, such as high energy efficiency, raised operating temperatures, data center siting to take advantage of cool climates, and the use of modularization/containerization. Each of these requires a level of sophistication on the part of the operator well beyond what a typical enterprise can bring to bear. Combining the elements that Vokoun outlined achieves a significant cost advantage compared to typical corporate data centers.

The bottom line is that Mortenson is seeing an increasing tendency for end user organizations to outsource their computing infrastructure to specialized providers who obtain significant advantages compared to traditional corporate environments.

While Vokoun gave a general perspective on data center trends, Mark Thiele, executive vice president of data center technology with Switch, spoke more specifically about what he sees working for one of the new mega-data center operators. Switch is best known for operating a giant data center outside of Las Vegas called the SuperNAP.

Switch is an exemplar of the new breed of data center operator. Its facility is enormous: 400,000 square feet today, with plans to expand to more than 2 million square feet. The facility draws 100 megawatts of power, delivered to cabinets at a density of 1,500 watts per square foot. It has high levels of security and touts itself as a host and interconnector of clouds — in other words, it is so large that different cloud providers locate their operations inside of Switch, the better to gain economic efficiency and, just as important, to ease cross-cloud connectivity with other providers.

As Joyent CTO and Chief Scientist Jason Hoffman told GigaOm, “There actually is not anything of comparison in the world … not even remotely close [in terms of a general-purpose data center]. … They’re the only people who actually sat down in the last 20 years and thought what should a data center look like today, not in 1985.”

The end result of this futuristic view of data center requirements is an enormously scaled, highly efficient (1.24 PUE), cost-effective computing environment that makes the typical corporate data center look like a relic ready for the scrap heap.
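For readers unfamiliar with the metric, PUE (power usage effectiveness) is simply the ratio of total facility power to the power actually delivered to IT equipment; lower is better, with 1.0 the theoretical ideal. A minimal illustration (the kilowatt figures below are made up for the example; only the 1.24 ratio comes from the article):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# A PUE of 1.24 means only 24% overhead goes to cooling, power delivery, etc.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: 12,400 kW of total draw powering 10,000 kW of IT load.
print(pue(12_400, 10_000))  # 1.24
```

Typical corporate data centers of the era often ran at a PUE of 2.0 or worse, meaning a watt of overhead for every watt of computing.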

A New Breed of Apps, Big Data and Game Changers

But what’s driving the need for corporations to use these external providers? What’s changed in terms of computing needs that would require a fundamentally different approach to computing? The final presentation in the conference’s infrastructure track illuminated how applications are rapidly transforming to support new business requirements.

Michael Peacock is a United Kingdom-based software developer at Smith Electric Vehicles, which manufactures battery-powered commercial vehicles. These aren’t golf cart-sized vehicles, either. They run eight to 13 tons and transport payloads ranging from 7,000 to 16,000 pounds. In a phrase, big iron.

As one might imagine, it’s important to keep track of what happens with these trucks throughout the workday — speeds travelled, power consumption, motor speeds and so on. Smith has extensive telemetry built into its vehicles, so much so that it sends, in near real-time, enough data that it results in 4,000 MySQL inserts per second, totaling 1.5 billion inserts each day.

When Peacock started his project, his company’s computing infrastructure was overwhelmed. The changes put into place to support the need for truck telemetry that he described are eye-opening, to say the least.

Suffice it to say that the IT infrastructure of this rather traditional enterprise (it was founded in 1920) now resembles a big data, cloud-based, NoSQL-using Web-scale company, with the migration to the new infrastructure driven by sheer scale.

What did Smith do to its environment to address its telemetry requirements?

One example: To support traffic prone to bursts and unpredictable processing requirements, Smith shifted to a queue-based task submission architecture, with the queue located in a cloud provider’s infrastructure.
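The value of that architecture is that producers can enqueue work as fast as it arrives while workers drain the queue at whatever rate downstream systems sustain. Here is a minimal sketch using Python's standard library; the article doesn't say which hosted queue service Smith used, so a local `queue.Queue` stands in for the cloud-hosted queue, and the telemetry field names are invented for illustration:

```python
import queue
import threading

# Producers enqueue telemetry readings as they arrive; a worker pool drains
# the queue at its own pace, absorbing bursts without dropping work.
task_queue: queue.Queue = queue.Queue()
processed = []

def worker():
    while True:
        reading = task_queue.get()
        if reading is None:            # sentinel tells this worker to exit
            task_queue.task_done()
            return
        processed.append(reading)      # stand-in for the real database insert
        task_queue.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()

# A burst of 100 readings arrives faster than storage could absorb directly.
for i in range(100):
    task_queue.put({"vehicle_id": i % 5, "speed_mph": 40, "battery_pct": 80})

for _ in workers:
    task_queue.put(None)               # one shutdown sentinel per worker
for t in workers:
    t.join()

print(len(processed))                  # 100 -- every reading handled
```

Placing the queue in a cloud provider's infrastructure means the buffer itself scales elastically during bursts, rather than being capped by on-premises capacity.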

Additionally, the data loads overwhelmed the capacity of Smith’s storage infrastructure, requiring a deep dive into hardware configuration in order to wring as much performance out of the SAN as possible.

A third example: The application’s databases were streamlined with schema redesign and sharding to improve application performance.
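Sharding splits one large table across several database instances so that each absorbs only a fraction of the insert load. The sketch below shows the routing idea in miniature; the article doesn't say what shard key Smith chose, so vehicle ID and simple modulo routing are assumptions for illustration:

```python
# Minimal sketch of horizontal sharding: each row is routed to one of N
# "shards" (in-memory lists here, standing in for database instances).
NUM_SHARDS = 4
shards = {i: [] for i in range(NUM_SHARDS)}

def shard_for(vehicle_id: int) -> int:
    # Simple modulo routing; production systems often prefer consistent
    # hashing so shards can be added without rebalancing every row.
    return vehicle_id % NUM_SHARDS

def insert(vehicle_id: int, row: dict) -> None:
    shards[shard_for(vehicle_id)].append(row)

for vid in range(1000):
    insert(vid, {"vehicle_id": vid, "speed_mph": 35})

print([len(shards[i]) for i in range(NUM_SHARDS)])  # [250, 250, 250, 250]
```

With the load spread evenly, each shard sees a quarter of the write traffic, which is how an insert rate that overwhelms one MySQL server becomes tractable.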

Finally, to improve analytics performance, data were pre-aggregated via background batch processing in order to reduce processing time for queries.
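Pre-aggregation trades batch compute time for query time: a background job rolls raw telemetry up into summary rows, so an analytics query reads a handful of aggregates instead of scanning billions of raw inserts. A minimal sketch, with field names and figures invented for illustration (the article doesn't describe Smith's actual schema):

```python
from collections import defaultdict

# Background batch job: roll raw readings up into per-vehicle daily totals.
raw_readings = [
    {"vehicle_id": 1, "day": "2012-02-20", "kwh": 2.0},
    {"vehicle_id": 1, "day": "2012-02-20", "kwh": 3.5},
    {"vehicle_id": 2, "day": "2012-02-20", "kwh": 1.5},
]

def aggregate_daily(readings):
    totals = defaultdict(float)
    for r in readings:
        totals[(r["vehicle_id"], r["day"])] += r["kwh"]
    return dict(totals)

summary = aggregate_daily(raw_readings)

# An analytics query now reads one precomputed row rather than re-summing
# every raw reading at query time.
print(summary[(1, "2012-02-20")])  # 5.5
```

At Smith's scale the same pattern applies to billions of rows a day: the aggregation runs once in the background, and every subsequent query benefits.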

IT As the Business Process

What is striking about this project is that, as a company, Smith is a medium-sized, fairly traditional firm. But as an IT user, it is right on the frontier of today’s application techniques. This illustrates the fact that IT is moving from supporting business processes to being the business’s processes, and that traditional application designs and infrastructures are inadequate to support today’s needs.

In turn, this is driving the need to move to computing environments that are far beyond what a typical corporate environment, capable of supporting the computing requirements of a decade ago, can provide. The coming computing needs of corporations are driving an enormous transformation in how infrastructure is delivered — and who provides it. To return to the metaphor outlined at the beginning of this post, the computing ecosystem is undergoing a gigantic change, and every participant needs to figure out how it will evolve to meet the future — or face the unpalatable consequences.

Bernard Golden is the vice president of Enterprise Solutions for enStratus Networks, a cloud management software company. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies.