Demystifying Cloud Computing

Cloud:

noun

1. a visible mass of condensed water vapor floating in the atmosphere, typically high above the ground.

verb

2. figurative [trans.] make (a matter or mental process) unclear or uncertain; confuse.

This is how the New Oxford American Dictionary defines the term 'cloud'. The first meaning is pretty straightforward. However, add 'computing' to it, and you get an approximation of the second definition: something unclear and nebulous.

Over time, enterprises have been dealt a number of IT buzzwords that have mostly promised the moon. Some have delivered; others bit the dust. When it comes to offering technology in a pay-as-you-use services model, IT professionals have heard it all: on-demand computing, software-as-a-service, utility computing.

The new buzzword, cloud computing, is currently doing the rounds of the market and creating all sorts of confusion. Some think it is the next big trend in IT. Others feel it is just utility computing silk-wrapped in a newer term.

It isn't only the buzzword that's causing confusion. With vendors and analysts each defining cloud computing differently, the term has taken on an extremely fuzzy aura. To clear the haze and make sense of the new concept, read on. We'll try to figure out what cloud computing really means, how disruptive it is, what its potential advantages and disadvantages are and, most importantly, whether enterprises are ready for it.

The Story So Far...

In the traditional IT setup of an organization, IT shops believe firmly in the procure-and-provision approach when they have to deploy a new application, infrastructure or service. The problem of IT not being able to keep up with business demands stems from this conventional approach.

Businesses increasingly need to react in Internet time and decrease time-to-market if they want to respond to changes in the market and stay ahead of their competitors. But today IT must work through lengthy approval processes to procure infrastructure. And once infrastructure is approved, it still has to be assigned and prepped.

Forrester's principal analyst, James Staten, lays down the three reasons why it is difficult for enterprise IT to respond quickly to the business's dynamic requirements:

Capacity planning is too difficult: determining whether a datacenter can accommodate another service, where it should go, and what moves and what provisioning needs to take place to make room for a new service is time consuming. There's also a lack of good tools in this area. This is the main culprit for long deployment queues in most IT shops.

Balancing time-to-market against asset utilization is too challenging: between controlling IT spend and being responsive to the business, IT is stuck in a catch-22 situation. The old 'just give them a server' approach doesn't fly anymore.

The business wants a quick and dirty way to prototype: the business often comes to IT with requests that don't have budget approval or lack a fully-baked business case, hoping IT can squeeze them in. And IT can't afford to set up and manage an outside-facing play area -- especially with today's security imperatives.

In his report, Staten quotes Werner Vogels, Amazon's CTO, who argues that the companies best at datacenter management hold the solution to the problem of IT shops being unable to respond in time to dynamic business needs.

"Leading Web services companies have built their businesses around innovative approaches to IT infrastructure that maximize datacenter efficiency -- investments that have given them a distinct advantage over competitors that came to Web services from a traditional IT foundation," says Staten. This led Vogels to conclude that if managing a massive data center isn't a core competency of your business, then maybe you should pass the responsibility to someone who has:

One: vastly superior economics. The leading providers of Internet apps and services -- whether their own or as a hosting provider -- buy so much datacenter equipment that they have an enormous amount of negotiating power.

Two: better practices for handling dynamic workloads. The leading Internet services companies have invested not only in better processes, but have also built management and administration tools that let them spread applications across thousands of servers and scale them quickly. They have optimized their infrastructures to accommodate new services quickly and without disruption, letting them introduce new capabilities every week.

Three: expertise in dynamic capacity management. For these companies, the productivity of their assets is paramount, as the cost of their services is directly proportional to the ongoing costs of the datacenter. The more productivity they can wring from each square foot, the higher their profitability. Thus, they closely monitor the infrastructure consumption of each app.

Four: consumption-based cost tracking. It is the tight mapping of IT consumption by application that determines the margins on the services they provide. For most of these companies, this reporting is internal, but for an innovative few, this tracking is starting to be exposed as a new kind of offering.

Three streams of evolution are occurring in the enterprise IT space at almost the same time. The first is in supercomputing which, according to Staten, is moving from extremely large single systems to clusters of inexpensive systems and is being redefined as high-performance computing (HPC). It now consists of large numbers of x86-based systems grouped together with parallel computing technologies.

The second stream involves Internet Service Providers (ISPs), which are heading in a similar direction. As some of their services become increasingly commoditized, ISPs are finding it hard to improve margins, so they are chasing higher-value enterprise functions and moving in two probable directions: software-as-a-service (SaaS) and managed services provider (MSP).

However, there's a twist. "Being a SaaS provider requires ISPs to have expertise in delivery of apps over the Internet as a pay-per-use service. Most ISPs don't have that. That's why there are a greater number of software companies emerging as SaaS providers, compared to ISPs. ISPs are evolving more as MSPs," points out Staten.

The third evolution is around the data center itself. As Vogels substantiated, managing massive data centers in-house is increasingly becoming unwieldy. The result: more enterprises are looking to outsource a majority of their datacenter functions.

All three streams seem to be converging: HPC is gaining ground, ISPs are up-scaling their service models and leveraging HPC to offer value-added services, and enterprises are increasingly looking to outsource their more commoditized IT functions. Overlap the three and you get a new market opportunity: cloud computing.

"Due to their singular focus on maximizing the efficiency of their Internet services hosting practices, a select few Web services and hosting companies have realized they can deliver the benefits of their unique infrastructure practices to their customers as a new type of hosting service. Enter cloud computing," says Staten.

Welcome Cloud Computing

Staten describes the concept as "a pool of abstracted, highly-scalable, and managed compute infrastructure capable of hosting end-customer applications and billed by consumption."

Simply put, cloud computing is the next-generation model of computing services. It combines software provided as a service with the utility computing business model, running on grid or cluster computing technologies. Cloud computing aims to leverage supercomputing power, which can be measured in tens of trillions of computations per second, to deliver various IT services to users through the Web.

In his report, Staten refers to cloud computing as a service delivery platform built on the same fundamentals as traditional hosting or SaaS. The building blocks that take the concept beyond conventional IT service delivery models, he says, are:

-- A prescripted and abstracted infrastructure. Fundamental to the cloud computing model is standardization of infrastructure and abstraction layers that allow the fluid placement and movement of services. It starts with a flat implementation of scale-out server hardware that, for some clouds, serves as both compute and storage infrastructure (others are leveraging SAN storage). Their infrastructure enables the cloud and is decided upon solely by the cloud vendor; customers don't get to specify the infrastructure they want -- a major shift from traditional hosting.

-- Fully virtualized. Nearly every cloud computing vendor abstracts the hardware with some sort of server virtualization. The majority employ a hypervisor to keep costs low. Some have solutions that span virtual and physical servers via another middleware element, such as a grid engine.

-- Equipped with dynamic infrastructure software. Most clouds employ infrastructure software that can easily add, move, or change an application with very little, if any, intervention by cloud provider personnel.

-- Pay by use. Most clouds charge by actual use of resources -- in CPU hours, gigabytes (GB) of storage consumed, and gigabytes transferred -- rather than per server or as a flat monthly fee. Their pricing is compelling.

-- Free of long-term contracts. Most cloud vendors let you come and go as you please. The minimum order through XCalibre's FlexiScale cloud, for example, is one hour, with no sign-up fee. This makes clouds an ideal place to prototype a new service, conduct test and development, or run a limited-time campaign without IT commitments.

-- Application and OS independent. In most cases, cloud architectures support nearly any type of app a customer may want to host, as long as it does not need direct access to hardware or specialized hardware elements. Cloud vendors say there's nothing about their infrastructures that would prevent them from supporting any x86-compatible OS.

-- Free of software or hardware installation. You tap into a cloud just as you would any remote server. All you need is a log-in. There's no software or hardware requirement at the customer end, nor any need for specialized tools.
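The pay-by-use model described above amounts to simple consumption metering: the bill is the sum of unit rates multiplied by resources actually consumed, with no per-server or flat monthly fee. Here is a minimal sketch of that arithmetic; the rates and resource names are hypothetical illustrations, not any vendor's actual price list.

```python
# Hypothetical unit prices: per CPU-hour, per GB stored, per GB transferred.
# These values are illustrative only.
RATES = {"cpu_hours": 0.10, "gb_stored": 0.15, "gb_transferred": 0.18}

def monthly_bill(usage: dict) -> float:
    """Charge only for what was consumed -- no per-server or flat monthly fee."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

# Example: a small prototype app run for a limited-time campaign.
usage = {"cpu_hours": 720, "gb_stored": 50, "gb_transferred": 200}
print(monthly_bill(usage))  # 72.0 + 7.5 + 36.0 = 115.5
```

Because the meter starts and stops with actual consumption, a short-lived prototype or campaign costs only what it uses -- which is exactly why the no-contract, pay-per-use clouds described above suit such workloads.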

"Gartner likes to describe cloud computing as an Infrastructure-as-a-Service (IaaS) model. Cloud computing extends the existing capabilities of IT by offering infrastructure or platform-related services on a subscription model. It is a concept that provides computing resources residing on external devices connected to a mesh of grids, allowing users to deploy their applications or outsource their functions through the Internet without being aware of the whereabouts of the physical infrastructure," says Staten. What's significant to note is that, unlike utility or on-demand computing, the provider is in complete control of the infrastructure. This translates into a kind of service that enterprises can leverage without significant internal expertise or the hassle of controlling the supporting infrastructure.

Here's an example from Staten. "When an enterprise deploys a new app, it is the role of its IT shop to figure out where to fit it. The queue to get an app implemented is anywhere from four weeks to nine months. If someone in marketing wants to launch a new promotion, they can't wait that long. They would prefer to go to a cloud vendor, and without a lot of technical knowledge, they would get their app loaded and available, and would pay for it with their credit cards."

"Cloud computing gives an edge to enterprises as they can add capabilities and increase capacities on the fly without having to invest in infrastructure, training or licenses. One of the most important features of cloud computing is automated management and reallocation of resources. This means that a user can work on a platform without worrying about adaptability, scalability and elasticity," says Kaustubh Dhavse, deputy director of ICT practice at Frost & Sullivan, South Asia and Middle East.

In addition to online players such as Amazon, Google, Akamai and 3Tera, software giants like Microsoft have begun to take note of the new concept, which might create a seismic shift in the way IT is deployed and consumed.

Window to the Cloud World

Microsoft is not only introducing HPC-related enhancements in its development platform but also service-enabling a host of its enterprise apps to run on a cloud-computing environment. "We strongly feel that people are going to choose the best of both worlds (software and services). Practically, SaaS is just a delivery mechanism. It is not a platform paradigm. Software plus service is actually a platform paradigm and will reflect how people are going to consume software going forward," says Tarun Gulati, GM, marketing and operations, Microsoft India.

Conventionally, HPC has been the domain of open source software. Now, Microsoft wants some of that action. "Microsoft has made significant inroads into HPC in the last few years. MS Windows has started to fit architecturally with this grid model," points out Staten.
