Of all the reasons to move to public cloud solutions during the past decade, reducing computing costs has perennially ranked as a top driver. That remains true today: in its most recent cloud computing survey of more than 900 IT decision makers, IDG Enterprise found that 42% cited lower total cost of ownership as a key objective behind their cloud initiatives, a higher share of mentions than any other cloud driver received.
Public cloud application and infrastructure offerings have always had a compelling TCO value proposition. Rather than purchasing, installing, maintaining, and operating their own computing environments, companies could slash their CapEx and OpEx costs by simply “renting” services and infrastructure on an as-needed basis from public cloud providers. While fundamentally sound, this promise of cost control has proven tougher to realize in practice than in theory.
Indeed, many organizations today are struggling to keep their public cloud costs under control. Given that cloud computing expenditures are already pushing 30% of the average company’s IT budget, according to IDG Enterprise, the numbers involved are far from trivial.
Why has it proven so challenging for many companies to get their public cloud costs under control? There are three main reasons:
1. DevOps-led cloud deployments. Most early generations of public cloud initiatives have been led by DevOps teams whose main objectives have been speed of development and quality of solution, not cost control. In the classic three-way product tradeoff, you can achieve two of the three objectives – speed, quality, and low cost – but not all three. All too often, low cost has been the odd man out. With a “better-safe-than-sorry” attitude, many DevOps teams have purchased more cloud capacity and functionality than their solutions required.
2. Complexity of public cloud offerings. As public cloud platforms such as Amazon Web Services (AWS) and Microsoft Azure have matured, their portfolios of service options have grown dramatically. For instance, AWS lists nearly 150 “products” grouped under 20 different categories (e.g., compute, storage, database, developer tools, analytics, artificial intelligence). That portfolio makes for well over 1 million potential service configurations. Add in frequent price changes for services, and selecting the best and most cost-effective public cloud options makes comparing cell-phone plans seem like child’s play.
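To illustrate why navigating that many configurations calls for programmatic comparison rather than manual shopping, here is a minimal sketch that selects the cheapest option meeting a workload's resource requirements. The instance names and hourly prices below are purely illustrative, not actual AWS rates:

```python
# Hypothetical hourly price list -- names and prices are illustrative,
# not real cloud-provider pricing.
OPTIONS = [
    {"name": "small",  "vcpus": 2,  "mem_gb": 4,  "usd_hr": 0.023},
    {"name": "medium", "vcpus": 4,  "mem_gb": 16, "usd_hr": 0.083},
    {"name": "large",  "vcpus": 8,  "mem_gb": 32, "usd_hr": 0.166},
    {"name": "xlarge", "vcpus": 16, "mem_gb": 64, "usd_hr": 0.332},
]

def cheapest_fit(vcpus_needed, mem_needed_gb, options=OPTIONS):
    """Return the lowest-cost option meeting both requirements, or None."""
    fits = [o for o in options
            if o["vcpus"] >= vcpus_needed and o["mem_gb"] >= mem_needed_gb]
    if not fits:
        return None
    return min(fits, key=lambda o: o["usd_hr"])

choice = cheapest_fit(vcpus_needed=3, mem_needed_gb=8)
print(choice["name"])  # medium
```

A real comparison would pull current prices from the providers' pricing APIs and weigh many more dimensions (region, reserved vs. on-demand terms, network and storage costs), which is exactly why the selection problem outgrows spreadsheets.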
3. Lack of analysis tools and operational visibility. In yet another affirmation of the truism that “you can’t improve what you can’t measure,” companies have found they don’t have good visibility into how much infrastructure their cloud apps actually need to deliver the required functionality and service levels. Without tools that provide such analysis, companies can’t hope to choose the best options, right-size existing public cloud deployments, or retire “deadwood” cloud apps left running as DevOps teams moved on to build new cloud solutions.
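The measurement-driven right-sizing described above can be sketched in a few lines. The utilization figures and thresholds here are hypothetical; in practice the numbers would come from a monitoring service such as Amazon CloudWatch, and the thresholds would be tuned per workload:

```python
# Illustrative average CPU utilization (%) over the past month for a few
# hypothetical instances -- real data would come from a monitoring API.
UTILIZATION = {
    "web-1":   62.0,
    "web-2":    4.5,   # near-idle: likely "deadwood"
    "batch-1": 18.0,   # oversized: candidate for a smaller instance
}

IDLE_THRESHOLD = 5.0        # below this, flag the instance for retirement
RIGHTSIZE_THRESHOLD = 25.0  # below this, flag it for downsizing

def classify(avg_cpu_pct):
    """Bucket an instance by its measured utilization."""
    if avg_cpu_pct < IDLE_THRESHOLD:
        return "retire"
    if avg_cpu_pct < RIGHTSIZE_THRESHOLD:
        return "downsize"
    return "keep"

report = {name: classify(cpu) for name, cpu in UTILIZATION.items()}
print(report)  # {'web-1': 'keep', 'web-2': 'retire', 'batch-1': 'downsize'}
```

Even this crude classification shows the point of the truism: without measured utilization data, none of these candidates would be visible at all.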
With public cloud initiatives moving beyond the early stages, it’s time for companies to get serious about optimizing and controlling their use of cloud resources and – in so doing – cutting unnecessary public cloud costs. To do this, they must leverage analytics tools and services that can provide hard data about their cloud deployments and help them navigate the jungle of public cloud service and pricing options.