Reaping the Benefits of (Public) Cloud … but at What Co$t?

BrandPost By Adnan Khaleel
Jan 21, 2020

Is public cloud too good to be true? The decision to move everything into the public realm can have far-reaching consequences beyond just the cost.

Credit: Dell Technologies

Yes, you read that right, I did spell cost with a $ sign. Those of you who have wholeheartedly migrated your most precious workloads and data into the cloud should be paying acute attention to cost. I’ll explain why. But first, let’s take a step back to examine two compelling arguments for using public clouds.

  • Convenience – From the get-go, public clouds have been presented as the most convenient option. First, you don’t need a full complement of IT staff to set up a new system, even if it’s a small proof of concept. Second, a data analyst can quickly spin up an instance in Amazon Web Services with a few clicks and, voilà, you’re in business. This is by far THE most compelling argument for using a public cloud. The speed of getting started, coupled with the availability of a wide variety of application programming interfaces (APIs) and microservices, can dramatically reduce the time (and effort) necessary to start a new IT project. And this, in turn, has very tangible cost and productivity benefits.
  • Capital outlay (CapEx vs. OpEx) – Along with the agility of DevOps, the public cloud also introduced the concept of pay-as-you-go. In today’s budget-constrained business environment, this can be a life-saver. Public clouds allow for easy expansion as you need it and, with a few clicks, you can grow or shrink your capacity based on demand. There is definitely something to be said for having this convenience, and the enabling technology has clearly come a long way to make it possible. Plus, from a budgeting perspective, it is clearly more desirable to have this sort of pay-per-need flexibility without being saddled with a capital outlay (and its concomitant expenses in infrastructure and staff). (The quick sketch after this list puts rough numbers on that trade-off.)
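
To make that trade-off concrete, here is a back-of-the-envelope sketch in Python that compares cumulative pay-as-you-go spend against an up-front purchase plus ongoing operating costs. Every figure in it is a made-up placeholder, not vendor pricing – plug in your own numbers.

```python
# Illustrative only: compare cumulative pay-as-you-go (OpEx) spend against a
# one-time capital purchase (CapEx) plus ongoing operating costs.
# All figures below are hypothetical placeholders, not vendor pricing.

monthly_cloud_cost = 12_000        # assumed monthly public cloud bill ($)
onprem_capex = 250_000             # assumed up-front hardware/software outlay ($)
onprem_monthly_opex = 3_000        # assumed power, cooling, staff, support ($/month)

for month in range(1, 61):         # look out over five years
    cloud_total = monthly_cloud_cost * month
    onprem_total = onprem_capex + onprem_monthly_opex * month
    if cloud_total >= onprem_total:
        print(f"Break-even at month {month}: "
              f"cloud ${cloud_total:,} vs on-prem ${onprem_total:,}")
        break
else:
    print("On-prem does not break even within five years at these assumptions.")
```

With these particular (hypothetical) inputs, the capital purchase pays for itself a little over two years in; with different utilization or pricing, it may never do so – which is exactly why the flexibility argument carries so much weight.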

For most decision makers, these two arguments were likely at the top of their list as they decided to pull the plug on their on-premises (on-prem) infrastructure and started writing checks to their favorite public cloud provider, content in their decision that they were increasing their developer productivity while enhancing shareholder value. How often does a manager get to make such a claim?

Far-reaching consequences

Now, if you’ve read this far, I’m assuming you’re beginning to realize that public cloud is too good to be true. Unfortunately, the myopic decision to move everything into the public realm can have far-reaching consequences beyond just the cost. Let’s face it, all along, the real value of the public cloud has never been about cost savings, although public cloud vendors often entice organizations with this argument. The real value has been in speed and flexibility, from both a technical and an operational perspective. However, this flexibility does come at a cost – quite literally.

Even in the early days, security was often cited as the main drawback of the cloud, and there were several instances back then (and there are even today) where maturing cloud technology left the door open to data breaches. Although great strides have been made in data isolation and in compliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the International Traffic in Arms Regulations (ITAR), many companies are still quite worried about handing over their most precious asset to the control of a public cloud provider.

And then there’s data … lots of it … and it’s coming in faster than ever.

Despite the business benefits, public clouds have drawbacks from a technical perspective, especially since we’re now living in a very different world than the one in which the cloud first came to prominence. Sensors, the Internet of Things (IoT), fifth-generation wireless technology (5G) and computing at the edge barely existed a few years ago. Today, they are topics of everyday conversation, as companies try to leverage these latest technologies to become more competitive.

The exponential growth in data has been anticipated for several years now, but there’s often a lag before the reality of the situation kicks in. Moving all your IT operations into a public cloud might have seemed like the smart thing to do five years ago, but that was before these new edge technologies began producing exponentially more data than many of us expected. And, as if bandwidth and storage weren’t already expensive enough, the rate at which sensor data is being generated today may well be the straw that breaks the camel’s back.

The volume of data is closely related to the next issue – data processing. It’s one thing to store the data, but the real value in all that data has to be unlocked with analytics. And here’s where you run into a fundamental physical limit – the time and cost of moving data across a network – which is why it makes sense to keep your analytics compute capability close to where your data resides.

Most likely, you’re using the cloud for this analytics stage as well, although some companies use the public cloud predominantly for storage while housing their valuable proprietary algorithms in on-prem systems. These users quickly realized that not only do they pay a price for moving data out of the cloud, it also takes time to move such large quantities of data. (Those familiar with the concept of locating compute close to the data refer to this as “data gravity.”) Ultimately, where the data lives determines where you do pretty much everything related to that data. And the edge is where more and more data is being generated.
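
To get a feel for data gravity, here’s a rough Python sketch of what it might take to pull a large dataset back out of a public cloud. The dataset size, egress price and link speed are assumptions for illustration only, not any provider’s actual rates.

```python
# A rough sense of "data gravity": how long and how much it might take to pull
# a large dataset out of a public cloud. Dataset size, egress price and link
# speed below are assumptions for illustration, not any provider's actual rates.

dataset_tb = 500                    # assumed dataset size in terabytes
egress_per_gb = 0.05                # assumed egress fee, dollars per GB
link_gbps = 10                      # assumed sustained network throughput, Gbit/s

dataset_gb = dataset_tb * 1_000
egress_cost = dataset_gb * egress_per_gb

# Transfer time: convert GB to gigabits, divide by link speed, then to days.
transfer_seconds = (dataset_gb * 8) / link_gbps
transfer_days = transfer_seconds / 86_400

print(f"Egress cost:   ${egress_cost:,.0f}")
print(f"Transfer time: {transfer_days:.1f} days at a sustained {link_gbps} Gbit/s")
```

Even under these generous assumptions, the move takes days and a non-trivial check – and in practice, sustained throughput is rarely the full line rate.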

Although network capacities are increasing with 5G, even that can’t keep up with the rate at which data is, and will continue to be, generated. You should also bear in mind that network technology takes longer to improve, because of the infrastructure cost of upgrading an entire network. Think about existing 3G infrastructure and how 5G will likely take another couple of years to see widespread deployment.

We made a mistake. What now?

Your IT managers are looking at the monthly bills from the public cloud and realizing they’ve made a colossal mistake. What now? For one, you can take comfort in the fact that they’re not alone, but that in and of itself isn’t much help. Your options are:

  1. Accept the fact that the cloud is going to be expensive OR
  2. Bite the bullet and take back control of your most valuable asset, i.e. your data.

It will probably cost you a pretty penny to get your data back – in the range of a few pennies per gigabyte. (That’s how cloud service providers get you.) However, if you modernize your on-prem system to look very similar to what the cloud providers run, you could save over 60 percent of your cloud spend. The choice should be obvious.
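
If you want to sanity-check that claim for your own environment, a quick payback calculation is enough. The sketch below uses the rough figures from the text (a per-gigabyte egress charge to pull the data back, then roughly 60 percent lower ongoing spend); the monthly bill, data volume and egress rate are hypothetical.

```python
# Back-of-the-envelope payback for repatriation, using the rough figures in the
# text: a per-gigabyte egress charge to pull the data back, then roughly 60%
# lower ongoing spend on a modernized on-prem stack. Inputs are assumptions.

monthly_cloud_spend = 100_000      # assumed current monthly cloud bill ($)
savings_rate = 0.60                # ~60% savings figure cited in the text
data_gb = 2_000_000                # assumed 2 PB of data to repatriate
egress_per_gb = 0.05               # assumed egress fee ($/GB)

one_time_egress = data_gb * egress_per_gb
monthly_savings = monthly_cloud_spend * savings_rate
payback_months = one_time_egress / monthly_savings

print(f"One-time egress cost: ${one_time_egress:,.0f}")
print(f"Monthly savings:      ${monthly_savings:,.0f}")
print(f"Payback period:       {payback_months:.1f} months")
```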

“Data repatriation,” as it’s called, is becoming more and more common. In fact, according to a 2018 IDC report, 80 percent of enterprises that had migrated to the public cloud began a secondary migration (back) within two years, after realizing that the cloud was not all it was cracked up to be.

One data point I regularly revisit is that of a cloud-native startup looking to minimize costs. They approached Dell Technologies purely as an exercise, with no real intention of moving off the cloud – until they looked at the cost savings. In this particular case, the cloud made perfect sense for an early-stage company that needed agility to get off the ground. However, little did they realize that, once they became a heavyweight cloud user, the cloud was costing them 3X as much as a dedicated on-prem system with the same capacity!

Here are some pointers Dell Technologies uses to help our customers decide whether the public cloud is a good candidate:

  1. If existing utilization is in excess of 40 percent, then moving to the cloud is a bad idea.
  2. Consider data gravity. Do you need to process your data? If data processing is constantly taking place, you’re better off doing it on-prem. (See the utilization statement above.)
  3. Steady workloads are typically less expensive to run on-prem. However, in instances where there might be an unforeseen spike in demand, a hybrid setup can be very useful. In this case, your on-prem system is sized to meet your steady-state needs, and any excess demand goes to the cloud. This generally works very well, provided you’ve sized your needs correctly and rarely need to burst to the cloud.
  4. For workloads that require low latencies, or that need to move large volumes of data out of the cloud, network delays and egress fees can quickly dominate the overall cost.
  5. Consider compliance and security. I know certain financial institutions won’t ever move their sensitive data to a datacenter that is not their own, let alone a public cloud. That said, I must admit that cloud providers have made massive strides in improving security and compliance, to the point where they’re probably as secure as, if not more secure than, your own datacenter. (The sketch after this list turns a few of these rules of thumb into code.)
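
For illustration, here’s a rough sketch that expresses a few of these rules of thumb in Python. The thresholds, field names and example workloads are my own assumptions for the example, not formal Dell Technologies guidance.

```python
# A rough, illustrative decision helper based on the rules of thumb above.
# Thresholds, field names and sample workloads are assumptions for the sketch.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    avg_utilization: float      # fraction of provisioned capacity in steady use
    steady: bool                # True if demand is mostly flat, False if spiky
    monthly_egress_gb: float    # data regularly pulled back out of the cloud
    latency_sensitive: bool     # needs to sit close to users or data sources

def recommend(w: Workload) -> str:
    if w.avg_utilization > 0.40:
        return "on-prem"                   # rule 1: high utilization favors on-prem
    if w.latency_sensitive or w.monthly_egress_gb > 100_000:
        return "on-prem"                   # rule 4: latency- or egress-heavy workloads
    if w.steady:
        return "on-prem or hybrid"         # rule 3: steady workloads are cheaper on-prem
    return "public cloud or hybrid burst"  # spiky, low-utilization work suits the cloud

jobs = [
    Workload("nightly analytics", 0.65, True, 250_000, False),
    Workload("seasonal web front end", 0.15, False, 5_000, False),
]
for job in jobs:
    print(f"{job.name}: {recommend(job)}")
```

Real decisions obviously weigh more than four inputs – compliance, staffing and contract terms among them – but even a crude model like this forces you to put numbers on utilization and egress before you sign.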

In summary, the cloud does have its benefits and, as with any tool, must be used correctly. The more you understand your own workloads and how they are evolving, the better you will be able to judge when a public cloud might be a suitable option. If, after reading this, you’re still wondering whether you need to repatriate your data, reach out to the specialists at Dell Technologies, and we’ll be happy to help you decide.

To learn more