Secret ingredient to cloud: paving the way to private cloud


People are rushing to go to cloud, and understandably so. But what is it that you really want from cloud? And can you get those benefits yourself, on premises?

To answer that, let’s take a look at the advantages of public cloud. Many people think of cloud as renting, instead of buying, hardware. Public cloud does provide faster access to machines, avoiding the wait while hardware is ordered. You also don’t have to commit long-term to particular types of machines, as you do when you buy. But it’s not just hardware you are renting with public cloud: you’re also renting IT resources from the cloud provider.

This idea of renting both hardware and human resources brings us back to the original question: what is it that you want to get from public cloud services? 

A big part of what organizations seek is flexibility. You can start fresh and start fast with a wide variety of applications and projects. This cloud-native agility is particularly attractive for innovative projects that rely on heavy computation, such as the model-training phases of AI and machine learning. The ability to scale up quickly and temporarily is also attractive for seasonal traffic spikes, such as those in retail businesses.

What you get with cloud sounds great, but what do you have to give (or give up) to get it?

Costs and tradeoffs

There are three major areas where you give up something in order to gain the flexibility afforded by public cloud. An obvious one is the expense of renting from a public vendor: as your needs grow, this cost can become substantial, and it is one over which you have little control.

But a second trade-off that people often overlook is the issue of public multi-tenancy. With public cloud you don’t control who’s on the other side of the wall, as represented in this figure.


Figure 1. Multitenancy in public cloud: who’s on the other side of the wall?

This lack of control over who shares the resources, and how, isn’t primarily a matter of security, although that is something to consider. It’s also a question of who has priority. Public cloud makes sense as a business partly because the vendor optimizes resource usage and costs through sharing. But resources may be heavily oversubscribed, and because you are not necessarily first in line, this can make a big difference when you have heavy workloads. You give up control not only over costs and over whom you share resources with, but potentially also over the level of performance you get when you need it.

Perhaps the biggest trade-off is the third one: location. Without machines on premises, moving data and applications becomes a challenge. Many people using public cloud services now realize that migration to cloud is harder than it seems. Moving everything at once is not really feasible, and although you may want to move only certain applications, most organizations have many interlocking applications, so it’s hard to move just a few.

What if you could get many of the benefits of public cloud with fewer of the tradeoffs and costs? You can do this by bringing the cloud to you.

Secret ingredient to cloud (consider private cloud)

Private cloud provides much of the flexibility and convenience of public cloud but lets you keep control over costs, security, and how workloads are allocated. To consider what it would take to build and maintain a private cloud, first recognize a key reason why public clouds work as a business model: delegation.

In this context, what does delegation mean for you, and why does it matter? Part of the cleverness of public cloud is that providers lean heavily on an operator-tenant-user style of resource allocation and management, rather than a more traditional IT-user style. In the operator-tenant approach, IT responsibility is split between the vendor (operator) and the customer’s IT and administrative teams (tenant). This delegation model removes the need for highly skilled, intensive IT effort on the tenant and user side of the bargain (that’s part of what you’re getting from the cloud vendor), leaving the more basic, customized administration to the customer. Hence the benefits of convenience and flexibility.

This delegation of responsibilities also makes sense for the cloud vendor. How can they afford to handle the greater IT burden? The answer is simple: tasks delegated to them are common across tenants and users, which often means those tasks can be automated. That is the biggest part of what makes it feasible for cloud vendors to handle IT for such huge multitenancy. Delegation in this context is radically different from simply outsourcing IT. With outsourcing, goals are not aligned, and the bill, typically based on hours worked, reflects that. With cloud, the customer pays for results rather than effort.


If you could adopt this optimized model of operations for your systems, you could bring many benefits of cloud on premises, under your own control. Here’s how.

What is needed to build a private cloud?

For private cloud to work, you need a scale-efficient system that truly supports multitenancy for diverse workloads at scale without putting a big burden on IT. Delegating in a controlled way makes a DevOps model possible: users serve themselves (the tenant/user roles in the previous model) while IT teams (the operators) handle much more standardized tasks.
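To make controlled delegation concrete, here is a minimal sketch of how an operator-tenant split can be expressed on a Kubernetes-based platform using role-based access control (RBAC). The namespace and group names are illustrative, not taken from any particular product: tenants get self-service rights inside their own namespace, while operators retain cluster-scoped control (nodes, quotas, storage classes) outside it.

```yaml
# Role: what tenant users may do inside their own namespace.
# "team-analytics" and "analytics-team" are hypothetical names for illustration.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-analytics
  name: tenant-self-service
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "deployments", "jobs", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# RoleBinding: grants the role to the tenant's identity-provider group.
# Operators keep separate, cluster-scoped roles outside this namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-analytics
  name: tenant-self-service-binding
subjects:
  - kind: Group
    name: analytics-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-self-service
  apiGroup: rbac.authorization.k8s.io
```

With a binding like this in place, tenant users can deploy and scale their own workloads without filing tickets, yet cannot touch other tenants’ namespaces or cluster-wide settings, which stay with the operator.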

Automating logistical tasks (data motion, data placement, workload balance, data replication, control over data access and usage limits, and self-healing systems) provides the efficiency that makes all this feasible. This efficiency really pays off in convenience and agility as well as resource optimization.

You don’t have to build all this automation yourself. An underlying software infrastructure for data storage and management that handles many of these data-logistics tasks automatically can reduce IT effort and provide many self-service options for users. Internal users then pay for resource usage in terms of results, not the effort expended by IT teams.

Another key aspect of a cloud-native system is the ability to take advantage of containerization of applications, both for agility and to run different workloads in separately defined environments on the same shared data infrastructure. The combination of a container orchestration framework such as Kubernetes plus data infrastructure that can persist data from containerized applications is essential for building a private cloud.
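As a sketch of what persisting data from a containerized application looks like in practice, a Kubernetes PersistentVolumeClaim lets a workload request durable, shared storage from the underlying data infrastructure. The storage class name below is a placeholder; the actual class depends on the CSI driver your data layer exposes.

```yaml
# PersistentVolumeClaim: a containerized app's request for durable storage.
# "example-datafabric" is a placeholder storage class, not a real product name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: analytics-data
spec:
  accessModes:
    - ReadWriteMany            # shared access across pods and workloads
  storageClassName: example-datafabric
  resources:
    requests:
      storage: 100Gi
---
# A pod that mounts the claim, so data under /data outlives the container.
apiVersion: v1
kind: Pod
metadata:
  name: analytics-job
spec:
  containers:
    - name: worker
      image: python:3.11-slim
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: analytics-data
```

Applied with `kubectl apply -f`, the claim binds to a volume provisioned by the data layer, and any pod that references it reads and writes the same shared data, which is exactly the separation of compute (containers) from state (the data infrastructure) that a private cloud needs.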

Technologies that support private cloud

To address these requirements for building on-premises cloud systems at scale, HPE provides software-defined, hardware-agnostic solutions. HPE Ezmeral Data Fabric is a highly scalable data infrastructure engineered to handle data logistics automatically at the platform level.

With built-in self-healing capabilities, data fabric provides reliability at scale. Data fabric also offers an efficient management system, via data fabric volumes, that provides controlled delegation of tasks between users and system administrators. Its open, multi-API data access makes it well suited to multitenancy of diverse workloads at scale, and data fabric serves as the data persistence layer for containerized applications orchestrated by Kubernetes. To make it easier to manage multiple Kubernetes clusters in a large system, HPE provides the HPE Ezmeral Container Platform, with data fabric as the built-in data persistence layer.

To find out more about building an on-premises cloud system or a hybrid private cloud-public cloud deployment, watch the video “HPE Container Strategy” and the on-demand webinar “Data Motion at Scale.”

____________________________________

About Ellen Friedman

Ellen Friedman is a principal technologist at HPE focused on large-scale data analytics and machine learning. Ellen worked at MapR Technologies for seven years prior to her current role at HPE, where she was a committer for the Apache Drill and Apache Mahout open source projects. She is a co-author of multiple books published by O’Reilly Media, including AI & Analytics in Production, Machine Learning Logistics, and the Practical Machine Learning series.

Copyright © 2021 IDG Communications, Inc.