People are rushing to go to cloud, and understandably so. But what is it that you really want from cloud? And can you get those benefits yourself, on premises?

To answer that, let's take a look at the advantages of public cloud. Many people think of cloud as renting, rather than buying, hardware. Public cloud does provide faster access to machines, avoiding the wait while hardware is ordered. You also don't have to commit long-term to particular types of machines, as you do when you buy. But it's not just hardware you are renting with public cloud: you're also renting IT resources from the cloud provider.

This idea of renting both hardware and human resources brings us back to the original question: what is it that you want to get from public cloud services?

A big part of what organizations seek is flexibility. You can start fresh and start fast with a wide variety of applications and projects. This cloud-native agility is particularly attractive for innovative projects that rely on heavy computation, such as the model-training phases of AI and machine learning. The ability to scale up quickly and temporarily is also attractive for seasonal traffic spikes, such as those in retail businesses.

What you get with cloud sounds great, but what do you have to give (or give up) to get it?

Costs and trade-offs

There are three major areas where you give up something in order to gain the flexibility afforded by public cloud. An obvious one is the expense of renting from a public vendor. As your needs grow, this cost may become substantial, and you have little control over it.

A second trade-off, one that people often overlook, is public multi-tenancy. With public cloud, you don't control who's on the other side of the wall, as represented in Figure 1.

Figure 1.
Multitenancy in public cloud: who's on the other side of the wall?

This lack of control over who shares the resources, and how, isn't primarily a matter of security, although security is something to consider. It's also a question of who has priority. Public cloud makes sense as a business partly because the vendor optimizes resource usage and costs by sharing resources across tenants. But those resources may be heavily over-subscribed, and because you are not necessarily first in line, that can make a big difference when you have heavy workloads. You give up control not only over costs and over whom you share resources with, but potentially also over the level of performance you get when you need it.

Perhaps the biggest trade-off is the third one: location. Without machines on premises, moving data and applications becomes a challenge. Many people using public cloud services now realize that migration to cloud is harder than it seems. Moving everything all at once is not really feasible, and although you may want to move only certain applications, most organizations have many interlocking applications, so it's hard to move just a few.

What if you could get many of the benefits of public cloud with fewer of the trade-offs and costs? You can, by bringing the cloud to you.

Secret ingredient to cloud (consider private cloud)

Private cloud provides much of the flexibility and convenience of public cloud but lets you keep control over costs, security, and how workloads are allocated. To see what it would take to build and maintain a private cloud, first recognize a key reason why public clouds work as a business model: delegation.

In this context, what does delegation mean for you, and why does it matter?
Part of the cleverness of public cloud is that providers lean heavily on an operator-tenant-user style of resource allocation and management (versus a more traditional IT-user style). In the operator-tenant approach, IT responsibility is split between the vendor (operator) and the customer's IT and administrative teams (tenant). This delegation model removes the need for highly skilled, labor-intensive IT effort on the tenant and user side of the bargain (that's part of what you're getting from the cloud vendor), leaving the more basic and customized administration to the customer. Hence the convenience and flexibility.

This delegation of responsibilities also makes sense for the cloud vendor. How can they afford to handle the greater IT burden? The answer is simple: the tasks delegated to them are common across tenants and users, which often means those tasks can be automated. That automation is the biggest part of what makes it feasible for cloud vendors to handle IT for such huge multitenancy. Delegation in this context is radically different from just outsourcing IT. With outsourcing, goals are not aligned, and the bill, typically based on hours worked, reflects that. With cloud, the customer pays for results rather than effort.

If you could adopt this optimized model of operations for your own systems, you could bring many of the benefits of cloud on premises, under your own control. Here's how.

What is needed to build a private cloud?

For private cloud to work, you need a scale-efficient system that truly supports multitenancy for diverse workloads at scale without putting a big burden on IT.
Delegating in a controlled way makes a DevOps model possible: users serve themselves (taking on the tenant/user roles of the previous model) while IT teams (the operators) handle much more standardized tasks.

Automating logistical tasks (data motion, data placement, workload balance, data replication, control over data access and usage limits, and self-healing) provides the efficiency that makes all this feasible. That efficiency pays off in convenience and agility as well as resource optimization.

You don't have to build all this automation yourself. An underlying software infrastructure for data storage and data management that is designed to handle many of these data logistics automatically can reduce IT effort and provide many self-service options for users. Internal users then pay for resource usage in terms of results, not effort, by the IT teams.

Another key aspect of a cloud-native system is the ability to take advantage of containerized applications, both for agility and to run different workloads in separately defined environments on the same shared data infrastructure. The combination of a container orchestration framework such as Kubernetes plus data infrastructure that can persist data from containerized applications is essential for building a private cloud.

Technologies that support private cloud

To address these requirements for building on-premises cloud systems at scale, HPE provides software-defined, hardware-agnostic solutions. HPE Ezmeral Data Fabric is a highly scalable data infrastructure engineered to handle data logistics automatically at the platform level.

With built-in self-healing capabilities, data fabric provides reliability at scale. Data fabric also offers an efficient management system, via data fabric volumes, that provides controlled delegation of tasks between users and system administrators.
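To make the Kubernetes persistence point concrete, here is a minimal sketch of how a containerized application typically claims durable storage from a shared data platform, using a standard Kubernetes PersistentVolumeClaim. The namespace and storage class names below are hypothetical placeholders for illustration, not actual HPE Ezmeral Data Fabric identifiers:

```yaml
# Hypothetical example: a containerized application requests persistent
# storage from the shared data platform via a PersistentVolumeClaim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: analytics-data
  namespace: team-analytics        # example tenant namespace
spec:
  accessModes:
    - ReadWriteMany                # multiple pods can share the same data
  resources:
    requests:
      storage: 100Gi
  storageClassName: datafabric-storage   # placeholder storage class name
```

A pod then mounts this claim as a volume, so the application's data outlives any individual container, which is what makes stateful workloads practical on a shared, containerized platform.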
Data fabric's open multi-API data access makes it well suited to support multitenancy of diverse workloads at scale, and data fabric serves as the data persistence layer for containerized applications orchestrated by Kubernetes. To make it easier to manage multiple Kubernetes clusters in a large system, HPE provides the HPE Ezmeral Container Platform, with data fabric as the built-in data persistence layer.

To find out more about building an on-premises cloud system or a hybrid private cloud-public cloud deployment, watch the video "HPE Container Strategy" and the on-demand webinar "Data Motion at Scale."

____________________________________

About Ellen Friedman

Ellen Friedman is a principal technologist at HPE focused on large-scale data analytics and machine learning. Before her current role at HPE, Ellen worked for seven years at MapR Technologies, where she was a committer on the Apache Drill and Apache Mahout open source projects. She is a co-author of multiple books published by O'Reilly Media, including AI & Analytics in Production, Machine Learning Logistics, and the Practical Machine Learning series.