In the age of the multi-cloud, in which enterprises commonly use a mix of public cloud, private cloud, and on-premises infrastructure to manage services, applications, and workloads, it shouldn't be surprising that major public cloud platforms are moving into private data centers to expand their reach and deepen their customer relationships. In this article I'll explore some of the ways the public cloud is expanding into private data centers, further blurring the lines between the various types of infrastructure within enterprise IT architecture.

Amazon Web Services (AWS) launched its Snowball Edge appliance in 2015, providing on-premises storage and limited compute capabilities for enterprise cloud users. The original use case for the Edge devices – and the AWS Snowmobile truck – was to help enterprises transfer large amounts of data into the AWS cloud.

In July 2018, AWS announced EC2 support for its Snowball Edge device, bringing the real power of the AWS cloud to the edge. Customers can now run virtualized applications in local EC2 instances anywhere that has electricity, even without an internet connection. Snowball Edge runs AWS Lambda for serverless computing, and AWS Greengrass connects the device to the AWS cloud for data processing, including machine learning. Along with new support for Amazon S3 data storage, these enhancements have turned the Snowball Edge from a simple data vehicle into a powerful computing node.

Just a few days after the AWS announcement, Google Cloud Platform (GCP) announced it would extend Google Kubernetes Engine (GKE) – its core service for managing containers – to edge devices, supporting on-premises container deployment.
As long-time Googler Urs Hölzle stated about the launch, Google intends to end "the false dichotomy between on-premise and the cloud." GKE On-Prem will appear as another availability zone in the Google Cloud dashboard, bringing consistency to the management and monitoring of infrastructure across both on-premises environments and the public cloud.

What is a reverse hybrid cloud?

Reverse hybrid cloud is a phrase describing an architecture in which an enterprise operates public cloud software and services in its own private data center. In this situation the enterprise typically also uses traditional public cloud services and maintains the on-premises infrastructure and capabilities to manage hardware in addition to software.

Traditionally, "hybrid cloud" has referred to using two or more discrete clouds that work together through a common or proprietary technology – often a hybrid integration platform. Legacy enterprises just starting to use the cloud, and digital-native companies moving away from the cloud for a variety of reasons, often find themselves with a hybrid cloud. The movement of GCP and AWS into private data centers, though, is a novel play by the public cloud giants that can help enterprises on their journey to a right-fit, transformed IT.

What are the benefits of a reverse hybrid cloud?

Here are a few of the ways enterprises can benefit from a reverse hybrid cloud architecture, bringing the power of the public cloud in-house.

Instead of forcing teams to work on and maintain multiple environments, this setup encourages enterprises to standardize on technologies that can be deployed in any location.
With Google Kubernetes Engine, it doesn't matter whether workloads are processed on-premises or on Google's machines – the interface, the applications, the technology, and the requisite skills remain the same.

It offers greater control and a wider range of options for large enterprises that want to take advantage of containers and microservices but, for any number of reasons, need to operate core hardware and infrastructure internally.

Using common technologies across environments, such as by standardizing on Kubernetes, gives IT decision makers improved policy enforcement and compliance. Running cloud-native technology on-premises enables this type of standardization.

It gives IT organizations the opportunity to test and develop cloud-based services completely internally before pushing them out to the public cloud.

For existing cloud-based workloads that require minimal latency, bringing the cloud on-premises can improve performance and potentially save costs associated with WANs.

For enormous data transfers, it can save money to load data onto edge devices and ship the devices to a cloud provider for import into the cloud.

Running EC2 on the edge lets enterprises manipulate their data before sending it to AWS.

As enterprises find the right balance of private and public cloud – on-premises and not – for their workloads, data, and applications, it's time to consider bringing the public cloud to the edge. Google and Amazon are keenly aware that IT leaders are moving to the cloud more slowly than originally expected, and by bringing the cloud to private data centers, the public cloud leaders may have found a new way to accelerate adoption of their platforms.

As Google made clear in July, it doesn't matter where the cloud is – teams simply need access to the right tools and methods to operate efficiently.
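To make the "same interface everywhere" idea concrete, here is a minimal Python sketch of how an application might target either the public EC2 API or a Snowball Edge device's local EC2-compatible endpoint. The device IP, port, AMI ID, and instance type below are illustrative assumptions, not values from this article; in practice the only thing that changes between cloud and edge is the endpoint configuration, while the application code stays the same.

```python
def ec2_endpoint(on_premises: bool) -> str:
    """Return the EC2 API endpoint URL to use.

    On a Snowball Edge, the EC2-compatible API is served from the
    device's local address (a placeholder IP and port here); in the
    public cloud it is the normal regional endpoint.
    """
    if on_premises:
        return "https://192.0.2.10:8243"  # placeholder device IP:port
    return "https://ec2.us-east-1.amazonaws.com"


# The calling code is identical in both environments; only the endpoint
# differs. With boto3 this would look roughly like (not run here, since
# it requires credentials and a reachable endpoint):
#
#   client = boto3.client("ec2", endpoint_url=ec2_endpoint(on_premises=True))
#   client.run_instances(ImageId="ami-XXXXXXXX",   # hypothetical AMI
#                        MinCount=1, MaxCount=1,
#                        InstanceType="sbe1.small")
```

This is the sense in which the edge device becomes "another availability zone": teams keep one codebase and one skill set, and treat the on-premises hardware as just another place the same API is available.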