The notion that all workloads should run in hyperscale public cloud data centers has a certain appeal, but it is overly simplistic. An examination of today’s digital leaders shows that their infrastructure strategies are substantially more complex. They typically involve a hybrid mix of legacy environments, private and public clouds, colocation facilities and managed services; more than one public cloud at the infrastructure, platform or application layer; and a spectrum of computing resources ranging from centralized hyperscale cloud data centers to their polar opposite, the fog: a highly dispersed edge driven by the Internet of Things and the need to enhance user experience.
Welcome to the brave new world of the hybrid multi-cloud fog. Although it may sound like a winning phrase at Buzzword Bingo, it’s a serious approach to balancing the complex requirements of today’s applications.
Most companies recognize that a mix of their own infrastructure and that of a cloud provider can offer a range of benefits. The exact approach, architecture and benefits vary across firms and applications. For example, GE has committed to going “all in” on the cloud, but it also has legacy applications that are prohibitively expensive to rewrite for cloud migration. Netflix is well known for using the public cloud, but primarily for functions such as transcoding, customer data and its recommendation engines. It also owns infrastructure, called Open Connect appliances, that it deploys in colocation / interconnection facilities to stream content to users. A variety of companies with spiky or unpredictable demand use a mix as well. Another approach: Dropbox had stored its customers’ data at a public cloud provider but kept metadata in its own facilities; it then migrated everything to its own facilities in the U.S., while still using a public cloud provider in Europe.
So every imaginable hybrid is out there: hybrids of legacy and cloud-ready; back end in the enterprise data center and front-end content delivery via the cloud; back end in the cloud and content delivery via owned equipment; hybrids of data and metadata; and a single application architecture that can run in the enterprise data center, the cloud, or both simultaneously. Solutions such as Microsoft Azure and IBM Cloud are oriented to run the same stack and services both on owned equipment and in the cloud. This broad variety of hybrids can offer a similarly broad set of benefits: operations cost reduction, capital expense reduction, migration cost minimization, network backbone cost optimization coupled with minimized latency to end users or things, and so forth.
Most companies also take a “multi-cloud” approach; that is, they subscribe to multiple cloud providers at levels ranging from infrastructure (or even bare metal) as a service to software as a service. This approach is most valuable when the providers are integrated in some way: backing up data from one cloud to another, using one for risk analysis integrated with another for mobile app enablement, or leveraging microservices such as the Google geocoding API. It is less valuable when the providers are merely a disconnected collection, each independently focused on areas such as email, presentations, sales automation, CRM or billing, or, worse yet, when multiple providers offering similar capabilities are bought by different divisions without even minimal coordination.
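The cross-cloud backup case above can be sketched in a few lines. This is a minimal illustration, not any provider's actual API: the `ObjectStore` class and its in-memory storage are hypothetical stand-ins for real object-storage SDK clients, and the `replicate` function shows the basic "copy whatever the backup is missing" logic.

```python
# Hypothetical common interface over a cloud provider's object storage.
# Real implementations would wrap a provider SDK; here, an in-memory dict
# stands in so the sketch is self-contained.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

    def keys(self):
        return set(self._objects)


def replicate(source: ObjectStore, target: ObjectStore) -> int:
    """Copy every object the target is missing; return the number copied."""
    missing = source.keys() - target.keys()
    for key in missing:
        target.put(key, source.get(key))
    return len(missing)


# One provider acts as primary, another as backup.
primary, backup = ObjectStore(), ObjectStore()
primary.put("invoices/2017-01.csv", b"...")
primary.put("invoices/2017-02.csv", b"...")
copied = replicate(primary, backup)
```

Run periodically, a loop like this keeps the second provider's store a mirror of the first; a second invocation copies nothing because the backup is already current.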
One opportunity that is increasingly realistic, thanks to rapidly maturing container and container orchestration technologies, is the ability to move an application or its data from one cloud to another, or from an enterprise server to the public cloud. This means that, at least conceptually, a company can pick the cloud provider to run a given workload based on the best availability, performance or price that day or that hour. Prices can vary across cloud providers due to seemingly never-ending rounds of price cuts; due to how application execution drives cost, say, where increasing data intensity drives increasing data transfer costs; or due to dynamic pricing through mechanisms such as spot instances.
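The hour-by-hour selection described above reduces to a simple decision rule once a workload is portable. The sketch below assumes a containerized application that can run anywhere; the provider names and per-hour prices are illustrative, not real quotes.

```python
def pick_provider(hourly_prices, healthy):
    """Return the cheapest provider that is currently reported healthy."""
    candidates = {p: cost for p, cost in hourly_prices.items() if p in healthy}
    if not candidates:
        raise RuntimeError("no healthy provider available")
    return min(candidates, key=candidates.get)


# Illustrative spot prices in $/instance-hour for three hypothetical clouds.
prices = {"cloud_a": 0.12, "cloud_b": 0.09, "cloud_c": 0.10}

# cloud_b is cheapest, but suppose its health check is failing this hour:
choice = pick_provider(prices, healthy={"cloud_a", "cloud_c"})
```

In practice the price feed would come from each provider's pricing or spot-market API and the health set from monitoring, but the core trade-off, cheapest available wins, is exactly this one-liner.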
Another approach is not just to select one or more of the available cloud providers at a given time, but to use several together in an integrated fashion. For example, Netflix is known for using AWS, but backs its data up in the Google Cloud. This type of provider diversity can enhance reliability beyond mere geographic diversity. Both types are important. Geographic diversity through regions and availability zones can protect against a facility or localized outage, due to a fire, flood, blackout, or the like. Provider diversity can protect against systemic problems which virtually all cloud providers have suffered at one point or another, such as the Elastic Load Balancer issue identified as the root cause of the 2012 Netflix Christmas Eve outage.
Most public cloud providers have been focused on building dozens or even hundreds of hyperscale facilities spanning the globe, for reasons of latency reduction, participating in local markets, and meeting data sovereignty regulations. This represents billions of dollars of capital investment.
However, a complementary strategy is to deploy thousands or tens of thousands of dispersed computing nodes, for reasons including efficiently connecting the Internet of Things to cloud services; ensuring that a retail store, branch office, home or mobile location can continue to function if connectivity is lost; reducing bandwidth requirements; offloading centralized resources; and improving response time. This is happening, whether we realize it or not. Content delivery networks are one example. Each car manufactured today is a mobile fog element containing dozens of microprocessors that are locally coordinated yet tied to cloud services such as entertainment, concierge, navigation and accident detection. Each home increasingly contains smart light bulbs, door locks and security cameras. Stores have point-of-sale devices, banks have ATMs, hospitals have connected radiological equipment, and so on. Some telcos are considering deploying micro-cell computing nodes, and some hardware providers, such as Vapor IO, are helping to enable them.
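The "keep functioning when connectivity is lost" property usually comes down to store-and-forward: the fog node buffers data locally and drains the buffer to the cloud when the uplink returns. The sketch below illustrates that pattern; the `uplink` callable is a hypothetical stand-in for a real cloud ingestion API.

```python
from collections import deque


class FogNode:
    """A dispersed node (store, branch, vehicle) that tolerates outages."""

    def __init__(self, uplink):
        self.uplink = uplink   # callable(reading); raises ConnectionError when offline
        self.buffer = deque()  # local queue preserves data across outages

    def record(self, reading):
        self.buffer.append(reading)
        self.flush()

    def flush(self):
        """Drain the buffer in order; stop (and retry later) if the uplink fails."""
        while self.buffer:
            try:
                self.uplink(self.buffer[0])
            except ConnectionError:
                return  # offline: keep the data locally for now
            self.buffer.popleft()


# Simulate an outage followed by restored connectivity.
sent = []
online = {"up": False}

def uplink(reading):
    if not online["up"]:
        raise ConnectionError
    sent.append(reading)

node = FogNode(uplink)
node.record({"sensor": "door", "open": True})  # offline: buffered locally
online["up"] = True
node.flush()                                   # connectivity restored: drained
```

The local device keeps operating on its own buffer throughout; the cloud simply catches up once the link is back, which is also what reduces bandwidth and offloads centralized resources.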
The hybrid multi-cloud fog
These individual concepts are not mutually exclusive alternatives but parts of a larger picture. Emerging architectures combine an integrated mix (the “hybrid”) of owned enterprise resources and pay-per-use, on-demand public cloud services from multiple providers (the “multi-cloud”), supporting a strategy that spans centralized, distributed, and dispersed (the “fog”) processing and storage. By dynamically balancing these elements, enterprises can reduce cost, maximize agility, accelerate time to market or time to value, improve the user experience, enhance reliability, and so forth.
According to Jeremy King, CTO and SVP of Walmart Global eCommerce and head of WalmartLabs, Walmart is pursuing such a direction right now. It uses OpenStack for its own data centers and leverages its OneOps acquisition to integrate with (the hybrid) and burst to multiple cloud providers (the multi-cloud), such as the Rackspace Cloud and Microsoft Azure, with the option of additional clouds in the future. It is also considering putting OpenStack-based cloud resources in each store (the fog).
In short, enterprises may want to consider the right balance of centralized and dispersed resources across their own private facilities and multiple cloud service providers. This is the hybrid multi-cloud fog.
This article is published as part of the IDG Contributor Network.