BrandPosts are written and edited by members of our sponsor community. BrandPosts create an opportunity for an individual sponsor to provide insight and commentary from their point-of-view directly to our audience. The editorial team does not participate in the writing or editing of BrandPosts.
By Kumar Sreekanti
According to a recent IDC survey, the enterprise market is roughly evenly split between running containers on-premises (51.55%) and in the public cloud (48.45%). The same report found a similar split for on-premises containers: respondents reported that 51.79% of their containers run on virtual machines (VMs), while 48.21% run on bare metal servers.
Enterprise organizations want deployment choices for containerized apps
I share this data because it demonstrates that enterprise organizations want a choice of deployment models—whether it’s to support a specific workload type, to accommodate staff skills, or for economic reasons. They want the flexibility to run containerized applications on bare metal or VMs, in their data centers, on any public cloud, or at the edge.
Enterprise adoption of containers is accelerating because of this inherent flexibility, agility, and portability. But enterprise organizations also want to be more efficient and reduce the total cost of ownership (TCO) for their applications. And they want optimal performance, especially for data-intensive applications like AI and analytics.
That’s why the industry is seeing increased interest in bare metal containers. By removing the hypervisor and hardware virtualization layer, enterprises can eliminate unnecessary overhead, avoid vendor lock-in, and reduce “vTax” licensing fees.
Pulling a Ferrari behind a horse and buggy?
Virtualization and containerization will likely co-exist for some time. I’ve seen hardware virtualization mature and container adoption accelerate over the past several years. It may be an evolution rather than a revolution.
Some vendors are merging these technologies, embedding Kubernetes with the hypervisor—rationalizing that it allows customers to preserve their virtualization investment, tools, and training. But isn’t it also a strategy for a vendor to stay relevant and maintain a grip on expensive license fees? And doesn’t a hypervisor-centric strategy limit choice while adding cost and complexity for customers?
I liken running containers on VMs to pulling a Ferrari behind a horse and buggy. That’s why I believe a new approach is needed: one that brings agility and speed to accelerate application development powered by Kubernetes with enterprise-grade security and scalability. It’s also one that recognizes the need to transition from virtualization to containerization — to improve efficiency and reduce TCO. It’s time for a container-first approach.
A new approach: Flexibility and choice
HPE’s approach to containerization focuses on flexibility, providing our customers with a choice of deployment models. In early March 2020, we announced that the new HPE Container Platform is generally available. It’s an enterprise-grade container platform designed to deploy both cloud-native applications and monolithic applications with persistent data storage, using pure open source Kubernetes.
The HPE Container Platform is a unified solution based on proven software innovations from HPE’s acquisitions of BlueData (providing the control plane for container management) and MapR (providing a unified data fabric for persistent container storage). It delivers the ability to deploy and manage multiple open source Kubernetes clusters at scale in a secure, multi-tenant environment.
With the HPE Container Platform, our customers have the freedom to deploy containerized applications on any infrastructure (whether VM or bare metal) and in any location (from edge to core to cloud). By accommodating different deployment models, the platform supports a hybrid cloud or multi-cloud strategy—while ensuring enterprise-class security and reliability.
HPE is also backing up our software with new professional services to ensure faster time-to-value, as well as several new reference configurations for a wide range of use cases—including data-intensive application workloads such as AI, machine learning, deep learning, data analytics, edge computing, and IoT. The HPE Container Platform was designed to meet the unique requirements of an increasingly edge-centric, cloud-enabled, and data-driven world.
HPE’s container-first approach
The feedback from the market is clear: enterprises want flexibility and choice in deployment to support their hybrid cloud and multi-cloud strategies. HPE is delivering that, with a container-first approach and the HPE Container Platform.
And HPE is just getting started. We are committed to continued software innovation to help customers drive greater business innovation, modernize app development, reduce costs, and accelerate digital transformation. To learn more about the HPE Container Platform, go to www.hpe.com/info/container-platform.
Source: Container Infrastructure Software Market Assessment, IDC Special Study, March 2020.
About Kumar Sreekanti
Kumar Sreekanti is Senior VP and CTO of Hybrid IT at Hewlett Packard Enterprise (HPE). Kumar was the co-founder and CEO of BlueData, a leading provider of container-based software solutions acquired by HPE. Prior to BlueData, Kumar was vice president of R&D at VMware, where he was responsible for new technology innovations such as VSAN, Virtual Volumes, and Virtual Flash.