6 Considerations When Adopting a Kubernetes-Based Container Platform for the Enterprise

BrandPost By Anant Chintamaneni
Dec 22, 2020


In today’s ultra-competitive digital world, enterprises expect software developers to rapidly build and deploy applications that will help grow their business. To meet this need, developers are turning to container technology. That’s because containers allow them to develop, deploy, and manage software faster and more efficiently at an unprecedented scale.

With open source innovations, it’s now easier than ever for developers to get started with containers. Many are readily downloading Kubernetes (the de facto industry standard for open source container orchestration software) to orchestrate their new container deployments. Developers are using it as a type of playground as they familiarize themselves with the software and its capabilities.

Yet developing and deploying containers at enterprise scale presents a different set of challenges than playground projects. Enterprises need to consider six key factors when adopting a Kubernetes-based container platform or building one themselves for mission-critical, production-scale deployments.

1) Container security

All around you, whether you know it or not, are containerized applications. Look no further than your online banking session or the message you send to your service provider through a mobile app. With real workloads come real responsibilities. Running business-critical data in containers makes container security one of the most important issues facing enterprises as they rapidly adopt container technology at scale.

Kubernetes automates application deployment, scaling, and management of containers across clusters of hosts. The problem is that Kubernetes is not secure by default; IT must ensure applications onboarded on top of Kubernetes are secure. Once an application is deployed and running, IT must also monitor it for malicious activity. Security needs to be implemented to eliminate any chance of compromised or misconfigured containers, which can lead to unauthorized access to your workloads and compute resources, and even the potential to recreate your application (and its data) somewhere else. These security constructs do not come for free.

IT operations and/or developers must do some heavy lifting to ensure the Kubernetes cluster, the containers running on the cluster, and the code running in the containers are secure. This involves a multitude of security concerns such as authentication, role-based access control, policies, container/code vulnerabilities, and data and communication encryption.
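To make the role-based access control concern above concrete, here is a minimal sketch of a namespaced Kubernetes Role and RoleBinding, expressed as manifest-shaped Python dicts. The names (`app-reader`, the `dev-team` group, the namespace) are hypothetical, chosen only for illustration; a real deployment would tailor the resources and verbs to its own workloads.

```python
def read_only_role(namespace: str) -> dict:
    """A Role granting read-only access to pods and their logs in one
    namespace -- least privilege, with no create/update/delete verbs."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "app-reader", "namespace": namespace},
        "rules": [
            {
                "apiGroups": [""],  # "" is the core API group (pods live here)
                "resources": ["pods", "pods/log"],
                "verbs": ["get", "list", "watch"],
            }
        ],
    }


def bind_role_to_group(namespace: str, group: str) -> dict:
    """A RoleBinding attaching the read-only Role to a user group,
    so developers can inspect workloads without modifying them."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "app-reader-binding", "namespace": namespace},
        "subjects": [
            {
                "kind": "Group",
                "name": group,
                "apiGroup": "rbac.authorization.k8s.io",
            }
        ],
        "roleRef": {
            "kind": "Role",
            "name": "app-reader",
            "apiGroup": "rbac.authorization.k8s.io",
        },
    }
```

In practice these objects would be written as YAML and applied with `kubectl apply`, but even this small sketch shows the kind of deliberate policy work that “does not come for free”: every team, namespace, and verb must be decided and maintained by someone.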

2) Business continuity of production apps

Once security constructs are built into the Kubernetes platform, IT must implement resiliency, disaster recovery, and business continuity processes for mission-critical applications as well as their runtime Kubernetes cluster(s). Keep in mind, when a business runs these applications in production, critical business data is consumed and produced. IT must ensure this data is not only highly available but can be recovered quickly in case of failure of the underlying infrastructure.

Kubernetes is smart enough to provide some amount of resilience, restarting failed containers and rescheduling workloads from a failed host onto healthy ones. But what if the entire data center fails, such as in a power outage? For mission-critical applications, it is imperative to plan for business continuity. Key capabilities to support data availability and backup and recovery of applications with minimal to no downtime must be considered.
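The built-in resilience described above has to be asked for explicitly. As a hedged illustration, here is a manifest-shaped sketch of a Deployment that runs multiple replicas and spreads them across failure zones, so losing one zone does not take down every copy. The names (`web`, the image tag) are hypothetical placeholders.

```python
def resilient_deployment(name: str, image: str, replicas: int = 3) -> dict:
    """A Deployment whose replicas are spread across availability zones.

    Kubernetes replaces failed pods automatically, but only a zone-aware
    spread protects against a whole zone (or rack) going dark.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                    # Keep replica counts per zone within 1 of each other.
                    "topologySpreadConstraints": [
                        {
                            "maxSkew": 1,
                            "topologyKey": "topology.kubernetes.io/zone",
                            "whenUnsatisfiable": "ScheduleAnyway",
                            "labelSelector": {"matchLabels": labels},
                        }
                    ],
                },
            },
        },
    }
```

Note what this sketch does not cover: the application’s persistent data. Replicated pods protect the compute layer, but backup, replication, and recovery of the data itself remain separate obligations that IT must design for.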

3) Multiple Kubernetes clusters

Because Kubernetes is open source, it is available to anyone to download for free. Kubernetes has become a developer’s go-to container software, and they love the flexibility and autonomy it gives them to quickly build and iterate on applications.

The availability of Kubernetes is great for developers, yet the central IT team is concerned with the lack of visibility and control. And without proper control, the enterprise is more vulnerable to serious problems. As I mentioned above, security is one of the biggest threats. Another is a lack of consistency. Each developer can download a different version of Kubernetes along with a wide variety of tools, based on what works best for a specific application. This inconsistency is fine when an application is in the playground stage, but it becomes a major headache later in the development cycle.

The challenge for an enterprise CIO is twofold: gain control while still providing developers with the flexibility they need. The CIO must promote innovation without disrupting it.

4) Supporting machine learning and data-intensive, non-cloud native apps

When deploying Kubernetes clusters, the enterprise must ensure developers have easy access to the right set of building blocks. One of the reasons the public cloud is so popular is developers are given everything they need – immediately at their fingertips. The developer doesn’t have to submit a request to IT and then wait for days or months to get the tools needed to build a successful application.

The same concept is important when deploying container technology on premises in the enterprise. The more essential building blocks you provide to developers, the faster they can build applications that generate business value.

Developers need tools that support all types of workloads, but support for data-centric artificial intelligence (AI) and machine learning (ML) workloads is especially vital. These applications require notebooks, ML/AI-centric runtimes such as Spark, and other toolkits preconfigured and prewired to wrangle data—all of which takes additional time, effort, and cost.

5) Highest performance container deployment

A hotly debated topic today is whether enterprise IT should run containers on virtual machines or bare metal. I side with those who believe containers should run on bare metal servers, especially for data-intensive workloads, as bare metal typically provides better performance and lower operational and administrative overhead. Because containers are already a form of virtualization, it doesn’t make sense to stack them on another virtualization layer. From a tech perspective, running containers on virtual machines is akin to using horses to pull a Ferrari.

Today’s data center is heavily virtualized for running legacy applications, and yes, some of those may benefit from continuing to run on a VM. Yet this deployment scenario should be the exception, not the norm. When deploying modern applications, you no longer need virtual machines: you can get the same elasticity and flexibility, with much better utilization, running containers directly on bare metal. Plus, you remove the hypervisor’s administrative and licensing costs.

6) Robust technology partnerships

Kubernetes-based containerized applications can be complex to deploy, and once deployed, they can be even more complex to maintain. Modern applications depend on an ever-changing and expanding technology landscape. This means integration with many partner technologies is a key consideration for any organization working within this environment. A centralized marketplace is needed to simplify and streamline the use of ISV apps.

For example, an organization may need an ISV marketplace to help ensure multiple development and data science teams are enabled on a common container platform, which requires interoperability with 3rd party ISV or open source tools. Additionally, they may need to confirm organizational security integration or provide a common and persistent storage layer. The marketplace could also provide a place where ISVs and hardware vendors can design, test, and market products that integrate with an organization’s platforms and products.

HPE – working to solve Kubernetes challenges for the enterprise

To address these common Kubernetes challenges, HPE delivers the HPE Ezmeral Container Platform and HPE Ezmeral MLOps with an integrated HPE Ezmeral Data Fabric. This single pane of glass is the control plane software that provides the broad set of tools organizations demand to manage their workloads.

HPE provides a robust security model that enhances what open source Kubernetes provides, especially in the context of supporting multiple Kubernetes clusters from a single control point. The HPE Ezmeral Container Platform is also constructed from the ground up with many levels of high availability. As for the proliferation of Kubernetes clusters, HPE makes it easy to see and manage them all through a single control plane. HPE Ezmeral MLOps also offers a set of services for modern data center use cases such as AI, ML, data wrangling, and data analytics. With a persistent storage layer built in, the enterprise doesn’t have to think about where to store data or how to access it—because all the data is incorporated.

HPE has deep experience running containers on bare metal and can effectively leverage, expose, and use accelerators (like GPUs that are needed for high performance, compute intensive workloads). Lastly, HPE is focused on high value ISV products in the AI and ML space, partnering for deep integration into the HPE Ezmeral Container Platform.

With HPE on their team, enterprises can empower their developers to embrace Kubernetes—then move quickly and seamlessly from playground to production! Learn more about HPE Ezmeral; you may also request a demo.


About Anant Chintamaneni

Anant Chintamaneni is Vice President and GM, HPE Ezmeral at Hewlett Packard Enterprise, where he is responsible for product development and GTM strategy to help enterprises on their digital transformation journey with next generation hybrid cloud software. Anant has more than 19 years of experience in enterprise software, business intelligence and Big Data/Analytics infrastructure. Anant previously led product management and engineering teams at BlueData, Pivotal, DellEMC, and NICE Systems.