By Erik Kaulberg, Vice President, Infinidat

Released in 2014 as an open-source system for automating the deployment and management of containerized applications, Kubernetes has come a long way in the past seven years. It was created by Google and then turned over to a vendor-neutral body, the Cloud Native Computing Foundation (CNCF), to manage as an open-source project. But we are only now starting to see mature Kubernetes deployments at mainstream enterprises.

As more companies transition from monoliths to microservices, the use of container technologies has grown. As microservices proliferated, applications increasingly comprised hundreds or even thousands of containers, making them difficult to manage at scale. A need for orchestration technologies emerged.

Kubernetes is an orchestration tool that helps developers deploy and manage containerized applications across different environments, such as cloud, virtual, and physical infrastructure. Applications run in isolated user spaces called containers, a form of virtualization.

Together, Kubernetes and containers enable application-oriented data centers. Essentially, the containers encapsulate the application environment, so the focus shifts to managing applications rather than the traditional approach of managing machines.

Containerized applications are increasingly becoming mainstream services that enterprises want to run alongside other application workloads and services. Container environments are emerging as tier-one environments alongside VMware environments – in fact, with VMware's Tanzu portfolio capabilities, containers may well be part of the VMware environment for many large enterprises.

Organizations with more of a classic open-source inclination tend to focus on Red Hat OpenShift, the dominant commercial Kubernetes distribution.
In any case, petabyte scale is becoming a realistic target for leading-edge enterprise Kubernetes deployments.

None of this would be possible without the standardized approach enabled by the Container Storage Interface (CSI), a mechanism for managing storage directly within container environments. Released in early 2019, CSI has facilitated the construction of production-level container environments that deliver the core enterprise requirements – stability and predictability – when paired with effective backend storage solutions.

Both the availability of the CSI standard and the VMware Tanzu implementation of Kubernetes have been instrumental in turning an open-source solution that was often considered a "science project" into a viable, robust environment for the real world, consumed just as virtual machines (VMs) are in enterprise environments today. Overall, the realignment around Kubernetes has been critical to driving enterprise adoption of container environments beyond side projects or highly customized deployments.

CSI as a Gateway

An effective Kubernetes implementation provides assurance that applications are always accessible to users: applications load fast, and users get a high response rate. Kubernetes also has emerging backup and restore features and functionality.

But one of the most interesting things about CSI is that it acts as a gateway exposing the true potential of the underlying attached storage. A well-designed CSI driver can make it easier to bring in advanced storage capabilities, such as scalable snapshots and Neural Cache data placement mechanisms, both of which are increasingly of interest to large enterprises as they scale their Kubernetes environments.

A good Kubernetes implementation delivers high availability with no downtime, as well as scalability and disaster recovery.
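To make the CSI model above concrete: a vendor's CSI driver is typically consumed through a StorageClass, and applications request capacity through PersistentVolumeClaims, with Kubernetes and the driver handling provisioning on the backend array. A minimal sketch follows – the provisioner name `csi.example.com` and its parameters are hypothetical placeholders, not any specific vendor's driver:

```yaml
# StorageClass that delegates provisioning to a (hypothetical) CSI driver.
# The provisioner string and parameters vary by storage vendor.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: enterprise-block
provisioner: csi.example.com        # hypothetical CSI driver name
parameters:
  fstype: ext4
allowVolumeExpansion: true          # lets claims be resized as usage grows
reclaimPolicy: Delete
---
# Application-side request for storage; the CSI driver creates the
# matching volume on the backend when this claim is bound.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: enterprise-block
```

Advanced capabilities surface through the same gateway: a driver that implements the CSI snapshot interface, for example, lets users create VolumeSnapshot objects against a claim like the one above.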
As usage goes up, volumes will need to be scaled on an as-needed basis, so flexible consumption-based purchasing models are a good fit for Kubernetes environments. And attention must always be paid to the economics – both the direct cost of infrastructure and the ongoing implementation/support costs, which can far outweigh it.

Most organizations are ultimately aiming to build their Kubernetes environments into private clouds. Indeed, a centralized private cloud using Kubernetes and CSI keeps control in the hands of the CIO and IT team of a large enterprise – while giving developers and DevOps teams the power to move as the business evolves.

CSI Is Evolving

As Kubernetes features and functionality are continually improved, CSI continues to evolve rapidly. However, a new release every six weeks yields more churn than value for typical enterprises. As an enterprise storage solutions leader, we do not want to get too far out ahead of the standards; we strive for a balance between regular additions of new functionality and enterprise expectations of stability.

Kubernetes will continue to evolve and improve as containers take a more prominent place in the enterprise platform stack. Even today, though, by becoming the industry-standard approach for deploying containers in production, Kubernetes has finally gone mainstream.

About Erik Kaulberg

Erik Kaulberg is a Vice President at Infinidat, leading cloud strategy, key alliance partnerships including VMware, and special projects. He has broad expertise in enterprise storage and frequently engages key customers, partners, and analysts. Erik previously ran worldwide enterprise storage strategy and business development for IBM, after selling all-flash array innovator Texas Memory Systems to the company.