By Siva Sreeraman VP, CTO and Modernization Tribe Leader at Mphasis

In decades past, developers faced many errors when porting applications created for a specific computing environment. Configuration differences in new environments, such as the versions of compilers, loaders, runtime libraries, middleware, and the operating system, created incompatibility and unreliability, and led to undesired increases in project effort, cost, and timelines.

Containers provide an elegant solution to this problem. Each container leverages a shared operating system kernel and encapsulates everything needed to run an application (application code, dependencies, environment variables, application runtimes, libraries, system tools, etc.) in an isolated and executable unit. Differences in operating system distributions and underlying infrastructure configurations are thus abstracted away, allowing applications to run correctly and identically even when deployed to different environments.

How we got here

Containerization originated in 2001 as a project allowing several general-purpose Linux servers to run on a single box with autonomy and security. Subsequent projects at IBM, Red Hat, and Docker moved this technology forward over the years. In 2014, Google launched its container orchestration platform Kubernetes (K8s), revealing that it was already starting over 2 billion containers every week. In 2020, the Cloud Native Computing Foundation released data indicating an overwhelming preference for Kubernetes among companies that used containers in production.

Many organizations today decouple their complex monolithic applications into modular, manageable microservices, packaged in containers that can be linked together. Container orchestrators such as Kubernetes further automate the installation, deployment, scaling, and management of containerized application workloads on clusters, and handle logging, debugging, version updates, and more.
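At the heart of this automation is a reconcile loop: the orchestrator continually compares the desired state an operator has declared (for example, a replica count) with the observed state of the cluster, and acts to close any gap. A minimal Python sketch of that pattern follows; the class and method names are illustrative only, not Kubernetes APIs.

```python
# Minimal sketch of the "reconcile loop" pattern used by container
# orchestrators such as Kubernetes. All names here are illustrative.

class ReplicaController:
    def __init__(self, desired_replicas):
        self.desired = desired_replicas
        self.running = []            # identifiers of running containers
        self._next_id = 0

    def _start_container(self):
        self._next_id += 1
        self.running.append(f"pod-{self._next_id}")

    def _stop_container(self):
        self.running.pop()

    def reconcile(self):
        """Drive observed state toward desired state."""
        while len(self.running) < self.desired:
            self._start_container()   # scale up
        while len(self.running) > self.desired:
            self._stop_container()    # scale down

controller = ReplicaController(desired_replicas=3)
controller.reconcile()
print(len(controller.running))        # 3

# A crashed container is simply a divergence between observed and
# desired state; the next reconcile pass repairs it automatically.
controller.running.pop()
controller.reconcile()
print(len(controller.running))        # 3
```

The key design idea is that the operator declares *what* should be true, not *how* to get there; the loop makes failure recovery and scaling the same operation.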
How it works

Containers in Kubernetes, the most widespread container orchestrator, are implemented using Linux kernel features called namespaces and cgroups (control groups). Namespaces limit what a containerized process, or set of processes, can see: each container gets its own view of process IDs, network interfaces, mount points, hostnames, and users. Cgroups limit what those processes can use: their share of system resources such as CPU, memory, disk I/O, and network bandwidth. Together, they enable strong isolation, preventing containers from gaining control over each other's resources.

Containers are grouped into deployable computing units called pods, which share network and storage resources and carry specifications for how to run their containers. Pods run on nodes – physical or virtual machines providing a set of CPU and RAM resources. Nodes are managed by the container orchestration layer and are pooled together into clusters, which act as a single, more powerful machine. Clusters distribute work among individual nodes as needed to execute programs. If nodes are added or removed, the cluster manages the change, and it remains transparent to the program.

Advantages

Containers appeal to the software development community because of the agility, uniformity, and portability they provide in creating and deploying applications, and because code executes consistently irrespective of the runtime environment – a 'write once, run anywhere' approach across different infrastructures, on premises or in the cloud. Container images can be quickly rolled back if any issues are observed.
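The node and cluster behavior described under "How it works" can be sketched as a toy scheduler. This is an illustration of the concept only, not how the Kubernetes scheduler is implemented: real schedulers also weigh CPU and memory requests, affinity rules, and taints, and all names below are hypothetical.

```python
# Toy illustration of a cluster distributing pods across nodes and
# rescheduling transparently when a node is removed.

class Cluster:
    def __init__(self, nodes):
        self.nodes = {name: [] for name in nodes}   # node name -> pods

    def schedule(self, pod):
        # Place the pod on the least-loaded node (naive spreading).
        target = min(self.nodes, key=lambda n: len(self.nodes[n]))
        self.nodes[target].append(pod)

    def remove_node(self, name):
        # Pods evicted from a removed node are rescheduled onto the
        # remaining nodes, so the change is transparent to the workload.
        evicted = self.nodes.pop(name)
        for pod in evicted:
            self.schedule(pod)

cluster = Cluster(["node-a", "node-b", "node-c"])
for i in range(6):
    cluster.schedule(f"pod-{i}")

print({n: len(p) for n, p in cluster.nodes.items()})  # 2 pods per node
cluster.remove_node("node-b")
print(sum(len(p) for p in cluster.nodes.values()))    # all 6 pods still running
```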
Containers can be rapidly spun up, adding business functionality and scalability on demand, and torn down, reducing resource usage and infrastructure costs.

Since containers do not need to run a full operating system and instead share the host machine's kernel, they are lightweight and do not have the resource utilization needs of virtual machines. Containers are faster to start up, drive higher server efficiencies, and reduce server and licensing costs.

Containers allow developers to focus on business functionality rather than the underlying configuration of applications. A consistent and short deployment process enables faster delivery of new applications: 75% of companies using containers achieved a moderate to significant increase in application delivery speed.

A great benefit of isolating applications in containers is the inherent security this provides. Because images are the building blocks of containers, maliciously introduced code and unnecessary components can be kept out of containers by using trusted image registries, enhanced access control methods, and strict policies applied to both accounts and operations. Auditing should be implemented whenever container configurations are changed or containers are started.

Challenges

Though containers solve a lot of security problems compared to traditional virtualization methods, they also introduce new security challenges.
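One of the controls described above – admitting only images that come from trusted registries – can be sketched as a simple admission check. The registry names and policy below are hypothetical examples, not a real admission controller.

```python
# Sketch of a trusted-registry admission check: a pod is admitted only
# if every image it references comes from an allow-listed registry.
# Registry names and the policy itself are hypothetical examples.

TRUSTED_REGISTRIES = {
    "registry.example.com",   # hypothetical private registry
    "gcr.io",
}

def registry_of(image):
    """Return the registry host of an image reference.

    References without an explicit registry (e.g. "nginx:1.25")
    default to Docker Hub ("docker.io").
    """
    if "/" not in image:
        return "docker.io"
    first = image.split("/", 1)[0]
    # A registry host contains a dot or a port; plain path segments do not.
    if "." in first or ":" in first or first == "localhost":
        return first
    return "docker.io"

def admit(pod_images):
    """Admit the pod only if all of its images are from trusted registries."""
    return all(registry_of(img) in TRUSTED_REGISTRIES for img in pod_images)

print(admit(["registry.example.com/payments:1.4"]))   # True
print(admit(["nginx:1.25"]))                          # False: docker.io not allow-listed here
```

In practice such checks are enforced centrally (for example by an admission controller in the cluster), so individual teams cannot bypass the policy.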
The attack surface of a Kubernetes cluster is vast and growing – there are layers upon layers of images spanning thousands of machines and services – and this has given cybercriminals many opportunities to exploit misconfigurations and launch coordinated attacks on Kubernetes to gain access to company networks.

Recent attacks have introduced cryptojacking, wherein an organization's vast compute resources in the cloud are covertly diverted to mining cryptocurrency. As Kubernetes manages other machines and networks, enterprises should continuously strengthen their security posture and take proactive measures to defend themselves.

Though container cluster managers such as Docker Swarm and Apache Mesos have enabled developers to build, ship, and schedule multi-container applications, and to access, share, and consume container pools through APIs, container scaling is still evolving. Container orchestration tools and container cluster managers have not fully integrated with each other, cluster managers are not yet able to provide enterprise-class security, and a common set of standards is lacking.

Containerization best practices

Current best practices for container operations include:

- Building containers from minimal, trusted base images and keeping unnecessary components out of them
- Pulling images only from trusted image registries
- Applying strict access controls and policies to both accounts and operations
- Auditing every change to container configurations and every container start

In conclusion

Despite these challenges, containers present many benefits and offer enterprises an attractive choice for software application development. 61% of container technology adopters expect more than 50% of their existing and new applications to be packaged in containers over the next two years, and Gartner estimates that by 2026, 90% of global organizations will be running containerized applications in production.

The usage of managed public cloud Container-as-a-Service (CaaS) offerings such as Amazon Web Services (AWS) Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) is widespread among enterprises today.
Container-based Platform-as-a-Service (PaaS) offerings such as Google Cloud Anthos, Red Hat OpenShift, VMware Tanzu Application Service, and SUSE Rancher are also prevalent. Lightweight Kubernetes distributions such as K3s (from SUSE Rancher) and Mirantis k0s, which need roughly half the memory of standard K8s and ship as smaller binaries, can be seen in edge, Internet of Things, and Reduced Instruction Set Computing applications.

While the introduction of containers may add some vulnerabilities, the speed, efficiency, and savings they provide in return are well worth the easily managed risk. Thanks to these considerable benefits, container technology will remain a foundational element of the enterprise software technology stack over the coming years, and companies should continue to invest in and utilize containerization in their digital transformation journeys.

To learn more, visit us here.