By Siva Sreeraman, VP, CTO and Modernization Tribe Leader at Mphasis
Developers have long faced difficulties porting applications built for one computing environment to another. Incompatibility and unreliability caused by configuration differences in the new environment, such as versions of compilers, loaders, runtime libraries, middleware, and operating systems, increased project effort, cost, and timelines.
Containers provide an elegant solution to this problem. Each container leverages a shared operating system kernel and encapsulates everything needed to run an application (application code, dependencies, environment variables, runtimes, libraries, system tools, and so on) in an isolated, executable unit. Operating system distributions and underlying infrastructure configurations are thus abstracted away from applications, allowing them to run correctly and identically regardless of the environment.
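As a sketch of what this encapsulation looks like, consider a minimal Dockerfile for a hypothetical Python web service (the image tag, file names, and port are illustrative, not from any real project):

```dockerfile
# Hypothetical Python service; names and versions are illustrative.
FROM python:3.12-slim

# Install the application's dependencies into the image itself,
# so the runtime environment travels with the code.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and declare how to run it.
COPY . .
ENV APP_ENV=production
EXPOSE 8000
CMD ["python", "app.py"]
```

Built once with `docker build`, the resulting image carries its code, dependencies, and configuration together, and runs the same way on any host with a compatible container runtime.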
How we got here
Containerization originated in 2001 as a project that allowed several general-purpose Linux servers to run on a single box with autonomy and security. The technology has since been advanced by Red Hat, IBM, and Docker. Google released its container orchestration platform Kubernetes (K8s) in 2014, announcing at the time that it was launching over 2 billion containers weekly in its own infrastructure. The Cloud Native Computing Foundation's 2020 survey reported an overwhelming preference for Kubernetes among companies running containers in production.
Many organizations today decouple their complex monolithic applications into modular, manageable microservices packaged in containers that can be linked together. Container orchestrators such as Kubernetes further automate the installation, deployment, scaling, and management of containerized workloads on clusters, and also handle logging, debugging, version updates, and more.
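A sketch of what this orchestration looks like in practice, assuming a hypothetical `orders` microservice: a Kubernetes Deployment manifest declares the desired number of container replicas, and the orchestrator continuously converges the cluster toward that state:

```yaml
# Hypothetical Deployment manifest; image name and labels are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                  # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:1.4.2
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f`, this declarative approach means scaling is a one-line change to `replicas`, and a crashed container is replaced automatically rather than by manual intervention.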
Software developers prefer containers for their mobility, uniformity, and portability in creating and deploying applications, and for the consistent execution of code regardless of the runtime environment: a 'write once, run anywhere' approach across different infrastructures, on premises or in the cloud. In case of issues, container images can be rolled back quickly. Containers can be spun up on demand to add functionality and scale, and torn down just as quickly to reduce infrastructure costs and resource usage.
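Rollbacks of this kind are built into the orchestrator. Assuming the same hypothetical `orders` Deployment, the commands look roughly like this (they require a running Kubernetes cluster):

```shell
# Update the Deployment to a new image version.
kubectl set image deployment/orders orders=registry.example.com/orders:1.4.3

# If the new version misbehaves, roll back to the previous image.
kubectl rollout undo deployment/orders

# Inspect the revision history that makes the rollback possible.
kubectl rollout history deployment/orders
```

Because each revision points at an immutable image, reverting is a matter of redeploying a known-good version rather than rebuilding or patching servers.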
Containers are lightweight: they do not each run a full operating system but share the host machine's kernel, so they consume far fewer resources than virtual machines. Containers are faster to start up, drive higher server efficiencies, and reduce server and licensing costs.
Containers allow developers to focus on business functionality and not worry about the underlying configurations of applications. 75% of companies using containers achieved a moderate to significant increase in application delivery speed.
A further benefit of isolating applications into containers is the security this provides. Because images are the building blocks of containers, scanning and controlling them can keep maliciously introduced code and unnecessary components out of containers. Auditability should be implemented whenever container configurations are changed or containers are started.
Though containers solve many of the security problems of traditional virtualization methods, they also introduce new challenges. The attack surface of a Kubernetes cluster is large and continuously expanding, with layers upon layers of images spanning thousands of machines and services, and cybercriminals can exploit any misconfiguration to launch coordinated attacks on Kubernetes and gain access to company networks.
Recent attacks have included cryptojacking, in which an organization's vast cloud compute resources are covertly diverted to mining cryptocurrency. Because Kubernetes manages other machines and networks, enterprises should continuously strengthen their security posture and take proactive measures to defend themselves.
Though container cluster managers such as Docker Swarm and Apache Mesos have enabled developers to build, ship, and schedule multi-container applications and to access, share, and consume container pools through APIs, container scaling is still evolving. Container orchestration tools and container cluster managers are not yet fully integrated with each other, cluster managers cannot yet provide enterprise-class security, and a common set of standards is lacking.
The use of managed public cloud Container-as-a-Service (CaaS) offerings such as Amazon Web Services (AWS) Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) is widespread among enterprises today. Container-based Platform-as-a-Service (PaaS) offerings such as Google Cloud Anthos, Red Hat OpenShift, VMware Tanzu Application Service, and SUSE Rancher are also prevalent.
Despite these challenges, containers offer many benefits and present enterprises with an attractive choice for software application development. 61% of container technology adopters expect more than 50% of their existing and new applications to be packaged in containers over the next two years, and Gartner estimates that by 2026, 90% of global organizations will be running containerized applications in production.
Container technology will continue to be a foundational element of the enterprise software technology stack over the coming years.