In an effort to reinvent their operating models and strengthen their digital capabilities, organizations are increasingly striving for greater speed, agility and efficiency. Many aim to accelerate innovation by shortening release and deployment cycles, providing a better digital experience to their customers and thereby outpacing the competition.
With the rise of containers and microservices, the rate at which new application projects move into production environments is growing rapidly. According to a 2016 study published by ClusterHQ, 79 percent of respondents said that their organizations run container technologies, 76 percent of them in production environments. However, deploying these applications to production inevitably creates new challenges for IT operations teams.
The DevOps paradigm
As an integral part of their digital roadmap, many organizations are embracing agile development frameworks to shorten time-to-market by combining development, quality assurance and operations tasks. This new paradigm leverages continuous integration and continuous delivery (CI/CD) methodologies and breaks down silos to streamline processes and enforce close interaction. Savvy organizations are opting for container technology such as Docker to complement their arsenals.
A Docker container includes a complete file system that encompasses the code, runtime, system tools and libraries – in other words, all components an application needs to run. This not only makes code more portable, but also makes applications more resilient and allows for rapid deployment.
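This packaging is defined in a Dockerfile. The sketch below (a hypothetical Python web service; all file and image names are illustrative) shows how the base image, libraries and application code come together into one self-contained unit:

```dockerfile
# Hypothetical service image: the base image supplies the OS userland
# and language runtime; the build adds libraries and application code.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # third-party libraries
COPY . .                                             # application code
CMD ["python", "app.py"]                             # process started in the container
```

Because everything the application needs is baked into the image, the same artifact can be deployed unchanged on a laptop, a test cluster or production.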
Why microservices are gaining traction
Microservices enabled by containers are gaining traction because they enable developers to isolate functions easily, which saves time and effort and increases overall productivity. Unlike monoliths, where even the tiniest change means rebuilding and redeploying the whole application, each microservice deals with just one concern.
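That isolation is easy to see in a Docker Compose file. In this sketch (service names are hypothetical), each concern lives in its own container and can be rebuilt and redeployed without touching the others:

```yaml
# docker-compose.yml sketch -- one container per concern
services:
  orders:
    build: ./orders        # order handling, built from its own source tree
  billing:
    build: ./billing       # billing logic, completely separate
  gateway:
    image: nginx:alpine    # routes external traffic to the services
    ports:
      - "8080:80"
```

A change to the hypothetical `billing` service then means rebuilding and redeploying only that container, e.g. `docker compose up -d --build billing`, while `orders` and the gateway keep running untouched.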
A recent Docker survey concluded that, on average, organizations are seeing a 13-fold increase in the release frequency of their applications. And since containers and microservices are not limited to new applications, the gains reach further: a solid 85 percent reported improvements in their overall IT operations after opting for Docker containers, and more than 63 percent improved bug fixing, as measured by their mean time to repair (MTTR).
5 possible speed bumps to overcome on your journey
1. Capacity management
Memory plays a crucial role when it comes to containers. Cluster managers compare the total memory capacity available on each host with the memory requested by a container to determine on which host to deploy it. If no host has sufficient free capacity, the container will not be deployed. Typically, each container runs as a single encapsulated process with shared infrastructure underneath. With their own operating environment attached, images can easily reach a couple of hundred megabytes in size. Thus, lifecycle management – especially retiring old images – requires constant attention to free up shared resources and avoid capacity constraints.
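The placement logic described above is expressed explicitly in orchestrators such as Kubernetes. A minimal sketch (all names and values hypothetical) of how a container declares the memory the scheduler reasons about:

```yaml
# Kubernetes Pod sketch: the scheduler compares the declared memory
# request against free capacity on each node; if no node can satisfy
# it, the Pod stays unscheduled (Pending).
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.4   # hypothetical image
      resources:
        requests:
          memory: "256Mi"   # amount used for placement decisions
        limits:
          memory: "512Mi"   # hard cap enforced at runtime
```

On the image side, retiring old layers can be partly automated with Docker's built-in pruning, e.g. `docker image prune -a --filter "until=168h"`, which removes unused images older than a week and reclaims shared disk capacity.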
2. Network layer
Other potential bottlenecks are the network used within the cluster and the network virtualization layer used to connect containers across multiple clusters; both require close monitoring of performance, load balancing, and the seamless interaction between them. The need will only grow: as recently reported by F5 Networks, one out of five respondents expects to have over 50 percent of their apps in the cloud by the end of 2017. This is even more important when operating at scale in a hybrid scenario or a multi-cloud environment comprising shiploads of containers spread across multiple service providers.
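In Docker's own stack, that cross-host virtualization layer is the overlay network driver. A Compose sketch (names hypothetical; the overlay driver requires Docker running in Swarm mode) showing two services attached to a network that spans cluster hosts:

```yaml
# Swarm/Compose sketch: the overlay driver stretches one virtual
# network across all cluster nodes, so containers on different hosts
# reach each other by service name -- and that shared fabric is
# exactly what needs performance and load-balancing monitoring.
services:
  api:
    image: registry.example.com/api:1.4      # hypothetical image
    networks:
      - backbone
  worker:
    image: registry.example.com/worker:1.4   # hypothetical image
    networks:
      - backbone
networks:
  backbone:
    driver: overlay
```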
3. Lifecycle management and orchestration
Both containers and microservices can easily be replaced and therefore tend to have a relatively short lifespan. According to findings from Datadog, across all companies that adopt Docker, containers have an average lifespan of 2.5 days, while traditional and cloud-based VMs last for 23 days. Organizations employing container orchestration frameworks to automate the start and stop of containers achieve even higher churn rates, with the typical container lasting for less than a day; those running Docker without orchestration use their containers for 5.5 days on average. This short lifespan, combined with the enormous density, leads to an unprecedented number of items that require monitoring.
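The churn the Datadog figures describe is a direct consequence of how orchestrators work. A minimal Kubernetes sketch (names hypothetical) of the automation that starts and stops containers:

```yaml
# Deployment sketch: the orchestrator continuously reconciles toward
# three running replicas, killing and replacing containers whenever a
# node fails or a new image version rolls out -- one reason
# orchestrated containers live less than a day on average.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:2.0   # hypothetical image
```

Rolling out `checkout:2.1` replaces all three containers; nothing is patched in place, so individual containers are disposable by design.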
Many traditional IT monitoring tools don’t provide visibility into the containers that make up those microservices, leaving a gap between hosts and applications that is ultimately off the radar. As soon as the attached applications go live, IT operations teams can suddenly find themselves either blind or flooded with a tsunami of alerts. Neither extreme is good: inconsistent and fragmented alerts can consume huge amounts of resources in avoidable troubleshooting. And despite all the enthusiasm around microservices and cloud-native applications, legacy applications won’t disappear anytime soon. Organizations therefore need to put a single, common monitoring approach in place that spans both worlds and covers the entire IT stack – from the bottom to the top.
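The article doesn't prescribe tooling, but one common combination (an assumption here, not the author's recommendation) is a Prometheus server scraping both container-level metrics via cAdvisor and classic host metrics via node exporters, closing the host-to-application gap from a single place:

```yaml
# Prometheus scrape config sketch (target addresses hypothetical):
# one server collects container metrics and host metrics alike,
# so legacy hosts and containerized microservices share one view.
scrape_configs:
  - job_name: containers
    static_configs:
      - targets: ["cadvisor:8080"]        # per-container CPU/memory/IO
  - job_name: hosts
    static_configs:
      - targets: ["node-exporter:9100"]   # traditional host-level metrics
```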
Organizations need to ensure that they have allocated sufficient manpower to manage an exponentially growing estate of microservices. Not only will all the APIs that the containers need to invoke require continuous housekeeping, but once employed, microservices have a strong tendency to multiply unless managed diligently. All too often, developers are tempted to add new functionality by creating yet another microservice. In no time, organizations find themselves attempting to manage an army of containers and countless microservices competing for the same IT infrastructure underneath. Organizations must therefore employ analytics tools that discover duplicative services and detect patterns in container behavior and consumption, in order to prioritize access to system resources.
CIOs are under huge pressure to create more agile IT environments that support their organization’s digital business strategies. Containers and microservices will play an important role in accomplishing these objectives by enabling the organization to accelerate release cycles and gain efficiency.
In theory, microservices are designed to make managing IT easier. However, without solid planning and continuous housekeeping, organizations might be overwhelmed and soon find that they have exacerbated long-standing issues rather than solved them. In fact, it takes time and effort to operate containers and microservices at enterprise grade. Both natively offer only limited visibility, which can be painful when it comes to business-critical processes. If something goes wrong in the digital era, it’s not just the application that is down; entire streams of revenue might suddenly be cut off. Not being able to fix, or even detect, the problem due to missing visibility into the application landscape is certainly not what the CIO wants to confess in the boardroom.
Organizations that deploy and manage microservices and containers wisely can achieve great success and gain a number of tangible benefits, as outlined above. However, crossing the chasm and operating at scale requires sufficient manpower, ironing out bottlenecks such as capacity management and networking issues, and putting orchestration, monitoring and management frameworks in place.