If you’re in the early stages of container adoption, you’re probably asking yourself questions such as: How do I know what my virtual private servers are up to? How do I decide which container system to use, and, once I’ve done that, how do I distribute it to an unknown number of servers running at unknown IP addresses?
Wouldn’t it be nice to have a guidebook? We thought so. Our No-Nonsense Guide to Deploying Containerized Applications walks you through the essential steps for deploying a containerized application, from conception to deployment and beyond. Here’s a peek at what’s covered in the guide.
Are containers the right choice?
Containers bring significant benefits, but they may not be the right choice for every application or scenario you face today. If you’ve got a mature application in your environment that runs well on bare metal or on virtual machines, rebuilding it to take advantage of containers may not be cost effective.
One of the benefits of containerization is that containers are ephemeral – they can come and go as the application’s needs change. That can be a challenge for stateful applications, which need separate persistent storage configured for them to write to. There are also security considerations: containers share the host’s kernel, so they offer weaker isolation than virtual machines do.
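As a sketch of what persistent storage looks like in practice, a Docker Compose file can attach a named volume to a stateful service so its data outlives any individual container. The service and volume names here are hypothetical:

```yaml
# docker-compose.yml (illustrative): a hypothetical database service
# writes to a named volume, so data survives container restarts.
services:
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data  # persisted outside the container

volumes:
  db-data:   # Docker manages this named volume on the host
```

Destroying and recreating the `db` container leaves the contents of `db-data` intact.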
Many organizations adopting containers take a hybrid approach, making deliberate choices about which applications will be built with containers and which will run on traditional infrastructure.
The who’s who of containers
When you’re ready to dive into containers, who does what? There are a lot of tools out there for building and managing container deployments, and it can get overwhelming quickly. We’ll introduce you to some of the big names and concepts. Let’s first draw a distinction between containers and orchestration tools:
- Containers and container technology provide the platform on which an individual container or cluster of containers will run. To run containers at all, you need some sort of container platform installed.
- Orchestration tools help you manage your containerized apps, especially once they grow beyond a single container and into clusters of containers in one or more environments on-prem or in the cloud. They’ll help you scale up, scale out, and scale back as needed, with varying levels of automation. If you’re running a container or two on your development laptop, you don’t need an orchestrator, but to run at scale in production, they’re a necessity.
Next, let’s look at the major players in the space:
Docker, from Docker, Inc., is the most popular container platform available today. Docker runs on-prem and in the cloud, with public cloud services like Amazon Web Services, Microsoft Azure, and Google Cloud Platform providing container service offerings built on Docker. Docker, Inc. also provides a container orchestration tool called Docker Swarm and a tool for defining multi-container applications called Docker Compose.
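To make the multi-container idea concrete, here’s a minimal, hypothetical Docker Compose file defining a two-container application – the service names and images are illustrative, not from any particular project:

```yaml
# docker-compose.yml (illustrative): a web front end plus a Redis cache,
# each running in its own container.
services:
  web:
    build: .            # build the app image from a local Dockerfile
    ports:
      - "8080:8080"     # expose the app on the host
    depends_on:
      - cache           # start the cache before the web service
  cache:
    image: redis:7      # off-the-shelf image from Docker Hub
```

A single `docker compose up` brings up both containers together.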
Kubernetes, originally developed by Google, has emerged as one of the most widely adopted container orchestration tools. Even though Docker Swarm provides container orchestration, Kubernetes has enjoyed wide adoption because it isn’t limited by the Docker API and can provide better fault tolerance for container clusters than Swarm. Some developers also prefer Kubernetes because it’s open source. The tradeoff is that Kubernetes can be more complex to set up initially than Swarm.
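As a sketch of how Kubernetes expresses that fault tolerance, a Deployment declares a desired replica count and Kubernetes keeps that many copies running, replacing containers that fail. The names and image below are hypothetical:

```yaml
# deployment.yaml (illustrative): Kubernetes maintains three replicas,
# rescheduling any pod that crashes or whose node goes down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired number of running pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # hypothetical application image
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml`, then later running `kubectl scale deployment web --replicas=5`, is the scale-up/scale-back behavior orchestrators provide.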
Mesos, from the Apache Software Foundation, is a cluster manager. Many people think of Mesos as a container orchestrator in the same vein as Kubernetes. It can definitely perform those functions, but where Kubernetes is designed specifically to provide container orchestration, Mesos is agnostic about the types of clusters it can manage. It can manage Java clusters, Apache Spark clusters, Hadoop clusters, and more.
In the cloud or on-prem?
When getting started with containers, a lot of developers start with Docker installed on their development machines, building, deploying, and testing applications locally. When it comes time to move into larger scale testing and deployment of your apps, though, the choices today are familiar ones: cloud or on-prem.
Public cloud is great for containers. You can deploy quickly, and the elasticity of public cloud is a perfect fit for the scalability of containers. With public cloud, containers realize the same value other cloud deployments do: you pay only for the resources you consume, so you can keep development costs low and flexibility high.
The major public cloud providers now have special offerings for containerized applications that include Docker containers with Kubernetes as an orchestrator to help simplify the creation, configuration, maintenance, and operation of containerized apps.
When you build a container environment on-prem, you take on the tasks of setting up, maintaining, and monitoring your environment yourself. And while public cloud offerings make running containers easier than ever, not all of those conveniences exist for on-prem deployments. Ease aside, there are some good reasons to deploy containers on-prem: security-sensitive organizations may want local control over the security of container environments; your apps may need access to data that only exists on-prem; or you might be working in an edge environment, like a factory in a remote area, where latency to and from a cloud environment is too high. And if you’ve got the compute resources available, your costs may be lower than in the cloud.
One of the benefits of containers is that when it comes to cloud vs. on-prem, it’s not either/or. Hybrid approaches, in which an orchestration tool intelligently brings up containers in the right environment for the current application needs, are increasingly common.
To dig deeper into containers and orchestration, read our No-Nonsense Guide to Deploying Containerized Applications, designed to help you determine a smart starting point for containerization initiatives.