Waiting for the right build has long been a problem with test environments, and differences between development, test and production have let defects escape into production. Virtual machines address these problems by packaging the whole system as a shareable disk image, but they can be slow and take gigabytes of disk space.

Enter Docker, a lightweight, fast virtualization tool for Linux.

The opportunity Docker presents

First, anyone on the technical staff can create a test environment on the local machine in a few seconds. The new process hooks into the existing operating system, so it does not need to "boot." With a previous build stored locally, Docker is smart enough to load only the difference between the two builds.

This kind of simplicity is common for teams that adopt Docker; if the architecture extends to staging and production, then staging and production pushes can be just as simple.

Another slick feature is the capability to create an entire new virtual infrastructure for your server farm, consisting of a dozen virtual machines, called the "green" build. Any final regression testing can occur in green, which is a complete copy of production. When the testing is done, the deploy script flips the servers, so green now serves production traffic. The previous build, the "blue" build, can stick around in case you need to roll back. That's called blue/green deployment, and it's possible with several different technologies.

Docker just makes it easy.

Why Docker?

Where Windows-based software compiles to a single installer, Web-based software has a different deliverable: the build running on a server. Classic release management for websites involves creating three or four different layers: development, test, production and sometimes a staging environment. The strategy involves at least one server per layer, along with a set of promotion rules.
When the software is ready for the next promotion, the build is deployed to the next layer's server.

Virtual machines changed all that, allowing one physical server to host as many virtual servers as the team has members. That lets each branch be tested separately, then merged into the mainline for final testing, without spending tens of thousands of dollars on new hardware. Having a virtual machine apiece also makes it possible for a developer to debug a production problem on a local machine while a tester re-tests a patch to production on a second machine, one tester checks for regressions in the release about to go out, another five testers exercise features for the next release, and five developers work on new features in new branches.

The problem with virtual machines is size and speed. Each VM contains an entire guest operating system, so creating one means allocating gigabytes of space, installing a whole new operating system, then installing the build onto that operating system. Worse, the guest operating system runs in application space on your computer; it is literally an operating system inside the host operating system. The boot/install process for a virtual machine can take anywhere from several minutes to an hour, which is just enough to interrupt flow. Technical staff can typically host only one or two virtual machines on a desktop without a serious loss of speed, and creating virtual machines on the network on demand is an entire "private cloud computing" project in itself.

Instead of running in application space, Docker runs in the kernel. In other words, it makes itself a part of the operating system. Running in the operating system does limit Docker to modern Linux kernels, on both the host machine and in the container, but it also massively simplifies the operating system's task switching.
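As a sketch of what an image definition looks like, here is a minimal, hypothetical Dockerfile; the base image and file names are assumptions for illustration, not taken from any real project:

```dockerfile
# Each instruction creates one cached layer; rebuilding after a code
# change reuses the unchanged layers above it, which is why only the
# difference between builds has to be loaded.
FROM alpine:3.19

# Copy the application script into the image (assumed file name).
COPY app.sh /app.sh

# The command the container runs on start. It is an ordinary process
# on the host kernel, so there is no guest OS to boot.
CMD ["/bin/sh", "/app.sh"]
```

Building this with `docker build -t myapp .` and starting it with `docker run --rm myapp` typically takes seconds, because only a process starts; the kernel is shared with the host.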
Having Docker in the kernel eliminates many of the redundancies of typical VMs (it needs one kernel, not one per container) and means that Docker containers do not "boot up"; they are already up.

All this combines to make Docker an incredibly fast way to create machines, machines that are exact copies of what will go into production, based on disk images, not a patch to an existing server.

The capability to stop and save a container in a broken state, then debug it later, makes debugging much easier under Docker. If the debugging destroys the environmental condition, or "dirties" the environment in some way, restoring the broken state is trivial. Docker is also capable of running any Linux application on any Linux server; the quick startup and disposable nature of containers makes it fantastic for things like batch processing.

There are tools that help you configure and even simulate entire infrastructures with Docker containers, making life easier for the team. The most popular is Docker Compose, which can reduce what used to be an ultra-complex setup process to a single command.

Docker in production

Docker on your local machine and a couple of cloud servers is one thing; making it production-ready is a different matter entirely. The early days of Docker were the Wild West when it came to production. The phrase commonly thrown around is "container orchestration": the practice of taking Dockerized apps and services and scheduling them onto clusters of compute resources. Organizations then don't care where the containers are running, just that they're running and serving the right requests, whether that is web traffic, internal services and databases, or messaging queues.

Today's big players in orchestration are AWS EC2 Container Service, Docker Swarm and Mesos.
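As an illustration of the single-command setup that Docker Compose enables, a hypothetical two-service stack might be described like this (the service names, image names and ports are assumptions, not from any real deployment):

```yaml
# docker-compose.yml -- a hypothetical web app plus database.
version: "3"
services:
  web:
    image: myorg/webapp:latest    # assumed application image
    ports:
      - "8080:80"                 # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example  # illustration only; use secrets in production
```

A single `docker-compose up` then builds and starts the whole stack, replacing what used to be a multi-page setup document.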
Orchestration services typically manage containers well, but they may also come with other bells and whistles such as blue/green deploys, container healing, load balancing, service discovery and inter-container networking.

When evaluating Docker for production, there are certainly other challenges, such as logging and environment-variable configuration. One great place to start, and to see whether you are ready to move toward Docker, is to check how close your application is to an optimal 12-Factor App.

Don Taylor's tutorial on Docker at CodeMash walked the audience through installing Docker on a Linux machine, creating a container and executing commands on that container. Best of all, the labs are on GitHub for you to follow along.

So install a Linux virtual machine, put Docker inside it, explore how to create containers, and decide for yourself whether this is a technology worth using in your organization.

Jared Short contributed to this article.