One of the most interesting developments in the cloud industry today is Docker. If you’re looking to run as many server application instances as possible on as little hardware as possible, without managing a fleet of virtual machines to do it, you’ve probably heard the name “Docker” come up. And for good reason.
With Docker, you can increase density, realize big gains in efficiency and set up nimble testing environments. It uses what are called “containers” to quickly deploy images without the large overhead of a virtual machine. In fact, many are using Docker to cut administration, management, licensing and power costs. How, you ask? Let’s dive in and see.
What does Docker do?
It’s important to understand what Docker makes possible. At its core, Docker makes it easy to deploy application instances without the constraints and physical demands of virtual machines. The time and effort it saves go well beyond that, however. QA/UAT and dev environments are often tedious to set up and keep matched to production configurations, especially when you have frequent rolling releases or multiple teams working on different parts of the product. A Docker deployment can run many different applications in separate containers, isolating each application in a quickly reproduced configuration. This allows development teams to work simultaneously on application environments that are configured identically, and that match what is currently in production. Developing, managing and deploying these applications becomes easier, faster and more agile on such a platform. Configuration management in this world of containers is handled today with some interesting tools like Puppet. If this sounds like a fancy new tool in the DevOps arsenal, that’s because it is.
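The reproducibility described above comes from building images out of a recipe file. A minimal sketch of such a Dockerfile is shown below; the base image, packages and paths here are illustrative assumptions, not part of any particular project:

```dockerfile
# Hypothetical Dockerfile for a small PHP app (names and paths are assumptions)
FROM ubuntu:14.04

# Install the web stack the app needs
RUN apt-get update && apt-get install -y apache2 php5

# Copy the application code into the web root
COPY ./app /var/www/html

# Document the port the container listens on
EXPOSE 80

# Run Apache in the foreground so the container stays alive
CMD ["apachectl", "-D", "FOREGROUND"]
```

Every developer who builds from this file gets the same environment, byte for byte, which is exactly how the QA-to-production drift problem goes away.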
We all know virtual machine hypervisors are mired in emulating hardware. That means heavy system requirements that simply don’t apply to Docker. Applications aren’t tied to a virtual machine, which makes the platform incredibly flexible and light: containers share a single Linux kernel rather than each booting a full guest operating system, a fundamental difference from the world of VMs. The result is far better efficiency and lower resource consumption, a difference that could eventually save the industry significantly in operating costs such as power and hardware. Early Docker adopters report that containers running on one Linux instance can significantly increase the density of applications compared with VMs on the exact same hardware. These savings, combined with the ability to easily package and ship programs, are why Docker is creating such a buzz.
What are the benefits of using Docker?
Docker means better isolation of processes than ever before, application portability, diminished tampering from the outside world and better management of resources. Complicated interdependencies are reduced, making applications much more stable. It’s all part of an ever-simpler, ever-improving platform, and we’re very excited about it.
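That resource management is exposed directly on the command line. As a sketch, the standard `docker run` limit flags look like this (the image name is a placeholder, and exact limits are illustrative assumptions):

```shell
# Cap the container at 256 MB of memory and give it a reduced CPU weight.
# -m / --memory sets a hard memory limit; --cpu-shares sets relative CPU priority.
docker run -d -m 256m --cpu-shares 512 some/image
```

Because limits are set per container rather than per VM, you can pack many differently sized workloads onto one host and tune each independently.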
Docker in action
Let’s get a concrete look at Docker’s power with a hands-on example. Here is a setup process for a WordPress site that can be launched and destroyed in no time flat:
Steps to launch WordPress on the Codero Cloud with Docker:
- Create a 512MB Docker instance
- Log in to the instance
- Pull the WordPress image: docker pull tutum/wordpress
- Launch the container and map port 80: docker run -d -p 80:80 tutum/wordpress
- Log in to WordPress and have a nice day!
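The steps above can be sketched as a short shell session. This assumes the community `tutum/wordpress` image from the list and a host with Docker installed; `-d` runs the container detached:

```shell
# Pull the community WordPress image referenced in the steps above
docker pull tutum/wordpress

# Run it in the background, mapping the host's port 80 to the container's port 80
docker run -d -p 80:80 tutum/wordpress

# Confirm the container is running before browsing to the host's port 80
docker ps
```

Tearing the whole site down afterward is just as quick: stop and remove the container, and the host is back where it started.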
As you can see, the setup process was dramatically shortened because the app was containerized along with its dependencies. The result is a highly portable, repeatable deployment process, completely in line with our core technology goals of automation and efficiency.