What is Docker and why use it? (Part I)
The Docker series
Docker is the “poster child” of the containerization movement, but is it here to stay? Despite the buzz, many application developers are still undecided about using containers in production. In this series of articles we will cover:
- the need for containerization
- Docker’s benefits and best use cases
- containerization alternatives in the market
- going into production with Docker in enterprise-grade environments
What are containers?
The advent of web-based software, services and platforms, along with the commoditization of computing resources, means that scaling has become of paramount importance. Moreover, new paradigms in software development — in particular agile techniques, which shorten the path from the developer to the production environment — have contributed to adoption among enterprises and startups alike. These new development practices, together with a new set of requirements around continuous integration and continuous deployment, created fertile ground for ideas like containerization.
Containerization tools enable ‘immutability’ in the infrastructure: containerized applications are built ‘on the spot’ and ‘in place’, with exactly the configuration and dependencies the original authors intended. This is one of the main reasons (perhaps even the main driving force) behind their rapid adoption. These tools allow application developers to focus on the application instead of the infrastructure, while bringing versioning to application image distribution. Docker even borrows concepts that “traditionally” belonged to a different area: you can pull, push and commit Docker images — operations lifted from source code management software like Git and Mercurial.
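As a sketch of that SCM-style workflow (the image name `myorg/webapp` below is hypothetical, used purely for illustration), a typical sequence looks like:

```shell
# Fetch a versioned base image from a registry (analogous to `git clone`)
docker pull ubuntu:14.04

# Run a container from it and make a change inside
docker run --name web ubuntu:14.04 touch /etc/configured

# Snapshot the container's filesystem changes as a new image
# (analogous to `git commit`)
docker commit web myorg/webapp:1.0

# Publish the image so others can pull the exact same artifact
# (analogous to `git push`)
docker push myorg/webapp:1.0
```

Each step produces or consumes a tagged, versioned artifact, which is what makes the Git analogy more than a metaphor.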
To get any confusion out of the way, Docker refers both to an open-source project (https://github.com/docker/docker) and to the company behind it — Docker Inc. (formerly dotCloud). dotCloud was a PaaS provider that built Docker for internal use. Once they realized its potential, they pivoted and focused exclusively on developing Docker. The importance of the project to the greater ecosystem was recognized by most of the industry, allowing Docker Inc. to raise $40M in its most recent funding round. Following the pivot, dotCloud (the company’s PaaS offering) was ‘relocated’ to Berlin-based cloudControl.
Docker is just one of the available containerization offerings and, like many similar projects, relies on kernel features that (in Linux) have been available for more than 6 years (since around 2007). As with other technologies (see, e.g., the boom in smartphone sales after the introduction of the iPhone), Docker’s contribution was putting a user- (and developer-) friendly interface around pre-existing components. Similar concepts had appeared 2-3 years earlier in Solaris (build 51 of Solaris 10), known as Solaris Containers with Solaris Zones. While Docker was initially a wrapper around LXC (LinuX Containers), nowadays it manages libcontainer — a unified interface to cgroups and kernel namespaces. The gist is: containerization through kernel namespaces enables multiple isolated user spaces, while the processes ‘occupying’ these spaces have no — okay, you got me, they have a little — knowledge that other users exist.
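To make the namespace idea concrete, here is a minimal sketch using the `unshare` utility from util-linux (it needs root, or a kernel with unprivileged user namespaces enabled). It starts a shell in a fresh PID namespace, where that shell believes it is PID 1 and cannot see any of the host’s other processes:

```shell
# Create new PID and mount namespaces, fork into them, and remount /proc
# so that process listings reflect only the new namespace.
sudo unshare --pid --fork --mount-proc sh -c '
  echo "PID as seen inside the namespace: $$"  # reports 1
  ps ax                                        # only this shell and ps are visible
'
```

Docker builds on exactly these primitives (plus cgroups for resource limits), wrapping them in image management and a friendlier CLI.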
Docker vs. Virtualization
While there are a ton of similarities, there are also at least two tons of differences between the two. The most important difference is that, with Docker, there is no overhead from emulating an entire OS for a virtual machine. As such, Docker provides better performance (in terms of speed and capacity) for applications running in a container than for the same applications in virtual machines, provided the VM host and the container host have the same hardware characteristics. Another important difference is that, with Docker, the host and the container have to run the same kernel — so a Linux host can only run Linux containers.
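A quick way to see the shared-kernel point (assuming Docker and the small `alpine` image are available on your machine) is to compare kernel versions; the container reports the host’s kernel release, because there is no guest kernel at all:

```shell
# Kernel release on the host
uname -r

# The same release, printed from inside a container — the container has
# its own filesystem and process tree, but not its own kernel
docker run --rm alpine uname -r
```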
A lesser-known fact about virtualization is that it appeared close to 50 years ago at IBM (where else?), in their CP-40 system, and was later introduced into several other product lines (such as iSeries and zSeries). As with a plethora of other technologies, the continuous demand for better performance, better management and easier deployment, together with advances in hardware and software, has made virtualization mature considerably in recent years.
Both containers and virtual machines address the same problem — isolation and control of the components of an application — but they achieve it in different ways: containers give up some isolation in exchange for more efficient usage of the host’s resources.
Read the second part of the Docker series: “Why developers love Docker”.
This is a guest post written by Felix Crisan.
Felix Crisan – CTO of Netopia (the company behind the mobilPay and web2sms services), has more than 15 years of experience in IT, payments and telecom. He went from startups to corporate and then back to startup life, building architectures for IBM and HP as well as games like Moorhuhn. From employee to entrepreneur, his passion has always been technology and programming, lately becoming quite a Big Data aficionado.
Follow Felix on Twitter and LinkedIn.