The rise of Docker highlights a change in the way applications are being developed, both today and for the future.
Today, having a successful business means pushing the boundaries with technology. In my previous post on DevOps, The 3 Ways and Cloud, I talk about this and the importance of using IT as an enabler for all business goals. The impact is seen most obviously in Silicon Valley, where there is much more focus on having top coders. Some of the best developers have agents managing the selection of companies they work for and which projects are best for their careers. They’re the new rock stars.
Because of this, developers are getting what they want and need more than ever. So what do developers want? Typically they ask for more speed, more agility, more flexibility and more control. They don’t want to be constrained.
What are containers?
Linux containers aren’t a new concept; they’ve been around for years. Only recently, though, have they been utilised more than ever, partly because of the need to build massively scalable web applications.
Docker is an open source project and the most popular container technology; it’s the only one people really talk about these days. One of the reasons Docker has become so prolific is its App Store equivalent, the Docker Hub. This stores a wealth of container images built by the community, ready to pull down in an instant.
The easiest way to describe containers is by showing the differences between container technology and virtual machines (see diagram).
When you deploy a virtual machine, you deploy a full instance of an operating system and all of the features that come with it, whether you like it or not. As you see with Docker, you can deploy multiple applications within isolated containers that share the same operating system. This removes the overhead of the OS and results in containers being faster to deploy. Not to mention the savings in resources.
The Docker architecture at a high level has two parts:
Docker Host – This is where the Docker daemon runs and the containers (and their images) are stored.
Docker Client – This is where the commands to control Docker are executed.
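A quick way to see this split in practice, assuming Docker is already installed (the remote hostname below is just a placeholder to illustrate the idea):

```shell
# Ask the client and the daemon to report their versions; the "Client"
# and "Server" sections of the output correspond to the two parts above.
docker version

# By default the client talks to a local daemon over a Unix socket,
# but it can just as easily control a daemon on another machine:
DOCKER_HOST=tcp://docker-host.example.com:2376 docker ps
```

The same client binary drives either a local or a remote Docker Host, which is what makes the two-part architecture useful.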
Doing the basics is remarkably easy. Here are 3 Linux commands to get you started with containers after you’ve installed Docker:
systemctl start docker – This command starts the Docker daemon (the engine).
docker pull nginx – The “pull” command downloads an image from the Docker Hub. In this example it’s a small web server called Nginx.
docker run --name some-nginx -d nginx – The “run” command creates the container instance itself from the image. Different images have different options, which are documented in the hub.
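Putting those three commands together, here is a rough end-to-end sketch (it assumes a systemd-based Linux host with Docker installed; the port mapping and container name are arbitrary choices, not requirements):

```shell
# Start the Docker daemon.
sudo systemctl start docker

# Download the official Nginx image from the Docker Hub.
docker pull nginx

# Run it detached (-d), mapping host port 8080 to port 80 in the container.
docker run --name some-nginx -d -p 8080:80 nginx

# The web server should now be answering on the host.
curl -s http://localhost:8080 | head -n 5

# Tidy up: stop and remove the container.
docker stop some-nginx
docker rm some-nginx
```

From a cold start, the pull is the slow part; once the image is cached locally, the container itself starts in seconds.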
If you have a Mac (it’s Unix-based, remember, so Docker actually runs inside a lightweight Linux VM) and are interested in playing with Docker then I highly recommend a look at Kitematic. It installs Docker on your machine and has a graphical interface that allows you to browse the Docker Hub, with the ability to pull and run containers as easily as downloading an app from the App Store.
What are the benefits?
We’ve already talked about the performance increases due to the reduction in overheads, but there are a few other important benefits too:
Portability – With containers you’re able to move your applications or micro-services around the datacenter, into the public cloud and anywhere else you can think of. The application is completely wrapped up, so you don’t have the OS-level dependencies that VMs have.
Collaboration – Developers can write code for their application wherever they like, deploy it into a container which can then be sent to another developer or pushed into a repository, or a testing environment.
Micro-services – Containers are an enabler for web-scale/cloud-native apps. Because you have removed the bloat, you can start to decouple your applications for scalability. I will go through micro-services another time, as the topic warrants a full post of its own.
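The portability and collaboration points boil down to one workflow: describe the application in a Dockerfile, build an image, and push it somewhere a colleague or test environment can pull it. A minimal sketch, assuming Docker is installed and you are logged in to a registry (the repository name here is hypothetical):

```shell
# Describe the application and its dependencies in a Dockerfile.
cat > Dockerfile <<'EOF'
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
EOF

echo '<h1>Hello from a container</h1>' > index.html

# Build the image locally...
docker build -t my-team/hello-web:1.0 .

# ...and push it to a registry, so anyone else can pull and run
# exactly the same artefact, dependencies and all.
docker push my-team/hello-web:1.0
```

The image, not the source tree, becomes the unit you hand around, which is why the OS underneath stops mattering.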
Cloud Operating Systems
Docker can be run on bare metal, i.e. with no hypervisor. It’s an application which runs on an OS. You can then start to pack your server full of containers. The most common way to deploy containers though is within VMs. At first this seems counterproductive, but containers run best in VMs for all the same reasons that we use VMs today; Flexibility, HA, Enterprise Stability, etc.
Because containers all over the world are being run inside VMs, you would expect this to negate the performance benefits: deploying a full VM with a full OS takes minutes instead of seconds. This has led to the creation of a new type of OS, the Cloud Operating System. Some examples are CoreOS and Photon.
Cloud operating systems are essentially un-bloated Linux distributions that contain only the bare minimum needed to run containers. No frills.
These minimalistic solutions allow you to combine the benefits of containers with virtualization; fast deployment into VMs with limited overheads. There are also other technologies like linked clones which reduce the overhead even further.
VMware even allow you to deploy and manage containers within vSphere itself (see diagram), so you can run containers and traditional VMs side by side in the same server, using the same operational tools.
What does this mean for Cloud?
All of the typical benefits of cloud apply to containerised deployments, both for public and private cloud. Container solutions like Docker have APIs which your cloud can integrate with to provide automation. If you are running your containers in VMs with operating systems like CoreOS or Photon, there isn’t much difference to your cloud: the VM is cloned with the OS already on it (ideally using linked clones) and you’re ready to run commands.
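To make the API point concrete: the Docker daemon exposes a REST API that cloud platforms automate against, and you can poke it yourself. A small sketch, assuming a local daemon listening on its default Unix socket:

```shell
# List running containers straight from the daemon's REST API.
# This is the same endpoint an orchestration or cloud platform would call.
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json

# The CLI is just another consumer of that API; this is the equivalent call:
docker ps --format '{{.Names}}'
```

Everything the client can do goes through this API, which is what makes container deployments so straightforward to automate.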
The importance of Blueprints is amplified when you decouple your application into a containerised micro-services model. Instead of a three-tier application to deploy, you might have a number of tiers containing, for example, 50 micro-services, with complex networking rules between them. Blueprints save a lot of effort by creating standardisation and repeatability, and with so many components they give you the added flexibility to swap components in and out.
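At a tiny scale, you can see what a blueprint has to capture by wiring two services together by hand. A sketch, assuming Docker is installed; the network and service names are made up for illustration:

```shell
# A private network for the application's services to talk over.
docker network create app-net

# A cache tier and a web tier. Containers on the same user-defined
# network can reach each other by name (the web tier could connect
# to "cache" as a hostname).
docker run -d --net app-net --name cache redis
docker run -d --net app-net --name web -p 8080:80 nginx
```

Now imagine doing this for 50 micro-services, with firewall rules between the tiers: that is exactly the repeatable definition a blueprint exists to hold.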