Machine virtualization is nothing new. The technology was around in the mid '90s, although mainstream adoption came much later. It allowed the IT industry to consolidate workloads onto fewer physical servers through the use of virtual machines. Nowadays it offers much more than simple workload consolidation, and its benefits include:
- Run multiple operating systems on one physical machine
- Divide system resources between virtual machines
- Provide fault and security isolation at the hardware level
- Provision or migrate any virtual machine to any physical server
Although virtualization provides all these benefits, it has one glaring weakness: duplication.
Let me elaborate and narrow it down to OS duplication. Every application you deploy on a VM has its own operating system sitting underneath it, regardless of what application the server is actually running.
You may simply wish to host a small PHP application of around 5 MB on an Apache server (10 MB), yet a core CentOS installation will use around 800 MB of disk space that your application doesn't actually need!
Obviously, when you scale this up to thousands of VMs, that's terabytes of data you don't actually care about but are paying for.
There is another way, namely containers.
(Although Docker is not the only container engine, it is the one I am most familiar with and the most popular, so I may mention some Docker-specific terms going forward.)
Containers address this issue by moving where the magic happens down a layer. This effectively removes the need for a guest OS: each container shares the host OS kernel while keeping a level of isolation from the other applications.
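To make this concrete, here is a minimal sketch of what packaging that small PHP application might look like as a container. The base image is the official `php` image's Apache variant; the application path is illustrative:

```dockerfile
# Hypothetical example: ship only the app, not a full guest OS.
# php:8-apache is an official image bundling Apache with mod_php.
FROM php:8-apache

# Copy the ~5 MB application into Apache's document root.
COPY ./app/ /var/www/html/

# No OS install step: the container shares the host kernel,
# and the base image provides only the userland it needs.
```

The resulting image layers sit on top of the shared kernel, which is exactly where the disk savings over a full VM come from.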
(Image credit: Docker Inc.)
As you can see, containers solve a very real problem. Using kernel cgroups, you can even assign resources to each container, just like a VM, placing restrictions on how much each application can consume.
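As a sketch of those cgroup restrictions, resource caps can be declared per container, for example in a Compose file (the service name, image, and limits here are illustrative):

```yaml
# docker-compose.yml sketch: cap the app's CPU and memory via cgroups.
services:
  web:
    image: php:8-apache
    cpus: 1.0        # at most one CPU core
    mem_limit: 256m  # at most 256 MB of RAM
```

The same limits can be passed directly on the command line with `docker run --cpus` and `--memory`.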
(As Docker is the most widely used container engine, I am going to delve into Docker-specific container security issues below.)
Docker security has come a long way in recent years. With the introduction of the ability to run a container as someone other than root, and with the daemon bound to a UNIX socket rather than a network port, Docker has certainly reduced its attack surface. By default, however, containers still run as root, and UIDs are mapped directly from the host, so root in the container is pretty much root on the host. This can easily be changed.
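Switching to a non-root user inside the image is a small change. A hedged sketch (the user name is illustrative, and note you would also reconfigure Apache to listen on an unprivileged port such as 8080, since non-root processes cannot bind port 80):

```dockerfile
FROM php:8-apache

# Create an unprivileged system user and switch to it, so a container
# breakout does not land an attacker in a root-mapped UID on the host.
RUN useradd --system --no-create-home appuser
USER appuser
```

Alternatively, `docker run --user` can override the user at runtime without rebuilding the image.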
Docker also supports SELinux and AppArmor by default. Assuming you run as a non-root user inside the container, Docker is reasonably secure, with the majority of the common issues ironed out.
The glorious PID 1
Containers are designed to run one process at a time, whether that's a web server or a bash script. When that process comes to an end, the container dies. This is confusing at first, but you quickly come to love this way of working. By splitting your containers up so that each has a specific function, you simplify what has to happen inside each container. This also opens up a lot of options for self-healing: if Apache crashes, you can simply spawn a new container from the working image and have it back up almost instantly.
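The self-healing pattern above can be sketched with Docker's restart policies (the service name and image are illustrative):

```yaml
# Compose sketch: one process per container, respawned on failure.
services:
  web:
    image: httpd:2.4     # Apache runs as the container's single process (PID 1)
    restart: on-failure  # if that process dies, respawn from the working image
```

Because the image is immutable, every respawn starts from a known-good state rather than a drifted one.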
Should I switch to containers from VMs?
The answer really depends on you as an IT department and your technical skill level. Containers provide massive benefits and are incredibly scalable. However, as with any new technology, they bring with them a slew of technical issues to resolve. Furthermore, it's my belief that containers' true potential lies in their ability to be orchestrated by an engine such as Docker Swarm or Kubernetes.
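To give a flavour of that orchestration, a swarm-mode Compose sketch declares the desired state and lets the engine maintain it (names and replica count are illustrative):

```yaml
# Swarm-mode sketch: the orchestrator keeps three replicas running,
# rescheduling containers onto healthy nodes if one fails.
services:
  web:
    image: php:8-apache
    deploy:
      replicas: 3
```

This declarative model, rather than any single container, is where much of the operational payoff lives.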
Because the difference between containers and VMs is so much greater than the difference between VMs and physical machines, the greatest challenge in moving to a containerized infrastructure is the change of mindset.
Despite all of this, I truly believe that containers are the future, and I would highly recommend you at least look into using them for any projects you have ongoing.