A little container history

Just like virtual machines were not invented by VMware, virtualization technology is far older than VMware; mainframes had virtualization available decades ago. However, VMware has done a great job of making virtualization easy to use for everyone.

The same applies to containers and Docker. Container technology was not invented by Docker; just as with VMware, it predates Docker by many years, long before we talked about containers the way we do today. So just like virtual machines, containers are far from new. However, Docker made them accessible to everyone, but more on that later in this post.

Before the VM and container era

Before containers, and even before virtual machines, we only had physical servers. You know, those expensive and sometimes extremely loud machines running somewhere in the basement of your organization 😉

Before deploying an application, you first had to ask the IT department to deploy a physical server somewhere in the data center, on which you could deploy your application.
However, it could take some time before you got your server. It was (and for some organizations still is) not that strange to wait weeks or even months before the server was ready for you to use.

Not that great if you urgently need to deploy that killer app: by the time the server and your application are deployed, someone else could have beaten you to it 🙁

VMware

So at some point in time (if I'm not mistaken, around 1998) VMware was founded, and a year later it released its first product, VMware Workstation. However, things really started to change in the server market around 2000/2001 with the introduction of VMware ESX Server.

With this platform it became possible to deploy new virtual instances of an operating system like Linux or Windows much faster than before. So when a developer needed a server to deploy their application, it would no longer take weeks or months to get it up and running, but days or even hours.

Virtualization made a big difference in resource usage as well. Before virtualization, a typical server would use only around 10 to 15% of its total resources (depending on the application, of course). That meant a lot of wasted capacity, mainly because a typical server ran only one application.

With virtualization you could (and should) stack multiple applications on the same physical server while keeping them separated from each other, each inside its own virtual machine. This results in fewer wasted resources, so more bang for the buck.

However, virtualization and virtual machines are not the holy grail; they have some side effects as well. You still need a full operating system running in order to get the application running. The IT department still needs to install this operating system, just like on a physical server, and that still takes time, no matter whether it's automated or not.

From a resource perspective, because you still need a full instance of the operating system running, there is a lot of overhead, both in time and in resources (compute and storage), when running applications in virtual machines.

Before Docker

Like I said before, containers are far from new; similar technology has been around for many, many years. I have read somewhere that comparable isolation techniques were used on the IBM System/360, a system from 1964, around 54 years ago!
However, I don't think you could run something like docker run -it hello-world on that system 🙂

And then we have Solaris Zones and FreeBSD Jails. Without going deep into these topics, I recommend you check out the excellent blog post from Jessie Frazelle about Containers vs. Zones vs. Jails vs. VMs.

Google has been using container-like technologies for a long time now; around 2006 they started contributing cgroups to the Linux kernel (merged in 2008). Cgroups "provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behavior…".
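To make this a bit more concrete, here is a minimal sketch of what cgroups (v1) look like from a shell. The exact paths depend on your distribution, and the "demo" group is just a made-up example:

  sudo mkdir /sys/fs/cgroup/memory/demo                                        # create a new memory cgroup named "demo"
  echo 104857600 | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes   # cap the group at 100 MiB of memory
  echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs                   # move the current shell (and its future children) into the group

Every process started from that shell now inherits the memory limit, and that is exactly the kind of building block containers are made of.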

Cgroups later became one of the crucial components of LXC (LinuX Containers), which combines cgroups with kernel namespaces (including network namespaces). This made it much easier to deploy containers and to let them communicate with services outside the container over the network infrastructure.
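For a rough idea of how LXC exposes this, here is a small sketch, assuming the LXC userspace tools are installed (the container name "demo" is arbitrary):

  sudo lxc-create -n demo -t download -- -d ubuntu -r xenial -a amd64   # create a container from a downloaded Ubuntu image
  sudo lxc-start -n demo                                                # start it inside its own namespaces and cgroups
  sudo lxc-attach -n demo                                               # get a shell inside the running container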

So we ended up with all kinds of technologies that were more or less related or similar to containers, but nothing standardized like we have these days.

Docker

Before Docker there was dotCloud. DotCloud was founded by Solomon Hykes (the current CTO of Docker), and the company hosted a PaaS platform that in the background leveraged Linux containers to host its customers' applications. Fun fact: internally this container solution was called Docker, but at that time it wasn't publicly available.

In October 2013, Ben Golub announced that dotCloud Inc. would be rebranded as Docker Inc. The company moved away from its PaaS platform and focused on Docker, bringing containers to the tech industry.

So yeah, Docker is only 5 years old (or young). That might not seem like much, but in this tech market it is quite a long time.

When talking about Docker, it can mean three different things: Docker Inc., Docker the container engine (runtime) and orchestration products, and the Docker open source project.

Docker Inc. is the company behind the Docker Engine and the open source project. They also provide commercial support for the Docker products, and they have a complete container platform named Docker Datacenter.

Docker Engine and the orchestration tooling are the products that Docker and the community have created to run containers: the Docker client, the Docker daemon, and all the other tools you can use to build container images and run containers.
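As a small sketch of that workflow (the image name hello-web and the port numbers are just examples), the Docker client sends commands to the daemon, which does the actual work:

  docker build -t hello-web .          # ask the daemon to build an image from the Dockerfile in the current directory
  docker run -d -p 8080:80 hello-web   # start a container from that image, mapping host port 8080 to container port 80
  docker ps                            # list the running containers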

The open source project is where most of the code is written, where new things pop up, and where the other magic happens. At DockerCon US 2017 the project was renamed to the Moby Project, and the famous GitHub repo docker/docker was moved to moby/moby.

You could see Moby as something similar to what Fedora is for Red Hat: Moby is the upstream project for Docker.

After reading this post you might think containers and Docker are the same thing. Well, it depends on who you ask, but no, they are not the same thing! There are many projects and products out there that can run and manage containers just like Docker does. For a list of solutions besides Docker, take a look at the Cloud Native Computing Foundation Landscape. However, Docker was more or less the first and is by far the biggest player today.

The future

So what will the future bring us?

Well, predicting the future is not an easy task, especially in the IT industry, where things change at an ever faster pace.

Personally, I think three major things will change over the next couple of years:

  • Container orchestration (both Docker's and others') will converge more and more on something like Kubernetes.
  • More and more vendors (both existing and new) will enter the container ecosystem; this could unbalance the ecosystem, and eventually a shakeout will happen (just like when virtualization was a hot market).
  • Serverless (not the best name, in my opinion) / Function as a Service will become a bigger player than it is today and will eventually be as common as containers are now.

However, containers are here to stay, just like we still have virtual machines today. But eventually even containers will become the next virtual machines, and something newer, better and cooler will find its way, for example serverless / Function as a Service platforms such as OpenFaaS.

 

