Day 27 — CS Fundamentals December — Containerization — Containers vs VMs

Yashvardhan Kukreja
6 min read · Dec 28, 2019

Containerization or Container Virtualization is an extremely important topic for any computer science student.

That’s because in projects of every scale, from small to large, containers play a major role in making things much more convenient and efficient.

So, this article is going to be about that!

Let’s dive in!

Introduction

So, in one of my old articles, I talked about virtualization and VMs.

Now, the thing with VMs is that they are heavy and slow.

Imagine you want to run a Node.js application in a virtualized environment on your computer.

So, that would require installing a hypervisor, then creating a VM on top of it, then installing a guest OS inside that VM, then installing some binaries on that guest OS, and FINALLY, running that Node.js application on the VM.

Phew!

Now, the problems with this approach in this situation are:

  • It is slow because it involves installing a heck of a lot of things even though all I wanted was to run a simple Node.js app.
  • It is resource intensive and consumes unnecessary amounts of host resources because it involves installing an entire OS over the hypervisor, which is not needed. All I want is to run a tiny Node.js application which would hardly consume 50 MB of RAM, but the VM approach forces me to install an entire guest OS over the hypervisor, which has its own resource requirements.
  • Shutting down, starting up and restarting a VM takes a lot of time, just like a normal computer.
  • It is tedious
  • It is inconvenient
  • AND MANY MORE, I’LL MENTION THEM LATER ON!

Basically, the VM approach is pretty heavy and inconvenient.

So, enter container virtualization!

So, instead of spinning up VMs, we will launch containers, which make things pretty lightweight and fast.

Now what are containers?

Containers are isolated environments that run applications directly, without the need for an underlying guest OS. All containers share the same kernel, i.e. the kernel of the host OS.

“And who manages the containers?”

Just like a hypervisor manages VMs, a container engine (such as Docker) manages containers.
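Just to give a taste of how light that is in practice, here is a rough sketch of running the Node.js app from earlier with Docker as the container engine. The image tag node:12 and the file name server.js are placeholder assumptions, not details from any specific project:

    $ docker run --rm -p 3000:3000 -v "$PWD":/app -w /app node:12 node server.js

No hypervisor to set up, no guest OS to install: Docker pulls the node:12 image, starts an isolated container that shares the host’s kernel, mounts the current directory into it, runs the app and exposes port 3000 to the host.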

Why are containers so lightweight?

First of all, here is what the container virtualization stack looks like: the hardware, then the host OS, then the container engine, and then the containers running directly on top of it, with no hypervisor and no guest OS in between.

So, unlike VMs, containers don’t need an entire guest OS to run apps on them.

That makes things so much more lightweight and quick.

Due to this lightweight nature, the flexibility to keep things properly decoupled yet still connected, and the portability that containers offer, the DevOps culture depends a heck of a lot on containerization technology.

How do containers save a developer’s life on a team project?

Imagine you get recruited into a company and you start working on an ongoing, team-based project.

Now, that project runs on Python and it requires Python 2.7.

But your laptop has Python 3.6.

Still, you start working on the project, you add some things, and to test them, you try to run the project on your computer. Now there are two possibilities:

  • You’ll forget that the project’s Python version requirement (2.7) is different from the one on your laptop (3.6). You simply run the code, and because it finds a different version than the required one, the code might throw some unexpected errors.
  • You’ll remember the different Python version requirement, so you’ll first uninstall the Python version on your laptop (3.6) and install the one required by the project, i.e. 2.7. This will make sure the project code runs fine. But the problem is that you might have some old personal projects which run on Python 3.6, so to run them again in the near future, you would have to uninstall 2.7 and reinstall 3.6 all over again.

In both cases, the developer might (read: will) end up pulling his hair out.

So, to save his life (and hair), he can utilize the power of containerization.

So, that team-based project can be packed into a Docker image.

(Just like VMs boot from ISO files, containers are launched from “images”.)

And that image can be simply run on your computer.

Now, on running this image, a container will be created on your computer, which acts like an isolated environment containing all the required versions for running the project’s code.
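To make that concrete, here is a minimal sketch of what a Dockerfile for that hypothetical Python 2.7 project could look like. The file names requirements.txt and app.py are just placeholders for whatever the project actually uses:

    # Sketch of a Dockerfile for the hypothetical Python 2.7 team project
    # Pin the exact interpreter version the project needs
    FROM python:2.7
    WORKDIR /app
    # Install the project's dependencies inside the image (placeholder file name)
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    # Placeholder entry point of the project
    CMD ["python", "app.py"]

Your laptop’s Python 3.6 stays untouched; the 2.7 interpreter lives only inside the image.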

And all of that would be abstracted away and independent from your computer because the project will be running in an isolated container.

And even if things get messed up, for example due to messy code changes, you just stop and remove that container, fix the code, rebuild the image and run a fresh container.

See! Your host OS remains safeguarded from any version conflicts or errors associated with the project’s code requirements.

Another cool thing about this

Running heavy projects through their images is incredibly convenient, because at times a project might need, say, 50 lines of setup commands just to get it running, especially if the project is decoupled and split into microservices.

In that case too, you would just have to wrap those 50 lines of setup into the image and that’s it!

After that, you just have to run that image with a single command, and behind the scenes that image will set everything up, run those 50 lines and start the isolated container, rightfully and beautifully.
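For example, assuming the image from the earlier Dockerfile sketch, bringing the whole project up really is just a build and a run (team-project is a made-up tag name):

    $ docker build -t team-project .    # bake all the setup steps into the image
    $ docker run --rm team-project      # one command to start the isolated container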

So, why are containers wayyy cooler than VMs?

With containers, you do not need to install a guest OS; you just run the application’s image directly on the container engine.

And it hardly takes a few seconds to start a container, unlike a VM, which at times takes minutes because it has to boot an entire guest OS.

Due to their lightweight nature, resources are utilized much more efficiently. Clearly, in the case of containers, almost all the resources are utilized by the app itself and are not uselessly dissipated on unnecessary things like a guest OS.

Another reason why containers are so lightweight is that all the containers on a host share the same kernel, i.e. the host’s kernel, unlike VMs, where each VM has its own kernel as part of its guest OS.
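You can actually see this kernel sharing for yourself. On a Linux host, a quick illustrative check is to print the kernel version from inside a container; it reports the host’s kernel, because there is no separate guest kernel:

    $ uname -r                          # kernel version on the host
    $ docker run --rm alpine uname -r   # same version, printed from inside a container

(On macOS or Windows, Docker actually runs Linux containers inside a small Linux VM, so you would see that VM’s kernel instead.)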

One of the best use cases — Container Networking

So, imagine that you deployed two containers of the Node.js application.

Now, you have a new requirement in which the Node.js application needs to access a Python script.

  • Now, in the case of the VM approach, you would have to go to each and every VM running the Node.js application, start the Python application inside each of them, and then, inside every VM, the Node.js application would access that VM’s own Python application.
  • But in the container-based approach, you just have to create one container running the Python application, run it, and that’s it! With the power of container networking, containers can communicate with each other, so all the Node.js containers can talk to the single Python container (see the sketch just after this list). That saves a lot of resources and time, and is far more convenient.
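As a rough sketch of what that looks like with Docker networking (the network and image names below are made up for illustration):

    $ docker network create app-net                                  # a user-defined bridge network
    $ docker run -d --network app-net --name py-service my-python-image
    $ docker run -d --network app-net --name node-app-1 my-node-image
    $ docker run -d --network app-net --name node-app-2 my-node-image

On a user-defined network, Docker’s built-in DNS lets the Node.js containers reach the Python container simply by its name, e.g. http://py-service:5000, where the port is whatever the Python app happens to listen on.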

Also, imagine you want to edit the code of the Python application. Think of how tedious and messy that would be in the VM approach, because you would have to edit and re-run the Python application on each and every VM.

But in the case of containers, you just have to update it once, on the single Python container, and that’s it. No matter how many Node.js containers you are running, 10, 100 or even 500, you only ever update that one Python container.

That’s it!

Thanks for making it all the way here :)

I hope you understood the article and got a good idea about container virtualization.

Stay tuned for another article which is going to come tomorrow associated with some other interesting CS fundamental.

LinkedIn: https://www.linkedin.com/in/yashvardhan-kukreja-607b24142/

GitHub: https://www.github.com/yashvardhan-kukreja

Email: yash.kukreja.98@gmail.com

Adios!
