Day 28 — CS Fundamentals December — Container Orchestration

Yashvardhan Kukreja
6 min read · Dec 29, 2019

Nowadays, most applications are divided into microservices, causing them to be launched as multiple containers across multiple worker machines.

And managing a highly distributed and redundant microserviced application is a huge task in itself.

So, for that, container orchestration saves the day by providing next-level capabilities to handle and manage containers at scale.

So, this article is going to be about container orchestration and how it takes an application to the next level!

Let’s dive in!

Introduction

I hope you gave my last article about container virtualization a read, because it covered a lot of fundamental concepts revolving around containers which will be needed to understand container orchestration.

In real life, applications are highly decoupled and microserviced, meaning that instead of the entire application being one large, single codebase, it consists of numerous smaller codebases, each working on its own and linked to the others. These individual smaller codebases are called microservices.

And with the power and efficiency of containers, it started making sense to run each individual microservice in its own container. And as I said, at times, microservices need to talk to each other or exchange some data, and that is accomplished pretty conveniently with the power of container networking, as I mentioned in my last article.

But just that wasn't enough, because although the microservice way of arranging code, the so-called microservice architecture, has an insanely high number of advantages, it comes with one big disadvantage: managing it.

Managing microservices is very tedious. Debugging, monitoring, and deploying the microservices the right way: all of this is pretty painful to do by hand.

So, to sort that out, container orchestration came to the rescue.

So, what is container orchestration?

Container orchestration is the process of scheduling, running, and managing containers at scale, especially for microserviced applications, by providing capabilities for packaging, deployment, networking, and monitoring insights.

So, let’s talk about an example!

So, let’s say we have a normal 3-tier application which has 3 microservices running:

  • Front-end microservice: It will be running the HTML, CSS, and JS code.
  • Back-end microservice: It will be running the server-side code in PHP, Node.js, Django, or whatever.
  • Database microservice: It will be running the database, be it MongoDB, MySQL, Neo4j, or whatever.

So, clearly, the communication between these microservices will look like this:

The end user deals only with the front end. Behind the scenes, the front end talks to the back end. And behind the scenes, the back end, in turn, communicates with the database microservice.
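In code, this chain is nothing more than ordinary network calls between the tiers. Here is a toy sketch; the hostname, port, and route are made up purely for illustration:

```python
# Toy sketch of the call chain: the browser talks to the front end,
# the front end calls the back end, and the back end queries the
# database. "backend", port 8080, and "/api/users" are hypothetical.
import requests

def fetch_users_for_page():
    # front end -> back end, over plain HTTP inside the cluster
    return requests.get("http://backend:8080/api/users").json()

# The back end would, in turn, reach the database tier using the
# database's own driver (a MongoDB or MySQL client, say), pointed
# at a hostname like "database".
```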

So, the developer’s focus will be only on these microservices and nothing else.

But well, there’s more!

There will be an underlying container orchestration layer which will be managing the scheduling, scaling and deployment of this 3-tier containerized application.

Let’s call it the master layer.

So, Yash! As you said, the developers will only focus on the microservices and nothing else. So, who will manage this container orchestration master layer?

Good question!

So, we will have a dedicated DevOps team or System Administrators to handle the container orchestration layer and take it to the next level!

Because remember,

“System Administrators exist because even Developers need heroes!”

So, there are a lot of things involved in container orchestration, all of which are handled by the container orchestration platform:

Utility 1). Deploying the containerized application:

So, the above containerized application will be deployed to a computer or a VM; let's call it a worker node. The worker node is going to be running this containerized application, meaning it will be running the frontend, backend, and database microservices.
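The article isn't tied to any one platform, but taking Kubernetes as one popular orchestrator, a minimal deployment sketch with the official Python client (pip install kubernetes) could look like this. It assumes a reachable cluster with credentials in ~/.kube/config; the image name "frontend:1.0" and the labels are made up for illustration:

```python
# A minimal sketch using the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="frontend"),
    spec=client.V1DeploymentSpec(
        replicas=1,  # a single copy to begin with; scaled out later
        selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="frontend",
                    image="frontend:1.0",  # hypothetical image name
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)

# We only declare the desired state; the orchestrator picks a
# suitable worker node and starts the container there.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```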

Utility 2). Scaling the containerized application:

Now, in 1), the application was consuming a lot of RAM and other resources of its single worker node. So, with container orchestration, we can scale the application out to multiple worker nodes. We can have, say, 2 more worker nodes, where each worker node will be running its own identical copy of the microserviced application. So, all 3 worker nodes will be individually running their own copies of the frontend, backend, and database microservices.
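Continuing the hypothetical Kubernetes sketch from Utility 1, the copies being multiplied are container replicas, which the scheduler then spreads across the available worker nodes; scaling out is a one-line request:

```python
# Sketch, continued: ask the orchestrator for 3 replicas of the
# hypothetical "frontend" deployment; the scheduler spreads them
# across the available worker nodes.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="frontend",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```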

Utility 3). Networking outside and within the containerized application:

Now, in 2), a lot of things, like load balancing amongst worker nodes and even communication between containers and microservices, require some sort of reliable networking mechanism. So, the container orchestration platform provides numerous networking capabilities, like DNS-based service discovery, to sort this out, and it makes networking across the entire container-orchestrated application extremely convenient.

So, with this networking capability, the entire collection of front-end containers will be exposed through a single service point. Incoming traffic accesses the service point, and the service point routes it to “that” front-end container which is the most “free”.

Similarly, the entire collection of backend containers will be exposed through another single service point. Any front-end microservice accesses that service point, and the service point routes the request to “that” backend container which is the most free.

And the same goes for backend-to-database communication.

So, this is called service discovery.
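To make this concrete with the same hypothetical Kubernetes setup as above, a Service object is exactly such a service point: one stable name in front of all backend containers, with cluster DNS letting the front end find it by name. The names and ports below are assumptions:

```python
# Sketch: a Service as the single "service point" in front of all
# backend containers; traffic sent to it is balanced across the
# healthy backend replicas.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="backend"),
    spec=client.V1ServiceSpec(
        selector={"app": "backend"},  # matches all backend containers
        ports=[client.V1ServicePort(port=8080, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)

# Thanks to DNS-based service discovery, front-end code never needs
# individual container addresses; it just calls the stable name:
#   http://backend.default.svc.cluster.local:8080/
```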

So, all of this explains the real role of a container orchestration platform.

The container orchestration platform will do the following:

Load balancing: Take the incoming traffic and distribute the load appropriately amongst the worker nodes by sending traffic to the most free worker node.

Health check maintenance: The platform will perform health checks to determine which worker nodes are working properly and which ones are crashing or throwing errors. It will then load-balance traffic only amongst the healthy nodes.

Auto-scaling: If the traffic suddenly increases, the platform will automatically create new worker nodes and start running copies of the application on them, so as to distribute the increased traffic amongst a higher number of worker nodes.
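Translating these capabilities into the same hypothetical Kubernetes sketch: a readiness probe is how the platform performs its health checks (only containers that answer it receive traffic), and a HorizontalPodAutoscaler adds or removes application copies as load changes. Strictly speaking, this scales container replicas; adding whole worker nodes is the job of a separate cluster autoscaler. The endpoint path, thresholds, and names are assumptions:

```python
# Sketch: health checking and auto-scaling for the hypothetical
# "backend" deployment.
from kubernetes import client, config

config.load_kube_config()

# 1) Health checks: a readiness probe the platform polls regularly.
#    "/healthz" is an assumed health endpoint on the backend; this
#    probe would be attached to the backend container spec via
#    readiness_probe=probe when the deployment is created.
probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    period_seconds=10,
    failure_threshold=3,
)

# 2) Auto-scaling: keep average CPU around 70%, running between
#    2 and 10 copies of the backend.
autoscaling = client.AutoscalingV1Api()
hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="backend"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="backend"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```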

And last but not least!

Utility 4). Monitoring and insight capabilities:

Well, as I mentioned before, managing a microserviced application is very tedious in cases like debugging. Finding out where exactly an error happened in the microserviced application is, in itself, a big deal. So, container orchestration platforms also provide capabilities to monitor the worker nodes and the individual containers running inside them, along with their logs, so as to quickly find out which container is crashing or failing and why, and hence help in quickly debugging and resolving the issue.
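For instance, still assuming the Kubernetes-based sketch from earlier, pulling the recent logs of every backend container to spot the failing one could look like this:

```python
# Sketch: scan the last few log lines of every backend container to
# find the one that is failing. The label and namespace are
# assumptions carried over from the earlier examples.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod(namespace="default",
                                label_selector="app=backend")
for pod in pods.items:
    log = core.read_namespaced_pod_log(name=pod.metadata.name,
                                       namespace="default",
                                       tail_lines=50)
    print(f"--- {pod.metadata.name} ---")
    print(log)
```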

Also, with the power of monitoring, you can draw insights like which microservice is facing the highest load or which microservice is crashing the most, and then use these insights to make future decisions around re-defining the application's architecture to save as much cost as possible.

That’s it!

Thanks for reading all the way to the end :)

I hope you understood the article and got a good idea about container orchestration and how it works!

If you liked this article, do give it some claps :D

LinkedIn — https://www.linkedin.com/in/yashvardhan-kukreja-607b24142/

GitHub — https://www.github.com/yashvardhan-kukreja

Email — yash.kukreja.98@gmail.com

Adios!
