Day 10 — CS Fundamentals December — About Operating Systems — Deadlocks

Deadlock is another important and fundamental topic in Operating Systems.

Without further ado, I will get started with the article.

Let’s dive in!


So, it is the role of the CPU to take up processes from the ready queue and provide them the resources they need for execution. The CPU does this with the intention of making sure that no resource stays free at any time, so as to ensure maximum utilization and hence, maximum throughput per unit time.

But at times, there might be a situation where, say, there are two processes in the ready queue: Process 1 and Process 2.

Now, process 1 requires resource 1 and then, resource 2 for completion.

And, process 2 requires resource 2 and then, resource 1 for completion.

So, when the CPU sees the ready queue, it says,

“Well, resources 1 and 2 are free, so let me call and pull process 1, which needs control of resource 1”

And it pulls process 1 to take control of resource 1 and start its processing.

So, now only resource 2 is free as resource 1 is occupied by process 1.

Now, CPU again says,

“Okay, so only resource 2 is free, and process 2 needs resource 2 to start its processing, so I will call and pull it as well”

And it pulls process 2 to take control of resource 2 and start its processing.

So, CPU does not have any free resources.

Now, remember what I said. Process 1 requires resource 1 and then, resource 2 to complete its processing.

So, there comes a point where process 1 starts requiring resource 2 but cannot get it because process 2 already has control of it. Hence, it has to wait for process 2 to complete so that resource 2 can be freed up for it (process 1).

“So, Yash! It’s cool right? Process 1 just has to wait for process 2 to complete. Simple!”

Actually not! Remember, I also mentioned that process 2 requires resource 2 and then, resource 1 to complete its processing.

So, there comes another point where process 2 starts requiring resource 1 for its completion but can’t get it because process 1 has it on hold. Hence, it has to wait for process 1 to complete so that resource 1 can be freed up for it (process 2).

“Haa! So, process 2 has to wait for process 1 to comple….. Ohh wait!!!”

Yep! The quote above depicts the exact condition of the CPU at that moment.

So, now, processes 1 and 2 are stuck: neither can complete because each requires a resource held by the other one.

And hence, they are stuck forever, keeping resources 1 and 2 occupied forever.
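The two-process story above can be sketched with Python threads. This is a minimal sketch with names of my own choosing (`resource_1`, `resource_2`, `process`); a timeout is used on the second acquire so the demo terminates instead of actually hanging forever, and a barrier makes sure both processes have grabbed their first resource before either asks for its second one:

```python
import threading

resource_1 = threading.Lock()
resource_2 = threading.Lock()
barrier = threading.Barrier(2)   # sync point for the two "processes"
results = {}

def process(name, first, second):
    with first:                              # take control of the first resource
        barrier.wait()                       # both now hold one resource each
        # try to take the other resource; time out instead of waiting forever
        got_it = second.acquire(timeout=0.5)
        results[name] = got_it               # record the outcome before releasing
        barrier.wait()                       # wait until both have recorded
        if got_it:
            second.release()

# Process 1 wants resource 1 then resource 2; process 2 wants them in the opposite order.
t1 = threading.Thread(target=process, args=("P1", resource_1, resource_2))
t2 = threading.Thread(target=process, args=("P2", resource_2, resource_1))
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # both False: each process timed out waiting for the other's resource
```

Without the timeout, both `acquire` calls would block forever, which is exactly the deadlock described above.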

So, what is deadlock, basically?

Deadlock is a situation where a set of processes gets stuck because each process is holding a resource while waiting for a resource held by some other process in the set, so none of them can ever complete.

But when exactly does a deadlock happen, logically?

So, there are four conditions which need to be satisfied simultaneously for a deadlock to happen:

  • Mutual Exclusion: No resource is shareable. A resource can be used by only one process at a time.
  • No Preemption: A resource cannot be taken away from a process until the process completes or terminates itself.
  • Hold and Wait: Every process is holding at least one resource while waiting for some other resource.
  • Circular Wait: The processes are waiting for each other’s resources in a circular chain.

So, breaking any one of the above conditions can break the deadlock.

For example (breaking “no preemption”), you can add preemption, where the CPU can, at any time, pause a process, provide its resources to some other process in need, wait for that process’s completion and then get back to completing the older paused process. See, no deadlock!

Another example (breaking “mutual exclusion”): add shareability to the resources so that one process doesn’t have to wait for another process’s completion to get the resources it needs.
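A third classic fix, shown here as a sketch, breaks the circular-wait condition: impose a single global order in which every process must acquire resources. If everyone grabs resource 1 before resource 2, the circular “you have what I need, I have what you need” situation can never form (the names below are again my own, mirroring the earlier scenario):

```python
import threading

resource_1 = threading.Lock()
resource_2 = threading.Lock()
results = {}

# Global rule: every process must acquire resource_1 BEFORE resource_2,
# regardless of which one it logically "needs first".
def process(name):
    with resource_1:
        with resource_2:
            results[name] = "done"   # both resources held; do the work

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # both processes complete — no circular wait is possible
```

Whichever process gets resource 1 first is guaranteed to also get resource 2, finish, and release both, so the other process simply runs next.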

Ways of handling deadlock

There are three ways of handling deadlocks:

  • Deadlock prevention/avoidance: The idea here is to make sure that the system never even reaches the state of deadlock in the first place. Prevention does this by making sure that the above four deadlock conditions are never satisfied simultaneously; avoidance does it by checking, before granting a resource request, that the system will remain in a safe state. The most famous deadlock avoidance algorithm is Banker’s algorithm.
  • Deadlock detection and recovery: The idea here is to allow the system to have a deadlock and then perform recovery by adding preemption. So, the deadlock will happen, the CPU will detect it, then pause a process (hence, adding preemption), provide the paused process’s resources to another process in need and recover from the deadlock.
  • Deadlock ignorance: This idea is next level XD. Here, if the problem of deadlock is very rare, the CPU can just ignore the deadlock problem and say, “It’s all an illusion anyway!”. And whenever a deadlock occurs, you just have to restart the system. That’s why this idea (also known as the Ostrich algorithm) is acceptable only when deadlocks are very, very rare in the system; otherwise the user will flip out restarting the system numerous times.
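The detection step above is often modeled as finding a cycle in a “wait-for” graph, where an edge P → Q means process P is waiting for a resource held by Q. Here is a minimal sketch of that idea (the function name and graph format are my own, not from any particular OS):

```python
# Deadlock detection as cycle detection in a "wait-for" graph.
# wait_for maps each process to the list of processes it is waiting on.
def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle (DFS with coloring)."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / fully explored
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:           # back edge -> cycle -> deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(color))

# P1 waits for P2 and P2 waits for P1 — the two-process deadlock from earlier:
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True
# P1 waits for P2, but P2 is not waiting on anyone — no deadlock:
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False
```

Once a cycle is found, recovery means picking a victim process on the cycle, preempting its resources, and handing them to the waiting process, exactly as the bullet above describes.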

That’s it!

Thanks for reading till here :)

I hope you understood the article and got a fairly basic idea of the WHATs, HOWs and WHYs of deadlocks.

Stay tuned for another article which is going to come tomorrow associated with some other interesting CS fundamental.

Site Reliability Engineer @ Red Hat | ex-Grofers | Contributing to {Cloud, Kubernetes}-native OSS