Entire Operating Systems Explained in One Article!

Dhruv Badaya
·Nov 18, 2022·


An Operating System is software that acts as an interface between the computer's hardware components and the user. The Operating System lets us communicate with the computer without knowing the computer's language. Every computer must have at least one Operating System to perform tasks and run other programs.

So, how do we visualize an operating system? We can say that an operating system is like a housemaid who manages everything at home.

Below are some of the examples of operating systems:

  • Windows
  • macOS
  • iPad OS
  • Android
  • Linux
  • BSD
  • Solaris

Functions of Operating System

Before we dive into the types of operating systems, we must discuss the functions of an operating system. They are as follows:

File Management: The operating system manages all the files, directories, and folders on a system. These can be manipulated via user commands, such as renaming a file, changing a directory's location, modifying a file's access permissions, and so on.

Memory Management: It is the process that is responsible for assigning/allocating memory blocks (memory spaces) to several running applications and coordinating with the computer's main memory to optimize the overall performance of a system.

Resource Management: The OS's key responsibility is to manage the several resources available to it (main memory space, I/O devices, processors) and schedule their use by the several active processes.

Process Management: The OS helps you to create, schedule, and terminate the processes which are used by CPU.

Apart from this, OS also handles security of the system. We will discuss each aspect of this later.

Types of Operating System

There are several kinds of operating systems. Let us look at a few of them that are still relevant today.

1. Batch Operating System

This kind of operating system was very popular in the 1970s, though it is rarely used now. In this kind of operating system, similar kinds of tasks were grouped (or batched) together and executed one after another. Its main purpose was to transfer control from one job to the next as soon as a job completed. However, this operating system had several disadvantages. These included:

Starvation: Suppose an operating system has six jobs to perform: A, B, C, D, E, and F. If the execution time of A is very high, the other five jobs will never be executed, or will have to wait a very long time. Hence, the other processes get starved.

No Direct Interaction: In this operating system, the computer and user have no direct interaction. So, if a program requires the user to enter two numbers mid-run, it can never complete.

Difficult to use: Computer operators must have full knowledge of batch systems. Moreover, it is very difficult to debug.

To overcome these shortcomings of the Batch Operating System, other kinds of operating systems came into being.

2. Multi-Programming Operating System

Multiprogramming is the interleaved execution of many tasks on one computer system. In multiprogramming operating systems, while one program waits for some I/O operation to finish, another program uses the CPU in the meantime. It is therefore possible for many tasks to share the CPU time.

No two processes will ever actually be allowed to use a shared resource such as the processor at the same instant. Yet the mechanism for accessing these resources is not purely sequential/serial either. In the sequential method, one program is assigned the CPU; only once that job has completed execution is the CPU assigned to another job, and so on, in a serial fashion.

Multiprogramming allows for a more efficient resource utilization by introducing a number of concepts such as preemption, prioritizing, scheduling, etc. A program, simply put, does not have to wait for the process that currently is utilizing the CPU, to execute to completion.

The following are the advantages of Mutli-Programming Operating System:

  • The CPU is utilized most of the time and does not sit idle until all processes have completed execution.
  • The system feels fast because jobs make progress concurrently (interleaved on the CPU). We can say that response time is reduced.

The only major disadvantage of this kind of operating system is the lack of user interaction: while a program executes, there cannot be any interaction between it and the user.

3. Multi-Processing Operating System

In multiprocessing, parallel computing is achieved: there is more than one processor in the system, so more than one process can execute at the same time, which increases the throughput of the system. The advantages of this kind of operating system are obvious. The first is increased reliability: processing tasks can be distributed among several processors, so if one processor fails, its task can be given to another processor for completion. The second is increased throughput: with several processors, more work can be done in less time.

There are other kinds of operating systems as well, but we will not discuss them as they are out of the scope of this article. Now, we will look at some CPU terminologies that will help us out later on.

CPU Terminologies

When dealing with CPU scheduling algorithms, we encounter some confusing terms like burst time, arrival time, exit time, waiting time, response time, turnaround time, and throughput. These parameters are used to measure the performance of a system, so let us learn about them.

Burst Time: Every process in a computer system requires some amount of time for its execution, comprising both CPU time and I/O time. The CPU time is the time taken by the CPU to execute the process, while the I/O time is the time the process spends performing I/O operations. In general, we ignore the I/O time and consider only the CPU time. So, burst time is the total time taken by the process for its execution on the CPU.

Arrival Time: As the name suggests, arrival time is the time when a process enters into the ready state and is ready for its execution.

Exit Time: Exit time is the time when a process completes its execution and exits the system.

Response Time: Response time is the time a process spends in the ready state before it gets the CPU for the first time.

Response time = Time at which the process gets the CPU the first time - Arrival time

Waiting Time: Waiting time is the total time spent by the process in the ready state waiting for CPU.

Waiting time = Turnaround time - Burst time

Response time is the time spent between the ready state and getting the CPU for the first time. But the waiting time is the total time taken by the process in the ready state.

Turnaround Time: Turnaround time is the total amount of time spent by the process from coming in the ready state for the first time to its completion.

Turnaround time = Burst time + Waiting time
Turnaround time = Exit time - Arrival time

Throughput: Throughput is a way to measure the efficiency of a CPU. It is defined as the number of processes executed by the CPU in a given amount of time. For example, say process P1 takes 3 seconds to execute, P2 takes 5 seconds, and P3 takes 10 seconds. The three processes then complete in 3 + 5 + 10 = 18 seconds, so the throughput is 3/18 = 1/6 of a process per second (about 0.167 processes per second).
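As a quick sanity check, here is a minimal Python sketch of the formulas above. The process names and times are illustrative, not taken from any real trace:

```python
# Each process: (arrival time, time it first got the CPU, exit time, burst time).
# These numbers are made up purely to exercise the formulas.
processes = {
    "P1": (0, 0, 3, 3),
    "P2": (0, 3, 8, 5),
    "P3": (0, 8, 18, 10),
}

for name, (arrival, first_cpu, exit_time, burst) in processes.items():
    turnaround = exit_time - arrival      # Turnaround = Exit - Arrival
    waiting = turnaround - burst          # Waiting = Turnaround - Burst
    response = first_cpu - arrival        # Response = First CPU time - Arrival
    print(name, turnaround, waiting, response)

# Throughput is processes completed per unit time, not an average of burst times.
total_time = max(exit_time for (_, _, exit_time, _) in processes.values())
throughput = len(processes) / total_time  # 3 processes / 18 seconds
print(round(throughput, 3))               # 0.167
```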

Now that we have a very brief idea of all these terms, we can move forward in our journey on understanding Operating System.

CPU Scheduling

CPU scheduling is the task, performed by the operating system, of deciding the order in which processes are given the CPU. There are two types of CPU scheduling: preemptive and non-preemptive.

1. Non-Pre-emptive Scheduling

In the case of non-preemptive scheduling, new processes are executed only after the current process has completed its execution. The process holds the CPU until its state changes to terminated or it moves to the waiting state. If a process is currently being executed by the CPU, it is not interrupted until it completes. Once the process has completed its execution, the processor picks the next process from the ready queue.

2. Pre-emptive Scheduling

Preemptive scheduling takes into consideration the fact that some processes may have a higher priority and hence must be executed before processes with a lower priority. In preemptive scheduling, CPU resources are allocated to a process for only a limited period of time, after which they are taken back and assigned to another process. If the preempted process has not yet completed its execution, it is placed back in the ready state, where it remains until it gets a chance to execute once again.

It is desirable for us to maximize CPU utilization and throughput and to minimize turnaround time, waiting time and response time. We will discuss different CPU scheduling algorithms used by operating systems. For simplicity, we will consider only one CPU burst per process. However, in real life scenarios, each sequence consists of several hundred CPU bursts and I/O bursts.

CPU Scheduling Algorithms

1. First-Come First-Served Scheduling

This is the simplest CPU scheduling algorithm out there. The FCFS (First Come First Served) policy is simple: the process that requests the CPU first is allocated the CPU first. It is obviously non-pre-emptive in nature. Let us look at a case. Suppose we are given five processes with arrival and burst times as follows:

Process | Arrival Time | Burst Time
A       | 3 units      | 4 units
B       | 5 units      | 3 units
C       | 0 units      | 2 units
D       | 5 units      | 1 unit
E       | 4 units      | 3 units

Now, we will draw a chart which we call the Gantt Chart. It is very easy to draw this chart. This chart tells us what task gets executed at what time.

Here, we can see that C arrives first, so it occupies the processor first. Its burst time is 2 units, so it completes at time 2. The next arrival, A, comes at time 3, so the processor is idle between 2 and 3. From 3 to 7, the processor is occupied by A. In the meantime, B, D, and E have arrived. Since E arrived first among them, it occupies the processor next, from 7 to 10. B and D arrived at the same time, so we can run either of them next: B occupies the processor from 10 to 13, and D from 13 to 14. So, the Gantt chart is as follows:

[Gantt chart: C (0-2) | idle (2-3) | A (3-7) | E (7-10) | B (10-13) | D (13-14)]

Now, the Gantt chart tells us the exit time of each process. Together with the arrival times, we can use this information to calculate the turnaround time, because turnaround time is exit time minus arrival time. We can also calculate the wait time, the time between a process's arrival and its being allocated the CPU; it equals turnaround time minus burst time.

Process | Arrival Time | Burst Time | Exit Time | Turnaround Time | Wait Time
A       | 3 units      | 4 units    | 7 units   | 4 units         | 0 units
B       | 5 units      | 3 units    | 13 units  | 8 units         | 5 units
C       | 0 units      | 2 units    | 2 units   | 2 units         | 0 units
D       | 5 units      | 1 unit     | 14 units  | 9 units         | 8 units
E       | 4 units      | 3 units    | 10 units  | 6 units         | 3 units

Now, we can calculate the average turnaround time by taking the average of all the turnaround times. Similarly, we can find the average wait time. Although the FCFS algorithm is the simplest scheduling algorithm, it has some shortcomings as well.

  • It is a Non-Preemptive CPU scheduling algorithm, so after the process has been allocated to the CPU, it will never release the CPU until it finishes executing.
  • Smaller processes have to wait for a long time for bigger processes to finish before they can start executing. Sometimes, this leads to starvation of the process. This is known as Convoy Effect.
  • The Average Waiting Time is high.
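To make the FCFS walkthrough above concrete, here is a minimal Python sketch of the algorithm. The `fcfs` function name and the tuple layout are my own choices for illustration, not a standard API:

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst).
    Returns {name: (exit_time, turnaround, waiting)}."""
    # Serve in arrival order; Python's sort is stable, so ties keep input order.
    procs = sorted(procs, key=lambda p: p[1])
    time, results = 0, {}
    for name, arrival, burst in procs:
        time = max(time, arrival)   # CPU may sit idle until the process arrives
        time += burst               # run to completion (non-preemptive)
        turnaround = time - arrival
        results[name] = (time, turnaround, turnaround - burst)
    return results

# The example from the table above:
print(fcfs([("A", 3, 4), ("B", 5, 3), ("C", 0, 2), ("D", 5, 1), ("E", 4, 3)]))
```

Running this reproduces the exit, turnaround, and wait times computed in the table.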

2. Shortest Job First Scheduling

Out of all the available processes (those that have arrived), the CPU is allotted to the process with the smallest burst time requirement. If two processes have the same burst time, we use the First Come First Served algorithm to break the tie. This algorithm can be pre-emptive as well as non-pre-emptive.

In the non-pre-emptive approach, this algorithm works as you would expect: of all the processes that have arrived, it allocates the CPU to the one with the smallest burst time, and once that process is done, it allocates the CPU to the next such process. In the pre-emptive approach, however, the current process is compared against new processes as they arrive. If a newly arrived process has a burst time smaller than the remaining time of the current process, the CPU is re-allocated to the new process and the current process is sent back to the ready queue. For this reason, pre-emptive SJF scheduling is also called shortest remaining time first scheduling.

Suppose we are given five processes with arrival and burst time as follows:

Process | Arrival Time | Burst Time
A       | 3 units      | 1 unit
B       | 1 unit       | 4 units
C       | 4 units      | 2 units
D       | 0 units      | 6 units
E       | 2 units      | 3 units

So, let us draw the Gantt chart for this first. At time 0, we only have one process, D, so we execute it as we have no other option. It runs until time 6, by which point all the other processes have arrived. So, we execute them one by one in order of their burst time. From the table, it is clear that the order should be: A -> C -> E -> B.

[Gantt chart: D (0-6) | A (6-7) | C (7-9) | E (9-12) | B (12-16)]

Now, let us find the turnaround time and the waiting time. We know that turnaround time is nothing but the exit time minus arrival time and waiting time is nothing but turnaround time minus burst time. So, let us find turnaround and waiting time.

Process | Arrival Time | Burst Time | Exit Time | Turnaround Time | Wait Time
A       | 3 units      | 1 unit     | 7 units   | 4 units         | 3 units
B       | 1 unit       | 4 units    | 16 units  | 15 units        | 11 units
C       | 4 units      | 2 units    | 9 units   | 5 units         | 3 units
D       | 0 units      | 6 units    | 6 units   | 6 units         | 0 units
E       | 2 units      | 3 units    | 12 units  | 10 units        | 7 units
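The non-pre-emptive walkthrough can be sketched in a few lines of Python. The `sjf` function name and tuple layout are my own, chosen to mirror the FCFS discussion:

```python
def sjf(procs):
    """Non-preemptive Shortest Job First.
    procs: list of (name, arrival, burst).
    Returns {name: (exit_time, turnaround, waiting)}."""
    time, results = 0, {}
    remaining = list(procs)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # Pick the smallest burst time; earlier arrival (FCFS) breaks ties.
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        remaining.remove((name, arrival, burst))
        time += burst                       # run to completion
        results[name] = (time, time - arrival, time - arrival - burst)
    return results

# The example from the table above:
print(sjf([("A", 3, 1), ("B", 1, 4), ("C", 4, 2), ("D", 0, 6), ("E", 2, 3)]))
```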

Now, what if we were using a pre-emptive approach, instead of a non-pre-emptive approach? Let us solve the same problem using the pre-emptive approach of the SJF Algorithm. The burst time and arrival time of five processes is given below:

Process | Arrival Time | Burst Time
A       | 3 units      | 1 unit
B       | 1 unit       | 4 units
C       | 4 units      | 2 units
D       | 0 units      | 6 units
E       | 2 units      | 3 units

Let us draw a Gantt chart for this.

The first process to arrive is obviously D, and it starts executing. At time 1, however, B arrives. At this point, the remaining time of D is 5 units and the burst time of B is 4 units, so the CPU is allocated to B and D goes back to the ready queue. While B is executing, E arrives at time 2. The remaining time of B and the burst time of E are equal, so the CPU does not switch and B continues. At time 3, A arrives; since the burst time of A is 1 and the remaining time of B is 2, the CPU is allocated to A. A takes one unit to complete, during which C arrives with a burst time of 2. Now we have D with 5 units remaining, B with 2 units remaining, E with a burst time of 3, and C with a burst time of 2. Since all processes have arrived, the procedure from here on is just like non-pre-emptive scheduling: from the remaining times (breaking the B/C tie in favour of B, which arrived earlier), the order is B -> C -> E -> D.

[Gantt chart: D (0-1) | B (1-3) | A (3-4) | B (4-6) | C (6-8) | E (8-11) | D (11-16)]

We can note here that no matter which approach we use, pre-emptive or non-pre-emptive, all processes are completed by the same time: 16 units in both cases. This makes sense, since the total burst time is fixed and scheduling only changes the order of execution. Now, let us calculate the waiting and turnaround times quickly.

Process | Arrival Time | Burst Time | Exit Time | Turnaround Time | Wait Time
A       | 3 units      | 1 unit     | 4 units   | 1 unit          | 0 units
B       | 1 unit       | 4 units    | 6 units   | 5 units         | 1 unit
C       | 4 units      | 2 units    | 8 units   | 4 units         | 2 units
D       | 0 units      | 6 units    | 16 units  | 16 units        | 10 units
E       | 2 units      | 3 units    | 11 units  | 9 units         | 6 units

It is interesting to note that the pre-emptive approach of the SJF algorithm guarantees minimum average waiting time: it is a greedy algorithm that, at every instant, runs the job that can finish soonest. For this reason, some people consider pre-emptive SJF the optimal scheduling approach.
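The pre-emptive variant can be simulated one time unit at a time. This is a minimal sketch; the `srtf` name and the unit-step simulation style are my own choices, and the tie-break (earlier arrival wins) matches the walkthrough above:

```python
def srtf(procs):
    """Pre-emptive SJF (shortest remaining time first), simulated in 1-unit steps.
    procs: list of (name, arrival, burst).
    Returns {name: (exit_time, turnaround, waiting)}."""
    arrival = {name: a for name, a, _ in procs}
    burst = {name: b for name, _, b in procs}
    remaining = dict(burst)
    time, results = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                 # CPU idle until something arrives
            continue
        # Run the ready process with the least remaining time for one unit;
        # the earlier arrival wins ties, so the running process keeps the CPU.
        pick = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[pick] -= 1
        time += 1
        if remaining[pick] == 0:      # process finished: record its metrics
            del remaining[pick]
            tat = time - arrival[pick]
            results[pick] = (time, tat, tat - burst[pick])
    return results

# The example from the table above:
print(srtf([("A", 3, 1), ("B", 1, 4), ("C", 4, 2), ("D", 0, 6), ("E", 2, 3)]))
```

The output matches the pre-emptive table: for instance, A finishes at time 4 with zero waiting time, while D, the longest job, finishes last at time 16.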

The following are the advantages of SJF algorithm:

  • Its pre-emptive approach guarantees minimal waiting time.

The following are the disadvantages of SJF algorithm:

  • This algorithm cannot be implemented exactly, because there is no way to know the burst time of a process in advance. This might come as a shock to you, but in a practical scenario we can only predict a process's burst time (for example, from its past behaviour), and even then with considerable uncertainty. In its pure form, SJF is a theoretical algorithm.
  • SJF may cause starvation if shorter processes keep coming.

3. Priority Scheduling

Priority Scheduling is a method of scheduling processes based on priority. In this algorithm, the scheduler selects tasks according to their priority: processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can depend on memory requirements, time requirements, and so on.

Some systems use low numbers to represent higher priority, while others use high numbers for higher priority. Here, we will treat low numbers as high priority. Priority scheduling can be pre-emptive or non-pre-emptive. A pre-emptive priority scheduler will preempt the CPU if the priority of a newly arrived process is higher than that of the currently running process. A non-pre-emptive priority scheduler will simply put the new process at the head of the ready queue and wait for the current process to finish.

A major problem with priority scheduling is starvation: it can leave some low-priority process on hold indefinitely. A famous example of this is the IBM 7094 at MIT. When the machine was shut down in 1973, a low-priority process was found that had been submitted in 1967 and had never been run. A solution to this problem is aging: gradually increasing the priority of processes that have waited in the system for a long time. For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes. This ensures that even a process of the lowest priority will eventually be executed. Let us look at an example of priority scheduling.

The burst time and arrival time of five processes is given below:

Process | Arrival Time | Burst Time
A       | 0 units      | 4 units
B       | 1 unit       | 3 units
C       | 2 units      | 1 unit
D       | 3 units      | 5 units
E       | 4 units      | 2 units
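The table above gives no priority column, so as an illustration here is a minimal non-pre-emptive priority scheduling sketch in Python with made-up priorities (lower number = higher priority, as in the convention above). The `priority_schedule` name and the priority values are hypothetical:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling.
    procs: list of (name, arrival, burst, priority), lower priority number = higher priority.
    Returns {name: (exit_time, turnaround, waiting)}."""
    time, results = 0, {}
    remaining = list(procs)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # Highest priority = lowest number; earlier arrival (FCFS) breaks ties.
        p = min(ready, key=lambda p: (p[3], p[1]))
        remaining.remove(p)
        name, arrival, burst, _ = p
        time += burst                       # run to completion
        results[name] = (time, time - arrival, time - arrival - burst)
    return results

# Arrival and burst times from the table; the priorities (last field) are made up.
procs = [("A", 0, 4, 2), ("B", 1, 3, 3), ("C", 2, 1, 1),
         ("D", 3, 5, 4), ("E", 4, 2, 5)]
print(priority_schedule(procs))
```

With these assumed priorities, A runs first (it is alone at time 0), then C preempts nothing but is chosen next for its high priority, followed by B, D, and E.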

4. Round Robin Scheduling

Another scheduling algorithm (and the last we will discuss here) is the Round Robin algorithm. It is especially designed for time-sharing systems and is a pre-emptive scheduling approach. It treats the ready queue as a circular queue: the CPU scheduler goes around the ready queue, allocating the CPU to each process for a small interval of time known as the time quantum. No process can hold the CPU for more than one quantum at a stretch. When a process is executing under Round Robin, one of two things might happen:

  • The process may have a CPU burst of less than 1 quantum. In this case, the process will release the CPU voluntarily.
  • If the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and it will cause an interrupt to the operating system. A context switch will be executed and the process will be put at the tail of ready queue. The CPU scheduler will then select the next process in the ready queue.

In Round Robin scheduling, no process is allocated the CPU for more than 1 time quantum in a row, unless it is the only runnable process. In modern systems, this time quantum typically ranges from 10 to 100 milliseconds. If the time quantum is too large, Round Robin degrades to the FCFS algorithm, and the average waiting time under this algorithm can be quite high. Let us take an example of the Round Robin algorithm. The burst time and arrival time of five processes is given below:

Process | Arrival Time | Burst Time
A       | 0 units      | 5 units
B       | 1 unit       | 3 units
C       | 2 units      | 1 unit
D       | 3 units      | 2 units
E       | 4 units      | 3 units

It is given that the time quantum for this algorithm is 2 units. Now, let us draw the Gantt chart for this.

The first process that enters is A. It runs for 2 units. In this time, B and C enter. Now the queue is [B, C, A], and A has 3 units remaining.

Next, B runs for 2 units. In this time, D and E enter. Now the queue is [C, A, D, E, B], and B has 1 unit remaining. From here, the scheduler just keeps cycling through the queue.

C runs for 1 unit and finishes, so the queue is now [A, D, E, B]. A runs for 2 units and has 1 unit remaining; the queue is now [D, E, B, A]. D runs for 2 units and finishes; the queue is now [E, B, A]. E runs for 2 units and has 1 unit remaining; the queue is now [B, A, E]. B runs for 1 unit and finishes. A runs for 1 unit and also finishes. Finally, E runs for 1 unit, which completes all the processes at time 14. So now, we can draw the Gantt chart.

[Gantt chart: A (0-2) | B (2-4) | C (4-5) | A (5-7) | D (7-9) | E (9-11) | B (11-12) | A (12-13) | E (13-14)]

Now, let us find the turnaround and waiting time for the process. Turnaround time is obviously exit time minus arrival time and wait time is turnaround time minus burst time.

Process | Arrival Time | Burst Time | Exit Time | Turnaround Time | Wait Time
A       | 0 units      | 5 units    | 13 units  | 13 units        | 8 units
B       | 1 unit       | 3 units    | 12 units  | 11 units        | 8 units
C       | 2 units      | 1 unit     | 5 units   | 3 units         | 2 units
D       | 3 units      | 2 units    | 9 units   | 6 units         | 4 units
E       | 4 units      | 3 units    | 14 units  | 10 units        | 7 units
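The trace above can be reproduced with a short Python sketch of Round Robin. The `round_robin` helper and its tuple layout are my own; note how arrivals during a time slice enter the queue before the preempted process, exactly as in the walkthrough:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst).
    Returns {name: (exit_time, turnaround)}."""
    remaining = {n: b for n, a, b in procs}
    arrival = {n: a for n, a, b in procs}
    pending = deque(sorted(procs, key=lambda p: p[1]))  # not yet arrived
    queue, time, results = deque(), 0, {}

    def admit(now):
        # Move every process that has arrived by `now` into the ready queue.
        while pending and pending[0][1] <= now:
            queue.append(pending.popleft()[0])

    admit(0)
    while queue or pending:
        if not queue:                      # CPU idle until the next arrival
            time = pending[0][1]
            admit(time)
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)                        # arrivals during this slice queue first
        if remaining[name] > 0:
            queue.append(name)             # preempted: go to the tail of the queue
        else:
            results[name] = (time, time - arrival[name])
    return results

# The example from the table above, with a quantum of 2 units:
print(round_robin([("A", 0, 5), ("B", 1, 3), ("C", 2, 1),
                   ("D", 3, 2), ("E", 4, 3)], quantum=2))
```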

The following are the advantages of Round Robin algorithm:

  • Every process gets an equal share of the CPU.
  • RR is cyclic in nature, so there is no starvation.

The following are disadvantages of Round Robin algorithm:

  • Setting the quantum too short increases the overhead and lowers the CPU efficiency, but setting it too long may cause a poor response to short processes.
  • The average waiting time under the RR policy is often long.
  • If time quantum is very high then RR degrades to FCFS.

This was all about various CPU Scheduling algorithms.

 