Operating System Interview Questions

Updated 5/11/2026

An Operating System (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs. It acts as an intermediary between users and the computer hardware.

This comprehensive guide covers essential OS interview questions from basic concepts to advanced topics including process management, memory management, file systems, deadlocks, and inter-process communication.

Why Operating Systems Are Important for Placements

Having a strong understanding of Operating System concepts is crucial for technical interviews, particularly for roles related to system-level programming, software engineering, and IT infrastructure.

  • Core Knowledge — OS fundamentals are essential for understanding how computer systems function
  • Real-World Relevance — OS knowledge is crucial for handling memory, processes, and scheduling in applications
  • Performance Tuning — Understanding OS concepts helps optimize system performance
  • Problem-Solving Skills — OS questions test analytical skills in resource allocation and synchronization
  • Industry Demand — Professionals proficient in OS concepts are in high demand across various industries

Basic Operating System Concepts

1. What is an Operating System?

An Operating System (OS) is system software that acts as an intermediary between computer hardware and the user. It manages hardware resources and provides a platform for application programs to run.

Key characteristics:

  • First program loaded into memory when a computer starts
  • Remains in memory throughout the computer's operation
  • Controls all other programs running on the computer
  • Examples: Windows 11, macOS, Linux, Android, iOS

2. Main Functions of an Operating System

  • Process Management — Creates, schedules, and terminates processes; allocates CPU time
  • Memory Management — Tracks RAM usage, allocates/deallocates memory, implements virtual memory
  • File System Management — Organizes data into files and directories; handles permissions
  • Device Management — Controls communication between software and hardware through drivers
  • Security & Access Control — Enforces authentication, authorization, and auditing
  • Networking — Provides TCP/IP networking stack for communication
  • User Interface — Offers CLI or GUI for user interaction

3. What is a Kernel?

The kernel is the core component of an OS; it is loaded first at boot and remains in memory for as long as the system runs. It acts as a bridge between software applications and hardware.

Core responsibilities:

  • CPU Scheduling — decides which process gets CPU and for how long
  • Memory Management — allocates and deallocates physical and virtual memory
  • Device Management — interacts with hardware through drivers
  • System Calls — provides safe interface for user programs
  • Interrupt Handling — responds to hardware and software interrupts

Types: Monolithic (Linux), Microkernel (QNX), Hybrid (Windows NT, macOS)

4. User Mode vs. Kernel Mode

Modern CPUs operate in two privilege levels to protect the OS from misbehaving applications.

| Aspect | User Mode | Kernel Mode |
| --- | --- | --- |
| Privilege Level | Low (restricted) | High (unrestricted) |
| Hardware Access | Restricted | Unrestricted |
| Memory Access | Own process only | Full system |
| Crash Impact | Application only | Entire system |

Mode Switching: When a user program needs an OS service, it issues a system call → CPU switches to kernel mode → executes service → returns to user mode.
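
As a rough illustration in C (assuming a POSIX system and the standard write() wrapper), the sketch below performs a single system call: the program runs in user mode, write() traps into kernel mode, and control returns once the kernel has finished the I/O.

```c
/* Minimal sketch: a user-mode program requesting a kernel service.
   The write() wrapper issues a system call, the CPU switches to kernel
   mode, the kernel performs the output, and control returns to user mode. */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* file descriptor 1 = standard output; the kernel does the actual I/O */
    write(1, msg, sizeof(msg) - 1);
    return 0;
}
```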


Process Management

5. What is a Process?

A process is a program in execution — an active instance of a program that has been loaded into memory and is being executed by the CPU.

A process includes (illustrated by the C sketch after this list):

  • Program code (Text section)
  • Program Counter — address of next instruction
  • CPU Registers — current working values
  • Process Stack — temporary data (function parameters, local variables)
  • Heap — dynamically allocated memory
  • Data Section — global and static variables
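
A minimal C sketch, assuming a typical Unix/Linux layout, that prints one address from each region; exact values differ per OS and per run because of address space layout randomization.

```c
/* Illustrative sketch: printing addresses from the different regions
   of a process. Exact addresses vary by OS and by run (ASLR). */
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;            /* data section (initialized globals) */
static int static_var;          /* data/BSS section (static variables) */

int main(void) {
    int local_var = 0;                      /* stack */
    int *heap_var = malloc(sizeof(int));    /* heap */

    printf("code  (text): %p\n", (void *)main);
    printf("data        : %p %p\n", (void *)&global_var, (void *)&static_var);
    printf("heap        : %p\n", (void *)heap_var);
    printf("stack       : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}
```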

6. Different States of a Process

New — Process has just been created; resources being allocated

Ready — Process is loaded in memory, waiting for CPU time

Running — Process is currently executing on the CPU

Waiting (Blocked) — Process cannot continue until an event occurs (I/O completion)

Terminated — Process has finished execution; resources being reclaimed

7. What is a Process Control Block (PCB)?

A PCB is a data structure maintained by the OS that contains all information needed to manage a specific process. Each process has exactly one PCB.

Information stored (a simplified C struct sketch follows the list):

  • Process ID (PID) and parent PID
  • Process state (New, Ready, Running, Waiting, Terminated)
  • CPU registers and Program Counter
  • CPU scheduling information (priority, CPU time used)
  • Memory management information (page tables, base/limit registers)
  • I/O status (open files, I/O devices)
  • Accounting information (CPU time, timestamps)
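
A hypothetical, heavily simplified PCB written as a C struct; the field names are illustrative only, and real kernels (Linux's task_struct, for example) track far more state.

```c
/* Hypothetical, heavily simplified PCB layout for illustration.
   Real kernels keep hundreds of fields per process. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process ID */
    int             ppid;            /* parent process ID */
    enum proc_state state;           /* current scheduling state */
    uint64_t        program_counter; /* saved PC for context switches */
    uint64_t        registers[16];   /* saved general-purpose registers */
    int             priority;        /* scheduling information */
    uint64_t        cpu_time_used;   /* accounting information */
    void           *page_table;      /* memory-management information */
    int             open_files[16];  /* I/O status: open file descriptors */
};
```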

8. What is Context Switching?

Context switching is the process of saving the state of a currently running process and restoring the state of a previously paused process so that the CPU can switch between processes.

Steps:

  • Save current process's CPU registers, PC, stack pointer into its PCB
  • Update PCB state (Running → Ready or Running → Waiting)
  • Scheduler selects next process from ready queue
  • Restore selected process's saved state from its PCB into CPU registers
  • CPU continues the new process from exactly where it stopped

Overhead: Context switches do no useful work for the processes themselves; a typical switch takes on the order of 1–10 microseconds, and TLB flushes and cold caches add further indirect cost afterward.

9. Difference Between Process and Thread

| Feature | Process | Thread |
| --- | --- | --- |
| Definition | Independent program in execution | Unit of execution within a process |
| Address Space | Own private address space | Shares address space with other threads |
| Creation Overhead | High (new address space, PCB) | Low (just stack + registers) |
| Communication | IPC (pipes, shared memory, sockets) | Direct shared memory access |
| Isolation | Crash doesn't affect others | Crash can kill entire process |
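
A short POSIX C sketch contrasting the two creation paths: fork() builds a new process with its own address space, while pthread_create() starts a thread inside the current one (compile with -pthread; error handling omitted for brevity).

```c
/* Rough sketch contrasting process and thread creation (POSIX). */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
#include <pthread.h>

void *thread_fn(void *arg) {
    (void)arg;
    printf("thread: shares the address space of its parent process\n");
    return NULL;
}

int main(void) {
    pid_t pid = fork();                 /* new process: separate address space */
    if (pid == 0) {
        printf("child process: pid=%d\n", getpid());
        _exit(0);
    }
    wait(NULL);                         /* reap the child */

    pthread_t tid;                      /* new thread: same address space */
    pthread_create(&tid, NULL, thread_fn, NULL);
    pthread_join(tid, NULL);
    return 0;
}
```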

10. What is Process Synchronization?

Process synchronization is the coordination of concurrent processes that share resources to prevent race conditions and ensure data consistency.

Race Condition: When multiple processes access shared data concurrently, the outcome depends on execution order, leading to unpredictable results.

Critical Section Problem requires:

  • Mutual Exclusion — only one process in critical section at a time
  • Progress — if no process is in the critical section, the choice of which waiting process enters next is made in finite time
  • Bounded Waiting — there is a finite bound on how many times other processes can enter before a waiting process gets its turn

Mechanisms: Mutex Locks, Semaphores, Monitors, Condition Variables, Spinlocks
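
A minimal sketch of mutual exclusion with a pthread mutex; without the lock, the two threads race on the shared counter and updates are lost (compile with -pthread).

```c
/* Minimal mutual-exclusion sketch with a pthread mutex. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter critical section */
        counter++;                      /* read-modify-write, now protected */
        pthread_mutex_unlock(&lock);    /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```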


CPU Scheduling

11. What is CPU Scheduling?

CPU scheduling is the activity of deciding which process from the ready queue should be given the CPU next and for how long. It's central to achieving efficient CPU utilization and system responsiveness.

Scheduling Queues:

  • Job Queue — all processes in the system
  • Ready Queue — processes loaded in memory, ready to run
  • I/O Wait Queues — separate queue for each I/O device

12. Preemptive vs. Non-Preemptive Scheduling

Non-Preemptive: Once a process gets the CPU, it keeps it until it voluntarily releases it. Simpler but risks CPU monopolization.

Preemptive: OS can forcibly take the CPU from a running process based on time quantum expiry or higher-priority process arrival. Ensures fairness and responsiveness but more complex.

13. Common Scheduling Algorithms

FCFS (First Come First Served): Processes are executed in arrival order. Simple, but it suffers from the convoy effect (short jobs stuck waiting behind a long one), as the worked example below shows.
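
A small worked example in C (burst times are made up): with all processes arriving at time 0, each one waits for the total burst time of those ahead of it; reordering so the short jobs run first shows how much the convoy effect costs.

```c
/* Illustrative FCFS waiting-time calculation for processes arriving at t=0. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};           /* CPU bursts in FCFS (arrival) order */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;             /* waiting time of process i */
        wait += burst[i];               /* the next process waits this much longer */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    /* order {24,3,3} -> 17.00; order {3,3,24} -> 3.00 (the convoy effect) */
    return 0;
}
```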

SJF (Shortest Job First): Process with shortest CPU burst scheduled first. Minimizes average waiting time but can starve long processes.

Round Robin (RR): Each process gets a fixed time quantum. Preemptive and fair. Performance depends on quantum size.

Priority Scheduling: Highest-priority process runs first. Can be preemptive or non-preemptive. Risk of starvation (solved with aging).


Deadlocks

14. What is a Deadlock?

A deadlock is a state where a set of processes are permanently blocked — each waiting for a resource held by another in the set, with no process able to proceed.

Example: Process A holds Resource 1 and waits for Resource 2. Process B holds Resource 2 and waits for Resource 1. Neither can proceed.
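
The same scenario sketched with two pthread mutexes: each thread takes the locks in the opposite order, so once both have grabbed their first lock the program hangs forever. Acquiring locks in one global order would break the circular wait (compile with -pthread).

```c
/* Deadlock sketch: opposite lock ordering creates a circular wait. */
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *process_a(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r1);    /* holds Resource 1 */
    sleep(1);                   /* give B time to grab Resource 2 */
    pthread_mutex_lock(&r2);    /* waits for Resource 2 -> blocks forever */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *process_b(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r2);    /* holds Resource 2 */
    sleep(1);
    pthread_mutex_lock(&r1);    /* waits for Resource 1 -> blocks forever */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);      /* never returns: the threads are deadlocked */
    pthread_join(b, NULL);
    return 0;
}
```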

15. Four Necessary Conditions for Deadlock (Coffman Conditions)

For a deadlock to occur, all four conditions must hold simultaneously:

  • Mutual Exclusion — at least one resource is non-shareable
  • Hold and Wait — a process holds resources while waiting for more
  • No Preemption — resources cannot be forcibly taken; released only voluntarily
  • Circular Wait — a circular chain of processes waiting for resources

Eliminating even one condition prevents deadlock.

16. Deadlock Handling Strategies

1. Prevention — Eliminate one Coffman condition by system design

2. Avoidance — Use the Banker's Algorithm to check dynamically whether granting each request keeps the system in a safe state

3. Detection + Recovery — Allow deadlocks; run detection algorithm periodically; recover by terminating processes or preempting resources

4. Ignorance (Ostrich Algorithm) — Pretend deadlocks don't exist. Used by most general-purpose OSes (Linux, Windows)


Memory Management

17. What is Memory Management?

Memory management is the OS function responsible for tracking, allocating, and reclaiming physical memory (RAM) across all running processes.

Goals:

  • Efficiency — maximize RAM utilization; minimize wasted space
  • Isolation — each process sees only its own memory
  • Transparency — each process behaves as if it has large, contiguous address space
  • Support virtual memory — programs larger than RAM can run

18. Physical Address vs. Logical Address

Logical Address (Virtual Address) — generated by the CPU during program execution. The program works only with these addresses, which start from 0 for each process.

Physical Address — actual location in RAM. Used by memory hardware. Determined by OS and MMU.

Translation: MMU hardware maps logical → physical using page tables or segment tables.
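
A simplified view of the translation arithmetic, assuming 4 KB pages and a made-up page-to-frame mapping; real MMUs also walk multi-level page tables and check permission bits.

```c
/* Splitting a logical address into page number and offset (4 KB pages). */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* 4 KB pages -> 12 offset bits */

int main(void) {
    uint32_t logical = 0x00003ABC;              /* example logical address */
    uint32_t page_number = logical / PAGE_SIZE; /* index into the page table */
    uint32_t offset      = logical % PAGE_SIZE; /* unchanged by translation */

    /* suppose the page table maps this page to frame 7 */
    uint32_t frame = 7;
    uint32_t physical = frame * PAGE_SIZE + offset;

    printf("page=%u offset=0x%X physical=0x%X\n", page_number, offset, physical);
    return 0;
}
```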

19. Explain the Concept of Paging

Paging is a memory management scheme that eliminates the need for contiguous physical memory allocation by dividing both physical and logical memory into fixed-size blocks.

  • Physical memory divided into fixed-size frames
  • Logical address space divided into same-size pages
  • A page table maps each page number to a physical frame number

Benefits: No external fragmentation, supports virtual memory, enables process isolation, allows memory sharing

20. What is Virtual Memory?

Virtual memory gives each process the illusion of having a large, contiguous, private address space — even if physical RAM is smaller or fragmented.

How it works:

  • Each process has a virtual address space
  • OS + MMU maintain page tables mapping virtual pages to physical frames
  • Pages not currently in RAM are stored on disk (swap space)
  • Accessing an invalid page triggers a page fault → OS loads from disk

Benefits: Programs larger than RAM can execute, more processes fit in memory, process isolation, memory sharing

21. What is a Page Fault?

A page fault is a hardware exception that occurs when a process accesses a virtual memory page that is not currently in physical RAM.

Page Fault Handling:

  • CPU detects invalid page table entry → triggers page fault exception
  • OS checks if address is valid (if not → segmentation fault)
  • OS finds a free physical frame (run page replacement if needed)
  • OS reads needed page from disk into the frame
  • Update page table entry: set valid bit, record frame number
  • Restart the faulting instruction (now succeeds)

22. Page Replacement Algorithms

FIFO (First In, First Out): Evict oldest page. Simple but suffers from Belady's Anomaly.

LRU (Least Recently Used): Evict page that hasn't been used for longest time. Good approximation of optimal; immune to Belady's Anomaly.

Optimal (OPT): Evict page not needed for longest time in future. Theoretical benchmark (requires knowing future).
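
A toy FIFO simulation over a made-up reference string with three frames; it only counts faults, and LRU or OPT would evict different pages.

```c
/* Toy FIFO page-replacement simulation: counts faults for a reference string. */
#include <stdio.h>
#include <stdbool.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
    int n = sizeof(refs) / sizeof(refs[0]);
    int frames[FRAMES] = {-1, -1, -1};
    int next = 0, faults = 0;           /* next = oldest slot to evict */

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {
            frames[next] = refs[i];     /* evict the oldest page (or fill an empty slot) */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d out of %d references\n", faults, n);
    return 0;
}
```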

23. Internal vs. External Fragmentation

Internal Fragmentation: Wasted memory inside an allocated block. Occurs when allocation units are larger than needed.

Example: Page size = 4 KB, process needs 5 KB → allocated 2 pages (8 KB) → wastes 3 KB

External Fragmentation: Wasted memory between allocated blocks. Enough total free memory exists but scattered in non-contiguous pieces.

Example: Three 2 KB free holes (6 KB total), but 5 KB request fails because no single hole is large enough


File Systems

24. What is a File System?

A file system is the OS component that organizes, stores, retrieves, and manages data on storage devices, providing the abstraction of files and directories.

File System Structure:

  • Boot Block — bootstrap code for loading OS
  • Superblock — global metadata (total blocks, free blocks, block size)
  • Inode Table — file metadata and block pointers
  • Data Blocks — actual file content

Examples: ext4 (Linux), NTFS (Windows), APFS (macOS), FAT32, exFAT

25. What is an Inode?

An inode (index node) is a data structure in Unix/Linux file systems that stores all metadata about a file — everything except the file's name and content.

An inode contains (see the stat() sketch after this list):

  • File type (regular file, directory, symbolic link, device)
  • Permissions (read/write/execute for owner, group, others)
  • Owner (user ID and group ID)
  • File size in bytes
  • Timestamps (access, modification, inode change)
  • Link count (number of hard links)
  • Block pointers (addresses of disk blocks storing file data)
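
A rough sketch that reads this metadata through the POSIX stat() call; the printed fields come from struct stat.

```c
/* Reading inode metadata with stat(). Usage: ./a.out <path> */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[]) {
    struct stat st;
    if (argc < 2 || stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("inode number : %lu\n", (unsigned long)st.st_ino);
    printf("size (bytes) : %lld\n", (long long)st.st_size);
    printf("link count   : %lu\n", (unsigned long)st.st_nlink);
    printf("owner uid/gid: %u/%u\n", (unsigned)st.st_uid, (unsigned)st.st_gid);
    printf("mode bits    : %o\n", (unsigned)st.st_mode);
    return 0;
}
```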

26. Hard Links vs. Symbolic Links

| Feature | Hard Link | Symbolic Link |
| --- | --- | --- |
| Inode | Same as original | Own inode (contains target path) |
| Cross-filesystem | No | Yes |
| Link to directory | No | Yes |
| If original deleted | Data persists | Becomes dangling (broken) |
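
A small illustrative sketch using the POSIX link() and symlink() calls (the file names are hypothetical); these correspond to the shell commands ln and ln -s.

```c
/* Creating a hard link and a symbolic link to a hypothetical file. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (link("original.txt", "hard.txt") != 0)      /* same inode as the original */
        perror("link");
    if (symlink("original.txt", "soft.txt") != 0)   /* new inode that stores the path */
        perror("symlink");
    return 0;
}
```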

Inter-Process Communication (IPC)

27. What is IPC?

IPC (Inter-Process Communication) provides mechanisms that allow separate processes (running in isolated address spaces) to communicate, coordinate, and share data.

Two fundamental models:

  • Shared Memory — processes establish shared memory region; fast but needs synchronization
  • Message Passing — processes send/receive explicit messages through OS; simpler but slower

28. IPC Mechanisms

Pipes: Unidirectional byte-stream channel. Anonymous pipes require parent-child; named pipes (FIFOs) allow unrelated processes.
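
A minimal anonymous-pipe sketch in C: the parent writes and the forked child reads through descriptors inherited across fork(), which is why anonymous pipes only work between related processes.

```c
/* Anonymous pipe between a parent (writer) and its child (reader). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    pipe(fds);                          /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                  /* child: reader */
        char buf[64] = {0};
        close(fds[1]);
        read(fds[0], buf, sizeof(buf) - 1);
        printf("child received: %s\n", buf);
        _exit(0);
    }

    close(fds[0]);                      /* parent: writer */
    const char *msg = "hello through the pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```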

Message Queues: Kernel-managed linked list of messages. FIFO ordering or priority-based.

Shared Memory: Fastest IPC. Processes map same physical memory pages; requires explicit synchronization.

Sockets: Bidirectional communication endpoints. Support both local (Unix domain) and network (TCP/UDP) communication.

Semaphores: Integer-based synchronization primitive for controlling access to shared resources.

Signals: Lightweight async notifications (SIGTERM, SIGKILL, SIGINT).

29. What is a Semaphore?

A semaphore is an integer-based synchronization primitive used to control access to shared resources in concurrent programs.

Two atomic operations:

  • wait(S) / P(): If S > 0, decrement S and proceed. If S = 0, block until S > 0
  • signal(S) / V(): Increment S. If processes are blocked, wake one

Types:

  • Binary Semaphore — value 0 or 1; acts like a mutex
  • Counting Semaphore — value 0 to N; manages pool of N resources
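
A hedged sketch with POSIX unnamed semaphores, where sem_wait() plays the role of P() and sem_post() the role of V(); the initial value 2 models a pool of two identical resources (compile with -pthread).

```c
/* Counting semaphore sketch with POSIX unnamed semaphores. */
#include <stdio.h>
#include <semaphore.h>

int main(void) {
    sem_t pool;
    sem_init(&pool, 0, 2);   /* 0 = shared between threads, initial value 2 */

    sem_wait(&pool);         /* P(): acquire one resource (value 2 -> 1) */
    sem_wait(&pool);         /* P(): acquire the second   (value 1 -> 0) */
    /* a third sem_wait() here would block until someone posts */

    sem_post(&pool);         /* V(): release a resource    (value 0 -> 1) */
    sem_post(&pool);
    sem_destroy(&pool);
    printf("semaphore demo finished\n");
    return 0;
}
```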

30. Mutex vs. Semaphore

| Feature | Mutex | Semaphore |
| --- | --- | --- |
| Value range | Binary (locked/unlocked) | Integer (0 to N) |
| Ownership | Owned by specific thread | No ownership |
| Primary use | Mutual exclusion | Signaling + resource counting |
| Release | Only acquirer can release | Any thread can signal |

Advanced Concepts

31. What is Thrashing?

Thrashing is a severe performance condition where the OS spends more time swapping pages than executing process instructions, resulting in near-zero useful throughput.

Cause: Too many processes with insufficient frames to hold their working sets → constant page faults

Solution: Reduce multiprogramming, add more RAM, use working set model, implement page fault frequency control

32. What is a Race Condition?

A race condition occurs when two or more processes access shared data concurrently and the final result depends on the unpredictable order of execution.

Example: Two threads both execute counter++ on a shared variable. The increment is a read-modify-write sequence, not a single atomic operation, so if one thread is interrupted between the read and the write, an update is lost.

Prevention: Mutex locks, atomic operations, immutable data, thread-local storage
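
One way to remove the race is to make the increment itself atomic; the sketch below uses C11 stdatomic instead of a lock (compile with -pthread).

```c
/* Fixing the counter++ race with a C11 atomic read-modify-write. */
#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_long counter = 0;   /* a plain `long` here would lose updates */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);   /* indivisible read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 200000 with atomics)\n", (long)counter);
    return 0;
}
```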

33. Zombie vs. Orphan Process

Zombie Process: A process that has finished execution but still has an entry in the process table because its parent hasn't called wait() to read its exit status. Consumes no CPU/memory, only a process table slot.

Orphan Process: A still-running process whose parent has terminated. In Unix/Linux, orphans are adopted by init/systemd (PID 1).
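
A small sketch that deliberately creates a short-lived zombie: the child exits immediately but stays in the process table until the parent calls waitpid(); running ps during the sleep would show it in the Z state.

```c
/* Creating and then reaping a short-lived zombie process. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);                       /* child terminates right away */

    sleep(5);                           /* child is now a zombie */
    waitpid(pid, NULL, 0);              /* parent reaps it: zombie disappears */
    printf("child %d reaped\n", (int)pid);
    return 0;
}
```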


Master Operating Systems Through Practice! This guide covers essential OS concepts from fundamentals to advanced topics. The key to success is understanding the "why" behind each mechanism, practicing implementation problems, and connecting concepts to real-world system behavior.

Best of luck with your OS interviews! Focus on understanding core concepts deeply, practice problem-solving with scheduling algorithms and deadlock scenarios, and connect theoretical knowledge to practical system behavior. Remember that OS knowledge is foundational for understanding how software interacts with hardware.
