Introduction
In computer memory, an Array is like a block of terraced houses: every element lives right next door to its neighbour. This makes finding the $5^{th}$ house instant ($O(1)$ lookup), but if you want to build a new house in the middle, you have to bulldoze and shift all the subsequent houses to make room ($O(n)$ insertion).
A Linked List is like a treasure hunt. Each piece of data (the treasure) comes with a clue (a pointer) telling you exactly where the next piece of data is hidden in memory. The data can be scattered randomly everywhere.
Because of this structure, inserting a new element in the middle is as simple as updating a couple of clues ($O(1)$ insertion, once you are there), but finding the $5^{th}$ element requires following the trail from the very beginning ($O(n)$ lookup).
The Anatomy of a Node
A Linked List is made up of objects called Nodes. Every node contains two things:
- The data (an integer, a string, an object).
- The pointer (or reference) to the next node.
```java
// Java representation of a Node
class Node {
    int data;
    Node next; // points to the next node

    public Node(int data) {
        this.data = data;
        this.next = null; // by default, it doesn't point to anything
    }
}
```
The Linked List itself just keeps a reference to the very first node, called the Head. The last node’s next pointer is explicitly set to null to indicate the end of the list.
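To make this concrete, here is a minimal sketch of wiring three nodes together by hand. It includes a compact copy of the Node class above so the snippet compiles on its own; the class name `BuildDemo` is just illustrative.

```java
// Compact copy of the Node class above, so this sketch is self-contained.
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

class BuildDemo {
    public static void main(String[] args) {
        Node head = new Node(10);       // the Head of the list
        head.next = new Node(20);       // 10 points to 20
        head.next.next = new Node(30);  // 20 points to 30; 30's next stays null

        // Walk the chain from the Head until we hit null.
        for (Node current = head; current != null; current = current.next) {
            System.out.print(current.data + " "); // prints 10 20 30
        }
    }
}
```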
Types of Linked Lists
1. Singly Linked List
The simplest form. Nodes only point forward. You can traverse it in one direction: Head $\rightarrow$ Tail.
```
[Head: 10] -> [20] -> [30] -> null
```
2. Doubly Linked List
Every node has two pointers: one pointing forward (next) and one pointing backward (prev). This uses more memory, but allows you to traverse the list backwards, making deletions much easier if you only have a reference to the node you want to delete.
```
null <- [10] <-> [20] <-> [30] -> null
```
(Java’s java.util.LinkedList is implemented this way).
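A doubly linked node can be sketched like this. The class name `DNode` is illustrative; note that linking two nodes now requires wiring *both* directions.

```java
// A minimal sketch of a doubly linked node.
class DNode {
    int data;
    DNode prev; // points back to the previous node
    DNode next; // points forward to the next node
    DNode(int data) { this.data = data; }
}

class DoublyDemo {
    public static void main(String[] args) {
        DNode a = new DNode(10);
        DNode b = new DNode(20);
        a.next = b;
        b.prev = a; // both directions must be wired explicitly
        System.out.println(b.prev.data); // prints 10: we walked backwards
    }
}
```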
3. Circular Linked List
The next pointer of the final node points back to the Head, forming a cycle: a traversal never reaches null, so you must track where you started. This is useful for scheduling algorithms where processes take turns in a round-robin fashion.
```
[10] -> [20] -> [30]
  ^               |
  +---------------+
```
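A quick sketch of the round-robin idea: because the list loops, the traversal must be bounded by a turn count rather than by reaching null. The class name `RoundRobinDemo` is illustrative.

```java
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

class RoundRobinDemo {
    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.next = b;
        b.next = c;
        c.next = a; // the last node points back to the head, closing the loop

        // Give each "process" two turns. The loop is bounded by a counter,
        // because following next pointers never reaches null here.
        Node current = a;
        for (int turn = 0; turn < 6; turn++) {
            System.out.print(current.data + " "); // prints 1 2 3 1 2 3
            current = current.next;
        }
    }
}
```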
Core Operations (Singly Linked List)
1. Traversing the List
To visit every node, we start at the head and follow the next pointers until we hit null.
```java
public void printList(Node head) {
    Node current = head;
    while (current != null) {
        System.out.print(current.data + " -> ");
        current = current.next; // move to the next node
    }
    System.out.println("null");
}
```
2. Inserting at the Head — $O(1)$
This is incredibly fast compared to an array (which requires shifting every element).
```java
public Node insertAtHead(Node head, int newData) {
    Node newNode = new Node(newData);
    newNode.next = head; // the new node points to the old head
    return newNode;      // the new node is now the head
}
```
3. Deleting a Node (in the middle) — $O(1)$ to delete, $O(n)$ to find
To delete a target node, we simply tell the previous node to bypass it and point directly to the next node. The target node becomes disconnected and is eventually cleaned up by Java’s Garbage Collector.
```java
public Node deleteNode(Node head, int targetData) {
    if (head == null) return null;

    // Special case: deleting the head
    if (head.data == targetData) {
        return head.next;
    }

    Node current = head;
    // We must stop ONE node before the target so we can alter its 'next' pointer
    while (current.next != null && current.next.data != targetData) {
        current = current.next;
    }

    // If we found it, bypass it
    if (current.next != null) {
        current.next = current.next.next;
    }
    return head;
}
```
Array vs Linked List Comparison
| Operation | Array (Dynamic) | Singly Linked List | Why? |
|---|---|---|---|
| Access `get(i)` | $O(1)$ ✅ | $O(n)$ ❌ | Arrays do math to find memory addresses. Linked lists must walk the chain. |
| Insert at head | $O(n)$ ❌ | $O(1)$ ✅ | Arrays must shift everything right. Linked lists just update one pointer. |
| Insert at tail | $O(1)$ amortised | $O(n)$ or $O(1)$ | $O(1)$ if you maintain a tail reference, else $O(n)$ to walk there. |
| Memory usage | Less per item | More per item ❌ | Arrays only store data. Nodes store data + pointer(s). |
| Cache locality | High ✅ | Low ❌ | Contiguous memory is faster for CPU caches to read. |
Real-world verdict: Despite the theoretical $O(1)$ insertions, dynamic arrays (`ArrayList`) heavily outperform `LinkedList` in modern applications due to CPU cache locality. Arrays are packed tightly in memory, while list nodes are scattered, causing frequent and costly cache misses.
Common Interview Techniques
Linked Lists are a favourite topic in coding interviews because they test your ability to manipulate pointers.
The Fast & Slow Pointer Strategy
Want to find the middle of a linked list in one pass? Use two pointers. The fast pointer moves 2 steps at a time, while the slow pointer moves 1 step. When the fast pointer reaches the end, the slow pointer is at the middle (for an even-length list, the second of the two middle nodes).
You can see a variation of this pattern in our Two Pointer Technique post.
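The technique can be sketched as follows; the class name `MiddleDemo` and the helper `middle` are illustrative.

```java
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

class MiddleDemo {
    // Returns the middle node (the second of two middles for even lengths).
    static Node middle(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;      // 1 step
            fast = fast.next.next; // 2 steps
        }
        return slow;
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);
        head.next.next = new Node(3);
        head.next.next.next = new Node(4);
        head.next.next.next.next = new Node(5);
        System.out.println(middle(head).data); // prints 3
    }
}
```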
The Dummy Node Strategy
When modifying a linked list (like merging two lists or deleting nodes), you often have to write messy if-else statements to handle changes to the head. Instead, create a temporary “dummy” node that points to the head, do all your operations, and return dummy.next at the end. It saves massive headaches.
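Here is a sketch of the deletion routine from earlier rewritten with a dummy node; notice that the head special case disappears. The class name `DummyDemo` is illustrative.

```java
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

class DummyDemo {
    static Node delete(Node head, int targetData) {
        Node dummy = new Node(0); // the value is irrelevant; it just sits before the head
        dummy.next = head;

        Node current = dummy;
        while (current.next != null && current.next.data != targetData) {
            current = current.next;
        }
        if (current.next != null) {
            current.next = current.next.next; // bypass the target
        }
        return dummy.next; // the (possibly new) head
    }

    public static void main(String[] args) {
        Node head = new Node(10);
        head.next = new Node(20);
        head = delete(head, 10); // deleting the head needs no special case
        System.out.println(head.data); // prints 20
    }
}
```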
Practice Problems
- Reverse a Linked List – LeetCode #206 — A rite of passage. Reverse the `next` pointers in-place.
- Middle of the Linked List – LeetCode #876 — Practice the fast and slow pointer strategy.
- Linked List Cycle – LeetCode #141 — If a list is circular, a fast rabbit pointer will eventually lap a slow tortoise pointer.
- Merge Two Sorted Lists – LeetCode #21 — Combine two lists by carefully adjusting their pointers using the dummy node strategy.
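As a taster for the first problem, the classic iterative reversal re-points each `next` pointer backwards as it walks the list. This is a sketch; the class name `ReverseDemo` is illustrative.

```java
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

class ReverseDemo {
    static Node reverse(Node head) {
        Node prev = null, current = head;
        while (current != null) {
            Node next = current.next; // save the rest of the list
            current.next = prev;      // flip the pointer backwards
            prev = current;
            current = next;
        }
        return prev; // prev is the new head
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);
        head.next.next = new Node(3);
        Node r = reverse(head);
        // prints 3 2 1
        System.out.print(r.data + " " + r.next.data + " " + r.next.next.data);
    }
}
```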
What’s Next?
We just finished Phase 2! We have covered searching and linear data structures (arrays, stacks, queues, and linked lists).
In Phase 3, we move away from straight lines and explore hierarchical and interconnected data: Binary Trees — Traversals & Operations.