What Is an Algorithm?
Imagine you are cooking your favourite dish. You have a recipe — a series of steps you follow in a specific order to turn raw ingredients into a finished meal. An algorithm is exactly that, but for a computer: a well-defined, step-by-step set of instructions that takes some input, processes it, and produces an output.
Here are a few everyday examples that are secretly algorithms:
| Everyday Task | Input | Steps | Output |
|---|---|---|---|
| Looking up a word in a dictionary | The word | Open to the middle, decide if your word is before or after, repeat | The definition |
| Sorting your bookshelf by author | Unsorted books | Compare two books, swap if out of order, repeat | Sorted bookshelf |
| Finding the cheapest flight | Departure & arrival city | Compare prices across airlines | The cheapest option |
In computer science, we study algorithms to find the fastest and most memory-efficient way to solve a problem. But how do we compare two algorithms that solve the same problem? That is where Big-O notation comes in.
Why Do We Need Big-O Notation?
Suppose two developers write different programs to sort a list of 1,000 numbers. Developer A’s program finishes in 2 milliseconds, and Developer B’s takes 5 milliseconds. Does that mean A’s algorithm is better?
Not necessarily. The timing depends on:
- The hardware (a gaming PC vs. an old laptop)
- The programming language (C vs. Python)
- The specific input (already sorted? reverse sorted? random?)
We need a way to compare algorithms that is independent of hardware, language, and specific inputs. Big-O notation gives us exactly that — it describes how an algorithm’s running time grows as the input size increases.
Understanding Big-O: The Growth Rate
Big-O notation answers the question: “If I double the size of my input, how much longer will this algorithm take?”
Think of it like comparing how different vehicles handle increasing distances:
| Distance (km) | Walking (slow growth) | Car (medium) | Rocket (fast) |
|---|---|---|---|
| 10 | 2 hours | 10 min | 1 sec |
| 100 | 20 hours | 1.5 hours | 2 sec |
| 1,000 | 8 days | 17 hours | 3 sec |
The rocket scales incredibly well — even a 100× increase in distance barely changes its time. An $O(\log n)$ algorithm behaves like the rocket.
The walking speed is fine for short distances but becomes impractical as distance grows. An $O(n^2)$ algorithm is like walking — manageable for small inputs, painful for large ones.
Common Big-O Complexities
Here are the most common complexity classes, from fastest to slowest growth:
$O(1)$ — Constant Time
The running time stays the same regardless of input size. Whether you have 10 elements or 10 million, the operation takes the same amount of time.
```java
public static int getFirstElement(int[] arr) {
    return arr[0]; // always one operation, no matter the array size
}
```
Real-world analogy: Looking at the first page of a book. It doesn’t matter if the book has 100 pages or 10,000 — opening to page 1 always takes the same effort.
$O(\log n)$ — Logarithmic Time
The running time grows very slowly. Each step cuts the problem in half. If you have 1,000 elements, you only need about 10 steps. For 1,000,000 elements, only about 20 steps.
```java
public static int binarySearch(int[] sortedArr, int target) {
    int low = 0;
    int high = sortedArr.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // find the middle
        if (sortedArr[mid] == target) {
            return mid; // found it!
        } else if (sortedArr[mid] < target) {
            low = mid + 1; // target is in the right half
        } else {
            high = mid - 1; // target is in the left half
        }
    }
    return -1; // not found
}
```
Real-world analogy: Looking up a word in a physical dictionary. You open to the middle, see if your word comes before or after, and then repeat with the remaining half. You never read every single page.
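You can check the "~10 steps for 1,000 elements, ~20 for 1,000,000" claim directly. The sketch below (the `countSteps` helper is our own, purely for illustration) counts how many times binary search halves its range in the worst case, where the search always continues into one half:

```java
public class BinarySearchSteps {
    // Counts how many times binary search halves the range before it ends.
    // Worst case: the loop always discards the left half and keeps going.
    public static int countSteps(int n) {
        int steps = 0;
        int low = 0;
        int high = n - 1;
        while (low <= high) {
            steps++;
            int mid = low + (high - low) / 2;
            low = mid + 1; // always walk into the right half
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(countSteps(1_000));     // 10
        System.out.println(countSteps(1_000_000)); // 20
    }
}
```

A 1,000× increase in input size only doubles the number of steps, which is exactly the logarithmic behaviour described above.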
$O(n)$ — Linear Time
The running time grows directly in proportion to the input size. If the input doubles, the time doubles.
```java
public static int findMaximum(int[] arr) {
    int max = arr[0];
    for (int i = 1; i < arr.length; i++) { // visits every element exactly once
        if (arr[i] > max) {
            max = arr[i];
        }
    }
    return max;
}
```
Real-world analogy: Checking every student in a classroom to find the tallest one. If the class size doubles, you have to check twice as many students.
$O(n \log n)$ — Linearithmic Time
This is the sweet spot for efficient sorting algorithms like Merge Sort and Quick Sort. It grows faster than linear but much slower than quadratic.
For 1,000 elements: ~10,000 operations. For 1,000,000 elements: ~20,000,000 operations (not 1,000,000,000,000 like $O(n^2)$!).
We will see concrete examples in the Merge Sort and Quick Sort posts.
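Until then, a quick way to see $O(n \log n)$ in action is Java's built-in `Arrays.sort`, which for primitive arrays uses a dual-pivot Quick Sort with $O(n \log n)$ average running time:

```java
import java.util.Arrays;

public class SortDemo {
    public static void main(String[] args) {
        int[] data = {5, 3, 8, 1, 9, 2};
        // Arrays.sort on an int[] runs in O(n log n) time on average.
        Arrays.sort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 8, 9]
    }
}
```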
$O(n^2)$ — Quadratic Time
The running time grows with the square of the input size. Doubling the input makes the algorithm four times slower. This typically happens when you have nested loops, each iterating over the entire input.
```java
public static void printAllPairs(int[] arr) {
    for (int i = 0; i < arr.length; i++) {     // outer loop: n times
        for (int j = 0; j < arr.length; j++) { // inner loop: n times for each outer
            System.out.println(arr[i] + ", " + arr[j]);
        }
    }
    // Total operations: n × n = n²
}
```
Real-world analogy: A handshake problem — at a party of n people, if everyone shakes hands with everyone else, the number of handshakes grows roughly as $n^2$.
$O(2^n)$ — Exponential Time
The running time doubles with every single additional element. Even small inputs can take an extremely long time.
```java
// A naive recursive approach to calculate Fibonacci numbers.
// The return type is long because results past fibonacci(46) overflow an int.
public static long fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2); // two calls at each level
}
```
With $n = 30$, this makes roughly 2.7 million function calls. With $n = 50$, it makes about 40 billion calls, and your computer could churn for many minutes, possibly hours.
Real-world analogy: Imagine you have a combination lock with n binary switches (on/off). To try every combination, you need $2^n$ attempts.
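The switch analogy translates directly into code. This illustrative helper (the name `countCombinations` is our own) simply visits every on/off bit pattern of $n$ switches, which takes $2^n$ iterations:

```java
public class Combinations {
    // Visits every on/off combination of n binary switches: 2^n in total.
    public static long countCombinations(int n) {
        long count = 0;
        for (long mask = 0; mask < (1L << n); mask++) {
            count++; // each bit pattern is one distinct combination
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countCombinations(10)); // 1024
        System.out.println(countCombinations(20)); // 1048576
    }
}
```

Adding one switch doubles the count, which is why exponential algorithms hit a wall so quickly.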
The Big-O Cheat Sheet
| Big-O | Name | 10 elements | 100 elements | 1,000 elements | Example |
|---|---|---|---|---|---|
| $O(1)$ | Constant | 1 | 1 | 1 | Array index access |
| $O(\log n)$ | Logarithmic | 3 | 7 | 10 | Binary Search |
| $O(n)$ | Linear | 10 | 100 | 1,000 | Finding max in array |
| $O(n \log n)$ | Linearithmic | 33 | 664 | 9,966 | Merge Sort |
| $O(n^2)$ | Quadratic | 100 | 10,000 | 1,000,000 | Bubble Sort |
| $O(2^n)$ | Exponential | 1,024 | $1.27 \times 10^{30}$ | $\sim 10^{301}$ (impractical) | Naive Fibonacci |
Space Complexity — The Other Half
So far, we have been talking about time complexity — how long an algorithm takes. But algorithms also use memory (RAM), and we measure that with space complexity using the same Big-O notation.
```java
// O(1) space — uses a fixed number of variables regardless of input size
public static int sum(int[] arr) {
    int total = 0; // just one variable
    for (int num : arr) {
        total += num;
    }
    return total;
}
```
```java
// O(n) space — creates a new array proportional to the input size
public static int[] copyArray(int[] arr) {
    int[] copy = new int[arr.length]; // new array of the same size
    for (int i = 0; i < arr.length; i++) {
        copy[i] = arr[i];
    }
    return copy;
}
```
When choosing an algorithm, you often face a trade-off: you can save time by using more memory, or save memory by spending more time. This is called the time-space trade-off, and it is a theme we will revisit throughout this series.
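As a preview of that trade-off, here is a memoized version of the earlier Fibonacci function (a sketch; the method names are our own). Spending $O(n)$ extra memory on an array of cached results cuts the running time from $O(2^n)$ down to $O(n)$, because each value is computed only once:

```java
public class Fib {
    // O(n) time and O(n) space: the memo array caches each result,
    // so every fib(k) is computed exactly once.
    public static long fibonacciMemo(int n) {
        long[] memo = new long[n + 1];
        return fib(n, memo);
    }

    private static long fib(int n, long[] memo) {
        if (n <= 1) return n;
        if (memo[n] != 0) return memo[n]; // already computed: reuse it
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
        return memo[n];
    }

    public static void main(String[] args) {
        System.out.println(fibonacciMemo(50)); // 12586269025, instantly
    }
}
```

The same input that would take the naive version billions of calls now finishes in 50 steps.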
How to Determine Big-O: A Simple Guide
Here are practical rules for quickly determining the Big-O of code:
- Single loop over input → $O(n)$
- Two nested loops over input → $O(n^2)$
- Loop that halves the input each time → $O(\log n)$
- Recursive function with two calls → $O(2^n)$ (often)
- No loops, no recursion — just arithmetic or access → $O(1)$
Worked Example
```java
public static boolean containsDuplicate(int[] arr) {
    for (int i = 0; i < arr.length; i++) {         // runs n times
        for (int j = i + 1; j < arr.length; j++) { // runs up to n-1 times
            if (arr[i] == arr[j]) {
                return true;
            }
        }
    }
    return false;
}
```
Analysis:
- The outer loop runs $n$ times.
- For each iteration of the outer loop, the inner loop runs roughly $n/2$ times on average.
- Total operations: approximately $\frac{n \times (n-1)}{2}$
- In Big-O, we drop constants and lower-order terms: $O(n^2)$
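For contrast, trading $O(n)$ extra memory for speed brings the same problem down to $O(n)$ time with a `HashSet` (a sketch with our own method name; `HashSet.add` returns `false` when the value is already present):

```java
import java.util.HashSet;

public class DuplicateCheck {
    // O(n) time, O(n) space: each element is hashed and inserted at most once.
    public static boolean containsDuplicateFast(int[] arr) {
        HashSet<Integer> seen = new HashSet<>();
        for (int num : arr) {
            if (!seen.add(num)) {
                return true; // num was already in the set: duplicate found
            }
        }
        return false;
    }
}
```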
When to Use / When NOT to Use
| Situation | What to Consider |
|---|---|
| Small input (< 50 elements) | Big-O differences are negligible. Use whatever is simplest and most readable. |
| Medium input (1,000–100,000) | Choose $O(n \log n)$ over $O(n^2)$. The difference is noticeable. |
| Large input (> 1,000,000) | Even $O(n \log n)$ vs $O(n)$ matters. Optimise aggressively. |
| Real-time systems | Worst-case Big-O matters more than average-case. |
Practice Problems
Try these problems to solidify your understanding:
- Running Sum of 1D Array – LeetCode #1480 — Can you identify the time and space complexity?
- Contains Duplicate – LeetCode #217 — Try solving it in $O(n^2)$, then optimise to $O(n)$ using a HashSet.
- Binary Search – LeetCode #704 — Implement it and verify it is $O(\log n)$.
What’s Next?
Now that you understand how to measure an algorithm’s efficiency, it is time to see these concepts in action. In the next post, we will explore the simplest sorting algorithm: Bubble Sort.