# Big-O Notation — How We Measure Algorithm Speed

## What Is Big-O Notation?
Big-O notation describes how an algorithm’s runtime or memory usage grows as the input size increases. It does not measure exact seconds or bytes — it captures the growth rate.
When someone says an algorithm is O(n), they mean: as the input doubles in size, the runtime roughly doubles too. O(n²) means doubling the input quadruples the runtime. O(1) means the runtime stays the same no matter the input size.
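The doubling behaviour is easiest to see by counting operations instead of timing them. A minimal sketch (the helper name `count_linear_steps` is made up for illustration):

```python
def count_linear_steps(n):
    """Count how many iterations a single pass over n items performs."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

# For O(n), doubling the input doubles the step count.
print(count_linear_steps(1000))  # 1000
print(count_linear_steps(2000))  # 2000
```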
Big-O is the shared language for comparing algorithms. Without it, you cannot have a meaningful conversation about "which approach is faster."
## The Common Complexities

### O(1) — Constant
Array access by index. Hash table lookup. Pushing onto a stack. The input could have 10 elements or 10 million — same speed.
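All three operations listed above take the same time whether the container holds ten items or ten million. A minimal sketch (the container contents are made up for illustration):

```python
prices = {"apple": 1.20, "bread": 2.50, "milk": 0.99}
items = [10, 20, 30]

first = items[0]       # array access by index: O(1)
cost = prices["milk"]  # hash table lookup: O(1) on average
items.append(40)       # push onto the end of a list: amortised O(1)
```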
### O(log n) — Logarithmic
Binary search. Balanced BST operations. Each step halves the remaining work. A million elements? About 20 steps.
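The halving is visible in binary search: each iteration discards half of the remaining range, so a million sorted elements need at most about 20 comparisons. A minimal sketch:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # each iteration halves the search range
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1_000_000))
print(binary_search(data, 765_432))  # → 765432, in at most ~20 comparisons
```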
### O(n) — Linear
Scanning an array. Linked list traversal. You touch every element once. Double the input, double the work.
### O(n log n) — Linearithmic

Merge sort. Quick sort (average case). This is the best a comparison-based sort can do: Θ(n log n) is the proven lower bound.
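Merge sort shows where the n log n comes from: log n levels of splitting, with O(n) merge work at each level. A minimal sketch:

```python
def merge_sort(arr):
    """O(n log n): log n levels of splitting, O(n) merge work per level."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Merge the two sorted halves in a single linear pass.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```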
### O(n²) — Quadratic
Bubble sort. Nested loops over the same array. Fine for n < 1000, painful beyond that.
### O(2ⁿ) — Exponential
Brute-force subset problems. Recursive Fibonacci without memoization. Grows so fast it becomes unusable beyond n ≈ 25.
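Recursive Fibonacci makes the blow-up concrete: each call spawns two more, so the call tree doubles at every level. Caching the subproblems collapses it back to linear. A minimal sketch:

```python
from functools import lru_cache

def fib_naive(n):
    """O(2^n): each call spawns two further calls."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n): each subproblem is computed once and cached."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(25))  # already noticeably slow
print(fib_memo(90))   # effectively instant
```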
You can watch these growth rates diverge by timing each function at increasing input sizes:

```python
import time

def constant_time(arr):
    return arr[0]

def linear_time(arr):
    total = 0
    for x in arr:
        total += x
    return total

def quadratic_time(arr):
    count = 0
    for i in arr:
        for j in arr:
            count += 1
    return count

sizes = [100, 1000, 5000]
for n in sizes:
    arr = list(range(n))

    start = time.perf_counter()  # perf_counter is the right clock for timing
    constant_time(arr)
    t1 = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    linear_time(arr)
    t2 = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    quadratic_time(arr)
    t3 = (time.perf_counter() - start) * 1000

    print(f"n={n:5d}  O(1)={t1:.3f}ms  O(n)={t2:.3f}ms  O(n²)={t3:.3f}ms")
```

| Big-O | Name | n=10 | n=100 | n=1000 | n=1M |
|---|---|---|---|---|---|
| O(1) | Constant | 1 | 1 | 1 | 1 |
| O(log n) | Logarithmic | 3 | 7 | 10 | 20 |
| O(n) | Linear | 10 | 100 | 1,000 | 1,000,000 |
| O(n log n) | Linearithmic | 33 | 664 | 9,966 | 19,931,569 |
| O(n²) | Quadratic | 100 | 10,000 | 1,000,000 | 10¹² |
| O(2ⁿ) | Exponential | 1,024 | 10³⁰ | ∞ | ∞ |
## Rules of Thumb
- Drop constants — O(2n) is just O(n). O(n/2) is O(n). Constants don’t change the growth rate.
- Drop lower-order terms — O(n² + n) is O(n²). The quadratic term dominates for large n.
- Nested loops multiply — a loop inside a loop over n elements is O(n²).
- Sequential loops add — two separate loops over n elements are O(n + n) = O(n).
- Space complexity — same notation, but measures memory. Creating a new array of size n is O(n) space.
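The "multiply vs. add" rules above can be verified by counting steps directly. A minimal sketch (the function names are made up for illustration):

```python
def sequential(arr):
    """Two separate passes: O(n) + O(n) = O(n)."""
    steps = 0
    for _ in arr:      # first pass
        steps += 1
    for _ in arr:      # second pass
        steps += 1
    return steps       # 2n steps — the constant 2 is dropped

def nested(arr):
    """A loop inside a loop: O(n) * O(n) = O(n²)."""
    steps = 0
    for _ in arr:
        for _ in arr:
            steps += 1
    return steps       # n² steps

arr = list(range(100))
print(sequential(arr))  # 200
print(nested(arr))      # 10000
```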
## Key Takeaways
- Big-O describes growth rate, not exact speed. It tells you how an algorithm scales.
- O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) — memorise this hierarchy.
- Drop constants and lower-order terms. 5n² + 3n + 7 is just O(n²).
- Nested loops multiply complexities. Sequential operations add them.
- Every algorithm in this course will include its Big-O analysis. Now you know how to read it.