Big-O Notation — How We Measure Algorithm Speed

Tags: big-o · complexity · mathematics · beginner · free
Understand Big-O notation — the language programmers use to describe how fast an algorithm runs and how much memory it uses.

What Is Big-O Notation?

Big-O notation describes how an algorithm’s runtime or memory usage grows as the input size increases. It does not measure exact seconds or bytes — it captures the growth rate.

When someone says an algorithm is O(n), they mean: as the input doubles in size, the runtime roughly doubles too. O(n²) means doubling the input quadruples the runtime. O(1) means the runtime stays the same no matter the input size.

Big-O is the shared language for comparing algorithms. Without it, you cannot have a meaningful conversation about "which approach is faster."

The Common Complexities

O(1) — Constant

Array access by index. Hash table lookup. Pushing onto a stack. The input could have 10 elements or 10 million — same speed.
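
Each of these can be seen in a few lines of Python. A minimal sketch (the variable names are illustrative, not from the article) — none of these operations get slower as the container grows:

```python
# Illustrative only: three operations whose cost does not depend on input size.
arr = list(range(1_000_000))
table = {"answer": 42}
stack = []

middle = arr[500_000]    # indexing by position: O(1)
value = table["answer"]  # average-case hash lookup: O(1)
stack.append(middle)     # push onto the end of a list (a stack): amortized O(1)
print(middle, value)     # → 500000 42
```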

O(log n) — Logarithmic

Binary search. Balanced BST operations. Each step halves the remaining work. A million elements? About 20 steps.
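
You can verify the "about 20 steps" claim directly. Here is a minimal binary search that counts its own iterations (a sketch for illustration; the step counter is not part of the standard algorithm):

```python
def binary_search(arr, target):
    """Search a sorted list, returning (index, steps). Each loop
    iteration halves the remaining range, so steps ~ log2(n)."""
    lo, hi = 0, len(arr) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

arr = list(range(1_000_000))
index, steps = binary_search(arr, 999_999)
print(index, steps)  # finds the element in at most ~20 steps
```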

O(n) — Linear

Scanning an array. Linked list traversal. You touch every element once. Double the input, double the work.

O(n log n) — Linearithmic

Merge sort. Quicksort (on average). O(n log n) is a proven lower bound for comparison-based sorting — no comparison sort can beat it.
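
Merge sort shows where the n log n comes from: log n levels of splitting, with O(n) merging work per level. A compact sketch (not the article's code):

```python
def merge_sort(arr):
    """O(n log n): split in half, sort each half recursively,
    then merge the two sorted halves in linear time."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Merge step: repeatedly take the smaller front element.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```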

O(n²) — Quadratic

Bubble sort. Nested loops over the same array. Fine for n < 1000, painful beyond that.

O(2ⁿ) — Exponential

Brute-force subset problems. Recursive Fibonacci without memoization. Grows so fast it becomes unusable beyond n ≈ 25.

```python
import time

def constant_time(arr):
    # O(1): a single index access, regardless of input size
    return arr[0]

def linear_time(arr):
    # O(n): touches every element exactly once
    total = 0
    for x in arr:
        total += x
    return total

def quadratic_time(arr):
    # O(n²): pairs every element with every element
    count = 0
    for i in arr:
        for j in arr:
            count += 1
    return count

sizes = [100, 1000, 5000]
for n in sizes:
    arr = list(range(n))

    # perf_counter is a higher-resolution clock than time.time
    start = time.perf_counter()
    constant_time(arr)
    t1 = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    linear_time(arr)
    t2 = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    quadratic_time(arr)
    t3 = (time.perf_counter() - start) * 1000

    print(f"n={n:5d}  O(1)={t1:.3f}ms  O(n)={t2:.3f}ms  O(n²)={t3:.3f}ms")
```
How operation counts grow with input size

| Big-O      | Name         | n=10  | n=100  | n=1000    | n=1M       |
|------------|--------------|-------|--------|-----------|------------|
| O(1)       | Constant     | 1     | 1      | 1         | 1          |
| O(log n)   | Logarithmic  | 3     | 7      | 10        | 20         |
| O(n)       | Linear       | 10    | 100    | 1,000     | 1,000,000  |
| O(n log n) | Linearithmic | 33    | 664    | 9,966     | 19,931,569 |
| O(n²)      | Quadratic    | 100   | 10,000 | 1,000,000 | 10¹²       |
| O(2ⁿ)      | Exponential  | 1,024 | 10³⁰   | —         | —          |

Rules of Thumb

  • Drop constants — O(2n) is just O(n). O(n/2) is O(n). Constants don’t change the growth rate.
  • Drop lower-order terms — O(n² + n) is O(n²). The quadratic term dominates for large n.
  • Nested loops multiply — a loop inside a loop over n elements is O(n²).
  • Sequential loops add — two separate loops over n elements are O(n + n) = O(n).
  • Space complexity — same notation, but measures memory. Creating a new array of size n is O(n) space.
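
The "multiply vs. add" rules are easy to confirm by counting operations directly. A small sketch (not from the article):

```python
n = 1000

# Sequential loops: counts add → O(n + n) = O(2n), which simplifies to O(n).
sequential = 0
for i in range(n):
    sequential += 1
for j in range(n):
    sequential += 1

# Nested loops: counts multiply → O(n · n) = O(n²).
nested = 0
for i in range(n):
    for j in range(n):
        nested += 1

print(sequential)  # → 2000 (2n; the constant 2 is dropped)
print(nested)      # → 1000000 (n²)
```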

Key Takeaways

  • Big-O describes growth rate, not exact speed. It tells you how an algorithm scales.
  • O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) — memorise this hierarchy.
  • Drop constants and lower-order terms. 5n² + 3n + 7 is just O(n²).
  • Nested loops multiply complexities. Sequential operations add them.
  • Every algorithm in this course will include its Big-O analysis. Now you know how to read it.