What Is Big O Notation?
Big O notation is how developers describe the performance or complexity of an algorithm — specifically how its runtime or memory usage grows as the input size increases. It doesn't tell you exactly how fast your code runs; it tells you how it scales.
Understanding Big O is essential for writing efficient code, passing technical interviews, and making smart architectural decisions.
Why Does It Matter?
An algorithm that works fine on 100 items may completely break down on 1,000,000. Big O gives you the vocabulary to reason about this before it becomes a production problem.
The Most Common Complexities
O(1) — Constant Time
The operation takes the same amount of time regardless of input size. Accessing an array element by index is O(1).
arr[5]  # Always one operation, no matter how big arr is
O(log n) — Logarithmic Time
The input is divided in half each step. Binary search is the classic example. Very efficient for large datasets.
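A minimal iterative binary search sketch, assuming the input list is already sorted (function and variable names here are illustrative):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # the search range is halved each iteration
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3
```

Because the range shrinks by half on every pass, a million-element list needs at most about 20 comparisons.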
O(n) — Linear Time
Runtime grows directly with input size. Looping through every item in a list is O(n).
for item in my_list:
    print(item)
O(n log n) — Linearithmic Time
Common in efficient sorting algorithms like Merge Sort and Quick Sort (average case). Much better than O(n²) for large inputs.
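A compact merge sort sketch illustrating where the O(n log n) comes from: the list is split log n times, and each level does O(n) work merging sorted halves:

```python
def merge_sort(items):
    """Return a new sorted list using merge sort."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # log n levels of splitting
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in O(n)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```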
O(n²) — Quadratic Time
Nested loops over the same dataset. Bubble Sort is a classic O(n²) algorithm. Fine for small inputs, painful for large ones.
for i in range(n):
    for j in range(n):
        pass  # this body runs n * n times
O(2ⁿ) — Exponential Time
Often seen in brute-force recursive solutions (e.g., naive Fibonacci). Runtime roughly doubles with each additional input element — avoid for any meaningful input size.
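The naive recursive Fibonacci mentioned above branches into two recursive calls per invocation, which is what produces the exponential blow-up:

```python
def fib(n):
    """Naive recursive Fibonacci: O(2^n) time."""
    if n < 2:
        return n
    # Two recursive calls per invocation -> the call tree doubles at each level
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # prints 55
```

Adding memoization (e.g., `functools.lru_cache`) caches each subproblem and brings this down to O(n).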
Quick Reference Table
| Notation | Name | Example | Scales? |
|---|---|---|---|
| O(1) | Constant | Array index access | ✅ Excellent |
| O(log n) | Logarithmic | Binary search | ✅ Great |
| O(n) | Linear | Single loop | ✅ Good |
| O(n log n) | Linearithmic | Merge sort | ⚠️ Acceptable |
| O(n²) | Quadratic | Nested loops | ⚠️ Poor at scale |
| O(2ⁿ) | Exponential | Naive recursion | ❌ Avoid |
Practical Tips for Everyday Code
- Avoid nested loops on large data — look for O(n log n) alternatives.
- Use hash maps (dictionaries/objects) to bring O(n²) lookups down to O(n).
- Don't over-optimize early — O(n²) is fine for n = 50.
- Recognize the pattern — divide and conquer usually signals O(log n) or O(n log n).
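As an illustration of the hash-map tip above, finding the values two lists share with nested loops is O(n²), while building a set first makes it O(n) (a hypothetical example; names are illustrative):

```python
def common_items(a, b):
    """Return items of b that also appear in a, in O(n) average time."""
    seen = set(a)  # O(n) to build; membership checks are O(1) on average
    return [x for x in b if x in seen]  # single O(n) scan instead of a nested loop

print(common_items([1, 2, 3], [2, 3, 4]))  # prints [2, 3]
```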
Space Complexity Matters Too
Big O applies to memory usage as well. An algorithm can be fast but memory-hungry. Always consider both time and space complexity when evaluating a solution.
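For instance, summing the first n squares gives the same answer with O(n) extra memory (materializing a list) or O(1) extra memory (a generator) — an illustrative comparison, not a prescription:

```python
def sum_squares_list(n):
    squares = [i * i for i in range(n)]  # O(n) extra memory: the whole list exists at once
    return sum(squares)

def sum_squares_gen(n):
    return sum(i * i for i in range(n))  # O(1) extra memory: one value at a time

print(sum_squares_list(5), sum_squares_gen(5))  # prints 30 30
```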
Summary
Big O notation is not about memorizing formulas — it's about developing intuition for how your code behaves under pressure. Start by identifying loops, recursion, and data structure operations in your code. Then ask: as n grows, how does this scale? That habit alone will make you a significantly better developer.