Understanding Algorithm Analysis

Tags: algorithms, teaching

Author: Claire To

Published: June 16, 2025

I’ve noticed that a lot of students – whether they’re currently taking algorithms or have already completed the course – often confuse cases with bounds when discussing algorithm analysis. I wrestled with this distinction myself for quite a while.

My hope is that this helps clear up some of that confusion and provides an intuitive, tangible overview of these concepts. It’s not meant to be a comprehensive or formal discussion, as there are already countless resources online, but rather a more approachable supplement for those who already have some background.

Case Analysis

When analyzing how long an algorithm takes, we consider how it behaves in different cases, depending on the input.

Best Case

The shortest possible time an algorithm takes over all inputs of a given size.

For example, when searching for a number in a list, the best case could be finding it at the first index. In insertion sort, the best case is when the array is already sorted.

This is less useful in practice, since it’s too optimistic and best-case inputs rarely occur.
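
To make this concrete, here is a minimal linear search in Python (my own sketch; the function name and sample data are just for illustration):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

# Best case: the target sits at index 0, so the search performs a single
# comparison no matter how long the list is.
print(linear_search([7, 3, 5, 8, 2], 7))  # -> 0
```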

Worst Case

The longest possible time an algorithm takes over all inputs of a given size.

For example, the worst case in searching for a number in a list is when the number is found at the very last position checked or not found in the list at all. In insertion sort, the worst case occurs when the array is reverse-sorted.

This is the most pessimistic measure, but also the most reliable, because it guarantees that the algorithm will never exceed that running time on any input.
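
To see both extremes side by side, here is a small sketch (again my own, not taken from any particular textbook) that counts the comparisons insertion sort makes:

```python
def insertion_sort(a):
    """Sort the list a in place and return the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] <= key:    # already in order: stop shifting
                break
            a[j + 1] = a[j]    # shift the larger element one slot right
            j -= 1
        a[j + 1] = key
    return comparisons

print(insertion_sort(list(range(8))))          # best case (sorted): n - 1 = 7
print(insertion_sort(list(range(7, -1, -1))))  # worst case (reversed): n(n-1)/2 = 28
```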

Average Case

The expected runtime over all inputs, assuming a certain input distribution.

That means we average the running times of the algorithm over all possible inputs, weighted by their probability.
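
As a concrete worked example: for a successful linear search on a list of \(n\) distinct elements, where the target is equally likely to be at each position (a uniform distribution), the expected number of comparisons is

\[ \sum_{i=1}^{n} i \cdot \frac{1}{n} = \frac{1}{n} \cdot \frac{n(n+1)}{2} = \frac{n+1}{2}, \]

so on average we examine about half the list.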

Probability Distributions

  • Uniform distribution: Models inputs where every instance is equally likely. Most common in average-case analysis of general algorithms (see the simulation after this list).

  • Bernoulli distribution: Useful when inputs are sequences of binary events (success/failure). Most common in probabilistic or randomized algorithms.

  • Geometric distribution: Models the number of trials until the first success. Most common in probabilistic or randomized algorithms.
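
Tying this back to the linear-search example, a quick simulation (a sketch; the list size and trial count are arbitrary) estimates the average number of comparisons under a uniform distribution:

```python
import random

def comparisons_until_found(items, target):
    """Count how many comparisons a linear search makes to find target."""
    for count, value in enumerate(items, start=1):
        if value == target:
            return count
    return len(items)

n, trials = 100, 100_000
items = list(range(n))
# Uniform distribution: each position is equally likely to hold the target.
total = sum(comparisons_until_found(items, random.randrange(n)) for _ in range(trials))
print(total / trials)  # close to (n + 1) / 2 = 50.5
```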

Asymptotic Bounds

In algorithm analysis, we often use asymptotic bounds to describe how an algorithm’s running time grows as the input grows larger.

You can visualize this by plotting the runtime for increasing input sizes and comparing it to well-known functions like \(n\), \(n \log n\), or \(n^2\). This is the core idea behind asymptotic comparison of growth rates.
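
A few lines of Python (an illustrative sketch) make the gap between these growth rates tangible even without plotting:

```python
import math

# Tabulate how the classic growth functions scale as n increases.
for n in [10, 100, 1_000, 10_000]:
    print(f"n = {n:>6}   n log n = {n * math.log2(n):>12.0f}   n^2 = {n**2:>12}")
```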

Upper Bound

An upper bound usually describes the maximum amount of work a specific algorithm requires to solve a problem.

An upper bound for a problem is established by providing an algorithm that achieves that bound. Thus, an algorithm’s upper bound also provides an upper bound for the problem it solves.

Upper bounds are often proven using:

  • Direct algorithm analysis
  • Recurrence relations
  • Amortized analysis (see the example after this list)
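
As a taste of the last technique: amortized analysis shows that appending to a dynamic array that doubles its capacity when full takes \(O(1)\) time per append, even though an individual append occasionally triggers an expensive copy. Over \(n\) appends, the copies move at most

\[ 1 + 2 + 4 + \cdots + 2^{\lceil \log_2 n \rceil} \le 4n \]

elements in total, so the total work is \(O(n)\) and each append costs \(O(1)\) amortized.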

Lower Bound

When discussing lower bounds, it’s important to distinguish between a bound on a specific algorithm and a bound on the inherent difficulty of a problem.

For a Specific Algorithm

A lower bound for an algorithm indicates the minimum amount of work that specific algorithm requires to solve a problem in a given case, usually in the worst case.

Lower bounds for algorithms are often proven using:

  • Direct algorithm analysis

For a Problem

A lower bound for a problem indicates the minimum amount of work any correct algorithm requires to solve a problem, always in the worst case.

This is a much stronger statement about the fundamental, inherent difficulty of the problem itself. Lower bound proofs for problems establish the theoretical best performance achievable under a defined model of computation.

For example, searching for an element in an unsorted list requires \(n\) comparisons in the worst case, because any correct algorithm must check every element: an element left unchecked could have been exactly the one we were looking for.

Lower bounds for problems are often proven using:

  • Counting arguments
  • Decision trees (see the example after this list)
  • Adversary arguments
  • Information theory
  • Reductions (the technique behind complexity questions such as P vs. NP)
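
The decision-tree technique yields perhaps the most famous problem lower bound: any comparison-based sorting algorithm must distinguish all \(n!\) possible orderings of its input, so its decision tree needs at least \(n!\) leaves and therefore has height at least

\[ \log_2(n!) = \Theta(n \log n). \]

This is why no comparison sort can beat \(\Omega(n \log n)\) comparisons in the worst case.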

Asymptotic Notation

In theoretical computer science, unless specified otherwise, asymptotic notation usually refers to the worst case. In practice, especially with randomized algorithms or heuristics, expected or average case complexity may be more relevant.

Big O: \(O(f(n))\)

An upper bound showing that algorithm \(A\)’s running time grows no faster than some function \(f(n)\) as the input size grows.

The function \(f(n)\)’s graph is asymptotically above or equal to algorithm \(A\)’s graph.
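
Formally, \(T(n) \in O(f(n))\) means there exist constants \(c > 0\) and \(n_0\) such that \(T(n) \le c \cdot f(n)\) for all \(n \ge n_0\). For example, \(3n^2 + 10n \in O(n^2)\), witnessed by \(c = 4\) and \(n_0 = 10\).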

Big Omega: \(\Omega(f(n))\)

A lower bound showing that algorithm \(A\)’s running time grows no slower than some function \(f(n)\) as the input size grows.

The function \(f(n)\)’s graph is asymptotically below or equal to algorithm \(A\)’s graph.
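
Formally, \(T(n) \in \Omega(f(n))\) means there exist constants \(c > 0\) and \(n_0\) such that \(T(n) \ge c \cdot f(n)\) for all \(n \ge n_0\).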

Theta: \(\Theta(f(n))\)

This indicates a tight bound. In other words, for a specific algorithm, its upper and lower bounds are equivalent up to constant factors. This gives us a very precise understanding of how that algorithm’s runtime grows.

The function \(f(n)\)’s graph is asymptotically equivalent to algorithm \(A\)’s graph.
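
Equivalently, \(T(n) \in \Theta(f(n))\) exactly when \(T(n) \in O(f(n))\) and \(T(n) \in \Omega(f(n))\): there are constants \(c_1, c_2 > 0\) such that \(c_1 f(n) \le T(n) \le c_2 f(n)\) for all sufficiently large \(n\).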

Solving recurrence relations, such as by using the Master Theorem, gives us a tight bound on that algorithm’s performance.
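
For example, merge sort’s recurrence \(T(n) = 2T(n/2) + \Theta(n)\) falls into the balanced case of the Master Theorem (\(a = 2\), \(b = 2\), and the non-recursive work is \(\Theta(n^{\log_b a}) = \Theta(n)\)), which yields the tight bound \(T(n) = \Theta(n \log n)\).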

An algorithm is considered asymptotically optimal for a problem when it has a tight bound that matches the problem’s proven lower bound, meaning no algorithm can do asymptotically better. Note that if an algorithm’s worst-case upper bound matches the lower bound of the problem it solves, that bound is automatically tight. Merge sort, for example, runs in \(O(n \log n)\) time, matching the \(\Omega(n \log n)\) lower bound for comparison sorting, so it is asymptotically optimal among comparison sorts.

Little o: \(o(f(n))\)

Similar to Big O, but shows that algorithm \(A\)’s running time grows strictly slower than \(f(n)\).

The function \(f(n)\)’s graph is asymptotically above algorithm \(A\)’s graph.

For example, \(n\) grows strictly slower than \(n \log n\).

Little omega: \(\omega(f(n))\)

Similar to Big Omega, but shows that algorithm \(A\)’s running time grows strictly faster than \(f(n)\).

The function \(f(n)\)’s graph is asymptotically below algorithm \(A\)’s graph.

For example, \(n \log n\) grows strictly faster than \(n\).
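
When the relevant limit exists, both strict bounds have a convenient characterization: \(T(n) \in o(f(n))\) when \(\lim_{n \to \infty} T(n)/f(n) = 0\), and \(T(n) \in \omega(f(n))\) when that limit is \(\infty\). This gives a quick way to check, say, that \(n \in o(n \log n)\).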


Remember: cases describe an algorithm’s performance on specific types of input, while asymptotic bounds and their notation describe how its running time grows as the input size increases. I hope this helps clear up some common misconceptions!