Amortized analysis

In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, that is, how much of a resource (especially time or memory) it takes to execute.

The motivation for amortized analysis is that looking at the worst-case run time of a single operation can be too pessimistic. Instead, amortized analysis averages the running times of operations over a whole sequence, so that an occasional expensive operation is offset by the many cheap ones around it.

The technique was first formally introduced by Robert Tarjan in his 1985 paper "Amortized Computational Complexity",[1] which addressed the need for a more useful form of analysis than the common probabilistic methods used.

Amortization was initially used for very specific types of algorithms, particularly those involving binary trees and union operations.

The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.[4]

Consider a dynamic array that grows in size as more elements are added to it, such as ArrayList in Java or std::vector in C++.

If we started out with a dynamic array of size 4, we could push 4 elements onto it, and each operation would take constant time. Pushing a fifth element, however, would take longer, as the array would have to create a new array of double the current size (8), copy the old elements over, and then push the new element. The next three push operations would similarly take constant time, and then the subsequent addition would require another slow doubling of the array size.
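The growth pattern above can be made concrete with a minimal sketch (a hypothetical illustration, not any particular library's implementation) that counts how many element copies the doublings perform:

```python
# A minimal dynamic array sketch: the backing store doubles when full,
# mirroring the growth strategy described above.
class DynamicArray:
    def __init__(self):
        self.capacity = 4          # initial capacity, as in the example
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0            # total elements copied during resizes

    def push(self, value):
        if self.size == self.capacity:
            # Slow path: allocate a new array of double the size and copy.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
for i in range(1024):
    arr.push(i)
# Copies happen at sizes 4, 8, 16, ..., 512, so total copies are
# 4 + 8 + ... + 512 = 1020, less than the 1024 pushes performed:
# the copying cost is bounded by the number of pushes, i.e. amortized O(1) per push.
print(arr.copies)  # 1020
```

The doubling sum is a geometric series bounded by twice its largest term, which is why the occasional linear-time resize does not change the constant amortized cost per push.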

This reasoning can be formalized and generalized to more complicated data structures using amortized analysis.[4]

Shown is a Python 3 implementation of a queue, a FIFO data structure, built from two internal lists, input and output. The enqueue operation just pushes an element onto the input list; this operation does not depend on the lengths of either input or output and therefore runs in constant time.
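A minimal two-list sketch of such a queue (the names input and output follow the description in the text; this is a reconstruction consistent with that description, not a verbatim reproduction):

```python
class Queue:
    def __init__(self):
        self.input = []    # newly enqueued elements land here
        self.output = []   # elements are dequeued from here in FIFO order

    def enqueue(self, element):
        # Constant time: a single append to the input list.
        self.input.append(element)

    def dequeue(self):
        if not self.output:
            # Slow path: move everything from input to output, reversing
            # the order so that the oldest element ends up on top.
            while self.input:
                self.output.append(self.input.pop())
        # Constant time once output is non-empty.
        return self.output.pop()
```

Each element is appended to input once and moved to output at most once, so any sequence of n enqueue and dequeue operations performs O(n) list operations in total.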

The dequeue operation is more subtle: if the output array already contains elements, it pops one in constant time, but if output is empty, every element must first be moved from input to output, which takes time proportional to the number of elements moved. After copying n elements from input, however, we can perform n dequeue operations, each taking constant time, before the output array is empty again, so the occasional expensive copy is paid for by the cheap operations it enables and each dequeue runs in amortized constant time.
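The aggregate argument can be checked by counting list operations directly over a worst-case workload. A self-contained sketch (the helper name and operation encoding are hypothetical):

```python
def total_list_ops(ops):
    """Count append/pop operations performed by the two-list queue
    over a sequence of "enq" / "deq" commands."""
    input_list, output_list = [], []
    count = 0
    for op in ops:
        if op == "enq":
            input_list.append(None)
            count += 1                      # one append
        else:
            if not output_list:
                while input_list:
                    input_list.pop()
                    output_list.append(None)
                    count += 2              # one pop + one append per move
            output_list.pop()
            count += 1                      # one pop
    return count

n = 1000
ops = ["enq"] * n + ["deq"] * n
# Each element is appended once, moved once (pop + append), and popped once:
# at most 4 list operations per element, so amortized O(1) per queue operation.
print(total_list_ops(ops))  # 4000
```

However the enqueues and dequeues are interleaved, the total never exceeds four list operations per element, which is the aggregate-method bound in concrete form.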

Figure: Amortized analysis of the push operation for a dynamic array.