In combinatorial mathematics and theoretical computer science, heavy-light decomposition (also called heavy path decomposition) is a technique for decomposing a rooted tree into a set of paths.
Each non-leaf node chooses as its heavy edge the edge to the child with the greatest number of descendants (breaking ties arbitrarily); the chosen heavy edges link together to form the paths of the decomposition. Leaf nodes of the tree that are not the endpoint of a heavy edge may be considered as forming paths of length zero.
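The decomposition can be computed in linear time with two passes over the tree. The following sketch shows one common way to do so in Python, assuming the tree is given as a list of children per node with node 0 as the root; the identifiers heavy_child and chain_head are illustrative names rather than standard terminology.

```python
def heavy_path_decomposition(children, root=0):
    n = len(children)
    size = [1] * n

    # Iterative traversal in which parents appear before their descendants.
    order = []
    stack = [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])

    # Subtree sizes, accumulated from the leaves upward.
    for v in reversed(order):
        for c in children[v]:
            size[v] += size[c]

    # The heavy edge of each non-leaf node leads to the child whose subtree
    # is largest; -1 marks leaves, which have no heavy edge.
    heavy_child = [-1] * n
    for v in order:
        best = -1
        for c in children[v]:
            if best == -1 or size[c] > size[best]:
                best = c
        heavy_child[v] = best

    # chain_head[v] is the topmost node of the heavy path containing v.
    # A child continues its parent's path only if it is the heavy child.
    chain_head = [root] * n
    for v in order:
        for c in children[v]:
            chain_head[c] = chain_head[v] if c == heavy_child[v] else c
    return heavy_child, chain_head


# Example: in the tree 0-1, 0-2, 1-3, 1-4 rooted at 0, the heavy edges are
# 0-1 and 1-3, so the paths are 0-1-3 plus the length-zero paths {2} and {4}.
children = [[1, 2], [3, 4], [], [], []]
print(heavy_path_decomposition(children))  # ([1, 3, -1, -1, -1], [0, 0, 2, 0, 4])
```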
Equivalently, the path tree has height at most log₂ n. Heavy path decomposition was introduced by Sleator & Tarjan (1983) as part of the amortized analysis of their link/cut tree structure,[2] and by Harel & Tarjan (1984) as part of their data structure for lowest common ancestors.[3] The link/cut tree data structure uses a partition of a dynamic tree into paths that is not necessarily the heavy path decomposition; its analysis uses a potential function measuring its distance from the heavy path decomposition, and the small height of the path tree implies that each data structure operation performs only a small number of steps that cannot be charged against improvements to this function.[2]
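The logarithmic height of the path tree is what makes the decomposition useful algorithmically: walking from any node toward the root crosses at most log₂ n light edges, and therefore meets only logarithmically many distinct heavy paths. The short sketch below makes this hop-by-hop traversal explicit; the parent and chain_head arrays are assumed to come from a decomposition such as the sketch above, with parent[root] = -1.

```python
def paths_to_root(v, parent, chain_head):
    """Return the heads of the heavy paths met while walking from v to the root.

    The returned list has logarithmic length, because each move to the next
    path crosses one light edge of the tree.
    """
    heads = []
    while v != -1:
        heads.append(chain_head[v])
        # Jump to the parent of the current path's head (crossing a light
        # edge), or to -1 once the root's path has been recorded.
        v = parent[chain_head[v]]
    return heads


# Example, reusing the five-node tree from above: from node 4 the walk meets
# the length-zero path {4} and then the heavy path headed at the root.
parent = [-1, 0, 0, 1, 1]
chain_head = [0, 0, 2, 0, 4]
print(paths_to_root(4, parent, chain_head))  # [4, 0]
```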
In the lowest common ancestor data structure, the decomposition is used to embed the input tree into a complete binary tree of logarithmic depth, allowing each query to be solved by constant-time bitwise operations.[3]
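The constant query time of Harel & Tarjan relies on the bitwise structure of that balanced embedding; even without it, the decomposition alone already supports lowest common ancestor queries in logarithmic time, by repeatedly lifting whichever query node lies on the deeper heavy path. The sketch below illustrates this simpler approach (it is not the Harel–Tarjan structure); parent, depth, and chain_head are assumed precomputed as in the sketches above.

```python
def lca(u, v, parent, depth, chain_head):
    """Lowest common ancestor of u and v, in time proportional to the number
    of heavy paths between the two nodes and the root, i.e. O(log n)."""
    while chain_head[u] != chain_head[v]:
        # Lift the node whose heavy path starts deeper onto the path above it.
        if depth[chain_head[u]] < depth[chain_head[v]]:
            u, v = v, u
        u = parent[chain_head[u]]
    # Both nodes now lie on the same heavy path; the shallower one is the LCA.
    return u if depth[u] <= depth[v] else v


# Example, again with the tree 0-1, 0-2, 1-3, 1-4 rooted at 0.
parent = [-1, 0, 0, 1, 1]
depth = [0, 1, 1, 2, 2]
chain_head = [0, 0, 2, 0, 4]
print(lca(3, 4, parent, depth, chain_head))  # 1
print(lca(3, 2, parent, depth, chain_head))  # 0
```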
Subsequent applications of heavy path decomposition have included solving the level ancestor problem,[4] computing the edit distance between trees,[5][6] graph drawing and greedy embedding,[7][8][9] finding a path near all nodes of a given graph,[10] fault diagnosis in fiber-optic communication networks,[1] and decoding grammar-based codes,[11] among others.