In graph theory and theoretical computer science, the level ancestor problem is the problem of preprocessing a given rooted tree T into a data structure that can determine the ancestor of a given node v at a given depth d, that is, at a given distance from the root of the tree; this query is written LA(v, d).
The simplest solution requires no preprocessing at all: by following parent pointers from a node, every ancestor on the path to the root can be visited and therefore reported, so a query takes time proportional to the depth of the node.
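This baseline can be sketched as follows (a minimal illustration; the parent and depth arrays and the node numbering are assumptions, not part of the original text):

```python
def level_ancestor_naive(parent, depth, v, d):
    """Walk parent pointers from v until reaching depth d.

    No preprocessing beyond storing parents and depths;
    each query costs O(depth(v) - d) time."""
    if d > depth[v]:
        raise ValueError("v has no ancestor at depth d")
    while depth[v] > d:
        v = parent[v]
    return v

# A path 0 - 1 - 2 - 3 with node 0 as the root:
parent = [None, 0, 1, 2]
depth = [0, 1, 2, 3]
assert level_ancestor_naive(parent, depth, 3, 1) == 1
```

Each query walks the path one edge at a time, so the cost is linear in the distance climbed.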
Storing each root-to-node path in a skew binary random access list still allows the tree to be extended downward one O(1) step at a time, but now allows a query to proceed in O(log p) time, where p is the distance from v to the requested depth.
This approach is attractive when the tree is particularly wide or will be extended online and so cannot be effectively preprocessed, since both storage and access time are determined purely by path length.
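A sketch of such a structure is given below: a small persistent skew binary random access list (in the style of Myers and, later, Okasaki), where each node's path to the root is a list with the node itself at the head. The nested-tuple representation and the path-growing driver at the end are illustrative assumptions, not part of the original text:

```python
def cons(x, lst):
    """O(1) push: a list is a chain ((size, tree), rest) of perfect
    binary trees; merge the two leading trees when their sizes match."""
    if lst is not None and lst[1] is not None and lst[0][0] == lst[1][0][0]:
        (s, t1), ((_, t2), rest) = lst[0], lst[1]
        return ((1 + 2 * s, (x, t1, t2)), rest)
    return ((1, (x, None, None)), lst)

def lookup(lst, i):
    """O(log n) random access: skip whole trees, then descend one."""
    while lst is not None:
        (size, tree), lst = lst
        if i >= size:
            i -= size
            continue
        while size > 1:
            val, left, right = tree
            if i == 0:
                return val
            size //= 2          # each subtree holds (size - 1) / 2 nodes
            i -= 1
            tree, i = (left, i) if i < size else (right, i - size)
        return tree[0]
    raise IndexError(i)

# Grow a path 0 - 1 - 2 - 3 one O(1) cons at a time; each node keeps
# the persistent list of its ancestors, itself first (hypothetical data).
lists = {0: cons(0, None)}
for v, p in [(1, 0), (2, 1), (3, 2)]:
    lists[v] = cons(v, lists[p])
depth = {0: 0, 1: 1, 2: 2, 3: 3}
assert lookup(lists[3], depth[3] - 1) == 1   # LA(3, 1)
```

Because the lists are persistent, extending the tree below a node shares all existing structure, and LA(v, d) becomes lookup(list of v, depth(v) − d).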
The jump pointer algorithm associates with each node v an array of O(log n) pointers whose ith element points to the 2^i-th ancestor of v. Such a data structure lets us jump halfway up the tree from any given node.
When the algorithm is asked to process a query, we repeatedly jump up the tree using these pointers, decomposing the remaining distance in binary; since each jump at least halves that distance, a query is answered in O(log n) time.
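The pointer table and the query loop can be sketched as follows (the parent-array encoding, with −1 for the root, and the example tree are assumptions for illustration):

```python
def build_jump_pointers(parent, n):
    """up[i][v] = the 2^i-th ancestor of v, or -1 above the root."""
    levels = max(1, n.bit_length())
    up = [[-1] * n for _ in range(levels)]
    up[0] = list(parent)
    for i in range(1, levels):
        for v in range(n):
            mid = up[i - 1][v]
            up[i][v] = up[i - 1][mid] if mid != -1 else -1
    return up

def level_ancestor(up, depth, v, d):
    """Climb depth(v) - d edges by decomposing the distance in binary."""
    k = depth[v] - d
    assert k >= 0, "v has no ancestor at depth d"
    i = 0
    while k:
        if k & 1:
            v = up[i][v]
        k >>= 1
        i += 1
    return v

# Hypothetical tree: 0 is the root with children 1 and 2, and so on.
parent = [-1, 0, 0, 1, 1, 3]
depth = [0, 1, 1, 2, 2, 3]
up = build_jump_pointers(parent, 6)
assert level_ancestor(up, depth, 5, 1) == 1
```

Preprocessing fills an O(n log n) table, after which every query performs at most log n jumps.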
The ladder algorithm [1] is based on the idea of simplifying a tree into a collection of paths.
Consider a path P consisting of n nodes rooted at a node r. We can store the path in an array of size n called Ladder, indexed by depth, and quickly answer a level ancestor query LA(v, d) by returning Ladder[d], provided that d ≤ depth(v).
The first stage starts off by finding the longest root-to-leaf path in the tree; removing this path and repeating on each remaining subtree decomposes the tree into disjoint paths.
In order to reach the optimal query time, these paths are processed in a second stage, described below.
In the second stage, each path is extended upward to include as many of its top node's ancestors as the path has nodes. This extends each array to at most twice its original size, so all the ladders together contain at most 2n nodes.
Although a node v may now be listed in multiple ladders, its own ladder is the one associated with v in the first stage of the algorithm.
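Both stages and the resulting query can be sketched as follows. Unlike the depth-indexed array described above, this sketch records each node's position within its own ladder, and it assumes nodes are numbered so that parent[v] < v (e.g. BFS order); the example tree is hypothetical:

```python
def build_ladders(parent, n):
    """Stage 1: long-path decomposition; stage 2: double each path
    upward into a ladder. Assumes parent[v] < v and parent[0] == -1."""
    depth, height = [0] * n, [0] * n
    for v in range(1, n):
        depth[v] = depth[parent[v]] + 1
    for v in range(n - 1, 0, -1):          # heights, bottom-up
        height[parent[v]] = max(height[parent[v]], height[v] + 1)
    long_child = [-1] * n                  # tallest child of each node
    for v in range(n - 1, 0, -1):
        p = parent[v]
        if long_child[p] == -1 or height[v] > height[long_child[p]]:
            long_child[p] = v
    ladders, ladder_of = [], [None] * n    # ladder_of[v] = (index, pos)
    for v in range(n):
        if v != 0 and long_child[parent[v]] == v:
            continue                       # v lies inside another long path
        path, u = [], v
        while u != -1:                     # stage 1: follow tallest children
            path.append(u)
            u = long_child[u]
        ext, u = [], parent[v] if v else -1
        while u != -1 and len(ext) < len(path):   # stage 2: double upward
            ext.append(u)
            u = parent[u]
        ladder = list(reversed(ext)) + path
        idx = len(ladders)
        ladders.append(ladder)
        for pos in range(len(ext), len(ladder)):
            ladder_of[ladder[pos]] = (idx, pos)   # own path only
    return depth, ladders, ladder_of

def la_ladder(parent, depth, ladders, ladder_of, v, d):
    """Climb ladders until reaching depth d; each jump to a ladder top
    at least doubles the height below the current node."""
    while depth[v] > d:
        idx, pos = ladder_of[v]
        climb = depth[v] - d
        if climb <= pos:
            return ladders[idx][pos - climb]
        v = ladders[idx][0]                # top of the ladder
        if depth[v] > d:
            v = parent[v]                  # step above the ladder
    return v

# Hypothetical tree in BFS numbering: 0 is the root.
parent = [-1, 0, 0, 1, 1, 3]
depth, ladders, ladder_of = build_ladders(parent, 6)
assert la_ladder(parent, depth, ladders, ladder_of, 5, 1) == 1
```

Ladders alone answer queries in O(log n); combining them with jump pointers is what brings the query time down to O(1).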
The main observation is that LA(v,d) is the first node of depth d that appears in the Euler tour after the last appearance of v. Thus, by constructing the Euler tour and associated information on depth, the problem is reduced to a query on arrays, named find smaller (FS).
This idea yields O(1) query time, with a preprocessing algorithm of complexity O(n log n).
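The reduction can be illustrated with a naive linear-scan stand-in for the FS query (the children-dictionary encoding and the example tree are assumptions; a real implementation replaces the scan with the constant-time find smaller machinery):

```python
def euler_tour(children, root=0):
    """Iterative DFS recording each node every time the walk visits it,
    together with its depth."""
    tour, depths = [], []
    stack = [(root, 0, iter(children.get(root, ())))]
    while stack:
        v, d, it = stack[-1]
        tour.append(v)
        depths.append(d)
        child = next(it, None)
        if child is None:
            stack.pop()
        else:
            stack.append((child, d + 1, iter(children.get(child, ()))))
    return tour, depths

def la_euler(tour, depths, last, v, d):
    """LA(v, d): first entry of depth d after the last appearance of v.
    The linear scan is a placeholder for a constant-time FS query."""
    for i in range(last[v], len(tour)):
        if depths[i] == d:
            return tour[i]
    raise ValueError("no ancestor at that depth")

# Hypothetical tree: 0 is the root, 1 and 2 its children, and so on.
children = {0: [1, 2], 1: [3, 4], 3: [5]}
tour, depths = euler_tour(children)
last = {v: i for i, v in enumerate(tour)}   # later index wins: last visit
assert la_euler(tour, depths, last, 5, 1) == 1
```

After the last appearance of v the tour only revisits v's ancestors before entering sibling subtrees, all of whose entries are deeper than v, so the first entry at depth d is indeed the requested ancestor.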