For example, the problem of determining which window of a graphical user interface contains a given mouse click can be formulated as an instance of point location, with a subdivision formed by the visible parts of each window, although specialized data structures may be more appropriate than general-purpose point location data structures in this application.
A brute force search of each face using the point-in-polygon algorithm is possible, but usually not feasible for subdivisions of high complexity.
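The brute force approach can be sketched as follows; the helper names are illustrative, not from any particular library. Every face is checked with the classic ray-casting point-in-polygon test, so a query costs time proportional to the total number of edges in the subdivision.

```python
# Illustrative sketch: brute-force point location by testing every face
# with the ray-casting point-in-polygon test. Total query time is O(n),
# which is why this is impractical for subdivisions of high complexity.

def point_in_polygon(q, polygon):
    """Ray casting: count crossings of a rightward ray from q."""
    x, y = q
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line through q
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def locate_brute_force(q, faces):
    """Return the index of the face containing q, or None."""
    for i, face in enumerate(faces):
        if point_in_polygon(q, face):
            return i
    return None
```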
Several different approaches lead to optimal data structures, with O(n) storage space and O(log n) query time, where n is the total number of vertices in S. For simplicity, we assume that the planar subdivision is contained inside a square bounding box.
The simplest and earliest data structure to achieve O(log n) query time was discovered by Dobkin and Lipton in 1976; its slab-based approach, however, requires O(n²) storage space in the worst case.
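Dobkin and Lipton's structure splits the plane into vertical slabs by drawing a vertical line through every vertex; within one slab the segments that cross it do not intersect and are therefore totally ordered from bottom to top, so a query reduces to two binary searches. A minimal sketch, with illustrative names and precomputed per-slab segment lists:

```python
import bisect

# Sketch of the slab method: slab_xs holds the sorted x-coordinates of the
# vertical lines, and slab_segments[i] holds the segments crossing slab i,
# sorted bottom to top. Segments are assumed non-vertical and non-crossing,
# and each segment stored for a slab spans the full slab width.

def above(q, seg):
    """True if point q is strictly above segment seg = ((x1,y1),(x2,y2))."""
    (x1, y1), (x2, y2) = seg
    # y of the segment at q's x-coordinate (x1 != x2 since the segment spans the slab)
    y_seg = y1 + (y2 - y1) * (q[0] - x1) / (x2 - x1)
    return q[1] > y_seg

def locate(q, slab_xs, slab_segments):
    """Return (slab index, position); position j means q lies in the gap
    between segment j-1 and segment j of that slab."""
    i = bisect.bisect_right(slab_xs, q[0]) - 1   # first binary search: find the slab
    segs = slab_segments[i]
    lo, hi = 0, len(segs)
    while lo < hi:                               # second binary search: find the gap
        mid = (lo + hi) // 2
        if above(q, segs[mid]):
            lo = mid + 1
        else:
            hi = mid
    return i, lo
```

The O(n²) worst-case storage arises because each of the O(n) slabs may store its own copy of O(n) segments; the persistence-based approach described next avoids this duplication.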
More specifically, Sarnak and Tarjan sweep a vertical line l from left to right over the plane, while maintaining the segments that intersect l in a persistent red-black tree.
Herbert Edelsbrunner, Leonidas J. Guibas, and Jorge Stolfi discovered an optimal data structure that only uses the edges of a monotone subdivision.
Third, testing whether a point lies to the left or to the right of a monotone chain takes O(n) time if performed naïvely, but a binary search along the chain's edges reduces this to O(log n). Since locating the point requires such a test against O(log n) chains in a nested binary search, the total query time is O(log² n).
To achieve O(log n) query time, we need to use fractional cascading, keeping pointers between the edges of different monotone chains.
Kirkpatrick gives a data structure for point location in triangulated subdivisions with O(n) storage space and O(log n) query time.
Because the subdivision is formed by triangles, a greedy algorithm can find an independent set of low-degree vertices that contains a constant fraction of the vertices.
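The greedy step can be sketched as follows, assuming adjacency sets for the triangulation's vertices; the degree-8 threshold used here is the one common in textbook presentations of Kirkpatrick's structure, and planarity guarantees that a constant fraction of the vertices is selected:

```python
# Sketch of the greedy independent-set step in Kirkpatrick's hierarchy:
# repeatedly take an unmarked vertex of degree at most 8 (skipping any
# forbidden vertices, e.g. the corners of the bounding triangle) and
# mark it together with all of its neighbours.

def low_degree_independent_set(adjacency, forbidden=frozenset()):
    """adjacency: dict mapping each vertex to the set of its neighbours."""
    marked = set(forbidden)
    independent = []
    for v, nbrs in adjacency.items():
        if v not in marked and len(nbrs) <= 8:
            independent.append(v)
            marked.add(v)
            marked.update(nbrs)   # neighbours may no longer be chosen
    return independent
```

Removing the selected vertices and retriangulating the resulting holes yields the next, geometrically smaller, level of the hierarchy.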
Backwards analysis, a form of analysis commonly used for this sort of randomized incremental geometry algorithm, shows that the expected number of trapezoids created for each insertion is bounded by a constant, and therefore that the total number of steps of this algorithm, outside of the point locations, is linear.
The space for the data structure is proportional to the number of trapezoids created throughout this refinement process, which in expectation is O(n).
There are no known general point location data structures with linear space and logarithmic query time for dimensions greater than 2.
Therefore, we need to sacrifice either query time or storage space, or else restrict ourselves to some less general type of subdivision.
In the same fashion as in the slab decomposition, the similarity between consecutive data structures can be exploited in order to reduce the storage space to O(n log n), but the query time increases to O(log² n).
An arrangement of n hyperplanes defines O(nᵈ) cells, but point location can be performed in O(log n) time with O(nᵈ) space by using Chazelle's hierarchical cuttings.
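To make the O(nᵈ) cell bound concrete, a classical counting formula gives the exact maximum number of full-dimensional cells produced by n hyperplanes in general position in d dimensions; this short sketch simply evaluates it:

```python
from math import comb

# Maximum number of cells in an arrangement of n hyperplanes in R^d:
# sum_{i=0}^{d} C(n, i), attained when the hyperplanes are in general
# position. For fixed d this is Theta(n^d).

def max_cells(n, d):
    return sum(comb(n, i) for i in range(d + 1))
```

For example, n points on a line (d = 1) cut it into n + 1 intervals, and 3 lines in general position in the plane (d = 2) create 7 cells.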