One major practical drawback is its O(b^d) space complexity, where d is the depth of the solution (the length of the shortest path) and b is the branching factor (the maximum number of successors for a state), as it stores all generated nodes in memory.
Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance,[2] as well as by memory-bounded approaches; however, A* is still the best solution in many cases.[3]
Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968.
For Dijkstra's algorithm, since the entire shortest-path tree is generated, every node is a goal, and there can be no specific-goal-directed heuristic.
A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions.
Nils Nilsson originally proposed using the Graph Traverser algorithm[5] for Shakey's path planning.[7]
Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions.[8]
The original 1968 A* paper[4] contained a theorem stating that no A*-like algorithm[a] could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen.
A "correction" was published a few years later[9] claiming that consistency was not required, but this was shown to be false in 1985 in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm.[10]
A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.).
Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand.
To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor.
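As a concrete illustration, here is a minimal Python sketch of this scheme (the graph format, function names, and the example graph are illustrative assumptions, not from the article): a binary-heap priority queue repeatedly selects the node with the smallest estimated total cost g + h, and a predecessor map is walked backwards from the goal to recover the path.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search on a weighted graph given as {node: [(neighbor, cost), ...]}.

    Illustrative sketch: returns (cost, path) or (None, None) if unreachable.
    """
    open_heap = [(h(start), start)]      # priority = g + h
    g = {start: 0}                       # cheapest known cost from start
    came_from = {}                       # predecessor links for path recovery
    closed = set()
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if node == goal:
            # Walk predecessor links back to the start to recover the path.
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return g[goal], path[::-1]
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            tentative = g[node] + cost
            if tentative < g.get(nbr, float("inf")):
                g[nbr] = tentative
                came_from[nbr] = node
                heapq.heappush(open_heap, (tentative + h(nbr), nbr))
    return None, None
```

With the zero heuristic h(x) = 0 (trivially admissible and consistent), this reduces to Dijkstra's algorithm.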
With a consistent heuristic, A* is guaranteed to find an optimal path without processing any node more than once and A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) = d(x, y) + h(y) − h(x).
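The reduced-cost equivalence can be checked numerically. The sketch below, using hypothetical city coordinates (an assumption for illustration), verifies that for a straight-line-distance heuristic every reduced cost d(x, y) + h(y) − h(x) is non-negative, which is exactly the consistency condition rearranged:

```python
import math

# Hypothetical nodes with 2-D coordinates; h is straight-line distance to goal G.
coords = {'A': (0, 0), 'B': (1, 0), 'C': (2, 0), 'G': (3, 0)}
edges = [('A', 'B', 1.0), ('B', 'C', 1.5), ('C', 'G', 1.0), ('A', 'C', 2.5)]

def h(n):
    (x1, y1), (x2, y2) = coords[n], coords['G']
    return math.hypot(x1 - x2, y1 - y2)

# Consistency (h(x) <= d(x, y) + h(y)) makes every reduced cost non-negative,
# so Dijkstra's algorithm run on the reweighted edges behaves like A*.
for x, y, d in edges:
    assert d + h(y) - h(x) >= 0
```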
Re-expanding a node when it is rediscovered along a cheaper path is essential to guarantee that the path returned is optimal if the heuristic function is admissible but not consistent.
An example of the A* algorithm in action, where nodes are cities connected with roads and h(x) is the straight-line distance to the target point (key: green: start; blue: goal; orange: visited).
The A* algorithm has real-world applications.
The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations.
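One common convention (an illustrative choice, not prescribed by the article) is to break ties among equal f-values in favor of the larger g-value, i.e. the node farther along its path, which tends to reduce expansions near the goal. A minimal sketch:

```python
import heapq
import itertools

# Tie-breaking sketch: among equal f-values prefer the larger g; the counter
# breaks any remaining ties in FIFO order and keeps tuples comparable.
counter = itertools.count()

def push(open_heap, f, g, node):
    # -g makes larger g sort first among equal f in a min-heap.
    heapq.heappush(open_heap, (f, -g, next(counter), node))

open_heap = []
push(open_heap, 5.0, 2.0, 'A')
push(open_heap, 5.0, 4.0, 'B')   # same f, larger g: popped first
f, neg_g, _, node = heapq.heappop(open_heap)
assert node == 'B'
```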
A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time.
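A frequently used alternative to an indexed heap (a standard implementation trick, assumed here rather than taken from the text) is lazy deletion: instead of decreasing a key in place, push a duplicate entry with the better priority and discard stale entries as they surface:

```python
import heapq

# Lazy deletion: duplicates are allowed in the heap; an entry is "stale" if a
# better priority for its node has been recorded since it was pushed.
open_heap = []
best_g = {}

def relax(node, g):
    if g < best_g.get(node, float("inf")):
        best_g[node] = g
        heapq.heappush(open_heap, (g, node))   # push a duplicate, don't rekey

relax('A', 7)
relax('A', 3)                                  # better priority: push again
while open_heap:
    g, node = heapq.heappop(open_heap)
    if g > best_g[node]:
        continue                               # stale entry; skip it
    assert (g, node) == (3, 'A')
    break
```

This trades a somewhat larger heap for much simpler bookkeeping, and is often faster in practice than maintaining a position-tracking hash table.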
Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time.[12][13]
General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value: every time a node is processed, C is assigned as the priority of each of its newly discovered successors and is decremented after each assignment, so more recently discovered nodes receive lower values and are expanded first.
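A sketch of this counter-based scheme, under the assumption that each newly discovered node's priority is the current counter value and that the counter decreases after every assignment (so the most recently discovered nodes surface first from a min-heap):

```python
import heapq

def dfs_order(graph, start):
    """Emulate depth-first search with a best-first loop.

    Illustrative sketch; graph is {node: [neighbors]}. Each newly discovered
    node gets the current value of the large counter C as its priority, and C
    is decremented after every assignment, so later discoveries have smaller
    keys and are expanded first, yielding a depth-first expansion order.
    """
    C = 10**9                      # the "very large value"
    heap = [(C, start)]
    seen = {start}
    order = []
    while heap:
        _, node = heapq.heappop(heap)
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                heapq.heappush(heap, (C, nbr))
                C -= 1             # decrease after every assignment
    return order
```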
On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists.
When the heuristic is admissible but not consistent, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems.
However, more recent research found that this pathological case only occurs in certain contrived situations where the edge weight of the search graph is exponential in the size of the graph and that certain inconsistent (but admissible) heuristics can lead to a reduced number of node expansions in A* searches.
To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion.
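One standard relaxation of this kind is weighted A* (a well-known variant, named here for illustration rather than defined above): it inflates the heuristic by a factor ε > 1, trading optimality for speed, and the returned cost is then at most ε times optimal.

```python
import heapq

def weighted_a_star(graph, h, start, goal, eps=1.5):
    """Relaxed A*: priority f = g + eps*h with eps > 1.

    Illustrative sketch; eps and the graph format {node: [(nbr, cost), ...]}
    are assumptions. The search expands fewer nodes but is only
    eps-admissible: the returned cost is at most eps times the optimum.
    """
    heap = [(eps * h(start), 0, start)]
    best = {start: 0}
    while heap:
        _, g, node = heapq.heappop(heap)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue                       # stale entry
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(heap, (ng + eps * h(nbr), ng, nbr))
    return None
```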
In the worst case of an unbounded search space, the number of nodes expanded is exponential in the depth of the solution (the shortest path) d: O(b^d), where b is the branching factor (the average number of successors per state).
Its quality can be expressed in terms of the effective branching factor b*, which can be determined empirically for a problem instance by measuring the number of nodes generated by expansion, N, and the depth of the solution, d, then solving N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b*.[25] Good heuristics are those with a low effective branching factor (the optimal being b* = 1).
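The defining equation N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d has no simple closed form for b*, but it can be solved numerically; a minimal bisection sketch (the function name and tolerance are illustrative):

```python
def effective_branching_factor(N, d, tol=1e-9):
    """Solve N + 1 = 1 + b + b**2 + ... + b**d for b by bisection.

    Illustrative sketch: N is the number of nodes generated, d the solution
    depth. The polynomial is increasing in b, so bisection converges.
    """
    def total(b):
        return sum(b**i for i in range(d + 1))

    lo, hi = 1.0, float(N + 1)       # total(1) = d+1 <= N+1 <= total(N+1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a uniform tree of depth 2 with branching factor 2 generates N = 6 nodes beyond the root, and the solver recovers b* = 2.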
In other words, the error of h will not grow faster than the logarithm of the "perfect heuristic" h* that returns the true distance from x to the goal.
[18][24] The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory.
A* is often used for the common pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm.[30]
A* can also be adapted to a bidirectional search algorithm, but special care needs to be taken for the stopping criterion.