| A* search | |
|---|---|
| Name | A* search |
| Authors | Peter Hart, Nils Nilsson, Bertram Raphael |
| Year | 1968 |
| Paradigm | Best-first search, Graph search, Pathfinding |
| Complexity | O(b^d) worst case (b: branching factor, d: solution depth) |
| Related | Dijkstra's algorithm, Uniform-cost search, Greedy best-first search |
A* search is a best-first graph search algorithm that finds a least-cost path from a given start node to a goal node by combining the path cost accumulated so far with a heuristic estimate of the remaining cost. Developed in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael at SRI International (then the Stanford Research Institute), it generalizes earlier work such as Dijkstra's algorithm and Greedy best-first search, and is complete and optimal when supplied with an admissible heuristic. A* has become foundational in fields including robotics, computer graphics, artificial intelligence, operations research, and video game pathfinding.
A* maintains a frontier (open set) of discovered nodes prioritized by f(n) = g(n) + h(n), where g(n) is the cost of the best known path from the start to n and h(n) is a heuristic estimate of the cost from n to the goal; nodes are expanded in order of increasing f. A closed set records already-expanded nodes so that states settled with a lower cost are not revisited, analogous to the visited-state bookkeeping in Breadth-first search, Depth-first search, and Uniform-cost search. A* builds on earlier heuristic-search work in artificial intelligence, including problem-solving and game-playing research associated with MIT, RAND Corporation, and Bell Labs.
The canonical A* loop pops the lowest-f node from the frontier, checks whether it is the goal, generates its successors, computes tentative g-values, and updates the frontier and closed set accordingly. Implementations typically back the frontier with a priority queue such as a binary heap or, for specialized workloads, a Fibonacci heap or bucket-based open list. Correctness rests on two conditions: admissibility of the heuristic guarantees that the first solution found is optimal, and the stronger consistency (monotonicity) condition ensures that no node ever needs to be re-expanded. A* appears throughout practical engineering, from robot motion planners to commercial video games.
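The loop above can be sketched in a few lines of Python; the `astar` and `grid_neighbors` names and the 4x4 grid with walls are illustrative assumptions, not a reference implementation:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: `neighbors(n)` yields (successor, step_cost) pairs,
    `h(n)` is a heuristic estimate of the cost from n to the goal."""
    open_heap = [(h(start), 0, start)]           # entries are (f, g, node)
    g_best = {start: 0}                          # best known g-value per node
    came_from = {}
    closed = set()                               # already-expanded nodes
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                         # reconstruct path on success
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1], g
        if node in closed:
            continue                             # stale heap entry, skip it
        closed.add(node)
        for succ, cost in neighbors(node):
            tentative_g = g + cost
            if tentative_g < g_best.get(succ, float("inf")):
                g_best[succ] = tentative_g
                came_from[succ] = node
                heapq.heappush(open_heap,
                               (tentative_g + h(succ), tentative_g, succ))
    return None, float("inf")

# Example: 4-connected 4x4 grid with two walls; Manhattan distance
# is admissible here, so the returned path is optimal.
WALLS = {(1, 1), (1, 2)}
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in WALLS:
            yield (nx, ny), 1

path, cost = astar((0, 0), (3, 3), grid_neighbors,
                   lambda p: abs(p[0] - 3) + abs(p[1] - 3))
```

Stale heap entries are handled lazily (skip on pop) rather than with a decrease-key operation, which keeps the code simple at the cost of some duplicate pushes.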
Admissible heuristics never overestimate the true cost to the goal; classic examples include Manhattan distance (4-connected grids), Euclidean distance, and Chebyshev distance (8-connected grids with unit diagonal cost). Consistency (monotonicity) requires h(n) ≤ c(n, n') + h(n') for every edge, a triangle-inequality-like condition; it guarantees that f-values along any path are nondecreasing, which simplifies the proof of optimality and ensures no node is re-expanded. Heuristics often derive from relaxed problem formulations, pattern databases, or domain abstractions; machine-learned heuristics can offer stronger estimates but generally require additional verification to preserve admissibility guarantees.
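The three grid heuristics named above can be written directly; the function names are illustrative:

```python
import math

# Classic grid heuristics; `a` and `b` are (x, y) tuples.

def manhattan(a, b):
    # Admissible on 4-connected grids with unit step cost.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Admissible whenever movement cost is at least straight-line distance.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def chebyshev(a, b):
    # Admissible on 8-connected grids where diagonal moves cost 1.
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))
```

On any given domain, chebyshev ≤ euclidean ≤ manhattan, so each is admissible only when its assumptions about movement cost hold.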
Worst-case time and space complexity are exponential in the depth of the solution: with an uninformative heuristic, A* can expand a number of nodes exponential in solution depth, comparable to exhaustive search. Performance depends critically on heuristic quality: a perfect heuristic lets A* expand only nodes on an optimal path, while the null heuristic h(n) = 0 reduces it to uniform-cost search (Dijkstra's algorithm). Empirical comparisons typically measure node expansions, memory use, and wall-clock time across benchmark maps and puzzles such as the 15 puzzle and Rubik's Cube-style domains.
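A small self-contained experiment illustrates this dependence on heuristic quality (the `astar_cost_and_expansions` helper and the open 10x10 grid are assumptions for illustration): Manhattan distance and the null heuristic find the same optimal cost, but the informed search expands far fewer nodes:

```python
import heapq

def astar_cost_and_expansions(start, goal, neighbors, h):
    """Run A*, returning (optimal cost, number of node expansions).
    Ties on f prefer larger g, so an informed heuristic drives deep quickly."""
    heap = [(h(start), 0, start)]                # entries are (f, -g, node)
    best = {start: 0}
    closed = set()
    expansions = 0
    while heap:
        f, neg_g, node = heapq.heappop(heap)
        if node in closed:
            continue
        closed.add(node)
        expansions += 1
        g = -neg_g
        if node == goal:
            return g, expansions
        for succ, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(succ, float("inf")):
                best[succ] = ng
                heapq.heappush(heap, (ng + h(succ), -ng, succ))
    return float("inf"), expansions

# Open 10x10 grid, unit moves, goal in the far corner.
N, GOAL = 10, (9, 9)
def nbrs(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < N and 0 <= q[1] < N:
            yield q, 1

manhattan = lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])
cost_h, exp_h = astar_cost_and_expansions((0, 0), GOAL, nbrs, manhattan)
cost_0, exp_0 = astar_cost_and_expansions((0, 0), GOAL, nbrs, lambda p: 0)
# Same optimal cost; the informed search expands far fewer nodes.
```

With the null heuristic the search degenerates to uniform-cost search and expands essentially the whole grid before reaching the corner.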
Numerous variants address memory, optimality, and dynamic environments. Iterative Deepening A* (IDA*) reduces memory demands by running depth-first searches under increasing f-bounds; memory-bounded variants such as SMA* (Simplified Memory-Bounded A*) discard the worst frontier nodes when memory is exhausted; and Weighted A* inflates the heuristic by a factor w > 1, trading optimality for speed while guaranteeing solutions within a factor w of optimal. Bidirectional A* searches forward from the start and backward from the goal, with front-to-end and front-to-front variants differing in how each direction estimates remaining cost. Incremental variants such as D* and D* Lite reuse earlier search effort to replan efficiently as the map changes, and were developed for mobile-robot navigation. Hierarchical and abstraction-based methods, such as HPA*, decompose large maps into clusters to scale pathfinding and routing.
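Weighted A* can be sketched as a one-line change to the priority (the `weighted_astar` helper and the open 5x5 grid are illustrative assumptions): with an admissible heuristic and w = 1.5, the returned cost is guaranteed to be at most 1.5 times optimal:

```python
import heapq

def weighted_astar(start, goal, neighbors, h, w=1.5):
    """Weighted A*: prioritizes f(n) = g(n) + w*h(n). With an admissible h,
    the returned cost is at most w times the optimal cost."""
    heap = [(w * h(start), 0, start)]            # entries are (f, g, node)
    best = {start: 0}
    closed = set()
    while heap:
        f, g, node = heapq.heappop(heap)
        if node == goal:
            return g
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(succ, float("inf")):
                best[succ] = ng
                heapq.heappush(heap, (ng + w * h(succ), ng, succ))
    return float("inf")

# Open 5x5 grid: the optimal cost is 8, so the result is at most 1.5 * 8 = 12.
def nbrs(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < 5 and 0 <= q[1] < 5:
            yield q, 1

cost = weighted_astar((0, 0), (4, 4), nbrs,
                      lambda p: abs(p[0] - 4) + abs(p[1] - 4), w=1.5)
```

Inflating the heuristic makes the search greedier: it reaches the goal after fewer expansions, at the price of the bounded suboptimality noted above.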
A* is widely used in robotics for motion planning, in navigation stacks for autonomous vehicles, and in pathfinding for video games, where it has been a standard technique since the 1990s. Other domains include route planning in logistics, network routing, and guiding search in automated planners and theorem provers. Puzzle-solving and combinatorial optimization applications also appear in programming competitions and industrial problem solving.
Practical implementations must address tie-breaking, duplicate detection, and memory management; the choice of priority queue affects the amortized cost of push, pop, and priority-update operations. For large road networks, external-memory algorithms and preprocessing techniques such as contraction hierarchies help scale A*-like searches. Parallel and GPU-accelerated implementations remain an active research area. Robust production systems integrate map preprocessing, dynamic obstacle handling, and heuristic caching, as seen in open-source robotics middleware such as the ROS navigation stack.
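One common tie-breaking scheme keys the heap on (f, -g), so that among entries with equal f the deeper node (larger g, smaller h) pops first, finishing nearly complete paths instead of widening the search. A minimal illustration with hypothetical entries:

```python
import heapq

# Frontier entries keyed as (f, -g, label): equal-f ties resolve toward
# the entry with the larger g, i.e. the node closer to the goal.
entries = [(18, -4, "deep node"), (18, -1, "shallow node"), (17, -2, "best f")]
heapq.heapify(entries)
order = [heapq.heappop(entries)[2] for _ in range(3)]
# Lowest f pops first; among the two f=18 entries, the deeper one wins.
```

The same effect can be had with an explicit comparator in languages whose priority queues accept one; in Python, tuple ordering does the job for free.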
Category:Search algorithms