LLMpedia: the first transparent, open encyclopedia generated by LLMs

Computer Go

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: AlphaGo (hop 4)
Expansion funnel: Raw 106 → Dedup 0 → NER 0 → Enqueued 0
Computer Go
Donarreiskoffer · CC BY-SA 3.0 · source
Title: Computer Go
First appeared: 1950s
Developers: Alan Turing, Claude Shannon, John von Neumann, Stanislaw Ulam
Platforms: Digital computer, supercomputer, personal computer, cloud computing
Genre: Artificial intelligence, board game

Computer Go is the study and practice of developing computer programs that play the board game Go at human and superhuman levels. It spans research in artificial intelligence, machine learning, reinforcement learning, pattern recognition, and search algorithms, and intersects with mathematics, statistics, and cognitive science. Work in the field has driven advances in deep learning, Monte Carlo methods, and parallel computing, and has influenced competitions, academic conferences, and industrial applications.

History

Early theoretical interest arose after publications by Alan Turing and Claude Shannon on formalizing games for digital computers; pioneers included John von Neumann and Stanislaw Ulam, who explored computation and game complexity. In the 1960s and 1970s, research groups at institutions such as the Massachusetts Institute of Technology, Stanford University, and the University of California, Berkeley experimented with handcrafted heuristics and pattern databases. The 1980s and 1990s saw projects from the Japan Computer Go Association, the Nihon Ki-in, and industrial labs at Fujitsu and IBM, focusing on opening books and life-and-death modules. Work by teams at the University of Alberta, the University of Tokyo, Seoul National University, and the Korea Advanced Institute of Science and Technology advanced Monte Carlo approaches. The 2000s popularized Monte Carlo Tree Search through groups at the Soleil Research Institute, the University of Lausanne, and precursor teams of Google DeepMind. Breakthroughs by Google DeepMind with AlphaGo and its successors involved collaborations with institutions such as the London School of Economics and led to matches against professionals at venues such as the Future of Go Summit and Royal Institution events. Subsequent projects by Facebook AI Research, Microsoft Research, Tencent, and independent teams further pushed performance on supercomputers and cloud platforms.

Techniques and Algorithms

Early engines relied on handcrafted evaluation using knowledge from Go Seigen-era professionals and databases from the Nihon Ki-in archives. Search techniques evolved from depth-limited minimax to probabilistic methods; notable algorithms include Monte Carlo Tree Search, Upper Confidence Bounds applied to Trees (UCT), and variants developed at the École Polytechnique Fédérale de Lausanne and the University of Alberta. Supervised learning on professional game records from Kiseido and GoBase catalyzed the adoption of convolutional neural networks pioneered by teams at University College London and DeepMind. Reinforcement learning approaches such as self-play were advanced via frameworks from OpenAI, DeepMind, and research groups at Carnegie Mellon University. Policy networks, value networks, and rollout policies were integrated with hardware acceleration provided by NVIDIA GPUs, Google TPUs, and Fujitsu vector processors. Endgame solvers used alpha–beta-style pruning reductions and retrograde analysis, as applied in projects at RIKEN and Seoul National University. Parallelization strategies employed message-passing interfaces developed at Argonne National Laboratory and workload orchestration on Kubernetes clusters in industrial deployments.
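The selection–expansion–simulation–backpropagation cycle of Monte Carlo Tree Search with the UCT rule can be illustrated on a toy game. The sketch below is not a Go engine: the miniature take-1-or-2-stones game, the node fields, and the exploration constant are illustrative assumptions, and a production engine would replace the random rollout with learned policy and value networks.

```python
import math
import random

# Toy stand-in for Go's game interface: players alternately remove 1 or
# 2 stones from a pile, and whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones        # game state: stones remaining
        self.player = player        # player to move (0 or 1)
        self.parent = parent
        self.move = move            # move that led to this node
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0             # wins for the player who moved into this node

    def uct_child(self, c=1.4):
        # Upper Confidence Bounds applied to Trees: exploitation + exploration.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    # Random playout to the end of the game; returns the winner.
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def mcts(stones, player, iterations=2000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend by UCT while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: create one child for an untried move.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout (terminal nodes score directly).
        winner = 1 - node.player if node.stones == 0 else rollout(node.stones, node.player)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            if winner != node.player:   # a win for the player who moved here
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

In this toy game, positions that are multiples of three are lost for the player to move, so with four stones UCT converges on taking one stone, and with five on taking two; the same statistics-driven convergence, at vastly larger scale, underlies the Go engines described above.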

Software and Programs

Prominent open-source engines originated in communities around the KGS Go Server, Pandanet, and OGS; examples include programs maintained by contributors on GitHub and in academic labs. Commercial and research systems include engines produced by Google DeepMind (AlphaGo, AlphaGo Zero, AlphaZero), projects at Facebook AI Research and Tencent AI Lab, academic implementations from the University of Alberta (Fuego), GNU Go from contributors linked to the Free Software Foundation, and Leela variants backed by communities on GitHub and among Go on Linux enthusiasts. Cloud services and APIs from companies such as Google Cloud, Amazon Web Services, and Microsoft Azure offer accelerated model training and inference. Interfaces and clients include tools developed by SmartGo, GoGui, and MultiGo, and mobile apps published by KataGo teams and independent developers. Databases of professional games are curated by organizations including Kiseido, the American Go Association, and the European Go Federation, and by online servers such as Pandanet and KGS.
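Clients such as GoGui typically talk to engines over the Go Text Protocol (GTP), a line-oriented protocol in which each response begins with `=` (success) or `?` (error) and ends with a blank line. Below is a minimal sketch of a GTP driver; the helper name `gtp_session` is an assumption for illustration, and the usage note assumes a GTP-capable engine such as GNU Go is installed.

```python
import subprocess

def gtp_session(engine_cmd, commands):
    """Send GTP commands to an engine subprocess and collect responses.

    Each GTP response starts with '=' (success) or '?' (error) and is
    terminated by a blank line.
    """
    proc = subprocess.Popen(engine_cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    responses = []
    for cmd in commands:
        proc.stdin.write(cmd + "\n")
        proc.stdin.flush()
        lines = []
        while True:
            line = proc.stdout.readline()
            if not line:                    # engine closed its output
                break
            if line.strip() == "" and lines:
                break                       # blank line ends the response
            lines.append(line.rstrip("\n"))
        responses.append("\n".join(lines))
    proc.stdin.write("quit\n")              # 'quit' is a standard GTP command
    proc.stdin.flush()
    proc.stdin.close()
    proc.wait()
    return responses
```

Usage, assuming GNU Go is installed: `gtp_session(["gnugo", "--mode", "gtp"], ["boardsize 19", "genmove black"])` would configure the board and ask the engine for a move.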

Competitions and Benchmarks

Tournaments and ranking matches have taken place at events organized by the Nihon Ki-in, the Korean Baduk Association, and the Chinese Weiqi Association, and at international forums such as IJCAI, AAAI, and NeurIPS workshops. Benchmarks include game collections from KGS and Go4Go and curated sets used by researchers at DeepMind and OpenAI to evaluate policy and value networks. Matches of historical significance occurred at venues such as the Future of Go Summit, where teams faced professionals from the Nihon Ki-in and players such as Lee Sedol and Gu Li. Prizes and awards have been associated with academic competitions hosted by ICML and NeurIPS and with industrial challenges sponsored by Google and Facebook.

Human–Computer Interaction and Impact

Computer programs have altered pedagogy and practice within institutions such as the Nihon Ki-in, the American Go Association, the European Go Federation, and university clubs at Harvard University and Peking University. Tools for analysis and study are used by professionals such as Ke Jie and by amateurs in IgoClub communities; teaching aids integrate engines with servers such as KGS and OGS. The field has influenced esports ecosystems managed by Pandanet and broadcast production, including public demonstrations recognized by Guinness World Records. Commercial applications of techniques from the field have been adopted by Google and Microsoft for problems in healthcare and transportation (industrial partners include Fujitsu and Tencent), and research findings feed into curricula at the Massachusetts Institute of Technology and Stanford University.

Research Challenges and Future Directions

Open problems remain in sample-efficient learning, studied at Carnegie Mellon University and ETH Zurich; interpretability, pursued by teams at OpenAI and DeepMind; and transfer learning, involving groups at the University of Toronto and University College London. Scalability of search and model compression are active topics in collaborations with NVIDIA and Google Research. Ethical and social implications intersect with policy labs at the Harvard Kennedy School and technical standards bodies including the IEEE. Future directions include hybrid symbolic–neural methods promoted by researchers at the MIT Media Lab, decentralized training on infrastructure such as OpenMPI and Kubernetes developed with industry partners, and cross-disciplinary work linking cognitive science labs at the University of California, Berkeley and the University of Cambridge.

Category:Artificial intelligence