| AI winter | |
|---|---|
| Name | AI Winter |
| Caption | A period of reduced funding and interest in artificial intelligence research. |
| Date | Mid-1970s to early 1980s; late 1980s to early 1990s |
| Type | Recurring downturns in research funding and interest |
| Cause | Unmet expectations, technical limitations, criticism from reports like the Lighthill report |
| Participants | DARPA, National Research Council, Stanford University, MIT |
| Outcome | Severe cuts to academic and corporate funding, shift to subfields like expert systems |
AI winter is a term for periods of significant reduction in funding, interest, and perceived progress within the field of artificial intelligence. These cyclical downturns are characterized by widespread disillusionment following periods of intense optimism and inflated promises. The most notable instances occurred following critiques like the Lighthill report and the collapse of the market for Lisp machines. Despite these setbacks, research often continued in niche areas, laying groundwork for future revivals such as the rise of deep learning.
The phrase describes a recurring pattern in the history of technology in which enthusiasm for, and investment in, artificial intelligence dramatically wane. The concept is closely related to the broader Gartner hype cycle and to the boom-and-bust cycles observed in other emerging fields. It signifies a collapse in confidence among major funding bodies like DARPA and within the commercial sector, leading to the cancellation of numerous projects. The term gained popular usage following the 1984 AAAI conference, where attendees debated how the field might survive a coming downturn.
The first major downturn was precipitated in the early 1970s by the critical Lighthill report, commissioned by the British government, which severely questioned the field's achievements. This led to the termination of much AI research in the United Kingdom and influenced funders such as the National Research Council. A second, more severe period began in the late 1980s, triggered by the failure of Japan's Fifth Generation Computer Systems project to meet its goals and the collapse of the market for specialized Lisp machine companies such as Symbolics and Lisp Machines Inc. This period also saw the end of the Strategic Computing Initiative launched by DARPA.
Primary causes include the failure of AI research to deliver on its highly publicized promises, such as achieving human-level intelligence or building machines capable of general learning. Technical limitations, particularly the intractability of many problems under symbolic AI approaches and the lack of sufficient computational power, were fundamental constraints. Influential critiques, such as the Lighthill report and the book Perceptrons by Marvin Minsky and Seymour Papert, which highlighted the limits of early neural networks, further dampened enthusiasm. The commercial failure of hardware like the Lisp machine and of large-scale expert system software then eroded corporate and government confidence.
The impact was severe and widespread, with major agencies like DARPA and the National Science Foundation drastically reducing grant allocations for AI research. This led to the closure of laboratories and a sharp decline in academic positions and student enrollment in related programs at institutions such as Stanford University and Carnegie Mellon University. The label "artificial intelligence" itself became a career liability, prompting many researchers to rebrand their work under alternative names such as informatics, machine learning, or cognitive systems in order to secure funding. The collapse also affected adjacent industries, notably companies focused on Lisp machines and early robotics.
Following these periods, the field demonstrated resilience through focused work in more specialized and commercially viable subfields. Research continued in areas such as probabilistic reasoning, Bayesian networks, and support vector machines, often funded by more modest corporate or academic sources. The eventual resurgence was fueled by practical successes in data mining and speech recognition, alongside theoretical advances. The modern era, marked by the dominance of deep learning and breakthroughs such as DeepMind's AlphaGo, is widely regarded as a direct result of work that persisted through the quieter winter periods, supported by increased computational power and vast datasets.
Category:Artificial intelligence
Category:History of technology
Category:Research