| Perceptrons (book) | |
|---|---|
| Name | Perceptrons |
| Author | Marvin Minsky, Seymour Papert |
| Country | United States |
| Language | English |
| Subject | Artificial intelligence, Machine learning |
| Publisher | MIT Press |
| Publication date | 1969 |
| Pages | 258 |
*Perceptrons: An Introduction to Computational Geometry* is a 1969 monograph by Marvin Minsky and Seymour Papert that gives a rigorous mathematical analysis of the capabilities and limitations of single-layer perceptron networks. Published by MIT Press, the book became highly influential for its critical examination of early neural network models, proving formal results about their computational limits. Its conclusions were widely interpreted as demonstrating the fundamental inadequacy of simple perceptrons for complex tasks, and it shaped the trajectory of artificial intelligence research for more than a decade.
The book emerged from research conducted at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory during the 1960s. Marvin Minsky, a co-founder of the MIT AI Lab, and Seymour Papert, a mathematician and pioneer in constructionist learning, collaborated to formalize the mathematical understanding of Frank Rosenblatt's perceptron model. Their work was situated within a broader intellectual climate at MIT and institutions like Stanford University and Carnegie Mellon University, where symbolic approaches to AI, such as those championed by John McCarthy and Allen Newell, were gaining prominence. The project was supported by grants from agencies including the Office of Naval Research and the National Institutes of Health.
Minsky and Papert employed concepts from geometry and topology to analyze the perceptron's computational power. A central result showed that a single-layer perceptron cannot compute the exclusive-or (XOR) function, because no single linear threshold separates the inputs that should output 1 from those that should output 0; a short derivation follows below. They also proved that such networks cannot decide certain topological predicates, most famously whether a pattern is connected. The authors formalized the notion of the "order" of a perceptron, linking it to the complexity of the predicates it can recognize. These results were presented with rigorous proofs, in contrast to the more heuristic arguments common in the cybernetics and pattern recognition literature of the time.
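The XOR result can be reconstructed with the standard linear-separability argument (a sketch of the usual textbook derivation, not a quotation of the book's proof):

```latex
% A single threshold unit computes f(x_1, x_2) = [w_1 x_1 + w_2 x_2 + b > 0].
% If it computed XOR, its four input-output cases would require:
\begin{align*}
f(0,0) = 0 &\;\Rightarrow\; b \le 0, \\
f(1,0) = 1 &\;\Rightarrow\; w_1 + b > 0, \\
f(0,1) = 1 &\;\Rightarrow\; w_2 + b > 0, \\
f(1,1) = 0 &\;\Rightarrow\; w_1 + w_2 + b \le 0.
\end{align*}
% Adding the second and third lines gives w_1 + w_2 + 2b > 0, hence
% w_1 + w_2 + b > -b \ge 0, contradicting the fourth line. No choice of
% weights and bias works, so no single-layer perceptron computes XOR.
```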
Upon publication, *Perceptrons* was met with significant acclaim within the mainstream artificial intelligence community for its mathematical rigor. Researchers at centers like Stanford Research Institute and the University of Edinburgh saw it as a definitive critique of neural network approaches. The book's conclusions were often oversimplified in broader discourse, leading to a widespread belief that all connectionist models were fundamentally limited. This perception contributed to a major shift in funding and research focus away from neural networks and toward symbolic AI paradigms, including expert systems and logic programming. The work solidified the reputations of both Marvin Minsky and Seymour Papert as leading theorists in the field.
The book is frequently cited as a primary catalyst for the onset of the first AI winter in the 1970s. Its pessimistic analysis, together with other critiques such as the 1966 ALPAC report on machine translation and the 1973 Lighthill report in the United Kingdom, led major funding bodies such as the Defense Advanced Research Projects Agency to drastically reduce support for neural network research. This created a chilling effect that lasted over a decade, during which work on multilayer perceptrons and backpropagation was largely marginalized. The period saw the ascendancy of alternative approaches championed by institutions like Xerox PARC and researchers such as Edward Feigenbaum with his work on DENDRAL.
The resurgence of neural networks in the mid-1980s, fueled by the popularization of the backpropagation algorithm in work by David Rumelhart, Geoffrey Hinton, and Ronald J. Williams, prompted a major reassessment of *Perceptrons*. A 1988 expanded edition included a new prologue in which the authors acknowledged that their analysis applied specifically to single-layer networks without hidden units, noting that the original text had not conclusively settled the potential of multilayer networks; as the sketch below illustrates, even a small hidden layer removes the XOR limitation. The revival was showcased at influential conferences such as Neural Information Processing Systems and led into the modern field of deep learning. The historical narrative of *Perceptrons* causing the AI winter is now viewed as more nuanced, acknowledging the book's scientific merit while recognizing its role in a complex sociological shift within academic research.
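A minimal sketch illustrates the point the 1988 prologue concedes: a network with one hidden layer, trained by backpropagation, can learn XOR. The hidden-layer size, learning rate, and iteration count below are illustrative choices, not anything specified by the book:

```python
import numpy as np

# Minimal two-layer network learning XOR by backpropagation.
# Hidden size, learning rate, and iteration count are arbitrary
# illustrative choices; convergence can vary with the random seed.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 1.0
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)     # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2 + b2)   # network outputs, shape (4, 1)

    # Backward pass: squared-error gradients through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

The hidden units carve the input square into regions that no single threshold line can produce, which is precisely the capability excluded by the book's single-layer analysis.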
Category:1969 non-fiction books Category:Artificial intelligence books Category:MIT Press books