| Perceptrons (book) | |
|---|---|
| Name | Perceptrons: An Introduction to Computational Geometry |
| Author | Marvin Minsky; Seymour Papert |
| Country | United States |
| Language | English |
| Subject | Artificial intelligence; Neural networks; Cognitive science |
| Publisher | MIT Press |
| Pub date | 1969; expanded 1988 |
| Media type | |
| Pages | 256 (1969) |
Perceptrons is a seminal technical monograph by Marvin Minsky and Seymour Papert that critically examined the theoretical capabilities and limitations of early artificial neural network models. First published by MIT Press in 1969 and reissued in an expanded edition in 1988, the work influenced research directions at institutions such as the Massachusetts Institute of Technology, Stanford University, and Carnegie Mellon University, and its analysis affected funding and research at agencies such as the Defense Advanced Research Projects Agency and at laboratories including Bell Labs and IBM Research.
Minsky and Papert wrote the book after interactions with researchers at Harvard University, Princeton University, the University of California, Berkeley, Cornell University, and the University of Pennsylvania about pattern recognition projects inspired by Frank Rosenblatt's models and the Perceptron program at the Cornell Aeronautical Laboratory. The 1969 publication followed contemporary work in cybernetics by figures such as Norbert Wiener and mathematical underpinnings laid by Alan Turing, John von Neumann, and Claude Shannon. The expanded edition of 1988 responded to later advances at places such as the California Institute of Technology, the University of Toronto, and SRI International, incorporating developments associated with researchers at Bell Telephone Laboratories (later AT&T Bell Labs).
Marvin Minsky and Seymour Papert, both professors at the Massachusetts Institute of Technology, brought backgrounds linked to the MIT Artificial Intelligence Laboratory, Project MAC, and collaborations with scholars such as John McCarthy, Herbert A. Simon, and Allen Newell. The 1969 first edition emphasized theoretical proofs; the 1988 expanded edition, released by MIT Press amid debates at NIPS and the International Joint Conference on Artificial Intelligence (IJCAI), added commentary addressing work by Geoffrey Hinton, Yann LeCun, Yoshua Bengio, and others active at the University of Toronto, the University of Montreal, and New York University.
Perceptrons analyzed the computational power of single-layer perceptron networks (with briefer remarks on multi-layer extensions) relative to formal models such as those developed in mathematical logic and computability theory by scholars like Alonzo Church and Kurt Gödel. Minsky and Papert provided proofs concerning invariances, limitations on representable functions, and the inability of such architectures to compute predicates like parity and connectedness, issues also investigated in theoretical work at Princeton University and the University of Chicago. The book discussed geometric interpretations linked to linear separability, drawing on prior research by Frank Rosenblatt and mathematical results reminiscent of work by Hermann Minkowski and David Hilbert, and it contrasted the analyzed models with symbolic approaches advanced by John McCarthy and cognitive theories proposed by Noam Chomsky.
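The linear-separability limitation can be demonstrated directly: a single-layer perceptron trained with the classic perceptron learning rule learns AND (which is linearly separable) perfectly, but can never classify all four cases of XOR, the two-bit parity function the book highlights. The sketch below is an illustration of this point, not code from the book; the function names are our own.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Perceptron learning rule: w <- w + lr * (target - prediction) * x."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = int(xi @ w > 0)          # linear threshold unit
            w += lr * (target - pred) * xi  # update only on mistakes
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable
y_xor = np.array([0, 1, 1, 0])  # two-bit parity: not linearly separable

acc_and = (predict(train_perceptron(X, y_and), X) == y_and).mean()
acc_xor = (predict(train_perceptron(X, y_xor), X) == y_xor).mean()
print(acc_and, acc_xor)  # AND reaches accuracy 1.0; XOR never can
```

By the perceptron convergence theorem, training on AND terminates with a correct separating line, while no weight vector classifies all four XOR points, so the XOR accuracy stays below 1.0 regardless of training time.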
The book provoked responses from communities at Stanford University, Carnegie Mellon University, the University of California, San Diego, Columbia University, and Yale University, and influenced funding priorities at agencies such as the National Science Foundation and DARPA. Some laboratories, including Bell Labs, IBM Research, and the RAND Corporation, shifted emphasis toward rule-based expert systems, exemplified by projects at the Stanford Research Institute and commercial efforts by General Electric and Xerox PARC. Critics and proponents exchanged views in venues such as the Proceedings of the IEEE and at conferences like the AAAI Conference on Artificial Intelligence and IJCAI, while graduate groups at MIT, Brown University, Dartmouth College, and the University of Michigan debated the methodological implications.
Despite the book's early dampening effect on some neural network work, later advances by researchers at the University of Toronto, the Montreal Institute for Learning Algorithms, Bell Labs, Stanford University, Carnegie Mellon University, Google Research, and Microsoft Research revived interest in multi-layer networks, leading to breakthroughs in deep learning credited to figures such as Geoffrey Hinton, Yann LeCun, and Yoshua Bengio. Theoretical and practical progress at NVIDIA, OpenAI, DeepMind, Facebook AI Research, and university labs extended these architectures into convolutional networks for vision tasks, building on work pioneered at New York University and the University of Oxford. Minsky and Papert's formal critique shaped curricula and research agendas at MIT, Stanford, Harvard, and Princeton, and influenced historical narratives in books by Paul N. Edwards and histories circulated through museums such as the Computer History Museum.
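The limitation the book identified is specific to a single layer of adjustable weights: adding one hidden layer already suffices for parity. As a minimal illustration (hand-set weights rather than a trained model; the names are our own), a two-unit hidden layer computing OR and AND lets a threshold output unit express XOR:

```python
import numpy as np

def step(z):
    """Heaviside threshold: 1 if z > 0, else 0."""
    return (z > 0).astype(int)

def two_layer_xor(X):
    # Hidden unit 1 fires on OR(x1, x2); hidden unit 2 fires on AND(x1, x2).
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(X @ W1.T + b1)
    # Output fires when OR is true but AND is not: exactly XOR.
    w2 = np.array([1.0, -1.0])
    b2 = -0.5
    return step(h @ w2 + b2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(two_layer_xor(X))  # [0 1 1 0]
```

The hidden layer remaps the four inputs into a space where XOR becomes linearly separable, which is precisely the capability the single-layer model analyzed in the book lacks.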
Debate surrounded the book's role in an alleged "AI winter" that affected funding at DARPA, the National Science Foundation, and commercial sponsors such as IBM and AT&T. Critics from the University of Toronto, Carnegie Mellon University, the University of California, Berkeley, and independent researchers argued that Minsky and Papert understated the prospects for multi-layer networks and underestimated algorithmic advances later realized by David Rumelhart, James McClelland, Yann LeCun, and Geoffrey Hinton. Other commentators from Stanford University, Harvard University, Oxford University, and Cambridge University pointed to methodological differences between the connectionist tradition and the symbolic tradition advocated by John McCarthy and Herbert A. Simon. The controversy fueled scholarly exchanges in journals published by Elsevier, Springer, and the IEEE, and at conferences like NIPS and ICML.
Category:Books about artificial intelligence