LLMpedia: The first transparent, open encyclopedia generated by LLMs

Artificial General Intelligence

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Gödel, Escher, Bach (Hop 4)
Expansion Funnel: Raw 94 → Dedup 0 → NER 0 → Enqueued 0
Artificial General Intelligence
Name: Artificial General Intelligence
Caption: Conceptual diagram of general intelligence systems
Field: Artificial intelligence
Inventors: Multiple researchers and institutions
Year: 20th–21st century

Artificial General Intelligence (AGI) is the hypothesized capability of a single system to understand, learn, and apply knowledge across a wide range of domains at or beyond human level. It is discussed across research communities at institutions such as Massachusetts Institute of Technology, Stanford University, University of California, Berkeley, OpenAI, DeepMind, and Google, and appears in policy fora including United Nations and European Commission deliberations. Debates involve figures associated with the field's history and present, including Alan Turing, John McCarthy, Marvin Minsky, Geoffrey Hinton, Yoshua Bengio, and Demis Hassabis.

Definition and Scope

Definitions vary among academics at Carnegie Mellon University, engineers at IBM, and ethicists at Oxford University. Some characterize AGI relative to benchmarks set by competitions such as ImageNet and DARPA challenges, while others situate it in philosophical traditions following the Turing Test and analyses by John Searle. Scope discussions reference capabilities demonstrated in systems such as IBM Watson, AlphaGo, GPT-3, PaLM, and robotics platforms developed at MIT CSAIL and Toyota Research Institute. Debates engage legal scholars at Harvard Law School and economists at London School of Economics about whether the target should be human-equivalent intelligence, superhuman intelligence, or the modular generality pursued by Allen Institute for AI.

History and Development

Early conceptual roots trace to exchanges between Alan Turing and contemporaries in the era of Bletchley Park and postwar computing at Bell Labs. Formal founding moments include the Dartmouth proposal linked with John McCarthy, while the symbolic AI era involved institutions such as the MIT AI Laboratory and researchers such as Marvin Minsky and Seymour Papert. Connectionist revivals highlighted work by Geoffrey Hinton and Yann LeCun at Bell Labs and Université de Montréal. Milestones include IBM's Deep Blue chess victory, DeepMind's AlphaGo Go series, and advances in natural language driven by transformer models from groups at Google Research and OpenAI. Funding waves came from entities including DARPA, European Research Council, National Science Foundation, and private firms such as Microsoft Research, Amazon Web Services, and NVIDIA.

Approaches and Architectures

Architectural paradigms span symbolic systems from labs like Stanford AI Lab, connectionist neural networks from groups at University of Toronto and Facebook AI Research, probabilistic models advanced at University of Cambridge, and hybrid approaches explored by teams at Allen Institute for AI and DeepMind. Notable architectures include recurrent networks investigated at Bell Labs, convolutional networks originating in work by Yann LeCun and collaborators, and transformer families developed at Google Brain and extended by OpenAI. Reinforcement learning work connects to projects at DeepMind and the OpenAI Five experiments, while cognitive architectures reference theories by Allen Newell and Herbert A. Simon at Carnegie Mellon University. Hardware co-design involves firms such as Intel, AMD, NVIDIA, and research at IBM Research and TSMC fabrication partnerships.
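The transformer families mentioned above are built around scaled dot-product attention: each position in a sequence attends to every other position via a softmax-weighted mix of value vectors. The following is a minimal illustrative sketch in plain Python; the function names, toy vectors, and dimensions are assumptions for demonstration, not taken from any specific system.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of row vectors.

    For each query row, compute similarity to every key, normalize
    with softmax, and return the weighted mix of value rows.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Toy self-attention: 3 "tokens", each a 2-dimensional vector.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(x, x, x)
```

Because each output row is a convex combination of the value rows, every output coordinate stays within the range of that coordinate across the values; stacking such layers with learned projections yields the full architecture.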

Evaluation and Benchmarks

Benchmarks derive from DARPA competition programs, datasets like ImageNet curated by researchers at Stanford University and Princeton University, language benchmarks from groups at University of Washington and University of Edinburgh, and multimodal tasks advanced at MIT. Measures often adapt ideas from psychometrics rooted in tests by Alfred Binet and statistical practices from Karl Pearson and Ronald Fisher. Evaluation controversies involve standards set by consortia including ISO technical committees and policy discussions at Organisation for Economic Co-operation and Development and G7 meetings. Safety-oriented benchmarks are proposed in forums convened by Future of Humanity Institute, Centre for the Study of Existential Risk, and Partnership on AI.
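Multi-task evaluation of the kind described above is commonly summarized as a mean accuracy across tasks together with a standard error, in the spirit of the statistical practices the section attributes to Pearson and Fisher. A minimal sketch, with hypothetical task names and scores chosen purely for illustration:

```python
import math
import statistics

def aggregate_benchmark(task_accuracies):
    """Summarize per-task accuracies as (mean, standard error).

    The standard error of the mean is the sample standard deviation
    divided by sqrt(n); undefined for a single task.
    """
    mean = statistics.fmean(task_accuracies)
    if len(task_accuracies) > 1:
        se = statistics.stdev(task_accuracies) / math.sqrt(len(task_accuracies))
    else:
        se = float("nan")
    return mean, se

# Hypothetical per-task scores for a generality-oriented suite.
scores = {"vision": 0.91, "language": 0.78, "reasoning": 0.64, "robotics": 0.55}
mean, se = aggregate_benchmark(list(scores.values()))
```

Reporting the spread alongside the mean is one way such suites distinguish broad competence from a high average driven by a few strong tasks.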

Safety, Ethics, and Governance

Safety research connects to academic centers like Future of Humanity Institute at University of Oxford and Centre for Human-Compatible AI at University of California, Berkeley. Ethical frameworks draw on scholarship from Harvard University, Yale University, and Princeton University and are debated in institutions such as Council of Europe and European Commission. Governance proposals reference standards from IEEE, regulatory efforts at United States Congress, and international initiatives at United Nations Educational, Scientific and Cultural Organization and World Economic Forum. Notable figures in policy and advocacy include researchers from Ada Lovelace Institute, OpenAI, and DeepMind Ethics & Society. Risk taxonomies cite historical analyses of technological change by scholars linked to Stanford University and London School of Economics.

Societal Impact and Economics

Predictions about labor and markets draw on economic modelling from researchers at National Bureau of Economic Research, International Monetary Fund, and World Bank, and on analyses by economists at University of Chicago and Massachusetts Institute of Technology. Historical parallels are invoked with industrial transitions involving firms such as General Electric and events like the Industrial Revolution. Social consequences are examined by sociologists at Columbia University and by public policy centers such as Brookings Institution and RAND Corporation. Cultural portrayals appear in films distributed by Warner Bros. and in literature by authors published by Penguin Books; legal debates involve courts such as the Supreme Court of the United States and legislative bodies including the European Parliament.

Category:Artificial intelligence