LLMpedia: The first transparent, open encyclopedia generated by LLMs

Artificial intelligence

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: DeepMind (hop 3)
Expansion funnel: raw 74 → dedup 4 → NER 4 → enqueued 4
Artificial intelligence
Name: Artificial intelligence
Field: Computer science
Founded: 1956 (Dartmouth Conference)
Notable figures: Alan Turing, John McCarthy, Marvin Minsky, Geoffrey Hinton

Artificial intelligence (AI) is a broad branch of computer science focused on creating systems that perform tasks typically requiring human cognition, including perception, reasoning, learning, planning, and language understanding. Research spans theoretical foundations, algorithmic development, and engineering of software and hardware deployed across industries and institutions. Progress has been driven by cross-disciplinary contributions from figures and organizations in academia, industry, and government.

History

Early roots trace to formal logic and automata studied by Alan Turing, whose work influenced postwar efforts at institutions such as RAND Corporation and Bell Labs. The field was named at the 1956 Dartmouth workshop organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon; subsequent decades featured cycles of rapid advance and "AI winters" driven by funding shifts at bodies like the National Science Foundation and by policy choices in the United Kingdom and United States. Landmark systems and events include ELIZA at MIT, Shakey the Robot at SRI International, the victory of IBM's Deep Blue over Garry Kasparov, and the success of DeepMind's AlphaGo against Lee Sedol. The rise of deep learning since the 2010s involved breakthroughs from groups at University of Toronto, Google, Facebook (Meta), and research labs such as OpenAI, enabled by GPUs from NVIDIA and by large-scale data from benchmarks like ImageNet and from corporate datasets.

Definitions and approaches

Definitions vary across communities: some follow symbolic AI exemplified by early work at MIT and Stanford University emphasizing logic and knowledge representation, while others adopt statistical and connectionist paradigms advanced at places like University of Montreal and University of Toronto. Hybrid approaches combine rule-based systems, probabilistic models developed at labs such as Bell Labs and Microsoft Research, and neural-network-based learning from groups like DeepMind and OpenAI. Competing schools include symbolicists (linked to researchers such as John McCarthy and Marvin Minsky), connectionists (including Geoffrey Hinton and Yoshua Bengio), and Bayesian proponents influenced by work at Carnegie Mellon University and University of California, Berkeley.

Techniques and architectures

Core techniques encompass supervised, unsupervised, and reinforcement learning, often implemented using architectures such as multilayer perceptrons; convolutional neural networks pioneered by Yann LeCun at Bell Labs and popularized by deep-learning work at University of Toronto; recurrent networks influenced by researchers at University of Montreal; and transformer architectures introduced by teams at Google and refined by researchers at OpenAI and Microsoft Research. Probabilistic graphical models with roots at Stanford University and Carnegie Mellon University remain important for uncertainty quantification. Optimization methods trace to classical research at Princeton University and the Courant Institute, while hardware accelerators from Intel and NVIDIA and cloud platforms from Amazon Web Services and Google Cloud shape practical deployment. System integration draws on contributions from industrial labs such as IBM Research and robotics work at MIT and ETH Zurich.
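To make the transformer mention concrete, the following is a minimal sketch of scaled dot-product attention, the core operation of the transformer architecture; the shapes, variable names, and toy data are illustrative assumptions, not taken from the article or any particular library.

import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) query, key, and value matrices.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise similarities, scaled
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mixture of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings (random data).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)

The division by the square root of the key dimension keeps the dot products from growing with dimensionality, which would otherwise push the softmax toward near-one-hot weights.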

Applications

AI is applied in healthcare settings at institutions like Mayo Clinic and Johns Hopkins Hospital for diagnostic imaging, in finance at firms including Goldman Sachs and JPMorgan Chase for algorithmic trading, and in transportation through companies such as Tesla and Waymo developing autonomous vehicles. Natural language tools from labs at Google and OpenAI power assistants from Apple and Microsoft, while recommendation systems pioneered by Netflix and Amazon drive personalization. Robotics platforms from Boston Dynamics and iRobot deploy perception and control algorithms; drug discovery leverages AI at firms like Pfizer and Deep Genomics; and scientific research integrates AI at facilities such as CERN and at observatories collaborating with universities like Caltech.
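As a small illustration of the recommendation systems mentioned above, here is a minimal item-based collaborative filtering sketch using cosine similarity. The ratings matrix and all values are invented for the example; production systems at the companies named use far more elaborate methods.

import numpy as np

# Rows are users, columns are items; 0 means "not rated". Values are invented.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_similarity(A):
    # Pairwise cosine similarity between columns (items).
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero
    X = A / norms
    return X.T @ X

S = cosine_similarity(R)

def predict(user, item):
    # Score an unrated item as a similarity-weighted average of the
    # user's existing ratings.
    rated = R[user] > 0
    weights = S[item, rated]
    if weights.sum() == 0:
        return 0.0
    return float(weights @ R[user, rated] / weights.sum())

print(round(predict(0, 2), 2))  # predicted rating of item 2 for user 0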

Ethics and society

Concerns are raised by civil society groups, regulators, and institutions including the European Commission and United Nations about bias, accountability, transparency, and labor displacement. High-profile debates involve companies such as Clearview AI and platforms governed by policies from Twitter and Facebook (Meta), while legal frameworks evolve in jurisdictions like the European Union through directives and proposals influenced by advocacy from organizations such as Amnesty International and the Electronic Frontier Foundation. Discussion covers intellectual property contested by firms like Google and OpenAI, standards proposed by bodies such as IEEE and ISO, and safety work pursued by research groups at OpenAI and DeepMind on robustness, interpretability, and alignment.

Evaluation and benchmarks

Performance assessment uses benchmarks developed by academic groups and industry consortia: vision benchmarks like ImageNet and COCO, language tasks from datasets associated with Stanford University and the Allen Institute for AI, and reinforcement-learning environments such as the Atari-based Arcade Learning Environment and MuJoCo physics simulations. Competitions and challenges organized by institutions such as DARPA, NeurIPS workshops, and the Kaggle platform measure progress; standard metrics include accuracy, F1 score, BLEU, and reward curves used by teams at DeepMind and OpenAI. Ongoing critique emphasizes the need for diverse, representative datasets and evaluation protocols promoted by groups at MIT and University of California, Berkeley.
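To make two of the metric names concrete, the following is a minimal sketch computing accuracy and F1 score for binary classification; the label vectors are invented for illustration.

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    # F1 is the harmonic mean of precision and recall (binary labels).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = [1, 0, 1, 1, 0, 1]  # invented ground truth
y_pred = [1, 0, 0, 1, 0, 1]  # invented predictions
print(round(accuracy(y_true, y_pred), 3))  # 0.833
print(round(f1_score(y_true, y_pred), 3))  # precision 1.0, recall 0.75 -> 0.857

Unlike accuracy, F1 ignores true negatives, which is why it is preferred when the positive class is rare.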

Category:Artificial intelligence