LLMpedia — The first transparent, open encyclopedia generated by LLMs

MIRI

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: SOFIA Hop 4
Expansion Funnel: Raw 61 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 61
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
MIRI
Name: Machine Intelligence Research Institute
Founded: 2000
Founder: Eliezer Yudkowsky
Location: Berkeley, California
Focus: Artificial intelligence safety, alignment research


MIRI is a research organization focused on long-term risks from advanced artificial intelligence. Founded in 2000 as the Singularity Institute for Artificial Intelligence, it adopted its current name in 2013. The institute concentrates on theoretical and technical work aimed at ensuring that superintelligent systems act in ways aligned with human intentions, coordinating with a range of researchers, philanthropic organizations, and technology institutions. Its work sits at the intersection of computer science, philosophy, and policy debates about transformative technologies.

Overview

MIRI was founded amid discussions among thinkers concerned about existential risk from accelerating technological change; its origins are associated with the Bayesian rationality and effective altruism communities, including figures such as Eliezer Yudkowsky, Nick Bostrom, Derek Parfit, and William MacAskill, and organizations such as LessWrong and the Centre for Effective Altruism. The institute emphasizes formal methods and theoretical models rather than empirical deployment, and its interests overlap with academic groups at institutions such as the University of Oxford, Carnegie Mellon University, the Massachusetts Institute of Technology, Stanford University, and UC Berkeley. Over time MIRI has influenced discussions at technology companies including Google, OpenAI, and DeepMind, and among funders such as the Effective Altruism Global community and philanthropic initiatives tied to the Open Philanthropy Project.

Mission and Goals

MIRI articulates goals that reflect concerns raised by thinkers such as Nick Bostrom in works like Superintelligence: Paths, Dangers, Strategies, and by ethical philosophers such as Derek Parfit and T. M. Scanlon. Its stated mission prioritizes the development of rigorous mathematical frameworks for agent behavior, building on foundations laid by scholars such as John von Neumann and Alan Turing and drawing on logical and decision-theoretic work from figures such as Leonard Savage and Bruno de Finetti. The institute seeks to produce research that can inform policymakers and technologists at venues such as NeurIPS, ICML, AAAI, and IJCAI, as well as advisory bodies including panels related to the National Science Foundation and international forums where leaders from the European Commission and United States agencies discuss advanced technologies.
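The Savage-style decision theory referenced above centers on choosing the action with the highest expected utility. The following sketch is purely illustrative and is not drawn from MIRI's publications; the decision problem, action names, and numbers are all hypothetical.

```python
# Illustrative sketch of expected-utility choice (Savage-style):
# each action maps to a list of (probability, utility) outcome pairs.

def expected_utility(action, outcomes):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in outcomes[action])

def choose(outcomes):
    """Return the action maximizing expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Hypothetical two-action decision problem.
outcomes = {
    "safe":  [(1.0, 5.0)],                  # certain payoff of 5
    "risky": [(0.5, 12.0), (0.5, -4.0)],    # expected utility 4
}
best = choose(outcomes)  # "safe" (EU 5.0 beats EU 4.0)
```

The point of the formalism is that preferences over uncertain prospects reduce to a single scalar comparison once probabilities and utilities are fixed.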

Research Areas

Research topics at the institute include formal verification of decision-making systems, value alignment, logical uncertainty, corrigibility, and robustness under distributional shift. These areas intersect with theoretical frameworks developed by researchers such as Stuart Russell, Jürgen Schmidhuber, Christopher Bishop, and Yoshua Bengio, though MIRI emphasizes distinct formal approaches influenced by work in mathematical logic from Kurt Gödel and algorithmic information theory from Andrey Kolmogorov. The institute publishes technical reports engaging with concepts related to the utility theory of John Harsanyi, cooperative game theory connected to Lloyd Shapley, and control problems reminiscent of the work of Richard Bellman. MIRI's outputs are intended to be relevant to practitioners at companies such as Anthropic and Microsoft Research, and to academics at Princeton University and Harvard University exploring AI governance and safety.
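The Bellman-style control problems mentioned above are typically solved by value iteration: repeatedly backing up the best one-step return until state values converge. The toy Markov decision process below is hypothetical and illustrative only, not taken from MIRI's reports.

```python
# Illustrative value iteration on a hypothetical two-state MDP.
# transitions[state][action] = list of (probability, next_state, reward).
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(1.0, "s1", 1.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)], "go": [(1.0, "s0", 0.0)]},
}

def value_iteration(transitions, gamma=0.9, iters=200):
    """Apply the Bellman optimality backup `iters` times."""
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        # Each update uses the previous V; rebinding happens afterwards.
        V = {
            s: max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in acts[a])
                for a in acts
            )
            for s, acts in transitions.items()
        }
    return V
```

With discount 0.9, staying in s1 forever yields 2 / (1 - 0.9) = 20, and the best plan from s0 is to move there, giving 1 + 0.9 × 20 = 19.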

Organizational Structure

MIRI operates as a non-profit research organization with a core team of research scientists, engineers, and operations staff. It interacts with external advisors drawn from academia and industry, including scholars from the University of Oxford, Harvard University, Columbia University, and institutions such as the RAND Corporation. Governance has involved boards and donors connected to the effective altruism movement, including patrons who have also supported Giving What We Can and The Life You Can Save. The institute holds workshops and collaborates with research collectives and labs, including Berkeley academic centers and conference series hosted by organizations such as the Center for a New American Security and academic departments across United States universities.

Funding and Partnerships

Funding for the institute has come from a mix of philanthropic foundations, individual donors, and grants associated with networks such as Effective Altruism Global and the Open Philanthropy Project. Partners in the AI safety ecosystem include large philanthropic foundations, gifts from major industry figures, and collaborations with research groups at DeepMind and universities including Yale University and the University of Cambridge. MIRI also engages with policy stakeholders and philanthropic grantmakers who participate in advisory discussions at institutions such as the Brookings Institution and Chatham House.

Criticisms and Controversies

MIRI's focus on long-term, speculative risks has drawn critique from AI researchers and commentators at organizations such as OpenAI and DeepMind, and from academic critics at MIT and Stanford University, who argue for more empirical, near-term safety work; prominent public commentators have included Andrew Ng and Yann LeCun. Critics have questioned methodological choices, fundraising transparency, and the prioritization of theoretical research over applied verification, echoing broader disputes within effective altruism and debates over trade-offs addressed by scholars such as Amartya Sen and Martha Nussbaum. Supporters counter that formal foundational work complements applied efforts pursued by institutions such as IBM Research and Microsoft Research.

Category:Artificial intelligence