
MIRI

MIRI
Name: Machine Intelligence Research Institute
Established: 2000 (as the Singularity Institute)
Founder: Eliezer Yudkowsky
Type: Nonprofit organization
Focus: Artificial intelligence safety
Headquarters: Berkeley, California
Key people: Nate Soares

The Machine Intelligence Research Institute (MIRI) is a nonprofit organization focused on technical research aimed at ensuring that the development of advanced artificial intelligence has a positive impact. Founded in 2000, it is one of the pioneering organizations dedicated to the long-term problem of AI alignment. Its work involves foundational mathematical and logical research addressing the potential risks posed by the emergence of superintelligent systems.

Overview

The institute operates as a central hub for theoretical work on the control problem, seeking to align advanced AI systems with human values and intentions. Its mission is rooted in concerns about existential risk from artificial intelligence, a topic also explored by institutions such as the Future of Humanity Institute and the Centre for the Study of Existential Risk. Researchers at the institute, including Nate Soares and Scott Garrabrant, publish on topics such as decision theory and logical uncertainty.

Research areas

Primary technical work is concentrated in several interconnected domains. A major focus is agent foundations, which involves building simplified formal models of intelligent agents to study issues such as corrigibility and value learning. Another key area is logical induction, a framework for assigning sensible probabilities to logical statements whose truth the reasoner cannot yet compute; a simplified statement of its central criterion is sketched below. Additional research threads include cooperative AI and game theory, investigating how multiple advanced systems might interact, and work on formal verification methods intended to ensure robust AI behavior.
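To give a flavor of this framework, the logical induction paper by Garrabrant and coauthors treats a reasoner's beliefs as prices in a market of shares on logical sentences, and judges a sequence of belief states by whether any efficient trading strategy can exploit it. The LaTeX sketch below is a condensed paraphrase of that criterion, not the paper's exact formalism; the symbol W_n is hypothetical shorthand introduced here for the value of a trader's holdings on day n.

% Condensed paraphrase of the logical induction criterion
% (Garrabrant et al., "Logical Induction", 2016).
% W_n(T) is hypothetical shorthand for the value of trader T's
% holdings on day n; the paper's formalism is more involved.
\[
\overline{\mathbb{P}} = (\mathbb{P}_1, \mathbb{P}_2, \ldots),
\qquad \mathbb{P}_n(\phi) \in [0,1]
\quad \text{(day-$n$ price of a share in sentence } \phi\text{)},
\]
\[
\overline{\mathbb{P}} \text{ is a logical inductor} \iff
\text{no efficiently computable trader } T \text{ has }
\inf_n W_n(T) > -\infty \ \text{ and } \ \sup_n W_n(T) = \infty.
\]

Informally, the belief sequence is well-calibrated if no polynomial-time trading strategy can make unbounded gains against its prices while risking only a bounded loss.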

History and organization

Founded in 2000 by Eliezer Yudkowsky as the Singularity Institute for Artificial Intelligence, the organization adopted its current name in 2013. Its early activities were closely associated with the LessWrong community and included organizing conferences such as the Singularity Summit. Luke Muehlhauser served as executive director from 2011 to 2015, when Nate Soares took over the role; other long-serving staff include Rob Bensinger. The institute is based in Berkeley, California, and has maintained collaborative ties with academic groups at Oxford University and the University of California, Berkeley.

Key publications and findings

Researchers have produced influential papers and technical reports that have shaped the field. Seminal works include "Logical Induction" by Scott Garrabrant and coauthors, which presents a new theory of reasoning under logical uncertainty. The "Agent Foundations" agenda has motivated publications on decision theory, building on Eliezer Yudkowsky's earlier work on timeless decision theory. Earlier influential writings by Yudkowsky, notably the Sequences published on LessWrong, laid much of the philosophical groundwork for the institute's research direction.

Funding and partnerships

Financial support has come from a variety of sources, including philanthropic donations from individuals and foundations concerned with global catastrophic risk. Significant past funders have included the Open Philanthropy Project and Peter Thiel. The institute has also received grants for specific research initiatives and has collaborated with other organizations in the effective altruism ecosystem, such as the Centre for Effective Altruism.

Reception and criticism

The institute's focus on long-term, speculative risks from artificial general intelligence has been both influential and contentious. It is credited with helping to establish AI safety as a serious field of study, influencing work at DeepMind and the Future of Humanity Institute. However, critics such as the roboticist Rodney Brooks, along with some members of communities like the Association for the Advancement of Artificial Intelligence, have questioned its emphasis on distant existential risks over near-term technical challenges in machine learning.

Category:Artificial intelligence organizations Category:Non-profit organizations based in California Category:Effective altruism