| OECD Principles on Artificial Intelligence | |
|---|---|
| Title | OECD Principles on Artificial Intelligence |
| Adopted | 2019 |
| Organization | Organisation for Economic Co-operation and Development |
| Location | Paris |
| Status | Adopted |
The OECD Principles on Artificial Intelligence are a set of guidelines adopted in 2019 by the Organisation for Economic Co-operation and Development (OECD) in Paris, with participation from countries such as the United States, the United Kingdom, Germany, France, and Japan, and from organizations including the European Commission and the United Nations Educational, Scientific and Cultural Organization (UNESCO). The Principles aim to promote innovative and trustworthy AI deployment consistent with international commitments under instruments such as the Universal Declaration of Human Rights and with discussions in the G20 and G7, while engaging stakeholders including the World Economic Forum, the IEEE Standards Association, and national bodies such as the National Institute of Standards and Technology.
The instrument emerged from multilateral negotiations involving the OECD, delegations from Canada, Australia, Italy, Korea, and the Netherlands, and consultations with experts associated with the Massachusetts Institute of Technology, Stanford University, the University of Oxford, Tsinghua University, and think tanks such as the Brookings Institution and the Center for Strategic and International Studies. Drafting referenced precedent documents including reports by the European Commission's High-Level Expert Group on AI, the Council of Europe's advisory work, United Nations deliberations at the Internet Governance Forum, and standards work by the International Organization for Standardization. Adoption involved ministers from Sweden, Norway, and Spain, as well as observers from China and India, who engaged via joint sessions with OECD committees and intergovernmental fora such as the OECD Forum and the UNESCO Global Policy Lab.
The Principles set out five high-level, values-based imperatives: inclusive growth, sustainable development and well-being; human-centred values and fairness, aligned with instruments such as the Universal Declaration of Human Rights and objectives pursued under the Sustainable Development Goals, and invoking notions of proportionality familiar from the European Convention on Human Rights; transparency and explainability, consistent with standards from the IEEE Standards Association, traceability methods endorsed by researchers at Carnegie Mellon University and the University of Cambridge, and disclosure regimes discussed at the World Trade Organization and the World Health Organization; robustness, security, and safety, drawing on work from DeepMind and OpenAI; and accountability, echoing practices at the International Criminal Court and anticipating regulatory models pursued by the European Union and the United Kingdom's advisory bodies.
Implementation pathways encouraged national policy instruments such as AI strategies, exemplified by Canada's strategy, the United Kingdom's guidance, China's roadmap, and Singapore's model; they reference institutional actors such as the OECD AI Policy Observatory, regulatory agencies like the European Data Protection Board, and funding programs from agencies like the European Investment Bank and the National Science Foundation. The Principles recommend governance measures paralleling frameworks used by the Federal Trade Commission, legal approaches analogous to the General Data Protection Regulation, and procurement practices evident in United States Department of Defense acquisitions, while advising alignment with standards developed by the International Telecommunication Union and the International Organization for Standardization.
Following adoption, numerous states including Mexico, Chile, South Africa, Brazil, and Israel referenced the Principles in national roadmaps, in bilateral agreements with blocs such as the European Union, and in forums like the G20 and APEC. Multilateral organizations including UNESCO, the World Bank, and the International Monetary Fund have cited the Principles in policy guidance and funding conditions, while private sector consortia like the Partnership on AI and corporations such as Google, Microsoft, Amazon, and IBM have aligned internal policies with the Principles alongside standards work at the Institute of Electrical and Electronics Engineers.
Critics from civil society groups, including Amnesty International and Human Rights Watch, argue the Principles lack binding enforcement mechanisms comparable to treaties like the Geneva Conventions or statutory regimes such as the California Consumer Privacy Act, and scholars at Harvard University, the London School of Economics, and Yale University have highlighted gaps in remedy and liability compared with proposals debated in the European Parliament and litigation in courts such as the European Court of Human Rights. Industry commentators linked to Silicon Valley startups and venture capitalists on Wall Street note tensions between the innovation incentives seen in United States policy and the precautionary approaches favored by actors in the European Union, while privacy advocates compare the Principles unfavorably to the enforceable protections in Brazil's data protection statute and to enforcement trends in Israel and South Korea.
The Principles influenced corporate governance at firms including Facebook, Apple Inc., Baidu, and Tencent, and research agendas at institutions such as the Oxford Internet Institute, the Allen Institute for AI, the MIT Media Lab, and Google DeepMind, shaping funding priorities at agencies such as the European Research Council and collaborative projects under the Horizon Europe program. They informed standards alignment at bodies including the International Organization for Standardization and the Institute of Electrical and Electronics Engineers, as well as public procurement reforms in municipalities such as New York City, Paris, Tokyo, and Seoul, and catalyzed academic work published in journals associated with Nature, Science, and the Journal of Artificial Intelligence Research, and at conference forums such as NeurIPS, ICML, and IJCAI.
Category:Artificial intelligence governance