| FAIR principles | |
|---|---|
| Name | FAIR principles |
| Caption | Principles for data stewardship |
| Introduced | 2016 |
| Authors | Mark D. Wilkinson, Michel Dumontier, Barend Mons, et al. |
| Field | Data management, Scholarly communication, Digital preservation |
The FAIR principles are a set of guidelines intended to make digital assets Findable, Accessible, Interoperable, and Reusable by both humans and machines. Introduced in a 2016 article in *Scientific Data*, the principles have influenced research data management, institutional policies, and funding mandates across the global scholarly ecosystem. They are applied in domains ranging from the life sciences to the social sciences and are complemented by technical standards and community practices.
The FAIR principles were articulated to address data-stewardship challenges faced by stakeholders such as the European Commission, the National Institutes of Health, the Wellcome Trust, Research Councils UK, and the European Research Council. They respond to incentives and mandates introduced by organizations including the Organisation for Economic Co-operation and Development and the Group on Earth Observations, and align with infrastructure efforts such as the European Open Science Cloud and National Science Foundation initiatives. The principles emphasize machine-actionability, building on ideas from Digital Object Identifier systems, Linked Data initiatives, and repositories such as the Dryad Digital Repository and Zenodo.
Findable: Assets should carry persistent identifiers such as Digital Object Identifiers (DOIs) and be described with rich metadata registered in services such as DataCite; contributor identifiers such as ORCID link datasets to the people who produced them. Findability is advanced by indexing in infrastructures such as Google Scholar and Crossref, and in domain-specific aggregators such as PubMed and Europe PMC.
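A rich, machine-readable metadata record is the concrete unit of findability. The sketch below builds a minimal Schema.org `Dataset` description in JSON-LD; every value (the DOI, the title, the ORCID iD) is an illustrative placeholder, not a real record.

```python
import json

# A minimal, hypothetical Schema.org "Dataset" record in JSON-LD.
# All identifiers and names below are placeholders for illustration.
dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "@id": "https://doi.org/10.1234/example-doi",  # persistent identifier
    "name": "Example survey dataset",
    "description": "Illustrative metadata record for findability.",
    "identifier": "10.1234/example-doi",
    "creator": {
        "@type": "Person",
        "name": "Jane Researcher",
        # ORCID iD (placeholder) linking the dataset to its creator
        "@id": "https://orcid.org/0000-0000-0000-0000",
    },
    "keywords": ["FAIR", "example"],
}

print(json.dumps(dataset_metadata, indent=2))
```

Embedding a record like this in a dataset's landing page is what allows aggregators to index it without human intervention.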
Accessible: Access conditions and protocols should be clearly specified using standards such as the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and authentication frameworks such as OAuth; licensing and access statements often reference instruments such as Creative Commons licenses or legal frameworks including the General Data Protection Regulation (GDPR).
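OAI-PMH keeps access simple by design: every request is a plain HTTP GET with a required `verb` parameter. The sketch below assembles such a request; the repository base URL is a hypothetical stand-in.

```python
from urllib.parse import urlencode

# Hypothetical OAI-PMH endpoint; real repositories expose a similar base URL.
BASE_URL = "https://repository.example.org/oai"

def oai_request(verb: str, **params: str) -> str:
    """Build an OAI-PMH request URL. The protocol is plain HTTP GET
    with a mandatory 'verb' parameter (e.g. Identify, ListRecords)."""
    query = urlencode({"verb": verb, **params})
    return f"{BASE_URL}?{query}"

# Harvest Dublin Core records; 'oai_dc' is the metadata format
# every OAI-PMH repository is required to support.
url = oai_request("ListRecords", metadataPrefix="oai_dc")
print(url)
```

Because the protocol is just parameterized HTTP, any client that can issue a GET request can harvest metadata, which is the accessibility property the principle asks for.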
Interoperable: Data and metadata should use shared vocabularies, ontologies, and formats exemplified by the Resource Description Framework (RDF), the Web Ontology Language (OWL), JSON-LD, and community models such as Dublin Core and Schema.org; semantic resources such as the Gene Ontology and SNOMED CT illustrate domain-specific interoperability.
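The mechanism that makes shared vocabularies work in JSON-LD is the `@context`: a mapping from local field names to globally unique vocabulary IRIs. The sketch below performs that expansion by hand (a real JSON-LD processor does much more) using Dublin Core term IRIs.

```python
# A sketch of how a JSON-LD @context maps local keys to shared
# vocabulary IRIs (Dublin Core terms here). Expansion is done by
# hand to keep the example dependency-free; a JSON-LD library
# would handle nesting, datatypes, and language tags as well.
CONTEXT = {
    "title": "http://purl.org/dc/terms/title",
    "creator": "http://purl.org/dc/terms/creator",
    "license": "http://purl.org/dc/terms/license",
}

record = {"title": "Example dataset", "creator": "Jane Researcher"}

def expand(record: dict, context: dict) -> dict:
    """Replace local keys with their full vocabulary IRIs."""
    return {context.get(k, k): v for k, v in record.items()}

expanded = expand(record, CONTEXT)
print(expanded)
```

Two systems that disagree on local field names but share the same term IRIs can still merge their records, which is the practical meaning of semantic interoperability.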
Reusable: Rich provenance and license information, together with adherence to community standards such as those endorsed by CODATA and the World Data System, enable reuse. Provenance models such as W3C PROV and citation practices promoted by DataCite and FORCE11 support reproducibility and attribution.
Organizations implement FAIR through policy instruments, technical workflows, and training programs, as seen at the Wellcome Sanger Institute, the European Molecular Biology Laboratory, and Los Alamos National Laboratory. Best practices include assigning persistent identifiers via the Handle System, describing data with metadata standards such as ISO 19115 and catalog software such as GeoNetwork, and embedding provenance using models such as the Open Provenance Model and W3C PROV. Data management plans follow funder templates from the NIH and Horizon 2020 and integrate with repository workflows, as at Figshare and institutional repository programs at universities such as the University of Oxford and Harvard University.
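Handle System identifiers (of which DOIs are a subset, living under the `10.` prefix) have a simple two-part structure: a prefix naming the issuing authority and an opaque suffix. The sketch below splits one; the example identifier is illustrative.

```python
# Handles have the form <prefix>/<suffix>; the prefix identifies the
# naming authority. DOIs are Handles under the "10." prefix.
def split_handle(identifier: str) -> tuple[str, str]:
    """Split a Handle into (prefix, suffix) at the first slash."""
    prefix, _, suffix = identifier.partition("/")
    return prefix, suffix

# Illustrative DOI-style Handle (not a real record).
prefix, suffix = split_handle("10.1234/example.123456")
print(prefix, suffix)
```

Keeping the suffix opaque is a deliberate design choice: it lets the identifier outlive any organizational or semantic scheme encoded in it.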
Community governance, stewardship roles, and certification frameworks, such as CoreTrustSeal (which superseded the Data Seal of Approval), provide operational guidance. Training and capacity building are supported by initiatives including Software Carpentry and Data Carpentry, and by national programs run by European Union member states and United States federal agencies.
Technical toolchains and registries that support FAIR include identifier services such as DataCite and the Handle System; metadata standards such as Dublin Core and ISO 19115; domain models such as MIAME; and standards registries such as FAIRsharing. Interoperability is enabled by serialization formats for RDF and other models, such as RDF/XML and JSON-LD, and by query and harvesting protocols such as SPARQL and OAI-PMH. Ontology and vocabulary services including BioPortal, the Ontology Lookup Service, and W3C recommendations underpin semantic consistency. Platforms and software that facilitate FAIR-aligned publishing and curation include CKAN, Dataverse, Zenodo, and workflow systems such as Galaxy and Nextflow.
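Like OAI-PMH, the SPARQL Protocol rides on plain HTTP: a query string is sent as a `query` parameter. The sketch below builds such a request for Dublin Core dataset titles; the endpoint URL is a hypothetical stand-in (public endpoints such as Wikidata's accept the same protocol).

```python
from urllib.parse import urlencode

# Hypothetical SPARQL endpoint; the SPARQL Protocol defines how a
# query travels over HTTP as a 'query' parameter.
ENDPOINT = "https://sparql.example.org/query"

# Ask for dataset titles via the Dublin Core 'title' property.
query = """
PREFIX dct: <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset dct:title ?title .
} LIMIT 10
"""

request_url = f"{ENDPOINT}?{urlencode({'query': query})}"
print(request_url)
```

Because both the query language and the transport are standardized, the same request works against any conformant triple store, which is what makes SPARQL useful as an interoperability layer rather than a product-specific API.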
FAIR has been widely adopted by funders, publishers, and research infrastructures; examples include European Commission mandates, funder requirements from the Bill & Melinda Gates Foundation, and publisher policies at Nature Research and PLOS. Impact assessments cite increased data discoverability in portals such as FAIRsharing and uptake metrics tracked by organizations such as the Research Data Alliance and ELIXIR. Critics and commentators from institutions such as the University of Edinburgh and groups convened by the OECD point to challenges: ambiguous interpretation of the principles, varied measurement frameworks, and tensions with privacy regimes such as the GDPR. Debates involve balancing openness with ethical constraints, as illustrated in cases reviewed by institutional review boards and legal counsel in jurisdictions including the United Kingdom and the United States.
Overall, FAIR serves as a flexible, technology-agnostic framework that interacts with infrastructures, standards bodies, funders, and communities to improve stewardship of digital assets across disciplines.
Category:Data management