LLMpedia: The first transparent, open encyclopedia generated by LLMs

DORA (Declaration on Research Assessment)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: Extracted 69 → After dedup 0 → After NER 0 → Enqueued 0
DORA (Declaration on Research Assessment)
Full name: San Francisco Declaration on Research Assessment
Abbreviation: DORA
Formed: 2012
Purpose: Research evaluation reform
Location: San Francisco

DORA (Declaration on Research Assessment) is an international initiative advocating for improved methods of evaluating scholarly research and researchers, emphasizing the limitations of journal-based metrics and calling for qualitative assessment. It originated from a meeting that gathered editors, funders, publishers, and researchers to address incentives in scholarly communication and has influenced policies across universities, funding agencies, and publishers.

Background and origins

The initiative began at a meeting convened during the American Society for Cell Biology (ASCB) annual meeting in San Francisco in December 2012, attended by representatives of the National Institutes of Health, the Wellcome Trust, the European Commission, and the Howard Hughes Medical Institute, along with stakeholders from Nature, Science, and PLOS. Founders included editorial staff from eLife and leading scholars associated with the University of California, San Francisco, the University of Oxford, and the Max Planck Society. The meeting was in part a response to debates involving figures such as Eugene Garfield, the history of Journal Citation Reports, controversies over impact factor use highlighted by editors at The Lancet and commentators at Retraction Watch, and policy discussions involving the National Science Foundation and Research Councils UK.

Principles and recommendations

The declaration articulated principles discouraging the use of metrics such as the Journal Impact Factor as a surrogate measure of the quality of individual research and urged signatories to evaluate research on its own merits. It recommended that institutions such as Harvard University, the University of Cambridge, and Stanford University, and funders including the Gates Foundation and the European Research Council, adopt practices that consider outputs such as datasets, software, preprints on arXiv, and monographs from presses such as Oxford University Press and Cambridge University Press. The guidance drew on earlier critiques by scholars linked to the Institut Pasteur, the Karolinska Institute, and Johns Hopkins University, as well as advocates from the American Association for the Advancement of Science.

Adoption and signatories

Signatories range from individual researchers to major organizations. Early institutional endorsers included the Wellcome Trust, the Howard Hughes Medical Institute, the Mount Sinai Health System, and universities such as the University of Toronto and the University of Melbourne. Publisher and journal signatories have included PLOS, eLife, Nature Publishing Group, and BMJ. Funders and agencies endorsing the declaration include the National Institutes of Health, the Canadian Institutes of Health Research, the Australian Research Council, and the Deutsche Forschungsgemeinschaft. Numerous professional societies, libraries such as the British Library, and research organizations such as CERN have also signed.

Impact on research assessment practices

Adoption of the declaration has prompted policy shifts at organizations such as European Commission programmes, Research Councils UK, and the National Institutes of Health, which have piloted narrative CVs, responsible-metrics frameworks, and narrative-based review. Universities including University College London, Princeton University, and ETH Zurich have revised hiring and promotion guidelines to de-emphasize journal metrics and to recognize diverse outputs such as software repositories on GitHub and data archives such as Dryad. Publishers including PLOS and Elsevier have developed article-level metrics and tools aligned with the declaration's recommendations. The movement has intersected with initiatives such as OpenAIRE and SPARC and with the development of standards by the Committee on Publication Ethics.

Criticisms and limitations

Critics associated with Clarivate Analytics and commentators in The Economist argue that alternatives to established metrics can be subjective and harder to standardize across large systems such as the European Research Area or national assessment exercises such as the Research Excellence Framework. Some scholars affiliated with Yale University and the Université de Montréal have noted uneven uptake across regions and disciplines, with humanities publishers such as Oxford University Press and smaller learned societies slower to change their evaluation cultures. Others linked to Elsevier and Clarivate contend that quantitative indicators remain useful for large-scale benchmarking despite the declaration's cautions.

Implementation and policy examples

Specific implementations inspired by the declaration include narrative CV formats adopted by the European Research Council, changes to Wellcome Trust grant application processes, and revised promotion criteria at the University of Amsterdam and the University of Cape Town. National bodies such as Research England and funders such as the NIHR have issued guidelines aligning with the declaration, while consortia including COAR and initiatives such as the FAIR data principles promote infrastructure for diverse research outputs. Implementation often involves coordination with institutional offices, such as those at the Massachusetts Institute of Technology, Columbia University, and the University of Sydney, to embed new assessment criteria into hiring, tenure, and grant review processes.

Category:Research evaluation