| Leiden Manifesto | |
|---|---|
| Name | Leiden Manifesto |
| Date | 2015 |
| Authors | Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke, Ismael Rafols |
| Location | Leiden |
| Subject | Research evaluation, scientometrics, bibliometrics |
Leiden Manifesto
The Leiden Manifesto is a statement of ten principles for responsible research evaluation, published in Nature in 2015 to guide policy and practice in the assessment of scientific performance. It distills lessons from debates about the use of bibliometric indicators, citation data, and metrics-based decision-making involving data providers such as Thomson Reuters (later Clarivate Analytics), successor to the Institute for Scientific Information, as well as funders and policy bodies including the National Science Foundation, the European Commission, Research Councils UK, and the National Institutes of Health. The document became influential across universities, funding agencies, and professional associations, including the American Association for the Advancement of Science, the Royal Society, and national academies of science.
The manifesto emerged amid controversies over ranking systems such as the Times Higher Education World University Rankings, the QS World University Rankings, and the Academic Ranking of World Universities, and over journal-level measures such as the Journal Impact Factor. Critics within the International Council for Science, the Committee on Publication Ethics, and research policy forums argued that reliance on indicators produced by Elsevier and Clarivate Analytics created perverse incentives and invited gaming, as seen in the controversies around Beall's List, in institutional hiring disputes, and in grant allocation debates in countries including the United States, the United Kingdom, China, India, and Brazil. Parallel discussions in scientometrics drew on the historical work of figures and institutions such as Eugene Garfield, Derek de Solla Price, the Institute for Scientific Information, the Centre for Science and Technology Studies, and the Royal Netherlands Academy of Arts and Sciences. The manifesto was framed both as a corrective to simplistic interpretations of metrics such as citation counts, the h-index, and impact factors, and as a complement to qualitative assessment as practiced by European Research Council panels and national peer review systems such as the UK Research Excellence Framework.
The manifesto articulates ten principles intended to balance quantitative indicators with qualitative judgment by experts from bodies such as the National Academy of Sciences and the European University Association. It emphasizes accuracy, transparency, and the need for field-normalized indicators rooted in databases such as Web of Science, Scopus, and Google Scholar. The principles caution against misusing journal-level metrics such as the Journal Impact Factor to assess individuals, and recommend context-aware indicators such as normalized citation scores, percentile ranks, and altmetrics monitored by platforms including Altmetric, PlumX, and Crossref. It advocates openness of data and methods, reflecting the norms of open science initiatives championed by Plan S, the European Open Science Cloud, and funders such as the Wellcome Trust. The manifesto promotes combining metrics with expert peer review as practiced by National Science Foundation panels, European Research Council review panels, and committees of the Royal Society.
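The idea of field normalization mentioned above can be made concrete with a small example. The following Python sketch computes a score in the spirit of the CWTS mean normalized citation score (MNCS): each paper's citation count is divided by the average citation count of database papers from the same field and publication year, and the ratios are averaged. The paper records and cohort means here are hypothetical stand-ins; real calculations draw reference cohorts from a full database such as Web of Science or Scopus.

```python
def mean_normalized_citation_score(papers, cohort_means):
    """Sketch of a field-normalized indicator in the spirit of the
    CWTS MNCS. A score of 1.0 corresponds to citation impact at the
    world average for the paper's field and year, which is what makes
    cross-field comparison meaningful.

    papers: list of dicts with 'field', 'year', 'citations' keys
            (a simplified stand-in for real bibliometric records).
    cohort_means: (field, year) -> mean citations in the reference
                  database, supplied externally, not computed from
                  the evaluated set itself.
    """
    ratios = [
        p["citations"] / cohort_means[(p["field"], p["year"])]
        for p in papers
        if cohort_means.get((p["field"], p["year"]), 0) > 0
    ]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Hypothetical cohort means; values are illustrative only.
cohort_means = {
    ("economics", 2015): 4.0,
    ("physics", 2015): 12.0,
}
unit_output = [
    {"field": "economics", "year": 2015, "citations": 10},
    {"field": "economics", "year": 2015, "citations": 2},
    {"field": "physics", "year": 2015, "citations": 24},
]
# (10/4 + 2/4 + 24/12) / 3 = (2.5 + 0.5 + 2.0) / 3 ~= 1.67
print(mean_normalized_citation_score(unit_output, cohort_means))
```

Dividing by a field-and-year cohort mean, rather than comparing raw counts, is what prevents a physics paper from dominating an economics paper simply because physics accumulates citations faster.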
Drafted primarily by researchers at the Centre for Science and Technology Studies (CWTS) at Leiden University, the manifesto drew on contributions and critiques from scholars associated with Delft University of Technology, the University of Amsterdam, the developers of VOSviewer, and international collaborators at Harvard University, the University of California, Berkeley, the Max Planck Society, and the Chinese Academy of Sciences. Its authors were Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke, and Ismael Rafols. The drafting process was informed by earlier documents and initiatives such as the San Francisco Declaration on Research Assessment, by discussions at the International Conference on Science and Technology Indicators, and by workshops organized by European research evaluation networks and the OECD. Drafts circulated among policy-makers in European Commission directorates and administrators at the University of Oxford and the University of Cambridge, with feedback from the editorial boards of journals such as Nature and Science.
The manifesto gained endorsements from universities, learned societies such as the Royal Society of Canada, and national assessment agencies including the Australian Research Council and Research Councils UK. It stimulated policy responses from funders such as the Wellcome Trust and influenced statements by publishers like Springer Nature and consortia including cOAlition S. Critics tied to ranking providers, including corporate units within Elsevier and Clarivate Analytics, pointed to practical limits on replacing established metrics. The manifesto reinforced complementary efforts such as the earlier San Francisco Declaration on Research Assessment and spurred committees within the European University Association, the Association of American Universities, and national academies to issue guidance on metrics. Its recommendations have been cited in reforms to national evaluation exercises in Italy, Portugal, and Flanders, and in debates over appointment practices at major institutions such as Harvard, MIT, and Stanford.
Institutions implemented the manifesto's guidance by adopting policies that favor field-normalized citation indicators, mixed-methods assessment, and transparent documentation of evaluation criteria. The tools and infrastructures involved include Web of Science, Scopus, and Dimensions, as well as institutional repositories linked to ORCID identifiers and Crossref metadata. Many universities, including the University of Toronto, the University of Melbourne, and the National University of Singapore, revised promotion criteria in faculty handbooks to de-emphasize simple metrics such as the h-index. Funding agencies recalibrated proposal review rubrics at the European Research Council and introduced metrics literacy training for staff at the National Institutes of Health and the Canadian Institutes of Health Research. Despite this uptake, challenges persist in resource-limited settings, such as some universities in Africa and Latin America, where bibliometric infrastructure is less accessible and local scholarly communication practices require tailored evaluation approaches.
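Percentile ranks, mentioned earlier as a context-aware alternative, sidestep the heavy skew of citation distributions that makes means unstable. As a companion to the normalization sketch above, the snippet below ranks a paper's citation count within its field-year cohort using a mid-rank convention (one of several conventions in use); the cohort data is illustrative, not drawn from any real database.

```python
from bisect import bisect_left, bisect_right

def citation_percentile(citations, cohort_citations):
    """Percentile rank of a paper within its field-year cohort:
    the share of cohort papers cited less often, plus half the
    share cited equally often (mid-rank convention)."""
    ranked = sorted(cohort_citations)
    below = bisect_left(ranked, citations)           # papers cited less
    equal = bisect_right(ranked, citations) - below  # papers cited equally
    return 100.0 * (below + 0.5 * equal) / len(ranked)

# Illustrative cohort: citation counts of same-field, same-year papers.
cohort = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]
print(citation_percentile(13, cohort))  # 75.0 -> top quartile
```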