| CWTS Leiden Ranking | |
|---|---|
| Name | CWTS Leiden Ranking |
| Established | 2007 |
| Publisher | Centre for Science and Technology Studies |
| Country | Netherlands |
| Discipline | Bibliometrics |
| Frequency | Annual |
The CWTS Leiden Ranking is an annual bibliometric ranking produced by the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands. It compares the research performance of universities and research institutions using citation-based indicators, relying on bibliometric output rather than reputational surveys. The ranking is widely used alongside lists such as the Times Higher Education World University Rankings, the QS World University Rankings, and the Academic Ranking of World Universities for institutional benchmarking.
CWTS launched the ranking to provide an evidence-based alternative to perception-driven lists such as those published by U.S. News & World Report and the Shanghai Ranking Consultancy. It emphasizes transparency and methodological rigor, and has drawn attention from ministries such as the Dutch Ministry of Education, Culture and Science, funding agencies such as the European Research Council, and universities including the University of Oxford, Harvard University, Stanford University, and the University of Cambridge. Stakeholders in higher education policy, including the Organisation for Economic Co-operation and Development (OECD) and the European Commission, use its data alongside bibliometric resources from Clarivate Analytics and Elsevier, and the ranking regularly features in analyses by media outlets such as The Guardian, The New York Times, and The Economist.
CWTS applies a field-normalized citation-impact approach rooted in bibliometric research at CWTS itself and at research groups within the Max Planck Society. The methodology includes fractional counting techniques echoing methods developed at the Institut de l'Information Scientifique et Technique, and statistical normalization similar to approaches used by Scopus and Web of Science analysts. Methodological debates draw on contributions from researchers at Clarivate Analytics and Elsevier as well as independent bibliometricians, including those at Leiden University Medical Center. The ranking distinguishes between whole-counting and fractional-counting schemes: under whole counting, a co-authored paper counts fully for every participating institution, while under fractional counting each institution receives a proportional share, as illustrated in the sketch below. It also addresses issues raised in reports from the Royal Society and the National Academies of Sciences, Engineering, and Medicine.
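The following is a minimal sketch of the whole-count versus fractional-count distinction, using a toy publication record; the function and data are illustrative assumptions, not CWTS's actual implementation.

```python
from collections import defaultdict

def count_publications(publications, fractional=True):
    """Credit institutions for papers under whole or fractional counting.

    publications: a list of papers, each given as a list of the
    institutions affiliated with it.
    """
    credit = defaultdict(float)
    for institutions in publications:
        unique = set(institutions)
        # Fractional counting splits one unit of credit equally across
        # institutions; whole counting gives each the full unit.
        share = 1.0 / len(unique) if fractional else 1.0
        for inst in sorted(unique):
            credit[inst] += share
    return dict(credit)

papers = [
    ["Leiden", "Oxford", "MIT"],  # three-institution collaboration
    ["Leiden"],                   # single-institution paper
]
print(count_publications(papers, fractional=True))
# {'Leiden': 1.33..., 'MIT': 0.33..., 'Oxford': 0.33...}  (credits sum to 2 papers)
print(count_publications(papers, fractional=False))
# {'Leiden': 2.0, 'MIT': 1.0, 'Oxford': 1.0}  (credits sum to 4)
```

Fractional counting keeps the total credit equal to the number of papers, which is why it is often preferred when comparing institutions of different collaboration profiles.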
Key indicators include the proportion of a university's publications among the top 10% most cited in their field (PP(top 10%)), the mean normalized citation score (MNCS), and collaboration metrics such as the proportion of internationally co-authored publications. These indicators relate to frameworks used in the Horizon 2020 programme and to evaluation practices at agencies such as the National Science Foundation and the German Research Foundation. Other metrics account for citation windows and document types, comparable to measures used by European Research Council panels and by institutions such as the Massachusetts Institute of Technology and the California Institute of Technology.
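In simplified form (following standard presentations in the bibliometric literature; the production indicators include further refinements), the MNCS of an institution with n publications can be written as

```latex
\mathrm{MNCS} = \frac{1}{n} \sum_{i=1}^{n} \frac{c_i}{e_i}
```

where c_i is the number of citations received by publication i and e_i is the average number of citations of publications from the same field and publication year. An MNCS of 1 therefore means the institution's output is cited at the world average for its fields; PP(top 10%) is, analogously, the share of the n publications whose citation counts fall in the top decile of their field and year.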
The ranking relies primarily on bibliographic data from the Web of Science database, so its coverage reflects the indexing choices of Clarivate Analytics alongside bibliometric datasets curated by CWTS and research infrastructures such as OpenAIRE. Institutional affiliation data intersect with authority files maintained by organizations such as ORCID and with address data examined by scholars at Delft University of Technology and Utrecht University. Its geographic and language coverage invites comparison with national repositories such as NARCIS and aggregators such as Crossref, while its discipline delineation draws on classification systems used by Scopus and on disciplinary standards promoted at European University Association meetings.
Universities and research funders, including the Wellcome Trust, the Bill & Melinda Gates Foundation, and national ministries, use the ranking for benchmarking, strategic planning, and accountability exercises, much as in evaluations by the Research Excellence Framework and the Higher Education Funding Council for England. Critics from academic organizations such as the International Association of Universities, commentators in Science and Nature, and policy groups at the OECD have pointed to limitations concerning citation bias, coverage of non-English outputs, and potential gaming, echoing critiques of lists such as ShanghaiRanking's Academic Ranking of World Universities. These debates reference case studies involving institutions such as the University of Tokyo, Peking University, and the University of Melbourne, and draw on methodological critiques published by researchers at Utrecht University and Leiden University.
Since its inception, the ranking has evolved in indicator design and in its handling of multidisciplinary research, reflecting developments in bibliometric scholarship from groups at the Max Planck Institute for Solid State Research and policy shifts within the European Commission. Historical trend analyses compare the longitudinal performance of institutions such as Princeton University, Yale University, and Columbia University and have informed reports by national agencies such as the Swedish Research Council and the Danish Agency for Science and Higher Education. The ranking's adjustments over time mirror broader changes in bibliometric practice debated at conferences of the International Society for Scientometrics and Informetrics and at workshops at Leiden University.