| Scopus CiteScore | |
|---|---|
| Name | CiteScore |
| Producer | Elsevier |
| Launched | 2016 |
| Type | Bibliometric indicator |
| Coverage | Scopus-indexed titles |
| Frequency | Annual |
| Purpose | Journal impact assessment |
Scopus CiteScore is a journal-level bibliometric indicator produced by Elsevier that quantifies the citation impact of serial publications indexed in Scopus. It reports an annual ratio of citations received to documents published over a defined window, intended to inform researchers, librarians, funders and publishers about comparative influence among journals. The metric sits alongside alternative indicators in the scholarly communications ecosystem and has prompted debate among stakeholders including publishers such as Elsevier, indexing services such as Clarivate and PubMed, and academic institutions like Harvard University and the University of Oxford.
CiteScore reports a single numeric value for each Scopus-indexed title and provides percentile ranks and subject-category comparisons for titles indexed on Elsevier's Scopus platform. Stakeholders from the National Institutes of Health, the European Commission, the Wellcome Trust, the Howard Hughes Medical Institute and university libraries including Yale University and the University of Cambridge use CiteScore alongside other metrics such as those from Clarivate Analytics and altmetrics aggregators like Altmetric. Major publishers including Springer Nature, Wiley-Blackwell, and Taylor & Francis monitor CiteScore trends for portfolio analysis. Professional societies such as the American Chemical Society, the Royal Society and IEEE consider CiteScore among multiple indicators when reporting journal performance.
CiteScore is calculated as a ratio: citations received over a four-year window to documents published in that same window. The numerator comprises citations recorded in Scopus to any document type published during the window, while the denominator counts documents indexed in Scopus over the same four years. Calculation details intersect with the indexing policies of Elsevier and the metadata standards of cross-referencing entities including Crossref, Digital Science and DataCite. The methodology relates to citation practices observed across disciplines represented in Scopus subject classifications, such as those used by the United Nations Educational, Scientific and Cultural Organization and managed collections at institutions like the Max Planck Society and the Chinese Academy of Sciences.
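The ratio above can be sketched in a few lines. This is a simplified model for illustration only; the journal figures below are invented, and real values come from Scopus-indexed citation and document counts.

```python
def cite_score(citations_in_window: int, documents_in_window: int) -> float:
    """CiteScore-style ratio: citations received in the four-year window
    divided by documents published in that same window (rounded to 1 dp,
    as displayed on the Scopus platform)."""
    if documents_in_window == 0:
        raise ValueError("no documents published in the window")
    return round(citations_in_window / documents_in_window, 1)

# Hypothetical journal: 12,400 citations to 3,100 documents over the window.
print(cite_score(12400, 3100))  # -> 4.0
```

Because all document types enter both numerator and denominator, journals with many uncited items (editorials, news pieces) see the ratio diluted relative to metrics that count only citable items.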
CiteScore relies entirely on the Scopus database, which aggregates records from publishers including Elsevier, Springer Nature, Wiley-Blackwell, Taylor & Francis, SAGE Publications and society publishers like the American Physical Society. Source coverage is curated by Scopus content selection teams and is influenced by standards set by organizations such as the Committee on Publication Ethics and the International Committee of Medical Journal Editors. Bibliographic metadata and citations originate from publisher-supplied feeds, indexed conference proceedings such as those from IEEE, and book series indexed with partners like Brill. Inclusion decisions intersect with national repositories and aggregators including PubMed Central, CNKI, and institutional repositories at the Massachusetts Institute of Technology.
CiteScore is frequently compared to the Journal Impact Factor produced by Clarivate and the Eigenfactor Score reported in Clarivate's Journal Citation Reports; differences include citation windows, document-type inclusion, and calculation transparency. Unlike the Journal Impact Factor's two-year window (historically maintained by Thomson Reuters), CiteScore uses a four-year window and includes all document types, producing divergence from metrics favored by funders such as the Wellcome Trust and ranking services used by Times Higher Education and the QS World University Rankings. Other alternatives include the Article Influence Score, SNIP developed by CWTS, and SJR maintained by SCImago; librarians at institutions such as the University of California and Columbia University often present multiple metrics in collection-development decisions.
Critiques of CiteScore mirror long-standing criticisms of journal metrics: susceptibility to disciplinary citation patterns (compare mathematics titles from the American Mathematical Society with life-science journals from Cell Press), the influence of editorial policies at publishers like Elsevier and Wiley, and potential manipulation via self-citation or editorial practices spotlighted in disputes involving entities such as Springer Nature. Scholars at Stanford University and Princeton University have emphasized that aggregate journal-level indicators do not substitute for article-level peer review, echoing principles from the San Francisco Declaration on Research Assessment supported by institutions including HHMI and the Wellcome Trust. Additional limitations stem from coverage biases toward English-language and Western publishers, affecting representation of work from the Universidade de São Paulo, the University of Cape Town, and other global institutions.
Elsevier has revised CiteScore methodology and presentation since its launch, including annual recalculations, the addition of percentile and quartile reporting, and adjustments to indexing practices coordinated with editorial committees and content selection review boards. Revisions have involved consultation with stakeholders such as the International Association of Scientific, Technical and Medical Publishers, national academies including the Royal Society, and research analytics vendors including Clarivate and Digital Science. Changes are reflected in Scopus platform releases and communicated to academic libraries, consortia like Jisc, and funding bodies such as the European Research Council.
CiteScore informs journal selection by researchers at laboratories and departments across institutions including the California Institute of Technology, ETH Zurich, the University of Toronto and Peking University; it is used in collection development by university libraries and in publisher portfolio assessments at companies like Elsevier and Wiley. Funders and hiring committees at bodies such as the National Science Foundation, the Australian Research Council and the Medical Research Council (UK) may view CiteScore as one input among many, while advocates of research quality assessment from DORA and national research evaluation exercises like the Research Excellence Framework emphasize multi-dimensional assessment beyond single metrics. The metric's visibility influences editorial strategies at journals from Nature Publishing Group to smaller society titles, and shapes researcher decisions about manuscript submission and readership discovery.