| SCImago Journal Rank | |
|---|---|
| Name | SCImago Journal Rank |
| Abbreviation | SJR |
| Established | 2007 |
| Discipline | Bibliometrics |
| Publisher | SCImago Lab |
| Country | Spain |
| Frequency | Annual |
SCImago Journal Rank is a journal-level bibliometric indicator developed to assess scholarly influence by weighting citations according to the prestige of the citing journals. It was introduced by the SCImago Lab and is built from data derived from the Scopus database maintained by Elsevier; it forms part of a suite of indicators alongside the SCImago Institutions Rankings and related tools. The indicator is widely cited in discussions of journal evaluation, library collection development, research assessment exercises, and academic publishing policy.
The indicator was developed by researchers associated with the SCImago Research Group and draws on citation data curated in Elsevier's Scopus, with institutional partners such as the Spanish National Research Council, the Instituto de Buenas Prácticas, and collaborating universities. It classifies journals into subject areas comparable to the schemes used by Clarivate Analytics and the Web of Science, and its rankings are referenced by funders such as the European Research Council, the National Institutes of Health, and the National Science Foundation, as well as by research universities including Harvard University, the University of Oxford, Stanford University, the Massachusetts Institute of Technology, the University of Cambridge, University College London, Columbia University, the University of California, Berkeley, and the California Institute of Technology. The SJR is presented alongside other bibliometric tools from publishers and organizations such as Wolters Kluwer, Springer Nature, Taylor & Francis, Wiley-Blackwell, and professional societies including the American Chemical Society, IEEE, and the Royal Society.
SJR uses an eigenvector centrality approach related to link-analysis algorithms such as Google's PageRank, with roots in the network science associated with Paul Erdős, Alfréd Rényi, and Mark Newman. Citations are weighted by the citing journal's prestige, which requires iterative calculation over the journal citation network; related citation infrastructures include Clarivate's Journal Citation Reports, CrossRef citation linking, Digital Science tools such as Dimensions, and open platforms like PubMed Central, arXiv, and ORCID. The computation itself draws on Elsevier's Scopus, while platforms including Microsoft Academic (now discontinued), JSTOR, ProQuest, and national repositories such as HAL, RePEc, and CNKI model citation flows in comparable ways. The metric incorporates a three-year citation window and normalization procedures that echo methodological choices in reports produced by organizations such as UNESCO, the OECD, and the European Science Foundation.
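To make the prestige-weighting idea concrete, the following Python sketch runs a PageRank-style iteration over a small, invented journal citation matrix. The matrix values, damping factor, and tolerance are illustrative assumptions only and do not reproduce the published SJR formula, which adds Scopus-specific normalizations, per-document scaling, and the three-year citation window.

```python
import numpy as np

# Toy citation matrix: rows = citing journal, columns = cited journal.
# The counts are invented purely for illustration.
citations = np.array([
    [0.0, 10.0, 3.0],
    [5.0,  0.0, 8.0],
    [2.0,  4.0, 0.0],
])

def prestige_scores(C, damping=0.85, tol=1e-9, max_iter=1000):
    """PageRank-style prestige: a citation counts for more when it comes
    from a journal that is itself cited by prestigious journals."""
    n = C.shape[0]
    out = C.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                        # guard against journals with no outgoing citations
    transfer = C / out                         # each citing journal spreads its prestige proportionally
    scores = np.full(n, 1.0 / n)               # start from a uniform prestige vector
    for _ in range(max_iter):
        new = (1 - damping) / n + damping * transfer.T @ scores
        if np.abs(new - scores).sum() < tol:   # stop once the iteration has converged
            break
        scores = new
    return scores / scores.sum()

print(prestige_scores(citations))              # prestige shares for the three toy journals
```

In the published indicator, the converged prestige is additionally divided by the number of documents a journal publishes, so that large journals do not dominate smaller ones simply by output volume.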
SJR produces journal rankings and subject-area quartiles akin to the tiering used in Clarivate's Journal Citation Reports, SCImago's own subject classifications, and rankings shaped by bibliometricians at the Centre for Science and Technology Studies (CWTS) at Leiden University. The indicator yields related measures such as SJR values, H-index comparisons, citations-per-document statistics, and journal impact profiles, which are compared with Scopus CiteScore, Eigenfactor, SNIP, and coverage in databases such as PubMed. These outputs inform decisions at academies, foundations, and funding bodies including the Wellcome Trust, the Bill & Melinda Gates Foundation, the National Endowment for the Humanities, and the European Commission's Horizon programs, as well as editorial boards at journals published by Nature Publishing Group, The Lancet, Cell Press, PLOS, and the Proceedings of the National Academy of Sciences.
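As a rough illustration of how quartile labels can be derived from ranked indicator values, the sketch below assigns Q1–Q4 by rank position within a set of invented journals; the names and scores are hypothetical, and the SCImago portal assigns quartiles within each subject category rather than across an arbitrary list.

```python
# Invented journals and indicator values, purely for illustration.
scores = {
    "Journal A": 4.2, "Journal B": 1.1, "Journal C": 0.6, "Journal D": 2.8,
    "Journal E": 0.3, "Journal F": 1.9, "Journal G": 0.9, "Journal H": 3.5,
}

ranked = sorted(scores, key=scores.get, reverse=True)   # highest indicator value first
n = len(ranked)

for position, name in enumerate(ranked):
    # Top 25% of ranks -> Q1, next 25% -> Q2, and so on down to Q4.
    quartile = min(4, position * 4 // n + 1)
    print(f"{name}: score={scores[name]:.1f}, Q{quartile}")
```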
SJR is frequently compared with metrics such as the Journal Impact Factor from Clarivate Analytics, CiteScore from Elsevier, the Eigenfactor from the University of Washington, and SNIP developed at CWTS. Comparative evaluations appear in analyses by bibliometric research groups at Leiden University, University College London, the Max Planck Society, and the Humboldt Foundation, and are cited in policy discussions at bodies such as the European University Association, the Association of American Universities, and the Research Excellence Framework in the United Kingdom. Differences from metrics used by platforms such as Google Scholar, ResearchGate, Academia.edu, and Mendeley often center on data coverage, transparency, and algorithmic weighting, issues debated in forums hosted by the Royal Society, the American Association for the Advancement of Science, and the National Academy of Sciences.
Universities, research institutes, libraries, and funding agencies use SJR-derived rankings in collection development, in promotion and tenure deliberations, in strategic planning at institutions such as Yale University, Princeton University, the University of Toronto, the Australian National University, the University of Tokyo, Peking University, and Tsinghua University, and in research evaluation frameworks in countries including Germany, France, Italy, India, Brazil, and South Africa. Publishers and societies such as Elsevier, Springer, IEEE, the American Institute of Physics, the Chemical Abstracts Service, and the American Physical Society monitor SJR trends for editorial strategy and marketing. Policy-makers and accrediting bodies reference SJR alongside bibliometric tools from organizations such as the World Bank, the European Commission, and national ministries of science.
Critiques of SJR echo broader debates about journal-level metrics: potential misuse in research assessment, sensitivity to database coverage (with disparities relative to the Web of Science, PubMed, and regional indexes such as SciELO and LILACS), and the opacity of algorithmic weighting discussed by scholars at Leiden University, the University of Ottawa, and the University of Montreal. Further limitations noted by critics, including signatories of the San Francisco Declaration on Research Assessment (DORA), the European University Association, and national research councils, concern disciplinary bias in fields unevenly covered by Scopus, the influence of editorial policies at journals like Cell, Nature, and The Lancet, and the risk of gaming through practices scrutinized in investigations by Retraction Watch and COPE. Debates continue in venues such as the International Council for Science, UNESCO, and academic conferences hosted by the Association for Information Science and Technology and the International Society for Scientometrics and Informetrics.