| Impact Factor | |
|---|---|
| Name | Impact Factor |
| Publisher | Clarivate Analytics |
| Introduced | 1960s |
| Field | Bibliometrics |
| Purpose | Journal citation metric |
# Impact Factor
The Impact Factor is a widely used journal citation metric that summarizes the average citation rate of recent articles in a journal. It is produced annually and informs decisions by librarians, editors, funders, and scholars at institutions such as Harvard University, the University of Oxford, the National Institutes of Health, the Max Planck Society, and the Wellcome Trust. Major publishers and indexing services, including Elsevier, Springer Nature, Wiley-Blackwell, PubMed, and Web of Science, routinely reference the metric.
The metric is a ratio: citations recorded in a given year to items a journal published in the preceding two years (the numerator), divided by the number of "citable items" the journal published in those two years (the denominator), as tabulated by Clarivate Analytics in the Web of Science database. The denominator counts original research and review articles as classified by the indexer, so the result depends on indexing decisions by services such as Clarivate and Elsevier (through Scopus), and on classification practices shaped by editorial policies at publishers including Nature Publishing Group and the American Association for the Advancement of Science (publisher of Science).
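The two-year calculation described above can be sketched directly. The figures below are hypothetical, not taken from any real journal:

```python
def impact_factor(citations_in_year, citable_items):
    """Two-year impact factor for year Y.

    citations_in_year: citations received in year Y to items the
        journal published in years Y-1 and Y-2 (the numerator).
    citable_items: original research and review articles the journal
        published in years Y-1 and Y-2 (the denominator).
    """
    return citations_in_year / citable_items

# Hypothetical example: 2,450 citations in 2023 to articles published
# in 2021-2022, and 980 citable items published in those two years.
print(round(impact_factor(2450, 980), 1))  # → 2.5
```

Note that the denominator excludes items the indexer classifies as non-citable (editorials, letters, news), which is one reason classification decisions materially affect the result.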
The metric traces to bibliometric work of the 1950s and 1960s by Eugene Garfield and colleagues at the Philadelphia-based Institute for Scientific Information, the organization whose citation products eventually passed to Thomson Reuters and then to Clarivate Analytics. Early adopters included libraries at the University of Chicago and committees within national academies such as the National Academy of Sciences. The metric rose in prominence alongside the development of citation indexing, the growth of citation databases, and the expansion of global publishing by houses such as Cambridge University Press, Oxford University Press, and Springer. Policy debates at forums such as the Royal Society, and reports commissioned by agencies such as the European Commission, reinforced its institutional entrenchment.
Universities such as Stanford University, the Massachusetts Institute of Technology, and University College London often consider journal metrics in recruitment, promotion, and tenure decisions, alongside grant evaluations by funders such as the European Research Council and the National Science Foundation. Librarians at institutions including Yale University and the University of California consult Impact Factor data when prioritizing subscriptions from publishers such as Taylor & Francis and SAGE Publications. Editorial boards of titles such as The Lancet, Cell, and the Journal of the American Medical Association monitor the metric to attract submissions and shape editorial strategy, and indexing services such as PubMed Central and repositories such as the California Digital Library reflect publishing trends influenced by citation measures.
Scholars at universities including the University of Cambridge, the University of Toronto, and Peking University have highlighted flaws such as gaming through editorial manipulation, citation stacking among journals, and the distortion of research priorities. Critics cite cases involving publishers such as IEEE and Elsevier in which editorial practices prompted scrutiny by professional bodies such as the Committee on Publication Ethics, as well as regulatory inquiries in jurisdictions including European Union member states. Methodological limitations include sensitivity to outliers, field-dependent citation behavior across disciplines, and the disproportionate influence of review articles and editorial policies at journals such as Annual Review of Biochemistry.
Alternatives and complements come from platforms such as Scopus (Elsevier) and Google Scholar, and from bibliometric frameworks developed at research centers such as the Karolinska Institutet and Leiden University. They include the h-index proposed by Jorge Hirsch, article-level indicators from services such as Altmetric, citation-normalization protocols promoted by groups such as the CWTS Leiden Ranking, and usage statistics from repositories such as arXiv. Funders such as the Wellcome Trust, and initiatives such as the San Francisco Declaration on Research Assessment (DORA), endorse diversified assessment strategies that combine qualitative peer review with quantitative indicators from multiple sources.
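Of the alternatives above, the h-index has the simplest definition: the largest h such that an author has h papers each cited at least h times. A minimal sketch, using invented citation counts:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each
    (the definition proposed by Jorge Hirsch, 2005)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4, and 3 times:
# four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Unlike the Impact Factor, the h-index is an author-level indicator with no citation window, which is part of why assessment frameworks treat the two as complements rather than substitutes.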
The metric has shaped behavior at scholarly outlets and institutions, from editorial policies at titles such as Nature to promotion committees at Imperial College London and funding decisions by bodies such as the Japan Society for the Promotion of Science. Consequences include a premium on publishing in high-metric journals, strategic submission timing, and the proliferation of special issues and commissioned reviews at publishers such as MDPI and Frontiers. Responses include reforms advocated by organizations such as the Royal Society of Chemistry, initiatives at universities including the University of Edinburgh to adopt broader evaluation criteria, encouragement of open access via platforms such as PubMed Central and the DOAJ, and research-integrity guidelines from bodies such as the Committee on Publication Ethics.