| i10-index | |
|---|---|
| Name | i10-index |
| Introduced | 2000s |
| Developer | Google Scholar |
| Purpose | Bibliometric indicator of author impact |
| Type | Author-level citation metric |
| Calculation | Count of publications with ≥10 citations |
| Related | h-index, g-index, citation count, Altmetric, Eigenfactor |
# i10-index
The i10-index is a bibliometric indicator that counts the number of an author's publications that have received at least ten citations. It is used alongside measures such as the h-index, total citation counts, and article-level indicators to summarize scholarly influence across careers at institutions like Harvard University, Stanford University, the Massachusetts Institute of Technology, and the University of Cambridge. Prominent researchers at the NIH, the Max Planck Society, the CNRS, the Wellcome Trust, and corporations such as Google and Microsoft Research are often profiled using the i10-index within author profiles on platforms including Google Scholar, ORCID, and institutional repositories.
The i10-index is defined as the integer count of a researcher's publications that have accrued ten or more citations in the indexing source. Calculation requires an indexed corpus, automated citation parsing, and a citation-threshold filter; platforms performing this include Google Scholar and third-party aggregators used by groups like Elsevier and Clarivate Analytics. For a given author, each publication's citation count is compared against the threshold, and the resulting value complements metrics such as the h-index and the aggregate citation totals reported by services like Scopus and Web of Science. Because the metric is threshold-based, it is insensitive to whether a qualifying paper has 10, 100, or 10,000 citations, in contrast to measures employed in evaluations by National Institutes of Health panels, European Research Council committees, or internal review boards at universities.
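The definition above reduces to a single threshold count over a list of per-paper citation totals. A minimal sketch in Python (the citation counts are hypothetical, invented for illustration):

```python
def i10_index(citations):
    """Count the publications that have received at least ten citations."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical per-paper citation counts for one author
papers = [250, 42, 18, 10, 9, 3, 0]
print(i10_index(papers))  # -> 4 (four papers meet the 10-citation cutoff)
```

Note that the paper with 250 citations and the paper with exactly 10 contribute equally, which is precisely the threshold insensitivity described in the paragraph above.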
The threshold concept underpinning this metric emerged in the bibliometrics literature of the late 20th and early 21st centuries, amid debates at venues such as International Committee of Medical Journal Editors meetings and conferences hosted by the Association for Information Science and Technology. The specific 10-citation threshold was popularized when Google Scholar introduced author profiles in the late 2000s, providing an easily computed, consumer-friendly statistic displayed alongside citation counts and other profile elements. The metric's appearance affected visibility practices at publishers such as Springer Nature, Wiley-Blackwell, and Taylor & Francis and influenced academic networking on platforms like ResearchGate and Academia.edu.
Institutions and individuals use the metric for quick assessment of publication portfolios in hiring, promotion, grant review, and departmental reporting at places like the University of Oxford, the University of California, Berkeley, Columbia University, and Johns Hopkins University. Funding bodies, including National Science Foundation panels and philanthropic funders such as the Gates Foundation, sometimes consider simple indicators in preliminary triage. The i10-index features in scholar profiles used by librarians at the British Library and research managers at European Commission directorates. It is also applied in benchmarking studies comparing departments at Princeton University and Yale University, and in tracking career trajectories for laureates of awards such as the Nobel Prize in Physics, the Turing Award, and the Fields Medal.
Compared with the h-index, which is the largest h such that h publications each have at least h citations, the i10-index uses a single absolute citation cutoff, making it easier to compute but less sensitive to the tails of the citation distribution than metrics such as the g-index or m-index. Aggregate metrics from Scopus and Web of Science typically report total citations and the h-index, while altmetric services tied to Altmetric.com and PlumX emphasize attention signals; the i10-index neither captures social-media attention nor normalizes for field differences as the Field-Weighted Citation Impact does. Measures like the Eigenfactor and the Journal Impact Factor apply at the journal level and are not directly comparable, whereas author-level metrics such as the h-index, g-index, and total citation counts are commonly presented alongside the i10-index in benchmarking reports used by research offices.
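The difference between the two definitions can be made concrete with a short Python sketch comparing the h-index and the i10-index on two hypothetical citation records (both records are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h publications each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
    return h

def i10_index(citations):
    """Count of publications with at least ten citations."""
    return sum(1 for c in citations if c >= 10)

author_a = [10, 10, 10]  # three papers, each exactly at the threshold
author_b = [9] * 20      # twenty papers, each just below it

print(h_index(author_a), i10_index(author_a))  # -> 3 3
print(h_index(author_b), i10_index(author_b))  # -> 9 0
```

Author B has more papers, more total citations, and a higher h-index, yet an i10-index of zero, showing how the absolute cutoff behaves at the margin while the h-index adapts to the shape of the distribution.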
Critiques highlight the arbitrary choice of the threshold "10", which may advantage researchers in high-citation fields such as those represented at the Broad Institute or Cold Spring Harbor Laboratory and disadvantage scholars publishing in specialized venues associated with the American Antiquarian Society or regional publishers. The index is sensitive to the coverage and deduplication practices of the indexing platform; discrepancies between Google Scholar, Scopus, and Web of Science can produce divergent i10-index values for the same author. It also fails to account for author position on multi-author works, a concern in evaluations at the University of Pennsylvania and the University of Chicago, and it can be gamed by self-citation or coordinated citation practices criticized in reports by the Committee on Publication Ethics and in panels convened by the Royal Society. Finally, methodological commentators at institutions such as CERN and the OECD caution against reliance on any single-number summary.
Alternatives and refinements include fractional, authorship-adjusted counts used by research assessment exercises at institutions like Imperial College London and national systems such as the UK Research Excellence Framework; normalized measures such as the Field-Weighted Citation Impact and the percentile-based indicators employed by Clarivate Analytics; and members of the h-index family (g-index, e-index) along with time-aware variants such as the m-index. Altmetric indicators from Altmetric.com, article usage metrics from Figshare and Zenodo, and network-based measures such as PageRank-inspired author-influence algorithms applied within Microsoft Academic and institutional CRIS systems offer complementary perspectives for evaluators at universities, funders, and scholarly societies.