| Eigenfactor Project | |
|---|---|
| Name | Eigenfactor Project |
| Established | 2007 |
| Type | bibliometric ranking |
| Location | United States |
| Founders | Carl Bergstrom, Jevin D. West |
| Field | bibliometrics |
The Eigenfactor Project is a bibliometric initiative that produces influence-based metrics for scholarly journals and institutions. It provides rankings and data intended to complement traditional citation counts, drawing attention from publishers, libraries, and research funders. The Project is associated with academic groups and initiatives that study citation networks, scholarly communication, and research evaluation.
The Project publishes measures including the Eigenfactor Score and the Article Influence Score, situating journals within citation networks derived from databases such as Web of Science and Scopus, as well as indexing efforts used by institutions such as the National Institutes of Health, Harvard University, and the University of California. Its outputs have been discussed alongside indicators from Journal Citation Reports, SNIP, and the SCImago Journal Rank (SJR), and alongside bibliometric tools promoted by Elsevier, Clarivate, and Google Scholar. Researchers from organizations such as the University of Washington, Stanford University, and the University of California, San Diego, and think tanks such as the RAND Corporation have engaged with the Project's analyses.
The methodology adapts eigenvector centrality and network-analysis techniques from fields exemplified by work at Princeton University, the Massachusetts Institute of Technology, and the Santa Fe Institute. It constructs a directed citation network in which nodes represent journals indexed by sources such as PubMed, MEDLINE, and CrossRef, and edges represent citation flows, much like the model underlying PageRank, developed at Stanford University. The Eigenfactor Score is computed by iterating a stochastic matrix to its steady state, a technique from Markov chain theory applied in studies at Bell Labs and implemented with software libraries common in computational research at Los Alamos National Laboratory and the National Center for Supercomputing Applications. To control for differences in discipline size, the Article Influence Score normalizes impact per article, echoing normalization approaches debated at Organisation for Economic Co-operation and Development meetings and in reports by the National Science Foundation.
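The iteration described above can be sketched as a small power-iteration example. This is an illustrative toy under stated assumptions, not the Project's actual implementation: the journals, citation counts, article counts, and the PageRank-style damping value of 0.85 are all invented for demonstration.

```python
import numpy as np

# Toy data (invented for illustration): Z[i, j] counts citations from
# journal j to journal i; the diagonal is zero because self-citations
# are excluded in the Eigenfactor method.
Z = np.array([
    [0.0, 4.0, 1.0],
    [3.0, 0.0, 2.0],
    [1.0, 2.0, 0.0],
])

# Article counts per journal, turned into the normalized "article vector" a.
articles = np.array([60.0, 30.0, 10.0])
a = articles / articles.sum()

# Column-normalize Z into a stochastic matrix H (each column sums to 1).
# This toy has no "dangling" journals with zero outgoing citations.
H = Z / Z.sum(axis=0)

# Power iteration with teleportation toward the article vector,
# as in PageRank-style methods.
alpha = 0.85
pi = np.full(3, 1.0 / 3.0)              # start from the uniform vector
for _ in range(200):
    pi = alpha * (H @ pi) + (1.0 - alpha) * a

# Eigenfactor-like score: each journal's share of citation flow, times 100.
EF = 100.0 * (H @ pi)

# Article-Influence-like score: influence per article. Because the shares
# sum to 1 before scaling, the article-weighted mean of AI is 1 by design.
AI = (EF / 100.0) / a

print("EF:", EF.round(2))
print("AI:", AI.round(2))
```

The per-article normalization in the last step is what lets small, high-impact journals score well on AI even when their total citation flow (EF) is modest.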
Librarians at institutions such as Yale University, Columbia University, and the University of Cambridge have used the Project's metrics for collection development and subscription negotiations, alongside input from publishers such as Springer Nature and Wiley-Blackwell. Research administrators at the Wellcome Trust, the European Research Council, and the National Institutes of Health have considered the scores in assessment frameworks, often in conjunction with bibliometric platforms such as Altmetric and Dimensions (Digital Science). The metrics inform analyses in meta-research published in journals such as Nature, Science, PLoS Biology, and the Proceedings of the National Academy of Sciences. The Project's data have been integrated into tools used by consortia including the Big Ten Academic Alliance and commercial entities including Clarivate Analytics for comparative evaluation.
Scholars from the University of Oxford, the University of Cambridge, and University College London have critiqued the dependence on citation indexes such as Web of Science and Scopus for their bias toward English-language and Western publications, echoing concerns raised by panels at UNESCO and in reports by the Committee on Publication Ethics. Methodological limitations mirror criticisms of the Journal Impact Factor, including sensitivity to citation practices in fields represented by journals from the American Chemical Society, the Institute of Electrical and Electronics Engineers, and the Association for Computing Machinery. Critics from groups such as DORA (the San Francisco Declaration on Research Assessment) and scholars affiliated with Leiden University highlight potential misuses in hiring and funding decisions, and warn about gaming via editorial policies seen in cases involving publishers such as Elsevier and Springer Nature. Technical limitations include coverage gaps in regional databases such as SciELO and Redalyc, which affect the representation of scholarship from Latin America and Africa.
The Project emerged in the mid-2000s from collaborations among researchers at the University of Washington, influenced by network science at the University of California, San Diego, and the University of Michigan. Early methodological inspirations trace to graph-theoretic work prominent at Princeton University and algorithmic developments at AT&T Bell Laboratories and Google. Over time the Project responded to initiatives from funding bodies including the National Science Foundation and to policy discussions within European Commission directorates, and its outputs have been cited in policy analyses by the Organisation for Economic Co-operation and Development and in reviews in the Nature Index. Subsequent enhancements involved partnerships with academic libraries at Cornell University and data providers such as Clarivate Analytics and CrossRef to broaden coverage and transparency.