| Science Citation Index | |
|---|---|
| Title | Science Citation Index |
| Producer | Institute for Scientific Information |
| Current owner | Clarivate |
| Country | United States |
| History | Established 1960; integrated into Web of Science platform |
| Discipline | Multidisciplinary science |
Science Citation Index
The Science Citation Index (SCI) was a citation index created to track citation relationships among the scientific literature. Conceived by Eugene Garfield and produced by the Institute for Scientific Information (ISI), it later became a core component of the Web of Science platform maintained by Clarivate. It influenced bibliometrics, research evaluation, and collection-development practices at institutions such as Harvard University, the University of Cambridge, and the Massachusetts Institute of Technology, and at funding agencies such as the National Institutes of Health and the European Research Council. The index intersects with major bibliographic initiatives including PubMed, Scopus, Google Scholar, and cataloging projects at the Library of Congress.
The initiative began under the direction of Eugene Garfield at the Institute for Scientific Information in the late 1950s, and the index was first published in print in 1960. Early adopters among libraries and research organizations, including the New York Public Library, the British Library, Yale University, and the University of California, used the print index alongside citation analyses performed by scholars at the University of Chicago and Columbia University. ISI was acquired by the Thomson Corporation, which later merged with Reuters to form Thomson Reuters; after the sale of Thomson Reuters' intellectual property and science business in 2016, the index passed to Clarivate Analytics. Key developments paralleled technological advances exemplified by projects at the RAND Corporation and initiatives such as the digitization of MEDLINE and the archives preserved by the National Library of Medicine.
The index covered core science journals from publishers such as Elsevier, Springer Nature, Wiley-Blackwell, and Oxford University Press, and extended to conference proceedings, patents referenced in the literature, and selected monographs cited by articles. Users cross-referenced works by authors affiliated with institutions such as Stanford University, the California Institute of Technology, Imperial College London, and ETH Zurich, and research outputs funded by bodies such as the National Science Foundation and the Wellcome Trust. The coverage list influenced national research assessments such as the Research Excellence Framework and informed citation metrics that appear in analyses by the Organisation for Economic Co-operation and Development (OECD). Coverage criteria intersected with the editorial policies of journals including Nature, Science, and Cell, and of specialty periodicals from societies such as the American Chemical Society and the Institute of Electrical and Electronics Engineers.
Indexing relied on bibliographic metadata standards employed by libraries, including the cataloging practices of the Library of Congress and the serials control maintained through the International Standard Serial Number (ISSN) system. Citation strings were parsed to link cited references back to their source records, enabling the construction of citation networks used in studies by scholars at the University of Oxford, Princeton University, Johns Hopkins University, and the University of Toronto. Disambiguation algorithms evolved alongside developments at IBM Research, Microsoft Research, and infrastructure projects such as Crossref's DOI system. Metric derivations produced measures such as the journal impact factor, which informed evaluations by Times Higher Education and indexing benchmarks referenced in commentary in Science.
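The journal impact factor derivation mentioned above can be illustrated with a toy sketch: citations in a given year to a journal's items from the two prior years, divided by the citable items it published in those years. This is a simplified model for illustration only, not ISI's or Clarivate's actual implementation; the function name and record shapes are invented for this example.

```python
def impact_factor(journal, year, articles, citations):
    """Simplified two-year impact factor (illustrative only).

    articles:  list of (journal, pub_year) tuples, one per published item.
    citations: list of (citing_year, cited_journal, cited_pub_year) tuples.
    """
    window = (year - 2, year - 1)
    # Citable items the journal published in the two preceding years.
    citable = sum(1 for j, y in articles if j == journal and y in window)
    # Citations made in `year` to those items.
    cites = sum(1 for cy, cj, cpy in citations
                if cy == year and cj == journal and cpy in window)
    return cites / citable if citable else 0.0
```

For example, a journal with two citable items from 2020-2021 that received three qualifying citations in 2022 would score 1.5. Real implementations must also handle citation-string disambiguation and decide which item types count as citable, which is where most of the methodological debate lies.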
The index reshaped how institutions such as Harvard University, the University of Michigan, and Peking University conducted literature reviews, how tenure committees at the University of Pennsylvania and Cornell University assessed candidates, and how funding panels at the European Commission and the Wellcome Trust prioritized grants. It underpinned bibliometric research by scholars at Indiana University, Leiden University, and Université Paris-Saclay and informed policy reports by organizations such as the Organisation for Economic Co-operation and Development and UNESCO. Libraries including the British Library and consortia such as Research Libraries UK used the index for collection management and subscription negotiations with publishers such as Taylor & Francis and SAGE Publications.
Initially distributed in print and microform to institutions including Columbia University and the University of Chicago, the index migrated to digital platforms integrated into commercial products by Thomson Reuters and later Clarivate. The digital offerings interoperated with reference managers such as EndNote, Zotero, and RefWorks and were compared with competing services such as Elsevier's Scopus and search tools such as Google Scholar. Access models involved academic consortia (for example, the Big Ten Academic Alliance), national licenses coordinated by agencies such as Jisc, and subscriptions used by corporate research centers such as Bell Labs and IBM Research.
Critiques by scholars at institutions such as the Université de Montréal, Leiden University, and the University of Amsterdam addressed the index's coverage bias toward journals from United States and United Kingdom publishers and its underrepresentation of regional journals from Latin America, Africa, and parts of Asia. Methodological concerns raised by researchers at the University of Copenhagen and the Max Planck Society included a reliance on journal-based metrics that affected hiring practices at universities such as the University of Sydney and the University of Cape Town. Legal and commercial critiques, invoked by European Commission competition authorities and academic groups at the University of California, highlighted the proprietary access model and its impact on open-access initiatives championed by organizations such as the Public Library of Science and SPARC.
Category:Bibliographic databases