| Programming Historian | |
|---|---|
| Name | Programming Historian |
| Type | Open-access digital publication |
| Founded | 2008 |
| Founders | William J. Turkel, Alan MacEachern |
| Country | United Kingdom |
| Publisher | ProgHist Ltd |
| Language | English, Spanish, Portuguese, French |
Programming Historian is an open-access peer-reviewed project that publishes practical, step-by-step tutorials for digital research methods aimed at humanities and social science practitioners. The project connects historical scholarship with computational tools and methodological training and has collaborated with a range of universities, cultural heritage institutions, and learned societies. Programming Historian emphasizes reproducible workflows, community review, and multilingual access, situating its work at the intersection of digital scholarship, archival practice, and open-source software communities.
The project was launched in 2008 by William J. Turkel and Alan MacEachern at the University of Western Ontario, and it later drew in practitioners associated with The National Archives (United Kingdom) and digital humanists connected to King's College London and University College London. Over time it engaged contributors active in projects such as the Text Encoding Initiative, Europeana, and the Digital Humanities Observatory, while intersecting with platforms like GitHub and initiatives including the Open Knowledge Foundation. Major milestones include expansion into Spanish and Portuguese editions influenced by collaborators from the Universidad de Salamanca, the Universidade de São Paulo, and the Universidad Nacional Autónoma de México, as well as recognition from organizations such as Jisc and incorporation into teaching at institutions like Stanford University and the University of California, Berkeley. The project's emergence paralleled developments in digital humanities conferences, including presentations at DH2012 and ADHO gatherings, and dialogue with funders such as the Arts and Humanities Research Council.
The editorial structure combines an international editorial board, regional editors, and volunteer reviewers drawn from academic departments at the University of Oxford, King's College London, and the Universidad Complutense de Madrid, and from research libraries such as the British Library and the Library of Congress. Governance has followed models similar to those used by Creative Commons, the Wikimedia Foundation, and community-led projects like OpenStreetMap and the Software Sustainability Institute. Decision-making balances editorial policies influenced by standards from the Committee on Publication Ethics with infrastructure hosted on platforms including GitHub and cloud services used by institutions like the University of Toronto and Yale University. Funding sources have historically included small grants from bodies such as the Andrew W. Mellon Foundation and partnerships with consortia like CLIR and DARIAH.
The project publishes tutorials authored by practitioners associated with museums, archives, and universities, including the Smithsonian Institution, the V&A, the Bibliothèque nationale de France, and departments at Harvard University, Princeton University, and the University of Chicago. Topics span text processing with tools from the Python ecosystem, data visualization using R, geospatial work with QGIS and ArcGIS, optical character recognition with Tesseract, and web scraping and work with digital collections that draw on standards like IIIF. Pedagogical approaches reflect methods promoted by educators at Carnegie Mellon University, the Massachusetts Institute of Technology, and the University of Edinburgh, emphasizing stepwise reproducibility, sample datasets drawn from collections at the National Library of Scotland and the Biblioteca Nacional de España, and assessment techniques akin to those in MOOCs from edX and Coursera. The project's multilingual editorial tracks allow teams in Argentina, Portugal, Spain, and Mexico to adapt lessons to local archival practices and national heritage policies.
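Text-processing lessons of the kind described above typically begin with a small, reproducible task. A minimal sketch in Python, illustrating the style of a word-frequency exercise (the function and sample passage are hypothetical, not taken from any specific lesson):

```python
import re
from collections import Counter

def word_frequencies(text, top_n=5):
    """Tokenize a passage into lowercase words and return the most common ones."""
    # Illustrative tokenizer: keep runs of letters and apostrophes only.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Hypothetical sample passage, standing in for a lesson's sample dataset.
sample = "The archive holds letters; the letters describe the archive."
print(word_frequencies(sample, top_n=3))
# → [('the', 3), ('archive', 2), ('letters', 2)]
```

The stepwise structure — tokenize, count, inspect — mirrors the reproducible-workflow emphasis the project describes, with each intermediate result easy to verify against a small sample dataset.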
Authors submit tutorials that undergo editorial checks and community peer review involving reviewers from institutions such as the Oxford Internet Institute, the London School of Economics, New York University, and the Max Planck Institute for the History of Science. The process integrates version-control practices modeled on workflows used by Linux kernel contributors and documentation standards similar to those of the Mozilla Foundation. Edited lessons are published with metadata practices influenced by Dublin Core and linked-data experiments paralleling work at Europeana Research. The project uses Creative Commons licensing policies compatible with guidance from SPARC and aligns its editorial ethics with statements from COPE; translations are coordinated by regional editors in conjunction with bilingual teams at universities such as the Universidade Federal do Rio de Janeiro and the Universidad de los Andes.
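The Dublin Core practices mentioned above can be sketched with a short example. The element names and namespace URI below come from the Dublin Core Metadata Element Set standard; the record contents and the helper function are hypothetical, not drawn from an actual published lesson:

```python
import xml.etree.ElementTree as ET

# Standard Dublin Core element-set namespace.
DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dublin_core_record(fields):
    """Build a minimal Dublin Core XML record from a dict of element -> value."""
    root = ET.Element("metadata")
    for element, value in fields.items():
        child = ET.SubElement(root, f"{{{DC_NS}}}{element}")
        child.text = value
    return ET.tostring(root, encoding="unicode")

# Hypothetical lesson metadata, not a real record from the project.
record = dublin_core_record({
    "title": "Working with Sample Archival Data",
    "creator": "A. Historian",
    "language": "en",
    "rights": "CC BY 4.0",
})
print(record)
```

Describing a lesson with standard `dc:` elements like `title`, `creator`, `language`, and `rights` is what makes it harvestable and linkable by aggregators, which is the point of the metadata practices the paragraph describes.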
The project has been cited in syllabi at institutions including the University of Cambridge, Columbia University, the University of Toronto, and the University of Sydney, and referenced in grant proposals to funders such as the National Endowment for the Humanities and Horizon 2020. Reviews and academic commentary have appeared in journals affiliated with the Modern Language Association, as well as in Digital Scholarship in the Humanities and Literary and Linguistic Computing. Cultural heritage professionals at The National Archives (United Kingdom), the National Library of Australia, and the Biblioteca Nacional de Chile have recommended lessons for upskilling staff, while library science programs at University College Dublin and Simmons University have integrated materials into curricula. The project's open model has influenced community publishing practices adopted by regional initiatives like GLAM-WIKI collaborations and informed policy discussions in forums connected to UNESCO and the Council of Europe.