| GHCN | |
|---|---|
| Name | Global Historical Climatology Network |
| Abbreviation | GHCN |
| Type | Dataset |
| Established | 1980s |
| Maintained by | National Centers for Environmental Information |
| Region served | Global |
The Global Historical Climatology Network (GHCN) is an integrated set of historical climate datasets widely used for surface temperature and precipitation analyses. It underpins research by institutions such as the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, the Met Office, the Intergovernmental Panel on Climate Change, and academic groups at Princeton University, Harvard University, and the University of Cambridge. The datasets inform assessments such as the IPCC Sixth Assessment Report, reporting under the United Nations Framework Convention on Climate Change, and numerous national reports.
GHCN is a consolidated archive that combines station-based observations from instrumental networks including the U.S. Historical Climatology Network, the European Climate Assessment & Dataset, the Global Telecommunication System, and archives from national services such as the Australian Bureau of Meteorology, Environment and Climate Change Canada, the Japan Meteorological Agency, the Deutscher Wetterdienst, and Météo-France. It supplies standardized series used by projects at the NASA Goddard Institute for Space Studies, the NOAA National Centers for Environmental Information, Berkeley Earth, the Hadley Centre, and the Climatic Research Unit. Analysts from Columbia University, the Woods Hole Oceanographic Institution, the Scripps Institution of Oceanography, and the Max Planck Institute for Meteorology use the data for trend detection, extreme-event attribution, and climate model evaluation.
Early efforts trace to station archives curated by the Royal Meteorological Society, the U.S. Weather Bureau, and long-established observatories such as Kew Observatory and the Paris Observatory. Consolidation accelerated with programs at NOAA, NASA, and the World Meteorological Organization in the late 20th century, influenced by milestones such as the Charney Report and assessments by the Intergovernmental Panel on Climate Change. Key collaborators included the National Weather Service, the European Centre for Medium-Range Weather Forecasts, the International Research Institute for Climate and Society, and university consortia. Over time, digitization projects led by the Smithsonian Institution, the Library of Congress, and national archives extended the historical record back to 19th-century observatories such as the Royal Observatory, Greenwich.
The archive includes monthly and daily series for variables sourced from instruments at stations such as the Mount Washington Observatory and the Mauna Loa Observatory. Typical variables comprise minimum temperature, maximum temperature, mean temperature, and precipitation totals, alongside station metadata fields such as elevation, latitude, longitude, and station name, linked to related products such as the Global Surface Summary of the Day. Derived fields support analyses by groups such as the NOAA Climate Prediction Center, the National Snow and Ice Data Center, and the International Arctic Research Center. The structure supports homogenization algorithms used in studies at Princeton University and the University of Oxford.
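The daily series described above are distributed as fixed-width text records. The following is a minimal sketch of parsing one such record, assuming the published GHCN-Daily layout (station ID, year, month, element code, then 31 groups of a five-character value plus three flag characters, with values in tenths of a unit and -9999 marking missing days); the sample record is hypothetical.

```python
# Minimal sketch: parse one line of a GHCN-Daily (.dly) fixed-width record,
# assuming the layout documented in the GHCN-Daily readme: station ID
# (cols 1-11), year (12-15), month (16-17), element (18-21), then 31
# repeating groups of a 5-char value plus 3 flag chars. Values are in
# tenths of a unit (e.g. tenths of deg C for TMAX); -9999 marks missing.

def parse_dly_line(line: str) -> dict:
    record = {
        "station": line[0:11],
        "year": int(line[11:15]),
        "month": int(line[15:17]),
        "element": line[17:21],
        "values": [],
    }
    for day in range(31):
        offset = 21 + day * 8  # 5-char value + 3 flag chars per day
        raw = int(line[offset:offset + 5])
        record["values"].append(None if raw == -9999 else raw / 10.0)
    return record

# Hypothetical record: TMAX for one station-month, 5.6 deg C every day.
sample = "USW00094728194901TMAX" + ("   56   " * 31)
rec = parse_dly_line(sample)
print(rec["element"], rec["values"][0])  # TMAX 5.6
```

The flag characters (measurement, quality, and source flags) are skipped here for brevity; a fuller reader would retain them, since downstream quality control depends on them.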
Observational inputs derive from national meteorological services, historical logbooks from institutions such as the Observatoire de Paris, ship logs archived by the National Oceanic and Atmospheric Administration and the National Maritime Museum, and automated networks, including automatic weather station installations at polar sites such as McMurdo Station. Data ingestion workflows mirror practices at Global Telecommunication System hubs and use cataloging standards similar to those of the World Data Center for Meteorology. Collaborative digitization efforts have involved citizen-science projects and initiatives within European Space Agency research programs.
Quality assurance applies automated and manual checks analogous to methods at the Hadley Centre and procedures used in the Berkeley Earth project. Techniques include outlier detection, duplicate checks, and break-point analysis employing statistical tests developed in studies at the University of Bern and ETH Zurich. Homogenization approaches reconcile discontinuities introduced by station moves, instrument changes, and urbanization effects studied in research from Yale University and the University of California, Berkeley. Metadata-based adjustments reference station histories maintained by agencies such as the National Centers for Environmental Information and the Japan Meteorological Agency.
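The outlier and break-point checks mentioned above can be illustrated with a simple sketch. The z-score threshold and the mean-shift scan below are illustrative choices only, not the specific statistical tests used in any GHCN release.

```python
# Illustrative sketch of two QC steps: flagging gross outliers by z-score,
# and locating a single break point in a candidate-minus-reference
# difference series by maximizing the mean shift between segments.
# Thresholds and statistics here are assumptions for illustration.
from statistics import mean, stdev

def flag_outliers(series, z=4.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sd = mean(series), stdev(series)
    return [i for i, v in enumerate(series) if sd and abs(v - mu) / sd > z]

def find_breakpoint(diff):
    """Return (index, shift) maximizing the mean shift between segments."""
    best_i, best_shift = None, 0.0
    for i in range(2, len(diff) - 2):
        shift = abs(mean(diff[:i]) - mean(diff[i:]))
        if shift > best_shift:
            best_i, best_shift = i, shift
    return best_i, best_shift

# Synthetic difference series with a ~1.0-unit step at index 6,
# mimicking a discontinuity from a station move.
diff = [0.1, -0.2, 0.0, 0.2, -0.1, 0.1, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0]
idx, shift = find_breakpoint(diff)
print(idx, round(shift, 2))
```

In practice, homogenization pairs each candidate station with nearby reference series so that a shared climate signal cancels and only station-specific discontinuities remain, which is why the scan operates on a difference series rather than the raw record.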
Principal distributions include monthly compilations used by the NOAA National Centers for Environmental Information and daily datasets integrated into climate services at the Met Office. Distinct releases, historically enumerated as versions including v3 and v4, parallel product lines from Berkeley Earth, the NASA GISS Surface Temperature Analysis, and HadCRUT. Subsets labeled for monthly (GHCN-M) and daily (GHCN-D) usage are combined or compared in multi-dataset assessments by Intergovernmental Panel on Climate Change authors, the World Climate Research Programme, and climate model intercomparison projects such as CMIP6.
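The relationship between the daily and monthly subsets can be sketched as an aggregation with a completeness rule; the five-missing-day threshold below is an assumption for illustration, not the rule used to construct any GHCN monthly product.

```python
# Illustrative sketch: derive a monthly mean from daily values, withholding
# the monthly value when too many days are missing. The max_missing
# threshold is an assumed parameter, not a documented GHCN rule.
def monthly_mean(daily, max_missing=5):
    present = [v for v in daily if v is not None]
    if len(daily) - len(present) > max_missing:
        return None  # too many missing days to report a monthly value
    return sum(present) / len(present)

# A 31-day month with one missing day: the mean is computed from the rest.
days = [5.0] * 28 + [None, 6.0, 6.0]
print(monthly_mean(days))
```

A completeness rule of this kind matters for comparisons across products, since two datasets aggregating the same daily record can diverge simply by tolerating different amounts of missing data.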
Researchers employ the archive for global and regional trend attribution in publications in journals such as Nature, Science, Journal of Climate, and Geophysical Research Letters. Policy makers reference analyses in reports to the United Nations Environment Programme and national assessments by agencies including the Environmental Protection Agency and the Department of Energy. Applied uses include calibration of reanalyses from ECMWF, verification of climate model output at institutions such as Lawrence Livermore National Laboratory, drought indices used by the Food and Agriculture Organization, and detection of extremes underpinning insurance risk models at companies such as Munich Re.
Category:Climate data sets