LLMpedia: The first transparent, open encyclopedia generated by LLMs

GlobeLand30

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: Raw 77 → Dedup 0 → NER 0 → Enqueued 0 (77 entities extracted; none survived deduplication, NER filtering, or enqueueing)
GlobeLand30
Name: GlobeLand30
Operator: National Geomatics Center of China
Launched: 2014 (dataset release)
Country: China
Type: Land cover dataset
Spatial resolution: 30 m
Temporal coverage: 2000, 2010
Status: Active

GlobeLand30 is a global 30-meter resolution land cover dataset produced to map terrestrial surface types for the years 2000 and 2010. Developed and released by the National Geomatics Center of China, the project integrates multi-source satellite imagery, cartographic data, and automated classification to produce a consistent global land cover product. The dataset has been used in research involving NASA, the European Space Agency, the United Nations Environment Programme, and the Food and Agriculture Organization, as well as national agencies in China, the United States, Brazil, India, and Australia.

Overview

GlobeLand30 provides wall-to-wall global coverage at 30 m resolution with ten thematic classes: cultivated land, forest, grassland, shrubland, wetland, water bodies, tundra, artificial surfaces, bare land, and permanent snow and ice. It was released following prototype products from continental and national projects involving institutions such as NASA, Beijing Normal University, the Chinese Academy of Sciences, the European Commission, and the World Wildlife Fund. The product serves as a high-resolution alternative to coarser global maps derived from MODIS and AVHRR time series, and complements regional efforts such as Landsat-based inventories, Copernicus land monitoring, and national land use databases held by agencies like USGS and INPE.
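The published legend assigns a numeric code to each class. The mapping below is a sketch based on the commonly cited GlobeLand30 code list; the exact codes should be verified against the official product documentation:

```python
# GlobeLand30 class codes as commonly cited; verify against official docs.
GLOBELAND30_LEGEND = {
    10: "Cultivated land",
    20: "Forest",
    30: "Grassland",
    40: "Shrubland",
    50: "Wetland",
    60: "Water bodies",
    70: "Tundra",
    80: "Artificial surfaces",
    90: "Bare land",
    100: "Permanent snow and ice",
}

def class_name(code: int) -> str:
    """Return the class name for a raster code, or 'Unknown' if unlisted."""
    return GLOBELAND30_LEGEND.get(code, "Unknown")
```

A lookup table like this is useful when translating raw raster values into human-readable legends or cross-walking to other taxonomies.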

Data and Methodology

The core inputs include archival optical satellite imagery, primarily from the Landsat series (Landsat 5, Landsat 7, and Landsat 8), global digital elevation models such as SRTM and ASTER GDEM, and ancillary cartographic sources from institutions like the National Geomatics Center of China and the Global Land Cover Facility. Processing pipelines applied radiometric normalization, cloud masking, and multi-temporal compositing, techniques also used by Google Earth Engine partners and research teams at Zhejiang University, Peking University, and Tsinghua University. Classification followed a pixel-object-knowledge (POK) approach combining per-pixel classifiers such as decision trees, object-based refinement, and knowledge-based verification, drawing on algorithms developed in studies from Stanford University, the University of Oxford, and the Chinese Academy of Sciences.
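As a minimal illustration of the kind of rule-based spectral classification described above, the sketch below classifies a single pixel from three band reflectances using NDVI and NDWI thresholds. The thresholds and class set are hypothetical, not the project's actual decision rules:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndwi(green: float, nir: float) -> float:
    """Normalized Difference Water Index (McFeeters formulation)."""
    return (green - nir) / (green + nir)

def classify_pixel(green: float, red: float, nir: float) -> str:
    """Toy threshold rules; the operational GlobeLand30 rules are far richer."""
    if ndwi(green, nir) > 0.0:          # water reflects green, absorbs NIR
        return "water"
    v = ndvi(nir, red)
    if v > 0.6:                          # dense vegetation
        return "forest"
    if v > 0.3:                          # moderate vegetation
        return "grassland"
    return "bare land"
```

In the real workflow, per-pixel decisions like these are then refined by object-based segmentation and checked against prior knowledge, which is what the POK acronym summarizes.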

Training and reference data were drawn from field campaigns and interpreted samples linked to datasets produced by FAO and regional surveys coordinated with institutions like INPE (Brazil), NRCan (Canada), and CSIRO (Australia). Preprocessing incorporated geometric correction referencing ground control points from national mapping agencies and geodetic references such as WGS 84.
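Geometric correction ultimately anchors each pixel to coordinates in a reference frame such as WGS 84. The standard six-parameter affine geotransform (the convention used by GDAL-style rasters) maps pixel indices to map coordinates; the tile origin and pixel size below are hypothetical values for a projected, north-up 30 m raster:

```python
def pixel_to_geo(gt, col, row):
    """Apply a GDAL-style affine geotransform.

    gt = (origin_x, pixel_width, row_rotation,
          origin_y, col_rotation, pixel_height)
    pixel_height is negative for north-up rasters.
    """
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical north-up tile: 30 m pixels in a projected CRS.
gt = (500000.0, 30.0, 0.0, 4100000.0, 0.0, -30.0)
```

For example, `pixel_to_geo(gt, 10, 5)` moves 10 pixels east and 5 pixels south of the tile origin.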

Products and Classification Scheme

The released products include global raster maps for the years 2000 and 2010 at 30 m, thematic legends, confidence maps, and metadata records. The classification scheme uses ten primary classes aligned with international land cover taxonomies used by UNEP and the Coordination of Information on the Environment (CORINE) community, enabling interoperability with datasets like GLC2000 and regional schemes from ESA projects. Outputs are tiled to facilitate distribution and integration with platforms operated by Google, Microsoft, and academic data repositories maintained at institutions such as Peking University and the National Geomatics Center of China.
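Tiled distribution is typically indexed by geographic extent. The sketch below routes a WGS 84 point to a tile label using a hypothetical 6° × 5° grid; it illustrates the indexing idea only and is not GlobeLand30's actual tiling scheme:

```python
def tile_id(lon: float, lat: float, dlon: float = 6.0, dlat: float = 5.0) -> str:
    """Map a WGS 84 point to a row/column tile label in a hypothetical grid.

    Columns count eastward from -180 deg, rows southward from +90 deg.
    """
    col = int((lon + 180.0) // dlon)
    row = int((90.0 - lat) // dlat)
    return f"r{row:02d}c{col:02d}"
```

Deterministic tile naming lets users fetch only the tiles intersecting their study area instead of the full global raster.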

Accuracy and Validation

Accuracy assessment employed stratified random sampling and confusion matrix analysis following guidance from ISO standards and best practices published by CEOS and the Global Observation of Forest Cover Change community. Validation used independent reference samples from national land surveys, high-resolution imagery from commercial providers like DigitalGlobe/Maxar, and field observations collected in collaboration with universities including Wageningen University, the University of California, Berkeley, and the University of Tokyo. Reported overall accuracies varied regionally, with higher performance in temperate zones and reduced accuracy in heterogeneous or frequently obscured landscapes such as the persistently cloudy Amazon rainforest, the margins of the Sahara Desert, and Arctic tundra affected by seasonal snow cover.
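Overall accuracy from a confusion matrix is simply the trace (correctly classified samples) divided by the total sample count; producer's accuracy for a class is its diagonal entry over its reference-row total. A minimal sketch with made-up three-class validation counts:

```python
def overall_accuracy(matrix):
    """matrix[i][j] = samples whose reference class is i, mapped as class j."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

def producers_accuracy(matrix, i):
    """Fraction of reference class i correctly mapped (1 - omission error)."""
    return matrix[i][i] / sum(matrix[i])

# Hypothetical 3-class validation counts (e.g., forest, cropland, water).
cm = [[90, 8, 2],
      [10, 85, 5],
      [1, 4, 95]]
```

With these counts the overall accuracy is 270/300 = 0.90, in the range typically reported for temperate regions.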

Applications and Impact

GlobeLand30 has been applied in studies of carbon accounting, hydrological modeling, biodiversity assessment, and urban expansion analysis conducted by teams at IPCC-related institutions, World Bank projects, and conservation NGOs including IUCN and BirdLife International. Its high resolution supports national reporting to conventions such as the UNFCCC and CBD, and underpins regional ecosystem services assessments used by development banks like the Asian Development Bank and the Inter-American Development Bank. Research groups at Imperial College London, ETH Zurich, and Massachusetts Institute of Technology have used GlobeLand30 to calibrate land surface models and to validate outputs from regional climate models developed at centers like NCAR and Met Office.

Limitations and Future Development

Limitations include classification confusion between spectrally similar classes (e.g., cropland vs. natural grassland), temporal gaps for dynamic land cover change processes, and lower accuracy in persistently cloudy regions such as parts of Southeast Asia and the Congo Basin. The dataset's epochal snapshots (2000, 2010) constrain change-detection studies requiring annual time series. Development pathways discussed by contributors, including research groups at the Chinese Academy of Sciences, the European Space Agency, NASA, and universities across Europe, North America, and Asia, focus on updating epochs (a 2020 epoch was subsequently released), integrating higher-frequency data from Sentinel-2, assimilating radar inputs from Sentinel-1 to reduce cloud bias, and improving class semantics via machine learning advances from labs at Carnegie Mellon University and the University of Cambridge. Community-driven validation and partnerships with national agencies aim to enhance thematic detail and regional accuracies.
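With only two epochs, change detection reduces to a per-pixel comparison of co-registered class rasters. The sketch below, on toy grids using GlobeLand30-style codes, records class transitions and the changed-pixel fraction:

```python
def change_map(epoch_a, epoch_b):
    """Per-pixel class transitions between two co-registered class grids.

    Returns a grid of (from_class, to_class) tuples, or None where unchanged.
    """
    return [
        [None if a == b else (a, b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(epoch_a, epoch_b)
    ]

def changed_fraction(epoch_a, epoch_b):
    """Fraction of pixels whose class label differs between the two epochs."""
    pairs = [(a, b) for ra, rb in zip(epoch_a, epoch_b) for a, b in zip(ra, rb)]
    return sum(a != b for a, b in pairs) / len(pairs)

# Toy 2x2 grids (20 = forest, 60 = water, 80 = artificial surfaces).
y2000 = [[20, 20], [20, 60]]
y2010 = [[20, 80], [20, 60]]
```

Here one pixel converts from forest to artificial surfaces, a change fraction of 0.25. In practice such bi-temporal differencing conflates real change with classification error, which is why annual time series are preferred.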

Category:Remote sensing