Science Data Management (DESDM)

Science Data Management (DESDM) is a coordinated set of practices, systems, and policies that enable the ingestion, curation, processing, storage, and dissemination of large-scale scientific datasets. It integrates hardware, software, workflow orchestration, and organizational governance to support reproducible research and long-term preservation for projects spanning observational campaigns, experimental facilities, and computational simulations. DESDM draws on distributed-computing technologies and institutional frameworks to meet requirements for throughput, provenance tracking, and open access across collaborative consortia and national facilities.

Overview

DESDM encompasses data lifecycle management for projects such as the Large Hadron Collider, the Hubble Space Telescope, the James Webb Space Telescope, the Square Kilometre Array, the Human Genome Project, and the CERN Open Data Portal, coordinating among facilities and agencies such as the European Southern Observatory, the National Aeronautics and Space Administration, the European Space Agency, and the National Science Foundation. It is informed by standards and initiatives including the FAIR data principles, OpenAIRE, the Research Data Alliance, and the World Data System, as well as directives from bodies such as the U.S. Office of Science and Technology Policy and the European Commission. Stakeholders include observatories, laboratories, university consortia, and data centers such as Lawrence Berkeley National Laboratory, Oak Ridge National Laboratory, SLAC National Accelerator Laboratory, and the National Center for Supercomputing Applications.
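
As an illustration of how initiatives like the FAIR principles surface in day-to-day tooling, the sketch below checks a dataset's metadata record for a few machine-actionable fields. The field names and the `missing_fair_fields` helper are illustrative assumptions, not part of any formal FAIR validator.

```python
# Minimal sketch: checking a dataset metadata record for a few
# machine-actionable fields loosely inspired by the FAIR principles.
# Field names here are illustrative assumptions, not a standard schema.

REQUIRED_FIELDS = {
    "identifier",   # Findable: a persistent identifier such as a DOI
    "title",        # Findable: rich, descriptive metadata
    "access_url",   # Accessible: retrievable via a standard protocol
    "format",       # Interoperable: a declared, open data format
    "license",      # Reusable: a clear usage license
}

def missing_fair_fields(record: dict) -> set[str]:
    """Return the required fields that are absent or empty."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

record = {
    "identifier": "doi:10.0000/example.dataset",  # hypothetical DOI
    "title": "Example survey coadd catalog",
    "access_url": "https://data.example.org/coadd.fits",
    "format": "FITS",
    "license": "",  # empty, so it is flagged as missing
}
print(missing_fair_fields(record))  # -> {'license'}
```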

Architecture and Components

Architectural components of DESDM typically include instrument interfaces from projects such as the Atacama Large Millimeter Array; message buses and middleware informed by Apache Kafka and RabbitMQ; workflow engines inspired by Apache Airflow, Cromwell, and Pegasus; and storage subsystems built on technologies exemplified by Ceph, Lustre, and Amazon S3. Metadata catalogs follow schemas influenced by Dublin Core, the International Virtual Observatory Alliance, and discipline-specific ontologies maintained by bodies such as the Global Biodata Coalition and resources such as GenBank. Compute provisioning integrates high-performance computing centers like NERSC with cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.
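
To make the workflow-engine component concrete, here is a minimal sketch of the task-dependency model that engines such as Apache Airflow and Pegasus build on: tasks form a directed acyclic graph (DAG) and execute in a dependency-respecting order. It uses only the Python standard library; the task names are hypothetical, and real engines add scheduling, retries, and monitoring.

```python
# Minimal sketch of the DAG model behind scientific workflow engines:
# each task runs only after all of its upstream dependencies finish.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Mapping: task -> set of tasks it depends on (names are hypothetical).
dag = {
    "ingest_raw":       set(),
    "calibrate":        {"ingest_raw"},
    "catalog_metadata": {"ingest_raw"},
    "coadd":            {"calibrate"},
    "publish":          {"coadd", "catalog_metadata"},
}

# static_order() yields a valid execution order for the whole graph.
for task in TopologicalSorter(dag).static_order():
    print("running:", task)
```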

Data Acquisition and Processing Pipelines

Data acquisition patterns reflect instrumentation at the Keck Observatory, the Very Large Telescope, Fermilab, the LIGO Scientific Collaboration, and the NOAA National Centers for Environmental Information, with raw streams captured over protocols such as TCP/IP and MQTT. Pipelines implement calibration, reduction, and analysis stages paralleling workflows used by the Dark Energy Survey, the Sloan Digital Sky Survey, Gaia, and ALMA; orchestration tools trace their lineage to Jenkins, the Slurm Workload Manager, and domain-specific software such as IRAF and CASA. Pipeline design must also accommodate real-time alerting systems exemplified by the Zwicky Transient Facility and follow practices established by collaborations such as the Event Horizon Telescope.
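
The sketch below illustrates one representative calibration stage, the textbook CCD bias subtraction and flat-field division, science = (raw - bias) / flat. It is a toy example with synthetic arrays, not the actual reduction code of any survey pipeline.

```python
# Hedged sketch of a standard CCD calibration step of the kind survey
# pipelines perform; real pipelines add masking, weighting, and
# astrometric/photometric calibration. Arrays here are toy-sized.
import numpy as np

def calibrate_frame(raw: np.ndarray,
                    bias: np.ndarray,
                    flat: np.ndarray) -> np.ndarray:
    """Apply the textbook relation: science = (raw - bias) / flat."""
    flat_norm = flat / np.median(flat)   # normalize flat to unit median
    return (raw - bias) / flat_norm

rng = np.random.default_rng(0)
raw = 1000 + rng.normal(0, 5, (4, 4))    # hypothetical raw counts
bias = np.full((4, 4), 300.0)            # constant detector bias level
flat = rng.uniform(0.9, 1.1, (4, 4))     # pixel-to-pixel response map
print(calibrate_frame(raw, bias, flat).round(1))
```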

Storage, Access, and Distribution

Storage strategies balance tiered architectures found at institutions such as CERN and the European Grid Infrastructure with content-delivery paradigms used by NASA Earthdata. Access methods include authenticated APIs following patterns from ORCID and Crossref, bulk distribution via protocols such as GridFTP and HTTP/2, and user interfaces modeled on portals like VizieR and the NASA/IPAC Infrared Science Archive. Data citation and DOIs are implemented using frameworks from DataCite and repositories such as Zenodo, while long-term preservation aligns with guidance from the International Organization for Standardization and archives such as the British Library.
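
As a sketch of how a tiered architecture might decide data placement, the following policy routes datasets between a fast parallel file system, an object store, and tape based on access recency and frequency. The tier names and thresholds are assumptions for illustration, not any facility's published policy.

```python
# Illustrative tiered-storage placement policy: recent or frequently
# accessed data stays on fast disk; older data migrates to an object
# store and eventually to tape. Thresholds are assumed values.
from dataclasses import dataclass

@dataclass
class DatasetStats:
    days_since_access: int
    accesses_last_90d: int

def choose_tier(stats: DatasetStats) -> str:
    if stats.days_since_access <= 30 or stats.accesses_last_90d > 100:
        return "hot"    # parallel file system (e.g., Lustre)
    if stats.days_since_access <= 365:
        return "warm"   # object store (e.g., Ceph or Amazon S3)
    return "cold"       # tape archive for long-term preservation

print(choose_tier(DatasetStats(days_since_access=400,
                               accesses_last_90d=2)))  # -> cold
```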

Quality Assurance and Provenance

Quality assurance frameworks derive from practices used by the Intergovernmental Panel on Climate Change, the Human Cell Atlas, and International Virtual Observatory Alliance validation documents, employing unit tests, integration tests, and continuous verification tools including pytest and SonarQube. Provenance models often adopt standards such as W3C PROV and are tracked through tools akin to Git and Dataverse workflows to ensure reproducibility for analyses similar to those at the Broad Institute and the Wellcome Sanger Institute. Calibration records and audit trails mirror procedures used by the National Institute of Standards and Technology.
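
The following sketch shows provenance capture in the spirit of W3C PROV: each derived file records the activity that produced it, its inputs, the responsible agent, and content checksums for later verification. The JSON layout is an illustrative assumption rather than a standard PROV serialization.

```python
# Minimal provenance-capture sketch inspired by W3C PROV concepts
# (entity, activity, agent). The record layout is an assumption.
import datetime
import hashlib
import json

def sha256_of(path: str) -> str:
    """Content checksum used to detect silent data corruption."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(output: str, inputs: list[str],
                      activity: str, agent: str) -> str:
    record = {
        "entity": {"path": output, "sha256": sha256_of(output)},
        "activity": activity,   # e.g., pipeline stage name and version
        "used": [{"path": p, "sha256": sha256_of(p)} for p in inputs],
        "agent": agent,         # person or pipeline service account
        "endedAtTime": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

with open("raw.dat", "wb") as f:   # hypothetical ingested file
    f.write(b"example bytes")
print(provenance_record("raw.dat", [], "ingest", "desdm-pipeline"))
```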

Governance, Policies, and Security

Governance models typically rest on multilateral consortium agreements among entities such as the European Research Council, the National Institutes of Health, the Wellcome Trust, and national laboratories. Policy areas include data management plans promoted by the National Science Foundation and Horizon Europe, intellectual property handled through licensing mechanisms such as Creative Commons, and access control guided by standards such as OAuth, together with domain-specific regulatory compliance where applicable. Security practices draw on frameworks such as the NIST Cybersecurity Framework and incident-response playbooks used by US-CERT and ENISA.
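
As a minimal illustration of policy-driven access control, the sketch below maps roles, as might be carried in a token issued via an OAuth flow, to the data classes they may read. The role names and the embargo rule are hypothetical.

```python
# Hedged sketch of role-based access control for a data portal:
# release policy decides who may read proprietary vs. public products.
# Role names and the embargo rule are illustrative assumptions.
PUBLIC, PROPRIETARY = "public", "proprietary"

ROLE_GRANTS = {
    "anonymous":    {PUBLIC},
    "collaborator": {PUBLIC, PROPRIETARY},
    "archivist":    {PUBLIC, PROPRIETARY},
}

def may_read(roles: set[str], data_class: str) -> bool:
    """Allow access if any held role grants the dataset's class."""
    return any(data_class in ROLE_GRANTS.get(r, set()) for r in roles)

print(may_read({"anonymous"}, PROPRIETARY))     # False: still embargoed
print(may_read({"collaborator"}, PROPRIETARY))  # True
```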

Applications and Case Studies

DESDM underpins scientific outputs from programs including the Dark Energy Survey, the Sloan Digital Sky Survey, the Human Genome Project, the LIGO Scientific Collaboration, the Event Horizon Telescope, and Gaia, enabling discoveries published by institutions such as Harvard University, the Massachusetts Institute of Technology, Princeton University, Caltech, and Stanford University. Case studies demonstrate scalable reductions at facilities such as Lawrence Berkeley National Laboratory for Large Synoptic Survey Telescope precursor data, federated analyses in projects coordinated by CERN openlab, and cross-disciplinary reuse exemplified by data sharing between the National Oceanic and Atmospheric Administration and the United Nations Environment Programme.

Category:Data management