LLMpedia: The first transparent, open encyclopedia generated by LLMs

LHCONE

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: CERN OpenLab (Hop 5)
Expansion funnel: Raw 54 → Dedup 3 → NER 1 → Enqueued 0
1. Extracted: 54
2. After dedup: 3
3. After NER: 1 (rejected: 2, not a named entity: 2)
4. Enqueued: 0
LHCONE
Name: LHCONE
Formed: 2014
Jurisdiction: International
Headquarters: Geneva
Parent organization: CERN


LHCONE (the LHC Open Network Environment) is a dedicated high-performance research network fabric designed to interconnect the major scientific facilities, data centers, and research organizations that support large-scale experiments. It carries sustained bulk data transfers among institutions participating in international projects, linking particle physics laboratories, academic centers, and national research and education networks to accelerate data sharing. The fabric complements commercial Internet backbones and specialized overlays to meet the throughput, latency, and policy requirements of distributed science collaborations.

Overview

LHCONE provides a managed, high-capacity point-to-point and point-to-multipoint overlay that connects sites such as CERN, Fermilab, Brookhaven National Laboratory, and DESY, together with national research and education networks including GÉANT, ESnet, Internet2, and NORDUnet. It supports the global data distribution needs of experiments associated with facilities like the Large Hadron Collider and federates with regional grids and cloud providers, including OpenStack-based deployments, Amazon Web Services, and Google Cloud Platform, for hybrid workflows. Stakeholders include research councils, national laboratories, and computing centers such as the National Energy Research Scientific Computing Center and TRIUMF.

History and Development

The initiative emerged to address escalating data volumes from projects at CERN and collaborating institutions after upgrades to the Large Hadron Collider and associated detectors such as ATLAS and CMS. Early development involved partnerships among the European Organization for Nuclear Research, the U.S. Department of Energy, and national research networks including DFN, RENATER, and SURFnet. Pilot phases tested intercontinental paths linking research hubs in Geneva, New York City, San Francisco, and Tokyo, leveraging lessons from experiments at SLAC National Accelerator Laboratory and the distributed computing model pioneered by the Worldwide LHC Computing Grid. Governance structures matured through forums such as meetings convened by GÉANT and the Internet2 Global Summit.

Architecture and Network Infrastructure

The LHCONE fabric is architected as an overlay that uses dedicated circuits, VLANs, and routing policies across participating backbones such as GÉANT, ESnet, and Internet2. Core elements include high-throughput optical links, wavelength-division multiplexing infrastructure sourced from vendors used by CERN and national laboratories, and peering at major exchange points such as DE-CIX, AMS-IX, and LINX. The design incorporates traffic-engineered paths, quality-of-service agreements with providers such as NORDUnet and SURF, and integration with perfSONAR measurement nodes operated by R&E network operators. Endpoints run transfer tools and storage systems familiar to centers such as CERN IT, the Fermilab Scientific Computing Division, and KIT.
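To illustrate the monitoring role described above, the following is a minimal sketch of how an operator might summarize perfSONAR-style throughput samples between site pairs and flag degraded paths. The site names, sample values, and policy floor are illustrative assumptions, not real LHCONE measurements or tooling.

```python
from statistics import median

# Hypothetical recent throughput samples (Gbit/s) between site pairs, as an
# operator might export them from a measurement archive; values are invented.
samples = {
    ("CERN", "Fermilab"): [38.2, 41.5, 39.9, 40.8],
    ("CERN", "DESY"): [9.1, 8.7, 2.3, 8.9],
}

def summarize(samples, floor_gbps=5.0):
    """Report median throughput per site pair and flag pairs whose worst
    recent sample dips below an assumed policy floor."""
    report = {}
    for pair, values in samples.items():
        report[pair] = {
            "median_gbps": median(values),
            "degraded": min(values) < floor_gbps,
        }
    return report
```

In a real deployment the samples would come from the measurement infrastructure rather than a literal dictionary, and the floor would reflect negotiated service-level agreements.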

Membership and Governance

Membership comprises research and education networks, national laboratories, and recognized computing centers that meet technical and policy criteria. Decision-making involves steering groups with representatives from organizations such as CERN, ESnet, Internet2, GÉANT, and national research councils including Science and Technology Facilities Council and Deutsches Elektronen-Synchrotron. Policies are codified through memoranda of understanding and service-level agreements negotiated among participants, with coordination facilitated by meetings at venues like Geneva and virtual workshops hosted by GÉANT and Internet2. Operational matters are overseen by technical working groups populated by engineers from Fermilab, Brookhaven National Laboratory, and regional NRENs.

Operations and Performance

Operations focus on predictable, high-throughput transfers to support workflows from experiments such as ATLAS, CMS, ALICE, and LHCb. Performance engineering leverages transfer tools and middleware used by the Worldwide LHC Computing Grid and relies on active monitoring via systems interoperable with perfSONAR and measurement platforms employed by GÉANT and ESnet. Typical operational practices include scheduled bulk transfers, dynamic path selection to avoid congestion, and coordination of maintenance windows at exchange points such as DE-CIX. Measured throughput routinely reaches sustained flows in the multi-gigabit to tens-of-gigabits-per-second range between major sites, enabling replication and analysis tasks across distributed facilities including National Research Council Canada centers and regional Tier-1 facilities.
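The congestion-avoiding path selection mentioned above can be sketched as a simple headroom calculation: among candidate paths, choose the one with the most free capacity, provided it can absorb the requested transfer rate. The path names, capacities, and utilization figures below are invented for illustration and do not describe real LHCONE topology or scheduling software.

```python
def pick_path(paths, needed_gbps):
    """paths: list of dicts with 'name', 'capacity_gbps', and 'utilization'
    (a fraction in [0, 1]). Returns the name of the path with the largest
    free capacity that still fits the requested rate, or None if none fits."""
    best = None
    best_headroom = needed_gbps  # require at least this much free capacity
    for p in paths:
        headroom = p["capacity_gbps"] * (1.0 - p["utilization"])
        if headroom >= best_headroom:
            best, best_headroom = p["name"], headroom
    return best

# Illustrative candidate paths for a scheduled bulk transfer.
candidates = [
    {"name": "geant-primary", "capacity_gbps": 100, "utilization": 0.92},
    {"name": "esnet-transatlantic", "capacity_gbps": 100, "utilization": 0.40},
    {"name": "backup-10g", "capacity_gbps": 10, "utilization": 0.10},
]
```

For a 20 Gbit/s transfer this sketch would skip the nearly full primary path and select the transatlantic path with 60 Gbit/s of headroom; a request no path can fit returns None, signaling the transfer should be rescheduled.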

Security and Access Policies

Access requires members to agree to policies addressing acceptable use, traffic filtering, and abuse handling, coordinated among entities such as CERN IT, ESnet security teams, and national CERTs like CERT-EU and US-CERT. Security controls combine perimeter filtering, route filtering, and peering policy enforcement with incident response coordination drawing on frameworks used by GÉANT and Internet2 security collaboratives. Because the fabric is designed for trusted research traffic, onboarding includes vetting of organizational credentials, network engineering validation, and designation of operational contacts for CERT escalation and law enforcement coordination when necessary.
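The route filtering mentioned above can be illustrated with a small sketch: announcements from a peer are accepted only if they fall within that member's registered address blocks. The member name and prefixes are invented for illustration; real LHCONE filtering policy is negotiated among the participating networks and enforced in router configuration, not application code.

```python
import ipaddress

# Hypothetical registry of a member's address aggregates (invented prefixes
# from documentation ranges, not real LHCONE allocations).
REGISTERED = {
    "example-nren": [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("2001:db8::/32"),
    ],
}

def accept_announcement(member, prefix):
    """Accept an announced prefix only if it is contained within one of the
    member's registered aggregates (same address family)."""
    announced = ipaddress.ip_network(prefix)
    return any(
        announced.version == agg.version and announced.subnet_of(agg)
        for agg in REGISTERED.get(member, [])
    )
```

A more specific announcement inside a registered block passes; a prefix outside the member's aggregates, or from an unknown member, is rejected.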

Impact and Use Cases

LHCONE accelerates large-scale scientific discovery by enabling efficient bulk movement of experimental data for analyses conducted at centers such as CERN, Fermilab, Brookhaven National Laboratory, and university clusters at MIT, University of California, Berkeley, University of Oxford, and University of Tokyo. Use cases extend beyond particle physics to multi-institution projects in astronomy involving facilities like ALMA and Square Kilometre Array prototyping groups, climate modeling collaborations tied to ECMWF, and bioinformatics consortia leveraging high-throughput sequencing data. The fabric underpins federated computing models that couple storage resources, workflow managers, and analysis frameworks developed by communities including the Worldwide LHC Computing Grid and national e-infrastructure programs, thereby shaping how international science shares and processes petascale datasets.

Category:Research networks