| ATLAS computing group | |
|---|---|
| Name | ATLAS computing group |
| Formation | 1990s |
| Type | Scientific collaboration subgroup |
| Location | CERN, Geneva |
| Region served | Worldwide |
| Parent organization | ATLAS experiment |
ATLAS computing group
The ATLAS computing group coordinates distributed computing resources, software development, and data management for the ATLAS experiment at CERN. It interfaces with the Worldwide LHC Computing Grid, with national laboratories such as Fermilab, Brookhaven National Laboratory, and DESY, and with regional centres including GridKa and TRIUMF to support the reconstruction, simulation, and analysis of Large Hadron Collider collision datasets. The group engages with projects and institutions such as the European Grid Infrastructure, the Open Science Grid, and CERN IT, and with experiments such as CMS and LHCb, on interoperability and standards.
The computing group provides governance and technical direction for resource provisioning, middleware integration, and policy alignment across CERN, ENEA, NODECA, INFN, STFC, and university partners such as Oxford University, the University of Chicago, and the University of Tokyo. It coordinates with infrastructure initiatives including HTCondor, ARC middleware, the Globus Toolkit, and the KV grid, and with cloud providers exemplified by Amazon Web Services, Google Cloud Platform, and research clouds at CESNET. The group's remit covers the data lifecycle from raw detector output to long-term preservation, in concert with bodies such as the WLCG and archives such as the CERN Open Data Portal.
Membership comprises computing coordinators, site reliability engineers, software architects, and data managers drawn from institutions such as Lawrence Berkeley National Laboratory, the Max Planck Society, the University of Manchester, Kyoto University, and the University of Melbourne. Leadership interfaces with governance panels including the ATLAS Collaboration Board and Technical Coordination, and with working groups tied to Physics Working Group conveners and detector sub-systems such as the ATLAS Inner Detector and the Tile Calorimeter. Collaborators often hold affiliations with funding agencies such as the National Science Foundation, the European Research Council, the Deutsche Forschungsgemeinschaft, and national ministries.
The group operates and integrates services for distributed computing elements, site batch systems, and storage fabrics across the Tier-0 at the CERN Data Centre, Tier-1 centres at RAL, CC-IN2P3, and FZK, and many Tier-2 sites hosted at universities. Core infrastructure components include the CERN EOS storage system, dCache, XRootD, Rucio for data placement, and FTS for transfers. The group leverages virtualization and container orchestration with Docker, Singularity, and Kubernetes, and collaborates with projects such as Helix Nebula and OpenStack. Authentication and authorization rely on federations and services such as the CERN Single Sign-On, eduGAIN, and VOMS.
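A minimal sketch of how a data manager might use the Rucio Python client to list a dataset's replicas and request an additional copy; the scope, dataset name, RSE expression, and rule parameters below are hypothetical placeholders rather than real ATLAS identifiers, and option names can vary between Rucio releases.

```python
# Sketch using the Rucio Python client; DID and RSE expression are hypothetical.
from rucio.client import Client

client = Client()  # picks up account and authentication from the local Rucio config

did = {"scope": "mc16_13TeV", "name": "example.dataset.DAOD_PHYS"}  # placeholder DID

# List where replicas of the dataset currently exist and how much is available.
for replica in client.list_dataset_replicas(did["scope"], did["name"]):
    print(replica["rse"], replica.get("available_bytes"))

# Ask Rucio to maintain one additional copy matching an (illustrative) RSE expression.
client.add_replication_rule(
    dids=[did],
    copies=1,
    rse_expression="tier=2&type=DATADISK",
    lifetime=14 * 24 * 3600,  # rule lifetime in seconds
    comment="extra copy for reprocessing (example)",
)
```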
The managed software stacks include the Athena framework, ROOT, Geant4, FastCaloSim, and experiment-specific toolkits that build on Gaudi and interface with HEPData. The group maintains continuous integration on platforms such as Jenkins, GitLab, and GitHub, and coordinates release validation with detector teams and analysis groups. Data management policies employ metadata catalogues, provenance tracking, and preservation standards developed with collaborations including DPHEP, EOSC, Zenodo, and the Digital Preservation Coalition. Interoperability with external analyses relies on standards from the HEP Software Foundation and shared libraries such as LHAPDF.
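A minimal PyROOT sketch of the kind of analysis this stack supports, assuming a ROOT file reachable over XRootD: it opens a tree with RDataFrame and histograms one branch. The file URL, tree name, and branch name are illustrative placeholders, not actual ATLAS data paths or content.

```python
# Sketch: histogram one branch of a remote ROOT file with RDataFrame.
# The URL, tree name, and branch name are placeholders.
import ROOT

# Files on EOS or Tier-2 storage are typically read via the xrootd protocol.
df = ROOT.RDataFrame("CollectionTree", "root://eospublic.cern.ch//eos/path/to/file.root")

# Fill a histogram of a hypothetical branch holding jet transverse momenta in GeV.
hist = df.Histo1D(
    ("jet_pt", "Leading jet p_{T};p_{T} [GeV];Events", 100, 0.0, 500.0),
    "jet_pt_gev",
)

canvas = ROOT.TCanvas()
hist.Draw()              # triggers the event loop and draws the result
canvas.SaveAs("jet_pt.png")
```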
Operations cover prompt reconstruction at the Tier-0, reprocessing campaigns at Tier-1 centres, Monte Carlo production across Tier-2 sites, and user analysis on opportunistic cloud and HPC resources such as PRACE systems and NERSC. Workflow engines and workload management systems include PanDA and ProdSys2, with HTCondor and site batch systems such as Slurm handling job execution. The group coordinates with detector operations during runs, aligning with the LHC machine schedule and the analysis timelines of the Higgs, supersymmetry (SUSY), and heavy-ion physics groups.
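A minimal sketch of submitting a batch of analysis jobs through the HTCondor Python bindings, assuming a recent bindings version in which Schedd.submit accepts a Submit description and a count; the executable, arguments, and resource requests are hypothetical examples, not ATLAS production settings.

```python
# Sketch: submit ten identical jobs to an HTCondor pool (placeholder job description).
import htcondor

submit_description = htcondor.Submit({
    "executable": "/usr/bin/python3",
    "arguments": "run_analysis.py --dataset example.dataset",  # hypothetical script
    "output": "job.$(ClusterId).$(ProcId).out",
    "error": "job.$(ClusterId).$(ProcId).err",
    "log": "job.$(ClusterId).log",
    "request_cpus": "1",
    "request_memory": "2GB",
})

schedd = htcondor.Schedd()                              # local scheduler daemon
result = schedd.submit(submit_description, count=10)    # queue 10 jobs in one cluster
print("Submitted cluster", result.cluster())
```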
Monitoring stacks use tools and projects such as Grafana, Prometheus, the ELK Stack, and experiment dashboards developed in collaboration with CERN openlab. Metrics include tape and disk I/O throughput, CPU efficiency, job success rates, and transfer latencies, benchmarked against service-level agreements with Tier-1 and Tier-2 sites. Incident response and post-mortems are coordinated with the CERN Computer Security Team, site operations centres at TRIUMF and FNAL, and community fora such as WLCG Operations Coordination.
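A minimal sketch of pulling a site-level job success rate from a Prometheus server over its HTTP query API (/api/v1/query); the server URL and metric names are assumptions for illustration, not the actual ATLAS or WLCG monitoring schema.

```python
# Sketch: query a Prometheus server for a per-site job success rate (placeholder metrics).
import requests

PROMETHEUS_URL = "http://monitoring.example.cern.ch:9090"  # hypothetical endpoint

# Fraction of jobs finishing successfully over the last hour, grouped by site.
query = (
    "sum by (site) (rate(jobs_finished_total{status='success'}[1h])) "
    "/ sum by (site) (rate(jobs_finished_total[1h]))"
)

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    site = series["metric"].get("site", "unknown")
    timestamp, value = series["value"]
    print(f"{site}: {float(value):.2%} job success rate")
```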
The group underpins results from the ATLAS Collaboration, including precision studies of the Higgs boson, searches reported in Physical Review Letters, and analyses disseminated at conferences such as ICHEP and EPS-HEP. It contributes software and best practices to the HEP Software Foundation and to open-data initiatives that support education at institutions such as Imperial College London and the University of California, Berkeley. Outreach activities include tutorials for the CERN Summer Student Programme, workshops with PRACE and EIROforum, and public datasets on the CERN Open Data Portal, which also foster collaboration with industry partners such as Intel and NVIDIA.