| ATLAS TDAQ | |
|---|---|
| Name | ATLAS Trigger and Data Acquisition |
| Established | 2007 |
| Location | CERN |
| Field | Particle physics |
The ATLAS Trigger and Data Acquisition (TDAQ) system is the real-time electronics, computing, and control framework used by the ATLAS experiment at the Large Hadron Collider to select, record, and manage collision data. It interfaces with the ATLAS detector sub-systems, the CERN accelerator complex, and tiered computing resources worldwide to deliver data for analyses such as searches for the Higgs boson, measurements of the top quark, and probes of supersymmetry. The project involves collaboration among institutes including CERN, the University of Oxford, Lawrence Berkeley National Laboratory, and Brookhaven National Laboratory, and draws on services from initiatives including the Worldwide LHC Computing Grid, GridPP, and the Open Science Grid.
The TDAQ mission integrates fast decision-making electronics, high-throughput networking, and distributed computing to reduce the raw collision rate delivered by the Large Hadron Collider to a sustainable recording rate, while preserving events relevant to studies of the Standard Model, beyond-Standard-Model searches, and precision measurements such as the W boson mass. The system coordinates efforts across the detector sub-system groups, such as those for the Pixel detector, the Semiconductor Tracker (SCT), the Transition Radiation Tracker, the Liquid Argon calorimeter, and the Tile calorimeter, as well as software teams familiar with frameworks such as Gaudi and ROOT and with designs influenced by the LHCb trigger.
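As a rough illustration of the rate-reduction task, the following sketch works through the arithmetic using approximate, publicly quoted Run 2 figures (40 MHz bunch crossings, roughly 100 kHz Level-1 output, roughly 1 kHz HLT output, and an assumed ~1.5 MB raw event size); the numbers are indicative, not authoritative TDAQ parameters:

```python
# Back-of-the-envelope trigger rate arithmetic. The rates below are
# approximate, publicly quoted Run 2 values, used here only to show
# the orders of magnitude involved.

BUNCH_CROSSING_RATE_HZ = 40e6   # LHC proton-bunch crossings per second
LEVEL1_OUTPUT_RATE_HZ = 100e3   # typical Level-1 accept rate
HLT_OUTPUT_RATE_HZ = 1e3        # typical High-Level Trigger output rate
RAW_EVENT_SIZE_BYTES = 1.5e6    # assumed average raw event size (~1.5 MB)

l1_rejection = BUNCH_CROSSING_RATE_HZ / LEVEL1_OUTPUT_RATE_HZ
hlt_rejection = LEVEL1_OUTPUT_RATE_HZ / HLT_OUTPUT_RATE_HZ
recorded_bandwidth = HLT_OUTPUT_RATE_HZ * RAW_EVENT_SIZE_BYTES

print(f"Level-1 rejection factor: {l1_rejection:,.0f}x")
print(f"HLT rejection factor:     {hlt_rejection:,.0f}x")
print(f"Overall rejection:        {l1_rejection * hlt_rejection:,.0f}x")
print(f"Recorded bandwidth:       {recorded_bandwidth / 1e9:.2f} GB/s")
```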
The TDAQ architecture is layered into front-end electronics, readout, first-level and high-level trigger farms, and storage and archival interfaces linking to the Worldwide LHC Computing Grid and national Tier-1 centres such as GridKa. Components include custom boards (e.g., ATCA and PCIe mezzanines), network technologies such as Ethernet and InfiniBand with switches from vendors used by the CERN IT Department, and farm nodes provisioned by sites such as Fermilab, DESY, INFN centres, and TRIUMF. Control and timing are synchronized through the Timing, Trigger and Control (TTC) distribution system and through links to LHC beam instrumentation, including the Beam Loss Monitoring systems.
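The layering described above can be pictured as a simple chained dataflow. The following sketch is a schematic model only; the stage names follow the text, while the `Stage` class and its fields are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of the layered TDAQ dataflow described
# above. Stage names follow the article text; technologies are examples.

@dataclass
class Stage:
    name: str
    technology: str                     # e.g. custom boards, Ethernet
    downstream: list["Stage"] = field(default_factory=list)

    def chain(self) -> str:
        """Render the dataflow path rooted at this stage."""
        if not self.downstream:
            return self.name
        return self.name + " -> " + " / ".join(s.chain() for s in self.downstream)

storage  = Stage("Tier-0 / Grid storage", "mass storage + WLCG transfers")
hlt      = Stage("High-Level Trigger farm", "commodity farm nodes", [storage])
readout  = Stage("Readout system", "custom ATCA/PCIe boards", [hlt])
frontend = Stage("Front-end electronics", "on-detector ASICs/FPGAs", [readout])

print(frontend.chain())
```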
The trigger chain combines a hardware-based Level-1 trigger, implemented with field-programmable gate arrays and custom electronics inspired by designs used in CMS and LHCb, with a software-based High-Level Trigger (HLT) farm employing algorithms developed in frameworks related to Athena and Gaudi. Level-1 reduces the 40 MHz bunch-crossing rate using inputs from the muon spectrometer, the calorimeters, and the forward detectors, while the HLT performs refined selection on full-granularity data to identify signatures such as high-pT muons, electrons, jets, and missing transverse energy used in searches for dark matter models and resonances such as Z' bosons. Trigger menus and prescales are adjusted in coordination with ATLAS Run Coordination and with physics working groups studying Higgs boson decays and top quark properties.
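To make the two-stage selection and the prescale logic concrete, here is a minimal sketch in Python. The thresholds, the prescale value, and the pT smearing model are assumptions chosen for illustration, not actual ATLAS trigger-menu entries:

```python
import random

# A minimal sketch of two-stage trigger selection with a prescale.
# All numerical parameters here are illustrative assumptions.

L1_MUON_PT_GEV = 20.0    # coarse hardware-level threshold
HLT_MUON_PT_GEV = 26.0   # refined threshold on full-granularity data
PRESCALE = 10            # record 1 in N otherwise-accepted events

def level1_accept(coarse_pt: float) -> bool:
    """Fast, coarse decision, standing in for the FPGA-based Level-1."""
    return coarse_pt > L1_MUON_PT_GEV

def hlt_accept(reco_pt: float) -> bool:
    """Refined decision using fully reconstructed quantities."""
    return reco_pt > HLT_MUON_PT_GEV

accepted = 0
recorded = 0
for _ in range(100_000):
    true_pt = random.expovariate(1 / 15.0)        # falling pT spectrum
    coarse_pt = true_pt + random.gauss(0.0, 3.0)  # coarse Level-1 estimate
    reco_pt = true_pt + random.gauss(0.0, 0.5)    # precise HLT estimate
    if level1_accept(coarse_pt) and hlt_accept(reco_pt):
        accepted += 1
        if accepted % PRESCALE == 0:
            recorded += 1                          # survives the prescale
print(f"accepted {accepted}, recorded {recorded} of 100000 crossings")
```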
The DAQ handles event building, buffering, and reliable transfer to online storage systems and to the Tier-0 facility at the CERN Data Centre. Readout drivers aggregate data from front-end modules of the Inner Detector, the muon chambers, and the calorimeter electronics into event fragments, which are assembled into full events over switched networks by farm software patterned after techniques from BaBar and Belle II. Data-quality monitoring streams feed teams that include experts from the ATLAS Calibration Group, ATLAS Run Coordination, and the computing groups that manage datasets for distribution to sites such as CNAF, SARA, and Pegasus.
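Event building, as described above, amounts to matching fragments by event identifier across independent readout sources and releasing an event once every expected source has reported. A minimal sketch, assuming just three invented source names and in-memory buffering:

```python
from collections import defaultdict

# A minimal sketch of event building: fragments from independent readout
# sources are matched by event number and assembled once complete.
# Source names and payloads are invented for illustration.

SOURCES = {"inner_detector", "muon_chambers", "calorimeters"}

pending: dict[int, dict[str, bytes]] = defaultdict(dict)

def add_fragment(event_id: int, source: str, payload: bytes):
    """Buffer one fragment; return the full event once complete."""
    pending[event_id][source] = payload
    if set(pending[event_id]) == SOURCES:
        return pending.pop(event_id)   # complete event, ready for the HLT
    return None

# Fragments typically arrive interleaved across events:
stream = [(1, "inner_detector", b"id1"), (2, "calorimeters", b"ca2"),
          (1, "muon_chambers", b"mu1"), (1, "calorimeters", b"ca1")]
for evt, src, data in stream:
    built = add_fragment(evt, src, data)
    if built:
        print(f"event {evt} built from {sorted(built)}")
```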
Control and configuration use systems derived from industrial standards and high-energy-physics frameworks, combining components such as PVSS/WinCC OA concepts, CORBA-style services, and bespoke middleware compatible with Gaudi-based algorithms and ROOT I/O. The HLT farm runs trigger reconstruction and selection code within containers and batch systems that interface with workload managers used at the CERN IT Department and partner centres, while detector experts employ monitoring tools influenced by Nagios and Grafana for operational dashboards. Configuration databases and conditions services are coordinated with groups such as the ATLAS Conditions Database team and the Beam Instrumentation community.
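Conditions services of the kind mentioned above typically resolve a run number against intervals of validity (IOVs) to find the calibration payload that applies. A minimal sketch of such a lookup, with invented run ranges and payloads:

```python
import bisect

# A minimal sketch of a conditions lookup keyed by interval of validity
# (IOV). Run numbers and payload contents are invented for illustration.

# (first_valid_run, payload) pairs, sorted by first_valid_run; each IOV
# extends until the next entry begins.
IOVS = [
    (100000, {"lar_pedestal": 1.02}),
    (100500, {"lar_pedestal": 1.05}),
    (101200, {"lar_pedestal": 0.98}),
]

def conditions_for_run(run: int) -> dict:
    """Return the payload whose IOV covers the given run number."""
    starts = [start for start, _ in IOVS]
    idx = bisect.bisect_right(starts, run) - 1
    if idx < 0:
        raise LookupError(f"no conditions valid for run {run}")
    return IOVS[idx][1]

print(conditions_for_run(100750))   # -> {'lar_pedestal': 1.05}
```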
TDAQ performance evolved through Run 1 and Run 2 upgrades, with hardware and firmware refreshes informed by lessons from Tevatron experiments and by designs from CMS and LHCb. Notable upgrades include increased readout bandwidth, expansion of the HLT compute farm, and adoption of FPGA-based preprocessors reminiscent of developments in ALICE and Belle II. These upgrades supported key results such as the observation of the Higgs boson and precision measurements in Run 2, and paved the way for the Phase-I and Phase-II modifications coordinated with the High-Luminosity LHC project among CERN, national laboratories, and university groups.
Operational readiness involves shift crews drawn from institutions such as the University of Manchester, the University of Tokyo, the Universidad de Buenos Aires, and the University of Melbourne, following procedures developed in coordination with CERN Accelerator Operations and the ATLAS Run Coordination office. Run procedures include trigger-menu deployment, prescale management, calibration runs with the cosmic-ray stream, and interlocks linked to the Beam Loss Monitoring and Machine Protection systems. Post-run workflows hand data off to the CERN Tier-0 for prompt reconstruction and distribution to Tier-1 and Tier-2 centres, where physics groups perform analyses on topics such as electroweak interactions, flavor physics, and searches for exotic particles.
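Run procedures like those above are commonly enforced by a run-control state machine that permits only legal transitions. The following sketch is hypothetical; the state names and commands are invented and do not reflect the actual ATLAS run-control states:

```python
# A hypothetical run-control state machine of the kind that typically
# governs run procedures. States and commands are invented examples.

TRANSITIONS = {
    ("INITIAL",    "configure"):   "CONFIGURED",
    ("CONFIGURED", "start"):       "RUNNING",
    ("RUNNING",    "stop"):        "CONFIGURED",
    ("CONFIGURED", "unconfigure"): "INITIAL",
}

class RunControl:
    def __init__(self):
        self.state = "INITIAL"

    def command(self, cmd: str) -> str:
        """Apply a shifter command, rejecting illegal transitions."""
        key = (self.state, cmd)
        if key not in TRANSITIONS:
            raise ValueError(f"'{cmd}' not allowed in state {self.state}")
        self.state = TRANSITIONS[key]
        return self.state

rc = RunControl()
for cmd in ("configure", "start", "stop"):
    print(cmd, "->", rc.command(cmd))
```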