LLMpedia: The first transparent, open encyclopedia generated by LLMs

Trinity supercomputer

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: 63 extracted → 0 after dedup → 0 after NER → 0 enqueued
Name: Trinity
Operator: Los Alamos National Laboratory and Sandia National Laboratories
Location: Los Alamos, New Mexico
Inaugurated: 2015 (Phase 1); full system 2017
Manufacturer: Cray Inc. (now Hewlett Packard Enterprise)
OS: SUSE Linux Enterprise Server
Purpose: Nuclear stockpile stewardship, scientific computing
Peak: ~40 petaflops (double precision)


Trinity is a high-performance computing system built to support United States Department of Energy and National Nuclear Security Administration missions and scientific research at Los Alamos National Laboratory and Sandia National Laboratories. Built by Cray Inc. (later part of Hewlett Packard Enterprise) around processors from Intel Corporation, Trinity addresses simulation needs arising from the Stockpile Stewardship Program, the Advanced Simulation and Computing Program, and cross-disciplinary projects involving researchers from Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, and universities such as the Massachusetts Institute of Technology and Stanford University.

Overview

Trinity was procured under a National Nuclear Security Administration acquisition program to modernize computational capabilities for nuclear deterrence assessment, modeling tied to the Advanced Simulation and Computing Program, and classification-appropriate workloads from partners including Department of Defense agencies and academic consortia such as the University of California system. The system's commissioning followed procurement pathways shaped by directives from successive presidential administrations and budget appropriations authorized by the United States Congress, with technical reviews from panels including experts associated with the Association for Computing Machinery and the IEEE.

Design and architecture

Trinity employs a heterogeneous architecture that pairs multi-core Intel Xeon processors with many-core Intel Xeon Phi processors, following the supercomputing roadmap advocated by the Exascale Computing Project and referenced in reports by the Office of Science and Technology Policy. Its network builds on the Cray XC-series interconnect lineage, using the Aries interconnect in a dragonfly topology. The system architecture balances memory bandwidth, inter-node latency, and I/O throughput, reflecting lessons from deployments at the National Energy Research Scientific Computing Center and the Oak Ridge Leadership Computing Facility.
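Interconnect behavior of this kind is usually characterized with simple point-to-point microbenchmarks. The following MPI ping-pong program is a minimal, illustrative sketch of such a measurement; the message sizes and iteration counts are arbitrary and are not drawn from any Trinity acceptance test.

/* Minimal MPI ping-pong microbenchmark: estimates one-way latency and
 * effective bandwidth between two ranks. Illustrative only; message sizes
 * and repetition counts are arbitrary, not tuned for any particular system. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 200;
    for (long bytes = 8; bytes <= (1L << 22); bytes *= 4) {
        char *buf = malloc(bytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double dt = MPI_Wtime() - t0;
        if (rank == 0) {
            double latency_us = dt / (2.0 * iters) * 1e6;      /* one-way estimate */
            double bw_mb_s = (2.0 * iters * bytes) / dt / 1e6; /* both directions counted */
            printf("%8ld B  %10.2f us  %10.1f MB/s\n", bytes, latency_us, bw_mb_s);
        }
        free(buf);
    }
    MPI_Finalize();
    return 0;
}

Built with an MPI wrapper compiler (for example mpicc) and run with two ranks placed on different nodes, the program reports a rough one-way latency and bandwidth for each message size.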

Hardware components

The compute layer combines multi-core Xeon and many-core Xeon Phi processors from Intel Corporation, assembled in cabinets manufactured originally by Cray Inc. and later supported by Hewlett Packard Enterprise. Storage subsystems use the Lustre parallel file system together with enterprise disk arrays from vendors such as Seagate Technology and Western Digital. Cooling infrastructure and power delivery were engineered for the warm-water-cooled Strategic Computing Complex at Los Alamos National Laboratory. Facility readiness involved compliance with standards from the American Society of Heating, Refrigerating and Air-Conditioning Engineers and power provisioning coordinated with local utilities and U.S. Department of Energy site offices.
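Sustained memory bandwidth on nodes of this class is commonly estimated with a STREAM-style kernel. The OpenMP sketch below shows the "triad" operation in minimal form; the array size and thread count are chosen for convenience and are not taken from the Trinity configuration.

/* STREAM-style triad kernel: a common way to estimate sustained memory
 * bandwidth on a multi- or many-core compute node. The array size and the
 * thread count are illustrative defaults, not Trinity-specific values. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1L << 25)   /* 32M doubles per array, roughly 256 MB each */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    #pragma omp parallel for
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];   /* triad: two reads and one write per element */
    double dt = omp_get_wtime() - t0;

    /* three arrays of N doubles move through memory per triad sweep */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("triad: %.3f s, ~%.1f GB/s with %d threads\n",
           dt, gbytes / dt, omp_get_max_threads());

    free(a); free(b); free(c);
    return 0;
}

Compiled with OpenMP enabled (for example gcc -O2 -fopenmp), the reported rate gives a rough lower bound on the node's achievable memory bandwidth.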

Software and performance

Trinity runs system software including distributions based on SUSE Linux Enterprise Server and resource managers used across the national laboratories, such as SLURM, alongside job schedulers similar to those at Argonne National Laboratory. Scientific software stacks include compilers and libraries from Intel Corporation, the GNU Project, and vendor-optimized toolchains, with numerical libraries such as BLAS and LAPACK and packages maintained by projects associated with the National Institute of Standards and Technology. Performance characterization used benchmarks associated with the TOP500 list, workload suites comparable to High Performance Conjugate Gradients, and domain-specific tests developed with collaborators from Princeton University, the University of California, Berkeley, and Caltech.
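Before full benchmarks such as High Performance LINPACK or High Performance Conjugate Gradients are run, sustained floating-point throughput is often sanity-checked with a single matrix multiplication against the installed BLAS. The sketch below assumes a generic CBLAS interface (for example the reference CBLAS or Intel MKL's CBLAS layer); the matrix size is arbitrary.

/* Minimal DGEMM throughput check against a CBLAS implementation. The matrix
 * size is illustrative; real acceptance benchmarks such as HPL and HPCG are
 * far more involved. Assumes a cblas.h header is available on the system. */
#include <cblas.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const int n = 2048;
    double *A = malloc((size_t)n * n * sizeof(double));
    double *B = malloc((size_t)n * n * sizeof(double));
    double *C = malloc((size_t)n * n * sizeof(double));
    for (size_t i = 0; i < (size_t)n * n; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* C = 1.0 * A * B + 0.0 * C, all matrices n x n in row-major order */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double dt = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gflops = 2.0 * n * n * (double)n / dt / 1e9;  /* 2*n^3 floating-point operations */
    printf("dgemm %d x %d: %.3f s, ~%.1f GFLOP/s\n", n, n, dt, gflops);

    free(A); free(B); free(C);
    return 0;
}

Comparing the measured rate against a node's theoretical peak gives a quick indication of whether the compiler and BLAS toolchain are configured sensibly.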

Applications and workload

Primary workloads target simulation and modeling for Stockpile Stewardship Program activities, climate and earth-system components comparable to work at NOAA and NASA, and fusion modeling coordinated with Princeton Plasma Physics Laboratory and General Atomics efforts. Trinity has also supported computational fluid dynamics projects akin to those in programs at NASA Ames Research Center, as well as materials science studies related to initiatives at Argonne National Laboratory and Brookhaven National Laboratory. Collaborative research has involved investigators from Columbia University, the University of Texas at Austin, and international partners linked through forums such as the International Conference for High Performance Computing, Networking, Storage and Analysis (SC).
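Many of these simulation workloads reduce to explicit stencil updates over structured grids. The toy 2-D heat-diffusion example below illustrates that pattern only; the grid size, step count, and coefficient are arbitrary and are not taken from any Trinity application code.

/* Toy 2-D heat-diffusion stencil: the explicit finite-difference pattern at
 * the heart of many fluid-dynamics and materials-science simulations.
 * All parameters are illustrative. */
#include <stdio.h>

#define NX 256
#define NY 256
#define STEPS 500
#define ALPHA 0.1   /* diffusion coefficient * dt / dx^2, kept below 0.25 for stability */

int main(void) {
    static double u[NX][NY], unew[NX][NY];

    /* hot spot in the middle of an otherwise cold plate */
    u[NX / 2][NY / 2] = 100.0;

    for (int step = 0; step < STEPS; step++) {
        /* five-point stencil update on interior points */
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                unew[i][j] = u[i][j] + ALPHA *
                    (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
                     - 4.0 * u[i][j]);
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                u[i][j] = unew[i][j];
    }

    printf("center temperature after %d steps: %.4f\n", STEPS, u[NX / 2][NY / 2]);
    return 0;
}

Production codes layer MPI domain decomposition, threading, and parallel I/O on top of this basic update, which is where the interconnect, memory, and filesystem characteristics described above become decisive.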

Deployment and operational history

Procurement and deployment followed milestones involving contracts with Cray Inc. and later transitions under Hewlett Packard Enterprise after its acquisition of Cray. The system was installed in phases beginning in 2015, entered full production in the late 2010s, and has undergone upgrades and maintenance cycles coordinated with operations teams at Los Alamos National Laboratory and Sandia National Laboratories. Operational lessons were shared at conferences hosted by the IEEE and ACM SIGARCH, and incident response procedures aligned with guidance from the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology.

Security and access policies

Access to Trinity is governed by authorization frameworks from the National Nuclear Security Administration, requiring account credentials vetted through institutional affiliations such as Los Alamos National Laboratory and Lawrence Livermore National Laboratory, and project approvals tied to federal program offices. Data handling and classification adhere to directives such as Executive Order 13526 and to security practices drawn from standards published by the National Institute of Standards and Technology and the Defense Information Systems Agency. Collaborations involve formal agreements similar to memoranda of understanding between national laboratories and universities, with audit and compliance procedures reported to sponsoring agencies such as the Department of Energy.

Category:Supercomputers