| Sierra (supercomputer) | |
|---|---|
| Name | Sierra |
| Manufacturer | IBM |
| Release | 2018 |
| CPU | IBM POWER9 |
| GPU | NVIDIA Tesla V100 |
| Peak performance | 125 petaFLOPS |
| Location | Lawrence Livermore National Laboratory |
| Purpose | Nuclear weapons stewardship |
Sierra is a high-performance supercomputer deployed at Lawrence Livermore National Laboratory to support National Nuclear Security Administration missions, including stockpile stewardship and computational physics. Designed and built in partnership by IBM and NVIDIA, Sierra is a pre-exascale system that pairs IBM POWER9 processors with NVIDIA Tesla V100 accelerators for large-scale simulation, modeling, and data analysis. The system integrates advanced interconnects and storage to serve workflows ranging from Monte Carlo methods to the multi-physics codes used by national laboratories and federal programs.
Sierra was procured under the U.S. Department of Energy's CORAL initiative (Collaboration of Oak Ridge, Argonne, and Livermore), which modernized computing for nuclear weapons life extension programs; it was intended to succeed Sequoia and precede exascale projects such as Aurora (supercomputer) and Frontier (supercomputer). Deployed at Lawrence Livermore National Laboratory alongside facilities including the National Ignition Facility, Sierra provides consolidated capability for simulation campaigns tied to the Stockpile Stewardship Program and for collaboration with Sandia National Laboratories and Los Alamos National Laboratory. The procurement involved contracts with IBM and NVIDIA and tied into broader federal computing investments managed by the National Nuclear Security Administration; the same CORAL effort also produced the Summit system at Oak Ridge National Laboratory.
Each Sierra compute node couples two IBM POWER9 CPUs with four NVIDIA Tesla V100 GPUs, connected by NVLink 2.0 links that provide high bandwidth and hardware-coherent memory between CPU and GPU. The chassis and rack architecture follow IBM and NVIDIA engineering designs, while the network fabric is a low-latency EDR InfiniBand interconnect from Mellanox Technologies. Storage and I/O are served by IBM Elastic Storage Server hardware running the IBM Spectrum Scale parallel file system, enabling scalable checkpointing for simulation codes and post-processing with visualization tools such as VisIt and ParaView. Power and cooling follow data-center infrastructure standards used at Lawrence Livermore National Laboratory and other Department of Energy facilities.
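A back-of-envelope check connects the node design to the headline peak figure. The node count (about 4,320) and per-GPU throughput (roughly 7 TFLOPS double precision for a V100) are commonly cited figures assumed here for illustration, not taken from this article:

```python
# Rough peak-performance estimate for a Sierra-like system.
# Assumed figures (commonly cited, treat as approximate):
NODES = 4320            # compute nodes
GPUS_PER_NODE = 4       # NVIDIA Tesla V100 per node
FP64_PER_GPU = 7.0e12   # ~7 TFLOPS double precision per V100

gpu_peak = NODES * GPUS_PER_NODE * FP64_PER_GPU
print(f"GPU-only peak: {gpu_peak / 1e15:.1f} petaFLOPS")
# The POWER9 CPUs add a few more petaFLOPS, bringing the total
# near the ~125 petaFLOPS peak reported for the system.
```

The GPU-only estimate lands around 121 petaFLOPS, consistent with GPUs contributing the overwhelming majority of the system's double-precision capability.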
Sierra has a theoretical double-precision peak of about 125 petaFLOPS and measured roughly 94.6 petaFLOPS on the High-Performance LINPACK benchmark, placing it among the top systems on the TOP500 list at deployment (it debuted at No. 3 in June 2018 and rose to No. 2 that November) and participating in Graph500 and HPL suites. Measured performance for multi-physics codes such as ALE3D, ALEGRA, and hydrodynamics solvers showed strong GPU-accelerated scaling across thousands of nodes, with throughput comparable to the similarly architected Summit (supercomputer) at Oak Ridge National Laboratory. Key performance drivers include memory bandwidth, interconnect latency, and optimized libraries from NVIDIA CUDA and IBM Spectrum MPI.
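TOP500 entries report both the theoretical peak (Rpeak) and the measured LINPACK result (Rmax); their ratio is a standard efficiency metric. A small sketch using Sierra's approximate publicly reported figures (from TOP500 listings, assumed here rather than stated in this article):

```python
# HPL efficiency: measured LINPACK result divided by theoretical peak.
# Figures approximate Sierra's TOP500 listing.
rmax_pflops = 94.6    # measured High-Performance LINPACK result
rpeak_pflops = 125.7  # theoretical double-precision peak

efficiency = rmax_pflops / rpeak_pflops
print(f"HPL efficiency: {efficiency:.1%}")  # roughly 75%
```

Efficiencies in the 70-80% range are typical for large GPU-accelerated systems on HPL, where memory and interconnect overheads keep sustained throughput below peak.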
The system software stack on Sierra integrates compilers and runtimes from IBM and NVIDIA, including CUDA, OpenMP, and IBM Spectrum MPI (derived from Open MPI). Job scheduling and resource management use IBM Spectrum LSF with the jsrun launcher, a configuration shared with Summit and similar to setups at Los Alamos National Laboratory and Sandia National Laboratories, enabling workflows for both legacy codes and modernized GPU-aware applications. Scientific libraries and toolchains include GPU math libraries such as cuBLAS and cuFFT and optimized solver packages from community projects such as PETSc and Trilinos, while visualization and analysis employ tools shared with the National Center for Supercomputing Applications and other centers.
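Launchers on GPU-dense nodes typically carve each node into "resource sets" that pair each GPU with a share of the CPU cores, which is how jsrun-style scheduling works. A hypothetical partitioning sketch; the node shape (4 GPUs, 40 cores available to jobs) is an assumption for illustration:

```python
# Sketch: divide a GPU-dense node into per-GPU resource sets.
# Node shape is assumed for illustration: 4 GPUs, 40 job-usable cores.
def resource_sets(cores_per_node, gpus_per_node):
    """Return (cores, gpu_id) pairs, one resource set per GPU."""
    cores_per_set = cores_per_node // gpus_per_node
    return [(cores_per_set, gpu) for gpu in range(gpus_per_node)]

sets = resource_sets(cores_per_node=40, gpus_per_node=4)
print(sets)  # [(10, 0), (10, 1), (10, 2), (10, 3)]
```

Binding a fixed slice of cores to each GPU keeps host threads close to the accelerator they feed, which matters on NUMA nodes with multiple CPU sockets.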
Sierra was primarily tasked with simulation and analysis for the Stockpile Stewardship Program, running weapons physics, materials science, and coupled multi-physics simulations developed at Lawrence Livermore National Laboratory and partner institutions including Los Alamos National Laboratory and Sandia National Laboratories. Use cases included high-fidelity calculations for hydrodynamics, radiation transport, and equation-of-state modeling, interfacing with experimental data from facilities such as the National Ignition Facility and Z Machine at Sandia National Laboratories. Sierra also supported algorithm development in areas like uncertainty quantification, optimization, and machine learning research linked to programs at Lawrence Livermore National Laboratory and collaborations with universities and federal labs.
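Uncertainty quantification workloads of the kind mentioned above often reduce to embarrassingly parallel Monte Carlo sampling: run a model many times with perturbed inputs and study the output distribution. A toy, single-process sketch; the model function and input distributions are invented for illustration:

```python
import random
import statistics

def model(pressure, temperature):
    """Invented stand-in for an expensive physics simulation."""
    return pressure * 0.8 + temperature * 0.1

random.seed(0)  # reproducible
samples = []
for _ in range(10_000):
    # Perturb nominal inputs to represent parameter uncertainty.
    p = random.gauss(100.0, 5.0)
    t = random.gauss(300.0, 10.0)
    samples.append(model(p, t))

mean = statistics.fmean(samples)
stdev = statistics.stdev(samples)
print(f"output mean ~ {mean:.1f}, stdev ~ {stdev:.2f}")
# On a machine like Sierra, each sample would be an independent
# job or MPI rank; here they run serially for illustration.
```

Because the samples are independent, this pattern scales almost perfectly across thousands of nodes, which is one reason such campaigns suit large GPU-accelerated systems.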
The procurement of Sierra involved a contract awarded to IBM with accelerator hardware from NVIDIA, under oversight by the National Nuclear Security Administration and operational management by Lawrence Livermore National Laboratory. The development cycle built on lessons from prior procurement efforts for systems like Sequoia and contemporary planning for exascale systems such as Aurora (supercomputer) and Frontier (supercomputer). Funding and acquisition followed federal processes coordinated with entities including the U.S. Department of Energy and adhered to performance, reliability, and security requirements defined by national laboratory programs and inter-lab agreements with Sandia National Laboratories and Los Alamos National Laboratory.
Since commissioning, Sierra has enabled advanced simulation campaigns tied to the Stockpile Stewardship Program, influencing decisions and research at Lawrence Livermore National Laboratory and partner labs such as Los Alamos National Laboratory and Sandia National Laboratories. The system contributed to methodological advances in GPU-accelerated simulation, informed procurement strategies for successor systems at Oak Ridge National Laboratory and other DOE sites, and supported collaborations with academic centers including researchers at the University of California, Berkeley and industry partners such as NVIDIA. Sierra's operational lessons shaped software modernization efforts across national laboratories and influenced planning for later deployments such as Perlmutter and the exascale Frontier (supercomputer).
Category:Supercomputers Category:Lawrence Livermore National Laboratory