| Frontier (supercomputer) | |
|---|---|
| Name | Frontier |
| Location | Oak Ridge National Laboratory, Tennessee |
| Manufacturer | Cray (HPE) and AMD |
| Architecture | HPE Cray EX, AMD EPYC, AMD Instinct |
| Performance | ~1.1 exaFLOPS (HPL) |
| Memory | 8 PB (system aggregate, peak) |
| Storage | Lustre-based parallel file system |
| Power | ~21 MW (design) |
| Operating system | Linux (HPE Cray OS) |
| Purpose | Scientific research, exascale computing |
Frontier is an exascale-capable supercomputer at Oak Ridge National Laboratory and the first system to sustain exaFLOP performance on the High Performance LINPACK (HPL) benchmark. It combines hardware and software from Hewlett Packard Enterprise (which acquired Cray Inc. in 2019) and Advanced Micro Devices; deployment was overseen by the U.S. Department of Energy, and the system is operated by the Oak Ridge Leadership Computing Facility. Frontier supports large-scale simulation and data analytics for users at national laboratories and academic institutions, including Lawrence Berkeley National Laboratory, Argonne National Laboratory, and Los Alamos National Laboratory.
Frontier serves the missions of the U.S. Department of Energy Office of Science, enabling research for National Nuclear Security Administration partner facilities, collaborations with the National Institutes of Health, projects funded by the National Science Foundation, and initiatives involving NASA centers and major universities such as the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, and the Georgia Institute of Technology. The system grew out of the Exascale Computing Project and contributes to international collaborations touching institutions such as CERN, the Max Planck Society, RIKEN, the École Polytechnique Fédérale de Lausanne, and Tsinghua University. Its commissioning followed roadmaps set by the Office of Advanced Scientific Computing Research, and the machine was benchmarked for the TOP500 and Green500 lists.
Frontier’s chassis and interconnect build on the HPE Cray EX cabinet design produced by Hewlett Packard Enterprise after its acquisition of Cray Inc. Each compute node pairs AMD EPYC x86 processors with AMD Instinct accelerators, using high-bandwidth memory and a coherent CPU–GPU memory link. The interconnect is a high-radix fabric in the lineage of earlier Cray networks, informed by the same body of research that shaped InfiniBand and the switch technologies used in projects at Los Alamos National Laboratory and Sandia National Laboratories. The storage subsystem integrates a parallel file system shaped by Lustre deployments and by large-scale storage efforts at Pacific Northwest National Laboratory and Lawrence Livermore National Laboratory. Power and cooling draw on engineering practice from large data centers such as those operated by Google, Microsoft, and Amazon Web Services, while meeting utility and site requirements coordinated with the Tennessee Valley Authority and local authorities.
Frontier demonstrated exascale performance with High Performance LINPACK (HPL) results reported to the TOP500 list and earned high placement on the Green500 list for energy efficiency. Benchmarking included the HPCG benchmark and application-level scaling studies with codes developed at Argonne National Laboratory, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and universities such as the University of Illinois Urbana-Champaign and the University of Texas at Austin. Performance characterization involved collaborations with vendor software teams, along with optimization efforts guided by the Message Passing Interface (MPI) standard and numerical libraries such as BLAS and LAPACK. Comparative analyses referenced milestone systems including Summit, Sunway TaihuLight, and earlier petascale machines at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory.
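The TOP500 and Green500 rankings mentioned above reduce to simple arithmetic on three numbers: sustained HPL performance (Rmax), theoretical peak (Rpeak), and power draw. A minimal sketch using the figures cited in this article (~1.1 exaFLOPS HPL, ~21 MW design power); the ~1.68 exaFLOPS peak is an approximate value assumed here only for illustration:

```python
# Back-of-the-envelope benchmark metrics from the figures cited above.
# rmax and power come from the article; rpeak is an assumed approximate
# theoretical peak, used only to illustrate the efficiency calculation.

def gflops_per_watt(rmax_flops: float, power_watts: float) -> float:
    """Green500-style energy efficiency in GFLOPS per watt."""
    return rmax_flops / power_watts / 1e9

def hpl_efficiency(rmax_flops: float, rpeak_flops: float) -> float:
    """Fraction of theoretical peak sustained on the HPL benchmark."""
    return rmax_flops / rpeak_flops

rmax = 1.1e18    # ~1.1 exaFLOPS sustained (HPL), per the article
rpeak = 1.68e18  # approximate theoretical peak (assumption)
power = 21e6     # ~21 MW design power, per the article

print(f"{gflops_per_watt(rmax, power):.1f} GFLOPS/W")  # ≈ 52.4
print(f"{hpl_efficiency(rmax, rpeak):.0%} of peak")    # ≈ 65%
```

The GFLOPS/W figure is what the Green500 ranks on; the roughly two-thirds HPL efficiency is typical of large GPU-accelerated systems, where dense linear algebra maps well onto the accelerators.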
Frontier runs a Linux-based environment customized by Hewlett Packard Enterprise and supports the programming models promoted by the U.S. Department of Energy and the Exascale Computing Project: MPI and OpenMP implementations alongside accelerator-focused frameworks such as AMD’s HIP, OpenACC, and the Kokkos (Sandia National Laboratories) and RAJA (Lawrence Livermore National Laboratory) portability layers. Compilers and toolchains include offerings from the GNU Project, the LLVM Project, and proprietary toolkits from AMD and HPE engineering teams; performance-analysis tools draw on instrumentation techniques developed at the National Energy Research Scientific Computing Center and at universities such as the University of Chicago and Princeton University. Ecosystem support extends to scientific software used at Fermilab, Brookhaven National Laboratory, and Argonne National Laboratory, and to climate-modeling efforts at NOAA-affiliated laboratories.
Frontier is housed at the Oak Ridge Leadership Computing Facility within Oak Ridge National Laboratory; site preparation drew on environmental assessments from the Tennessee Department of Environment and Conservation and infrastructure coordination with the U.S. Army Corps of Engineers. Power provisioning involved regional utilities, chiefly the Tennessee Valley Authority. Facility design applied lessons from supercomputer halls at Lawrence Berkeley National Laboratory and cooling strategies similar to deployments at Los Alamos National Laboratory and at IBM research facilities.
Researchers use Frontier for large-scale simulation and data analysis in fields supported by U.S. Department of Energy programs, in collaborations with the National Institutes of Health on biomedical modeling, with NASA on climate and astrophysics simulations, in particle-physics modeling connected to CERN workflows, and in materials-science research following initiatives at Argonne National Laboratory and Lawrence Berkeley National Laboratory. Projects include multi-physics codes funded by the Office of Science and interdisciplinary teams from universities such as the University of Michigan, Columbia University, the University of California, San Diego, the University of Wisconsin–Madison, and Purdue University. Applications span molecular-dynamics packages used at Brookhaven National Laboratory, cosmology simulations linked to the Harvard-Smithsonian Center for Astrophysics, and machine-learning workloads inspired by research from Stanford University and Carnegie Mellon University.
Frontier’s procurement and development were coordinated by the U.S. Department of Energy under the Exascale Computing Project and funded through the Office of Science, with contracts awarded to Hewlett Packard Enterprise and Advanced Micro Devices. Design milestones involved national-laboratory teams at Oak Ridge, Argonne, and Lawrence Livermore and drew on procurement frameworks used for prior acquisitions such as Summit. The project timeline intersected with congressional appropriations overseen by committees of the United States House of Representatives and Senate, and the effort benefited from partnerships with industrial research units at AMD Research and HPE.