LLMpedia: The first transparent, open encyclopedia generated by LLMs

TACC Frontera

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: XSEDE (Hop 4)
Expansion Funnel: Raw 81 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 81
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
TACC Frontera
Name: Frontera
Location: Austin, Texas
Operator: Texas Advanced Computing Center
Manufacturer: Dell Technologies
Architecture: Intel Xeon Scalable / NVIDIA
Memory: 8.71 PB (aggregate)
Storage: 28 PB
Peak performance: 38.7 PFLOPS
Date deployed: 2019

Frontera is a leadership-class supercomputer deployed in 2019 at the Texas Advanced Computing Center (TACC) in Austin, Texas, to serve researchers at United States institutions including The University of Texas at Austin, Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, and Oak Ridge National Laboratory. It provides high-performance computing capabilities to investigators supported by the National Science Foundation, the National Institutes of Health, the Department of Energy, and NASA, as well as private partners such as IBM and Amazon Web Services. Frontera supports projects tied to initiatives by XSEDE, ALCF, NERSC, and PRACE collaborators.

Overview

Frontera functions as a petascale resource delivering compute for scientists at The University of Texas at Austin, Rice University, Texas A&M University, the University of Chicago, and Columbia University, supporting efforts associated with the Human Genome Project, the Event Horizon Telescope, the Large Synoptic Survey Telescope, CERN, and the LIGO Scientific Collaboration. It was procured through a competitive process involving vendors such as Dell Technologies, Intel, NVIDIA, and Micron Technology, and was integrated into national research workflows with support from the National Center for Supercomputing Applications and Compute Canada partners. Frontera's service model connects to grant programs from the NSF and to research consortia such as the HPC User Forum.

Architecture and performance

Frontera's architecture combines Intel Xeon Scalable processors with accelerator nodes and high-bandwidth fabrics of the kind used by systems at Argonne National Laboratory and Oak Ridge National Laboratory, paralleling designs seen in the Summit, Sierra, and Fugaku supercomputers. The system achieves sustained throughput on NERSC benchmark workloads and delivers peak performance comparable to installations at Lawrence Livermore National Laboratory and Los Alamos National Laboratory. Its interconnect technologies align with deployments at Sandia National Laboratories and use topologies investigated by teams from MIT, Stanford University, and Carnegie Mellon University to optimize latency for MPI codes developed at Los Alamos National Laboratory and Argonne National Laboratory.
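As an illustration of the kind of latency-sensitive point-to-point communication such interconnect topologies are tuned for, the following is a minimal MPI ping-pong sketch using mpi4py. It is not a Frontera-specific benchmark; the message size and repetition count are arbitrary assumptions chosen only to show the measurement pattern.

```python
# Minimal MPI ping-pong latency sketch (illustrative only, not a Frontera benchmark).
# Run with: mpirun -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

reps = 1000                        # arbitrary repetition count
buf = np.zeros(8, dtype=np.uint8)  # small 8-byte message to expose latency rather than bandwidth

comm.Barrier()
start = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    # Each iteration is one round trip (two messages); report a one-way latency estimate.
    print(f"estimated one-way latency: {elapsed / (2 * reps) * 1e6:.2f} us")
```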

Hardware and software technologies

Hardware components include Intel Xeon Gold CPUs, memory modules from Micron Technology, NVMe flash arrays similar to those in Google and Facebook datacenters, and InfiniBand or Ethernet fabrics akin to those used by Hewlett Packard Enterprise. The software stack leverages operating systems and tools used at NERSC and ALCF, such as distributions from Red Hat, resource managers like the Slurm Workload Manager, compilers from Intel Corporation and the GNU Project, MPI implementations such as Open MPI, and I/O libraries including HDF5 and NetCDF. Middleware and containerization follow patterns employed by Docker adopters and orchestration practices from Kubernetes experiments at Lawrence Berkeley National Laboratory and NERSC for reproducible science.
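I/O libraries such as HDF5 are typically called from application code to store simulation output. The snippet below is a minimal, hypothetical h5py sketch of writing and reading a dataset; the file name, dataset name, and attribute are arbitrary placeholders, not anything specific to Frontera's software stack.

```python
# Minimal HDF5 I/O sketch with h5py (illustrative; names are arbitrary placeholders).
import numpy as np
import h5py

data = np.random.rand(1024, 1024)  # placeholder simulation output

# Write a compressed dataset with a descriptive attribute.
with h5py.File("simulation_output.h5", "w") as f:
    dset = f.create_dataset("temperature", data=data, compression="gzip")
    dset.attrs["units"] = "kelvin"

# Read it back and verify shape and metadata.
with h5py.File("simulation_output.h5", "r") as f:
    restored = f["temperature"][:]
    print(restored.shape, f["temperature"].attrs["units"])
```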

Deployment and operations

Deployment was overseen by staff from the Texas Advanced Computing Center in coordination with procurement teams at The University of Texas at Austin and vendor support from Dell Technologies engineers and Intel field teams. Operational policies mirror governance frameworks from NSF allocation committees and the allocation processes used by XSEDE and PRACE, with user support emulating models from HPC user-support centers at Oak Ridge National Laboratory and Argonne National Laboratory. Monitoring and telemetry incorporate tools used at Lawrence Livermore National Laboratory and Sandia National Laboratories to achieve reliability goals aligned with federal research-computing standards from the National Institute of Standards and Technology.

Scientific applications and benchmarks

Frontera runs applications and benchmark suites commonly employed by teams in the LIGO Scientific Collaboration, the Event Horizon Telescope, the Human Genome Project, NASA, CERN experiments, and climate groups from NOAA and the IPCC. Production codes include computational fluid dynamics packages used at NASA Ames Research Center and the European Centre for Medium-Range Weather Forecasts, molecular dynamics tools popular at Brookhaven National Laboratory and Lawrence Berkeley National Laboratory, genomics pipelines similar to those at the Broad Institute, and machine learning frameworks developed at Google DeepMind, Facebook AI Research, and OpenAI. Benchmarks referenced include the HPL and HPCG suites used at TOP500 installations and community benchmarks maintained by SPEC.
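HPL measures sustained floating-point throughput on a dense linear-algebra workload. As a loose illustration of the same idea, and not the HPL benchmark itself, the sketch below times a dense matrix multiply with NumPy and reports achieved GFLOP/s; the problem size is an arbitrary assumption.

```python
# Rough GFLOP/s estimate from a dense matrix multiply (illustrative; not HPL).
import time
import numpy as np

n = 2048                  # arbitrary problem size
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                 # a dense multiply costs roughly 2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

gflops = 2 * n**3 / elapsed / 1e9
print(f"matrix multiply: {gflops:.1f} GFLOP/s on a single process")
```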

History and development

Frontera emerged from planning activities at the Texas Advanced Computing Center, funding from the National Science Foundation, and procurement via partners including Dell Technologies and Intel. Its acquisition followed precedents set by deployments at the National Center for Supercomputing Applications, Argonne National Laboratory, and Oak Ridge National Laboratory, and was informed by architectural studies by researchers at Stanford University, MIT Lincoln Laboratory, and Lawrence Berkeley National Laboratory. Development cycles included integration testing with software teams from NERSC and performance tuning with experts from the HPC User Forum and IEEE workshops.

Impact and collaborations

Frontera has accelerated research efforts across institutions such as The University of Texas at Austin, Rice University, Texas A&M University, Columbia University, and the University of Chicago, and national laboratories such as Argonne National Laboratory and Los Alamos National Laboratory, enabling advances reported in venues such as Nature, Science, Proceedings of the National Academy of Sciences, IEEE Transactions on Parallel and Distributed Systems, and ACM conferences. Collaborations extend to international partners including PRACE members, computational science groups at CERN, and data initiatives with Amazon Web Services and Google Cloud Platform to pilot hybrid workflows. The system has influenced procurement and architectural decisions for successor projects at the Texas Advanced Computing Center and other centers participating in the NSF's CIF21 initiatives.

Category:Supercomputers