
Stampede2

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: XSEDE (hop 4)
Expansion funnel: 55 extracted → 0 after dedup → 0 after NER → 0 enqueued
Stampede2
Name: Stampede2
Operator: Texas Advanced Computing Center
Location: Austin, Texas
Country: United States
Architecture: Intel Xeon Phi many-core cluster
Memory: 4.5 PB (aggregate)
Storage: 8 PB Lustre
Peak: 18.7 PFLOPS
Date deployed: 2017
Decommissioned: 2023

Stampede2 was a petascale supercomputer deployed at the Texas Advanced Computing Center in Austin, Texas, with funding from the National Science Foundation. Designed to support large-scale computational science, engineering, and data analytics, it provided high-throughput, low-latency compute capability for researchers across the United States and allied projects. Stampede2 formed part of the evolution of national cyberinfrastructure alongside systems at Oak Ridge National Laboratory, Argonne National Laboratory, and Lawrence Berkeley National Laboratory.

Overview

Stampede2 succeeded the original Stampede system in a lineage of NSF-funded machines that included Cornell University-affiliated centers, and it was built in partnership with vendors such as Intel Corporation and Dell Technologies. Its mission targeted disciplines including climate modeling, computational fluid dynamics, molecular dynamics, seismology, and astronomy. Funded principally by the National Science Foundation and administered by the Texas Advanced Computing Center, Stampede2 served academic consortia, NASA researchers, and industry collaborations. Its procurement and deployment intersected with initiatives at the Pittsburgh Supercomputing Center and the San Diego Supercomputer Center.

Architecture and Hardware

The hardware architecture combined many-core processors based on the Intel Xeon Phi line with high-performance interconnects from vendors with long histories in large-scale systems, in the same design family as contemporary machines at Oak Ridge National Laboratory and Argonne National Laboratory. The system used a high-speed fabric similar to technologies deployed at Lawrence Livermore National Laboratory and networking approaches found in National Center for Supercomputing Applications installations. Large shared and local memory tiers supported workloads comparable to those run on machines at Princeton University and the Massachusetts Institute of Technology. Storage was implemented on parallel filesystems comparable to deployments at Los Alamos National Laboratory and Sandia National Laboratories, enabling data-intensive workflows used by NOAA and US Geological Survey projects.
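The parallel-filesystem layer described above is the part of the architecture that data-intensive workflows exercise most directly. As a minimal sketch, assuming mpi4py and NumPy are available, the example below shows a collective MPI-IO write in which each rank deposits its slice of a distributed array into one shared file; the file name and array size are illustrative placeholders, not Stampede2 configuration.

```python
# Minimal MPI-IO sketch: every rank writes its own slice of an array into one
# shared file, the collective access pattern a Lustre-style parallel
# filesystem is designed to serve. Assumes mpi4py and numpy are installed;
# the file name and array size are illustrative, not Stampede2 settings.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local_n = 1_000_000                              # elements per rank (assumed)
data = np.full(local_n, rank, dtype=np.float64)  # this rank's slice

fh = MPI.File.Open(comm, "shared_output.bin",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
offset = rank * local_n * data.itemsize          # byte offset of this slice
fh.Write_at_all(offset, data)                    # collective write
fh.Close()

if rank == 0:
    print(f"{size} ranks wrote {size * local_n * data.itemsize} bytes total")
```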

Software Environment and Scheduling

Stampede2 provided an ecosystem of scientific software stacks, compilers, and libraries maintained by teams drawn from University of Texas at Austin centers, in collaboration with compiler vendors such as Intel Corporation and tool providers whose software is also used at the Barcelona Supercomputing Center. Commonly available packages included MPI implementations used in projects at Argonne National Laboratory and numerical libraries similar to those cited in publications from Stanford University and Harvard University. Job scheduling and resource management followed paradigms implemented in the batch schedulers used at NERSC and the Pittsburgh Supercomputing Center, enabling batch workflows, interactive debugging, and workflow orchestration for campaigns run by groups at Columbia University and the California Institute of Technology.
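To make the batch-oriented workflow concrete, here is a minimal sketch, assuming mpi4py is installed, of the kind of MPI program such a scheduler launches: each rank computes a partial result and a collective reduction combines the pieces at rank 0. On a production system it would be started by the site's MPI launcher from inside a batch job rather than run directly on a login node.

```python
# Minimal mpi4py sketch of a batch-style MPI job: each rank sums its own
# strided slice of 0..N-1, then a collective reduction combines the pieces.
# Assumes mpi4py is installed; N is an arbitrary illustrative workload size.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10_000_000
local_sum = sum(range(rank, N, size))               # this rank's share of the work

total = comm.reduce(local_sum, op=MPI.SUM, root=0)  # combine results at rank 0

if rank == 0:
    print(f"{size} ranks computed total = {total}")
    assert total == N * (N - 1) // 2                # sanity check vs. closed form
```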

Performance and Benchmarks

Measured performance placed Stampede2 among the leading NSF-funded systems of its era, with peak theoretical throughput comparable to contemporaneous systems at Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory. It was the subject of scaling studies and benchmark suites referenced in comparative analyses alongside machines at Argonne National Laboratory and Los Alamos National Laboratory. Communities in computational chemistry and materials science used established benchmarks developed by researchers at MIT and the University of Illinois at Urbana–Champaign to validate performance for codes originating from Sandia National Laboratories and Princeton University. Performance tuning workflows often mirrored procedures published by teams at Columbia University and the University of California, San Diego.
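For context on how a headline number like the 18.7 PFLOPS peak in the infobox is typically derived, the sketch below multiplies node count, cores per node, clock rate, and floating-point operations per core per cycle. The inputs are hypothetical placeholders chosen only to land in the petascale range; they are not Stampede2's actual node counts or clock rates.

```python
# Back-of-the-envelope theoretical peak:
#   peak_flops = nodes * cores_per_node * clock_hz * flops_per_core_per_cycle
# All inputs below are illustrative placeholders, not measured Stampede2 data.
def theoretical_peak_flops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    return nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle

# Example: 4,000 many-core nodes, 68 cores each, 1.4 GHz, and 32 double-
# precision FLOPs per core per cycle (two 512-bit FMA units).
peak = theoretical_peak_flops(nodes=4_000, cores_per_node=68,
                              clock_ghz=1.4, flops_per_cycle=32)
print(f"theoretical peak ~ {peak / 1e15:.1f} PFLOPS")   # ~12.2 PFLOPS
```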

Operational History and Decommissioning

Commissioned under procurement practices aligned with National Science Foundation awards and project milestones shared with the Extreme Science and Engineering Discovery Environment (XSEDE), Stampede2 supported multi-year science allocations and education initiatives connected to programs at Texas A&M University and Rice University. It enabled research contributing to publications in venues frequented by investigators from the University of Michigan and Yale University. Decommissioning occurred as NSF and community roadmaps advanced toward next-generation systems, with retirement coordinated alongside transitions at the National Energy Research Scientific Computing Center and capacity shifts at the Pawsey Supercomputing Centre. Hardware and software resources were phased out, and datasets were migrated in collaboration with archival partners such as Oak Ridge National Laboratory and institutional repositories at the University of Texas at Austin.

User Access and Projects

Access to Stampede2 was provisioned through allocation programs administered under the XSEDE framework and was available to researchers from institutions such as the University of Washington, the University of California, Berkeley, and Princeton University. Notable project classes included ensemble climate simulations for groups at NOAA, high-resolution geophysical imaging for teams at the US Geological Survey, and genomics pipelines comparable to work at the Broad Institute. Educational and training activities paralleled outreach conducted by Compute Canada and the European Grid Infrastructure, providing hands-on workshops for users from Duke University and Indiana University.

Category:Supercomputers
Category:Texas Advanced Computing Center