
Jaguar (supercomputer)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Jaguar (supercomputer)
Name: Jaguar
Manufacturer: Cray, Oak Ridge National Laboratory
Release: 2005
Decommissioned: 2012
Cores: 224,256
Peak performance: 2.33 PFLOPS
Architecture: Cray XT5, AMD Opteron, Cray SeaStar2+, Cray Gemini
Operating system: Cray Linux Environment
Location: Oak Ridge National Laboratory

Jaguar was a petascale high-performance computing system built by Cray, Inc. and operated at Oak Ridge National Laboratory. It served as a flagship resource for scientific modeling and simulation, hosting projects from institutions such as the U.S. Department of Energy, the National Science Foundation, NASA, and multiple university consortia. Jaguar's massively parallel architecture was used for climate modeling, astrophysics, materials science, and nuclear research.

Overview

Jaguar was installed at Oak Ridge National Laboratory's Leadership Computing Facility, operated by the National Center for Computational Sciences, and funded primarily through the U.S. Department of Energy's Office of Science. The system evolved from the laboratory's earlier Cray XT3 and XT4 installations and belonged to a broader leadership-computing lineage that included systems at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and international centers such as the Barcelona Supercomputing Center and the Jülich Research Centre. Jaguar's operation involved partnerships with vendors such as AMD and NVIDIA, and with Sandia National Laboratories for software and benchmarking support.

Hardware and Architecture

Jaguar's hardware comprised hundreds of thousands of processor cores based on AMD Opteron x86-64 microprocessors, coupled by Cray's SeaStar2+ interconnect and, later in its upgrade path, the Cray Gemini interconnect. Cabinets contained compute blades linked in a three-dimensional torus topology, a node-communication scheme also used in systems at Argonne National Laboratory and in IBM's Blue Gene line. Storage and I/O subsystems employed the Lustre parallel file system, comparable to deployments at the National Energy Research Scientific Computing Center and the Pawsey Supercomputing Centre. Cooling and power delivery followed practices similar to those at Lawrence Berkeley National Laboratory facilities and at European Centre for Medium-Range Weather Forecasts data centers.
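In a three-dimensional torus, each node connects to six neighbors and links wrap around at the edges, so the minimum hop count between two nodes is the sum of the per-axis distances after accounting for wrap-around. The short Python sketch below illustrates that calculation; the torus dimensions are purely illustrative and do not reflect Jaguar's actual cabinet geometry.

# Minimum hop count between two nodes of a 3D torus interconnect.
# The torus dimensions below are illustrative, not Jaguar's real layout.
def torus_hops(a, b, dims):
    """Sum of per-axis distances, taking the shorter of the direct and wrap-around routes."""
    hops = 0
    for ai, bi, di in zip(a, b, dims):
        straight = abs(ai - bi)
        hops += min(straight, di - straight)  # wrap-around may be shorter
    return hops

dims = (25, 32, 24)                              # hypothetical X x Y x Z torus
print(torus_hops((0, 0, 0), (24, 1, 23), dims))  # 1 + 1 + 1 = 3 hops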

Performance and Benchmarks

Jaguar's LINPACK performance placed it at the top of the TOP500 list: the XT5 partition achieved 1.76 petaflops on the HPL benchmark and ranked first in November 2009, alongside contemporaries such as Roadrunner at Los Alamos National Laboratory and Kraken at the National Institute for Computational Sciences. Benchmarking campaigns included runs of HPL, STREAM, and application-level tests used by projects from Princeton University, the University of Tennessee, and Purdue University. Performance analysis leveraged tools and methodologies developed at Sandia National Laboratories, Argonne National Laboratory, and research groups at Carnegie Mellon University to study scalability, memory bandwidth, and interconnect latency.
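The 2.33 PFLOPS peak figure in the infobox is consistent with the core count: assuming 2.6 GHz Opteron cores that each complete four double-precision floating-point operations per cycle (an assumption about the processor configuration, not a figure taken from this article), the theoretical peak works out as in the sketch below, and the LINPACK result corresponds to roughly 75 percent of that peak.

# Back-of-the-envelope theoretical peak for the Jaguar XT5 partition.
# Clock rate and flops-per-cycle are stated assumptions.
cores = 224_256
clock_hz = 2.6e9
flops_per_cycle = 4

peak_flops = cores * clock_hz * flops_per_cycle
print(f"peak: {peak_flops / 1e15:.2f} PFLOPS")          # ~2.33 PFLOPS

rmax = 1.759e15                                          # reported LINPACK Rmax
print(f"LINPACK efficiency: {rmax / peak_flops:.0%}")    # ~75%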

System Software and Programming Environment

Jaguar ran the Cray Linux Environment, which is based on SUSE Linux Enterprise Server, alongside parallel runtime stacks similar to those used at centers such as NERSC and on XSEDE resources. Supported programming models included MPI implementations derived from work at Argonne National Laboratory, threaded models such as OpenMP, and later support for CUDA and OpenACC through collaborations with NVIDIA and university research groups at the University of California, Berkeley. Compilers from the GNU Project, Intel Corporation, and vendor toolchains enabled optimization by teams at the Massachusetts Institute of Technology and Stanford University. Job scheduling and resource management interfaced with middleware practices developed by XSEDE and by center operations at the National Center for Atmospheric Research.
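As an illustration of the MPI-based programming model such systems expose, the minimal mpi4py sketch below has each rank report its identity and take part in a collective reduction; it is a generic example of the model, not code drawn from Jaguar's actual software stack.

# Minimal MPI example of the message-passing model used on Cray systems;
# generic illustration only, not part of Jaguar's software stack.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

# Each rank contributes its ID; the sum is collected on rank 0.
total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks, sum of ranks = {total}")

Such a program would typically be launched with mpirun -n 4 python hello_mpi.py; on the Cray Linux Environment the equivalent launcher was aprun.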

Applications and Scientific Use Cases

Jaguar supported large-scale climate simulations by groups from NOAA, the National Center for Atmospheric Research, and the University of Colorado Boulder, including runs of the Community Earth System Model. Astrophysics projects from NASA centers and teams at Princeton University used Jaguar for cosmic structure formation and supernova modeling. Materials science researchers at Oak Ridge National Laboratory and collaborators from the University of Tennessee ran density functional theory and molecular dynamics codes such as VASP and LAMMPS, similar to workloads at Argonne National Laboratory. Nuclear and fusion simulations involved partnerships with Lawrence Livermore National Laboratory and General Atomics, while bioinformatics and genomics workflows mirrored efforts at the Broad Institute and Scripps Research.
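Molecular dynamics codes such as LAMMPS advance particle trajectories with explicit time-stepping schemes like velocity Verlet. The sketch below shows one such step for a single particle under a placeholder harmonic force, purely to illustrate the integration pattern; the force model and parameters are assumptions for the example, not anything specific to Jaguar workloads.

# One velocity-Verlet step; placeholder force model, shown only to
# illustrate the time-stepping pattern used by molecular dynamics codes.
def velocity_verlet_step(x, v, force, mass, dt):
    a = force(x) / mass
    x_new = x + v * dt + 0.5 * a * dt * dt   # update position
    a_new = force(x_new) / mass              # force at the new position
    v_new = v + 0.5 * (a + a_new) * dt       # update velocity
    return x_new, v_new

# Example: harmonic spring force f = -k * x with placeholder parameters.
k = 1.0
x, v = 1.0, 0.0
for _ in range(10):
    x, v = velocity_verlet_step(x, v, lambda p: -k * p, mass=1.0, dt=0.01)
print(x, v)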

Development, Upgrades, and Decommissioning

Jaguar underwent staged upgrades that increased core counts and improved interconnects, moving from the original XT3 through XT4 and XT5 configurations, paralleling upgrade cycles seen at Blue Waters and at the Pawsey Supercomputing Centre. Coordinated procurement and system integration involved Cray, Inc., AMD, and system administrators trained under DOE leadership computing initiatives and National Science Foundation programs. In 2012 Jaguar was converted into its successor, Titan (supercomputer), a hybrid system that incorporated NVIDIA GPUs, and Oak Ridge National Laboratory later deployed systems such as Summit (supercomputer). Decommissioning and transition followed practices established at Lawrence Livermore National Laboratory and other large computing centers for data migration, hardware recycling, and moving user communities to successor platforms.

Category:Cray supercomputers Category:Oak Ridge National Laboratory Category:Petascale supercomputers