LLMpedia: The first transparent, open encyclopedia generated by LLMs

Keeneland (supercomputer)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: XSEDE (Hop 4)
Expansion Funnel: Raw 72 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 72
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
Keeneland (supercomputer)
Name: Keeneland
Location: University of Tennessee
Operators: National Science Foundation / Tennessee Advanced Computing Center
Cost: $5.6 million (hardware grant)
Announced: 2011
Decommissioned: 2015
Processors: Intel Xeon CPUs, NVIDIA Tesla GPUs
Memory: 384 GB per node (varies)
Interconnect: InfiniBand
Purpose: research, high-performance computing

Keeneland was a GPU-accelerated high-performance computing system deployed at the University of Tennessee and operated by the Tennessee Advanced Computing Center. Funded through a major award from the National Science Foundation, the system combined Intel Xeon processors with NVIDIA Tesla GPUs and high-speed InfiniBand networking to serve researchers at Oak Ridge National Laboratory, the University of Kentucky, the Georgia Institute of Technology, and other US academic institutions. Keeneland supported projects in computational science, data analytics, and algorithm development for heterogeneous computing.

Overview

Keeneland was designed as an experimental platform to explore accelerated computing, support proposals to funding agencies such as the Office of Science, and train scientists in GPU programming models used on systems such as Titan (supercomputer), Summit (supercomputer), and Aurora (supercomputer). The project brought together personnel from the Pittsburgh Supercomputing Center, the National Center for Supercomputing Applications, and regional universities to create a shared resource for computational workflows in disciplines including climate science, biophysics, computational fluid dynamics, and materials science. The system emphasized reproducible performance studies relevant to procurement decisions by centers such as XSEDE and national laboratories like Lawrence Berkeley National Laboratory.

Architecture and Hardware

Keeneland’s architecture paired multi-core Intel Xeon CPUs with multiple NVIDIA Tesla GPU accelerators per compute node, connected via a low-latency InfiniBand fabric similar to interconnects used on Roadrunner and Sequoia (supercomputer). Storage subsystems included parallel file systems and shared node-local memory configurations akin to those found at Argonne National Laboratory facilities. The hardware selection mirrored trends toward hybrid CPU–GPU nodes reflected in procurement choices at Sandia National Laboratories and Los Alamos National Laboratory, enabling experiments with heterogeneous memory hierarchies, PCIe topologies, and power-constrained performance models examined by teams from Columbia University and Massachusetts Institute of Technology.
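The CPU–GPU balance of such hybrid nodes can be sketched with a small model. The figures below are hypothetical placeholders for a generic accelerated node, not actual Keeneland specifications, and the `HybridNode` class is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class HybridNode:
    """Hypothetical model of a CPU-GPU compute node (illustrative values only)."""
    cpu_sockets: int
    cpu_cores_per_socket: int
    gpus: int
    gpu_peak_gflops: float   # double-precision peak per GPU
    cpu_peak_gflops: float   # double-precision peak per CPU socket

    def peak_gflops(self) -> float:
        """Aggregate double-precision peak FLOP rate for the whole node."""
        return (self.cpu_sockets * self.cpu_peak_gflops
                + self.gpus * self.gpu_peak_gflops)

    def gpu_fraction(self) -> float:
        """Share of the node's peak FLOP rate contributed by the accelerators."""
        return self.gpus * self.gpu_peak_gflops / self.peak_gflops()

# Illustrative (not actual Keeneland) figures: two 6-core sockets, three GPUs.
node = HybridNode(cpu_sockets=2, cpu_cores_per_socket=6,
                  gpus=3, gpu_peak_gflops=665.0, cpu_peak_gflops=80.0)
print(f"node peak: {node.peak_gflops():.0f} GFLOP/s, "
      f"GPU share: {node.gpu_fraction():.0%}")
```

With these placeholder numbers the accelerators supply over 90% of the node's peak, which is why PCIe topology and host-device transfer costs dominate performance tuning on such machines.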

Software and Programming Environment

The software stack supported CUDA-based development with NVIDIA toolchains, compilers from the GNU Compiler Collection and Intel Parallel Studio, and libraries such as the MPI implementations used at NERSC and math libraries like cuBLAS, cuFFT, and vendor-tuned BLAS from Intel. Middleware included resource managers and schedulers comparable to SLURM and TORQUE, and debuggers and profilers similar to NVIDIA Nsight and tools developed at the National Institute of Standards and Technology. Users ported codes such as LAMMPS, GROMACS, and OpenFOAM to leverage GPU offloading, collaborating with software engineering groups at the University of Illinois Urbana–Champaign and the University of California, Berkeley.
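Porting codes for GPU offloading typically isolates a hot kernel behind a backend dispatch so that a CPU reference implementation and an accelerator implementation can coexist and be validated against each other. The sketch below illustrates that pattern in plain Python; all names (`saxpy_cpu`, `KERNELS`, the `backend` parameter) are invented for illustration and do not come from any specific Keeneland code.

```python
def saxpy_cpu(a, x, y):
    """Reference CPU kernel: elementwise y <- a*x + y."""
    return [a * xi + yi for xi, yi in zip(x, y)]

# Registry of available backends; on a real system a CUDA implementation
# would be registered here as well, e.g. KERNELS["gpu"] = saxpy_cuda.
KERNELS = {"cpu": saxpy_cpu}

def saxpy(a, x, y, backend="cpu"):
    """Dispatch to the requested backend, falling back to the CPU reference."""
    kernel = KERNELS.get(backend, saxpy_cpu)
    return kernel(a, x, y)

print(saxpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
```

The fallback behavior matters in practice: the same source can run on login nodes without accelerators, and the CPU path serves as the correctness oracle when tuning the GPU kernel.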

Performance and Benchmarks

Keeneland hosted benchmark suites drawn from community-driven collections such as the TOP500 and domain-specific benchmarks used by the ACM and IEEE communities. Performance evaluations compared double-precision throughput, memory bandwidth, and strong/weak scaling on kernels similar to those in LINPACK, spectral solvers from FFTW, and sparse linear algebra workloads akin to PETSc benchmarks. Studies conducted by researchers at Oak Ridge National Laboratory and Georgia Tech published comparative results against systems such as Titan (supercomputer) and commodity CPU clusters, informing energy-to-solution metrics and performance-per-watt analyses promoted by initiatives such as The Green Grid.
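The metrics named above reduce to simple formulas: performance per watt divides sustained throughput by average draw, energy-to-solution integrates power over runtime, and strong-scaling efficiency compares measured speedup to the ideal. The sketch below uses made-up numbers, not measured Keeneland results.

```python
def perf_per_watt(gflops_sustained: float, avg_power_w: float) -> float:
    """Sustained GFLOP/s per watt, the basis of Green500-style rankings."""
    return gflops_sustained / avg_power_w

def energy_to_solution(avg_power_w: float, runtime_s: float) -> float:
    """Total energy in joules consumed by one run of the workload."""
    return avg_power_w * runtime_s

def strong_scaling_efficiency(t1: float, tp: float, p: int) -> float:
    """Measured speedup on p nodes relative to ideal: T(1) / (p * T(p))."""
    return t1 / (p * tp)

# Illustrative numbers, not measured Keeneland results:
print(perf_per_watt(1000.0, 500.0))               # 2.0 GFLOP/s per watt
print(energy_to_solution(500.0, 3600.0))          # 1800000.0 J
print(strong_scaling_efficiency(100.0, 7.0, 16))  # ~0.89
```

Energy-to-solution is often the more decision-relevant metric for procurement: a slower machine at lower power can still win if the product of power and runtime is smaller.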

Research Applications and Use Cases

Research on Keeneland spanned molecular dynamics simulations for biomolecular systems (work related to Protein Data Bank studies), large-scale cosmological simulations similar to projects at Kavli Institute for Cosmological Physics, and machine-learning experiments leveraging GPU-accelerated frameworks analogous to early deployments of TensorFlow and Caffe. Applications included computational chemistry codes maintained by groups at California Institute of Technology, earthquake simulation efforts connected to US Geological Survey research, and computational electromagnetics studies familiar to researchers at Johns Hopkins University. The platform also supported graduate training programs and workshops organized with partners such as XSEDE and NSF regional alliances.

History and Deployment

Keeneland emerged from a 2011 award by the National Science Foundation to create a production-class GPU-accelerated resource at the University of Tennessee / ORNL gateway, with operational cooperation from the Tennessee Advanced Computing Center and academic collaborators including Vanderbilt University and University of Virginia. Deployment activities referenced best practices from earlier deployments at centers like Pittsburgh Supercomputing Center and drew on procurement lessons from projects such as Blue Waters. The system entered production in the early 2010s, serving a broad user base and participating in collaborative science campaigns coordinated through regional consortia and national programs.

Decommissioning and Legacy

Keeneland was decommissioned in the mid-2010s as successor systems with next-generation GPU and CPU technology—such as Summit (supercomputer) and DOE exascale initiatives—came online. Its legacy includes documented porting strategies, benchmarking datasets, and training curricula adopted by groups at National Renewable Energy Laboratory and Argonne National Laboratory, influencing architecture choices in subsequent deployments like Theta (supercomputer) and academic GPU clusters at University of Texas. Data and lessons from Keeneland informed proposals to agencies including the Department of Energy and contributed to community knowledge repositories maintained by XSEDE and the Open Science Grid.

Category:Supercomputers