| NAS Parallel Benchmarks | |
|---|---|
| Name | NAS Parallel Benchmarks |
| Developer | NASA Advanced Supercomputing Division |
| Released | 1991 |
| Latest | 3.3 |
| Platform | Supercomputers, clusters, multicore systems |
| Genre | Benchmark suite |
NAS Parallel Benchmarks
The NAS Parallel Benchmarks (NPB) are a standardized set of performance tests, developed by the NASA Advanced Supercomputing Division, for evaluating high-performance computing systems. They measure the parallel computation, communication, and I/O characteristics of large-scale machines, including systems at the National Center for Supercomputing Applications, Sandia National Laboratories, Lawrence Livermore National Laboratory, and Argonne National Laboratory, as well as commercial systems from Cray Research, IBM, Hewlett-Packard, Intel Corporation, and Dell Technologies. The suite serves research programs connected to NASA, the Department of Energy, the National Science Foundation, and academic centers including the Massachusetts Institute of Technology, Stanford University, the University of Illinois Urbana-Champaign, the University of California, Berkeley, and the University of Cambridge. It has influenced evaluation efforts at events such as the Supercomputing Conference and shaped procurement decisions for installations such as Blue Gene and the Sierra and Summit supercomputers.
The benchmarks comprise a compact collection of computational kernels and pseudo-applications derived from workloads studied by the NASA Advanced Supercomputing Division and collaborators at institutions including NASA Ames Research Center, the Jet Propulsion Laboratory, the Princeton Plasma Physics Laboratory, and Oak Ridge National Laboratory. They were motivated by performance-assessment needs in computational fluid dynamics missions, simulations related to the Earth Observing System, climate modeling efforts linked to NOAA, and large-scale numerical experiments conducted at centers like Argonne National Laboratory. The suite emphasizes portability across processor architectures from vendors such as Intel Corporation, Advanced Micro Devices, and NVIDIA Corporation, as well as vector designs from Cray Research.
Work on the benchmarks began in the late 1980s under programs at NASA Ames Research Center and in the broader HPC community, with significant contributions from researchers affiliated with the Massachusetts Institute of Technology, the University of Illinois Urbana-Champaign, Lawrence Livermore National Laboratory, and the National Center for Supercomputing Applications. The original objectives paralleled procurement and evaluation efforts for machines such as the Cray-2, the Cray Y-MP, and early massively parallel processors such as those from Thinking Machines Corporation and the Intel Paragon. Over time the suite evolved through collaboration with projects at Los Alamos National Laboratory and funding agencies like the Department of Energy and the National Science Foundation, resulting in multiple revisions targeting emerging architectures exemplified by IBM Blue Gene, Fujitsu PRIMEHPC, and accelerator platforms from NVIDIA Corporation.
The NAS collection is partitioned into problem classes of increasing size and complexity, and contains kernels and pseudo-applications modeled on computational motifs relevant to missions at NASA Ames Research Center and simulations run at centers like Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. The kernels include an embarrassingly parallel random-number workload (EP), a multigrid solver (MG), a conjugate-gradient solver (CG), a 3-D fast Fourier transform (FT), and an integer sort (IS); the pseudo-applications (BT, SP, and LU) solve systems of equations characteristic of computational fluid dynamics codes. These motifs map to numerical algorithms used in projects led by researchers at Stanford University, the University of Cambridge, Princeton University, and the Massachusetts Institute of Technology, and reflect workflows similar to those in climate-science studies connected to NOAA, combustion modeling in programs at Sandia National Laboratories, and astrophysics simulations at Caltech. Vendors such as Cray Research and IBM frequently report NAS results in procurement and marketing comparisons.
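As an illustration of the style of computation involved, the following is a minimal sketch of the kind of inner loop the EP kernel exercises: a linear congruential generator produces pairs of uniform deviates, and an acceptance-rejection step converts them to Gaussian deviates. The generator constants follow the published NPB scheme (multiplier 5^13, modulus 2^46), but the seed handling, loop size, and accumulation here are simplified placeholders, not the reference implementation.

```c
/* EP-style sketch: LCG uniform pairs -> Gaussian deviates via
 * acceptance-rejection (Marsaglia polar method). Loop size and
 * output are illustrative only. Compile with: cc ep_sketch.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    unsigned long long seed = 271828183ULL;     /* illustrative seed */
    const unsigned long long a = 1220703125ULL; /* 5^13, LCG multiplier */
    const unsigned long long m = 1ULL << 46;    /* LCG modulus 2^46 */
    long accepted = 0;
    double sx = 0.0, sy = 0.0;

    for (long i = 0; i < 1000000; i++) {
        /* wraparound mod 2^64 preserves the residue mod 2^46 */
        seed = (a * seed) % m;
        double x = 2.0 * ((double)seed / (double)m) - 1.0;
        seed = (a * seed) % m;
        double y = 2.0 * ((double)seed / (double)m) - 1.0;
        double t = x * x + y * y;
        if (t <= 1.0 && t != 0.0) {             /* accept points in unit circle */
            double f = sqrt(-2.0 * log(t) / t);
            sx += f * x;                        /* Gaussian deviate pair */
            sy += f * y;
            accepted++;
        }
    }
    printf("pairs accepted: %ld, sums: %f %f\n", accepted, sx, sy);
    return 0;
}
```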
The methodology prescribes fixed problem classes and reference implementations to enable reproducible studies across platforms deployed at sites like the National Center for Supercomputing Applications and Argonne National Laboratory. Reported metrics include time to solution and throughput in millions of operations per second, together with the strong- and weak-scaling trends examined in evaluations at conferences such as the Supercomputing Conference and observed across machines like Summit and Sierra. Performance analysis often integrates profiling tools developed at institutions including Lawrence Berkeley National Laboratory and compiler toolchains from Intel Corporation and the GNU Project.
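For concreteness, the speedup and efficiency figures used in such scaling studies reduce to two small formulas, sketched below with invented timings rather than measured NPB results:

```c
/* Toy helper for the scaling metrics discussed above: given a serial
 * baseline time and a parallel time on p processes, report speedup
 * and parallel efficiency. The sample numbers are illustrative. */
#include <stdio.h>

static double speedup(double t_serial, double t_parallel) {
    return t_serial / t_parallel;              /* S(p) = T(1) / T(p) */
}

static double efficiency(double t_serial, double t_parallel, int p) {
    return speedup(t_serial, t_parallel) / p;  /* E(p) = S(p) / p */
}

int main(void) {
    double t1 = 812.0, tp = 29.5;              /* hypothetical timings, s */
    int p = 32;
    printf("speedup %.1fx, efficiency %.0f%%\n",
           speedup(t1, tp), 100.0 * efficiency(t1, tp, p));
    return 0;
}
```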
Implementations have been provided in the languages and programming models used across the HPC ecosystem, including Fortran, C, MPI, OpenMP, and accelerator models promoted by NVIDIA Corporation and AMD. Porting efforts took place at research centers such as Los Alamos National Laboratory, Oak Ridge National Laboratory, and Lawrence Livermore National Laboratory, and at universities like the University of Illinois Urbana-Champaign and the Massachusetts Institute of Technology. Benchmark runs have been executed on historical systems from Cray Research and on modern heterogeneous clusters built by vendors like Dell Technologies and Hewlett-Packard for projects funded by the Department of Energy and the National Science Foundation.
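A minimal sketch of the OpenMP style typical of these ports is shown below; the array, its contents, and the loop bounds are placeholders rather than code from the suite:

```c
/* OpenMP-style sketch: fill a grid array in parallel, then compute a
 * norm with a reduction clause, as NPB OpenMP ports commonly do.
 * Compile with: cc -fopenmp omp_sketch.c -lm */
#include <math.h>
#include <stdio.h>
#include <omp.h>

#define N (1 << 20)

int main(void) {
    static double u[N];
    double norm = 0.0;

    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        u[i] = sin((double)i);        /* placeholder data */

    #pragma omp parallel for reduction(+:norm)
    for (int i = 0; i < N; i++)
        norm += u[i] * u[i];          /* parallel sum of squares */

    printf("threads: %d, L2 norm: %f\n", omp_get_max_threads(), sqrt(norm));
    return 0;
}
```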
Results from NAS runs have been used in comparative studies by researchers at Sandia National Laboratories, Argonne National Laboratory, Lawrence Livermore National Laboratory, and academic groups at Stanford University and Massachusetts Institute of Technology to characterize memory bandwidth, interconnect latency, and solver scalability. Analyses informed design choices in systems such as Blue Gene and influenced interconnect research at vendors like Mellanox Technologies and Intel Corporation. Published performance trends have been presented at venues including the Supercomputing Conference and workshops hosted by NASA and the Department of Energy.
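Interconnect latency of the kind these studies characterize is commonly estimated with an MPI ping-pong loop; the sketch below is a generic example of that technique (message size, repetition count, and output format are arbitrary choices, not part of the NPB):

```c
/* MPI ping-pong sketch: ranks 0 and 1 exchange a 1-byte message
 * repeatedly; half the mean round-trip time approximates one-way
 * latency. Run with at least two ranks, e.g. mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int reps = 1000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double rtt = (MPI_Wtime() - t0) / reps;   /* mean round-trip time */
        printf("one-way latency ~ %.2f us\n", rtt / 2.0 * 1e6);
    }
    MPI_Finalize();
    return 0;
}
```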
The NAS benchmarks have guided procurement and architecture studies at national laboratories including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and Argonne National Laboratory, and have informed software development practices at universities like the Massachusetts Institute of Technology and the University of California, Berkeley. They affected application tuning for computational fluid dynamics projects tied to NASA Ames Research Center, climate modeling relevant to NOAA, and multidisciplinary simulations at Caltech and Princeton University. As an enduring reference in the HPC community, they have shaped benchmarking culture at events organized by the Association for Computing Machinery and the IEEE.
Category:Benchmarks