LLMpedia: The first transparent, open encyclopedia generated by LLMs

LINPACK benchmarks

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Titan (supercomputer), hop 4
Expansion funnel: 53 extracted → 21 after dedup → 10 after NER → 10 enqueued (11 rejected: not a named entity)
LINPACK benchmarks
Name: LINPACK benchmarks
Author: Jack Dongarra
Developer: University of Tennessee, Oak Ridge National Laboratory
Released: 1979
Genre: Benchmark (computing)
License: BSD licenses

The **LINPACK benchmarks** are a series of standardized tests designed to measure the floating-point performance of computer systems by solving dense systems of linear equations. Originally developed in the late 1970s, they became the de facto standard for ranking the world's most powerful supercomputers, most notably through the TOP500 list. While historically central to high-performance computing, the benchmarks have also faced criticism for not fully representing modern computational workloads.

Overview

The core task involves solving a dense system of linear equations, *Ax = b*, using Gaussian elimination with partial pivoting, a fundamental algorithm in numerical linear algebra. Performance is measured in floating-point operations per second (FLOPS), providing a clear, single-number metric for comparison. The benchmark's operations are derived from the LINPACK library, a collection of Fortran subroutines for linear algebra. This focus made it a useful tool for evaluating the raw computational power of systems from Cray Research supercomputers to modern massively parallel architectures.

History and development

The benchmark was created in 1979 by Jack Dongarra, then at Argonne National Laboratory, to provide a consistent measure for comparing different supercomputers. It was based on the widely used LINPACK software library, which itself was an evolution of earlier work from projects like EISPACK. The publication of the first TOP500 list in 1993, curated by Dongarra, Hans Meuer, and Erich Strohmaier, cemented its global prominence. This list, released twice a year at the International Supercomputing Conference, used the benchmark's results to rank systems from vendors like Fujitsu, IBM, and Cray Inc.

Benchmark variants

Three main variants exist, each with different constraints. The "100x100" benchmark solves a small, fixed-size problem, historically used for comparing classic vector processors like those from Control Data Corporation (CDC). The "1000x1000" benchmark allows for larger problem sizes and optimized code, providing a measure for systems like the Earth Simulator. The most influential is the **HPL** (High-Performance Linpack) benchmark, which allows solving the largest problem size that fits in memory to maximize performance on parallel systems, a rule central to the TOP500 competition. A newer variant, **HPL-AI**, has been introduced to benchmark mixed-precision performance relevant to artificial intelligence workloads.

Implementation and usage

Implementing the benchmark, especially HPL, requires highly tuned software that leverages optimized BLAS (Basic Linear Algebra Subprograms) libraries, such as those from OpenBLAS, Intel Math Kernel Library, or AMD BLIS. System administrators must configure parameters like grid topology and block size to maximize performance on their specific architecture, whether it uses processors from Intel Corporation or Advanced Micro Devices and accelerators from NVIDIA or Intel. The results are submitted to the TOP500 organization for verification and inclusion in the list, a process overseen by researchers at the University of Tennessee.
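The tuning parameters mentioned above are set in HPL's input file, `HPL.dat`. The excerpt below follows the file's fixed line-per-parameter format; the specific values (problem size N, block size NB, and the P × Q process grid) are illustrative placeholders, not recommendations for any particular machine:

```text
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
100000       Ns
1            # of NBs
192          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
8            Ps
16           Qs
```

As a rule of thumb, N is chosen so the matrix fills most of available memory, NB to match the optimized BLAS kernel's blocking, and P × Q to match the machine's node and network layout; small changes to any of these can shift measured performance noticeably.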

Performance metrics and records

Performance is reported in FLOPS, with modern results measured in petaFLOPS and exaFLOPS. The race for the top position has driven records set by systems like Fugaku (developed by RIKEN and Fujitsu), Summit at Oak Ridge National Laboratory, and Frontier. Achieving peak performance often involves extensive use of GPU accelerators and custom interconnects like InfiniBand. The benchmark's efficiency ratio, comparing achieved (Rmax) to theoretical peak (Rpeak) FLOPS, is a key metric studied by researchers and funding agencies such as the National Science Foundation.
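The efficiency ratio is a simple quotient, sketched below. The Rmax and Rpeak figures used in the example are illustrative placeholders, not official TOP500 results for any system:

```python
def hpl_efficiency(rmax_pflops: float, rpeak_pflops: float) -> float:
    """Achieved (Rmax) divided by theoretical peak (Rpeak) performance."""
    return rmax_pflops / rpeak_pflops

# Hypothetical system: 1200 PFLOPS achieved out of a 1700 PFLOPS peak.
print(f"{hpl_efficiency(1200.0, 1700.0):.1%}")
```

Efficiency well below 100% is normal: HPL stresses memory, communication, and load balance as well as raw arithmetic, so the gap between Rmax and Rpeak says a great deal about how well a machine's components are matched.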

Impact and criticism

The benchmark's primary impact was establishing a standardized, long-term performance tracking method for the supercomputer industry, influencing procurement decisions by agencies like the United States Department of Energy and the European Commission. However, significant criticism argues that its narrow focus on dense linear algebra does not reflect the diverse workloads of modern computing, such as data analytics, climate modeling, or machine learning. This led to the development of alternative benchmarks like the HPCG benchmark and the Graph500 list. Despite these critiques, it remains a historic and influential metric in the field of high-performance computing.

Category:Computer benchmarks Category:Supercomputing Category:Numerical linear algebra