| HPCG benchmark | |
|---|---|
| Name | HPCG (High Performance Conjugate Gradients) benchmark |
| Developer | Sandia National Laboratories; University of Tennessee (Jack Dongarra, Michael Heroux, Piotr Luszczek) |
| Released | 2014 |
| Genre | Benchmarking software |
HPCG benchmark
The HPCG (High Performance Conjugate Gradients) benchmark is a performance assessment workload designed to complement the TOP500 list by measuring computational characteristics relevant to large-scale scientific applications. It emphasizes sparse linear algebra and memory-bound operations to reflect the behavior of codes run at laboratories such as Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, Argonne National Laboratory, and Sandia National Laboratories, and at supercomputing centers like the Oak Ridge Leadership Computing Facility and the Argonne Leadership Computing Facility. The benchmark informs procurement, system design, and algorithmic research affecting projects at institutions including the National Energy Research Scientific Computing Center, the European Centre for Medium-Range Weather Forecasts, and Brookhaven National Laboratory.
HPCG was introduced to provide a complementary view to the dense linear algebra benchmarks used by the TOP500 and Linpack communities, targeting communication patterns and memory hierarchies relevant to applications run on systems built by vendors such as Cray, IBM, Intel, NVIDIA, AMD, and HPE. The workload models the behavior of solvers and preconditioners employed in codes at organizations like CERN, Los Alamos National Laboratory, and NASA, and in research groups at universities such as the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, and the University of Illinois Urbana-Champaign. Results are often compared alongside metrics from centers like the Texas Advanced Computing Center and the Pawsey Supercomputing Centre.
Development began in the early 2010s, led by researchers at Sandia National Laboratories and the University of Tennessee, with contributions from other national laboratories and industry partners including Intel and Cray. The effort was motivated by observations at procurement panels and workshops held by Department of Energy offices, advisory bodies such as the Biological and Environmental Research panels, and committees including members from the National Science Foundation. Early demonstrations were presented at conferences such as the SC Conference and the International Supercomputing Conference, with involvement from projects at Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory.
HPCG implements a preconditioned conjugate gradient solver with a multigrid preconditioner and sparse matrix-vector multiplication, assembling its linear system from a 27-point stencil discretization on a regular three-dimensional grid. These kernels exercise the memory bandwidth, irregular communication, and latency seen in production codes used at Los Alamos National Laboratory and Fermilab and in computational campaigns at NASA Ames Research Center. The primary metric reported is a floating-point rate in Gflop/s reflecting end-to-end solver performance, which is interpreted alongside system-level characteristics from suppliers such as Intel and NVIDIA. Test inputs mirror discretizations used by applications at institutions like the European Centre for Medium-Range Weather Forecasts and Lawrence Livermore National Laboratory, stressing off-node communication and cache utilization. Quality assurance and reproducibility practices have been influenced by standards adopted by the OpenMP Architecture Review Board and the MPI Forum.
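To make the workload concrete, the following is a minimal, self-contained sketch, not the HPCG reference code, of an unpreconditioned conjugate gradient loop over a matrix stored in compressed sparse row (CSR) format. The 1-D Laplacian test matrix, problem size, and stopping tolerance are illustrative choices only, but the pattern of sparse matrix-vector products and dot products shown here is the one that dominates HPCG's runtime and makes it memory-bandwidth bound.

```cpp
// Minimal sketch (not HPCG source): a few conjugate gradient iterations on a
// 1-D Laplacian in CSR format, illustrating the sparse matrix-vector products
// and dot products that HPCG's core loop relies on.
#include <cstdio>
#include <cmath>
#include <vector>

struct CSRMatrix {
    int n;
    std::vector<int> row_ptr, col_idx;
    std::vector<double> val;
};

// y = A * x : the indirect, memory-bound access pattern HPCG emphasizes.
void spmv(const CSRMatrix& A, const std::vector<double>& x, std::vector<double>& y) {
    for (int i = 0; i < A.n; ++i) {
        double sum = 0.0;
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            sum += A.val[k] * x[A.col_idx[k]];
        y[i] = sum;
    }
}

double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

int main() {
    const int n = 64;                       // tiny toy problem; HPCG uses a 3-D 27-point stencil
    CSRMatrix A{n, {0}, {}, {}};
    for (int i = 0; i < n; ++i) {           // assemble the tridiagonal [-1, 2, -1] matrix
        if (i > 0)     { A.col_idx.push_back(i - 1); A.val.push_back(-1.0); }
        A.col_idx.push_back(i); A.val.push_back(2.0);
        if (i < n - 1) { A.col_idx.push_back(i + 1); A.val.push_back(-1.0); }
        A.row_ptr.push_back(static_cast<int>(A.col_idx.size()));
    }

    std::vector<double> b(n, 1.0), x(n, 0.0), r = b, p = r, Ap(n);
    double rs_old = dot(r, r);
    for (int it = 0; it < 50 && std::sqrt(rs_old) > 1e-10; ++it) {
        spmv(A, p, Ap);                                    // sparse matrix-vector product
        double alpha = rs_old / dot(p, Ap);
        for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rs_new = dot(r, r);
        for (int i = 0; i < n; ++i) p[i] = r[i] + (rs_new / rs_old) * p[i];
        rs_old = rs_new;
        std::printf("iter %2d  residual %.3e\n", it, std::sqrt(rs_old));
    }
    return 0;
}
```

Compiled with any standard C++ compiler and run, this prints the residual per iteration; the real benchmark adds a multigrid preconditioner, the 3-D 27-point stencil problem, and MPI/OpenMP parallelism on top of this core loop.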
Reference implementations of HPCG are available in the languages and parallel programming models used by research centers and vendors, including versions that use MPI for distributed-memory parallelism and OpenMP for shared-memory parallelism, with accelerator support via toolchains from NVIDIA and AMD. Builds and portability practices draw on software ecosystems hosted on GitHub, collaborations with compiler projects and vendors such as GCC and the Intel compilers, and performance tools from Cray and HPE. Integrations and deployments have been demonstrated on systems at the Oak Ridge Leadership Computing Facility and the Argonne Leadership Computing Facility, and on cloud platforms offered by providers like Amazon Web Services for benchmarking studies.
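As an illustration of the distributed-memory side mentioned above, the sketch below is an assumption-laden toy, not HPCG source: each rank owns a slab of a one-dimensional grid and exchanges one-element halos with its neighbours via MPI_Sendrecv before a local stencil update. The rank layout, slab size, and tags are arbitrary, but the neighbour-exchange-then-local-compute structure mirrors the communication pattern that HPCG's distributed sparse kernels rely on.

```cpp
// Minimal halo-exchange sketch (not HPCG source): neighbour communication
// followed by a rank-local stencil update, the pattern behind HPCG's
// distributed sparse matrix-vector product.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int local_n = 8;                       // interior points owned by this rank
    std::vector<double> u(local_n + 2, 0.0);     // one ghost cell at each end
    for (int i = 1; i <= local_n; ++i) u[i] = rank + 1.0;

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // Halo exchange: send boundary values to neighbours, receive into ghost cells.
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                 &u[local_n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[local_n],     1, MPI_DOUBLE, right, 1,
                 &u[0],           1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Local stencil update: a stand-in for the rank-local sparse matrix-vector product.
    std::vector<double> v(local_n);
    for (int i = 1; i <= local_n; ++i)
        v[i - 1] = -u[i - 1] + 2.0 * u[i] - u[i + 1];

    std::printf("rank %d: ghost left %.1f, ghost right %.1f, first update %.1f\n",
                rank, u[0], u[local_n + 1], v[0]);
    MPI_Finalize();
    return 0;
}
```

With a typical MPI installation this can be compiled with `mpicxx` and launched with, for example, `mpirun -np 4`; HPCG's production exchange involves up to 26 neighbours per rank rather than two.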
HPCG results are published alongside the TOP500 lists to provide a contrasting view to Linpack-measured peak performance on machines from IBM, Cray, HPE, Fujitsu, and others. Reported metrics often show a large gap between Linpack rates and HPCG rates, with HPCG typically achieving only a few percent of the Linpack figure, prompting analyses by research teams at Sandia National Laboratories and Lawrence Livermore National Laboratory and by academic groups at the University of Cambridge and ETH Zurich. Comparative studies appear in the proceedings of the SC Conference and in journals read by groups at the California Institute of Technology and Princeton University, influencing procurement decisions at national labs such as Brookhaven National Laboratory.
HPCG influences system procurement, architecture design, and software optimization at supercomputing centers such as Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory, and informs algorithmic choices in simulation projects at NASA, CERN, and the European Centre for Medium-Range Weather Forecasts. Vendors use HPCG to guide microarchitecture tuning and memory-subsystem improvements for products from Intel, NVIDIA, AMD, and Fujitsu. The benchmark has prompted improvements in libraries and solvers maintained by projects such as Netlib, Trilinos, and PETSc, which are used across academic institutions like the University of California, San Diego and the University of Texas at Austin.
Critiques from researchers at the National Energy Research Scientific Computing Center, Argonne National Laboratory, and universities such as the Massachusetts Institute of Technology note that HPCG represents a specific class of sparse, memory-bound workloads: it captures neither the dense linear algebra behavior emphasized by Linpack nor the I/O-bound patterns relevant to workflows at the European Centre for Medium-Range Weather Forecasts or data centers run by Google and Facebook. Industry observers from Intel and NVIDIA caution against over-reliance on a single metric for procurement, advocating combined evaluation with benchmarks maintained by communities such as SPEC and with suites used in benchmarking studies at the SC Conference. Other limitations cited by researchers at Sandia National Laboratories and Lawrence Livermore National Laboratory include sensitivity to implementation details in libraries like PETSc and in solver stacks used in projects at Los Alamos National Laboratory.
Category:Benchmarks