| HPCG | |
|---|---|
| Name | HPCG |
| Developer | Michael Heroux (Sandia National Laboratories); Jack Dongarra and Piotr Luszczek (University of Tennessee) |
| Initial release | 2014 |
| Latest release | 3.1 (2019) |
| Programming languages | C++ (reference implementation); parallelized with MPI and OpenMP |
| Operating system | Cross-platform; primarily Linux-based HPC systems |
| License | Open-source (BSD-style) |
HPCG (High Performance Conjugate Gradients) is a high-performance computing benchmark designed to exercise the memory-access, communication, and computation patterns representative of modern scientific applications. Created by Michael Heroux of Sandia National Laboratories together with Jack Dongarra and Piotr Luszczek of the University of Tennessee, it complements traditional dense linear algebra benchmarks by targeting sparse matrix computations and irregular communication. The benchmark influences procurement, system design, and research agendas at national laboratories, research universities, and commercial vendors.
HPCG was created to reflect workloads similar to those found in codes from Argonne National Laboratory, the National Renewable Energy Laboratory, Pacific Northwest National Laboratory, Los Alamos National Laboratory, and research groups at the Massachusetts Institute of Technology and Stanford University. The benchmark implements a conjugate gradient algorithm with multigrid preconditioning inspired by solvers used in projects at the European Organization for Nuclear Research (CERN) and at climate modeling centers run by NOAA and NASA. It targets system characteristics emphasized in procurements at Department of Energy facilities and in rankings maintained by the TOP500 project and related initiatives. Its developers intended HPCG to complement metrics from benchmarks linked to the work of teams at IBM, Intel Corporation, NVIDIA Corporation, Hewlett Packard Enterprise, and Cray Inc.
HPCG implements a preconditioned conjugate gradient (PCG) method using multigrid-like operations informed by research from SIAM workshops and computational mathematics groups at University of California, Berkeley, Princeton University, University of Texas at Austin, and University of Illinois Urbana–Champaign. The reference implementation uses a 27-point stencil on structured grids, sparse matrix–vector products, local smoothing operations, global reductions, and prolongation/restriction operators resembling multigrid cycles studied at Courant Institute, ETH Zurich, École Polytechnique Fédérale de Lausanne, and Technical University of Munich. The codebase includes parallelization via MPI and shared-memory threading via OpenMP, and it has been ported to accelerators by teams at Oak Ridge National Laboratory with support from NVIDIA Corporation and research groups at Lawrence Berkeley National Laboratory exploring CUDA and OpenACC variants.
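The kernel structure described above can be sketched in a few lines. The following is an illustrative reduction, not HPCG code: it replaces the 3D 27-point stencil with a 1D Laplacian stand-in and omits the multigrid preconditioner, MPI halo exchange, and OpenMP threading.

```python
# Minimal sketch of the computational pattern HPCG exercises:
# a conjugate-gradient solve built from a sparse matrix-vector
# product, dot products, and vector updates.

def spmv(x):
    """y = A @ x for a 1D Laplacian (2 on the diagonal, -1 off it),
    a small stand-in for HPCG's 27-point stencil operator."""
    n = len(x)
    return [2.0 * x[i]
            - (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cg(b, tol=1e-10, max_iter=500):
    """Unpreconditioned conjugate gradient for A x = b, x0 = 0."""
    n = len(b)
    x = [0.0] * n
    r = list(b)            # residual r = b - A x (x is zero initially)
    p = list(r)            # search direction
    rr = dot(r, r)
    for _ in range(max_iter):
        if rr ** 0.5 < tol:
            break
        Ap = spmv(p)
        alpha = rr / dot(p, Ap)   # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        beta = rr_new / rr        # conjugacy correction
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

b = [1.0] * 50
x = cg(b)
residual = max(abs(ax - bi) for ax, bi in zip(spmv(x), b))
print(f"max residual: {residual:.2e}")
```

The real benchmark interleaves these same kernels with symmetric Gauss-Seidel smoothing and coarse-grid transfers, which is what makes its performance profile memory- and latency-bound rather than compute-bound.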
HPCG reports performance in GFLOP/s (billions of floating-point operations per second) for runs that satisfy defined problem-size and run-time constraints, with validation and reproducibility checks built into the reference harness and submission rules coordinated with the TOP500 project and staff at Argonne National Laboratory. The methodology requires reproducible initial conditions and mandates convergence checks similar to those used in projects at Los Alamos National Laboratory and Sandia National Laboratories. Measurement captures time spent in the sparse matrix–vector multiply, vector updates, inner products, and multigrid-like grid transfers, reflecting concerns raised in workshops at ACM and IEEE conferences. Results are documented and sometimes submitted by centers such as Oak Ridge National Laboratory (for systems such as Summit) and Lawrence Livermore National Laboratory when reporting to procurement panels and academic review boards at institutions like the University of Cambridge and the University of Oxford.
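The flop accounting behind a reported GFLOP/s figure can be illustrated with a toy calculation. The per-kernel formulas below follow standard counting conventions (2 flops per matrix nonzero for the sparse matrix–vector product, 2 per vector element for dot products and updates); the problem size, iteration count, and wall time are hypothetical, not HPCG's official accounting model or a real measurement.

```python
# Toy illustration of converting kernel operation counts and wall
# time into a GFLOP/s figure, in the spirit of HPCG's methodology.
# All sizes and timings below are hypothetical.

n = 1_000_000          # local matrix rows (hypothetical)
nnz = 27 * n           # ~27 nonzeros per row for a 27-point stencil
iterations = 50        # CG iterations timed (hypothetical)

flops_per_iter = (
    2 * nnz            # sparse matrix-vector multiply: 2 flops/nonzero
    + 2 * (2 * n)      # two dot products: 2 flops/element each
    + 3 * (2 * n)      # three AXPY-style vector updates: 2 flops/element
)
total_flops = iterations * flops_per_iter

elapsed_seconds = 2.5  # hypothetical measured wall time
gflops = total_flops / elapsed_seconds / 1e9
print(f"{gflops:.2f} GFLOP/s")
```

Because every flop in the SpMV drags at least one matrix entry through memory, the resulting rate is typically a small fraction of a machine's dense-kernel peak, which is precisely the gap HPCG is designed to expose.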
Unlike the dense linear algebra focus of LINPACK, historically associated with Jack Dongarra and the Netlib community at the University of Tennessee, HPCG emphasizes memory bandwidth and network latency, challenges studied by researchers at Los Alamos National Laboratory and Sandia National Laboratories. LINPACK (and the modern HPL implementation used by TOP500) measures peak floating-point throughput on dense matrix solves, benefiting vendors such as IBM and Intel Corporation that optimize dense kernels; HPCG targets the sparse solvers prevalent in codes such as ANSYS and Abaqus and in scientific packages developed at Argonne National Laboratory and Lawrence Berkeley National Laboratory. Other benchmarks such as STREAM (created by John McCalpin), Graph500, and the SPEC MPI suites complement HPCG by measuring memory bandwidth, graph traversal, and whole-application mixes; procurement committees at Department of Energy laboratories often consider suites combining these benchmarks.
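A back-of-envelope arithmetic-intensity comparison illustrates why HPCG stresses memory bandwidth while HPL stresses peak floating-point throughput. The byte counts below are rough assumptions (8-byte values, 4-byte column indices, an idealized dense-multiply data volume), not measured figures.

```python
# Rough arithmetic intensity (flops per byte moved) of the kernels
# behind HPCG (sparse matrix-vector product) versus HPL (dense solve,
# dominated by matrix-matrix multiply). Estimates only.

# SpMV: 2 flops per nonzero; each nonzero needs at least an 8-byte
# value and a 4-byte column index from memory.
spmv_intensity = 2 / (8 + 4)

# Dense n x n matrix multiply: 2*n^3 flops over roughly 3*n^2
# 8-byte operands, so intensity grows linearly with n.
n = 1000
gemm_intensity = (2 * n**3) / (3 * n**2 * 8)

print(f"SpMV: ~{spmv_intensity:.2f} flop/byte; "
      f"GEMM (n={n}): ~{gemm_intensity:.1f} flop/byte")
```

At well under one flop per byte, SpMV performance is capped by memory bandwidth on essentially all modern systems, whereas the dense kernel's intensity lets HPL approach peak flops; this is the structural reason the two benchmarks rank machines differently.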
HPCG results are published alongside LINPACK in TOP500 listings for richer assessments and inform procurement decisions at Department of Energy facilities, national research centers such as the National Energy Research Scientific Computing Center and the Argonne Leadership Computing Facility, and universities such as the University of Chicago and Cornell University. System vendors including Hewlett Packard Enterprise, Dell Technologies, and Lenovo tune interconnects (such as InfiniBand hardware originally developed by Mellanox Technologies) and memory hierarchies to improve HPCG scores when bidding on contracts from U.S. Department of Defense laboratories and on European Commission-funded HPC procurements. Procurement panels often request both HPL and HPCG figures from bidders to evaluate expected performance on codes used by groups at CERN, NASA, and earth system modeling centers.
Critics from academic groups at the Massachusetts Institute of Technology, Stanford University, and industrial research labs argue that HPCG, while more representative than LINPACK for some applications, still does not capture the heterogeneity seen in production workflows at Google Research and Amazon Web Services or in multi-physics codes used by Sandia National Laboratories. Comments published at SC Conference panels and in white papers by Cray Inc. engineers note that HPCG's fixed stencil and low arithmetic intensity can be gamed by vendor-specific optimizations, echoing earlier debates over LINPACK's relevance raised by figures such as Jack Dongarra and reported in outlets such as IEEE Spectrum. Others, at institutions including the University of Edinburgh and the Technical University of Denmark, emphasize that full-application benchmarks or mini-apps, such as those developed by collaborations involving Argonne National Laboratory and Lawrence Berkeley National Laboratory, are necessary for procurement decisions. Despite these critiques, HPCG remains a useful component of the multi-metric evaluation frameworks used by national laboratories, universities, and vendors.
Category:Benchmarks