LLMpedia: The first transparent, open encyclopedia generated by LLMs

Gustafson's law

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Computer architecture (hop 4)
Expansion Funnel: Raw 139 → Dedup 0 → NER 0 → Enqueued 0
Gustafson's law
Name: Gustafson's law
Field: Computer science
Introduced: 1988
Proposer: John L. Gustafson
Formula: S(p) = p − α(p − 1)

Gustafson's law is a principle in parallel computing that predicts scaled speedup as processor counts increase, proposed to address limitations identified by Gene Amdahl and to inform design choices at organizations such as Cray Research, Intel Corporation, IBM, and Sandia National Laboratories. It reframes performance expectations for systems built at Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and the National Center for Supercomputing Applications, and it has influenced architectures from Seymour Cray-era designs through modern NVIDIA accelerators. The law is cited in discussions of technologies such as the Message Passing Interface (MPI), OpenMP, MPI-IO, and CUDA, and in evaluations by teams at Google, Microsoft Research, Amazon Web Services, and Facebook.

Overview

Gustafson's law asserts that as the number of processors increases, problem sizes can be scaled so that the parallelizable portion of the work dominates, a perspective debated alongside Gene Amdahl's claims and examined at venues such as the Supercomputing Conference and ACM SIGARCH gatherings. The law shaped thinking at institutions such as Argonne National Laboratory, Oak Ridge National Laboratory, and Lawrence Berkeley National Laboratory, and influenced procurement by Department of Energy labs and companies such as Hewlett-Packard. It shifted the performance modeling used in projects at Intel Labs, IBM Research, and Cray Inc., and in academic groups at the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, and Carnegie Mellon University.

Mathematical formulation

In contrast to fixed-size speedup, the formulation introduces a scaled speedup, written in many expositions as S(p) = p − α(p − 1), where p is the number of processors and α is the serial fraction of the scaled workload; variants of this form appear in analyses at Los Alamos National Laboratory and Sandia National Laboratories. In practice, researchers at the University of Illinois Urbana-Champaign, the University of Cambridge, ETH Zurich, Princeton University, and the California Institute of Technology use serial-fraction and parallel-fraction parameters similar to those in Amdahl's law papers, while toolchains from the GNU Project, LLVM, Intel Parallel Studio, and PGI Compilers ship microbenchmarks used to estimate the coefficients. The equation appears in performance studies by teams at Siemens, General Electric, Boeing, and Lockheed Martin, and by academic groups at Imperial College London and the University of Tokyo.
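A minimal sketch of the formula in Python (the function name and sample values are illustrative, not drawn from any of the toolchains above):

```python
def gustafson_speedup(p, alpha):
    """Scaled speedup S(p) = p - alpha * (p - 1).

    p: number of processors; alpha: serial fraction measured
    on the scaled (p-processor) problem, 0 <= alpha <= 1.
    """
    return p - alpha * (p - 1)

# With a 5% serial fraction, scaled speedup stays nearly linear in p:
for p in (8, 64, 1024):
    print(p, gustafson_speedup(p, alpha=0.05))
# -> approx. 7.65, 60.85, 972.85
```

Because α is measured on the scaled problem rather than a fixed one, the (1 − α) share of the work grows with p instead of capping the speedup.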

Comparison with Amdahl's law

Gustafson's law is frequently contrasted with Amdahl's law in analyses by researchers at ACM, IEEE, and SIAM, and in textbooks from Addison-Wesley and MIT Press, with debates drawing examples from Cray X-MP, IBM Blue Gene, Intel Xeon Phi, and NVIDIA Tesla systems. Proponents at Lawrence Livermore National Laboratory and critics at Los Alamos National Laboratory have invoked case studies from CFD and finite-element simulations used in engineering at Rolls-Royce, Siemens Energy, and Shell, and in climate modeling at NOAA, NASA, the European Centre for Medium-Range Weather Forecasts, and the Met Office. Comparative studies often reference work by Gene Amdahl, John L. Gustafson, David Patterson, John Hennessy, and Michael Flynn, and cite benchmarks such as LINPACK, HPL, and SPEC.
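The contrast is easy to make concrete: under Amdahl's fixed-size model the speedup is 1/(α + (1 − α)/p) and saturates at 1/α, while Gustafson's scaled model grows almost linearly. A comparison sketch in Python (the parameter values are illustrative):

```python
def amdahl(p, alpha):
    # Fixed problem size: speedup is capped at 1/alpha.
    return 1.0 / (alpha + (1.0 - alpha) / p)

def gustafson(p, alpha):
    # Problem size scaled with p: speedup keeps growing.
    return p - alpha * (p - 1)

alpha = 0.05  # 5% serial fraction
for p in (8, 64, 1024):
    print(f"p={p:5d}  Amdahl={amdahl(p, alpha):6.2f}  "
          f"Gustafson={gustafson(p, alpha):8.2f}")
# Amdahl converges toward 20x; Gustafson reaches roughly 973x at p=1024.
```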

Practical implications and applications

Engineers at the NASA Jet Propulsion Laboratory, the European Space Agency, Toyota, Volkswagen, and Ford Motor Company, and researchers at the National Institutes of Health, apply Gustafson-style scaling to large-scale tasks such as computational fluid dynamics, molecular dynamics, climate simulation, and seismic imaging. Software ecosystems including Open MPI, MPICH, Intel MPI, OpenMP, CUDA, and HIP implement parallelism patterns that assume scalable workloads, as promoted by Gustafson's perspective, influencing procurement at HPC centers such as NERSC, Jülich Research Centre, Rutherford Appleton Laboratory, and Pawsey Supercomputing Centre, and at corporate clouds such as Google Cloud Platform, Microsoft Azure, and Amazon EC2. It also informs algorithm design in initiatives at CERN and the Large Hadron Collider, in the Human Genome Project, and in machine learning frameworks such as TensorFlow, PyTorch, and MXNet.
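The "scalable workload" assumption corresponds to what HPC practitioners call weak scaling: per-processor work stays fixed while the total problem grows with the processor count. A minimal weak-scaling experiment sketched with Python's standard multiprocessing module (the kernel and work sizes are placeholders, not taken from the libraries named above):

```python
import time
from multiprocessing import Pool

def kernel(n):
    # Placeholder compute kernel: a fixed amount of arithmetic per worker.
    s = 0.0
    for i in range(n):
        s += i * 0.5
    return s

def weak_scaling_time(workers, work_per_worker=2_000_000):
    # Gustafson-style scaling: total work = workers * work_per_worker,
    # so each worker always processes the same-sized chunk.
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(kernel, [work_per_worker] * workers)
    return time.perf_counter() - t0

if __name__ == "__main__":
    baseline = weak_scaling_time(1)
    for p in (1, 2, 4):
        t = weak_scaling_time(p)
        # Ideal weak scaling keeps runtime flat: efficiency near 1.0.
        print(f"{p} workers: {t:.2f}s, efficiency {baseline / t:.2f}")
```

Flat runtimes as workers increase indicate the near-linear scaled speedup Gustafson's law predicts; rising runtimes expose serial or contention costs.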

Limitations and criticisms

Critics at Los Alamos National Laboratory, Oak Ridge National Laboratory, the University of California, San Diego, and the University of Oxford note that Gustafson's assumptions can fail for workloads constrained by memory bandwidth, by I/O subsystems built by firms such as Seagate Technology and Western Digital, and by interconnects from Mellanox Technologies or Intel Omni-Path. Real-world limits discussed at the SC Conference, IEEE Cluster Conference, and Euro-Par, and in journals such as ACM Transactions on Computer Systems and IEEE Transactions on Parallel and Distributed Systems, include Amdahl-style serial bottlenecks, synchronization overheads studied by teams at Google DeepMind and OpenAI, and economic limits weighed by procurement offices at DARPA and the European Commission. Empirical counterexamples appear in studies from the University of Illinois, the University of Texas at Austin, the University of Washington, and the KTH Royal Institute of Technology.
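One way to see these criticisms quantitatively is a toy extension of the formula with an overhead term that grows with processor count; here β is a hypothetical per-processor synchronization or communication cost, not a parameter from the cited studies:

```python
def scaled_speedup_with_overhead(p, alpha, beta):
    """Gustafson's scaled speedup eroded by a toy overhead model.

    alpha: serial fraction; beta: hypothetical per-processor overhead,
    expressed as a fraction of the scaled single-processor runtime.
    """
    ideal = p - alpha * (p - 1)   # classic Gustafson scaled speedup
    overhead = 1.0 + beta * p     # sync/communication cost grows with p
    return ideal / overhead

for p in (16, 256, 4096):
    print(p, round(scaled_speedup_with_overhead(p, alpha=0.05, beta=0.001), 1))
# -> 15.0, 193.7, 763.6: even a 0.1% per-processor cost eventually dominates.
```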

Historical background and development

John L. Gustafson proposed the idea in 1988 while at Sandia National Laboratories, presenting results alongside contemporaneous work by Gene Amdahl and later commentary from David H. Bailey, Richard Hamming, Edgar F. Codd, and others at venues such as SIGPLAN and SIGOPS. The debate influenced procurement and architecture at Cray Research, IBM, and later Hewlett Packard Enterprise, and informed curricula at the Massachusetts Institute of Technology and Stanford University. Subsequent developments involved collaborations across DOE labs, centers such as NERSC, and international efforts at RIKEN and Fujitsu.

Examples and case studies

Case studies demonstrating scaled speedup come from simulations at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Oak Ridge National Laboratory, and from projects such as the Weather Research and Forecasting model, GROMACS molecular dynamics, LAMMPS, and CESM climate modeling. Industry examples include scaling studies by Google, Facebook, and Microsoft Research, and system evaluations for HPC installations at Argonne National Laboratory and NERSC; benchmarks often reference LINPACK results reported for supercomputers such as Summit, Fugaku, Titan, and Sequoia, and cloud-scale analyses from Amazon Web Services and Google Cloud Platform.

Category:Computer science laws