| TOP500 | |
|---|---|
| TOP500 logo · Public domain | |
| Name | TOP500 |
| Caption | List of the world's fastest supercomputers |
| Launched | 1993 |
| Current status | Active |
TOP500 is a project, updated twice a year (in June and November), that ranks the 500 most powerful supercomputers in the world using a standardized benchmark. The list is widely cited by laboratories and centers such as Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Argonne National Laboratory, the European Centre for Medium-Range Weather Forecasts, and the National Energy Research Scientific Computing Center to track performance trends, and vendors including IBM, Intel, NVIDIA, Cray, and Fujitsu routinely use it to compare systems built for institutions such as CERN, NASA, the Tokyo Institute of Technology, the University of Tokyo, and Los Alamos National Laboratory.
The project was established in 1993 by Hans Meuer's group at the University of Mannheim together with Jack Dongarra of the University of Tennessee, with contributions from researchers at institutions including the Darmstadt University of Technology, Forschungszentrum Jülich, and the High Performance Computing Center Stuttgart. Early lists reflected competition among systems from Cray Research, Fujitsu, Hitachi, and NEC deployed at centers including Oak Ridge National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory, and Purdue University. Major milestones include the shift from vector processors to massively parallel processors with systems such as IBM's Blue Gene, and later the adoption of NVIDIA GPU accelerators and Intel Xeon Phi coprocessors, exemplified by machines such as Sequoia, the K computer, and Summit. New editions of the list are traditionally announced at the International Supercomputing Conference in June and at the SC Conference in November.
Ranking is based primarily on the High-Performance LINPACK (HPL) benchmark, which grew out of the LINPACK library developed by Jack Dongarra's group at the University of Tennessee, with reference implementations distributed through Netlib and arithmetic following IEEE floating-point standards. Submissions disclose detailed hardware and software configurations, including processors from Intel, AMD, and Arm licensees, and interconnect technologies from vendors such as Mellanox and Cray; ranked systems today almost universally run Linux, whether a commercial distribution such as Red Hat or a custom kernel maintained by the operating center, as on RIKEN's machines. The methodology records sustained floating-point performance (Rmax) alongside theoretical peak performance (Rpeak), historically reported in gigaflops and now in petaflops, and submitted results are reviewed by the TOP500 editors before publication.
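The core of the measurement can be illustrated with a toy version of the HPL calculation: time a dense LU-based solve of Ax = b and convert the elapsed time into a flop rate using HPL's standard operation count of 2/3·n³ + 2·n². This is a minimal NumPy sketch of the idea, not the real HPL harness, and the problem size here is far below what any ranked system would run:

```python
import time
import numpy as np

def linpack_gflops(n: int = 2000, seed: int = 0) -> float:
    """Time a dense solve of Ax = b and convert to GFLOP/s.

    Uses HPL's operation count for an LU-based solve:
    2/3 * n^3 + 2 * n^2 floating-point operations.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(a, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - t0

    # Sanity-check the residual before trusting the timing, as HPL does.
    assert np.linalg.norm(a @ x - b) / np.linalg.norm(b) < 1e-6

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9

print(f"{linpack_gflops():.1f} GFLOP/s")
```

Real HPL additionally tunes block sizes and process grids across thousands of nodes; the residual check mirrors HPL's requirement that a submitted run actually produce a valid solution.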
The list chronicles successive record holders, including machines at Oak Ridge National Laboratory such as Summit, Fujitsu's K computer at RIKEN, and IBM-built systems such as Sequoia. The records reflect technological shifts from vector systems like the Cray-1 to massively parallel clusters built from Intel Xeon CPUs, accelerator-based systems using NVIDIA Tesla GPUs, and specialized architectures such as Fugaku, built on the Arm architecture. National programs, including those of the United States Department of Energy, the Japan Science and Technology Agency, the National Natural Science Foundation of China, and the European Union's Horizon 2020, have driven the rankings, and milestones are often announced alongside procurement programs at agencies such as DARPA and large-scale projects at laboratories such as CERN.
The list has influenced procurement decisions at institutions including the National Oceanic and Atmospheric Administration, the European Molecular Biology Laboratory, the Max Planck Society, and Lawrence Berkeley National Laboratory, but it has drawn criticism from researchers at universities such as Stanford, MIT, Imperial College London, and ETH Zurich for overreliance on the LINPACK benchmark. Critics cite distortions similar to those debated around the Forbes and Times Higher Education rankings, and argue for broader evaluation using benchmarks such as HPCG, application-based suites from SPEC, and the energy-efficiency measures highlighted by the companion Green500 list. Industry responses include alternative metrics promoted by NVIDIA, Intel, and consortia involving Hewlett Packard Enterprise.
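The Green500's headline metric is straightforward to derive from quantities the TOP500 already publishes: sustained performance (Rmax) divided by total power draw. A minimal sketch, using made-up example numbers rather than any real submission:

```python
def gflops_per_watt(rmax_pflops: float, power_mw: float) -> float:
    """Green500-style efficiency: sustained GFLOPS per watt of power."""
    gflops = rmax_pflops * 1e6  # 1 PFLOPS = 1e6 GFLOPS
    watts = power_mw * 1e6      # 1 MW = 1e6 W
    return gflops / watts

# A hypothetical 200 PFLOPS system drawing 10 MW:
print(gflops_per_watt(200.0, 10.0))  # prints 20.0 (GFLOPS/W)
```

The unit conversions cancel, so the ratio reduces to PFLOPS per MW; keeping them explicit makes it harder to mix up the scales when reading real list entries.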
Notable entries include Fugaku, developed by Fujitsu and deployed at RIKEN; Summit and Frontier at Oak Ridge National Laboratory; the K computer at RIKEN; and historically significant machines such as Sequoia, Roadrunner at Los Alamos National Laboratory, and Blue Gene/L at Lawrence Livermore National Laboratory. Academic and research installations at Lawrence Livermore National Laboratory, Argonne National Laboratory, Lawrence Berkeley National Laboratory, the Swiss National Supercomputing Centre, and the National Supercomputing Centre Singapore appear regularly, with systems optimized for workloads ranging from numerical weather prediction at the European Centre for Medium-Range Weather Forecasts to genome-scale bioinformatics and large-scale simulations for agencies such as NASA and the European Space Agency.
The project publishes machine lists and historical datasets used by analysts at Gartner, IDC, and Bloomberg and by journalists at *Nature* and *Science*; visualization tools and downloadable data support integration with platforms such as Tableau and Grafana, as well as custom dashboards developed by computing centers including NERSC and PRACE. Researchers and policymakers at UNESCO, the OECD, and national laboratories use the data for trend analysis, while third-party projects hosted on GitHub and archived by the Internet Archive provide community-driven visualizations and code.
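Because each edition of the list is distributed as a downloadable table, the trend analyses described above need nothing more than standard CSV tooling. A minimal sketch, using an inlined three-row excerpt with invented column names (the real downloads use different headers and many more fields):

```python
import csv
import io

# Inlined excerpt for illustration; column names are invented and the
# figures are rounded, not an authoritative copy of any published list.
data = """rank,name,site,rmax_tflops
1,Frontier,Oak Ridge National Laboratory,1194000
2,Fugaku,RIKEN,442010
3,LUMI,CSC (Finland),309100
"""

rows = list(csv.DictReader(io.StringIO(data)))
total_tflops = sum(float(r["rmax_tflops"]) for r in rows)
print(f"aggregate Rmax of top {len(rows)}: {total_tflops / 1e6:.2f} EFLOPS")
```

The same pattern scales to full historical editions, which is essentially what the community dashboards built on the published datasets do.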
Category:Supercomputing
Category:Computer performance