| Supercomputing Conference | |
|---|---|
| Name | Supercomputing Conference |
| Status | Active |
| Genre | Scientific conference |
| Country | United States |
| First | 1988 |
| Organizer | Association for Computing Machinery, IEEE Computer Society |
| Frequency | Annual |
The Supercomputing Conference is an annual international scientific meeting focused on high-performance computing that brings together researchers, engineers, vendors, and policymakers. First held in 1988, it serves as a forum for advances in parallel processing, supercomputer architecture, scalable algorithms, and performance benchmarking. The event features peer-reviewed papers, technical exhibits, vendor demonstrations, panel discussions, and networking events that connect academic institutions, national laboratories, and technology companies.
The conference traces its roots to initiatives by the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery during the growth of parallel computing in the 1980s, alongside milestones such as the development of the Cray-1 and supercomputing programs at Los Alamos National Laboratory and Lawrence Livermore National Laboratory. Early editions reflected influences from National Science Foundation funding programs, collaborations with Sandia National Laboratories, and contemporaneous events such as the International Symposium on Computer Architecture and the IEEE Computer Society symposia. Over time the program evolved in response to research at institutions including the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, and Carnegie Mellon University, and to deployments at national sites such as Oak Ridge National Laboratory and Argonne National Laboratory. Technological inflection points, exemplified by the introduction of microprocessors from Intel Corporation, vector processors from Cray Research, and accelerators from NVIDIA, shaped conference themes, as did software advances from projects such as MPI, OpenMP, and BLAS. The conference also adapted to global developments in supercomputing policy influenced by organizations such as the European Commission, the Japan Science and Technology Agency, and the Chinese Academy of Sciences.
Governance traditionally involves committees drawn from ACM SIGARCH and the IEEE TCPP, with program leadership including chairs affiliated with the University of Illinois Urbana–Champaign, Princeton University, the University of Texas at Austin, and the Georgia Institute of Technology. Sponsorship and partnerships have included corporations such as IBM, Hewlett Packard Enterprise, Dell Technologies, Intel Corporation, AMD, NVIDIA, and Microsoft Corporation, as well as national agencies such as the United States Department of Energy and the Defense Advanced Research Projects Agency, and research consortia such as NERSC and PRACE. Local organizing committees have coordinated with municipal authorities in host cities including Denver; Portland, Oregon; Salt Lake City; Dallas; and Seattle. The conference operates through subcommittees for peer review, finance, exhibits, and student programs, with awards adjudicated by panels including representatives from the IEEE, the ACM, national laboratories, and major universities.
Technical programs cover topics spanning parallel algorithms, scalable numerical methods, machine learning on accelerators, and performance analysis. Recent sessions have reflected research from groups at Google Research, Amazon Web Services, Facebook AI Research, DeepMind, and laboratories such as Argonne National Laboratory and Oak Ridge National Laboratory on exascale computing, energy-efficient architectures, and co-design efforts. Algorithmic work includes contributions building on software stacks such as MPI, OpenMP, and CUDA, and on runtime projects from Intel and AMD. Domains represented include climate modeling from NOAA, astrophysics from NASA Ames Research Center, computational chemistry from Lawrence Berkeley National Laboratory, and genomics from the Broad Institute. Poster sessions and paper tracks often reference benchmarking suites such as the TOP500, the HPCG benchmark, and the Green500 list, while industrial tracks highlight systems from vendors like Fujitsu, HPE Cray, and Lenovo.
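The parallel-reduction patterns standardized by stacks such as MPI can be illustrated with a small sketch. The function below simulates, in a single process, the log-depth pairwise combining used by tree-based collectives like MPI's reduce operation; the "ranks", function names, and operator here are illustrative assumptions, not the MPI API itself.

```python
# A minimal sketch (not real MPI) of a tree-based reduction: per-rank
# values are combined pairwise, so n values need only ~log2(n) rounds
# of communication instead of n-1 sequential additions.

def tree_reduce(values, op):
    """Combine a list of per-rank partial results pairwise, round by round."""
    vals = list(values)
    step = 1
    while step < len(vals):
        # In each round, rank i receives rank (i + step)'s partial
        # result and folds it into its own.
        for i in range(0, len(vals) - step, 2 * step):
            vals[i] = op(vals[i], vals[i + step])
        step *= 2
    return vals[0]  # rank 0 holds the final result

# Example: sum the squares 0..7 across 8 simulated ranks.
result = tree_reduce([r * r for r in range(8)], lambda a, b: a + b)
```

In a real MPI program each rank would hold one value and the rounds would be actual message exchanges; the arithmetic structure, and the reason reductions scale to thousands of nodes, is the same.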
Keynote speakers have included leaders affiliated with institutions such as Stanford University, Massachusetts Institute of Technology, University of Cambridge, and corporations like IBM Research, Intel Labs, and NVIDIA Research. Tutorial offerings often draw instructors from academic centers including University of California, San Diego, University of Wisconsin–Madison, ETH Zurich, and University of British Columbia, and cover tools and frameworks like TensorFlow, PyTorch, Kokkos, and RAJA. Workshops have been co-organized with communities such as the OpenACC consortium, the USENIX community, the ACM SIGPLAN group, and regional initiatives like PRACE and XSEDE, addressing reproducibility, software sustainability, and heterogeneous system programming.
The conference administers awards and competitions recognizing achievements in high-performance computing, including best paper awards judged by program committees with members from ACM, IEEE, and national laboratories. Student competitions and challenges have attracted teams from California Institute of Technology, Cornell University, University of Michigan, and Technical University of Munich. Demonstrations and mini-challenges frequently feature participation by vendors such as NVIDIA, Intel Corporation, and AMD, and collaborations with initiatives like the Human Brain Project and the European Exascale Software Initiative. Award categories have highlighted contributions to performance engineering, software tools, and scalable algorithms, with honorees later joining editorial boards of journals such as Communications of the ACM and IEEE Transactions on Parallel and Distributed Systems.
The conference has influenced processor design, interconnect research, and software ecosystems by disseminating results from projects at Cray Research, IBM, Intel, NVIDIA, and academic research groups. Proceedings have seeded advances in parallel numerical libraries such as ScaLAPACK and promoted standards like MPI and OpenMP. Community norms around benchmarking and procurement have been informed by presentations tied to the TOP500 list and exascale roadmaps from agencies like the European Commission and the U.S. Department of Energy. Cross-pollination among participants from Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, and commercial cloud providers such as Amazon Web Services and Microsoft Azure accelerated adoption of heterogeneous computing models and influenced curricula at universities including Imperial College London and Tsinghua University.
Annual attendance typically includes researchers, engineers, procurement officers, and students from institutions such as Stanford University, the University of Cambridge, Tsinghua University, and national laboratories including Argonne National Laboratory and Oak Ridge National Laboratory, as well as corporate delegations from IBM, Intel, AMD, NVIDIA, Hewlett Packard Enterprise, Dell Technologies, Lenovo, Fujitsu, and cloud providers like Amazon Web Services, Google Cloud, and Microsoft Azure. Exhibits showcase systems, software, and services from vendors, startups, and research consortia, and hiring events often connect students from the University of California, Berkeley, Carnegie Mellon University, and Princeton University with employers in the supercomputing ecosystem.

Category:High-performance computing conferences