| International Conference on High Performance Computing | |
|---|---|
| Name | International Conference on High Performance Computing |
| Abbreviation | ICHPC |
| Established | 1990s |
| Frequency | Annual |
| Discipline | High performance computing |
| Country | International |
The International Conference on High Performance Computing is an annual gathering that convenes researchers, engineers, and practitioners focused on Cray-class architectures, large-scale laboratory simulations, and exascale program development. The conference serves as a nexus among laboratory centers such as Lawrence Livermore National Laboratory; national initiatives like the Exascale Computing Project; industry leaders including IBM, Intel, and NVIDIA; and academic institutions such as the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, and ETH Zurich. It regularly features collaborations with projects funded by agencies such as the DOE, alongside partnerships that include ECMWF and CERN.
Origins trace to specialist workshops in the 1990s, where communities around Thinking Machines, Silicon Graphics, and Cray Research exchanged results on parallel processing, vectorization, and scalable storage. Early meetings attracted contributors from Los Alamos National Laboratory, Sandia National Laboratories, and universities such as the University of Illinois Urbana-Champaign and the University of Cambridge. Through the 2000s the program expanded alongside initiatives like PRACE and the Human Brain Project, reflecting the shift toward petascale milestones achieved by systems such as IBM Blue Gene and, later, exascale targets exemplified by Frontier. The conference's governance evolved to include steering committees drawn from National Science Foundation grantees and industrial consortia, including OpenMP Architecture Review Board participants.
The scope spans hardware innovations in processors, from ARM cores to accelerators from NVIDIA and AMD, interconnect fabrics influenced by InfiniBand, and storage approaches such as the parallel file systems used at Oak Ridge National Laboratory. Software topics include compiler technologies from the GNU Compiler Collection, runtime systems such as Open MPI, programming models exemplified by CUDA, OpenCL, and MPI, and emerging models such as Kokkos and RAJA. Performance analysis and benchmarking draw on suites like SPEC and on methodologies promoted by the Top500 and Green500 lists. Application domains include climate modeling with groups from the Met Office, cosmology codes used by NASA, computational chemistry from Pfizer collaborations, and machine learning frameworks such as TensorFlow and PyTorch adapted for HPC.
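The programming models named above (MPI, OpenMP, Kokkos, and the rest) share a common decompose/compute/reduce pattern: split a problem domain across workers, compute partial results independently, then combine them. As a minimal sketch of that pattern, using Python's standard-library `multiprocessing` purely as a stand-in for an HPC runtime (this is an illustration, not code associated with the conference):

```python
# Sketch of data-parallel reduction: partition a domain across workers,
# compute partial sums in parallel, then reduce. This mirrors, in
# miniature, the decomposition/reduction structure of MPI programs;
# it uses Python's stdlib multiprocessing, not an actual HPC runtime.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over one chunk [lo, hi) of the domain."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into roughly equal chunks and reduce partial sums."""
    step = (n + workers - 1) // workers          # ceil(n / workers)
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))  # the reduction step

if __name__ == "__main__":
    # Verify against the closed form for sum of squares 0..n-1.
    n = 100_000
    assert parallel_sum_of_squares(n) == n * (n - 1) * (2 * n - 1) // 6
```

Real HPC codes replace the process pool with distributed ranks and the final `sum` with a collective such as `MPI_Reduce`, but the decompose/compute/reduce shape is the same.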
A rotating steering committee typically includes representatives from the DOE, the National Science Foundation, national laboratories such as Argonne National Laboratory, major vendors (IBM, HPE), and leading universities (e.g., the University of Texas at Austin). Program committees assemble track chairs from subcommunities including parallel programming, performance engineering, and storage. Sponsorship comes from industry consortia such as OpenACC and standards bodies like the IEEE Computer Society. Proceedings are published by societies such as the ACM and are often coordinated with workshops organized by groups like the ISC High Performance and SC Conference organizers.
The conference format blends keynote addresses by figures affiliated with the DOE Office of Science or corporate research labs, technical paper presentations, poster sessions, tutorials led by faculty from the University of Oxford or Imperial College London, and co-located workshops run in partnership with PRACE and NERSC. Hands-on sessions showcase large-scale runs on systems at Oak Ridge National Laboratory and Argonne National Laboratory; industrial exhibits from Intel, NVIDIA, AMD, and Hewlett Packard Enterprise display hardware roadmaps. Birds-of-a-feather sessions and special interest groups (SIGs) coordinate around topics promoted by the OpenMP Architecture Review Board, the Linux Foundation, and the IEEE Standards Association.
Landmark papers presented at the conference have included scalable algorithms for dense linear algebra aligned with work from LAPACK contributors, communication-avoiding methods connected to researchers at University of Colorado Boulder, and novel I/O strategies echoing designs deployed at Brookhaven National Laboratory. Contributions have influenced community codes such as GROMACS, LAMMPS, and Enzo, and have advanced performance counters and profiling tools used in Performance Co-Pilot and TAU Performance System. Reports on programming model extensions have paralleled developments in OpenMP and CUDA best practices, while benchmarking studies have informed placements on the Top500 list.
Attendees encompass staff from national laboratories (e.g., Lawrence Berkeley National Laboratory), academic researchers from institutions such as Princeton University and the University of Washington, industry engineers from Google and Microsoft Research, cloud providers including Amazon Web Services, and representatives of consortia like the Big Data Value Association. The conference fosters cross-pollination among communities active in projects such as Human Cell Atlas computing, Square Kilometre Array data processing, and Large Hadron Collider simulation campaigns at CERN.
The conference bestows awards for best paper, distinguished contributions, and early-career researchers, often sponsored by the ACM or IEEE. Lifetime achievement recognitions have honored figures affiliated with Cray Research, Lawrence Livermore National Laboratory, and leading universities such as the University of Illinois Urbana-Champaign. Award-winning breakthroughs have later shaped procurements at Oak Ridge National Laboratory and influenced national programs such as the Exascale Computing Project.