| High Performance Computing and Communications Initiative | |
|---|---|
| Name | High Performance Computing and Communications Initiative |
| Abbreviation | HPCCI |
| Launched | 1991 |
| Agency | National Science Foundation; Department of Energy; National Aeronautics and Space Administration; National Institutes of Health |
| Country | United States |
| Status | completed |
The High Performance Computing and Communications Initiative was a coordinated federal effort in the early 1990s to accelerate capabilities in supercomputing, networking, and computational science. It brought together the United States Department of Commerce, the National Science Foundation, the Department of Energy, the National Aeronautics and Space Administration, the National Institutes of Health, and the Defense Advanced Research Projects Agency to fund hardware, software, and network infrastructure for scientific research. The Initiative influenced policy discussions at the White House and informed planning at institutions such as Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, Argonne National Laboratory, and Oak Ridge National Laboratory.
The Initiative emerged amid advances driven by projects at IBM, Cray Research, Silicon Graphics, Intel, and Sun Microsystems, and responded to the High Performance Computing Act of 1991, advisory panels convened by the National Research Council, and recommendations from the Office of Science and Technology Policy. Objectives included accelerating the development of teraflop-class architectures similar to systems at the National Center for Supercomputing Applications, expanding high-speed networks building on the early Internet, and supporting applications at NASA Ames Research Center, Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, and university centers such as Stanford University, the Massachusetts Institute of Technology, the University of Illinois at Urbana–Champaign, and the University of California, Berkeley.
Governance involved interagency coordination among the National Science Foundation, the Department of Energy, the National Aeronautics and Space Administration, the National Institutes of Health, and the Defense Advanced Research Projects Agency, with policy oversight from the Office of Management and Budget and strategic direction from the White House Office of Science and Technology Policy. Program offices at the NSF Directorate for Computer and Information Science and Engineering and the DOE Office of Science administered grants to consortia including the San Diego Supercomputer Center, the Pittsburgh Supercomputing Center, the National Center for Atmospheric Research, the University of Chicago, and Johns Hopkins University. Peer review panels drew experts from the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, the American Physical Society, and the Mathematical Association of America.
R&D spanned hardware design by companies such as Cray Research, IBM, and Intel; compiler and middleware work at Lawrence Livermore National Laboratory and Argonne National Laboratory; and numerical-method improvements led by faculty at the Massachusetts Institute of Technology, Stanford University, Princeton University, the University of Michigan, and Cornell University. Projects targeted climate modeling used at the National Oceanic and Atmospheric Administration, genomics computation in collaboration with National Institutes of Health centers and the Broad Institute, computational fluid dynamics for NASA, and cryptanalysis research linked to National Security Agency interests. Software efforts included parallel libraries inspired by Message Passing Interface research groups (see the sketch below), visualization advances at the University of Utah, and distributed file systems influenced by the Andrew File System and Berkeley DB teams.
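As a concrete illustration of the message-passing programming model such parallel libraries standardized, the following is a minimal sketch in C against the MPI interface. The parallel-sum computation is a hypothetical example chosen for brevity, not code drawn from any Initiative project.

```c
/* Minimal message-passing sketch in the MPI style: each process
 * contributes a local value, and a collective reduction combines
 * them on rank 0. This illustrates the programming model only. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

    /* Hypothetical per-process contribution: rank + 1. */
    int local = rank + 1;
    int total = 0;

    /* Collective reduction: sum every rank's value onto rank 0,
     * the pattern underlying many parallel scientific codes. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Compiled with an MPI wrapper compiler such as `mpicc` and launched with, for example, `mpirun -np 4 ./a.out`, each rank contributes its value and rank 0 prints the combined result.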
Deployment funded high-performance clusters and supercomputers at the National Center for Supercomputing Applications, the San Diego Supercomputer Center, the Pittsburgh Supercomputing Center, Oak Ridge National Laboratory, and Argonne National Laboratory; network upgrades built backbone capacity through collaborations with the National Science Foundation Network, Merit Network, Sprint, and regional networks such as CENIC. Technologies deployed included scalable processors from IBM, vector units from Cray Research, interconnects inspired by research at Bell Labs and Lawrence Berkeley National Laboratory, and visualization facilities influenced by installations at the California Institute of Technology and the Georgia Institute of Technology. Data centers implemented security practices informed by National Institute of Standards and Technology guidelines and facility designs akin to those at Fermi National Accelerator Laboratory.
Funding instruments combined appropriations shepherded through the United States Congress with matched investments from industry partners IBM, Intel, Cray Research, Bell Laboratories, and Sun Microsystems, along with university cost-sharing at the University of Illinois at Urbana–Champaign, the University of California, San Diego, the University of Texas at Austin, and the University of Wisconsin–Madison. Public–private partnerships included collaborations with Microsoft Research, AT&T, MCI Communications, and nonprofit consortia such as the Corporation for National Research Initiatives. International cooperation involved exchanges with the European Organisation for Nuclear Research, the Centre national de la recherche scientifique, the Japan Science and Technology Agency, and Deutsches Elektronen-Synchrotron.
The Initiative accelerated the adoption of parallel computing practices at centers including Sandia National Laboratories and universities such as Carnegie Mellon University, drove network scaling that fed into later Internet2 deployments, and influenced procurement strategies at Department of Energy national laboratories. It contributed to advances cited by winners of the Turing Award, by researchers at Los Alamos National Laboratory working on climate and materials science, and by teams at the National Institutes of Health that scaled bioinformatics workflows used by the Human Genome Project. Educational impacts reached programs at Stanford University, the Massachusetts Institute of Technology, and the University of California, Berkeley, and technologies seeded startups in Silicon Valley and in research parks near Lawrence Berkeley National Laboratory.
Critics from think tanks such as the RAND Corporation and policy analysts at the Brookings Institution noted procurement cycles that favored incumbents such as Cray Research and IBM, and members of United States Congress committees echoed concerns about equitable geographic distribution to institutions outside established hubs such as Pittsburgh, San Diego, and Chicago. Technical challenges included software-portability issues highlighted by researchers at the University of Illinois at Urbana–Champaign and the Ohio Supercomputer Center, funding-sustainability concerns discussed at panels convened by the National Research Council, and security debates involving National Security Agency and Department of Defense stakeholders.
Category:High performance computing