| Leadership Computing Facility | |
|---|---|
| Name | Leadership Computing Facility |
| Established | 2004 |
| Location | Argonne National Laboratory, Oak Ridge National Laboratory |
| Type | Computational research facility |
| Operating agency | Office of Science, U.S. Department of Energy |
Leadership Computing Facility
The Leadership Computing Facility provides high-performance computing resources for advanced scientific research and engineering, supporting projects from national laboratories and from universities including the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, the University of Illinois Urbana-Champaign, the Georgia Institute of Technology, the California Institute of Technology, and many other institutions.
The Facility operates leadership-class supercomputers to accelerate computational investigations in fields supported by the United States Department of Energy, particularly Office of Science programs in Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, and Nuclear Physics, as well as collaborations with the National Nuclear Security Administration. It emphasizes scalable simulation, data-intensive computing, and algorithm development for projects involving national laboratories such as Los Alamos National Laboratory, Sandia National Laboratories, Lawrence Berkeley National Laboratory, Idaho National Laboratory, Pacific Northwest National Laboratory, Brookhaven National Laboratory, and Fermi National Accelerator Laboratory; research organizations such as NASA Ames Research Center, CERN, and RIKEN; and industry partners including IBM, Intel, NVIDIA, Cray, Hewlett Packard Enterprise, AMD, Google, and Microsoft Research.
The Facility was established to provide computational leadership in response to Office of Science initiatives, informed by strategic planning from the U.S. Congress, reports by the National Research Council and the President's Council of Advisors on Science and Technology, and community white papers from venues such as the Supercomputing Conference and workshops organized by Argonne National Laboratory and Oak Ridge National Laboratory. Early deployments built on partnerships with vendors such as Cray and IBM, and subsequent procurement cycles incorporated innovations from NVIDIA and Intel. Leadership-class procurements were informed by benchmark results from rankings such as the TOP500 and guided by software ecosystems developed at Lawrence Livermore National Laboratory and at academic centers, including NERSC at Lawrence Berkeley National Laboratory.
The Facility's infrastructure includes machine rooms and data centers at multiple laboratory sites, high-performance storage systems, high-bandwidth networks connected to the Energy Sciences Network, and software stacks integrating libraries from Argonne Leadership Computing Facility partner projects and community codes originating at the National Center for Supercomputing Applications. Cooling, power, and resilience systems were engineered with input from contractors and standards bodies such as the American Society of Mechanical Engineers, and in collaboration with regional utilities and grid operators, as practiced at the Oak Ridge National Laboratory and Argonne National Laboratory campuses. User-facing services include science gateways, workflow tools, and visualization platforms developed with teams from the University of Illinois and the Pittsburgh Supercomputing Center.
Major systems hosted have included successors to platforms built by Cray Inc., hybrid GPU-accelerated clusters combining NVIDIA GPUs with CPUs from Intel and AMD, and prototype systems preparing for exascale computing in alignment with the Exascale Computing Project. Performance milestones have been demonstrated on industry benchmarks such as LINPACK (reported in the TOP500), application benchmarks such as STREAM and HPCG, and domain-specific performance tests used by researchers from Los Alamos National Laboratory, Sandia National Laboratories, Lawrence Livermore National Laboratory, and academic teams from the Massachusetts Institute of Technology and Stanford University. Systems have also been evaluated for energy efficiency in rankings such as the Green500.
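The STREAM benchmark mentioned above is built around a handful of simple memory-bound kernels; its "triad" kernel (a[i] = b[i] + scalar * c[i]) is commonly used to estimate sustained memory bandwidth. The following is a minimal illustrative sketch in Python with NumPy, not the official STREAM implementation (which is written in carefully tuned C); the function name and default sizes are hypothetical choices for demonstration.

```python
import time
import numpy as np

def stream_triad_bandwidth(n=10_000_000, scalar=2.0, trials=5):
    """Estimate effective memory bandwidth (GB/s) with a STREAM-style
    triad kernel: a[i] = b[i] + scalar * c[i].

    This is an illustrative sketch only; real STREAM runs a tuned C
    loop and reports Copy, Scale, Add, and Triad rates separately.
    """
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)

    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)  # a = scalar * c
        np.add(a, b, out=a)            # a = b + scalar * c
        best = min(best, time.perf_counter() - t0)

    # Triad touches three arrays of 8-byte doubles per iteration:
    # read b, read c, write a.
    gbytes_moved = 3 * n * 8 / 1e9
    return gbytes_moved / best

if __name__ == "__main__":
    print(f"triad bandwidth ~ {stream_triad_bandwidth():.1f} GB/s")
```

Taking the best time over several trials, as real STREAM does, filters out warm-up and scheduling noise; the reported figure is an effective rate for this NumPy formulation and will sit below what a tuned C kernel achieves on the same machine.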
The Facility supports multidisciplinary research spanning computational chemistry, climate modeling, materials science, astrophysics, plasma physics, fusion, genomics, and accelerator modeling, partnering with programs at the National Science Foundation, the National Institutes of Health, and NOAA, and with international collaborators such as the European Space Agency and the Max Planck Society. Collaborative initiatives include software co-design projects with Argonne National Laboratory divisions, algorithm development with researchers at the University of Illinois Urbana-Champaign and the University of California, Berkeley, and verification and validation efforts tied to experimental campaigns at Oak Ridge National Laboratory and to facilities such as the Joint European Torus and the ITER design teams.
Access is granted through competitive proposal programs overseen by governing bodies of the Office of Science, with peer review panels drawing members from academic institutions and national laboratories, including Argonne National Laboratory, Oak Ridge National Laboratory, Brookhaven National Laboratory, and Lawrence Berkeley National Laboratory. Users receive support via training, documentation, and consulting provided by computational scientists from partner centers such as NERSC and the National Center for Supercomputing Applications, and by facility user support groups modeled on practices from XSEDE and regional consortia. Allocation frameworks borrow principles from programs at the DOE Joint Genome Institute and from resource management techniques developed for the National Energy Research Scientific Computing Center.
Research enabled by the Facility has produced advances in predictive climate simulations used by teams connected to the Intergovernmental Panel on Climate Change, materials design breakthroughs informing projects at Boeing and General Electric, and astrophysics simulations that complemented observations from the Hubble Space Telescope and the Chandra X-ray Observatory. Achievements include publications in leading journals such as those of the American Physical Society, Nature, and Science, as well as prizes in algorithm development and computational science awarded by organizations such as the Association for Computing Machinery and the IEEE. The Facility's work contributed to national-scale initiatives such as the Exascale Computing Project and informed technology roadmaps by vendors including Intel and NVIDIA.
Category:Supercomputer sites in the United States