| Argonne Leadership Computing Facility | |
|---|---|
| Name | Argonne Leadership Computing Facility |
| Established | 2004 |
| Location | Argonne National Laboratory, Lemont, Illinois |
The Argonne Leadership Computing Facility (ALCF) is a United States Department of Energy national user facility located at Argonne National Laboratory in Lemont, Illinois. The facility supports computational science through high-performance computing resources and collaborates with institutions such as Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Sandia National Laboratories, Los Alamos National Laboratory and Fermilab. It serves researchers from the University of Chicago, the Massachusetts Institute of Technology, Stanford University, the California Institute of Technology and other universities, as well as industry partners including IBM, Intel Corporation, NVIDIA and Cray Inc.
The facility grew out of the Department of Energy Office of Science leadership computing programs and the Advanced Scientific Computing Research program, with early connections to projects at Argonne National Laboratory and collaborations with National Science Foundation-funded centers and the Oak Ridge Leadership Computing Facility. Its development proceeded through procurement and deployment phases, including the selection of systems from vendors such as IBM and Cray Inc. and later collaborations with Hewlett Packard Enterprise and NVIDIA. Key milestones include hosting first-of-a-kind systems, reported alongside work from Lawrence Livermore National Laboratory and Los Alamos National Laboratory, and participation in the Exascale Computing Project in coordination with the U.S. Department of Energy leadership computing roadmap.
Located on the Argonne National Laboratory campus near Interstate 55, the facility occupies high-performance computing data centers with dedicated power, cooling, and network connectivity that integrates with the Energy Sciences Network and regional research networks tied to Internet2. Infrastructure includes petascale-class machine rooms, high-performance storage arrays from vendors such as EMC Corporation and NetApp, high-speed InfiniBand interconnects, and center management practices informed by National Institute of Standards and Technology data center guidelines. The site supports user-facing environments with visualization clusters connected to systems used by teams from the University of Illinois Urbana–Champaign, Purdue University, Northwestern University and national laboratories such as Brookhaven National Laboratory.
The facility has hosted successive generations of flagship systems from vendors including IBM, Cray Inc. and Hewlett Packard Enterprise, with partners such as NVIDIA and Intel Corporation. Notable systems have included hybrid CPU–GPU architectures and multi-core CPU systems that complement machines at Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory. Hardware deployments emphasized scalable interconnects from Mellanox Technologies (now part of NVIDIA), accelerators from NVIDIA, and co-design efforts under the Exascale Computing Project. Software stacks incorporated the GNU and LLVM compiler toolchains, message-passing libraries such as Open MPI, and performance tools aligned with the practices of Argonne National Laboratory computing staff and community projects.
The facility supports scientific challenges in fields such as climate science, in collaboration with agencies including NOAA; computational chemistry, with researchers from Rice University and the University of California, Berkeley; and materials science, in efforts linked to Oak Ridge National Laboratory and Brookhaven National Laboratory. Research applications include large-scale astrophysics simulations by teams from Princeton University and Caltech, fusion energy modeling with ties to the Princeton Plasma Physics Laboratory and General Atomics, and data-driven genomics studies in partnership with Broad Institute researchers. Cross-cutting efforts involve algorithm development with researchers from the University of Texas at Austin, verification and validation activities connected to Sandia National Laboratories, and collaborative workflows with industry partners such as General Electric and Boeing.
Access to computing time is provided through peer-reviewed allocation programs coordinated with Department of Energy Office of Science allocation policies, with competitive review panels drawing experts from Stanford University, the University of California, Berkeley, the Massachusetts Institute of Technology, and national laboratories including Los Alamos National Laboratory and Lawrence Livermore National Laboratory. Users from universities, national laboratories, and industry apply through proposals evaluated for scientific merit, readiness, and computational feasibility; allocations are managed in coordination with systems teams and supported by user services similar to programs at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center. Training and outreach include workshops held with partners such as IEEE and the Association for Computing Machinery, and summer schools hosted in collaboration with universities including the University of Illinois Urbana–Champaign.
The facility operates with funding and oversight from the U.S. Department of Energy and partners with national laboratories including Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory and Los Alamos National Laboratory. Industry collaborations have included procurement and technology partnerships with IBM, Cray Inc., Hewlett Packard Enterprise, Intel Corporation and NVIDIA. Programmatic partnerships extend to academic consortia at institutions such as the University of Chicago, the University of Illinois Urbana–Champaign, the Massachusetts Institute of Technology and Stanford University, and align with federal initiatives including the Exascale Computing Project and Office of Science strategic plans.
Category:National Laboratories
Category:High-performance computing