| Argonne Leadership Computing Facility | |
|---|---|
| Name | Argonne Leadership Computing Facility |
| Institution | United States Department of Energy |
| Location | Argonne National Laboratory |
The Argonne Leadership Computing Facility (ALCF) is a United States Department of Energy-funded facility located at Argonne National Laboratory, which is managed by the University of Chicago. The facility is a DOE Office of Science user facility supported by the Advanced Scientific Computing Research program, and it works closely with other national laboratories, including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories, to advance high-performance computing capabilities. The facility's mission is to enable scientific computing and data analysis for materials science, climate modeling, and nuclear physics research, often in partnership with universities such as the Massachusetts Institute of Technology, Stanford University, and the California Institute of Technology.
The Argonne Leadership Computing Facility provides supercomputing resources to support scientific research in fields including materials science, climate modeling, and nuclear physics. Its high-performance computing capabilities enable researchers from institutions such as the University of California, Berkeley, Harvard University, and the University of Michigan to simulate complex phenomena, such as nuclear reactions and climate change, using computational models developed at the National Center for Atmospheric Research and the National Oceanic and Atmospheric Administration. The facility's resources are also used to analyze large datasets, such as those generated by Large Hadron Collider experiments at CERN (the European Organization for Nuclear Research), in collaboration with Fermi National Accelerator Laboratory. Additionally, the facility works with NASA and the National Science Foundation to support research in astrophysics and cosmology.
The Argonne Leadership Computing Facility was established in 2006 as part of the United States Department of Energy's effort to provide leadership-class computing resources to the open science community. The facility's first supercomputer, Intrepid, an IBM Blue Gene/P system installed in 2007, supported research in materials science and nuclear physics, in collaboration with partners including the University of Oxford and the University of Cambridge. Since then, the facility has deployed several other supercomputers, including Mira and Theta, which have supported research in climate modeling and astrophysics, often in partnership with the Max Planck Society and the European Space Agency. The facility has also collaborated with Google and Microsoft to develop new machine learning and artificial intelligence techniques.
The Argonne Leadership Computing Facility operates several supercomputers, including Theta, a Cray XC40 system with 11.7 petaflops of peak performance, and Polaris, an NVIDIA GPU-accelerated HPE system with roughly 44 petaflops of peak performance. The facility's high-performance computing capabilities are backed by a storage system with over 100 petabytes of capacity, developed in collaboration with Western Digital and Seagate Technology. The facility also provides access to cloud computing resources such as Amazon Web Services and Microsoft Azure, and works with Intel and IBM to develop new computing architectures. Additionally, it has partnered with Cray Inc. and Hewlett Packard Enterprise to develop new supercomputing systems.
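Peak-performance figures like those above are theoretical maxima, computed from hardware parameters rather than measured by benchmarks. A minimal sketch of the calculation, using approximate Theta-class figures (the node count, core count, clock rate, and FLOPs-per-cycle below are illustrative assumptions, not official ALCF specifications):

```python
def peak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Theoretical peak = nodes x cores x clock rate x FLOPs issued per cycle."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# Illustrative Theta-like parameters: ~4,392 many-core nodes, 64 cores each,
# 1.3 GHz, 32 double-precision FLOPs per cycle (wide vector units with FMA).
petaflops = peak_flops(4392, 64, 1.3e9, 32) / 1e15
print(f"{petaflops:.1f} petaflops")  # → 11.7 petaflops
```

Sustained application performance is typically a fraction of this figure, which is why benchmark results (e.g., HPL) are reported separately from peak.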
The facility's astrophysics and cosmology work with NASA and the National Science Foundation is often conducted in partnership with additional universities, including the University of California, Los Angeles and the University of Texas at Austin.
The Argonne Leadership Computing Facility is located at Argonne National Laboratory, which is managed by the University of Chicago. The facility occupies over 10,000 square feet of space and is equipped with high-performance computing systems, storage systems, and networking infrastructure developed in collaboration with Cisco Systems and Juniper Networks. It is connected to the Energy Sciences Network (ESnet), which provides high-speed networking to support data transfer and collaboration with other research institutions, including SLAC National Accelerator Laboratory and Brookhaven National Laboratory. Additionally, the facility has partnered with Dell and Lenovo to develop new computing systems.
The Argonne Leadership Computing Facility is managed by a team of computing professionals responsible for operating and maintaining its high-performance computing, storage, and networking infrastructure. A team of research scientists works with users to develop and optimize computational models and algorithms for materials science, climate modeling, and nuclear physics research, often in collaboration with the University of Illinois at Urbana-Champaign and the University of Wisconsin-Madison. The facility's operations are guided by a user advisory committee, which includes representatives from the University of California, Berkeley, Harvard University, and the Massachusetts Institute of Technology, and which provides input on strategic direction and resource allocation. The facility also works with the National Institute of Standards and Technology and the National Institutes of Health to develop computing standards and best practices.
Category:Research facilities