| Carpet (adaptive mesh refinement) | |
|---|---|
| Name | Carpet |
| Title | Carpet (adaptive mesh refinement) |
| Developer | Albert Einstein Institute, University of Illinois Urbana–Champaign, Max Planck Society |
| Released | 2000s |
| Latest release version | Community-maintained; distributed with Einstein Toolkit releases |
| Programming language | C++, Fortran (with MPI for parallelism) |
| Operating system | Linux, macOS, other Unix-like systems |
| License | GNU General Public License |
Carpet (adaptive mesh refinement) is a software infrastructure providing block-structured adaptive mesh refinement (AMR) and multi-block capabilities for numerical relativity, computational astrophysics, and high-performance computing. It integrates with the Cactus framework and is used in simulations involving the Einstein field equations, relativistic hydrodynamics, and wave propagation, enabling refined spatial resolution where solution features such as shocks, singularities, or gravitational waves require it. Carpet supports time subcycling, parallel I/O, and load balancing to scale simulations on leadership-class supercomputers.
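The core idea of AMR — refining resolution only where the solution demands it — can be illustrated with a minimal, hypothetical flagging criterion. This sketch is not Carpet's API; it shows the kind of error estimate an AMR driver uses to decide which cells to refine:

```python
# Minimal sketch (not Carpet's API): flag cells for refinement where an
# undivided second-difference error estimate exceeds a threshold.

def flag_for_refinement(u, threshold):
    """Flag interior cells whose second finite difference (a rough
    local-error/curvature indicator) exceeds `threshold`."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        # Undivided second difference approximates local truncation error.
        err = abs(u[i - 1] - 2.0 * u[i] + u[i + 1])
        flags[i] = err > threshold
    return flags

# Example: a sharp feature near x = 0.5 triggers refinement there only.
n = 101
dx = 1.0 / (n - 1)
u = [1.0 / (1e-3 + (i * dx - 0.5) ** 2) for i in range(n)]
flags = flag_for_refinement(u, 1.0)
```

In a real driver the flagged cells would be clustered into rectangular blocks and handed to the regridding machinery; here the criterion alone is the point.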
Carpet originated within projects at the Max Planck Society and the Albert Einstein Institute to support codes developed by researchers at institutions such as the University of Illinois Urbana–Champaign, Caltech, and the University of Maryland, College Park. It interoperates with the Cactus framework's component model and complements evolution thorns (Cactus modules) implementing formulations such as the BSSN formalism and the generalized harmonic formulation. Carpet provides block-structured AMR, mesh refinement criteria, and utility services for checkpointing and restart on platforms including systems at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center.
Carpet's architecture centers on a hierarchy of nested refinement levels composed of rectangular grid blocks managed at runtime by a grid scheduler. It implements ghost-zone management, prolongation and restriction operators, and boundary-condition handling compatible with hydrodynamics thorns such as those used by the Einstein Toolkit. The codebase is written in C++ with performance-critical pieces in Fortran, and it uses MPI (the Message Passing Interface) for distributed-memory parallelism and HDF5-based parallel I/O for scalable data output. Integration points include driver thorns in Cactus, analysis tools used by groups at Caltech, and visualization pipelines that produce formats consumed by VisIt and ParaView.
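Prolongation and restriction are the operators that move data between refinement levels. The following one-dimensional sketch (illustrative only, not Carpet's actual operators) shows linear prolongation to a factor-2 fine grid and the matching restriction by injection of coincident points:

```python
# Illustrative sketch (not Carpet's operators): 1-D factor-2 refinement.

def prolongate(coarse):
    """Refine by 2: copy coincident points, linearly interpolate between."""
    fine = []
    for i in range(len(coarse) - 1):
        fine.append(coarse[i])
        fine.append(0.5 * (coarse[i] + coarse[i + 1]))
    fine.append(coarse[-1])
    return fine

def restrict(fine):
    """Coarsen by 2: inject every second (coincident) fine-grid point."""
    return fine[::2]

coarse = [0.0, 1.0, 4.0, 9.0]
fine = prolongate(coarse)        # interpolated fine-grid data
assert restrict(fine) == coarse  # restriction undoes prolongation here
```

Production codes use higher-order interpolation stencils (and averaging rather than injection for conservative quantities), but the data flow between levels is the same.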
Carpet implements block-structured AMR with Berger–Oliger time subcycling and supports a range of prolongation (interpolation) and restriction (averaging) operators suited to finite-difference and finite-volume schemes. Refinement triggers include curvature measures used in numerical relativity and shock indicators used in relativistic hydrodynamics modules developed by teams at the University of Illinois Urbana–Champaign and Penn State University. Options for conservative flux correction and refluxing are available to maintain conservation across refinement boundaries for Euler and magnetohydrodynamics solvers commonly used in simulations inspired by work at NASA Ames Research Center and the Max Planck Institute for Astrophysics.
Carpet employs domain decomposition and dynamic load balancing to distribute AMR blocks across MPI ranks on systems such as those at Argonne National Laboratory and Lawrence Livermore National Laboratory. Performance engineering has targeted cache-aware memory layouts, thread-parallel inner loops using OpenMP, and accelerator strategies explored in collaborations with teams at Oak Ridge National Laboratory and industry partners. Profiling and scaling studies often reference benchmarks on supercomputers such as Summit, Sierra, and facilities at the National Center for Supercomputing Applications to demonstrate strong-scaling behavior for production runs of compact-object mergers and core-collapse scenarios investigated by groups at Caltech and the Max Planck Institute for Gravitational Physics.
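The essence of cost-based load balancing can be sketched with a greedy assignment: give each block to the currently least-loaded rank, largest blocks first. This toy version (not Carpet's actual scheduler, which uses space-filling-curve-style distribution) shows why per-rank work stays roughly even:

```python
# Toy cost-based load balancing (not Carpet's scheduler): assign AMR
# blocks to ranks greedily, always choosing the least-loaded rank.

import heapq

def balance(block_costs, n_ranks):
    """Return (cost, rank) assignments using a greedy min-heap strategy."""
    heap = [(0.0, rank) for rank in range(n_ranks)]  # (current load, rank)
    heapq.heapify(heap)
    assignment = []
    for cost in sorted(block_costs, reverse=True):   # biggest blocks first
        load, rank = heapq.heappop(heap)             # least-loaded rank
        assignment.append((cost, rank))
        heapq.heappush(heap, (load + cost, rank))
    return assignment

blocks = [8.0, 7.0, 6.0, 5.0, 4.0]   # e.g. cell counts per block
assignment = balance(blocks, 2)
loads = {}
for cost, rank in assignment:
    loads[rank] = loads.get(rank, 0.0) + cost
```

Dynamic rebalancing then amounts to rerunning such an assignment after each regrid, when blocks are created or destroyed, and migrating data accordingly.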
Carpet is widely used in simulations of binary black hole mergers, neutron-star mergers, core-collapse supernovae, and scalar-field evolutions, supporting science led by collaborations such as the LIGO Scientific Collaboration, the Einstein Toolkit consortium, and research teams at MIT, Princeton University, and the University of Cambridge. It enables targeted high-resolution studies of gravitational-wave emission in work connected to the Laser Interferometer Gravitational-Wave Observatory and multimessenger modeling tied to observatories like the European Southern Observatory. Other use cases include relativistic magnetohydrodynamics in accretion-disk studies influenced by researchers at the Max Planck Institute for Astrophysics and scalar-field critical collapse studies developed in groups at the University of Texas at Austin.
Validation suites for Carpet-driven codes include convergence tests for the BSSN formalism, exact-solution comparisons such as single black-hole evolutions and TOV-star stability, and shock-tube tests used in hydrodynamics validation workflows curated by the Einstein Toolkit community. Benchmarks demonstrate accuracy and efficiency against standard problems from the computational-relativity literature produced by authors at Caltech, the University of Illinois Urbana–Champaign, and the University of Michigan. Scaling studies report performance on leadership systems at centers including the National Energy Research Scientific Computing Center and compare throughput for I/O and timestepping to quantify efficiency gains from AMR and parallel optimizations.
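A convergence test of the kind these suites run can be summarized in a few lines: solve the same problem at resolutions h, h/2, h/4 and measure the observed order p = log2(|e_h − e_{h/2}| / |e_{h/2} − e_{h/4}|). In this sketch the "solver" is a stand-in with a known second-order error term, not an actual evolution code:

```python
# Convergence-order measurement, with a hypothetical stand-in solver
# whose error is exactly O(h^2); a real test would run the full code.

import math

def solve(h):
    exact = 1.0
    return exact + 3.0 * h ** 2  # stand-in: second-order error term

e1 = solve(0.1) - 1.0
e2 = solve(0.05) - 1.0
e3 = solve(0.025) - 1.0
order = math.log2(abs(e1 - e2) / abs(e2 - e3))
# order is ~2 for a second-order-accurate scheme
```

The three-resolution form is useful in practice because it does not require knowing the exact solution; for exact-solution tests (e.g. a TOV star), the errors e_h can be computed directly instead.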
Development of Carpet occurs within an open-science ecosystem centered on the Einstein Toolkit and the Cactus framework, with contributions from researchers at institutions such as the Albert Einstein Institute, Caltech, the University of Illinois Urbana–Champaign, and the Max Planck Society. The community coordinates via workshops, code sprints, and mailing lists tied to conferences like the International Conference for High Performance Computing, Networking, Storage and Analysis and meetings organized by the American Physical Society and the International Society on General Relativity and Gravitation. Training materials and user support are provided through community tutorials and collaboration-led repositories maintained by teams at Caltech, AEI Potsdam, and partner universities.
Category:Adaptive mesh refinement
Category:Numerical relativity
Category:Scientific simulation software