| FLASH (code) | |
|---|---|
| Name | FLASH |
| Title | FLASH (code) |
| Developer | University of Chicago Flash Center for Computational Science; University of Illinois Urbana-Champaign; DOE laboratories |
| Released | 2000 |
| Programming language | Fortran, C |
| Operating system | Unix-like |
| Platform | x86, x86_64, ARM |
| Genre | Astrophysical hydrodynamics, magnetohydrodynamics, multiphysics simulation |
| License | Custom source-available license |
FLASH (code)
FLASH is a parallel, adaptive mesh refinement (AMR) simulation framework originally developed for astrophysical fluid dynamics and later extended to a wide range of multiphysics problems. It began as a project combining expertise from computational physics, high-performance computing, and numerical analysis and has been used in studies spanning supernovae, galaxy clusters, laboratory astrophysics, and inertial confinement fusion. FLASH couples hydrodynamics, magnetohydrodynamics, gravity, radiation transport, nuclear reactions, and microphysics within a modular, extensible codebase that targets large-scale clusters and supercomputers.
Development of the code started at the University of Chicago in the late 1990s under the Department of Energy's Accelerated Strategic Computing Initiative (ASCI) Academic Strategic Alliances Program, with collaborators at the University of Illinois Urbana-Champaign and national laboratories including Los Alamos and Lawrence Livermore. The first public description appeared in 2000 (Fryxell et al., ApJS 131, 273), and the code has since been exercised in verification and benchmarking campaigns alongside codes such as Athena, Enzo, and CASTRO. Contributors affiliated with the Princeton Plasma Physics Laboratory, Stanford University, and Argonne National Laboratory added modules for magnetohydrodynamics, self-gravity, and radiation. Over subsequent decades FLASH has been applied to problems tied to experiments at facilities such as the National Ignition Facility and to interpreting observations from the Chandra X-ray Observatory and the Hubble Space Telescope.
FLASH employs a block-structured adaptive mesh refinement paradigm, originally implemented on top of the PARAMESH library developed at NASA's Goddard Space Flight Center. The architecture separates physics into modular units that can be composed for a given problem: core infrastructure handles I/O, mesh management, and parallel communication, while physics units implement hydrodynamics, magnetohydrodynamics, gravity, and microphysics. The codebase is written primarily in Fortran and C and runs on systems ranging from campus clusters to leadership-class machines at facilities such as Oak Ridge National Laboratory and Argonne National Laboratory. FLASH's setup and configuration system lets researchers assemble problem-specific simulations from these units without modifying the core code.
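The block-structured AMR idea can be illustrated with a toy 1D sketch (not FLASH's actual PARAMESH machinery): fixed-size blocks are flagged by a simplified Löhner-type error estimator, a normalized second difference of the solution, and split into half-width children where the estimator is large. The class names, block size, and thresholds below are all illustrative.

```python
import numpy as np

class Block:
    """A fixed-size mesh block: the unit of refinement in block-structured AMR."""
    NX = 8  # cells per block, the same at every refinement level

    def __init__(self, level, x_lo, x_hi):
        self.level = level
        self.x_lo, self.x_hi = x_lo, x_hi
        dx = (x_hi - x_lo) / self.NX
        self.x = x_lo + dx * (np.arange(self.NX) + 0.5)  # cell centers

def needs_refinement(u, threshold=0.5, floor=1e-4):
    """Simplified Löhner-type estimator: normalized second difference,
    with a floor in the denominator so flat regions are never flagged."""
    num = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])
    den = np.abs(u[2:] - u[1:-1]) + np.abs(u[1:-1] - u[:-2]) + floor
    return np.max(num / den) > threshold

def refine(block):
    """Split a block into two half-width children on the next level."""
    mid = 0.5 * (block.x_lo + block.x_hi)
    return [Block(block.level + 1, block.x_lo, mid),
            Block(block.level + 1, mid, block.x_hi)]

# Refine a two-block base grid around a steep front at x = 0.5.
profile = lambda x: np.tanh((x - 0.5) / 0.02)
blocks = [Block(0, 0.0, 0.5), Block(0, 0.5, 1.0)]
for _ in range(3):  # three refinement passes
    blocks = [child
              for b in blocks
              for child in (refine(b) if needs_refinement(profile(b.x)) else [b])]
```

After the passes, blocks adjacent to the front sit at the finest level while blocks far from it stay coarse; FLASH (via PARAMESH) performs the analogous flagging and splitting on an octree of 3D blocks, with ghost-cell exchange between neighbors.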
FLASH implements Godunov-type shock-capturing hydrodynamic solvers, including the piecewise-parabolic method (PPM), and magnetohydrodynamic solvers based on approximate Riemann solvers with constrained-transport divergence control, comparable to methods used in Athena and PLUTO. Gravity is handled via multigrid and tree-based Poisson solvers, akin to approaches in RAMSES and Enzo. Radiation-transport options include flux-limited diffusion and multigroup schemes similar to those in CASTRO and ZEUS-MP. Nuclear-burning units couple reaction networks to rate compilations such as those maintained by the Joint Institute for Nuclear Astrophysics. Microphysics modules provide thermal conduction, viscosity, and equation-of-state models, including tabulated equations of state produced at Los Alamos and Lawrence Livermore National Laboratories.
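As a concrete, heavily simplified illustration of the Godunov approach, the sketch below solves the classic Sod shock-tube problem for the 1D Euler equations with a first-order HLL approximate Riemann solver. FLASH's production solvers (PPM, unsplit MHD) are far more elaborate; this is a minimal self-contained sketch, not FLASH code.

```python
import numpy as np

GAMMA = 1.4  # ideal-gas adiabatic index

def primitive(U):
    """Conserved (rho, rho*v, E) -> primitive (rho, v, p)."""
    rho, mom, E = U
    v = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * v**2)
    return rho, v, p

def flux(U):
    """Exact Euler flux of a state U."""
    rho, v, p = primitive(U)
    return np.array([rho * v, rho * v**2 + p, (U[2] + p) * v])

def hll_flux(UL, UR):
    """HLL approximate Riemann solver: one intermediate state between
    the fastest left- and right-going wave-speed estimates."""
    rhoL, vL, pL = primitive(UL)
    rhoR, vR, pR = primitive(UR)
    cL, cR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
    sL = min(vL - cL, vR - cR)  # leftmost signal speed estimate
    sR = max(vL + cL, vR + cR)  # rightmost signal speed estimate
    if sL >= 0.0:
        return flux(UL)
    if sR <= 0.0:
        return flux(UR)
    return (sR * flux(UL) - sL * flux(UR) + sL * sR * (UR - UL)) / (sR - sL)

# Sod shock tube: first-order Godunov update on a uniform grid.
N, t_end, cfl = 200, 0.2, 0.8
dx = 1.0 / N
U = np.zeros((N, 3))
U[: N // 2] = [1.0, 0.0, 1.0 / (GAMMA - 1.0)]    # left state: rho=1,     p=1
U[N // 2 :] = [0.125, 0.0, 0.1 / (GAMMA - 1.0)]  # right state: rho=0.125, p=0.1

t = 0.0
while t < t_end:
    rho, v, p = np.array([primitive(u) for u in U]).T
    dt = min(cfl * dx / np.max(np.abs(v) + np.sqrt(GAMMA * p / rho)), t_end - t)
    F = np.array([hll_flux(U[i], U[i + 1]) for i in range(N - 1)])
    U[1:-1] -= dt / dx * (F[1:] - F[:-1])  # conservative interior update
    t += dt
```

By t = 0.2 the solution shows the textbook rarefaction-contact-shock structure; a monotone first-order scheme like this keeps the density within its initial bounds, at the cost of smearing the contact discontinuity.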
FLASH was designed to scale on distributed-memory systems using MPI-based parallelism and domain-decomposition strategies similar to those adopted by the BoxLib and Chombo AMR frameworks. Performance engineering has targeted leadership systems at Oak Ridge National Laboratory (including Summit) and installations at Lawrence Livermore National Laboratory. Load balancing, I/O performance, and checkpoint/restart features draw on experience from projects at NERSC and include optimizations for the Lustre and GPFS parallel file systems deployed at facilities such as Argonne National Laboratory. GPU acceleration and hybrid MPI+OpenMP strategies have been explored in joint efforts with NVIDIA and computational groups at Sandia National Laboratories.
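The domain-decomposition pattern can be sketched without MPI: each "rank" owns a contiguous subdomain padded with ghost (halo) cells that are refilled from its neighbors before every stencil update. In a real run the `halo_exchange` step below would be MPI point-to-point communication (e.g. send/receive pairs); the function names and the three-point smoothing stencil are illustrative.

```python
import numpy as np

NG = 1  # ghost (halo) cells on each side of a rank's subdomain

def decompose(u_global, nranks):
    """Split a periodic 1D field into per-rank subdomains with ghost cells."""
    chunks = np.array_split(u_global, nranks)
    return [np.concatenate(([0.0] * NG, c, [0.0] * NG)) for c in chunks]

def halo_exchange(parts):
    """Fill each rank's ghost cells from its periodic neighbors
    (stands in for the MPI send/receive pairs of a real run)."""
    n = len(parts)
    for r, u in enumerate(parts):
        u[:NG] = parts[(r - 1) % n][-2 * NG:-NG]  # left ghosts <- left nbr
        u[-NG:] = parts[(r + 1) % n][NG:2 * NG]   # right ghosts <- right nbr

def smooth(u):
    """Three-point smoothing stencil applied to interior cells."""
    return 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]

# Compare a decomposed run on 4 "ranks" against the serial reference.
rng = np.random.default_rng(0)
u = rng.random(64)
serial = smooth(np.concatenate((u[-NG:], u, u[:NG])))  # periodic serial result

parts = decompose(u, 4)
halo_exchange(parts)
parallel = np.concatenate([smooth(p) for p in parts])
```

Because the ghost cells carry exactly the neighbor values the stencil needs, the concatenated per-rank results match the serial computation bit for bit; the same invariant is what checkpoint/restart and load balancing must preserve at much larger scale.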
FLASH has been applied to core-collapse supernova simulations, carried out with teams at the University of Arizona and the Max Planck Institute for Astrophysics, modeling instabilities and explosion asymmetries observed by the Chandra X-ray Observatory and inferred from SN 1987A. It underpins models of Type Ia supernova ignition relevant to programs at the Harvard–Smithsonian Center for Astrophysics and the University of California, Berkeley. Laboratory-astrophysics experiments at the National Ignition Facility and the Z machine have used FLASH for design and interpretation. The code has also been used to simulate jet dynamics studied by researchers at Princeton University and galaxy-cluster physics compared with observations from XMM-Newton and the Planck spacecraft. Inertial confinement fusion modeling has linked FLASH outputs to diagnostics developed at Lawrence Livermore National Laboratory and to experimental campaigns at the Laboratory for Laser Energetics.
The FLASH code is developed by a community centered on the Flash Center at the University of Chicago, with contributors from the University of Illinois Urbana-Champaign and collaborators at national laboratories such as Los Alamos and Lawrence Livermore. The project has operated under a mixture of institutional licensing and redistribution policies shaped by funders including the DOE and NSF, with community releases accompanied by documentation and test problems. Training workshops and tutorials have been held at meetings of the American Astronomical Society and the American Physical Society. The user community spans institutions such as Caltech, MIT, Princeton University, and Stanford University and is supported by mailing lists, collaborative repositories, and periodic code sprints.
FLASH is often compared with grid-based AMR codes such as Enzo, RAMSES, and Athena, as well as with particle-based frameworks such as GADGET and the moving-mesh code AREPO. Relative to GADGET's smoothed-particle hydrodynamics, FLASH's Eulerian AMR excels at resolving shocks and contact discontinuities; comparisons with RAMSES highlight differences in mesh implementation and gravity solvers. In radiation-hydrodynamics problems FLASH is benchmarked alongside CASTRO and ZEUS-MP; for magnetized-turbulence studies it is contrasted with PLUTO and Athena. Performance trade-offs among these codes depend on problem setup, resolution, and the target architectures provided by facilities such as Oak Ridge and Argonne National Laboratories.
Category:Computational astrophysics