| JUWELS | |
|---|---|
| Name | JUWELS |
| Caption | Modular supercomputing system |
| Manufacturer | Atos |
| Release | 2018 (Cluster Module); 2020 (Booster Module) |
| CPU | Intel Xeon Scalable (Cluster Module), AMD EPYC (Booster Module) |
| GPU | NVIDIA A100 (Booster Module) |
| Memory | 96–512 GB per node (module- and node-type-dependent) |
| Storage | IBM Spectrum Scale (GPFS) parallel filesystem (JUST storage cluster) |
| Speed | ≈44 PFLOP/s HPL (Booster Module); tens of petaflops combined |
| Power | module-dependent (megawatt scale) |
| Location | Forschungszentrum Jülich, Germany |
# JUWELS
JUWELS (Jülich Wizard for European Leadership Science) is a modular high-performance computing system installed at Forschungszentrum Jülich in Germany. It serves as a national and European resource for scientific research, enabling computations in physics, chemistry, climate science, neuroscience, and engineering. The system integrates multiple compute technologies for flexible scaling and supports collaborative projects with research institutions such as RWTH Aachen University and the Max Planck Society.
JUWELS operates as a tier-0 supercomputer within the European HPC ecosystem, with access coordinated through initiatives such as EuroHPC and PRACE. The system provides resources to projects funded by the German Federal Ministry of Education and Research and the European Commission, and hosts users from institutions including the University of Cologne, the Technical University of Munich, and Helmholtz Association centers. JUWELS combines a general-purpose CPU module with an accelerator-driven booster module to address workloads ranging from grand-challenge simulations to data-intensive analysis pipelines.
The modular architecture separates a Cluster Module based on Intel Xeon Scalable processors from an accelerator-rich Booster Module that pairs AMD EPYC processors with NVIDIA A100 GPUs. Both modules are connected through a high-speed Mellanox InfiniBand fabric. Storage and I/O are served by the site-wide JUST storage cluster, which runs the IBM Spectrum Scale (GPFS) parallel filesystem. Jobs are managed with the Slurm workload manager, and the software environment integrates compilers, libraries, and tooling from Intel, AMD, NVIDIA, and Atos.
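As a hedged illustration of how work is submitted to a Slurm-managed system of this kind, the following Python sketch generates and submits a batch script with `sbatch`. The partition name, account, and application script are hypothetical placeholders, not JUWELS's actual configuration.

```python
import subprocess
import tempfile

# A minimal sketch of submitting a GPU job to a Slurm-managed system.
# Partition ("booster"), account ("demo"), and my_app.py are illustrative
# assumptions, not the site's actual configuration.
BATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=gemm-demo
#SBATCH --partition=booster     # hypothetical GPU partition name
#SBATCH --account=demo          # hypothetical compute-time account
#SBATCH --nodes=1
#SBATCH --gres=gpu:4            # request four GPUs on the node
#SBATCH --time=00:10:00

srun python my_app.py           # my_app.py is a placeholder application
"""

def submit(script: str) -> str:
    """Write the batch script to a temporary file and submit it via sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    # On success, sbatch prints e.g. "Submitted batch job 123456".
    result = subprocess.run(["sbatch", path], capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit(BATCH_SCRIPT))
```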
JUWELS performance is assessed with LINPACK (HPL), HPCG, and application-level benchmarks. Its LINPACK results have placed it among the leading European systems on the TOP500 list, while application benchmarks span codes such as GROMACS, NAMD, VASP, Quantum ESPRESSO, and OpenFOAM. Performance tuning draws on optimized libraries including Intel MKL, AMD BLIS, and NVIDIA cuBLAS, together with MPI- and OpenMP-based parallel programming models. Benchmarking collaborations with PRACE and EuroHPC centers support reproducibility against the standards set by the TOP500 and Green500 lists.
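To make the benchmarking idea concrete, here is a minimal, self-contained Python sketch of a LINPACK-style throughput measurement: it times a double-precision matrix multiply through NumPy, which dispatches to the underlying BLAS (Intel MKL, OpenBLAS, or similar), and converts the runtime into a FLOP rate. The matrix size is an arbitrary illustrative choice.

```python
import time
import numpy as np

# Time a double-precision matrix multiply (DGEMM); NumPy dispatches the
# operation to the system BLAS library. N = 4096 is arbitrary.
N = 4096
a = np.random.rand(N, N)
b = np.random.rand(N, N)

a @ b  # warm-up run to exclude one-time setup cost from the timing

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# A dense N x N matrix multiply performs about 2*N^3 floating-point ops.
flops = 2 * N**3
print(f"DGEMM: {flops / elapsed / 1e9:.1f} GFLOP/s in {elapsed:.3f} s")
```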
JUWELS was developed by Forschungszentrum Jülich in partnership with Atos and other European vendors, building on Jülich's previous systems JUROPA and JUQUEEN. The initial 2018 deployment delivered the CPU-based Cluster Module; the GPU-accelerated Booster Module followed in 2020. Project milestones include procurement phases coordinated with federal agencies and collaborations with European research infrastructure programs, and upgrades have expanded GPU counts, increased memory per node, and modernized the software stack in step with the NVIDIA, AMD, and Intel roadmaps.
Researchers use JUWELS for large-scale molecular dynamics, for climate modeling in collaboration with the European Centre for Medium-Range Weather Forecasts and DKRZ, for astrophysics simulations, and for materials science studies. The system also supports machine learning and AI workflows, bioinformatics pipelines, and engineering simulations. Cross-institutional collaborations draw users from the University of Oxford, Sorbonne Université, KU Leuven, and the Barcelona Supercomputing Center.
Governance of JUWELS rests with the Jülich Supercomputing Centre at Forschungszentrum Jülich, coordinated with national bodies such as the Helmholtz Association and with funding agencies including the German Federal Ministry of Education and Research and the European Commission. Operational partnerships involve Atos as the principal vendor, with procurement conducted within the frameworks of the Gauss Centre for Supercomputing. Access policies follow allocation models used by PRACE and EuroHPC, national allocation committees such as those of JARA, and institutional agreements with universities such as RWTH Aachen and the University of Bonn.
Energy efficiency measures for JUWELS combine warm-water cooling with power management coordinated across the Jülich site's installations. Efficiency optimizations use workload scheduling, dynamic voltage and frequency scaling, and concepts for reusing waste heat. Assessments track the FLOPS-per-watt metrics reported to the Green500 list and the sustainability targets of the Helmholtz Association and European Commission programs.
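The Green500 metric referred to above is sustained HPL performance divided by average power draw during the run. A minimal Python sketch of that calculation follows; the input values are illustrative placeholders, not measured JUWELS figures.

```python
# Green500-style efficiency: HPL Rmax divided by average power draw.
# The example numbers below are hypothetical, not measured JUWELS data.

def gflops_per_watt(rmax_pflops: float, power_kw: float) -> float:
    """Convert HPL Rmax (PFLOP/s) and power (kW) to GFLOP/s per watt."""
    gflops = rmax_pflops * 1e6   # 1 PFLOP/s = 1e6 GFLOP/s
    watts = power_kw * 1e3       # 1 kW = 1e3 W
    return gflops / watts

if __name__ == "__main__":
    # Hypothetical values for a GPU-accelerated module.
    print(f"{gflops_per_watt(rmax_pflops=44.0, power_kw=1800.0):.1f} GFLOP/s/W")
```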