| Fugaku (supercomputer) | |
|---|---|
| Name | Fugaku |
| Developer | RIKEN; Fujitsu |
| Introduced | 2020 |
| Precursor | K computer |
| OS | Linux (RHEL-based, customized by Fujitsu) |
| CPU | Fujitsu A64FX |
| Memory | 32 GiB HBM2 per node |
| Interconnect | Tofu interconnect D (6D mesh/torus) |
| FLOPS | 442 PFLOPS (HPL Rmax); 537 PFLOPS (theoretical peak) |
| Power | ~30 MW (under HPL load) |
| Purpose | HPC, AI, simulation |
Fugaku is a Japanese supercomputer developed by RIKEN and Fujitsu as the successor to the K computer; its name is an alternative name for Mount Fuji. Installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Fugaku was built to advance national capabilities in computational science, climate modeling, epidemiology, and artificial intelligence. It achieved leading positions in multiple international rankings and served as research infrastructure for projects involving the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Aerospace Exploration Agency (JAXA), and international collaborations.
Fugaku originated in the Flagship 2020 Project (known during development as "Post-K"), launched in 2014 by MEXT with RIKEN and Fujitsu, drawing on lessons from the K computer project and on input from Japanese research institutions including the RIKEN Center for Computational Science, the University of Tokyo, and Kyoto University. Fujitsu worked with Arm on the Scalable Vector Extension (SVE) that the processor implements, and the A64FX chip was fabricated on TSMC's 7 nm process. Procurement and deployment were coordinated with national strategies such as Society 5.0, with Japanese industrial partners participating in the surrounding application and software ecosystem.
The Fugaku system is built around the Fujitsu A64FX, an Armv8.2-A processor with 48 compute cores plus assistant cores, implementing the 512-bit Scalable Vector Extension (SVE) and packaging 32 GiB of HBM2 that delivers roughly 1 TB/s of memory bandwidth per node. Its 158,976 nodes connect through Fujitsu's Tofu interconnect D (TofuD) in a six-dimensional mesh/torus topology. Storage is provided by a multi-tier system built on a Lustre-derived parallel file system (Fujitsu's FEFS) with node-local SSD caching. The chassis, racks, and power delivery were engineered to the power and floor constraints of the RIKEN Kobe data center.
Fugaku debuted at No. 1 on the June 2020 TOP500 list and, after a system expansion, reached 442 PFLOPS on HPL; it simultaneously led the HPCG, Graph500, and HPL-AI rankings, and held the TOP500 top spot until Frontier displaced it in June 2022. Its predecessors at the top of the list included Summit, Sierra, Sunway TaihuLight, and Tianhe-2. Performance studies highlighted its strengths in memory bandwidth, vectorized throughput, and low-latency interconnect for large-scale simulations.
Fugaku runs a Linux-based environment tailored by Fujitsu and RIKEN, supporting compilers and toolchains from Fujitsu, the GNU Project, and LLVM, with parallel programming through the OpenMP and MPI standards. Because the machine is CPU-only, programming emphasizes hybrid MPI+OpenMP parallelism and SVE vectorization, via compiler auto-vectorization or Arm C Language Extensions (ACLE) intrinsics. The development stack includes scientific libraries, optimized math kernels implementing the BLAS and LAPACK interfaces, and middleware shared with other large HPC centers.
Fugaku supported a broad portfolio of applications: pandemic simulation (notably COVID-19 droplet-dispersion studies) in coordination with the Ministry of Health, Labour and Welfare (Japan) and public-health researchers; climate and weather modeling aligned with Japan Meteorological Agency needs; disaster-resilience simulations involving agencies such as the Cabinet Office (Japan) and urban-planning institutes; materials-science studies in collaboration with RIKEN, Tohoku University, and the National Institute for Materials Science; and AI-driven drug discovery connected to pharmaceutical groups such as Takeda Pharmaceutical Company. International collaborative projects linked Fugaku users with researchers at Imperial College London, the Massachusetts Institute of Technology, ETH Zurich, the Max Planck Society, and CNRS for cross-disciplinary simulations.
Energy efficiency was a design goal: an A64FX prototype topped the Green500 list in November 2019 at 16.9 GFLOPS/W, and the full system remained efficient for a CPU-only machine of its scale, aided by power-aware scheduling and node-level optimizations. Cooling at the RIKEN Kobe facility combines direct water cooling of the processors with chilled-water plant, and power and thermal telemetry are integrated into facility monitoring and controls.
Fugaku entered full operation in March 2021, with allocations administered by RIKEN through Japan's HPCI program, supporting academic, industrial, and government research via peer review comparable to allocation systems such as PRACE and XSEDE. Its legacy includes informing exascale planning, shaping processor and interconnect roadmaps at Fujitsu and elsewhere, and seeding software ecosystems across universities including Osaka University, Nagoya University, and Hokkaido University. Work performed on the system has appeared in venues such as Nature, Science, and the proceedings of the SC conference, leaving a lasting mark on computational science, AI research, and national HPC strategy.