| RIKEN K computer | |
|---|---|
| Name | K computer |
| Developer | RIKEN, Fujitsu |
| Introduced | 2011 |
| Discontinued | 2019 |
| Type | Supercomputer |
| CPU | SPARC64 VIIIfx processors |
| Cores | 705,024 |
| Peak | 11.28 petaflops (theoretical); 10.51 petaflops (LINPACK, November 2011) |
| Memory | 1.4 PB |
| Storage | 11 PB (local), 30 PB (global), Lustre-based |
| Power | 12.7 MW |
| Location | Kobe, Hyōgo Prefecture, Japan |
The K computer was a Japanese flagship supercomputer developed by RIKEN in collaboration with Fujitsu and installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe. Designed for large-scale simulations in fields such as climate change, seismology, astrophysics, and materials science, it achieved world-leading performance milestones and influenced subsequent designs in high-performance computing. The system combined massive parallelism with energy-efficient engineering to tackle computational problems for national research programs and international collaborations.
The project was conceived by RIKEN and executed by Fujitsu with support from the Ministry of Education, Culture, Sports, Science and Technology (Japan), with the aim of giving Japanese researchers access to world-leading petascale computing. The machine used 705,024 SPARC64 cores across tens of thousands of nodes, interconnected via a proprietary network, and operated within the RIKEN Advanced Institute for Computational Science facility. The platform was made available to scientific communities through peer-reviewed allocation programs involving institutions such as the University of Tokyo, Kyoto University, Tohoku University, and international partners.
The hardware architecture centered on the SPARC64 VIIIfx processor, developed by Fujitsu on the SPARC instruction set architecture and optimized for double-precision floating-point throughput. Each compute node contained a single eight-core SPARC64 VIIIfx chip with on-chip caches and an integrated memory controller, paired with 16 GB of system memory built from commodity DRAM. The interconnect was Tofu, a proprietary six-dimensional mesh/torus network designed by Fujitsu to provide low-latency, high-bandwidth links across the machine's 88,128 compute nodes. Storage and I/O subsystems employed a Lustre-based parallel filesystem; applications were programmed with MPI- and OpenMP-based models, and the software stack included Fujitsu compilers and tools, numerical libraries built on tuned BLAS implementations, and job-scheduling middleware drawing on concepts from systems such as TORQUE and PBS Professional.
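The torus topology matters for how applications map onto the machine: nearest-neighbor communication stays local on the network. Below is a minimal sketch of neighbor addressing on a 6-D torus, assuming a simple per-dimension wraparound scheme; the shape used is purely illustrative, and the real Tofu network's coordinate system, link redundancy, and routing are considerably more elaborate.

```python
# Hypothetical sketch of neighbor addressing on a 6-D torus. The shape below
# is a toy example, not the actual Tofu configuration.

def torus_neighbors(coord, shape):
    """Return the 12 neighbor coordinates (+/-1 in each of 6 dimensions)."""
    neighbors = []
    for dim in range(len(shape)):
        for step in (-1, 1):
            n = list(coord)
            n[dim] = (n[dim] + step) % shape[dim]  # wrap around the torus
            neighbors.append(tuple(n))
    return neighbors

# Example: every node has 12 direct neighbors, so nearest-neighbor
# stencil exchanges never cross more than one network hop.
shape = (4, 4, 4, 3, 3, 3)
print(torus_neighbors((0, 0, 0, 0, 0, 0), shape))
```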
On the June 2011 TOP500 list, the system ranked first with 8.162 petaflops on the LINPACK benchmark; in November 2011 it became the first machine to exceed 10 petaflops, delivering 10.51 petaflops against a theoretical peak of 11.28 petaflops. The machine's LINPACK result demonstrated scaling efficiency for dense linear algebra at unprecedented scales for its time, drawing comparisons with contemporaries such as the IBM Blue Gene/Q and Cray XT5. The system also ranked well on the Green500 list for a machine of its size, and its roughly 12.7 MW power draw motivated water-cooling measures comparable to approaches used at facilities like Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. The architecture's performance on application benchmarks such as HPC Challenge and domain-specific codes showed strengths for stencil computations and structured-grid solvers, paralleling research efforts at institutions including Los Alamos National Laboratory and Argonne National Laboratory.
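These headline figures are internally consistent: each SPARC64 VIIIfx chip delivers 128 double-precision gigaflops (8 cores × 2 GHz × 8 flops per cycle per core), so 88,128 nodes give an 11.28-petaflop peak, and the 10.51-petaflop LINPACK run corresponds to roughly 93% efficiency. A quick back-of-the-envelope check:

```python
# Sanity check of the K computer's headline performance figures.
nodes = 88_128                 # compute nodes, one SPARC64 VIIIfx chip each
cores_per_node = 8
clock_hz = 2.0e9               # 2 GHz clock frequency
flops_per_cycle = 8            # double-precision flops per core per cycle

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e15:.2f} PFLOPS")   # ~11.28 PFLOPS

linpack = 10.51e15             # November 2011 TOP500 Rmax
print(f"LINPACK efficiency: {linpack / peak_flops:.1%}")     # ~93%
```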
Researchers used the system for multi-physics simulations involving coupled models in climate change research, hurricane and ocean modelling akin to projects at NOAA, mantle convection and earthquake rupture simulations comparable with work at the United States Geological Survey, and direct numerical simulations in turbulence and aerodynamics similar to studies from NASA. In materials science and chemistry, electronic structure calculations and first-principles molecular dynamics exploited the machine's massive parallelism, echoing methodologies from groups at the Max Planck Society and Lawrence Berkeley National Laboratory. Large-scale cosmological simulations and galaxy formation projects drew on algorithms developed by teams at institutions such as CERN and Princeton University, while bioinformatics and genomics workflows resembled computational pipelines used by Broad Institute researchers. Allocation programs engaged academic and industrial partners, including Toyota, NEC, and pharmaceutical collaborators, for design optimization and data-intensive investigations.
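As an illustration of the structured-grid workloads mentioned above, the sketch below implements a toy Jacobi relaxation with a 5-point stencil in NumPy. It is illustrative only, not code from any K computer application; on the real machine such sweeps were domain-decomposed across nodes, with MPI halo exchanges along the torus links.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi relaxation sweep: average the four nearest neighbors.
    This 5-point stencil is the archetype of the structured-grid kernels
    that scaled well on torus-interconnected machines like the K computer."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return new

# Toy example: fixed boundary values, interior relaxed toward equilibrium.
u = np.zeros((64, 64))
u[0, :] = 1.0                  # heated top edge as a boundary condition
for _ in range(500):
    u = jacobi_step(u)
print(f"Interior mean after 500 sweeps: {u[1:-1, 1:-1].mean():.4f}")
```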
Design and deployment involved collaboration between RIKEN and Fujitsu starting in the mid-2000s, with prototype validation influenced by earlier systems such as the Earth Simulator and lessons from projects at the University of Tsukuba. The machine entered service in 2011 and ran production workloads under a merit-based access policy administered by RIKEN; it was maintained and periodically upgraded until operations ceased in August 2019, when it made way for its successor, the Fugaku supercomputer. During its operational lifespan, the platform supported international collaborations and hosted workshops involving organizations such as the IEEE, SC Conference attendees, and researchers from the European Centre for Medium-Range Weather Forecasts and other centres. Decommissioning followed planned transition paths to newer architectures while preserving scientific outputs in community repositories and in journals such as Nature, Science, and the Journal of Computational Physics.
The project influenced subsequent exascale roadmaps and designs pursued by vendors including Fujitsu, NEC, Cray (now HPE), and IBM through its demonstrated scaling of multicore nodes, network topologies, and power-efficient practices. Technologies and software optimizations developed for the machine informed research at national laboratories such as the Jülich Research Centre and Argonne National Laboratory, and contributed to community benchmarking efforts such as TOP500 and Green500. Its impact is evident in later Japanese systems and in global exascale projects funded by governments and institutions, including European Commission initiatives and U.S. Department of Energy programs, shaping both hardware co-design philosophies and large-scale scientific application development.