| K computer | |
|---|---|
| Name | K computer |
| Developer | Fujitsu; RIKEN |
| Country | Japan |
| Introduced | 2011 |
| Retired | 2019 |
| Architecture | massively parallel; distributed memory; SPARC64 VIIIfx; Tofu 6D mesh/torus interconnect |
| Performance | 8.162 petaflops LINPACK (June 2011); 10.51 petaflops LINPACK (Nov 2011); 11.28 petaflops theoretical peak |
| Storage | FEFS (Lustre-based parallel file system) |
| Power | ~10 MW (9.89 MW during the June 2011 LINPACK run) |
K computer
The K computer (from the Japanese word kei, 京, denoting 10^16) was a Japanese petascale supercomputer developed by Fujitsu in collaboration with RIKEN and installed at the RIKEN Advanced Institute for Computational Science in Kobe. Developed from the mid-2000s and brought into operation in 2011, it displaced China's Tianhe-1A at the top of the TOP500 list in June 2011 and held first place until IBM's Sequoia overtook it in June 2012; later list leaders included Sunway TaihuLight. NEC and Hitachi took part in the early design phase but withdrew in 2009, leaving Fujitsu as sole vendor, and the machine's processor continued the Japanese SPARC design lineage. The project proceeded in parallel with petascale efforts at Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and Argonne National Laboratory.
The project was funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) under the national Next-Generation Supercomputer Project, managed by RIKEN, and implemented by Fujitsu. K aimed to deliver sustained petaflops performance for scientific workloads such as climate and weather modeling, molecular dynamics and life-science simulation, and engineering analysis for Japanese industry.
K employed a massively parallel, distributed-memory architecture built from custom nodes based on Fujitsu's eight-core SPARC64 VIIIfx processor clocked at 2.0 GHz. The full system integrated 88,128 processors (705,024 cores) across 864 cabinets, with one processor and 16 GB of memory per node, linked by Fujitsu's Tofu interconnect, a six-dimensional mesh/torus network. Storage was provided by FEFS, Fujitsu's Lustre-based parallel file system, together with high-throughput I/O subsystems, and the densely packed cabinets used direct water cooling.
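As a sanity check on these specifications, peak throughput follows directly from the clock rate, the per-core issue width, and the core count: each SPARC64 VIIIfx core performs 8 floating-point operations per cycle at 2.0 GHz, or 16 GFLOPS. The short sketch below reproduces that arithmetic from the published figures; it is an illustration of the calculation, not output from any K system tool.

```c
#include <stdio.h>

int main(void) {
    const double ghz             = 2.0;     /* SPARC64 VIIIfx clock (GHz)   */
    const double flops_per_cycle = 8.0;     /* FP operations per core/cycle */
    const long   cores           = 705024;  /* cores in the Nov 2011 config */

    double gflops_per_core = ghz * flops_per_cycle;         /* = 16 GFLOPS  */
    double rpeak_pflops    = gflops_per_core * cores / 1.0e6;

    printf("Per-core peak: %.0f GFLOPS\n", gflops_per_core);
    printf("System Rpeak : %.2f PFLOPS\n", rpeak_pflops);   /* ~11.28      */
    return 0;
}
```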
K reached the top of the TOP500 in June 2011 with a LINPACK result of 8.162 petaflops at 93.0 percent computing efficiency, and in November 2011 the full 705,024-core configuration became the first system to exceed 10 petaflops on LINPACK, delivering 10.51 petaflops against a theoretical peak of 11.28 petaflops. Benchmarking included comparisons with contemporaries such as IBM's Blue Gene/Q and the Cray XT5, and K's combination of speed and energy efficiency drew attention on the Green500 list and at HPC conferences such as SC and the International Supercomputing Conference.
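The often-quoted 93 percent computing efficiency is simply the ratio of the achieved LINPACK rate to the theoretical peak. A minimal check, using the November 2011 figures cited above:

```c
#include <stdio.h>

int main(void) {
    const double rmax_pflops  = 10.51;  /* LINPACK Rmax, Nov 2011 */
    const double rpeak_pflops = 11.28;  /* theoretical peak       */

    /* Computing efficiency = achieved rate / theoretical peak. */
    printf("LINPACK efficiency: %.1f%%\n",
           100.0 * rmax_pflops / rpeak_pflops);   /* ~93.2% */
    return 0;
}
```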
The system supported the parallel programming models standard in the HPC ecosystem, chiefly MPI for communication between nodes combined with OpenMP threading within a node. The software stack included Fujitsu's optimizing Fortran, C, and C++ compilers, tuned numerical libraries, and middleware conforming to the standards maintained by the OpenMP Architecture Review Board and the MPI Forum. Application codes from Japanese and international university and laboratory collaborations were ported and profiled to exploit K's architecture.
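To make the programming model concrete, the sketch below shows the hybrid MPI-plus-OpenMP pattern typical of codes run on systems like K, with MPI ranks communicating between nodes and OpenMP threads sharing memory within a node. It is a generic illustration (a distributed sum invented for this example), not code from K's software stack, and assumes only standard MPI and OpenMP.

```c
/* Build with an MPI compiler wrapper, e.g.: mpicc -fopenmp hybrid.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Request thread support; only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Partition 1..n across ranks; the last rank takes the remainder. */
    const long n = 1000000;
    long chunk = n / nranks;
    long lo = (long)rank * chunk + 1;
    long hi = (rank == nranks - 1) ? n : lo + chunk - 1;

    /* Each rank sums its slice with OpenMP threads. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i <= hi; i++)
        local += (double)i;

    /* Combine the per-rank partial sums on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum(1..%ld) = %.0f (expected %.0f)\n",
               n, total, 0.5 * (double)n * (double)(n + 1));

    MPI_Finalize();
    return 0;
}
```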
K was deployed in a purpose-built facility at RIKEN in Kobe with dedicated power and cooling infrastructure. Power consumption was on the order of ten megawatts, roughly 9.89 MW during the June 2011 LINPACK run, which motivated sustained work on energy efficiency in floor layout, water-cooling loops, and power delivery, as well as comparative studies of power usage effectiveness (PUE) and thermal management against other large installations.
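The energy-efficiency figure tracked by the Green500 is achieved LINPACK performance divided by the power drawn during the run. Using the commonly reported June 2011 numbers for K (8.162 PFLOPS at about 9.89 MW), the metric works out as follows; the inputs are reported values, not measurements reproduced here.

```c
#include <stdio.h>

int main(void) {
    const double rmax_mflops = 8.162e9;  /* 8.162 PFLOPS in MFLOPS          */
    const double power_watts = 9.89e6;   /* ~9.89 MW during the LINPACK run */

    /* Green500 metric: MFLOPS delivered per watt consumed. */
    printf("Efficiency: %.0f MFLOPS/W\n", rmax_mflops / power_watts);
    return 0;
}
```

This works out to roughly 825 MFLOPS/W, which placed K among the more energy-efficient large systems of its day.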
K supported large-scale simulations across domains including climate and weather studies, seismic and tsunami modeling coordinated with agencies such as the Japan Meteorological Agency, materials science, and bioinformatics and life-science research within RIKEN. Industrial use cases included computational fluid dynamics for Japanese manufacturers and drug-discovery computations with academic and pharmaceutical partners.
K directly informed its successor at RIKEN, the Fujitsu-built Fugaku, which inherited design ideas including the Tofu interconnect lineage. Its operation shaped Japanese science and technology policy discussions and technical roadmaps presented at IEEE and ACM venues, and its legacy persists in HPC curricula at institutions such as the University of Tsukuba and in collaborative frameworks among national laboratories.