| TSUBAME | |
|---|---|
| Name | TSUBAME |
| Developer | Tokyo Institute of Technology; Fujitsu; NVIDIA Corporation; Intel Corporation |
| Manufacturer | Fujitsu; NVIDIA Corporation |
| Introduced | 2006 |
| Predecessors | TSUBAME 1.0 |
| Successors | TSUBAME 3.0 |
| Country | Japan |
| Location | Tokyo |
| Operators | Tokyo Institute of Technology |
| Power | 1–4 MW (varies by generation) |
| Storage | petabyte-class parallel file systems |
| Memory | multi-terabyte aggregate |
| CPU | Intel Xeon (various generations) |
| GPU | NVIDIA Tesla (various generations) |
| OS | Linux |
| Applications | climate modeling; computational fluid dynamics; artificial intelligence; materials science |
TSUBAME is a series of Japanese supercomputers at the Tokyo Institute of Technology, designed for high-performance computing, deep learning, and computational science. Conceived to accelerate research across institutions such as the University of Tokyo, RIKEN, and industrial partners like NEC and Panasonic Corporation, the TSUBAME installations combined processors from Intel Corporation and accelerators from NVIDIA Corporation with storage and interconnects from vendors including Fujitsu and Mellanox Technologies. The project contributed to national initiatives inspired by programs at Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, and by international benchmarks such as the TOP500 list.
TSUBAME series systems were built to support multidisciplinary workloads spanning collaborations with the Japan Aerospace Exploration Agency, Mitsubishi Heavy Industries, Hitachi, and research groups at Keio University and Osaka University. The systems emphasized heterogeneous architectures combining CUDA-capable accelerators with x86 server nodes, mirroring trends visible in systems at Argonne National Laboratory and Sandia National Laboratories. TSUBAME installations were prominent in Asia alongside machines such as the K computer and, later, Fugaku, and engaged with global consortia such as the PRACE community and IEEE technical societies.
Development began in the mid-2000s with TSUBAME 1.0, followed by iterative upgrades driven by collaborations involving the Tokyo Institute of Technology, Fujitsu, and NVIDIA Corporation. The program's roadmap responded to shifts in accelerator strategy seen at the National Center for Supercomputing Applications and to architectures from Cray Research and IBM. Major milestones included the adoption of NVIDIA Tesla GPUs in TSUBAME 1.2, the deployment of InfiniBand fabrics from Mellanox Technologies mirroring deployments at Lawrence Berkeley National Laboratory, and successive procurement cycles shaped by national science policy under the Ministry of Education, Culture, Sports, Science and Technology (Japan). Entries on the TOP500 and energy-efficiency rankings such as the Green500 documented TSUBAME's evolution.
TSUBAME combined multi-core Intel Xeon CPUs with many-core accelerators from NVIDIA Corporation in dense server racks supplied by vendors such as Fujitsu, connected with fabrics from Mellanox Technologies. Storage subsystems used parallel file systems analogous to Lustre and integrated with tape archives similar to infrastructure at the National Institute of Informatics (Japan). Network topologies adopted fat-tree and hybrid interconnects paralleling designs used by Oak Ridge National Laboratory and Argonne National Laboratory. The management software stack incorporated Linux distributions, resource managers comparable to SLURM, and parallel libraries such as MPI and OpenMP, supporting toolchains like TensorFlow, PyTorch, and scientific suites used by groups at the University of California, Berkeley and the Massachusetts Institute of Technology.
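To give a sense of why fat-tree interconnects suit large clusters: a standard three-level fat tree built from k-port switches can attach k³/4 hosts at full bisection bandwidth. The sketch below works through that generic arithmetic; it describes the textbook k-ary fat tree, not TSUBAME's actual topology or switch counts.

```python
def fat_tree_capacity(k: int) -> dict:
    """Host and switch counts for a three-level k-ary fat tree
    built from k-port switches (k must be even), at full
    bisection bandwidth."""
    assert k % 2 == 0, "port count must be even"
    return {
        # k pods, each with k/2 edge switches serving k/2 hosts apiece
        "hosts": k**3 // 4,
        "edge_switches": k * k // 2,
        "aggregation_switches": k * k // 2,
        "core_switches": k * k // 4,
    }

# Hypothetical example with 36-port switches, a common InfiniBand radix:
caps = fat_tree_capacity(36)
print(caps["hosts"])  # 11664 hosts at full bisection bandwidth
```

The cubic growth in host count is why fat trees scaled well for GPU-dense clusters: doubling the switch radix multiplies the attachable node count by eight.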
TSUBAME's TOP500 entries measured performance on the LINPACK benchmark, and the systems remained competitive during their deployment windows alongside regional peers such as the K computer and, in later years, Fugaku. Energy-efficiency assessments compared favorably on the Green500 when leveraging GPU acceleration, similar to optimizations pursued at Argonne National Laboratory for heterogeneous codes. Benchmarking also included domain-specific tests: computational fluid dynamics cases resembling workloads from NASA, and climate simulations comparable to runs at the Met Office and the European Centre for Medium-Range Weather Forecasts. Performance tuning drew on profiling techniques established at Lawrence Livermore National Laboratory and compiler optimizations from Intel Corporation.
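The headline TOP500 figure (Rmax) is the sustained rate achieved on the LINPACK dense linear solve, conventionally credited with 2/3·n³ + 2·n² floating-point operations for an n×n system; the Green500 then divides that rate by power draw. The sketch below shows the accounting only, with invented numbers for scale rather than actual TSUBAME measurements.

```python
def linpack_flops(n: int) -> float:
    """Operation count conventionally credited for solving an
    n x n dense linear system (LU factorization plus solve)."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def rmax_gflops(n: int, seconds: float) -> float:
    """Sustained rate in GFLOP/s for a run of the given size and time."""
    return linpack_flops(n) / seconds / 1e9

def gflops_per_watt(rmax: float, watts: float) -> float:
    """Energy-efficiency metric of the kind ranked by the Green500."""
    return rmax / watts

# Hypothetical run: n = 1,000,000 unknowns solved in two hours
rate = rmax_gflops(1_000_000, 2.0 * 3600)   # about 9.26e4 GFLOP/s
eff = gflops_per_watt(rate, 1.0e6)          # against a 1 MW draw
```

Note that Rmax depends on both problem size and wall time, which is why TOP500 submissions tune n to the machine's memory capacity before timing the solve.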
Research on TSUBAME spanned climate science in collaboration with the Japan Meteorological Agency, materials discovery connected to groups at Tohoku University, computational chemistry similar to projects at the California Institute of Technology, and machine learning research using frameworks pioneered by teams at Google and Facebook. Industrial partnerships enabled simulations for Mitsubishi Heavy Industries and product design workflows at Panasonic Corporation and Canon Inc. Academic use included student training programs modeled after curricula at Stanford University and the University of Oxford, and joint projects with national organizations such as RIKEN and JAXA.
Operation and maintenance were overseen by the Tokyo Institute of Technology, with funding from national bodies including the Ministry of Education, Culture, Sports, Science and Technology (Japan) and partnerships with corporations such as Fujitsu and NVIDIA Corporation. Allocation policies resembled the competitive access programs of National Science Foundation-funded centers and included peer-reviewed proposals akin to procedures at European Research Council-backed infrastructures. User support and training leveraged center staff expertise and international collaborations with the Pawsey Supercomputing Centre and Australia's National Computational Infrastructure.
TSUBAME influenced Japan's trajectory toward heterogeneous, GPU-accelerated supercomputing and informed design choices for successors such as TSUBAME 3.0 and broader national systems including Fugaku. Its role in promoting GPU-centric workflows paralleled shifts at Oak Ridge National Laboratory and helped seed research communities in deep learning and accelerated simulation at institutions such as the University of Tokyo and Keio University. The system's operational lessons on cooling, scheduling, and energy efficiency contributed to best practices adopted by vendors including Fujitsu and NVIDIA Corporation, and influenced procurement strategies at research centers internationally, including those associated with PRACE and the TOP500 ecosystem.