| Cray Gemini | |
|---|---|
| Name | Cray Gemini |
| Manufacturer | Cray Inc. |
| Active | 2010–2015 |
| Predecessor | SeaStar interconnect |
| Successor | Cray Aries |
| Operating system | Cray Linux Environment |
| Power | ~1.5 MW |
| Speed | ~1 petaFLOPS (peak) |
| Memory | DDR3 |
| Interconnect | Gemini network interface and router |
| Storage | Lustre (file system) |
Cray Gemini was a high-performance interconnect and network architecture developed by Cray Inc. for its Cray XE6 and Cray XK7 supercomputer systems. The technology represented a significant evolution from the previous SeaStar interconnect, focusing on improved scalability and lower latency for massively parallel applications. Its introduction was central to several systems that entered the TOP500 list, marking a key phase in the pursuit of exascale computing.
The development of this interconnect was driven by the need to overcome bottlenecks in earlier systems like the Cray XT5. Engineers at Cray Inc. designed it to provide a more tightly integrated and efficient fabric for linking thousands of AMD Opteron compute nodes and NVIDIA Tesla GPU accelerators. It debuted in 2010 within the Cray XE6 "Baker" system at the United States Department of Energy's National Energy Research Scientific Computing Center. This architecture enabled more effective execution of complex simulations for fields such as climate modeling and computational fluid dynamics.
The architecture comprised a custom application-specific integrated circuit (ASIC) that combined network interface and routing functions on a single chip. This Gemini router ASIC used a 3D torus topology, as the SeaStar-based systems before it had, while delivering higher bisection bandwidth and lower hop latency. Each router served two compute nodes, creating a tightly integrated node-to-router pairing that minimized latency. The design also supported adaptive routing and integrated global address space features, facilitating efficient one-sided operations for partitioned global address space (PGAS) programming models such as Unified Parallel C (UPC) and Coarray Fortran.
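To make the hop-count point concrete, the minimal C sketch below computes the minimal number of hops between two nodes on a 3D torus, where traffic can wrap around each axis in either direction. The torus dimensions are hypothetical, chosen only for illustration; real Gemini installations varied in size.

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal hop distance along one torus axis: traffic may wrap around,
 * so the distance is the shorter of the direct and wraparound paths. */
static int axis_hops(int a, int b, int dim) {
    int d = abs(a - b);
    return d < dim - d ? d : dim - d;
}

/* Total minimal hops between (x1,y1,z1) and (x2,y2,z2) on an
 * X x Y x Z torus: per-axis distances are independent and additive. */
static int torus_hops(int x1, int y1, int z1,
                      int x2, int y2, int z2,
                      int X, int Y, int Z) {
    return axis_hops(x1, x2, X) + axis_hops(y1, y2, Y) + axis_hops(z1, z2, Z);
}

int main(void) {
    /* Hypothetical 16 x 8 x 24 torus, for illustration only. */
    int X = 16, Y = 8, Z = 24;
    /* Wraparound makes "opposite ends" of an axis adjacent: 1 hop. */
    printf("hops (0,0,0)->(15,0,0): %d\n", torus_hops(0,0,0, 15,0,0, X,Y,Z));
    printf("hops (0,0,0)->(8,4,12): %d\n", torus_hops(0,0,0, 8,4,12, X,Y,Z));
    return 0;
}
```

The wraparound links are what keep worst-case hop counts low relative to a plain mesh of the same size.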
Major deployments included Hopper at NERSC, Titan at Oak Ridge National Laboratory, and the Blue Waters system at the National Center for Supercomputing Applications. These machines were used for landmark research projects, such as materials science simulations for the United States Department of Energy and astrophysical modeling for the National Science Foundation. International installations, like the Helios supercomputer at the International Fusion Energy Research Centre in Japan, also utilized this interconnect for plasma physics research.
The interconnect demonstrated a significant performance leap, with a per-link bandwidth of 9.6 GB/s in each direction and a latency as low as 1.3 microseconds. Systems like Titan, which combined it with NVIDIA Kepler GPUs, achieved a peak performance of over 20 petaFLOPS, claiming the top spot on the TOP500 list in November 2012. Its power efficiency also improved over earlier generations, though large-scale deployments like Blue Waters still required substantial infrastructure support from facilities like the University of Illinois at Urbana-Champaign.
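Those two figures can be combined in the standard latency-bandwidth ("alpha-beta") cost model, t(n) = α + n/β, to estimate point-to-point transfer times. The C sketch below plugs in the quoted numbers; it is a back-of-the-envelope illustration of why small messages are latency-bound, not a measured characterization of Gemini.

```c
#include <stdio.h>

int main(void) {
    /* Figures quoted in the text, treated here as model parameters. */
    const double alpha = 1.3e-6;  /* per-message latency, seconds  */
    const double beta  = 9.6e9;   /* per-link bandwidth, bytes/sec */

    /* Alpha-beta model: time to move n bytes point to point. */
    const double sizes[] = { 8, 1024, 64 * 1024, 1024 * 1024 };
    for (int i = 0; i < 4; i++) {
        double n = sizes[i];
        double t = alpha + n / beta;
        printf("%8.0f bytes: %8.2f us  (%.1f%% latency-bound)\n",
               n, t * 1e6, 100.0 * alpha / t);
    }
    return 0;
}
```

Under this model an 8-byte message takes about 1.30 µs, almost entirely latency, while a 1 MiB message takes roughly 110 µs, almost entirely bandwidth; this is why interconnect latency matters so much for fine-grained parallel codes.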
The software stack was anchored by the Cray Linux Environment and the Cray Programming Environment, which included optimized compilers for Fortran, C, and C++. It supported the Message Passing Interface (MPI) standard, with the Cray MPICH library providing low-overhead communication primitives that leveraged the hardware's capabilities. Programming models for hybrid systems, such as OpenACC and CUDA, were also supported on Cray XK7 platforms, enabling scientists at institutions like the Swiss National Supercomputing Centre to port complex applications.
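As an illustration of the kind of portable MPI code such systems ran, here is a minimal ping-pong latency micro-benchmark in C. It uses only standard MPI calls, so it would build against Cray MPICH or any other MPI implementation; it must be launched with exactly two ranks, and compiler wrappers and launch commands varied by site and are not shown.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char buf[8] = {0};            /* small message: latency-bound */
    MPI_Status st;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {          /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {   /* rank 1 echoes each message back */
            MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)                /* one-way latency = round trip / 2 */
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```

Benchmarks of this shape are how per-message latencies like the 1.3 µs figure above are typically reported.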
This interconnect was succeeded by the Cray Aries architecture, which debuted in the Cray XC30 series and offered enhanced scalability and a Dragonfly topology. The lessons learned directly influenced the design of the Slingshot interconnect for the later Cray Shasta systems. Its technological legacy is evident in the sustained operation of systems like Blue Waters through the mid-2010s and in its role in advancing frameworks for exascale computing initiatives such as the CORAL program. Key engineers involved in its development later contributed to interconnect designs for the Frontier and Aurora supercomputer projects.
Category:Supercomputer interconnects
Category:Cray Inc.