| Pleiades (supercomputer) | |
|---|---|
| Photo: Marco Librero, NASA Ames Research Center (public domain) | |
| Name | Pleiades |
| Manufacturer | Silicon Graphics International (SGI); later Hewlett Packard Enterprise |
| Release | 2008 |
| Architecture | Intel Xeon CPUs, NVIDIA Tesla |
| OS | SUSE Linux Enterprise Server, Red Hat Enterprise Linux |
| Memory | 147 TB (as configured) |
| Storage | 4 PB (approx.) |
| FLOPS | 7.25 petaFLOPS (peak, 2018) |
| Location | NASA Ames Research Center, Moffett Field, California |
| Operator | NASA Advanced Supercomputing Division |
Pleiades (supercomputer)
Pleiades is a high-performance computing system operated by the NASA Advanced Supercomputing Division at NASA Ames Research Center. It supports large-scale simulation and modeling for projects associated with NASA, the National Oceanic and Atmospheric Administration, the United States Air Force, the United States Department of Energy, and academic partners including Stanford University, the Massachusetts Institute of Technology, and the California Institute of Technology. The system has been upgraded in stages using hardware from SGI and, after SGI's acquisition, Hewlett Packard Enterprise, along with accelerators from NVIDIA, sustaining its placement on the TOP500 list and enabling work across programs tied to the International Space Station, the Artemis program, and climate research linked to Intergovernmental Panel on Climate Change assessments.
Pleiades provides petascale compute capability for computational fluid dynamics work connected to Boeing, Lockheed Martin, and NASA flight projects, as well as astrophysics simulations used by teams at the Jet Propulsion Laboratory and Harvard University. It supports multidisciplinary workloads, including climate modeling coordinated with the National Center for Atmospheric Research and reentry aerothermal studies relevant to European Space Agency collaborations. The platform underpins mission analyses for the Mars Reconnaissance Orbiter, Hubble Space Telescope data processing, and high-fidelity design workflows for aerospace firms such as Northrop Grumman.
The system architecture is built around clusters of Intel Xeon processors and, in later expansions, NVIDIA Tesla GPU accelerators hosted in HPE Apollo enclosures. The interconnect fabric relies on InfiniBand networking from vendors such as Mellanox Technologies to provide low-latency, high-bandwidth links between nodes. Storage tiers combine parallel file systems such as Lustre with enterprise arrays from EMC Corporation and NetApp to deliver the high-throughput I/O required by teams at Princeton University, the University of California, Berkeley, and the University of Texas at Austin. Rack-level management comes from Hewlett Packard Enterprise, with firmware updates coordinated against Intel Corporation roadmaps.
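As a rough illustration of the point-to-point messaging such a fabric is tuned for, the following is a minimal ping-pong latency sketch in mpi4py. It is a generic microbenchmark pattern, not a NASA or Pleiades code, and the payload size and repetition count are arbitrary placeholders.

```python
# Generic MPI ping-pong latency sketch (hypothetical, not a Pleiades code).
# Run with two ranks, e.g.: mpiexec -n 2 python pingpong.py
import time

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg = np.zeros(1024, dtype=np.uint8)  # 1 KiB payload (arbitrary)
reps = 1000

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(msg, source=1, tag=0)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)
        comm.Send(msg, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Half the mean round-trip time approximates one-way latency.
    print(f"mean one-way latency: {elapsed / (2 * reps) * 1e6:.2f} us")
```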
Pleiades has reported sustained performance to the TOP500 and Green500 lists through LINPACK runs and application-specific benchmarks used by the NASA Advanced Supercomputing Division. Its LINPACK scores have placed it among leading systems alongside installations at Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and Argonne National Laboratory. Application benchmarks for computational aerodynamics are compared against codes developed at Stanford University and the Massachusetts Institute of Technology, while climate-model throughput is measured against frameworks from the NOAA Geophysical Fluid Dynamics Laboratory and the European Centre for Medium-Range Weather Forecasts.
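For context on how such figures are derived, the sketch below works the standard theoretical-peak arithmetic (Rpeak = nodes × cores per node × clock × FLOPs per cycle). Every number in it is a hypothetical placeholder rather than Pleiades' actual configuration; TOP500 rankings use the sustained HPL (LINPACK) result, Rmax, which falls below Rpeak.

```python
# Back-of-envelope peak-FLOPS arithmetic for a hypothetical CPU partition.
# All values below are illustrative placeholders, not Pleiades' configuration.
nodes = 2000            # hypothetical node count
cores_per_node = 28     # hypothetical cores per node
clock_ghz = 2.4         # hypothetical sustained clock, GHz
flops_per_cycle = 32    # e.g. AVX-512: 2 FMA units x 8 doubles x 2 ops

rpeak = nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle
print(f"Rpeak ~ {rpeak / 1e15:.2f} petaFLOPS")

# TOP500 ranks by Rmax, the sustained HPL result, typically a fraction
# of Rpeak; assume ~70% HPL efficiency purely for illustration.
print(f"Rmax (at 70% efficiency) ~ {0.70 * rpeak / 1e15:.2f} petaFLOPS")
```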
Initial deployment began under contracts with SGI, whose high-performance computing business was later acquired by Hewlett Packard Enterprise, with system integration overseen by NASA Ames Research Center engineering staff and procurement handled through the General Services Administration. Major upgrade waves incorporated processor generations from the Intel Xeon Phi and later Intel Xeon Scalable families, as well as successive NVIDIA Tesla GPU generations. Development milestones were coordinated with stakeholders at the Jet Propulsion Laboratory, NASA Ames Research Center leadership, and external collaborators such as Carnegie Mellon University and the University of Illinois Urbana-Champaign to support evolving mission requirements and research objectives tied to Earth Science Division programs.
Pleiades runs enterprise Linux distributions such as SUSE Linux Enterprise Server and Red Hat Enterprise Linux, with job scheduling handled by batch systems such as PBS Professional (which originated at NASA Ames) and SLURM. Scientific software stacks include numerical libraries such as Intel MKL, compilers from Intel Corporation and the GNU Project, and frameworks such as Open MPI and CUDA for GPU-accelerated workloads. Research groups at Caltech and the University of Washington deploy community codes including the Weather Research and Forecasting (WRF) model, OpenFOAM, and astrophysics packages used in collaboration with the Space Telescope Science Institute. Data management workflows integrate with portals from NASA's Earth Observing System Data and Information System.
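As a small illustration of how a task can discover its batch allocation at run time, the sketch below reads standard PBS environment variables (PBS_JOBID and PBS_NODEFILE). The script itself is hypothetical; Pleiades-specific queue names and module conventions are not shown.

```python
# Hypothetical task script under a PBS allocation; PBS_JOBID and
# PBS_NODEFILE are standard variables PBS exports to running jobs.
import os

job_id = os.environ.get("PBS_JOBID", "n/a")
nodefile = os.environ.get("PBS_NODEFILE")  # one hostname per allocated slot

nodes = []
if nodefile and os.path.exists(nodefile):
    with open(nodefile) as f:
        nodes = sorted({line.strip() for line in f if line.strip()})

print(f"job {job_id}: {len(nodes)} unique nodes allocated")
```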
Operational projects on Pleiades span aerodynamics simulations for SpaceX vehicle studies, climate-projection ensembles supporting Intergovernmental Panel on Climate Change authors, and planetary-science modeling for Mars Science Laboratory mission planning. It also enables design optimization used by Boeing and structural-analysis programs at Northrop Grumman, as well as turbulence research led by teams at Princeton University and the California Institute of Technology. Pleiades has supported astrophysics pipeline processing for the Hubble Space Telescope and preparatory simulations for James Webb Space Telescope instrument teams.
Cooling infrastructure for Pleiades at NASA Ames Research Center incorporates chilled-water loops and economizer systems, coordinated with facility engineering at NASA Headquarters and contractors including Carrier Global. Power distribution and monitoring follow standards from the American Society of Mechanical Engineers and energy-management practices used at Lawrence Berkeley National Laboratory. Efficiency metrics have been tracked against Green500 guidance and compared with systems at Oak Ridge National Laboratory and Argonne National Laboratory to optimize performance per watt for scientific workflows.
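The Green500 metric itself is simple arithmetic: sustained FLOPS divided by power draw. The sketch below shows the calculation with hypothetical placeholder figures, not measured Pleiades values.

```python
# Illustrative Green500-style efficiency calculation.
# Both inputs are hypothetical placeholders, not Pleiades measurements.
rmax_pflops = 5.9   # hypothetical sustained HPL result, petaFLOPS
power_mw = 4.0      # hypothetical system power draw, megawatts

# Convert petaFLOPS -> gigaFLOPS (x 1e6) and megawatts -> watts (x 1e6).
gflops_per_watt = (rmax_pflops * 1e6) / (power_mw * 1e6)
print(f"efficiency: {gflops_per_watt:.2f} GFLOPS/W")
```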
Planned upgrades contemplate next-generation processors from Intel Corporation and accelerators from NVIDIA or competing vendors, following technology roadmaps similar to those used by Department of Energy supercomputing centers. Coordination with stakeholders at NASA Ames Research Center, the Jet Propulsion Laboratory, and university partners will determine timelines for refresh cycles and eventual decommissioning, in line with federal asset-disposition rules, records requirements overseen by the National Archives and Records Administration, and facility management at Moffett Federal Airfield.