LLMpedia: The first transparent, open encyclopedia generated by LLMs

Cray XC

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: NERSC (hop 5)
Expansion Funnel: Raw 98 → Dedup 10 → NER 7 → Enqueued 0
1. Extracted: 98
2. After dedup: 10 (None)
3. After NER: 7 (None)
Rejected: 3 (not NE: 3)
4. Enqueued: 0 (None)
Cray XC
Name: Cray XC
Developer: Cray Inc.
Family: XC series
Release: 2012
Type: Supercomputer
CPU: Intel Xeon, Intel Xeon Phi; NVIDIA GPU options
OS: SUSE Linux Enterprise Server
Memory: Scalable to hundreds of terabytes
Storage: Parallel file systems (Lustre)
Network: Cray Aries

Cray XC is a series of high-performance supercomputers designed by Cray Inc. (now part of Hewlett Packard Enterprise) for national laboratories, research institutions, and enterprises, with users including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, Argonne National Laboratory, Sandia National Laboratories, and Los Alamos National Laboratory. The line emphasizes scalable performance, energy efficiency, and support for heterogeneous compute elements from vendors such as Intel Corporation and NVIDIA Corporation. Cray XC systems have been used in projects associated with agencies such as the United States Department of Energy and in international programs at institutions including CERN, EPFL, and CEA Grenoble.

Overview

The XC series succeeded earlier Cray families such as the Cray XE and Cray XK, aiming to provide scalable performance for workloads in climate modeling, computational fluid dynamics, astrophysics, and materials science pursued at centers such as the National Center for Supercomputing Applications, the Princeton Plasma Physics Laboratory, the Max Planck Society, and the European Centre for Medium-Range Weather Forecasts. Deployment partners and users include academic consortia and companies such as Boeing, Lockheed Martin, General Electric, and Siemens. Procurement and funding often involve bodies such as the National Science Foundation, DARPA, NASA, the European Commission, and national research councils in the United Kingdom, Germany, France, and Japan.

Architecture and Hardware

XC systems integrate multi-core processors from Intel Corporation, including the Intel Xeon family, alongside many-core accelerators such as the Intel Xeon Phi (Knights Landing) and options for NVIDIA Tesla GPUs. Compute blades and cabinets are organized into rows connected by the Cray Aries interconnect; memory and DRAM modules come from suppliers including Micron Technology, Samsung Electronics, and SK Hynix. Storage and I/O rely on parallel file systems such as Lustre and on hardware from Seagate Technology and Western Digital. Power and cooling technologies have involved firms such as Schneider Electric and Rittal, and energy-efficiency comparisons reference the Green500 and TOP500 lists alongside vendors such as Fujitsu and Hewlett Packard Enterprise.
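The blade-to-cabinet hierarchy described above lends itself to a back-of-envelope sizing sketch. The structural figures below (4 nodes per blade, 16 blades per chassis, 3 chassis per cabinet) match commonly published XC30/XC40 configurations, but the per-node core and memory counts are illustrative assumptions, not a specific machine's spec sheet:

```python
# Back-of-envelope sizing for an XC-style cabinet hierarchy.
# Structural figures typical of published XC30/XC40 configurations:
NODES_PER_BLADE = 4
BLADES_PER_CHASSIS = 16
CHASSIS_PER_CABINET = 3

def system_size(cabinets, cores_per_node=36, gib_per_node=128):
    """Return (nodes, cores, memory_TiB) for a given cabinet count.

    cores_per_node and gib_per_node are assumed example values.
    """
    nodes = cabinets * CHASSIS_PER_CABINET * BLADES_PER_CHASSIS * NODES_PER_BLADE
    cores = nodes * cores_per_node
    mem_tib = nodes * gib_per_node / 1024
    return nodes, cores, mem_tib

nodes, cores, mem = system_size(cabinets=40)
print(f"{nodes} nodes, {cores} cores, {mem:.0f} TiB memory")
# → 7680 nodes, 276480 cores, 960 TiB memory
```

Note how the "hundreds of terabytes" memory figure in the infobox follows directly from a few tens of cabinets at plausible per-node memory sizes.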

Interconnect and Networking

The XC line uses the Cray Aries interconnect, developed by Cray and drawing on technologies similar to those in earlier systems from vendors such as Quadrics and Mellanox Technologies. Aries implements a dragonfly network topology that keeps hop counts and latency low while sustaining bisection bandwidth at scale, supporting workloads such as Large Hadron Collider simulations at CERN and climate ensembles at the Met Office. Networking components interoperate with management and monitoring tools developed in collaboration with firms such as Intel Corporation, and with software stacks influenced by Open MPI, MVAPICH, and research from institutions such as Los Alamos National Laboratory and the University of Illinois Urbana-Champaign.
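The scaling behavior of a dragonfly network can be illustrated with the standard sizing formula for the topology (routers per group a, compute ports per router p, global links per router h). This is a generic dragonfly sketch, not Aries's exact parameterization:

```python
# Dragonfly topology sizing: the class of network used by Aries.
# Parameters: p = compute ports per router, a = routers per group,
#             h = global (inter-group) links per router.
def dragonfly_size(p, a, h):
    """Max groups and endpoints for a fully provisioned dragonfly."""
    groups = a * h + 1          # each group has one direct link to every other
    endpoints = a * p * groups
    return groups, endpoints

# A balanced configuration (a = 2p = 2h) keeps channel load even:
p = h = 8
a = 2 * p
groups, endpoints = dragonfly_size(p, a, h)
print(groups, endpoints)
# → 129 16512
```

The key property is that endpoint count grows roughly with the square of the router radix while any two endpoints remain a small, bounded number of hops apart, which is what "scales latency and bandwidth" means in practice.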

Software and Programming Environment

XC systems run operating environments based on SUSE Linux Enterprise Server and support programming models including MPI, OpenMP, OpenACC, and, when GPUs are present, CUDA. Compilers and toolchains are provided by vendors such as Intel Corporation (Intel Compiler), the GNU Project (GCC), and NVIDIA Corporation (nvcc), complemented by Cray Inc.'s own performance tools, third-party suites descended from Silicon Graphics International-era tooling, and libraries from research groups at Argonne National Laboratory (e.g., PETSc). Scientific software commonly deployed includes packages from NCAR (e.g., the Community Earth System Model), LAMMPS, GROMACS, NAMD, and OpenFOAM, along with libraries such as HDF5 and NetCDF maintained in collaboration with the University Corporation for Atmospheric Research.
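The dominant programming pattern on these systems is SPMD message passing. The sketch below simulates the familiar MPI partial-sum-plus-allreduce idiom sequentially in plain Python for illustration; a real XC application would run actual ranks and call MPI_Allreduce (in C/Fortran) or mpi4py's equivalent:

```python
# Sketch of the SPMD allreduce pattern typical of MPI applications.
# Ranks are simulated sequentially here purely for illustration.
def local_work(rank, nranks, n=1_000_000):
    """Each rank sums its contiguous slice of 1..n."""
    lo = rank * n // nranks + 1
    hi = (rank + 1) * n // nranks
    return sum(range(lo, hi + 1))

def allreduce_sum(partials):
    """Stand-in for MPI_Allreduce(..., MPI_SUM, ...)."""
    total = sum(partials)
    return [total] * len(partials)   # every rank receives the result

nranks = 8
partials = [local_work(r, nranks) for r in range(nranks)]
results = allreduce_sum(partials)
assert all(x == 1_000_000 * 1_000_001 // 2 for x in results)
```

The decomposition (each rank owns a slice of the data, followed by a collective reduction) is the same shape whether the "ranks" are 8 simulated slices or hundreds of thousands of cores across cabinets.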

Performance and Benchmarks

XC installations have ranked on the TOP500 and Green500 lists, measured with benchmarks such as LINPACK and with application-specific kernels used by the National Renewable Energy Laboratory and Sandia National Laboratories. Performance tuning draws on math libraries such as the Intel Math Kernel Library and vendor-optimized BLAS and FFT implementations used in projects at Lawrence Berkeley National Laboratory and Argonne National Laboratory. Comparative analyses reference competing systems such as IBM's Summit, Fujitsu's Fugaku, and machines built by Hewlett Packard Enterprise and Dell Technologies, in publications from conferences such as the SC Conference and the International Supercomputing Conference.
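The TOP500 metrics mentioned above reduce to simple arithmetic: theoretical peak (Rpeak) follows from node counts and per-core throughput, and HPL (LINPACK) reports a rate by dividing a conventional operation count by measured solve time. The node figures and 75% efficiency below are illustrative assumptions, not any listed machine's numbers:

```python
# Sketch of TOP500-style metrics with assumed, illustrative figures.
def rpeak_tflops(nodes, cores_per_node, ghz, flops_per_cycle):
    """Rpeak = nodes * cores * clock * FLOPs/cycle, in TFLOP/s."""
    return nodes * cores_per_node * ghz * flops_per_cycle / 1e3

def hpl_flops(n):
    """Conventional HPL operation count for an n x n dense solve."""
    return (2 / 3) * n**3 + 2 * n**2

peak = rpeak_tflops(nodes=5000, cores_per_node=32, ghz=2.3, flops_per_cycle=16)
rmax = 0.75 * peak   # assume the HPL run reaches 75% of peak
print(f"Rpeak ≈ {peak:.0f} TFLOP/s, Rmax ≈ {rmax:.0f} TFLOP/s")
```

The gap between Rmax and Rpeak is itself a headline figure: interconnect and memory behavior, not raw FLOPs, usually determine how close a large system gets to its theoretical peak.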

Deployment and Notable Installations

Notable XC deployments include machines at Oak Ridge National Laboratory supporting initiatives such as the Exascale Computing Project, installations at Lawrence Livermore National Laboratory for simulation work tied to the NNSA, and academic clusters at Stanford University, MIT, the University of Cambridge, and ETH Zurich. International deployments occurred at national facilities such as Forschungszentrum Jülich, CINES in France, and RIKEN affiliates in Japan. System integrators and service providers involved include Hewlett Packard Enterprise (which acquired Cray in 2019), cloud collaborations with Amazon Web Services, and consulting firms such as Atos and Capgemini.

History and Development

Development traces back to Cray Inc.'s lineage, shaped by earlier supercomputers such as the Cray-1 and by corporate history involving Silicon Graphics International and Tera Computer Company. Strategic partnerships with Intel Corporation and NVIDIA Corporation shaped accelerator support, while government-funded research from the DOE Office of Science and collaborations with universities such as the University of Illinois Urbana-Champaign and the University of California, Berkeley informed scalability research. Milestones align with releases and benchmark results publicized at events including the SC Conference and IEEE symposiums, and with announcements involving stakeholders such as Hewlett Packard Enterprise and the national laboratories.

Category:Supercomputers