| Cray | |
|---|---|
| Name | Cray |
| Industry | Supercomputing |
| Founded | 1972 |
| Founder | Seymour Cray |
| Headquarters | Seattle, Washington |
| Fate | Acquired by Hewlett Packard Enterprise in 2019 |
Cray
Cray was an American company specializing in supercomputers and high-performance computing systems. Founded in 1972, it became synonymous with vector processing, parallel machines, and systems deployed by national laboratories, universities, and corporations. Its products and research influenced architectures used at Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, and the National Aeronautics and Space Administration, as well as by commercial firms including ExxonMobil and JPMorgan Chase.
The company's origins trace to Seymour Cray, chief designer of the CDC 6600, who left Control Data Corporation to pursue his own designs; early successes came with customers such as Sandia National Laboratories and Argonne National Laboratory. Throughout the 1970s and 1980s Cray developed successive machines competing with firms like IBM and Fujitsu, while responding to procurement programs from the Department of Energy and projects such as the Accelerated Strategic Computing Initiative. In the 1990s market pressures and the shift toward massively parallel processing spurred acquisitions and leadership changes involving companies like Siemens and SGI. In the 2000s Cray refocused on scalable cluster systems and won contracts from organizations including the European Organization for Nuclear Research and NASA Ames Research Center. In 2019 the company was acquired by Hewlett Packard Enterprise, consolidating supercomputing vendors and integrating Cray's portfolio with broader enterprise offerings.
Cray produced successive families of systems, beginning with vector machines such as the Cray-1 and Cray-2 and evolving toward symmetric multiprocessing and massively parallel systems like the Cray T3E and the Cray XC series. Hardware lineages included liquid immersion cooling, exemplified by the Fluorinert bath of the Cray-2, and later the dense air- and liquid-cooled racks used at facilities like Argonne National Laboratory and Oak Ridge National Laboratory. Proprietary high-speed mesh and network interconnects competed with alternatives from InfiniBand vendors and influenced practices at data centers such as Lawrence Berkeley National Laboratory and the European Centre for Medium-Range Weather Forecasts. Storage ecosystems integrated parallel file systems compatible with Lustre deployments used by institutions such as the National Oceanic and Atmospheric Administration and the Jet Propulsion Laboratory.
Architectural themes included vector pipelines, shared-memory NUMA designs, and scalable message-passing clusters running MPI implementations used by researchers at Stanford University and the Massachusetts Institute of Technology. Performance targets focused on floating-point throughput measured in FLOPS, with peak systems appearing on the TOP500 list alongside machines from IBM, Fujitsu, and NVIDIA-accelerated clusters. Cray interconnect topologies, such as the dragonfly network of the XC series, balanced latency against bisection bandwidth for applications such as climate modeling at the Met Office and computational chemistry at the California Institute of Technology. Accelerator integration later brought in GPUs from NVIDIA and co-processors from Intel, adopted by users in projects sponsored by the European Organization for Nuclear Research and energy firms like Chevron.
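Peak figures of the kind reported on the TOP500 follow from simple arithmetic: nodes × cores per node × clock rate × floating-point operations issued per cycle. A minimal sketch in Python, with all machine figures invented for illustration (they do not describe any particular Cray system):

```python
# Illustrative peak-FLOPS arithmetic for a hypothetical system.
# All figures below are assumptions for the example, not specs of any Cray machine.

def peak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Theoretical peak = nodes * cores per node * clock * FLOPs issued per cycle."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# e.g. 1,000 nodes, 32 cores each, 2.0 GHz, 16 double-precision FLOPs/cycle
peak = peak_flops(1_000, 32, 2.0e9, 16)
print(f"{peak / 1e15:.2f} PFLOPS")  # 1.02 PFLOPS
```

Note that this is a theoretical ceiling; TOP500 rankings use sustained performance on the HPL benchmark, which is always lower.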
Software stacks emphasized parallel programming models and tools: optimized Fortran and C compilers tuned for vectorization and parallelism, plus debugging and performance tools used by teams at Lawrence Livermore National Laboratory and Los Alamos National Laboratory. Operating environments supported UNIX variants such as UNICOS and specialized runtime services interoperable with middleware such as Open MPI and libraries such as BLAS and LAPACK, leveraged in numerical simulations at Princeton University and the University of Cambridge. Workflow and batch-scheduling integrations worked with systems like Slurm and allowed coupling to community codes used in astrophysics at the Max Planck Institute for Astrophysics and materials science at Argonne National Laboratory.
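Batch submission on Slurm-managed systems typically takes the form of a job script of directives plus a launch command; the sketch below is illustrative only, with the job name, partition, node counts, and executable all placeholders rather than settings from any real site:

```shell
#!/bin/bash
#SBATCH --job-name=climate_sim     # hypothetical job name
#SBATCH --nodes=64                 # number of compute nodes requested
#SBATCH --ntasks-per-node=32       # MPI ranks launched per node
#SBATCH --time=02:00:00            # wall-clock limit (HH:MM:SS)
#SBATCH --partition=standard       # placeholder partition name

# Launch the MPI executable across all allocated ranks
srun ./climate_model input.nml
```

A script like this is submitted with `sbatch script.sh`; the scheduler queues the job until the requested nodes become available.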
Corporate governance evolved through private partnerships, public offerings, and mergers involving investors such as Sequoia Capital and strategic partners in Asia and Europe. Sales cycles were driven by large procurement programs from national laboratories and defense agencies, including contracts at Lawrence Livermore National Laboratory, as well as by commercial customers such as Shell. Manufacturing and assembly were coordinated with suppliers of ASICs, memory modules from vendors like Samsung Electronics, and network components from Mellanox Technologies. Research collaborations included academic consortia at the University of Illinois Urbana-Champaign and government-funded initiatives through agencies like the National Science Foundation.
Major deployments included national-scale systems at Oak Ridge National Laboratory and Argonne National Laboratory, dedicated weather- and climate-modeling systems at the Met Office and the National Oceanic and Atmospheric Administration, and research clusters at the European Organization for Nuclear Research and NASA Ames Research Center. Industrial users encompassed energy companies such as ExxonMobil and Chevron, financial firms including Goldman Sachs and JPMorgan Chase, and pharmaceutical research groups at Pfizer and Roche for computational drug discovery. Academic adopters ranged from the Massachusetts Institute of Technology and Stanford University to international centers like the Max Planck Society and CERN for particle physics simulation and data analysis.
Category:Supercomputer companies