
ARCHER (supercomputer)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: 79 entities extracted → 11 after deduplication → 9 after NER filtering (2 rejected: 1 not a named entity, 1 parse error) → 0 enqueued
ARCHER (supercomputer)
Name: ARCHER
Manufacturer: Cray
Model: XC30
Operator: HPE/EPCC
Introduced: 2013
Decommissioned: 2021
Memory: 900 TB (aggregate)
CPU: Intel Xeon E5-2600 v2
Nodes: 4,920
Interconnect: Cray Aries
Storage: 15 PB (Lustre)
Operating system: SUSE Linux Enterprise Server

ARCHER was the United Kingdom's national high-performance computing service, hosted at the Edinburgh Parallel Computing Centre (EPCC). It provided production compute cycles for academic and industrial projects across the United Kingdom, supporting research linked to institutions such as the University of Edinburgh, Imperial College London, and the University of Oxford, as well as international collaborations with CERN, the European Space Agency, and NASA. Funded and overseen by agencies including the Engineering and Physical Sciences Research Council (EPSRC) and UK Research and Innovation (UKRI), ARCHER served as a focal point for computational science from 2014 until its retirement in 2021.

Overview

ARCHER operated as a capability-class resource delivering sustained throughput for simulations across disciplines represented by organizations such as the Met Office, the National Oceanography Centre, the British Antarctic Survey, and the Health and Safety Executive. It formed the Tier-1 level of the UK's national research computing infrastructure, alongside regional resources, and linked to European initiatives including PRACE and projects funded under Horizon 2020. Managed by the Edinburgh Parallel Computing Centre, ARCHER supported workflows spanning computational fluid dynamics for industrial users such as Rolls-Royce, climate modelling at the Met Office Hadley Centre, materials research connected to the Diamond Light Source, and bioinformatics collaborations with the Wellcome Trust.

Architecture and Hardware

ARCHER was built on the Cray XC30 architecture, with Intel Xeon E5-2600 v2 processors across 4,920 compute nodes interconnected by the Cray Aries network. The system's aggregate memory totaled approximately 900 TB, and the parallel file system provided around 15 PB of Lustre storage, hosted on hardware from vendors common to large installations such as DDN and EMC. The chassis and cooling infrastructure built on designs used in earlier Cray systems such as the Cray XT5 and incorporated operational practices from data centres such as the Hartree Centre. Power and floor-space planning drew on standards from UK National Grid operators and on energy-efficiency programmes promoted by European Commission initiatives.

Software and Programming Environment

ARCHER ran SUSE Linux Enterprise Server with the schedulers and resource managers typical of Tier-1 systems, using job schedulers such as PBS Professional alongside tools compatible with SLURM. Compilers and libraries included the Intel C/C++ and Fortran compilers and the GNU toolchain, together with mathematical libraries such as the Intel Math Kernel Library and MPI implementations such as Cray MPI and Open MPI. Scientific software deployed on ARCHER encompassed codes such as Nektar++, OpenFOAM, GROMACS, LAMMPS, and bespoke packages used by groups at University College London and the University of Manchester. Performance analysis and profiling drew on tools from the Arm ecosystem, vendor tooling from Cray Inc., and community projects such as TAU and Valgrind.
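As an illustration of this programming environment, a minimal MPI program of the kind users built against these toolchains might look as follows. This is a generic sketch: the compiler-wrapper and launch commands in the comments are typical of Cray XC systems in general and are assumptions, not ARCHER's documented configuration.

/* hello_mpi.c - minimal MPI example of the kind run on Cray XC systems.
   On an XC30 it would typically be compiled with the Cray compiler
   wrapper (e.g. "cc hello_mpi.c -o hello_mpi") and launched with
   "aprun -n <ranks> ./hello_mpi"; exact modules, queues, and launcher
   options here are assumptions rather than ARCHER-specific settings. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                      /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);        /* this process's rank     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);        /* total number of ranks   */
    MPI_Get_processor_name(name, &name_len);     /* node the rank runs on   */

    printf("Rank %d of %d running on node %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}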

Performance and Benchmarks

At its debut, ARCHER delivered roughly 1.6 petaflops of LINPACK performance against a theoretical peak of about 2.5 petaflops, and it was ranked among national and international capability computing resources alongside systems at centres such as Lawrence Livermore National Laboratory and other PRACE partners. Benchmarking used standards including High Performance LINPACK (HPL), application-level proxies, and domain benchmarks from groups such as SPEC and Graph500. Performance characterization addressed memory bandwidth and interconnect latency relative to contemporaneous systems such as the IBM Blue Gene/Q and Intel-based clusters at the National Center for Supercomputing Applications. Studies published by EPCC and partner universities compared strong- and weak-scaling behavior for workflows in computational chemistry, aerodynamics, and seismology.
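A back-of-the-envelope check of these figures, as a sketch: assuming the 12-core, 2.7 GHz Xeon E5-2697 v2 variant (the article names only the E5-2600 v2 family) and 8 double-precision floating-point operations per core per cycle for that processor generation, the theoretical peak across 4,920 dual-socket nodes works out to roughly 2.5 petaflops, consistent with an HPL result in the low petaflops.

#include <stdio.h>

int main(void) {
    /* Node count is taken from the infobox above; the processor variant
       (12-core, 2.7 GHz E5-2697 v2) and 8 DP flops/cycle (AVX) are
       assumptions not stated explicitly in the article. */
    const double nodes            = 4920.0;
    const double sockets_per_node = 2.0;
    const double cores_per_socket = 12.0;
    const double clock_ghz        = 2.7;
    const double flops_per_cycle  = 8.0;   /* double precision, AVX */

    /* cores * GHz * flops/cycle gives GFLOP/s; divide by 1000 for TFLOP/s */
    double peak_tflops = nodes * sockets_per_node * cores_per_socket
                       * clock_ghz * flops_per_cycle / 1000.0;

    printf("Theoretical peak: %.0f TFLOP/s (about %.2f PFLOP/s)\n",
           peak_tflops, peak_tflops / 1000.0);
    return 0;
}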

Operational History and Upgrades

Commissioned in 2013 and entering full production in 2014, ARCHER was operated with the involvement of Science and Technology Facilities Council stakeholders and provided community-facing support through the UK national supercomputing service model. Throughout its operational life, ARCHER received incremental software updates, storage expansions, and firmware maintenance coordinated with vendors such as Hewlett Packard Enterprise and Cray Inc. User support and training programmes were delivered in partnership with universities including the University of Leeds and the University of Glasgow, and outreach engaged policy bodies such as the Department for Business, Innovation and Skills and funding bodies including Research Councils UK. The service participated in international benchmarking initiatives run by PRACE and engaged in procurement planning for successor systems.

Users, Research Applications, and Job Scheduling

ARCHER supported a diverse user community drawn from academic groups at the University of Cambridge, the University of Manchester, and the University of Southampton, and from industry partners including BP and Siemens. Supported scientific domains included climate science with groups at the Met Office Hadley Centre, astrophysics collaborations with STFC, computational engineering for firms such as Jaguar Land Rover, and biomedical modelling tied to Medical Research Council projects. Job scheduling policies balanced large-scale allocations for centre projects, responsive queues for time-critical simulations linked to Natural Environment Research Council campaigns, and peer-reviewed allocations through schemes such as the Science and Technology Facilities Council allocations process. Usage reporting and accounting interfaced with national frameworks for research infrastructure prioritization overseen by UK Research and Innovation.

Decommissioning and Legacy

ARCHER was decommissioned in 2021 in line with plans to transition capability to successor UK systems and to European resources coordinated through PRACE, with national procurement led by EPSRC. Its legacy includes datasets and simulation archives deposited in repositories such as the UK Data Service, and methodological advances documented in journals published by societies such as the Royal Society and the Institute of Physics. Lessons from ARCHER informed the design and procurement of successor systems at centres including the Hartree Centre and contributed to training a generation of computational scientists at institutions such as the University of Bristol and the University of Exeter.

Category:Supercomputers