| ARCHER | |
|---|---|
| Name | ARCHER |
| Developer | EPSRC / Edinburgh Parallel Computing Centre / UKRI |
| Released | 2013 |
| Latest release | 2017 |
| Programming language | Fortran / C / Python |
| Operating system | Unix-like / Linux |
| Platform | HPC / Cray XC30 / Intel Xeon |
| License | Proprietary / Academic access |
ARCHER
ARCHER was the United Kingdom's national academic supercomputing service, providing large-scale high-performance computing for computational research across science, engineering, and the humanities. The service was available to researchers at institutions such as the University of Cambridge, the University of Oxford, Imperial College London, the University of Edinburgh, and the University of Manchester. ARCHER supported projects funded by agencies including the Engineering and Physical Sciences Research Council (EPSRC) and the Natural Environment Research Council (NERC), was operated by EPCC at the University of Edinburgh, and interfaced with European infrastructures such as PRACE.
ARCHER functioned as a centralized HPC facility providing batch scheduling, parallel file systems, and curated software stacks for codes from disciplines represented by groups at CERN, the UK Met Office, Diamond Light Source, the Wellcome Trust Sanger Institute, and Rutherford Appleton Laboratory. Typical workloads included climate modelling by teams at the Met Office Hadley Centre, materials and molecular simulations by university groups and collaborators (including researchers associated with the European Molecular Biology Laboratory), and astrophysics codes from groups at the Institute of Astronomy, Cambridge. Access was allocated through national, peer-reviewed access programmes, with federated identity and access mechanisms coordinated with Jisc and partner networks such as HEAnet.
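To give a concrete sense of the MPI-parallel batch workloads such a facility hosts, the following is a minimal sketch in C. The compiler wrapper and launcher (for example `mpicc` and `mpirun`, or Cray's `cc` and `aprun`) are assumptions here and varied with the installed programming environment.

```c
/* Minimal MPI "hello world" of the sort submitted as a batch job.
   Sketch only: compiler wrapper and job launcher are site-specific. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of MPI ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

In practice such a program is compiled with the system's MPI compiler wrapper and launched across nodes by the batch scheduler rather than run interactively.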
ARCHER succeeded the earlier UK national service HECToR and built on experience from projects at EPCC and vendor collaborations with Cray Inc. and Dell Technologies. Its procurement and commissioning involved consortia including Dell EMC and Intel Corporation alongside the national funding bodies EPSRC and, later, UK Research and Innovation. Research groups from the University of Leeds, the University of Bristol, University College London, and the London School of Economics contributed user requirements during design. Over its operational life ARCHER engaged with international initiatives such as XSEDE and PRACE, and collaborated with supercomputing sites such as NERSC and the Jülich Supercomputing Centre to harmonize middleware and training.
ARCHER's hardware architecture combined multi-socket nodes populated with Intel Xeon processors, a high-speed Cray Aries interconnect, and parallel storage based on the Lustre file system. The system ran job schedulers compatible with SLURM and PBS Professional and supported MPI implementations such as Open MPI and Intel MPI. The software stack included compilers from the GNU Project and Intel Corporation, numerical libraries such as FFTW and ScaLAPACK, and application packages such as NAMD, GROMACS, OpenFOAM, WRF (Weather Research and Forecasting Model), and CASTEP. Security and user authentication relied on federated identity frameworks aligned with Jisc's identity federation and Shibboleth-based access control patterns.
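As an illustration of how user codes called the numerical libraries in such a stack, the sketch below performs a small transform with FFTW from C. The compile line in the comment (linking against `-lfftw3 -lm`) is an assumption about a generic module environment, not a documented ARCHER build recipe.

```c
/* Sketch: a one-dimensional complex FFT with FFTW3.
   Illustrative compile line (assumed environment): cc fft_demo.c -lfftw3 -lm */
#include <fftw3.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const int n = 8;
    const double pi = 3.14159265358979323846;
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

    for (int i = 0; i < n; i++) {           /* simple cosine test signal */
        in[i][0] = cos(2.0 * pi * i / n);   /* real part                 */
        in[i][1] = 0.0;                     /* imaginary part            */
    }

    fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);                     /* forward transform         */

    for (int i = 0; i < n; i++)
        printf("bin %d: %+.3f %+.3fi\n", i, out[i][0], out[i][1]);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```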
ARCHER powered simulations and data processing in domains pursued by the University of Southampton's marine research groups, the University of Sheffield's materials engineering groups, the London School of Hygiene & Tropical Medicine's genomics analyses, and the University of York's archaeomagnetic studies. It ran ensemble climate projections similar to those produced with HadGEM and workflows used by teams contributing to IPCC assessments. ARCHER enabled quantum chemistry calculations for researchers linked to Max Planck Society collaborations, large-scale lattice simulations akin to work in CERN's theoretical physics groups, and industrial partnerships with companies such as Rolls-Royce, BP, and Siemens for computational fluid dynamics and aeroelastic modelling. The platform supported visualization pipelines interfacing with tools from Kitware and data repositories used by the UK Data Service.
Performance characterization of ARCHER used standard benchmarks such as LINPACK together with application-level suites, including SPEC and bespoke scientific mini-apps from communities such as the UK Met Office and EPSRC-funded consortia. Measured performance demonstrated strong scaling for MPI-parallel workloads in molecular dynamics (e.g., GROMACS) and finite-element codes (e.g., Abaqus), while I/O-bound applications benefited from Lustre tuning comparable to deployments at Los Alamos National Laboratory and Argonne National Laboratory. Comparative studies placed ARCHER within the European HPC tier hierarchy alongside machines at Jülich and the Barcelona Supercomputing Center, based on sustained FLOPS on domain codes rather than peak theoretical FLOPS alone.
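Strong-scaling results of this kind are usually summarized as speedup and parallel efficiency relative to a baseline run. The short C sketch below shows that arithmetic on purely illustrative timings; the numbers are invented for demonstration and are not measured ARCHER data.

```c
/* Strong-scaling arithmetic: speedup S(p) = T(base)/T(p),
   parallel efficiency E(p) = S(p) * base_ranks / p.
   All timing values below are illustrative, not measured ARCHER results. */
#include <stdio.h>

int main(void)
{
    const int    ranks[]   = { 24, 96, 384, 1536 };            /* MPI ranks per run  */
    const double runtime[] = { 5400.0, 1410.0, 390.0, 118.0 }; /* wall-clock seconds */
    const int    n = sizeof(ranks) / sizeof(ranks[0]);

    for (int i = 0; i < n; i++) {
        double speedup    = runtime[0] / runtime[i];        /* relative to base run */
        double efficiency = speedup * ranks[0] / ranks[i];  /* fraction of ideal    */
        printf("%5d ranks: speedup %6.2f, efficiency %5.1f%%\n",
               ranks[i], speedup, 100.0 * efficiency);
    }
    return 0;
}
```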
Governance of ARCHER involved oversight committees with representatives from funding agencies including EPSRC, stakeholder institutions such as the University of Edinburgh and the University of Manchester, and operational management by EPCC and national computing centres. Funding combined capital grants from UK Research and Innovation, operational contributions from partner universities, and allocation of compute time through peer-reviewed schemes analogous to PRACE merit review. Training, user support, and allocation policies were coordinated with national services such as Jisc, alongside community engagement with bodies such as the Royal Society and learned societies across UK research sectors.