| High Performance Computing Centre | |
|---|---|
| Name | High Performance Computing Centre |
| Established | 1990s |
| Location | International |
| Type | Research facility |
High Performance Computing Centre
A High Performance Computing Centre is an institutional facility that provides supercomputing resources for scientific, engineering and commercial research. Such centres support computational work across domains associated with the Large Hadron Collider, the Human Genome Project, the International Space Station, CERN, NASA, the European Space Agency, and national laboratories such as Argonne National Laboratory and Lawrence Berkeley National Laboratory. They integrate hardware, middleware and application stacks to enable projects linked to Stanford University, the Massachusetts Institute of Technology, Imperial College London and industry partners such as IBM, Intel and NVIDIA.
High Performance Computing Centres typically host clusters, grids and cloud resources used by investigators from institutions including the University of Cambridge, Harvard University, Princeton University, ETH Zurich and Tsinghua University. Users run workloads related to initiatives such as the Square Kilometre Array, IPCC assessments, Protein Data Bank modelling and Blue Brain Project simulations, and to collaborations with bodies such as the United States Department of Energy, European Commission programmes and the Wellcome Trust. Centres often work with consortia such as the Open Grid Forum, PRACE and XSEDE to provide federated access.
Physical facilities are designed for resiliency and connectivity, often sited near research campuses such as Oak Ridge National Laboratory or urban hubs such as Silicon Valley. Typical architecture includes data halls, raised floors and containment systems informed by standards from the Uptime Institute and ASHRAE, and by collaborations with vendors such as Dell Technologies and Hewlett Packard Enterprise. Network links use backbones such as Internet2 and GÉANT, and submarine cables connecting to exchanges including LINX and DE-CIX. Disaster recovery plans reference frameworks from the National Institute of Standards and Technology and integrate with regional partners such as the European Organization for Nuclear Research (CERN) for distributed workflows.
Centres deploy compute nodes with processors from AMD and Intel and accelerators such as NVIDIA GPUs and AMD Instinct cards, organized via interconnects such as InfiniBand and Intel Omni-Path. Storage subsystems use parallel filesystems such as Lustre and IBM Spectrum Scale, and object stores compatible with Amazon S3 semantics for archival tiers backed by magnetic tape archives. Software stacks include resource managers such as the Slurm Workload Manager, container platforms such as Singularity/Apptainer, and compilers from the GNU Project and the LLVM Project. Performance engineering uses benchmarks such as the TOP500 list and High-Performance Linpack (HPL) to characterize capability.
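The TOP500-style characterization mentioned above compares a system's theoretical peak (R_peak) against the rate the HPL benchmark actually achieves (R_max). A minimal sketch of that arithmetic, using entirely hypothetical hardware figures rather than any real system's specifications:

```python
# Illustrative sketch of TOP500-style capability figures.
# All hardware numbers below are hypothetical assumptions, not vendor specs.

def theoretical_peak_flops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """R_peak in FLOP/s: nodes x cores x clock rate x FLOPs issued per cycle."""
    return nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle

def hpl_efficiency(r_max, r_peak):
    """Fraction of theoretical peak achieved by the HPL benchmark (R_max / R_peak)."""
    return r_max / r_peak

# Hypothetical cluster: 100 nodes, 64 cores/node, 2.5 GHz,
# 16 double-precision FLOPs per cycle (e.g. a wide-vector FMA unit).
r_peak = theoretical_peak_flops(100, 64, 2.5, 16)   # 2.56e14 FLOP/s = 256 TFLOP/s
eff = hpl_efficiency(1.8e14, r_peak)                # assumed measured R_max of 180 TFLOP/s
print(f"R_peak = {r_peak / 1e12:.0f} TFLOP/s, HPL efficiency = {eff:.0%}")
```

Real machines achieve only a fraction of R_peak on HPL, so both figures are reported on the TOP500 list; the efficiency ratio is a common shorthand for how well the interconnect and memory system feed the floating-point units.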
Centres provide services including batch scheduling, interactive visualization, data analytics and workflow orchestration, used in projects such as climate modelling initiatives tied to Intergovernmental Panel on Climate Change reports, computational fluid dynamics for aerospace partners such as Boeing and Airbus, and molecular dynamics workflows relevant to Pfizer and Novartis. Application domains include astrophysics collaborations with the European Southern Observatory, bioinformatics pipelines linked to the European Bioinformatics Institute, and machine learning workloads informed by research from Google DeepMind and OpenAI.
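The batch scheduling service mentioned above amounts to arbitrating a shared pool of nodes among queued jobs. The toy model below sketches strict first-in-first-out scheduling with a node-count constraint; it is a simplified illustration of the kind of bookkeeping a workload manager such as Slurm performs, not Slurm's actual (far richer) algorithm, and all job names are invented:

```python
# Toy FIFO batch scheduler: jobs wait in submission order and start only
# when enough nodes are free. No priorities, backfill, or time limits.
from collections import deque

class BatchScheduler:
    def __init__(self, total_nodes):
        self.free_nodes = total_nodes
        self.queue = deque()   # pending (job_name, nodes), first-in first-out
        self.running = []      # (job_name, nodes) currently allocated

    def submit(self, name, nodes):
        self.queue.append((name, nodes))
        self.schedule()

    def schedule(self):
        # Start queued jobs strictly in order while resources allow.
        while self.queue and self.queue[0][1] <= self.free_nodes:
            name, nodes = self.queue.popleft()
            self.free_nodes -= nodes
            self.running.append((name, nodes))

    def finish(self, name):
        # Release the finished job's nodes and try to start queued work.
        for job in self.running:
            if job[0] == name:
                self.running.remove(job)
                self.free_nodes += job[1]
                break
        self.schedule()

sched = BatchScheduler(total_nodes=8)
sched.submit("cfd_run", 6)   # starts immediately (6 <= 8 free)
sched.submit("md_sim", 4)    # queued: only 2 nodes free
sched.finish("cfd_run")      # frees 6 nodes; md_sim now starts
print([j[0] for j in sched.running])   # -> ['md_sim']
```

Production schedulers add fair-share priorities, backfilling, and per-partition limits on top of this basic admit-when-resources-allow loop.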
Research collaborations span universities, national laboratories and industry partners, as evidenced by joint efforts with the Princeton Plasma Physics Laboratory, the Max Planck Society, RIKEN, CSIRO and consortia such as Gaia and the Human Cell Atlas. Centres host user training and summer schools alongside organizations such as Software Carpentry and The Carpentries to build capacity for projects funded by programmes such as Horizon 2020 and the National Science Foundation. Collaborative software projects often interact with communities around MPI implementations, OpenMP initiatives and middleware from Globus.
Governance models vary: university-led centres coordinate with departments such as the Department of Physics at the University of Oxford or the Department of Computer Science at the University of Toronto, while national centres receive funding from agencies such as the European Research Council, the National Institutes of Health, the Japan Science and Technology Agency and the Australian Research Council. Funding mechanisms include competitive grants from the Wellcome Trust and infrastructure investments by ministries such as the Ministry of Education of the People's Republic of China. Advisory boards commonly include representatives from partner institutions, including Sandia National Laboratories, and corporations such as Microsoft.
Security practices draw on the NIST Cybersecurity Framework and operational guidance from ENISA for threat modelling, identity federation via eduGAIN, and incident response coordination with CERT teams such as US-CERT. Sustainability efforts involve energy-efficiency measures informed by The Green Grid's metrics, waste heat recovery in collaboration with utilities, and sourcing electricity from providers engaged in renewable energy programmes, alongside initiatives such as LEED certification for data centre buildings. Environmental reporting aligns with standards such as ISO 14001.
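The best-known of The Green Grid's metrics is Power Usage Effectiveness (PUE): total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean every watt reaches the computing hardware. A minimal sketch, using hypothetical example readings rather than data from any real facility:

```python
# Power Usage Effectiveness (PUE), The Green Grid's data-centre
# efficiency metric. Lower is better; 1.0 is the theoretical ideal.

def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy (dimensionless, >= 1 in practice)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings: 1,320 MWh total draw, 1,000 MWh consumed by IT gear;
# the remainder goes to cooling, power conversion losses, lighting, etc.
print(f"PUE = {pue(1_320_000, 1_000_000):.2f}")   # -> PUE = 1.32
```

Because the overhead term is dominated by cooling, PUE is the figure most directly improved by the waste heat recovery and efficient-cooling measures described above.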
Category:Supercomputing
Category:Research institutes