| UNICORE | |
|---|---|
| Name | UNICORE (Uniform Interface to Computing Resources) |
| Developer | Forschungszentrum Jülich and partners; European Grid projects; UNICORE Forum |
| Released | 1997 |
| Latest release | Ongoing (actively maintained) |
| Programming language | Java |
| Operating system | Cross-platform |
| Genre | Grid middleware; distributed computing |
| License | Open source (BSD) |
UNICORE is a distributed computing middleware system designed to provide seamless, secure, and user-friendly access to heterogeneous high-performance computing and data resources across administrative domains. It was developed to integrate supercomputers, clusters, storage systems, and national research infrastructures into a cohesive service fabric, supporting scientific workflows, batch jobs, and data transfer. UNICORE has been used in academic, governmental, and industrial projects involving large-scale simulations, data analytics, and collaborative research.
UNICORE serves as middleware between user-facing science gateways and underlying computing resources, such as supercomputers at Deutsches Elektronen-Synchrotron, national centers like the National Center for Supercomputing Applications, and pan-European and US e-infrastructures such as PRACE and XSEDE. It provides job submission, data staging, workflow orchestration, and monitoring interfaces to resources including systems at Los Alamos National Laboratory, Oak Ridge National Laboratory, CERN, and European supercomputing centers. The system interacts with identity infrastructures such as the European Grid Infrastructure and TERENA, works with certificate authorities like DFN-Verein, and supports integration with portals developed at Forschungszentrum Jülich and in FZJ-linked projects. Major collaborators and adopters have included IBM, Intel, Siemens, EUMETSAT, and European Commission research initiatives.
Development began in 1997 in projects funded by the German Federal Ministry of Education and Research (BMBF), with Forschungszentrum Jülich among the lead partners, contemporaneous with the Globus Toolkit and the later Open Grid Services Architecture (OGSA) standardization effort. Early work involved collaborations with GMD and projects funded under the EU's Fifth and Sixth Framework Programmes (FP5, FP6). Successive European projects, among them EUROGRID, GRIP, UniGrids, and Int.eu.grid, expanded features, user communities, and interoperability with efforts like EGEE and NeSC. The project evolved through coordination with national initiatives such as DFG-funded centers and through integration into infrastructures linked to PRACE and EUDAT. Governance shifted toward a community model embodied by the UNICORE Forum, with partnerships spanning industrial actors such as SiCortex and research labs including the Jülich Supercomputing Centre.
UNICORE’s multi-tier architecture comprises client tools, gateway services, and target-system components, implemented in Java and interoperating with local resource managers such as PBS Professional, Slurm, Torque, and LSF at resource sites. Key components include the client tools (the UNICORE Rich Client and the UNICORE Commandline Client), the Gateway that mediates network traffic, the Network Job Supervisor (NJS) for job control, and the Target System Interface (TSI) for local execution. The architecture supports data movement via GridFTP and integrates with SOAP-based interfaces and the Web Services Resource Framework (WSRF), aligning with standards from the OGSA and WS-* families. The stack has been adapted for cloud environments, interacting with OpenStack, Amazon Web Services, and container platforms like Docker through plugins and connectors developed in collaboration with teams from the Fraunhofer-Gesellschaft and the Max Planck Society.
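The tiered flow described above, from client through Gateway to a job supervisor and finally a target-system interface driving the local batch system, can be sketched in miniature. The class names below only mirror the component roles, and the site name is invented; none of this is UNICORE's actual Java API.

```python
# Illustrative sketch of UNICORE's multi-tier request flow:
# client -> Gateway -> NJS (job supervision) -> TSI (local batch submission).
from dataclasses import dataclass, field


@dataclass
class TargetSystemInterface:
    """Stands in for the TSI, which drives a local batch system (e.g. Slurm)."""
    batch_system: str

    def submit(self, executable: str) -> str:
        # A real TSI would generate and submit a batch script here.
        return f"{self.batch_system}-job-for-{executable}"


@dataclass
class NetworkJobSupervisor:
    """Stands in for the NJS: accepts jobs, delegates execution, tracks state."""
    tsi: TargetSystemInterface
    jobs: dict = field(default_factory=dict)

    def run(self, executable: str) -> int:
        job_id = len(self.jobs)
        self.jobs[job_id] = {"state": "SUBMITTED",
                             "batch_id": self.tsi.submit(executable)}
        return job_id


@dataclass
class Gateway:
    """Stands in for the Gateway: the single network entry point for a site."""
    sites: dict  # site name -> NetworkJobSupervisor

    def submit(self, site: str, executable: str) -> int:
        return self.sites[site].run(executable)


# "DEMO-SITE" is a made-up site name for illustration only.
gw = Gateway({"DEMO-SITE": NetworkJobSupervisor(TargetSystemInterface("slurm"))})
job_id = gw.submit("DEMO-SITE", "simulate")
```

The point of the decomposition is that each tier holds only the state it needs: the Gateway routes, the supervisor tracks job state, and the TSI translates to the site-local batch system.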
UNICORE relies on X.509 public-key infrastructure, integrating with certificate authorities and trust infrastructures including the DFN-PKI, European eIDAS-related bodies, and national authorities. It implements single sign-on with delegated credentials, using SAML-based explicit trust delegation natively and proxy certificates for interoperability with the Grid Security Infrastructure, and supports SAML assertions for federated identity when integrated with systems such as Shibboleth and eduGAIN. Authorization maps global identities to local accounts using gridmap-style mappings and role-based controls informed by IETF and Open Grid Forum specifications. Secure transport employs TLS and WS-Security, drawing on interoperable patterns used by projects such as EGEE and EUDAT.
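The gridmap-style mapping mentioned above can be illustrated with a small parser. The file layout follows the Globus gridmap convention of a quoted certificate subject DN followed by a local account name; the entries themselves are invented, and this is a sketch rather than UNICORE's actual attribute-mapping code.

```python
# Sketch of gridmap-style identity mapping: a certificate subject DN is
# looked up in a gridmap file and resolved to a local Unix account.
import shlex

# Made-up example entries in the conventional gridmap layout.
GRIDMAP = '''
"/C=DE/O=GridGermany/CN=Jane Doe" jdoe
"/C=DE/O=GridGermany/CN=Max Mustermann" mmuster
'''


def parse_gridmap(text: str) -> dict:
    """Map each quoted subject DN to its local account name."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, account = shlex.split(line)  # shlex handles the quoted DN
        mapping[dn] = account
    return mapping


def map_to_local_account(dn: str, gridmap: dict) -> str:
    """Resolve a global identity, failing closed for unknown DNs."""
    try:
        return gridmap[dn]
    except KeyError:
        raise PermissionError(f"no local mapping for {dn}") from None


accounts = parse_gridmap(GRIDMAP)
user = map_to_local_account("/C=DE/O=GridGermany/CN=Jane Doe", accounts)
```

Failing closed on an unmapped DN matters here: an identity that authenticates successfully but has no local mapping must still be denied access.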
UNICORE has been deployed for computational chemistry campaigns at institutions like Lawrence Livermore National Laboratory, climate modeling at ECMWF and Met Office-linked centers, and fusion simulations for collaborations involving ITER research partners. It supports distributed bioinformatics pipelines used by groups at EMBL-EBI and by genomics consortia, as well as engineering workloads run by corporations such as Siemens and at Boeing research labs. University centers including the Technical University of Munich, computational groups at the University of Oxford, and ETH Zurich have used UNICORE to expose HPC resources to multidisciplinary teams. Interoperable deployments have connected national grids such as GridKa and regional e-infrastructures in Italy, France, and Poland.
UNICORE’s design aims for scalability across thousands of concurrent jobs and large data transfers, leveraging component decoupling and stateless gateway patterns similar to architectures used by Apache Hadoop and Kubernetes for scheduling and orchestration. Performance evaluations in projects aligned with PRACE and DEISA compared throughput and latency against alternatives like Globus Toolkit and showed competitiveness for tightly coupled workflow ensembles and high-throughput task farms. Scalability enhancements have included asynchronous job handling, database-backed state management using systems such as PostgreSQL and Oracle, and load-balancing strategies influenced by studies from NERSC and TACC.
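Database-backed state management of the kind described above can be sketched with a minimal job-state table and an explicit transition check. Here `sqlite3` stands in for a production store such as PostgreSQL, and the state machine is illustrative rather than UNICORE's actual job lifecycle model.

```python
# Sketch: job state persisted in a relational table, so that front-end
# components can stay stateless and restartable. sqlite3 is used in place
# of PostgreSQL/Oracle purely to keep the example self-contained.
import sqlite3

# Illustrative lifecycle; not UNICORE's real state set.
VALID_TRANSITIONS = {
    "SUBMITTED": {"QUEUED", "FAILED"},
    "QUEUED": {"RUNNING", "FAILED"},
    "RUNNING": {"DONE", "FAILED"},
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT NOT NULL)")


def submit_job(conn) -> int:
    """Insert a new job in its initial state and return its id."""
    cur = conn.execute("INSERT INTO jobs (state) VALUES ('SUBMITTED')")
    conn.commit()
    return cur.lastrowid


def advance(conn, job_id: int, new_state: str) -> None:
    """Apply one state transition, rejecting illegal ones."""
    (state,) = conn.execute(
        "SELECT state FROM jobs WHERE id = ?", (job_id,)).fetchone()
    if new_state not in VALID_TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    conn.execute("UPDATE jobs SET state = ? WHERE id = ?", (new_state, job_id))
    conn.commit()


job_id = submit_job(conn)
for s in ("QUEUED", "RUNNING", "DONE"):
    advance(conn, job_id, s)
```

Because every transition is committed, any stateless front end can crash and restart without losing track of in-flight jobs, which is the property the text attributes to database-backed state management.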
UNICORE adheres to and implements standards from bodies such as the Open Grid Forum, W3C, and IETF, including OGSA-style web services, WS-Security, and SOAP/REST hybrid interfaces. Interoperability efforts have produced adapters for Globus, integration layers for OGC-related data services, and connectors to container orchestration stacks like Mesos and Kubernetes. Standards alignment facilitated participation in interoperability tests with initiatives including OGF Interoperability Testbed, EGEE, and EUDAT, enabling cross-infrastructure workflows that span national, European, and international computing facilities.
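The REST side of a SOAP/REST hybrid interface can be exercised with plain HTTP. The sketch below prepares, but does not send, a job-submission request; the endpoint URL and token are hypothetical, and the JSON field names should be checked against the documentation of the server actually deployed.

```python
# Sketch of submitting a job to a REST-style endpoint with the standard
# library only. The base URL, token, and job-description keys are assumed
# for illustration, not taken from a specific server's specification.
import json
import urllib.request


def build_submission(base_url: str, job: dict,
                     token: str) -> urllib.request.Request:
    """Prepare (but do not send) a job-submission POST."""
    return urllib.request.Request(
        url=f"{base_url}/jobs",
        data=json.dumps(job).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )


req = build_submission(
    "https://hpc.example.org:8080/SITE/rest/core",  # hypothetical endpoint
    {"Executable": "/bin/date", "Arguments": ["-u"]},
    token="example-oauth-token",
)
# urllib.request.urlopen(req) would perform the actual submission.
```

Keeping request construction separate from transmission, as here, is also what makes such clients easy to test against interoperability testbeds without a live server.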
Category:Grid computing Category:High-performance computing Category:Middleware