| INDIGO-DataCloud | |
|---|---|
| Name | INDIGO-DataCloud |
| Formation | 2015 |
| Type | Research infrastructure project |
| Headquarters | Europe |
| Region served | International |
INDIGO-DataCloud is a European research infrastructure project that developed open-source cloud and data management solutions for scientific communities. It integrated middleware and platform services to support high-throughput computing, data-intensive research, and federated cloud deployments across institutions and research infrastructures. The project connected software stacks, virtualisation technologies, and community workflows to enable reproducible science across distributed resources.
INDIGO-DataCloud aimed to provide interoperable middleware and platform-as-a-service components for scientific computing, targeting communities at CERN, the European Space Agency, the European Commission, the Max Planck Society, and other research organisations. The project bridged technologies such as OpenStack, Kubernetes, Docker, Apache Mesos, HTCondor, Slurm, and TORQUE to offer federation across infrastructures such as EGI and PRACE. It emphasized standards and protocols promoted by the Open Grid Forum, together with OAuth, OpenID Connect, SAML, and X.509 certificates, to enable secure access for communities in high-energy physics, astrophysics, bioinformatics, climate science, and Earth observation.
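The OpenID Connect standard mentioned above issues identity claims as signed JWTs. As a minimal illustration (not code from the project), the sketch below decodes the payload of a hand-crafted, unsigned example token; all names and the issuer URL are hypothetical, and a real service must verify the token's signature against the issuer's published keys before trusting any claim:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT (e.g. an OpenID Connect ID
    token) WITHOUT verifying its signature -- illustration only."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWTs strip.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(obj: dict) -> str:
    """Encode a dict as unpadded base64url JSON, as JWT segments are."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# A hand-crafted, unsigned example token (header.payload.signature).
header = b64url({"alg": "none", "typ": "JWT"})
payload = b64url({"sub": "researcher42", "iss": "https://iam.example.org"})
token = f"{header}.{payload}."

claims = decode_jwt_payload(token)
print(claims["sub"])  # researcher42
```

The `sub` and `iss` claims shown are the standard OpenID Connect subject and issuer fields; federated infrastructures typically map such claims onto local accounts or virtual-organisation roles.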
The project originated in response to challenges identified in initiatives such as EGEE (Enabling Grids for E-sciencE), Helix Nebula, and the European Grid Infrastructure, and in research collaborations involving CERN and GARR, the Italian academic network. Partners included academic institutions such as the Universidad Complutense de Madrid, the Istituto Nazionale di Fisica Nucleare, and the Centro de Supercomputación de Galicia, alongside technology providers linked to IBM, Red Hat, and Canonical Ltd. Funding and coordination came through the Horizon 2020 framework, with collaboration links to projects such as the European Open Science Cloud (EOSC) and EUDAT. Development milestones paralleled advances in orchestration around TOSCA (Topology and Orchestration Specification for Cloud Applications) and containerisation trends influenced by CoreOS and Google initiatives.
INDIGO-DataCloud combined components addressing authentication, authorization, scheduling, storage, and orchestration. Identity and access management relied on integrations with LDAP, Shibboleth, Keycloak, and the Grid Security Infrastructure (GSI). Orchestration used TOSCA, OpenStack Heat, and bespoke orchestrators aligned with Kubernetes and Apache Mesos. Storage integrations targeted systems such as Ceph, dCache, Lustre, and GlusterFS, using transfer tools akin to GridFTP and FTPS. Compute interfaces wrapped hypervisors such as KVM and Xen and container runtimes such as runc and containerd, while scheduling interoperated with PBS Professional and Grid Engine-style systems.
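At its core, orchestrating a TOSCA-style template means deploying node templates in dependency order. A minimal sketch of that ordering step, using Python's standard-library `graphlib`; the node names and their dependencies are invented for the example and are not taken from any INDIGO component:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical TOSCA-style node templates: each entry maps a node to the
# nodes it depends on (e.g. the VM must exist before Docker is installed
# on it, and both the runtime and storage before the service starts).
nodes = {
    "network": [],
    "vm":      ["network"],
    "storage": ["network"],
    "docker":  ["vm"],
    "service": ["docker", "storage"],
}

# static_order() yields every node after all of its dependencies,
# i.e. a valid deployment order for the template.
order = list(TopologicalSorter(nodes).static_order())
print(order)
```

An orchestrator would walk this order, provisioning each node via the relevant backend (an IaaS API, a container runtime, a storage service) before moving on; `TopologicalSorter` also supports incremental `get_ready()`/`done()` calls for deploying independent nodes in parallel.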
The software portfolio produced components for researchers and operators, including PaaS orchestrators, data management tools, authentication modules, and scientific workflow enablers. Notable software patterns interfaced with Galaxy, Jupyter, Nextflow, Snakemake, and Apache Airflow for workflow execution. Monitoring and logging solutions integrated with Prometheus, Grafana, Zabbix, and the ELK Stack (Elasticsearch, Logstash, Kibana). Packaging and distribution leveraged Debian and RPM packages, with Ansible, Puppet, and Terraform for infrastructure as code. Container images and registries were compatible with Docker Hub, Quay.io, and private registry solutions adopted by institutions such as CERN.
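As a toy illustration of the infrastructure-as-code angle, the snippet below renders an Ansible-style INI inventory from a plain mapping of groups to hosts; the group names and hostnames are invented for the example and do not come from any INDIGO deployment:

```python
def render_inventory(groups: dict[str, list[str]]) -> str:
    """Render a minimal Ansible-style INI inventory from a mapping of
    group name -> host list. Illustrative only: real inventories also
    carry per-host and per-group variables."""
    lines: list[str] = []
    for group, hosts in groups.items():
        lines.append(f"[{group}]")   # group header, e.g. [workers]
        lines.extend(hosts)          # one host per line
        lines.append("")             # blank line between groups
    return "\n".join(lines)

# Hypothetical layout for a small PaaS site: one orchestrator node
# and two worker nodes.
inventory = render_inventory({
    "orchestrators": ["orch01.example.org"],
    "workers": ["wn01.example.org", "wn02.example.org"],
})
print(inventory)
```

Generating the inventory from data like this keeps the site description in one declarative place, which the configuration tool (here Ansible, as one of the tools the portfolio used) then applies to the actual machines.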
INDIGO-DataCloud supported large-scale use cases across domains: processing of Large Hadron Collider datasets, pipelines for European Space Agency satellite missions, genomics workflows at the European Bioinformatics Institute, climate model ensembles used by ECMWF, and remote sensing analyses for the Copernicus Programme. It enabled collaborative platforms used by projects such as LOFAR, KM3NeT, and ELIXIR, and by consortia linked to BioMed Central. The stack facilitated reproducible analyses for journals such as Nature and Science and for community repositories such as Zenodo and GitHub.
Deployments occurred across computing centres and cloud providers, including the CERN IT department, national research and education networks such as GÉANT, regional e-infrastructures such as CESGA, and supercomputing centres participating in PRACE. Adoption pathways included integration with OpenNebula and OpenStack federations, pilot services in national projects funded by agencies such as MIUR (the Italian Ministry of Education, Universities and Research) and MINECO (Spain), and collaborations with commercial cloud vendors, exemplified by hybrid-scenario partnerships with Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Training and community uptake were fostered through workshops, summer schools such as the CERN Summer Student Programme, and contributions to the European Cloud Strategy discourse.
Governance combined a consortium of universities, research centres, and SMEs coordinated under European funding mechanisms, including Horizon 2020 and its predecessor FP7 (the Seventh Framework Programme). Stakeholders encompassed bodies such as the European Strategy Forum on Research Infrastructures, funding agencies such as the European Research Council (ERC), and national research councils (for example CNR in Italy and CSIC in Spain). Industry partners and open-source communities, including the Linux Foundation, the OpenStack Foundation, and the Cloud Native Computing Foundation, contributed technical guidance. Financial and organisational oversight aligned with project management practices promoted by European Commission grant agreements and audit frameworks used by the European Court of Auditors.
Category:European research projects