| gLite | |
|---|---|
| Name | gLite |
| Developer | CERN, EGEE (Enabling Grids for E-sciencE) project |
| Released | 2004 |
| Discontinued | 2010s |
| Latest release version | 3.x |
| Programming language | C++, Java, Python |
| Operating system | Scientific Linux CERN, Red Hat Enterprise Linux |
| Genre | grid middleware |
| License | open-source |
gLite was a grid middleware distribution developed to provide distributed computing services for large-scale scientific collaborations. It served as an integration platform connecting resources such as compute clusters at CERN, storage systems at Fermilab, and authentication infrastructures such as the certificate authorities federated under EUGridPMA to projects including the LHC experiments and bioinformatics initiatives. Designed under the coordination of the EGEE project and adopted by research infrastructures across Europe, gLite enabled resource sharing across institutional and national boundaries, supporting workflows in high-energy physics, climate modelling, and life sciences.
gLite emerged from collaborative efforts involving CERN, CNRS, INFN, the STFC laboratories, and the European Commission-funded EGEE and EGEE-II projects. It provided middleware functions such as job scheduling, data management, security, and information discovery to federated research communities including the ATLAS, CMS, LHCb, and ALICE experiments. By integrating components from standards initiatives such as the Open Grid Forum specifications and interoperating with the Globus Toolkit, gLite offered interoperable services for heterogeneous infrastructures spanning national research and education networks such as GÉANT and regional e-infrastructure projects like GridPP and PRACE.
The gLite architecture used a service-oriented design with modular layers: a user-facing suite of client tools, a core services layer, and a resource provider layer hosting compute and storage elements. Its information system used the Lightweight Directory Access Protocol with schemas influenced by Globus MDS, while workload management used match-making concepts similar to those in Condor and TORQUE. Authentication and authorization relied on X.509 public key infrastructure and interactions with Virtual Organization Membership Service (VOMS) instances. Data handling integrated the Storage Resource Manager model and protocols compatible with RFIO and dCache, enabling seamless access to distributed datasets replicated across sites such as the Tier-0 and Tier-1 centres associated with the Worldwide LHC Computing Grid.
Key gLite components included the Workload Management System (WMS), Computing Element (CE), Storage Element (SE), Information System (IS), and data management tools. The WMS provided job submission, match-making, and brokerage similar to services in Sun Grid Engine deployments, and interfaced with the LCG File Catalog for dataset discovery. The CE wrapped local batch systems such as PBS Professional and LSF, allowing experiments such as ATLAS to schedule tasks across resources. SEs implemented access protocols used by BaBar and CMS data distribution workflows. The Information System fed monitoring stacks such as Nagios and visualization tools used by operations teams from Grid Ireland and CESNET. User-facing utilities included command-line clients used by scientists at DESY and graphical portals built on frameworks such as Liferay.
gLite was deployed across national grid infrastructures including GridPP in the United Kingdom, INFN Grid in Italy, and the Nordic Data Grid Facility. It underpinned production use cases for the ATLAS and CMS experiments during pre-LHC and early LHC data challenges, supporting Monte Carlo production, large-scale reconstruction, and user analysis tasks. Earth science projects used gLite-enabled workflows to process climate datasets managed by ECMWF partners, while bioinformatics consortia integrated gLite with services running at ELIXIR precursor nodes for genome assembly pipelines. Educational deployments occurred at universities such as the University of Oxford and École Polytechnique, where gLite provided practical training environments for students familiarizing themselves with distributed computing and resource brokerage.
The lifecycle of gLite involved coordinated releases managed by the EGEE project release team, contributions from institutions such as CERN IT and national research organizations, and quality assurance via continuous testing frameworks. Development followed roadmaps aligned with European grid strategy documents and collaboration agreements among partners including EIROforum members. Maintenance incorporated upstream patches from interoperability partners such as the Globus Alliance and integration with configuration management tools inspired by Puppet and CFEngine. Transition planning toward successor technologies involved collaboration with projects such as the European Grid Infrastructure and adoption paths toward cloud-oriented platforms like OpenStack and orchestration tools championed by EU projects.
Security in gLite hinged on X.509-based authentication, delegation services, and Virtual Organization (VO) policies enforced by VOMS instances. Operational security practices mirrored auditing approaches used in Helix Nebula pilot studies and incident response coordinated among site administrators at the CERN Computer Centre and national CERT teams such as CERT-FI. Reliability measures included redundancy for catalogues such as the LCG File Catalog, monitoring aligned with Grid Operations Centre procedures, and periodic stress tests similar to the Worldwide LHC Computing Grid service challenges. Integration with legal and policy frameworks from the European Commission and compliance with data stewardship guidelines provided governance for multi-institutional deployments.
Category:Grid middleware