| VOMS | |
|---|---|
| Name | VOMS |
| Developer | INFN; CERN |
| Released | 2000s |
| Programming language | Java; C; Perl |
| Operating system | Unix; Linux; Microsoft Windows |
| License | Apache License; GNU General Public License |
VOMS
VOMS (Virtual Organization Membership Service) is a role-based attribute authority system originally developed for grid computing to support virtual organization (VO) management and fine-grained authorization. It was created to add attribute assertions to X.509 credentials, so that distributed services can make authorization decisions for users belonging to federated scientific collaborations, such as those in high-energy physics and bioinformatics. VOMS has been used alongside the middleware stacks and services of major research infrastructures and has influenced subsequent attribute-based access control efforts.
VOMS emerged from collaborations among institutions including CERN, INFN, DESY, and GSI Helmholtz Centre for Heavy Ion Research, and from projects such as EGEE and the Worldwide LHC Computing Grid, to address authorization needs for distributed resource sharing. It issues attribute assertions that supplement identity credentials issued by accredited certification authorities, and it aligns with standards such as X.509 and SAML 2.0 in providing machine-readable authorizations. VOMS integrates with authorization services used by middleware stacks such as gLite, ARC (Advanced Resource Connector), and HTCondor to enable per-user roles, group memberships, and capabilities across sites ranging from national compute centres to large experiments, including ATLAS, CMS, ALICE, and LHCb. It also influenced attribute frameworks later employed by cloud projects such as OpenStack and by identity federations such as eduGAIN.
The VOMS architecture follows a client-server model whose components include a VOMS server, VOMS client utilities, and plugins for resource gatekeepers. The VOMS server keeps membership data in a relational database backend, commonly MySQL or PostgreSQL, and exposes its protocols over TLS, typically built on OpenSSL. Client tools contact the server and embed the returned attributes into proxy certificates compatible with services that expect the extensions defined by the IETF proxy certificate profile (RFC 3820). Gatekeeper components, such as plugins for the Globus Toolkit, validate proxy certificates and map attributes to local accounts or to authorization policies such as those expressed in LCMAPS. Management consoles and web interfaces, such as the VOMS Admin web application, are typically deployed behind Apache HTTP Server or in a Java servlet container such as Apache Tomcat for administration and auditing.
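The attributes a VOMS server issues are conventionally written as Fully Qualified Attribute Names (FQANs) of the form `/vo/group/.../Role=.../Capability=...`. As a rough sketch of how a consuming service might decompose such a string (this parser is illustrative and is not the actual API of any VOMS library):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FQAN:
    """A VOMS Fully Qualified Attribute Name, e.g. /atlas/prod/Role=production."""
    groups: list                 # group hierarchy, starting with the VO name
    role: Optional[str]
    capability: Optional[str]

def parse_fqan(raw: str) -> FQAN:
    """Split an FQAN string into its group path, Role, and Capability parts."""
    groups, role, capability = [], None, None
    for part in raw.strip("/").split("/"):
        if part.startswith("Role="):
            role = part.split("=", 1)[1]
        elif part.startswith("Capability="):
            capability = part.split("=", 1)[1]
        else:
            groups.append(part)
    # VOMS uses the literal string NULL to mean "no value"
    role = None if role == "NULL" else role
    capability = None if capability == "NULL" else capability
    return FQAN(groups, role, capability)

f = parse_fqan("/atlas/production/Role=production/Capability=NULL")
print(f.groups)       # ['atlas', 'production']
print(f.role)         # production
print(f.capability)   # None
```

Treating the group path as an ordered hierarchy, rather than a flat label, is what lets sites grant a subgroup narrower rights than its parent VO.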
Authentication in the VOMS model typically relies on a public key infrastructure with X.509 end-entity certificates issued by recognized certification authorities, such as national CAs accredited within trust federations like the IGTF. Users obtain short-lived proxy certificates through tools that combine their private keys with the issued attributes; these proxies are compatible with client software including Globus Toolkit command-line utilities, GridFTP clients, and experiment-specific submission tools such as PanDA. Authorization uses role and group attributes (roles such as "Role=analysis", or group affiliations within a VO) encoded in attribute certificates or extension mechanisms; services interpret these attributes through mapping rules, policy enforcement points, group-management systems such as Perun, or custom shims. Interoperability with token and assertion frameworks such as SAML 2.0 and the emerging OAuth/OpenID Connect family has been explored to bridge grid credentials with web-centric federated identity providers, including InCommon and eduGAIN.
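The attribute-to-account mapping step can be sketched as a first-match rule table. The rule format, FQAN prefixes, and account names below are invented for illustration and do not reproduce actual LCMAPS or grid-mapfile syntax:

```python
# Map VOMS FQANs to local Unix accounts, in the spirit of LCMAPS / grid-mapfile
# rules. Rules are checked in order, so more specific prefixes come first.
MAPPING_RULES = [
    # (FQAN prefix, local account)          -- illustrative values only
    ("/atlas/Role=production", "atlasprd"),
    ("/atlas",                 "atlas_pool"),   # fall-through: plain VO members
]

def map_to_local_account(fqans):
    """Return the local account for the first rule matched by any FQAN."""
    for prefix, account in MAPPING_RULES:
        for fqan in fqans:
            if fqan.startswith(prefix):
                return account
    raise PermissionError("no mapping rule matched; access denied")

print(map_to_local_account(["/atlas/Role=production/Capability=NULL"]))  # atlasprd
print(map_to_local_account(["/atlas/higgs/Role=NULL"]))                  # atlas_pool
```

Ordering the rules from most to least specific is the essential design point: a production-role proxy must not fall through to the generic pool account.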
VOMS deployments are common in national and international research infrastructures and are usually integrated with compute clusters, storage systems like dCache, data transfer services such as FTS (File Transfer Service), and workflow managers like DIRAC (Distributed Infrastructure with Remote Agent Control). Integration patterns involve installing VOMS server clusters for high availability behind load balancers such as HAProxy and configuring client middleware on worker nodes so that job submission systems (e.g., HTCondor and PBS Professional) accept VOMS‑augmented proxies. Grid information systems and catalogues—examples include BDII—consume VOMS attributes for accounting and resource matchmaking. Integration with configuration management tools like Puppet or Ansible simplifies consistent rollout across sites in collaborations like WLCG and national e‑infrastructures.
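The high-availability pattern described above, where a client knows several replicated VOMS endpoints for a VO and fails over between them, can be sketched as follows. The hostnames and the `fetch` callback are hypothetical stand-ins for the real TLS request:

```python
# Illustrative failover across replicated VOMS servers: clients typically list
# several endpoints for a VO and try them in order. Hostnames are made up.
VOMS_REPLICAS = ["voms1.example.org:15001", "voms2.example.org:15001"]

def request_attributes(vo, replicas, fetch):
    """Try each replica in turn; return the first successful answer."""
    last_err = None
    for endpoint in replicas:
        try:
            return fetch(endpoint, vo)
        except ConnectionError as err:
            last_err = err            # remember the failure, try the next one
    raise RuntimeError(f"all VOMS replicas failed for VO {vo!r}") from last_err

def fake_fetch(endpoint, vo):
    """Stand-in for the real TLS request; pretend the first replica is down."""
    if endpoint == VOMS_REPLICAS[0]:
        raise ConnectionError(endpoint)
    return [f"/{vo}/Role=NULL/Capability=NULL"]

print(request_attributes("atlas", VOMS_REPLICAS, fake_fetch))
# ['/atlas/Role=NULL/Capability=NULL']
```

In production the same idea is usually realized one layer down, with a load balancer such as HAProxy in front of the server cluster, so that clients see a single stable endpoint.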
Security relies on the trust anchors established by participating certification authorities and on the correct issuance and revocation of X.509 certificates; components such as CRL distribution points and the Online Certificate Status Protocol (OCSP) are part of operational security. VOMS introduces potential attack surfaces, including improper attribute issuance, compromised VOMS server credentials, and man-in-the-middle threats mitigated by TLS and mutual authentication. Privacy concerns arise from attribute disclosure; careful policy and minimal-disclosure practices are recommended, with sites mapping attributes to local accounts to avoid unnecessary exposure. Auditing and logging, often with tools such as the ELK Stack or Splunk, help operations teams detect misuse and meet compliance obligations for large projects such as the Human Brain Project and other consortia that require data provenance.
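A minimal sketch of two of the operational checks discussed here, rejecting revoked certificates and enforcing a short proxy lifetime. The serial numbers and the 24-hour cap are invented assumptions for this sketch, not VOMS or site defaults:

```python
from datetime import datetime, timedelta, timezone

# Illustrative values only: a stand-in for a CA's CRL contents and a made-up
# site policy on maximum proxy lifetime.
REVOKED_SERIALS = {"4f:2a:91"}
MAX_PROXY_LIFETIME = timedelta(hours=24)

def accept_proxy(serial, not_before, not_after, now=None):
    """Reject revoked, expired, not-yet-valid, or overly long-lived proxies."""
    now = now or datetime.now(timezone.utc)
    if serial in REVOKED_SERIALS:
        return False                      # listed on the revocation list
    if not (not_before <= now <= not_after):
        return False                      # outside the validity window
    if not_after - not_before > MAX_PROXY_LIFETIME:
        return False                      # exceeds the site's lifetime policy
    return True
```

Real validators additionally walk the certificate chain back to a trusted CA and verify signatures; the point of the short lifetime is that a stolen proxy expires quickly even if revocation lags.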
VOMS has been widely adopted by high-energy physics collaborations such as ATLAS, CMS, and ALICE, by national grid initiatives such as the UK National Grid Service, and by multidisciplinary infrastructures including EGI and NeIC. Use cases include fine-grained job submission control, VO-based storage quotas in systems such as EOS and dCache, and privileged role delegation for data management tasks in projects such as OpenAIRE. While newer cloud and federation technologies have introduced alternative attribute and token systems, VOMS remains in use where X.509-centric infrastructures and legacy middleware continue to operate, including at CERN and national laboratories.