| Job Submission Description Language | |
|---|---|
| Name | Job Submission Description Language |
| Author | Open Grid Forum, Condor Project, Globus Alliance |
| Released | 2005 (JSDL 1.0, Open Grid Forum GFD.56) |
| Latest release | varies by implementation |
| Programming language | XML, plain text |
| Operating system | Cross-platform |
| License | Open standards / various |
Job Submission Description Language
Job Submission Description Language is an open specification for describing computational jobs for submission to distributed computing systems. It provides a standardized way to declare executable binaries, arguments, file staging, environment variables, and resource requirements for batch and grid schedulers. The language has been applied in scientific computing, high-performance computing, cloud orchestration, and workflow management.
JSDL-like specifications (hereafter "the language") define a portable job manifest that can be consumed by middleware such as the Globus Toolkit, HTCondor, Sun Grid Engine, Univa Grid Engine, PBS Professional, Slurm Workload Manager, TORQUE, and IBM Spectrum LSF. Implementations bridge infrastructures operated by institutions such as CERN, Lawrence Berkeley National Laboratory, Argonne National Laboratory, Oak Ridge National Laboratory, and Los Alamos National Laboratory. The syntax supports executable descriptors, standard I/O redirection, file-transfer instructions, and resource constraints compatible with systems used at NASA, the European Space Agency, the National Institutes of Health, and major supercomputing centers such as the Oak Ridge Leadership Computing Facility.
The language emerged from efforts by communities around the Globus Toolkit, the Open Grid Forum, and projects such as GridWay to enable federated computing across collaborations including GENI, TeraGrid, the Open Science Grid, and Enabling Grids for E-sciencE. Influences include earlier job description approaches in batch processing systems developed at institutions such as Lawrence Livermore National Laboratory and commercial offerings from IBM, Hewlett-Packard, and Sun Microsystems. Standardization efforts engaged stakeholders such as the European Grid Infrastructure and academic groups from the Massachusetts Institute of Technology, Stanford University, the University of Illinois Urbana-Champaign, Princeton University, and the University of Cambridge.
The language typically uses an XML- or key–value-based manifest to express attributes such as executable, arguments, environment, inputFiles, outputFiles, stdout, stderr, and requirements. Translators map those attributes to scheduler directives used by Slurm, PBS Professional, TORQUE, HTCondor, and LSF. Job descriptions may include conditional constructs that reference platform metadata provided by services such as LDAP directories hosted at facilities including the National Energy Research Scientific Computing Center (NERSC). Interoperability is achieved through vocabularies aligned with Open Grid Forum efforts and metadata registries maintained by organizations such as IEEE working groups and, where relevant, consortia including the W3C.
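The attribute-to-directive mapping described above can be sketched in a few lines. The following is an illustrative example only, not an implementation from any real middleware: the XML element names loosely follow the shape of JSDL 1.0 (JobDefinition, Application, POSIXApplication, Resources) but omit namespaces, and the function `jsdl_to_slurm` and the manifest contents are hypothetical.

```python
# Sketch of a translator from a simplified JSDL-style XML manifest to
# Slurm batch directives. Element names approximate JSDL 1.0 but drop
# namespaces for readability; this is not a conforming parser.
import xml.etree.ElementTree as ET

MANIFEST = """\
<JobDefinition>
  <JobDescription>
    <Application>
      <POSIXApplication>
        <Executable>/usr/bin/simulate</Executable>
        <Argument>--steps=1000</Argument>
        <Output>run.out</Output>
        <Error>run.err</Error>
      </POSIXApplication>
    </Application>
    <Resources>
      <TotalCPUCount>16</TotalCPUCount>
    </Resources>
  </JobDescription>
</JobDefinition>
"""

def jsdl_to_slurm(xml_text: str) -> str:
    """Map a simplified JSDL manifest onto #SBATCH directives."""
    root = ET.fromstring(xml_text)
    app = root.find("./JobDescription/Application/POSIXApplication")
    res = root.find("./JobDescription/Resources")
    lines = ["#!/bin/bash"]
    # Resource constraints become scheduler directives.
    if res is not None and res.findtext("TotalCPUCount"):
        lines.append(f"#SBATCH --ntasks={res.findtext('TotalCPUCount')}")
    # Standard I/O redirection maps to --output / --error.
    if app.findtext("Output"):
        lines.append(f"#SBATCH --output={app.findtext('Output')}")
    if app.findtext("Error"):
        lines.append(f"#SBATCH --error={app.findtext('Error')}")
    # The executable and its arguments form the job's command line.
    args = " ".join(a.text for a in app.findall("Argument"))
    lines.append(f"{app.findtext('Executable')} {args}".rstrip())
    return "\n".join(lines)

print(jsdl_to_slurm(MANIFEST))
# → #!/bin/bash
#   #SBATCH --ntasks=16
#   #SBATCH --output=run.out
#   #SBATCH --error=run.err
#   /usr/bin/simulate --steps=1000
```

A production translator would additionally validate against the JSDL schema, resolve namespaces, and handle data-staging elements; the point here is only the declarative attribute-to-directive mapping that such tools perform.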
Adoption spans diverse projects: scientific workflows in the Galaxy Project, Apache Taverna, and Apache Airflow integrations; grid portals developed for Enabling Grids for E-sciencE and EGI.eu; and portal frameworks used at the European Molecular Biology Laboratory, the Max Planck Society, the Wellcome Trust Sanger Institute, and the Broad Institute. Commercial adopters include cloud orchestration layers from Amazon Web Services, Google Cloud Platform, and Microsoft Azure when integrating with HPC through partners such as NVIDIA and Intel. Implementations appear in middleware such as the Globus Toolkit, HTCondor, ARC (Advanced Resource Connector), UNICORE, and resource brokers developed at CERN for experiments including ATLAS and CMS.
The language is designed to interoperate with standards and protocols such as GridFTP, SSH, and HTTPS, and with identity frameworks including OAuth 2.0, SAML 2.0, and X.509 certificates used across the European Grid Infrastructure and national e-infrastructures. Integration patterns are documented by the Open Grid Forum and referenced in interoperability efforts with orchestration standards from OASIS and in cloud interoperability initiatives involving Cloud Native Computing Foundation projects and the OpenStack community.
Security considerations include credential delegation, sandboxing, and least-privilege execution models enforced by middleware adopted at sensitive facilities operated by agencies such as the United States Department of Energy and the National Security Agency, and at major research laboratories such as Brookhaven National Laboratory. Resource management semantics map to quota and scheduling policies implemented by Slurm, HTCondor, PBS Professional, and resource managers used at national centers including NERSC and the Argonne Leadership Computing Facility. Provenance, auditing, and accounting integrations reference OGF standards and identity management practices promoted by Internet Engineering Task Force working groups.
A mature ecosystem surrounds the language: editors, validators, and translators produced by projects such as the Globus Alliance, the HTCondor Project, EGI.eu, and academic labs at the University of Oxford, ETH Zurich, the Technical University of Munich, and the University of Tokyo. Workflow engines and portals integrate job description support within tools such as Pegasus, Nextflow, Snakemake, the Galaxy Project, and Apache Airflow. Commercial tools from IBM, Hewlett Packard Enterprise, and Red Hat, along with services from Amazon Web Services, provide connectors and wrappers for enterprise adoption. Ongoing community development continues through forums including the Open Grid Forum, user groups at the Supercomputing Conference (SC), and collaborative projects funded by agencies such as the National Science Foundation and the European Commission.