LLMpedia: The first transparent, open encyclopedia generated by LLMs

Portable Batch System

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Grid Engine (hop 5)
Expansion funnel: 28 links extracted → 0 after deduplication → 0 after NER → 0 enqueued
Portable Batch System
Name: Portable Batch System
Developer: Altair Engineering; originally by NASA Ames Research Center
Released: 1991
Latest release: Various forks and commercial editions
Programming language: C, C++
Operating system: Unix, Linux, macOS, Windows (via ports)
Genre: Job scheduler, batch system
License: Mixed: open-source and proprietary variants


Portable Batch System (PBS) is a family of job scheduling systems designed to manage batch workloads on compute clusters, supercomputers, and other high-performance computing (HPC) environments. PBS provides queuing, resource allocation, policy-driven scheduling, and accounting features for batch jobs, enabling scientific, engineering, and enterprise workloads to run on shared compute resources. PBS and its descendants have influenced cluster management tools and resource managers used at research centers, national labs, and commercial data centers.

Overview

PBS originated at NASA Ames Research Center as a toolkit to submit, queue, prioritize, and monitor batch jobs on UNIX and Linux clusters, and was subsequently adopted at research institutions such as Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Argonne National Laboratory. Variants have also been adopted by organizations including the National Center for Supercomputing Applications, CERN (the European Organization for Nuclear Research), Sandia National Laboratories, and Fermilab. The system integrates with resource schedulers and accounting systems used at the National Energy Research Scientific Computing Center and other HPC centers.

Architecture and Components

PBS-based systems typically comprise a server daemon (pbs_server) that accepts job submissions and manages queues, a scheduler (pbs_sched or a third-party replacement) that decides when and where jobs run, and node-level agents (pbs_mom) that launch and monitor job processes on compute nodes. Third-party schedulers of the kind used at Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory can replace or supplement the built-in scheduler, and the node daemons resemble the execution agents deployed on Los Alamos National Laboratory clusters. PBS systems integrate with batch submission clients of the sort used at Argonne National Laboratory and interoperate with workload managers deployed at European Southern Observatory facilities. Components also include command-line tools adopted by user communities at the California Institute of Technology and web portals used at Imperial College London.
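
How these components are usually inspected can be sketched with the standard PBS command-line clients; the commands below exist in PBS Professional, OpenPBS, and TORQUE, though output formats and some options vary across variants.

  qstat -B                 # summary status of the pbs_server daemon
  qstat -Q                 # queues known to the server and their job counts
  pbsnodes -a              # state and resources of each pbs_mom execution host
  qmgr -c "print server"   # dump server, queue, and scheduler configuration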

Job Scheduling and Resource Management

Job scheduling in PBS involves queuing policies, priority calculations, and backfill strategies of the kind implemented at centers like Sandia National Laboratories and Oak Ridge National Laboratory. Resource management covers CPU cores, memory, and GPUs, as seen in deployments at CERN and Fermilab, as well as specialized accelerators used at Lawrence Livermore National Laboratory. Policies for fair-share scheduling, queue limits, and advance reservations integrate with accounting systems used by the National Center for Supercomputing Applications, and job array techniques are familiar from workflows at the European Molecular Biology Laboratory.
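
A minimal job script sketches how such resource requests are expressed. The select syntax below follows PBS Professional and current OpenPBS; TORQUE and older releases use -l nodes=...:ppn=... instead, and the queue name, job name, and solver binary here are hypothetical.

  #!/bin/bash
  #PBS -N cfd_case01                   # job name (hypothetical)
  #PBS -q workq                        # target queue (assumed default name)
  #PBS -l select=2:ncpus=16:mem=32gb   # two chunks, 16 cores and 32 GB memory each
  #PBS -l walltime=04:00:00            # wall-clock limit used for scheduling and backfill
  cd "$PBS_O_WORKDIR"                  # directory from which the job was submitted
  mpiexec ./solver input.dat           # hypothetical MPI application

The script is submitted with qsub job.sh, monitored with qstat -u $USER, and cancelled with qdel followed by the job identifier.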

Administration and Configuration

Administrators configure PBS servers, queues, authentication, and resource limits; practices reflect operational guidance from National Oceanic and Atmospheric Administration compute centers and procedures at NASA Ames Research Center. Integration with authentication and authorization services mirrors deployments at Princeton University and University of Cambridge research clusters. Monitoring and logging approaches draw on tools employed at Argonne National Laboratory and Los Alamos National Laboratory, while backup and high-availability setups parallel strategies used at Oak Ridge National Laboratory.
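
Queue and server configuration is typically performed with the qmgr utility. A minimal sketch, assuming a hypothetical execution queue named batch (exact attribute names and limit values differ between PBS variants and sites):

  qmgr -c "create queue batch queue_type=execution"
  qmgr -c "set queue batch resources_max.walltime=24:00:00"   # per-job wall-clock cap
  qmgr -c "set queue batch enabled=true"                      # accept new submissions
  qmgr -c "set queue batch started=true"                      # allow queued jobs to run
  qmgr -c "set server scheduling=true"                        # activate the scheduling cycle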

Implementations and Variants

Multiple implementations and commercial products evolved from the original PBS, including the commercial PBS Professional and open-source OpenPBS distributions from Altair Engineering and the TORQUE fork, alongside workload managers in the same problem space such as Univa's Grid Engine. Open-source forks and derivatives have been maintained by OpenPBS-related communities and by research institutions such as the National Energy Research Scientific Computing Center. Commercial editions have been offered by vendors serving customers like the European Organization for Nuclear Research and Sandia National Laboratories. Implementations often interoperate with schedulers and resource managers from the Slurm ecosystem and with grid middleware used at GridPP sites.
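
One practical consequence of the split into variants is that resource-request syntax differs between lineages; the two directives below ask for roughly the same allocation in the two main dialects.

  #PBS -l select=2:ncpus=8   # PBS Professional / current OpenPBS (chunk-based request)
  #PBS -l nodes=2:ppn=8      # TORQUE and older OpenPBS (nodes and processors per node)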

History and Development

Development began in the early 1990s at NASA Ames Research Center to address batch processing needs for computational science. PBS spread through collaborations with national labs including Lawrence Livermore National Laboratory and Los Alamos National Laboratory, and through adoption by university clusters at University of California, Berkeley and Massachusetts Institute of Technology. Commercialization and forks involved organizations such as Altair Engineering and entities that provided enterprise support to customers like CERN and Fermilab. The evolution of PBS influenced subsequent workload managers developed at Oak Ridge National Laboratory and projects at European Centre for Medium-Range Weather Forecasts.

Usage and Applications

PBS and its variants support scientific simulations, data analysis, and large-scale workflows at institutions such as National Center for Supercomputing Applications, Lawrence Berkeley National Laboratory, Argonne National Laboratory, and National Energy Research Scientific Computing Center. Typical applications include computational fluid dynamics used at Imperial College London and California Institute of Technology, climate modeling initiatives at European Centre for Medium-Range Weather Forecasts, molecular dynamics projects at European Molecular Biology Laboratory, and data processing pipelines at CERN experiments. PBS-based schedulers are integrated into research infrastructures at Fermilab and national laboratories to manage batch campaigns, parameter sweeps, and ensemble runs.
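
Parameter sweeps and ensemble runs of the kind described above are commonly expressed as job arrays. A minimal sketch in PBS Professional syntax (TORQUE uses #PBS -t and the PBS_ARRAYID variable instead); the model binary and parameter file are hypothetical:

  #!/bin/bash
  #PBS -N sweep                 # array job name (hypothetical)
  #PBS -J 0-9                   # ten sub-jobs, indices 0 through 9
  #PBS -l select=1:ncpus=4
  #PBS -l walltime=02:00:00
  cd "$PBS_O_WORKDIR"
  # Each sub-job reads the parameter line matching its array index.
  PARAMS=$(sed -n "$((PBS_ARRAY_INDEX + 1))p" params.txt)
  ./model $PARAMS > "out.${PBS_ARRAY_INDEX}"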

Category:Job scheduling systems