| Journal of Parallel and Distributed Computing | |
|---|---|
| Title | Journal of Parallel and Distributed Computing |
| Discipline | Computer science |
| Abbreviation | J. Parallel Distrib. Comput. |
| Publisher | Elsevier |
| Country | Netherlands |
| Frequency | Monthly |
| History | 1984–present |
| Impact | (see Impact and Reception) |
The **Journal of Parallel and Distributed Computing** is a peer-reviewed journal covering research on parallel processing, distributed systems, high-performance computing, and related areas. Established in 1984, the journal has published contributions from researchers affiliated with institutions such as Stanford University, the Massachusetts Institute of Technology, the University of California, Berkeley, Carnegie Mellon University, and the University of Cambridge. Its readership spans academic centers such as Princeton University and ETH Zurich as well as national laboratories such as Los Alamos National Laboratory and Lawrence Berkeley National Laboratory.
The journal was established amid advances at industrial laboratories including IBM Research, Bell Labs, Xerox PARC, Hewlett-Packard Laboratories, and Intel Corporation, during a period marked by projects such as the Intel iPSC, Cray Research systems, and work at NASA Ames Research Center. Early editorial leadership drew on scholars from the University of Illinois Urbana-Champaign, the University of Toronto, the University of Washington, Cornell University, and the University of Michigan. Milestones in the journal's history coincided with the rise of MPI (the Message Passing Interface), the development of PVM (Parallel Virtual Machine), and DARPA programs that funded distributed-computing research. Over the decades the journal has chronicled the shift from Seymour Cray-era supercomputers to modern systems developed by NVIDIA, AMD, Google, Microsoft Research, and Amazon Web Services.
The journal covers topics including algorithm design evaluated on platforms ranging from Cray-1 descendants to clusters at Oak Ridge National Laboratory, performance modeling used at Sandia National Laboratories, and middleware evaluated in collaborations at the European Organization for Nuclear Research (CERN). Subjects extend to concurrency control in work related to Oracle Corporation databases, fault-tolerance techniques studied at Los Alamos National Laboratory, scheduling algorithms influenced by research at the National Institute of Standards and Technology, and security issues intersecting with research from SRI International. Other focal areas include parallel programming languages connected to efforts at the University of Tokyo, distributed file systems such as the Andrew File System, and cloud computing research paralleling deployments on IBM Cloud and Google Cloud Platform.
The editorial board has traditionally recruited editors and reviewers affiliated with universities such as the University of California, San Diego, the University of Texas at Austin, Imperial College London, the University of Southern California, and Purdue University. Editors have also come from research groups at Bell Labs, Honeywell, Siemens, AT&T Labs Research, and Toshiba Research. Peer-review policy aligns with standards practiced by publishers such as Elsevier, and editorial practices mirror those of journals associated with societies such as the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. Publication workflows have integrated submission systems of the kind used by publishers such as Springer Nature, including platforms like the Elsevier Editorial System, while special issues have been guest-edited by contributors connected to conferences including the ACM Symposium on Operating Systems Principles, the ACM/IEEE International Symposium on Computer Architecture, the International Conference for High Performance Computing, Networking, Storage and Analysis, and USENIX workshops.
The journal is indexed in databases produced by Elsevier (Scopus) and in indexing services from Clarivate Analytics. Abstracting coverage includes INSPEC and DBLP, as well as the catalogues of libraries such as Harvard University Library and the British Library. Citation tracking appears in Web of Science and in datasets curated by Crossref and Google Scholar. Libraries at institutions including Yale University, Columbia University, the University of Oxford, and McGill University carry the journal in their electronic holdings and interlibrary-loan networks.
The journal's impact metrics are reported by Clarivate Analytics and in analyses based on Scopus and Google Scholar Metrics. Articles published in the journal have influenced developments at companies such as Intel Corporation, NVIDIA, Microsoft, and Amazon, and at research centers including Argonne National Laboratory and the National Energy Research Scientific Computing Center. Influential work has been recognized in retrospectives at conferences such as ACM SIGPLAN meetings and in awards from organizations such as the IEEE Computer Society and the Association for Computing Machinery.
The journal has published influential papers connected to the work of the MPI Forum, studies referenced in DARPA reports, and methods papers cited by researchers at Los Alamos National Laboratory and Oak Ridge National Laboratory. Special issues have addressed themes tied to conferences such as the International Conference on Parallel Processing, workshops sponsored by European Commission research programs, and collections edited by scholars associated with the University of Illinois and the Technische Universität München. Notable topics have included large-scale simulations related to the National Center for Supercomputing Applications, data-intensive computing aligned with MapReduce-era research at Google Research, and emerging paradigms influenced by TensorFlow and machine-learning research at Stanford University and DeepMind.
Category:Computer science journals