LLMpedia: The first transparent, open encyclopedia generated by LLMs

PARSEC (benchmark suite)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: RISC-V (Hop 5)
Expansion Funnel: Raw 73 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 73
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
PARSEC (benchmark suite)
Name: PARSEC
Title: PARSEC (benchmark suite)
Developer: Princeton University; Intel; University of Texas at Austin
Released: 2007
Latest release: 2.1 (2012)
Genre: Benchmark suite; workload characterization; computer architecture
License: BSD-style (original)

PARSEC (Princeton Application Repository for Shared-Memory Computers) is a benchmark suite created to evaluate multicore and parallel system performance, emphasizing realistic multithreaded workloads drawn from academic and commercial software. The suite was developed by researchers at Princeton University, Intel Corporation, and the University of Texas at Austin, with contributions from groups associated with Berkeley Lab, Los Alamos National Laboratory, and industry partners. PARSEC targets the evaluation of multicore processors, chip multiprocessors, memory subsystems, and runtime systems by offering a set of representative benchmarks and standardized input sizes.

Overview

PARSEC was introduced to address shortcomings in earlier suites such as SPEC CPU, TPC-C, and LINPACK by focusing on modern multithreaded applications used in production environments at organizations like Google, Facebook, and Yahoo!. The suite aggregates benchmarks originally developed or used at institutions including the Massachusetts Institute of Technology, Carnegie Mellon University, the University of Illinois at Urbana–Champaign, and the University of California, Berkeley. PARSEC provides workload characterization that complements suites maintained by SPEC and proposals from the OpenMP and MPI Forum communities, offering reproducible, open-source workloads under permissive, BSD-style licensing.

Workloads and Benchmarks

PARSEC includes a mix of compute-bound and memory-bound programs drawn from domains represented by projects at the Apache Software Foundation, Mozilla Foundation, and Blender Foundation, as well as scientific codes used at Argonne National Laboratory and Oak Ridge National Laboratory. Standard benchmarks in PARSEC cover data analytics, multimedia, and scientific computation; well-known workloads include blackscholes (option pricing), bodytrack (computer vision), canneal (simulated annealing for chip design), dedup (data deduplication), ferret (content-based similarity search), fluidanimate (fluid simulation), streamcluster (online clustering), swaptions (financial analytics), and x264 (video encoding). The suite provides multiple input sizes (e.g., simsmall, simmedium, simlarge) to align experiments with the scale used by laboratories such as Lawrence Berkeley National Laboratory and corporations like Microsoft Research and IBM Research. Each workload is accompanied by a reference implementation and harnesses that facilitate integration with performance tools from vendors like Intel and measurement infrastructures similar to those used by Google Research.
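
The harness is typically driven from the command line through the suite's management script (parsecmgmt in the suite's bin/ directory). As a minimal sketch, assuming a PARSEC installation at a hypothetical path and the conventional parsecmgmt flags (-a run, -p <package>, -i <input>, -n <threads>), the following Python script sweeps a few workloads across input sizes and records wall-clock time; the install path, workload list, and thread count are illustrative assumptions, not suite defaults.

    # Sketch: drive the PARSEC management harness over several workloads and
    # input sizes, recording wall-clock time per run. Assumes a local PARSEC
    # installation at PARSEC_DIR and the conventional parsecmgmt flags
    # (-a run -p <package> -i <input> -n <threads>); adjust to your setup.
    import subprocess
    import time
    from pathlib import Path

    PARSEC_DIR = Path("/opt/parsec")                # hypothetical install location
    PARSECMGMT = PARSEC_DIR / "bin" / "parsecmgmt"
    WORKLOADS = ["blackscholes", "streamcluster"]   # example PARSEC packages
    INPUTS = ["simsmall", "simmedium", "simlarge"]  # standard simulation inputs
    THREADS = 4

    def run_workload(package: str, input_size: str, threads: int) -> float:
        """Run one PARSEC workload and return elapsed wall-clock seconds."""
        cmd = [str(PARSECMGMT), "-a", "run", "-p", package,
               "-i", input_size, "-n", str(threads)]
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    if __name__ == "__main__":
        for pkg in WORKLOADS:
            for size in INPUTS:
                elapsed = run_workload(pkg, size, THREADS)
                print(f"{pkg:15s} {size:10s} {elapsed:8.2f} s")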

Design and Methodology

PARSEC's design emphasizes representative workloads, reproducibility, and ease of integration with simulators and hardware prototypes produced at institutions such as Stanford University, Caltech, and ETH Zurich. Methodological choices reflect influences from established practices at the National Institute of Standards and Technology, DARPA-funded initiatives, and benchmarking conventions used by European Processor Initiative collaborators. The suite includes mechanisms for thread pinning, affinity control, and standardized execution scripts to ensure comparability across experiments performed by groups like ARM Holdings, AMD, and NVIDIA. The PARSEC authors provided guidelines on dataset selection, runtime configuration, and statistical treatment of results, consistent with conventions advocated within ACM SIGARCH and at conferences such as ISCA and MICRO.
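
PARSEC's own hooks and run scripts handle pinning internally, but the idea can be illustrated independently. The sketch below, a minimal Linux-only example using Python's standard os.sched_setaffinity call, restricts a launched benchmark process to a fixed CPU set so repeated runs see the same core placement; the CPU set and command shown are placeholders, not PARSEC defaults.

    # Sketch: pin a benchmark process to a fixed CPU set before launching it,
    # so repeated runs see the same core placement. Linux-only
    # (os.sched_setaffinity); the CPU list and command are placeholders.
    import os
    import subprocess

    CPUS = {0, 1, 2, 3}   # illustrative: first four logical CPUs

    def run_pinned(cmd: list[str]) -> int:
        """Run cmd with the child's CPU affinity restricted to CPUS."""
        def set_affinity() -> None:
            os.sched_setaffinity(0, CPUS)   # 0 = the calling (child) process
        proc = subprocess.run(cmd, preexec_fn=set_affinity)
        return proc.returncode

    if __name__ == "__main__":
        # Placeholder command; substitute the actual benchmark invocation.
        run_pinned(["echo", "pinned run"])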

Performance Evaluation and Results

Published evaluations using PARSEC have been cited in papers from venues such as ISCA, MICRO, ASPLOS, and PACT and influenced product roadmaps at companies like Intel Corporation, Qualcomm, and Samsung Electronics. Results typically report metrics including throughput, latency, cache miss rates, and energy efficiency, often measured using infrastructure from SPEC and experimental platforms at Lawrence Livermore National Laboratory and university labs. Comparative studies using PARSEC have demonstrated the effects of cache hierarchies investigated by teams at the Tokyo Institute of Technology, branch prediction strategies studied at the University of Pennsylvania, and memory scheduling algorithms proposed at the University of Michigan.
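
Such derived metrics are usually computed from repeated timed runs. The following sketch (not part of PARSEC's own tooling) turns per-thread-count timing samples into mean runtime, speedup over the single-threaded mean, and parallel efficiency; the sample numbers are invented purely for illustration.

    # Sketch: compute mean runtime, speedup, and parallel efficiency from
    # repeated timing samples per thread count. The numbers below are
    # invented illustrations, not measured PARSEC results.
    from statistics import mean, stdev

    # thread count -> wall-clock seconds over several repetitions (hypothetical)
    samples = {
        1: [120.4, 119.8, 121.1],
        2: [63.0, 62.5, 63.4],
        4: [33.9, 34.2, 33.6],
        8: [19.8, 20.1, 19.9],
    }

    base = mean(samples[1])                     # single-threaded baseline
    for threads, times in sorted(samples.items()):
        avg = mean(times)
        speedup = base / avg
        efficiency = speedup / threads
        print(f"{threads:2d} threads: {avg:7.1f} s (±{stdev(times):.1f}), "
              f"speedup {speedup:4.2f}, efficiency {efficiency:4.2f}")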

Adoption and Use Cases

PARSEC has been adopted by academic groups at MIT, Princeton University, University of Texas at Austin, University of California, San Diego, and industrial labs at Intel Research, IBM Research, and Google. Use cases include architectural simulator validation at projects like gem5, compiler optimization assessments in research from LLVM contributors, and operating-system scheduling studies by researchers affiliated with Red Hat and Microsoft Research. The suite has been integrated into teaching curricula at institutions such as Carnegie Mellon University and University of Cambridge and used in industry benchmarking by startups and corporations similar to Dropbox and LinkedIn for server-class evaluation.

Limitations and Criticisms

Critiques of PARSEC have come from researchers at the University of Illinois and ETH Zurich who argue that the suite may not reflect emerging workloads from platforms operated by Amazon Web Services and Alibaba Group, or the smartphone ecosystems led by Apple Inc. and Google LLC. Critics point out limitations in representing latency-sensitive services prevalent at Netflix and Twitter and in modeling the heterogeneity seen in systems from ARM Ltd. and GPU-accelerated platforms from NVIDIA Corporation. Subsequent benchmark efforts, including domain-specific suites produced by MLPerf contributors and cloud benchmarking initiatives at SPEC and the Cloud Native Computing Foundation, have attempted to address gaps identified in PARSEC by introducing containerized, microservice, and machine-learning workloads.

Category:Computer benchmarks