| SMT-COMP | |
|---|---|
| Name | SMT-COMP |
| Status | Active |
| Discipline | Automated theorem proving |
| Frequency | Annual |
| First | 2005 |
| Organizer | SMT-LIB Initiative |
SMT-COMP is an annual international competition for satisfiability modulo theories (SMT) solvers that evaluates their performance on standardized benchmarks. It brings together developers, researchers, and teams from institutions such as Microsoft Research, Google, Stanford University, the Massachusetts Institute of Technology, and the University of Cambridge to compare tools used in verification, synthesis, and static analysis. The event influences research agendas at venues such as CADE, IJCAR, CAV, and TACAS and shapes benchmark collections used by projects at NASA, Intel Corporation, ARM Holdings, and Bloomberg L.P.
SMT-COMP measures solver capabilities on problems drawn from the SMT-LIB repository, using a fixed execution environment provided by organizations such as Carnegie Mellon University, University of Oxford, ETH Zurich, and CNRS. The competition emphasizes soundness, correctness, and performance under resource constraints modeled after infrastructure maintained by Amazon Web Services, Google Cloud Platform, and Hewlett-Packard Enterprise. Awards and rankings are announced in sessions at conferences like CAV and FLoC and publicized via mailing lists maintained by SMT-LIB and research groups at University of Illinois Urbana-Champaign and Princeton University.
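To make the evaluation setting concrete, the sketch below encodes a tiny quantifier-free linear integer arithmetic (QF_LIA) problem of the kind found in SMT-LIB, using the z3-solver Python bindings. This is an illustration rather than competition material: actual benchmarks are distributed as standalone .smt2 files and are run on the competition's own infrastructure.

```python
# Minimal sketch of a QF_LIA-style problem, posed through the z3-solver
# Python bindings (pip install z3-solver). Illustrative only; it is not
# drawn from the SMT-LIB benchmark repository.
from z3 import Int, Solver, sat

x, y = Int("x"), Int("y")

s = Solver()
s.add(x + y == 12)   # linear integer constraints
s.add(x > 0, y > 0)
s.add(2 * x == y)

result = s.check()   # one of sat / unsat / unknown
print(result)        # sat
if result == sat:
    print(s.model()) # a satisfying assignment, e.g. x = 4, y = 8
```

A solver's answer (`sat`, `unsat`, or `unknown`) is what the competition checks against a benchmark's expected status.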
The competition arose from community efforts associated with the SMT-LIB initiative and early solver projects at institutions such as SRI International and IBM Research, as well as the Z3 project. Early editions consolidated benchmarks from tools such as Z3, CVC4, and Yices and engaged research groups from the University of California, Berkeley, University College London, and the KTH Royal Institute of Technology. Over time, SMT-COMP incorporated contributions from teams at Facebook AI Research, Amazon Research, Siemens, and GrammaTech, and evolved in parallel with developments presented at POPL, ICFP, and PLDI.
SMT-COMP is organized into divisions reflecting logical fragments and application areas, such as quantifier-free theories, bitvectors, arrays, and quantified logics, mirroring classifications used by projects at Microsoft Research and Oracle Corporation. Divisions permit specialized entrants from academic labs such as the University of Toronto and EPFL as well as industrial teams including NVIDIA and Siemens AG. Hardware and execution rules reference standards from SPEC and scheduling practices discussed at EuroSys, and scoring protocols draw on methodologies used in the International SAT Competition.
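As a simplified illustration of what such scoring rules describe, the sketch below ranks a division by the number of benchmarks answered correctly, penalizes unsound answers, and breaks ties by cumulative solving time. The penalty weight and tie-breaking are assumptions for illustration; the official formulas have varied between editions.

```python
# Simplified, illustrative division scoring. Assumption: rank by correct
# answers, penalize wrong (unsound) answers, break ties by total time.
# The real SMT-COMP rules differ in detail and have changed over the years.
from dataclasses import dataclass

@dataclass
class Result:
    benchmark: str
    answer: str    # "sat", "unsat", "unknown", or "timeout"
    expected: str  # status annotation from the benchmark, if any
    seconds: float

def score_division(results: list[Result], wrong_penalty: int = 100) -> tuple[int, float]:
    """Return (score, total_time); higher score and then lower time rank better."""
    score, total_time = 0, 0.0
    for r in results:
        if r.answer in ("sat", "unsat"):
            if r.expected in ("sat", "unsat") and r.answer != r.expected:
                score -= wrong_penalty   # unsound answer
            else:
                score += 1               # correctly solved
                total_time += r.seconds
    return score, total_time

runs = [
    Result("bv/rewrite01.smt2", "sat", "sat", 3.2),        # hypothetical benchmark names
    Result("arrays/copy02.smt2", "unsat", "unsat", 11.5),
    Result("lia/hard03.smt2", "timeout", "unsat", 1200.0),
]
print(score_division(runs))  # (2, 14.7)
```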
Benchmarks originate from community repositories including SMT-LIB, case studies from the NASA Jet Propulsion Laboratory, and verification tasks contributed by groups at the Toyota Research Institute, Siemens Research, and ARM Research. Evaluation uses rigorous time limits, memory caps, and correctness checks facilitated by harnesses developed at the University of Manchester and the University of California, Santa Cruz. Statistical analysis of results leverages techniques common in papers at NeurIPS, ICML, and STOC, while reproducibility efforts align with guidelines from the ACM and IEEE.
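To illustrate what such a harness does, the sketch below runs a solver binary on a single .smt2 file under a wall-clock limit and checks the reported answer against the benchmark's expected status. The solver path, benchmark path, and expected status are hypothetical placeholders; real competition harnesses additionally enforce memory caps and run on dedicated, uniform hardware.

```python
# Illustrative single-benchmark harness: run a solver under a time limit
# and validate its answer. "./solver" and the benchmark path are
# hypothetical placeholders, not competition infrastructure.
import subprocess

def run_benchmark(solver: str, benchmark: str, expected: str,
                  timeout_s: float = 1200.0) -> str:
    try:
        proc = subprocess.run([solver, benchmark],
                              capture_output=True, text=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return "timeout"
    out = proc.stdout.strip()
    answer = out.splitlines()[-1] if out else "unknown"
    if answer in ("sat", "unsat") and expected in ("sat", "unsat") and answer != expected:
        return "wrong"   # a soundness error, the most heavily penalized outcome
    return answer        # "sat", "unsat", or "unknown"

if __name__ == "__main__":
    print(run_benchmark("./solver", "benchmarks/example.smt2", expected="unsat"))
```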
Notable participating solvers and teams have included projects such as Z3, CVC4, Boolector, Yices, MathSAT, and Bitwuzla, as well as implementations from groups at the University of Iowa, the University of Edinburgh, the University of Twente, and the University of Pisa. Industrial contributors have fielded entries from Microsoft Research, Google, Amazon, Facebook, and Intel. Community development is often coordinated through workshops hosted by Dagstuhl and presentations at ILC and SAS.
SMT-COMP results have driven advances cited in publications at CAV, LICS, POPL, and ICFEM and informed tool adoption at NASA Ames Research Center, General Electric, and Bosch. Records set by solvers have influenced product features at Microsoft, academic theses at the University of Cambridge, and research directions at the Max Planck Institute for Informatics. High-profile achievements are frequently summarized in keynote talks at CAV and in award announcements coordinated with ACM SIGPLAN and the IEEE Computer Society.
The competition is governed by steering committees drawn from the SMT-LIB community, with organizational support from institutions such as the Max Planck Society, CNRS, the University of Oxford, and Princeton University. Rules, scoring, and benchmark curation are overseen by program chairs, benchmark coordinators, and artifact reviewers who coordinate with conferences including CAV, IJCAR, and CADE. Funding and infrastructure have been provided by a mix of academic grants from agencies such as the National Science Foundation and the Engineering and Physical Sciences Research Council, together with industry sponsorship from Microsoft, Google, and Intel Corporation.
Category:Automated theorem proving competitions