| TPC-C | |
|---|---|
| Name | TPC-C |
| Developer | Transaction Processing Performance Council |
| First release | 1992 |
| Genre | Database benchmark |
| License | Public domain specification |
TPC-C
TPC-C is a standardized benchmark for evaluating online transaction processing (OLTP) performance in database systems. It models a multi-warehouse wholesale supplier and measures transactional throughput under a mixed workload of read and write operations. Hardware vendors, database vendors, cloud providers, and research institutions use the specification to compare system performance across platforms.
The benchmark models a multi-warehouse order-entry environment: a wholesale supplier with geographically distributed sales districts and associated warehouses. Although framed as wholesale distribution, the specification notes that the model is not tied to any particular business segment; it is intended to represent any industry that must manage, sell, or distribute a product or service. Because the workload exercises concurrency control, locking, logging, and disk I/O in patterns typical of order-entry systems, it has been widely used in both industry evaluations and academic studies of transaction processing.
TPC-C was approved by the Transaction Processing Performance Council in July 1992, succeeding the simpler TPC-A and TPC-B benchmarks during a period of rapid growth in relational database deployments. Its design drew on contemporary transaction processing research and on the order-entry workloads of commercial systems of the era. The specification has since been refined through successive revisions, driven by vendor submissions, auditor feedback, and clarifications issued by the council.
The specification defines a schema, transaction types, workload mix, and scaling rules. The schema comprises nine tables: Warehouse, District, Customer, History, Order, New-Order, Order-Line, Item, and Stock. Five transaction types are defined: New-Order, Payment, Order-Status, Delivery, and Stock-Level. The workload mix is constrained so that Payment accounts for at least 43% of transactions and Order-Status, Delivery, and Stock-Level for at least 4% each, leaving roughly 45% for New-Order. The scaling model ties database size to the number of configured warehouses: each warehouse adds ten districts, 30,000 customers, and 100,000 stock rows, so reported throughput can only grow by growing the database. The specification also mandates full ACID properties, including durability of committed transactions under failure.
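The scaling rules above can be sketched numerically. The function below (an illustrative sketch, not part of any official tooling) computes the specification's initial per-warehouse row counts; the Item table is fixed at 100,000 rows regardless of scale.

```python
# Initial TPC-C database cardinalities as a function of the warehouse count.
# Per the specification: 10 districts per warehouse, 3,000 customers per
# district, one initial order and one history row per customer, ~10 order
# lines per order on average, and one stock row per item per warehouse.

def initial_cardinalities(warehouses: int) -> dict:
    """Return initial row counts for a TPC-C database with the given scale."""
    districts = warehouses * 10
    customers = districts * 3_000           # 30,000 per warehouse
    orders = customers                      # one initial order per customer
    return {
        "warehouse": warehouses,
        "district": districts,
        "customer": customers,
        "history": customers,               # one history row per customer
        "order": orders,
        "new_order": warehouses * 9_000,    # last 900 orders per district
        "order_line": orders * 10,          # 5-15 lines per order, avg. 10
        "item": 100_000,                    # fixed, independent of scale
        "stock": warehouses * 100_000,      # one stock row per item/warehouse
    }

if __name__ == "__main__":
    for table, rows in initial_cardinalities(10).items():
        print(f"{table:>10}: {rows:,}")
```

Because every customer, order, and stock row hangs off a warehouse, doubling the warehouse count doubles nearly the entire database, which is what prevents vendors from reporting high throughput against a tiny, cache-resident dataset.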
The primary metric is tpmC: the number of New-Order transactions completed per minute while the system concurrently executes the full five-transaction mix within the specification's response-time constraints. A secondary metric, price/performance, divides the total priced system cost (hardware, software, and three years of maintenance) by tpmC, yielding dollars per tpmC. Audited results are published by the council and have long been cited in vendor marketing, and researchers at venues such as ACM SIGMOD, VLDB, IEEE ICDE, and USENIX use TPC-C and TPC-C-derived workloads to compare concurrency control methods, indexing strategies, and replication protocols.
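As a worked example of the two metrics (the run length, transaction count, and system price below are hypothetical, chosen only to make the arithmetic concrete):

```python
# Illustrative tpmC and price/performance computation.
# tpmC counts only New-Order transactions completed per minute while the
# full five-transaction mix runs; price/performance divides the total
# priced system cost (including three years of maintenance) by tpmC.

def tpmc(new_order_txns: int, measurement_minutes: float) -> float:
    """New-Order transactions per minute over the measurement interval."""
    return new_order_txns / measurement_minutes

def price_performance(total_system_cost_usd: float, tpmc_value: float) -> float:
    """Dollars per tpmC."""
    return total_system_cost_usd / tpmc_value

# Hypothetical run: 12,000,000 New-Order transactions over a 120-minute
# measurement interval on a system priced at $500,000 all-in.
rate = tpmc(12_000_000, 120)                          # 100,000 tpmC
dollars = price_performance(500_000, rate)            # $5.00 per tpmC
print(f"{rate:,.0f} tpmC at ${dollars:.2f}/tpmC")
```

Note that because tpmC counts only New-Order transactions, the system is in fact processing more than twice that many transactions per minute once the Payment and other mix components are included.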
Producing a compliant result requires careful tuning of the entire stack, including the operating system, storage, middleware, and database, and strict adherence to the council's run and disclosure rules. Results must be reviewed by a TPC-certified independent auditor and accompanied by a full disclosure report before publication. Commercial database vendors have published audited results across many hardware and storage configurations; open-source database systems more commonly appear in unaudited, TPC-C-like workloads used for research and internal benchmarking rather than in official council results.
Critics in academia and industry have argued that the benchmark favors certain architectures and can be gamed through vendor-specific tuning, and that audited configurations often bear little resemblance to production deployments. Real-world OLTP workloads frequently differ from the TPC-C model in access skew, transaction complexity, and think-time behavior, and the benchmark does not represent analytical or decision-support workloads at all. The council introduced TPC-E in 2007 as a more realistic OLTP benchmark, and TPC-H and TPC-DS target decision support; academic groups have also proposed alternative benchmarks and TPC-C-derived workloads to address these gaps.
Category:Benchmarks