| TPC (benchmark) | |
|---|---|
| Name | TPC benchmark |
| Developer | Transaction Processing Performance Council |
| Genre | performance benchmark |
TPC (benchmark) is a family of standardized performance benchmarks for measuring the throughput, latency, and price/performance of database systems, transaction processing platforms, and data-intensive applications. It is developed and maintained by the Transaction Processing Performance Council and is used by vendors, researchers, and procurement organizations to compare hardware, software, and system configurations across the information technology industry. The benchmarks influence purchasing decisions and academic evaluations at major firms and institutions.
The benchmarks provide reproducible workloads and reporting formats for evaluating systems produced by firms such as IBM, Oracle Corporation, Microsoft, SAP SE, and Amazon Web Services; they address industry needs articulated by consortium members such as Intel Corporation, AMD, Dell Technologies, Hewlett Packard Enterprise, and Cisco Systems. The suite includes transaction-oriented and decision-support workloads that reflect use cases encountered at organizations such as Walmart, Bank of America, Goldman Sachs, JPMorgan Chase, and Citigroup. Governance and standards work draws input from participants including the University of California, Berkeley, the Massachusetts Institute of Technology, Stanford University, Carnegie Mellon University, and the University of Toronto.
Origins trace to industry efforts in the late 1980s (the council was chartered in 1988), when trade groups and vendors debated comparable measures, with milestones involving organizations such as Bell Laboratories, DEC, Sun Microsystems, and Sequent Computer Systems. Formal chartering under the consortium followed models used by bodies such as the IEEE, ISO, and IETF to create reproducible test specifications. Major revisions paralleled technological shifts exemplified by the rise of Teradata, Sybase, Informix, PostgreSQL, and MySQL, and later adaptations for cloud platforms such as Google Cloud and Microsoft Azure. Working groups collaborated with representatives from the National Institute of Standards and Technology, the European Commission, and corporate research labs such as Bell Labs and IBM Research.
The council has defined multiple suites to reflect distinct domains, and vendors implement them to produce published results for systems such as Oracle Database, Microsoft SQL Server, Amazon Aurora, and products from MongoDB, Inc. and Couchbase, Inc. Workloads emulate scenarios similar to operations run by American Express, Visa, Mastercard, FedEx, and UPS. Typical suites include transaction-processing workloads comparable to systems used at Target Corporation, Costco Wholesale, and Home Depot, and decision-support/analytics workloads like those run at Facebook, Twitter, and LinkedIn. Specialized variants have been crafted to assess mainframe-centric platforms such as IBM Z and distributed analytic clusters leveraging Apache Software Foundation frameworks such as Apache Hadoop and Apache Spark.
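For illustration, the following minimal Python sketch mimics the short read-modify-write transaction pattern that OLTP-oriented suites such as TPC-C emulate. The schema, table names, and logic here are simplified stand-ins chosen for this example, not the official specification.

```python
# A minimal sketch of the short transactional step (check stock, update it,
# record an order) that OLTP-style benchmark workloads repeat at high rates.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stock  (item_id INTEGER PRIMARY KEY, quantity INTEGER);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY AUTOINCREMENT,
                         item_id INTEGER, amount INTEGER);
    INSERT INTO stock VALUES (1, 100);
""")

def new_order(item_id: int, amount: int) -> None:
    """Decrement stock and record the order atomically, mirroring the
    read-modify-write pattern of a transactional benchmark step."""
    with conn:  # commits on success, rolls back on exception
        (qty,) = conn.execute(
            "SELECT quantity FROM stock WHERE item_id = ?", (item_id,)
        ).fetchone()
        if qty < amount:
            raise ValueError("insufficient stock")
        conn.execute("UPDATE stock SET quantity = quantity - ? WHERE item_id = ?",
                     (amount, item_id))
        conn.execute("INSERT INTO orders (item_id, amount) VALUES (?, ?)",
                     (item_id, amount))

new_order(1, 5)
print(conn.execute("SELECT quantity FROM stock").fetchone())  # (95,)
```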
Primary metrics quantify throughput in units such as transactions per minute (as in TPC-C) and queries per hour (as in TPC-H), together with price/performance ratios, a reporting approach comparable to that of SPEC; buyers at organizations such as the World Bank, the International Monetary Fund, and the United Nations may examine these figures. The methodology emphasizes repeatability, auditability, and documented setup procedures, in the spirit of standards bodies like the W3C and OASIS. Results typically accompany full disclosure of hardware, including GPUs from NVIDIA Corporation in accelerated systems, storage arrays from NetApp, Inc. and EMC Corporation, and networking gear from Arista Networks and Juniper Networks.
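The headline figures can be illustrated with a simplified calculation. The formulas below are sketches of the general idea only; the precise metric definitions (for example, tpmC for TPC-C and QphH@Size for TPC-H) are given in the individual specifications, and the input numbers here are hypothetical.

```python
# Illustrative only: how TPC-style headline metrics are derived in outline.

def transactions_per_minute(completed: int, measurement_seconds: float) -> float:
    """Throughput: valid transactions completed per minute of measured run."""
    return completed / (measurement_seconds / 60.0)

def price_performance(total_system_cost_usd: float, throughput: float) -> float:
    """Price/performance: priced-system cost divided by throughput (lower is
    better). Official pricing rules also cover maintenance, omitted here."""
    return total_system_cost_usd / throughput

# Hypothetical numbers, not a published result:
tpm = transactions_per_minute(completed=9_000_000, measurement_seconds=7_200)
print(f"{tpm:,.0f} tpm, ${price_performance(450_000, tpm):.2f} per tpm")
```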
Enterprises such as Procter & Gamble, ExxonMobil, Shell plc, Pfizer, and Johnson & Johnson reference published outcomes in procurement. Academic labs at Princeton University, Yale University, Columbia University, and the University of Cambridge use the suites to compare database engines and architectures. Cloud marketplaces operated by Alibaba Cloud and Oracle Cloud Infrastructure surface certified results to assist cloud architects at Salesforce, ServiceNow, and Workday. Trade publications such as The Wall Street Journal, the Financial Times, and The Economist have reported on benchmark trends and vendor claims.
Critics from research groups at the University of California, Berkeley, the MIT Computer Science and Artificial Intelligence Laboratory, and ETH Zurich argue that standardized workloads do not capture bespoke operational patterns at firms like Netflix or Spotify and may favor vendors that tailor systems to the test. Observers, including analysts at Gartner, Forrester Research, and IDC, note the potential for optimization artifacts and configuration tuning that reduce real-world applicability. Legal and procurement teams at the European Commission and the U.S. Department of Defense have cautioned against sole reliance on published numbers without context from field trials and complementary benchmarks such as SPEC CPU and YCSB.
Implementation requires adherence to published rules, audited run logs, and third-party validation performed by independent laboratories and academic auditors from institutions like the National Institute of Standards and Technology and commercial test houses such as Underwriters Laboratories or SGS S.A. Certification demands disclosure of component lists that may include CPUs from the Arm, Intel Xeon, and AMD EPYC families, storage from Seagate Technology or Western Digital Corporation, and virtualization platforms such as VMware and KVM. Vendors submit reports that are reviewed by the council and published as official entries used by procurement teams at the U.S. Department of Veterans Affairs, NASA, and multinational corporations.
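As an illustration of the disclosure principle, the sketch below records run parameters and system details in a machine-readable form. The field names and structure are hypothetical choices for this example and do not follow the TPC's actual full-disclosure-report format.

```python
# Sketch: capture component and run details alongside a result so the run
# can be audited and reproduced. All field names here are hypothetical.
import json, platform, datetime

disclosure = {
    "benchmark": "example-oltp-workload",  # hypothetical identifier
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "system_under_test": {
        "cpu": platform.processor() or "unknown",
        "os": platform.platform(),
    },
    "run_parameters": {"warmup_seconds": 600, "measurement_seconds": 7_200},
    "result": {"throughput_tpm": 75_000.0},  # placeholder value
}
print(json.dumps(disclosure, indent=2))
```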
Category:Benchmarks