LLMpedia: the first transparent, open encyclopedia generated by LLMs

TPC

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: IBM Db2 (hop 4)
Expansion funnel: Raw 62 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 62
2. After dedup: 0
3. After NER: 0
4. Enqueued: 0
TPC
Name: TPC
Abbreviation: TPC
Type: Benchmarking standard
Developer: Transaction Processing Performance Council
First published: 1988

The TPC designation refers to a family of industry-standard benchmarks and associated specifications developed to evaluate the performance and price/performance of data processing systems. It is widely used by vendors, integrators, and researchers to compare server hardware, storage, networking, and database management systems under workloads modeled on real-world transaction-processing and decision-support scenarios. Published results influence procurement decisions by enterprises, government agencies, cloud providers, and research laboratories.

Definition and abbreviations

The acronym TPC commonly stands for the Transaction Processing Performance Council, the consortium that defines the benchmarks, but it also identifies the benchmark suite used for evaluating transactional throughput and analytical query performance. Notable benchmark names include TPC-C, TPC-E, TPC-H, TPC-DS, TPCx-HS, and TPCx-BB; these names are often cited alongside vendor platforms such as IBM, Oracle Corporation, Microsoft, Dell Technologies, and Amazon Web Services. The council’s documents reference metrics like tpmC (transactions per minute C), QphH (queries per hour H), and price/performance ratios reported in dollars per tpmC or dollars per QphH; such metrics are used by organizations like Intel Corporation, NVIDIA, Hewlett Packard Enterprise, and Google for marketing, procurement, and comparative studies.
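To illustrate how these headline metrics relate, the sketch below (hypothetical numbers, not an official TPC tool) derives a tpmC-style throughput figure from a steady-state transaction count and the dollars-per-tpmC price/performance ratio reported in disclosures:

```python
# Illustrative only: how the headline metrics in a TPC-C-style disclosure
# combine. The transaction counts and system price here are invented.

def tpmc(new_order_count: int, measurement_minutes: float) -> float:
    """Throughput: completed New-Order transactions per minute of steady state."""
    return new_order_count / measurement_minutes

def price_performance(total_system_price: float, tpmc_value: float) -> float:
    """Price/performance ratio in dollars per tpmC."""
    return total_system_price / tpmc_value

rate = tpmc(new_order_count=1_260_000, measurement_minutes=120)
print(round(rate))                                      # 10500 tpmC
print(round(price_performance(5_250_000.0, rate), 2))   # 500.0 $/tpmC
```

Real disclosures price a full audited configuration (hardware, software, and support over a multi-year period), so the single `total_system_price` input is a simplification.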

History and development

The TPC emerged in the late 1980s from needs articulated by database vendors, integrators, and research groups for reproducible, audited workloads covering online transaction processing and decision support. Early adoption involved participants including IBM, Oracle Corporation, Digital Equipment Corporation, and Sun Microsystems, collaborating with academic groups from institutions such as the Massachusetts Institute of Technology and Stanford University. Over subsequent decades the council released successive benchmarks: TPC-A and TPC-B (retired), TPC-C (the transactional industry standard), TPC-H and TPC-R (analytical, with TPC-R retired), and newer benchmarks such as TPC-E and TPC-DS to reflect evolving workloads described in reports from the National Institute of Standards and Technology and studies by the ACM and IEEE. Major revisions responded to hardware advances from Intel Corporation and AMD, storage innovations from Seagate Technology and Western Digital Corporation, and networking shifts involving Cisco Systems and Arista Networks.

Types and classifications

TPC benchmarks are broadly classified into online transaction processing (OLTP), decision support systems (DSS), and big data/scale-out workloads. Representative classifications include:
- OLTP: TPC-C (warehouse-centric order entry) and TPC-E (brokerage trading), with metrics such as tpmC and tpsE, often run on platforms from Oracle Corporation, Microsoft, SAP SE, and IBM.
- DSS/analytics: TPC-H (ad hoc reporting) and TPC-DS (complex analytics over star-schema data), used to compare systems from Teradata, Snowflake, Cloudera, and Databricks.
- Big data/scale-out: TPCx-HS (Hadoop/Spark sort) and TPCx-BB (BigBench), which reference ecosystems including Apache Hadoop, Apache Spark, and Apache Hive, and vendors such as Hortonworks and MapR.
- Specialized and retired: TPC-A, TPC-B, and TPC-R for legacy workloads, plus community-driven harnesses adopted by research groups at Carnegie Mellon University and the University of California, Berkeley.
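The order-entry pattern that OLTP benchmarks in the TPC-C style exercise can be sketched in miniature. The toy schema and table names below are invented for illustration (the real TPC-C schema has nine tables and strict per-transaction profiles); the sketch only shows the characteristic access pattern of a New-Order transaction: read and bump a sequence counter, adjust stock, and insert the order, all atomically.

```python
import sqlite3

# Toy stand-in for a TPC-C-style New-Order transaction. Schema and values
# are illustrative, not taken from the TPC-C specification.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE district (d_id INTEGER PRIMARY KEY, d_next_o_id INTEGER);
CREATE TABLE stock    (s_i_id INTEGER PRIMARY KEY, s_quantity INTEGER);
CREATE TABLE orders   (o_id INTEGER, o_d_id INTEGER);
INSERT INTO district VALUES (1, 3001);
INSERT INTO stock VALUES (42, 50);
""")

def new_order(d_id: int, item_id: int, qty: int) -> int:
    """Read+increment the district's next order id, decrement stock,
    and insert the order row in a single transaction."""
    with conn:  # commits on success, rolls back on error
        (o_id,) = conn.execute(
            "SELECT d_next_o_id FROM district WHERE d_id=?", (d_id,)).fetchone()
        conn.execute(
            "UPDATE district SET d_next_o_id = d_next_o_id + 1 WHERE d_id=?", (d_id,))
        conn.execute(
            "UPDATE stock SET s_quantity = s_quantity - ? WHERE s_i_id=?", (qty, item_id))
        conn.execute("INSERT INTO orders VALUES (?, ?)", (o_id, d_id))
    return o_id

print(new_order(1, 42, 5))  # 3001
```

The `with conn:` block is what makes the three writes atomic; a production OLTP system under benchmark load would additionally contend on the district row, which is exactly the hot spot such workloads are designed to stress.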

Applications and industries

TPC benchmarks are applied across finance, retail, telecommunications, healthcare, government, and cloud services for procurement validation, competitive positioning, and capacity planning. Financial trading firms such as Goldman Sachs and JPMorgan Chase consult TPC-E-style metrics when evaluating low-latency platforms from NVIDIA and Intel Corporation, while retailers like Walmart and Amazon.com analyze TPC-C-class workloads to size order management systems on platforms from Oracle Corporation or SAP SE. Telecommunications carriers such as Verizon Communications and AT&T use analytical results from TPC-DS-class assessments when planning analytics clusters built on infrastructure from Dell Technologies and Cisco Systems. Government agencies follow procurement practices shaped by the General Services Administration and standards guidance from the National Institute of Standards and Technology when referencing published TPC reports.

Standards and benchmarking

TPC produces formal benchmark specifications, execution rules, and auditing requirements; valid results must be publicly disclosed, audited by independent firms like PricewaterhouseCoopers or KPMG, and conform to published run and reporting rules. The council interacts with standards bodies and conferences such as ISO, INCITS, ACM SIGMOD, and VLDB to align definitions and present methodological research. Published result tables list system configuration, component suppliers (processors from Intel Corporation or AMD; storage from Samsung Electronics and Micron Technology), measurement periods, and price/performance calculations; vendors use these certified disclosures in submissions to trade shows such as VMworld and Oracle OpenWorld.

Technical design and implementation

Each benchmark defines schema, data generation tools, query mixes, transaction profiles, and ramp/steady-state execution protocols. Implementations require careful tuning of database management systems—such as PostgreSQL, MySQL, Oracle Database, Microsoft SQL Server—operating systems like Red Hat Enterprise Linux or Microsoft Windows Server, and middleware stacks including Apache Kafka for ingestion. The specifications mandate neutral scaling rules (scale factors), result validation checks, timing and concurrency models, and recovery criteria; auditors inspect configuration files, source snippets, and workload trace samples. Modern implementations address parallelism, NUMA effects on processors from Intel Corporation and AMD, NVMe storage arrays from Dell EMC and NetApp, and distributed file systems such as Hadoop Distributed File System and Ceph to meet throughput and latency targets.
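The transaction-mix and ramp/steady-state ideas above can be sketched as a minimal driver loop. This is not the official harness; the mix weights approximate the TPC-C style (New-Order around 45%, Payment around 43%, the rest split among status, delivery, and stock-level checks), and the tick-based timing is an invented simplification:

```python
import random

# Hedged sketch of a benchmark driver: pick transactions from a weighted mix,
# and count only those issued inside the steady-state measurement window
# (i.e., after the ramp-up period). Weights are approximate, not normative.
MIX = [("new_order", 45), ("payment", 43), ("order_status", 4),
       ("delivery", 4), ("stock_level", 4)]

def pick(rng: random.Random) -> str:
    """Select the next transaction type according to the weighted mix."""
    names, weights = zip(*MIX)
    return rng.choices(names, weights=weights, k=1)[0]

def run(total_ticks: int = 1000, ramp_up: int = 200, seed: int = 0) -> dict:
    """Drive the mix for total_ticks; only ticks past ramp_up are counted."""
    rng = random.Random(seed)
    counted = {name: 0 for name, _ in MIX}
    for tick in range(total_ticks):
        txn = pick(rng)          # in a real harness this would execute the txn
        if tick >= ramp_up:      # only steady state contributes to throughput
            counted[txn] += 1
    return counted

counts = run()
print(sum(counts.values()))  # 800 transactions inside the measurement window
```

Dividing the counted New-Order transactions by the window length (in minutes) would yield a tpmC-style rate; real specifications additionally require sustained steady state, checkpoints, and recovery demonstrations before a result is valid.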

Category:Benchmarks