LLMpedia: The first transparent, open encyclopedia generated by LLMs

tc (Linux)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: eBPF (hop 5)
Expansion funnel: Raw 84 → Dedup 0 → NER 0 → Enqueued 0
tc (Linux)
Name: tc
Developer: Linux Kernel Organization
Released: 1999
Programming language: C
Operating system: Linux
License: GNU General Public License
Website: Linux Foundation

tc is a command-line utility for configuring the kernel packet scheduler on Linux systems. It provides mechanisms to control bandwidth, latency, prioritization, and queuing on network interfaces used by Internet Protocol, Ethernet, and other link-layer technologies. tc interfaces with kernel subsystems and with tools from the Netfilter and iproute2 ecosystems to enforce network policies for services such as Apache HTTP Server, PostgreSQL, and Kubernetes-managed workloads.

Overview

tc operates as part of the iproute2 suite and communicates with the Linux kernel's packet scheduler subsystem via netlink to manage queuing disciplines on interfaces such as eth0 and ens33. Administrators use tc alongside utilities such as ip (iproute2), iptables, nftables, and the older ifconfig to shape traffic for applications including NGINX, OpenSSH, and Docker. tc interacts with kernel features developed by contributors from organizations such as Red Hat, Canonical, Intel, Google, and Cisco Systems to implement queuing and policing for scenarios like Quality of Service in Voice over IP and streaming media.
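The netlink-based interaction described above can be observed with tc's read-only show commands. The interface name eth0 below is a placeholder; substitute an interface present on your system:

```shell
# List the queuing disciplines currently attached to an interface
tc qdisc show dev eth0

# The -s flag adds per-qdisc statistics (packets, bytes, drops, overlimits)
tc -s qdisc show dev eth0
```

These commands query kernel state over netlink without modifying it, so they are safe to run on a production host.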

Architecture and Components

tc's architecture comprises userland control logic in the iproute2 tools, netlink communication with the kernel, and kernel-side packet scheduler modules implementing queuing disciplines, filters, and classifiers. Key kernel components include the traffic control subsystem (net/sched), queuing discipline modules such as pfifo_fast, and queuing frameworks influenced by research from institutions like Massachusetts Institute of Technology, University of California, Berkeley, and the IETF. tc supports queuing disciplines implemented as kernel modules developed by projects and vendors including Open vSwitch, Broadcom, and Intel Corporation. Integration points include traffic control hooks, network namespace isolation used by systemd, and control groups (cgroups) for container orchestration systems like Kubernetes and Docker Swarm.
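As a sketch of the modular kernel side described above: on typical distribution kernels, qdiscs ship as loadable modules named sch_*, which the tc userland triggers loading of on demand. The paths below assume a standard module layout and may differ across distributions:

```shell
# List the queuing-discipline modules shipped with the running kernel
# (sch_htb, sch_netem, sch_fq_codel, etc.)
ls /lib/modules/"$(uname -r)"/kernel/net/sched/ | grep '^sch_'

# Classifier and action modules follow similar naming (cls_*, act_*)
ls /lib/modules/"$(uname -r)"/kernel/net/sched/ | grep -E '^(cls_|act_)'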

Usage and Examples

Common tc workflows use commands to add, change, delete, and show qdiscs, classes, and filters on interfaces managed by systemd-networkd or by network managers from Canonical and Red Hat. For example, administrators shaping traffic for PostgreSQL replication streams or prioritizing SSH can combine tc with iptables or nftables classifiers and with flow identifiers used by OpenVPN and WireGuard. Examples often reference concepts and tools from Linux Foundation projects like Cilium and Flannel for cloud networking, and tc is used in environments run by companies such as Amazon Web Services, Google Cloud Platform, Microsoft Azure, DigitalOcean, and OVHcloud.
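A minimal add/show/delete workflow, using the classless token bucket filter (tbf) qdisc for simple rate limiting. The interface name and rate are illustrative; these commands modify kernel state and require root:

```shell
# Cap egress on eth0 to 10 Mbit/s with a token bucket filter
tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 50ms

# Verify the qdisc and watch its counters
tc -s qdisc show dev eth0

# Remove the shaping and restore the default qdisc
tc qdisc del dev eth0 root
```

The burst parameter sizes the token bucket (how much traffic may briefly exceed the rate), and latency bounds how long packets may wait in the bucket before being dropped.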

Classification and Queuing Disciplines

tc supports a range of queuing disciplines (qdiscs), both classful and classless, with implementations such as pfifo_fast, Hierarchical Token Bucket (HTB), and Stochastic Fairness Queueing (SFQ) influenced by standards from IETF working groups and research from Stanford University. Classifiers used by tc include u32, fw, route, and rsvp, which intersect with protocols and standards developed by the IEEE, the IETF, and organizations such as 3GPP and ETSI for mobile and fixed networks. Vendors like Broadcom, Intel Corporation, and Mellanox Technologies provide hardware offloads and drivers that expose capabilities consumed by tc's qdiscs on platforms deployed by Facebook and Twitter.
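A classful hierarchy combining the qdiscs named above: an HTB root splits bandwidth between two classes, each with an SFQ leaf for per-flow fairness. Interface name, handles, and rates are illustrative placeholders; root privileges are required:

```shell
# HTB root: unclassified traffic falls into class 1:20 by default
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit

# Two child classes that may borrow up to the parent's ceiling
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 70mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 30mbit ceil 100mbit

# SFQ leaf qdiscs give individual flows a fair share within each class
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
```

HTB's rate/ceil split is what makes it "hierarchical": a class guaranteed 30 Mbit/s can still borrow idle bandwidth from its sibling up to the 100 Mbit/s ceiling.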

Filters and Policing

Filters in tc match traffic based on keys from protocols such as IPv4, IPv6, TCP (Transmission Control Protocol), and UDP (User Datagram Protocol), and leverage kernel frameworks and helpers authored by contributors from Netfilter, the Linux Kernel Organization, and projects like Open vSwitch. Policing actions include pass, drop, and reclassify, implemented via qdisc policers and egress shapers used by service providers like AT&T, Verizon, and Deutsche Telekom. Complex filter chains often integrate with monitoring and observability systems such as Prometheus, Grafana, and the ELK Stack deployed by operators like Netflix and Airbnb.
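The u32 classifier and a policing filter can be sketched as follows. The first filter assumes the HTB tree from the previous section exists under handle 1: and steers SSH traffic into class 1:10; the second attaches an ingress qdisc and drops traffic exceeding 1 Mbit/s. All names and rates are placeholders:

```shell
# u32 filter: match TCP/UDP destination port 22 (SSH) and send it to class 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 22 0xffff flowid 1:10

# Ingress policing: attach the special ingress qdisc, then police all
# incoming IP traffic ("match u32 0 0" matches everything) to 1 Mbit/s
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
    match u32 0 0 police rate 1mbit burst 10k drop
```

Ingress policing drops excess packets rather than queuing them, since the ingress path has no queue to shape into.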

Performance and Use Cases

tc is used for latency-sensitive workloads including Voice over IP, online gaming, and real-time streaming, and for traffic management in data centers run by Google, Amazon, and Microsoft. Performance considerations involve kernel version features, CPU and NUMA topology on hardware from Intel and AMD, and NIC offload support from vendors like Mellanox Technologies and Broadcom. Use cases include traffic shaping for multi-tenant clouds managed by OpenStack, enforcing service-level objectives for distributed databases like Cassandra and MongoDB, and prioritizing control plane traffic in Kubernetes clusters.
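For the latency-sensitive testing scenarios mentioned above, the netem qdisc is commonly used to emulate network impairments rather than shape traffic. A sketch, with illustrative delay and loss values (requires root, and should be run on a test interface or network namespace rather than a production link):

```shell
# Emulate a 100 ms delay with 10 ms of jitter on outgoing packets
tc qdisc add dev eth0 root netem delay 100ms 10ms

# Change the emulation in place to also drop 0.1% of packets
tc qdisc change dev eth0 root netem delay 100ms 10ms loss 0.1%

# Remove the emulation
tc qdisc del dev eth0 root
```

Because netem applies on egress, emulating delay in both directions typically requires configuring it on both endpoints or on an intermediate bridge.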

Development and History

tc evolved from early packet scheduling work in the Linux Kernel Organization and contributions from developers affiliated with IETF and academic institutions such as University of California, Berkeley and Massachusetts Institute of Technology. The tool is maintained within the iproute2 project with contributions from companies including Red Hat, Intel Corporation, Google, and Cisco Systems. Over time, tc incorporated queuing disciplines inspired by research from Stanford University, standards from IETF working groups, and production requirements from operators like Verizon and AT&T, while evolving alongside networking stacks used in projects such as Open vSwitch, Cilium, and Calico.

Category:Linux networking