| Fair Queuing | |
|---|---|
| Name | Fair Queuing |
| Type | Scheduling algorithm |
| Inventor | John Nagle |
| Field | Computer networking |
Fair Queuing
Fair Queuing is a packet scheduling technique designed to allocate link bandwidth fairly among competing flows in packet-switched networks. It approximates ideal processor-sharing behavior by serving packets from per-flow queues in a round-robin or weighted fashion, providing isolation between flows and improving quality for interactive and real-time applications. The algorithm and its descendants influenced router design, congestion control research, and quality-of-service mechanisms across academia and industry.
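The per-flow round-robin service described above can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions (equal-size packets, no drop policy); the class and method names are invented for this example:

```python
from collections import deque

class RoundRobinFairQueue:
    """Minimal sketch of fair queuing: one FIFO per flow, served
    round-robin so no single flow can monopolize the output link."""

    def __init__(self):
        self.queues = {}      # flow_id -> deque of packets
        self.order = deque()  # round-robin order of active flows

    def enqueue(self, flow_id, packet):
        # A newly active flow joins the end of the service round.
        if flow_id not in self.queues:
            self.queues[flow_id] = deque()
            self.order.append(flow_id)
        self.queues[flow_id].append(packet)

    def dequeue(self):
        """Serve the next packet, cycling across active flows."""
        if not self.order:
            return None
        flow_id = self.order.popleft()
        q = self.queues[flow_id]
        packet = q.popleft()
        if q:
            self.order.append(flow_id)  # still backlogged: rejoin the round
        else:
            del self.queues[flow_id]    # drained: drop the flow's state
        return packet
```

Even if one flow enqueues many packets at once, competing flows still receive service every round, which is the isolation property the article describes.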
Fair Queuing arose to address unequal bandwidth allocation in routers and switches when bursty and greedy flows coexist. Under the first-in, first-out (FIFO) discipline of early packet switches, a single aggressive sender can starve well-behaved flows; John Nagle proposed per-source queues served round-robin as a remedy in RFC 970 (1985). Fair scheduling also interacts with end-to-end congestion control: Transmission Control Protocol implementations descended from BSD and in the Linux kernel back off in response to the losses and delays that a fair scheduler imposes on over-aggressive flows. The technique has since been adopted by router vendors including Cisco Systems and Juniper Networks and treated extensively in Internet Engineering Task Force documents.
Several variants of Fair Queuing trade off implementation complexity against fidelity to ideal sharing. Packet-based Fair Queuing emulates Generalized Processor Sharing (GPS), an idealized fluid model analyzed by Parekh and Gallager at the Massachusetts Institute of Technology. Weighted Fair Queuing (WFQ), introduced by Demers, Keshav, and Shenker, extends the approach with per-flow weights to support differentiated service. Deficit Round Robin (DRR), proposed by Shreedhar and Varghese, reduces per-packet overhead to constant time and appears in commercial routers and open-source systems such as FreeBSD and the Linux kernel. Hierarchical Fair Service Curve (HFSC), developed at Carnegie Mellon University, decouples latency and bandwidth guarantees through service curves.
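Deficit Round Robin is simple enough to sketch directly. The following Python fragment is an illustrative sketch, not any vendor's implementation (the function name and data layout are chosen for this example): each backlogged flow earns a fixed quantum of byte credit per round and transmits head-of-line packets while its deficit covers them, which handles variable packet sizes without per-packet sorting.

```python
from collections import deque

def drr_schedule(flows, quantum, rounds):
    """Sketch of Deficit Round Robin: `flows` maps flow_id -> deque of
    packet sizes in bytes; returns (flow_id, size) in transmission order."""
    deficit = {f: 0 for f in flows}
    sent = []
    for _ in range(rounds):
        for f, q in flows.items():
            if not q:
                continue
            deficit[f] += quantum           # earn one quantum of credit
            while q and q[0] <= deficit[f]: # send while credit covers head packet
                size = q.popleft()
                deficit[f] -= size
                sent.append((f, size))
            if not q:
                deficit[f] = 0              # empty queues forfeit leftover credit
    return sent
```

Over many rounds each backlogged flow transmits close to `quantum` bytes per round regardless of its packet sizes, which is how DRR approximates byte-level fairness at O(1) cost per packet.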
Analytical work on Fair Queuing evaluates fairness, latency, and throughput under adversarial and stochastic traffic models. Performance proofs typically invoke max-min fairness: an allocation is max-min fair if no flow's rate can be increased without decreasing the rate of a flow that already receives an equal or smaller share. Metric-driven analyses bound each packet's departure time against the ideal fluid model of Generalized Processor Sharing; Parekh and Gallager showed that packet-by-packet approximations lag the fluid schedule by at most one maximum-size packet transmission time per hop. Stability and buffer-sizing properties have also been derived jointly with congestion-control models such as Additive Increase/Multiplicative Decrease (AIMD).
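The max-min criterion has a constructive form, progressive filling: repeatedly split the remaining capacity equally among unsatisfied flows, freeze any flow whose demand is met, and continue. A minimal Python sketch (the function name is invented for this example):

```python
def max_min_allocation(demands, capacity):
    """Progressive filling for max-min fairness: `demands` maps
    flow_id -> demanded rate; returns flow_id -> allocated rate."""
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    cap = float(capacity)
    while active and cap > 1e-12:
        share = cap / len(active)
        # Flows whose remaining demand fits in an equal share are satisfied.
        satisfied = {f for f in active if demands[f] - alloc[f] <= share}
        if satisfied:
            for f in satisfied:
                cap -= demands[f] - alloc[f]
                alloc[f] = float(demands[f])
            active -= satisfied
        else:
            # No flow can be fully satisfied: split the rest equally.
            for f in active:
                alloc[f] += share
            cap = 0.0
    return alloc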
Fair Queuing and its variants are implemented in commercial routers from Cisco Systems, Juniper Networks, and Huawei Technologies, and in open-source stacks: the Linux traffic control (tc) subsystem ships fair-queuing disciplines such as sfq, fq, and fq_codel, and FreeBSD offers comparable queue disciplines through ALTQ and dummynet. Applications include enterprise traffic management, multimedia streaming, where per-flow scheduling reduces jitter for services such as those operated by Netflix and YouTube, and satellite and cellular systems researched by teams at NASA and Qualcomm. Combined with active queue management standardized by the Internet Engineering Task Force, notably FQ-CoDel (RFC 8290), fair queuing is deployed in metropolitan and carrier networks run by operators such as AT&T and Deutsche Telekom.
Empirical evaluations measure fairness indices, latency distributions, and link utilization on testbeds at facilities including the National Institute of Standards and Technology and university laboratories such as those at the University of Illinois Urbana–Champaign. Commonly reported metrics include Jain's fairness index, introduced by Raj Jain and colleagues at Digital Equipment Corporation, along with queuing-delay histograms. Simulation and experimental platforms include ns-2, ns-3, and emulation testbeds used by researchers at institutions such as the Georgia Institute of Technology and the University of Washington. Comparative studies typically contrast Fair Queuing with FIFO scheduling in scenarios drawn from real deployments and from research prototypes funded by agencies such as the National Science Foundation.
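Jain's fairness index is a one-line formula, J(x) = (Σ xᵢ)² / (n · Σ xᵢ²), ranging from 1/n (one flow captures everything) up to 1 (perfect equality). A minimal Python sketch:

```python
def jains_index(throughputs):
    """Jain's fairness index of a list of per-flow throughputs:
    (sum x)^2 / (n * sum x^2). Returns 0.0 for all-zero input."""
    n = len(throughputs)
    total = sum(throughputs)
    squares = sum(x * x for x in throughputs)
    return (total * total) / (n * squares) if squares else 0.0
```

Four flows at 5 Mbit/s each score 1.0, while one flow taking all 20 Mbit/s scores 0.25 (= 1/4), which is why the index is a standard summary statistic in scheduling evaluations.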
The conceptual roots of Fair Queuing trace to processor-sharing models from queueing theory and to scheduling theory developed in the late 20th century. John Nagle proposed fair queuing for packet switches in RFC 970 (1985); Demers, Keshav, and Shenker published the influential bit-round fair queuing algorithm in 1989; and Parekh and Gallager's analysis of Generalized Processor Sharing at the Massachusetts Institute of Technology put weighted variants on a firm analytical footing. This line of work shaped quality-of-service mechanisms standardized by the Internet Engineering Task Force and commercial implementations from Cisco Systems and Juniper Networks. Related lineages include Priority Queuing used in early telephony systems, Class-Based Queuing developed by Floyd and Jacobson, and congestion-control paradigms developed in Internet research projects at MIT and UC Berkeley.
Category:Packet scheduling