| Shannon–Hartley theorem | |
|---|---|
| Name | Shannon–Hartley theorem |
| Field | Information theory, Electrical engineering |
| Discovered by | Claude Shannon; Ralph Hartley |
| Year | 1948 |
Shannon–Hartley theorem
The Shannon–Hartley theorem gives a fundamental limit on the rate at which information can be transmitted over a communication channel subject to additive white Gaussian noise, relating channel capacity to bandwidth, signal power, and noise power. It connects the work of Claude Shannon and Ralph Hartley with practical engineering at Bell Telephone Laboratories, influencing technologies developed by AT&T and later standards from the IEEE. The theorem underpins modern systems ranging from modulation schemes used by Nokia and Ericsson to link-capacity analyses in designs by NASA and the European Space Agency.
The theorem grew out of foundational research of the 1920s–1940s linking the information concepts of Ralph Hartley with the mathematical formalization by Claude Shannon at Bell Labs. It formalizes capacity as a function of channel bandwidth B and signal-to-noise ratio S/N, and it framed later advances in coding theory by researchers at MIT, Princeton University, and Harvard University. Its historical impact includes influence on the development of pulse-code modulation, quadrature amplitude modulation, and standards issued by the ITU and 3GPP.
The Shannon–Hartley theorem states the channel capacity C (in bits per second) as a function of bandwidth B (in hertz) and signal-to-noise ratio S/N (a dimensionless power ratio). In its canonical form, C = B log₂(1 + S/N). The formula guides performance limits in systems implemented by Intel, Qualcomm, and military projects at DARPA, where bandwidth allocation and power budgets interact with restrictions set by regulators such as the Federal Communications Commission and Ofcom.
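The capacity formula can be sketched in a short script. The function name and the example figures (a 3 kHz voice-grade line at 30 dB SNR) are illustrative assumptions, not from the original text:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits per second.

    bandwidth_hz: channel bandwidth B in hertz.
    snr_db: signal-to-noise ratio in decibels (converted to a linear ratio).
    """
    snr_linear = 10 ** (snr_db / 10)  # dB -> dimensionless power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A classic illustration: a 3 kHz voice-grade channel at 30 dB SNR
# supports roughly 30 kbit/s, close to what late analog modems achieved.
capacity = shannon_capacity(3000, 30)
```

Note that SNR enters the formula as a linear power ratio, not in decibels; forgetting the conversion is a common source of wildly wrong capacity estimates.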
Proofs draw on stochastic process theory and measure-theoretic probability advanced by scholars at the University of Cambridge and the University of Göttingen. The standard derivation treats the channel as an additive white Gaussian noise process, models random codebooks in the tradition of early coding work by Richard Hamming, and uses entropy rates anticipated by Norbert Wiener and rigorized by Andrey Kolmogorov. Rigorous arguments often reference inequalities and transforms used by researchers at Stanford University and Caltech, leveraging properties of Gaussian distributions studied by Carl Friedrich Gauss and statistical methods refined at the University of Chicago.
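A compact statement of the standard derivation, under the usual assumptions of an average power constraint P and one-sided noise power spectral density N₀ (so the in-band noise power is N = N₀B):

```latex
C \;=\; \max_{p(x)\,:\,\mathbb{E}[X^2]\le P} I(X;Y)
  \;=\; B \log_2\!\left(1 + \frac{P}{N_0 B}\right)
  \;=\; B \log_2\!\left(1 + \frac{S}{N}\right)\ \text{bits per second.}
```

The maximum of the mutual information is attained by a Gaussian input distribution, and sampling at the Nyquist rate of 2B symbols per second converts the per-symbol capacity ½ log₂(1 + S/N) into the per-second form above.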
Practically, the theorem sets theoretical upper bounds that inform design choices by Motorola, Sony, and telecommunications operators such as Vodafone. It guides compression and error-control strategies developed at Bell Labs and in industrial research at Siemens. In wireless communications it shapes spectral-efficiency targets for LTE and 5G NR rollouts coordinated by 3GPP and the ITU-R. In satellite and deep-space missions, agencies such as NASA and the European Space Agency use the capacity limit to plan telemetry links and link budgets for probes like Voyager 1 and missions managed by the Jet Propulsion Laboratory. The theorem also influenced information-theoretic approaches to cryptanalysis considered by institutions such as the NSA and standards committees at the IETF.
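One design calculation implied by these spectral-efficiency targets is inverting the capacity formula to find the minimum SNR for a desired rate per unit bandwidth. The function and the 4 bit/s/Hz example below are illustrative assumptions:

```python
import math

def required_snr_db(spectral_efficiency: float) -> float:
    """Minimum SNR (in dB) for a target spectral efficiency C/B in bit/s/Hz.

    Inverts C/B = log2(1 + S/N), giving S/N = 2^(C/B) - 1.
    """
    snr_linear = 2 ** spectral_efficiency - 1
    return 10 * math.log10(snr_linear)

# A link aiming for 4 bit/s/Hz needs at least ~11.8 dB SNR, ignoring
# implementation losses; real modems operate some margin above this floor.
min_snr = required_snr_db(4.0)
```

Because the required linear SNR grows exponentially with spectral efficiency, each additional bit/s/Hz costs roughly 3 dB at high SNR, which is why high-order modulation demands disproportionately clean channels.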
Generalizations include capacity results for channels with memory studied at Bell Labs Research, network information theory advanced by Thomas Cover and Aaron D. Wyner, and multicast capacity explored by teams at IBM Research. The Gaussian channel model has been extended to fading channels relevant to urban deployments by Samsung and to propagation studies at the University of Texas at Austin, to MIMO systems from work by researchers at the University of Southern California and the University of California, San Diego, and to quantum channel capacities investigated at the Perimeter Institute and MIT. Further theoretical extensions connect to joint source-channel coding theorems and to rate-distortion theory, both originating in Claude Shannon's own work and developed by later scholars.
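The MIMO extension mentioned above replaces the scalar SNR term with a matrix determinant. This sketch assumes equal power allocation across transmit antennas and a known channel matrix H; the 2×2 identity example (two non-interfering spatial streams) is illustrative:

```python
import numpy as np

def mimo_capacity(H: np.ndarray, snr_linear: float) -> float:
    """Capacity in bit/s/Hz of a MIMO channel with equal power allocation:
    C = log2 det(I + (SNR / n_t) * H H^H).
    """
    n_r, n_t = H.shape
    gram = H @ H.conj().T                     # n_r x n_r Gram matrix
    sign, logdet = np.linalg.slogdet(np.eye(n_r) + (snr_linear / n_t) * gram)
    return float(logdet / np.log(2))          # convert natural log to log2

# Two parallel, non-interfering spatial streams (H = I) at a linear SNR of 3:
H = np.eye(2)
c = mimo_capacity(H, 3.0)
```

With a single antenna on each side the formula reduces to the scalar Shannon–Hartley expression, which is a useful sanity check when experimenting with channel matrices.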