LLMpedia: the first transparent, open encyclopedia generated by LLMs

Nyquist–Shannon sampling theorem

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Harry Nyquist · Hop 4
Expansion Funnel Raw 92 → Dedup 0 → NER 0 → Enqueued 0
Nyquist–Shannon sampling theorem
Image: original by Metacomet; vector by Editor at Large · Public domain
Name: Nyquist–Shannon sampling theorem
Field: Signal processing
Discovered by: Harry Nyquist, Claude Shannon
First published: 1928, 1949

The Nyquist–Shannon sampling theorem formalizes the conditions under which a continuous-time signal can be exactly reconstructed from discrete samples; it underpins digital communication, analog-to-digital conversion, and information theory. Originating in work by Harry Nyquist and later given its definitive form by Claude Shannon, the theorem links harmonic analysis, Fourier theory, and practical engineering for telephony, radio, and computing. It bridges continuous models in physics and discrete implementations in devices by specifying a critical sampling rate tied to a signal's bandwidth.

History

The theorem evolved from Harry Nyquist's research on telegraph transmission and thermal noise, Ralph Hartley's developments in information measures, and Claude Shannon's formalization in his landmark work on information theory alongside the Shannon–Hartley theorem. Early mathematical antecedents include E. T. Whittaker's work on cardinal-series interpolation and Vladimir Kotelnikov's independent 1933 statement of the result; independent formulations were also published by Herbert Raabe and Isao Someya. Practical impetus came from engineering projects at institutions including Bell Laboratories, AT&T, and Western Electric, and was influenced by technological milestones such as telephony, radio broadcasting, radar, sonar, and pulse-code modulation. Subsequent adoption intersected with standards bodies like the ITU and organizations such as the IEEE, shaping systems developed by companies like RCA, Motorola, Intel, Texas Instruments, and National Semiconductor.

Statement and variants

The basic statement asserts that a bandlimited signal whose highest frequency component is F_max can be exactly reconstructed from samples taken at a rate exceeding 2·F_max; the threshold rate 2·F_max itself is called the Nyquist rate. This formulation appears alongside sampling-theorem variants attributed to Vladimir Kotelnikov and E. T. Whittaker. Equivalent formulations relate to results in Fourier analysis, to expansions used by Norbert Wiener, and to Andrey Kolmogorov's work on stochastic processes; discrete counterparts connect to the Z-transform. Variants include the bandpass (undersampling) theorem exploited in systems from companies like Analog Devices, multirate sampling in the work of P. P. Vaidyanathan, and generalized Paley–Wiener conditions studied by I. M. Gelfand and M. G. Krein.
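
The consequence of violating the criterion can be checked numerically. The following sketch (illustrative only; the frequencies and sample count are chosen for the example, not taken from the article) samples a 7 Hz tone at 10 Hz, below the required 14 Hz, and shows that the samples coincide with those of a 3 Hz alias:

```python
import math

# A 7 Hz sine sampled at fs = 10 Hz violates fs > 2 * 7 = 14 Hz.
# At the sample instants t = n / fs, sin(2*pi*7*t) = -sin(2*pi*3*t),
# so the samples are indistinguishable from those of a 3 Hz alias.
fs = 10.0  # sampling rate in Hz (deliberately too low)
samples_7hz = [math.sin(2 * math.pi * 7 * n / fs) for n in range(20)]
samples_alias = [-math.sin(2 * math.pi * 3 * n / fs) for n in range(20)]

# The two sample sequences agree to floating-point precision.
max_gap = max(abs(a - b) for a, b in zip(samples_7hz, samples_alias))
assert max_gap < 1e-9
```

Once the sampler has run, no processing can distinguish the two tones; this is why the criterion must hold before sampling, not after.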

Mathematical proof and derivation

Proofs typically use the Fourier transform and the Poisson summation formula as treated in texts by Akhiezer and Rudin, or employ sinc-interpolation kernels related to the Whittaker–Shannon interpolation formula. Rigorous derivations draw on results by Salomon Bochner and rely on sampling-series convergence theorems explored by Jean-Pierre Kahane and Paul Koosis. Alternative proofs use reproducing kernel Hilbert spaces in the tradition of Norman Levinson and Arne Beurling, or exploit distribution theory popularized by Laurent Schwartz and spectral factorization techniques employed by Norbert Wiener. For stochastic signals, proofs utilize ergodic theory and the Wiener–Khinchin theorem referenced in work by A. Papoulis and S. Karlin.
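
The frequency-domain core of the standard argument can be summarized in a few lines (a conventional sketch, with f_s the sampling rate and X the Fourier transform of x):

```latex
% Ideal sampling replicates the spectrum (Poisson summation formula):
x_s(t) = \sum_{n=-\infty}^{\infty} x\!\left(\tfrac{n}{f_s}\right)
         \delta\!\left(t - \tfrac{n}{f_s}\right)
\;\Longrightarrow\;
X_s(f) = f_s \sum_{k=-\infty}^{\infty} X(f - k f_s).
% If X(f) = 0 for |f| > F_{\max} and f_s > 2 F_{\max}, the shifted
% copies X(f - k f_s) do not overlap, so an ideal lowpass filter with
% cutoff f_s / 2 recovers X exactly, giving the interpolation formula
x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\tfrac{n}{f_s}\right)
       \operatorname{sinc}\!\left(f_s t - n\right),
\qquad
\operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
```

Overlap of the spectral copies when f_s ≤ 2 F_max is precisely aliasing: the summed replicas cannot be separated by any filter.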

Practical considerations and sampling rate criteria

Real-world systems must incorporate anti-aliasing filters designed with classical analog filter theory (Butterworth, Chebyshev, and elliptic approximations) and built on component advances from vendors such as Texas Instruments and Maxim Integrated, as well as quantization concepts advanced by Shannon and engineering practices codified by the IEEE. Practical sampling rates routinely exceed the theoretical Nyquist rate to allow for transition bands and filter roll-off, a convention reflected in audio standards from the AES and digital telephony standards from the ITU-T. Criteria for oversampling, decimation, and sigma-delta modulation draw on work by S. R. Norsworthy, R. Schreier, and G. C. Temes, while multirate DSP techniques trace to research by Crochiere and Rabiner. Design choices also affect EMC compliance overseen by bodies like the FCC and interoperability frameworks developed by MPEG and ISO.
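
The guard-band convention can be made concrete with a small sketch. The helper below is hypothetical (its name and the chosen transition widths are illustrative, not from any standard), but the rule it encodes, sampling at twice the passband edge plus the filter's transition width, reproduces familiar rates:

```python
# Hypothetical helper: pick a sampling rate that leaves room for the
# anti-aliasing filter's transition band above the signal passband.
def practical_sample_rate(f_max_hz, transition_hz):
    # The theorem requires fs > 2 * f_max; a realizable filter also
    # needs roll-off room, so sample at 2 * (f_max + transition).
    return 2.0 * (f_max_hz + transition_hz)

# A 20 kHz audio passband plus ~2.05 kHz of roll-off room lands at the
# familiar 44.1 kHz CD rate; a 3.4 kHz telephony passband plus 600 Hz
# of roll-off gives the standard 8 kHz telephony rate.
cd_rate = practical_sample_rate(20_000, 2_050)    # 44100.0
phone_rate = practical_sample_rate(3_400, 600)    # 8000.0
```

Sigma-delta converters push this idea further, oversampling far beyond the Nyquist rate so that a simple analog filter suffices and the burden shifts to digital decimation.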

Applications and implementations

The theorem is fundamental to technologies ranging from analog-to-digital converters by companies such as Analog Devices and Texas Instruments to audio systems by Dolby Laboratories, digital imaging sensors by Sony and Canon, and telecommunications infrastructure deployed by Ericsson, Nokia, and Huawei. It underlies compression standards such as MP3, AAC, and MPEG-2, medical imaging modalities including MRI and CT, and instrumentation used at laboratories such as CERN and MIT Lincoln Laboratory. Implementations appear in microprocessors from Intel and ARM, software libraries from MathWorks and GNU, and embedded systems from STMicroelectronics.

Generalizations and related results

Generalizations connect to the Shannon–Hartley theorem in channel-capacity limits, such as those studied at Bell Labs, and to nonuniform sampling frameworks developed by Karl Gröchenig and Akram Aldroubi. Related results include compressed sensing initiated by David Donoho, sparse sampling theory advanced by Emmanuel Candès and Terence Tao, and frame theory from Duffin and Schaeffer. Extensions reach into harmonic analysis in the tradition of Elias Stein, wavelet theory by Ingrid Daubechies and Yves Meyer, and time-frequency analysis as used by Leon Cohen and Dennis Gabor in work on the short-time Fourier transform.

Category:Sampling theorems