LLMpedia: The first transparent, open encyclopedia generated by LLMs

Nyquist sampling theorem

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: Raw 66 → Dedup 0 → NER 0 → Enqueued 0
Name: Nyquist sampling theorem
Field: Signal processing
Proposer: Harry Nyquist
Year: 1928
Related: Shannon–Hartley theorem, Whittaker–Shannon interpolation formula

The Nyquist sampling theorem is a fundamental result in signal processing that gives the conditions under which a continuous-time bandlimited signal can be perfectly reconstructed from its samples. It links the maximum frequency content of a signal to the minimum sampling rate required for lossless digitization, and it underpins digital communication and measurement systems across engineering and science.

Introduction

The theorem originated in work by Harry Nyquist and was later formalized in the context of information theory by Claude Shannon; closely related statements are due to E. T. Whittaker and Vladimir Kotelnikov. It addresses the representation of bandlimited functions encountered in analog telephony at Bell Telephone Laboratories, in radar systems developed in United States Navy research, and in early analog-to-digital conversion efforts at institutions including Bell Labs and the Massachusetts Institute of Technology. The result remains central to technologies from companies such as AT&T, IBM, and Intel Corporation and to standards from bodies such as the Institute of Electrical and Electronics Engineers.

Mathematical formulation

Let x(t) be a continuous-time signal whose Fourier transform X(f) is zero for |f| ≥ B (bandlimited to B hertz). The Nyquist sampling theorem states that x(t) is completely determined by its samples x[n] = x(nT) if the sampling interval T satisfies T ≤ 1/(2B); equivalently, the sampling frequency fs = 1/T must satisfy fs ≥ 2B. Reconstruction is achieved by the Whittaker–Shannon interpolation formula using the sinc kernel, x(t) = Σₙ x[n] sinc((t − nT)/T), where sinc(u) = sin(πu)/(πu), under the ideal conditions of exact bandlimitedness and noiseless samples. The proof rests on properties of the Fourier transform: sampling at interval T replicates X(f) at every multiple of fs, the replicas do not overlap precisely when fs ≥ 2B, and an ideal low-pass filter then recovers the original spectrum.
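
The interpolation formula above can be sketched numerically. The example below uses assumed values (a 3 Hz tone, fs = 10 Hz) that are not from the article, and a finite window of samples, so only a small truncation error remains; `np.sinc` is the normalized sinc, sin(πu)/(πu), matching the kernel in the formula.

```python
import numpy as np

# Assumed example values: sample a 3 Hz tone at fs = 10 Hz > 2B = 6 Hz,
# then rebuild an off-grid value with a truncated Whittaker-Shannon sum.
B = 3.0                      # highest frequency present (Hz)
fs = 10.0                    # sampling rate, satisfies fs >= 2B
T = 1.0 / fs                 # sampling interval

n = np.arange(-200, 201)     # finite window of sample indices
samples = np.sin(2 * np.pi * B * n * T)

def reconstruct(t: float) -> float:
    """Truncated interpolation x(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

t = 0.123                    # an instant between sampling points
error = abs(reconstruct(t) - np.sin(2 * np.pi * B * t))
print(error < 1e-2)          # True: only truncation error remains
```

With an infinite sum the reconstruction would be exact; in practice the slow 1/t decay of the sinc kernel is why real systems use windowed or polyphase interpolators instead.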

Sampling rate and Nyquist frequency

The Nyquist rate is twice the highest frequency B present in the signal; the Nyquist frequency is B (or fs/2 when fs is given). For example, consumer audio follows standards such as the Compact Disc specification (fs = 44.1 kHz) to cover the audible band of roughly 20 Hz to 20 kHz studied in the psychoacoustics literature, including work at Bell Labs. Digital telephony standards from the International Telecommunication Union (ITU-T), implemented in systems by Western Electric and others, use an 8 kHz sampling rate to capture the roughly 300 Hz to 3.4 kHz voice band. Sampling theory also interacts with spectral analysis algorithms developed at Bell Labs and AT&T Bell Labs and deployed on computing platforms from Microsoft and Apple Inc.
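
The two rates from this paragraph can be checked directly. The helper functions below are illustrative, not part of any standard library:

```python
# Hypothetical helpers: Nyquist rate (minimum fs = 2B for a band limit B)
# and Nyquist frequency (highest representable frequency at a given fs).
def nyquist_rate(bandwidth_hz: float) -> float:
    """Minimum sampling frequency for lossless sampling of a B-hertz band."""
    return 2.0 * bandwidth_hz

def nyquist_frequency(fs_hz: float) -> float:
    """Highest frequency representable at sampling rate fs."""
    return fs_hz / 2.0

# Compact Disc audio: the audible band extends to about 20 kHz, so the
# Nyquist rate is 40 kHz; the CD's 44.1 kHz leaves margin for a realizable
# anti-aliasing filter.
print(nyquist_rate(20_000.0))       # 40000.0
# ITU-T telephony: 8 kHz sampling gives a Nyquist frequency of 4 kHz,
# above the ~3.4 kHz voice band.
print(nyquist_frequency(8_000.0))   # 4000.0
```

The margin between 2B and the chosen fs exists because practical anti-aliasing filters need a transition band; an ideal brick-wall filter at exactly B is not realizable.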

Aliasing and anti-aliasing techniques

If fs < 2B, the spectral copies created by sampling overlap in the frequency domain, producing aliasing, a phenomenon analyzed in the work of Harry Nyquist and Claude Shannon and observed in instrumentation at the National Institute of Standards and Technology and MIT Lincoln Laboratory. Aliasing manifests in digital audio and in imaging systems from Kodak and Canon Inc.; in computer graphics, anti-aliasing techniques evolved alongside research at Pixar and academic groups at Stanford University. Countermeasures include analog low-pass (anti-aliasing) filters placed before the sampler, built from components by manufacturers such as Texas Instruments, and oversampling with digital filtering, as used in converters by Analog Devices and Cirrus Logic. In image processing, spatial anti-aliasing draws on research at the University of California, Berkeley and on graphics standards from the Khronos Group.
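
The overlap of spectral copies has a concrete consequence: an undersampled tone folds to a lower apparent frequency. The sketch below (illustrative values, not from the article) computes the folded frequency and verifies that the true and aliased tones produce identical samples:

```python
import numpy as np

# Frequency folding: when fs < 2f, a sampled tone at f hertz is
# indistinguishable from a tone at the alias frequency in [0, fs/2].
def alias_frequency(f_hz: float, fs_hz: float) -> float:
    """Apparent frequency of a tone at f_hz when sampled at fs_hz."""
    f_mod = f_hz % fs_hz
    return f_mod if f_mod <= fs_hz / 2 else fs_hz - f_mod

# A 7 kHz cosine sampled at 10 kHz folds to 3 kHz (= 10 - 7), and the two
# tones yield identical sample sequences.
fs = 10_000.0
f_alias = alias_frequency(7_000.0, fs)
n = np.arange(32)
x_true = np.cos(2 * np.pi * 7_000.0 * n / fs)
x_folded = np.cos(2 * np.pi * f_alias * n / fs)
print(f_alias)                        # 3000.0
print(np.allclose(x_true, x_folded))  # True
```

Because the sample sequences are identical, no post-processing can undo aliasing; the anti-aliasing filter must act before sampling.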

Practical applications and implementations

The theorem underlies analog-to-digital converters (ADCs) from vendors such as Analog Devices, Texas Instruments, and National Semiconductor, used in medical imaging modalities by Siemens and in industrial instrumentation. Digital audio standards such as the Compact Disc, and streaming platforms such as Spotify and Apple Music, rely on sampling theory for fidelity; telecommunications systems standardized by 3GPP and IEEE 802.11 apply sampling principles in modulation and baseband processing. Control systems at aerospace firms like Boeing and Lockheed Martin depend on sampling and reconstruction for flight control and for radar signal processing developed with research from NASA centers. Scientific measurement systems at CERN and the European Space Agency deploy high-rate samplers and anti-aliasing chains designed around these criteria.

Extensions and related results

Extensions include the Whittaker–Shannon interpolation formula, the Kotelnikov sampling theorem, multirate sampling and wavelet theory developed by researchers at the École Polytechnique Fédérale de Lausanne and Princeton University, and compressed sensing pioneered by Emmanuel Candès, David Donoho, and Terence Tao. Generalizations address nonuniform sampling, studied by J. R. Higgins, and frame theory, with contributions by Ole Christensen; filter bank theory and multirate signal processing developed at Bell Labs and elsewhere expand the practical framework. Information-theoretic bounds such as the Shannon–Hartley theorem relate channel capacity to bandwidth and sampling considerations, while modern research integrates sampling with machine learning efforts at Google and DeepMind.
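
The Shannon–Hartley bound mentioned above, C = B log₂(1 + SNR), is easy to evaluate. The numbers below are a made-up worked example, not measurements from the article:

```python
import math

# Shannon-Hartley capacity C = B * log2(1 + SNR): the maximum error-free
# bit rate of a bandwidth-B channel at a given linear (not dB) SNR.
def capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# A 3.1 kHz voice-band channel at 30 dB SNR (linear SNR = 10**(30/10) = 1000)
# supports roughly 31 kbit/s, the classic back-of-the-envelope modem bound.
c = capacity_bps(3_100.0, 1_000.0)
print(round(c))   # 30898
```

Note the division of labor: the sampling theorem says how fast to sample a band of width B, while Shannon–Hartley says how many bits that band can carry at a given noise level.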

Category:Signal processing