| Low-density parity-check code | |
|---|---|
| Name | Low-density parity-check code |
| Type | Error-correcting code |
| Inventor | Robert G. Gallager |
| Year | Early 1960s (rediscovered in the 1990s) |
| Field | Information theory |
| Related | Turbo code, Reed–Solomon code, Convolutional code |
Low-density parity-check (LDPC) codes are a class of linear error-correcting codes defined by sparse parity-check matrices, whose sparsity enables near-capacity performance over noisy communication channels. Developed within Claude Shannon's information-theoretic framework, these codes exploit iterative message-passing algorithms to approach the Shannon limit on channels such as the additive white Gaussian noise (AWGN) channel and the binary symmetric channel. Practical deployment across standards and systems ties them to organizations and technologies including the European Space Agency, 3GPP, DVB-S2, NASA, and modern 5G NR implementations.
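Concretely, the codewords are exactly the binary words that satisfy every row of the sparse parity-check matrix. The display below states this standard definition; the symbols H, m, and n are generic placeholders, not parameters taken from any particular standard.

```latex
% Binary LDPC code defined by a sparse m-by-n parity-check matrix H over GF(2)
\mathcal{C} = \left\{ \mathbf{c} \in \mathbb{F}_2^{\,n} : H\mathbf{c}^{\mathsf{T}} = \mathbf{0} \right\},
\qquad
R \;\ge\; 1 - \frac{m}{n}
% equality in the rate bound holds when H has full row rank
```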
Low-density parity-check codes were introduced to provide reliable transmission over noisy links as envisioned by Claude Shannon and further formalized in information theory by researchers at institutions such as the Massachusetts Institute of Technology and MIT Lincoln Laboratory. The codes are specified by a sparse binary matrix whose low row and column weights differentiate them from dense algebraic schemes such as the Reed–Solomon code and the classic Hamming code. Iterative decoding techniques for these codes draw on concepts from probabilistic inference used in algorithms developed at laboratories such as Bell Labs and influenced by work at AT&T Laboratories. Their resurgence in the 1990s followed advances by researchers affiliated with the University of Illinois Urbana–Champaign, Caltech, and Laboratoire de l'Information.
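As a rough illustration of what "low density" means in practice, the sketch below builds a tiny (2,4)-regular parity-check matrix (column weight 2, row weight 4) and verifies a word against it. The matrix is a toy chosen for readability, not one drawn from any standard or from Gallager's original constructions.

```python
# Toy illustration: a (2,4)-regular parity-check matrix H (column weight 2,
# row weight 4) and a syndrome check over GF(2).
import numpy as np

H = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
], dtype=np.uint8)

def is_codeword(H, c):
    """A word c is a codeword iff every parity check is satisfied, i.e. H c = 0 over GF(2)."""
    return not np.any(H @ c % 2)

c = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=np.uint8)   # satisfies all four checks
e = c.copy(); e[0] ^= 1                                   # single bit flip

print(is_codeword(H, c))   # True
print(is_codeword(H, e))   # False: the flipped bit violates both checks that contain it
```

Real LDPC codes rest on the same check H c = 0, only with block lengths from hundreds to tens of thousands of bits and carefully designed sparsity patterns.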
Constructions of these codes include random ensembles and algebraic constructions from graphs studied in combinatorics at Princeton University and the University of Cambridge. Representations crucial to design use bipartite graphs known as Tanner graphs, introduced by R. Michael Tanner and extensively analyzed in the graph-theory literature, including work from the Institute for Advanced Study. Degree distributions and girth properties connect to the work of mathematicians at ETH Zurich and the University of Waterloo, while protograph and quasi-cyclic constructions relate to engineering groups at Nokia, Ericsson, and Huawei. Parity-check matrices are often designed with constraints inspired by coding theorists at the California Institute of Technology and the University of Illinois to control the stopping sets and trapping sets studied in discrete mathematics departments at Harvard University and Stanford University.
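The Tanner-graph view is easy to make concrete: variable nodes correspond to columns of H, check nodes to rows, and an edge joins a pair wherever H has a 1. The sketch below, reusing the toy matrix from the previous example purely for illustration, extracts that bipartite adjacency and tabulates the degree distributions that ensemble design works with.

```python
# Sketch: the Tanner graph of a parity-check matrix as bipartite adjacency
# lists (variable nodes = columns, check nodes = rows), plus its degree
# distributions. H is any 0/1 numpy array; the toy matrix is reused here.
import numpy as np
from collections import Counter

def tanner_graph(H):
    check_to_vars = [list(np.flatnonzero(row)) for row in H]     # neighbors of each check node
    var_to_checks = [list(np.flatnonzero(col)) for col in H.T]   # neighbors of each variable node
    return var_to_checks, check_to_vars

def degree_distributions(H):
    var_to_checks, check_to_vars = tanner_graph(H)
    var_degrees = Counter(len(nbrs) for nbrs in var_to_checks)
    check_degrees = Counter(len(nbrs) for nbrs in check_to_vars)
    return dict(var_degrees), dict(check_degrees)

H = np.array([[1,1,1,1,0,0,0,0],
              [0,0,0,0,1,1,1,1],
              [1,0,1,0,1,0,1,0],
              [0,1,0,1,0,1,0,1]], dtype=np.uint8)
print(degree_distributions(H))   # ({2: 8}, {4: 4}) for this (2,4)-regular example
```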
Decoding relies on iterative message-passing methods such as the belief-propagation (sum-product) and min-sum algorithms, developed with input from researchers at INRIA, EPFL, and the University of Tokyo. Belief propagation connects to probabilistic graphical models advanced by scholars at the University of California, San Diego and Carnegie Mellon University, and is implemented in hardware by teams at Intel and Qualcomm. Density evolution and extrinsic information transfer (EXIT) chart techniques used to analyze convergence were developed in collaboration among groups at Bell Labs, the University of Southern California, and the University of Michigan. Hardware-friendly approximations and scheduling strategies were influenced by implementation studies at Stanford University and MIT Lincoln Laboratory.
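The sketch below gives the flavor of these decoders: a normalized min-sum pass (a hardware-friendly approximation of belief propagation) over the toy matrix used above, assuming the LLR convention L = log P(bit = 0) / P(bit = 1). The scaling factor, iteration limit, and channel values are illustrative choices, not parameters from any standard.

```python
# Sketch of normalized min-sum decoding on the Tanner graph of H.
# Convention: positive LLR favours bit 0, negative LLR favours bit 1.
import numpy as np

def min_sum_decode(H, llr, max_iters=20, alpha=0.8):
    m, n = H.shape
    checks = [np.flatnonzero(H[i]) for i in range(m)]        # variables in each check
    v2c = np.tile(llr, (m, 1)) * H                           # variable-to-check messages (edges only)
    for _ in range(max_iters):
        # Check-node update: sign product times minimum magnitude of the other messages,
        # scaled by alpha (the "normalized" min-sum correction).
        c2v = np.zeros_like(v2c)
        for i in range(m):
            msgs = v2c[i, checks[i]]
            for k, j in enumerate(checks[i]):
                others = np.delete(msgs, k)
                c2v[i, j] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))
        # Variable-node update and tentative hard decision.
        posterior = llr + c2v.sum(axis=0)
        hard = (posterior < 0).astype(np.uint8)              # negative LLR -> decide bit 1
        if not np.any(H @ hard % 2):                         # all parity checks satisfied
            return hard, True
        v2c = (posterior - c2v) * H                          # exclude each check's own contribution
    return hard, False

H = np.array([[1,1,1,1,0,0,0,0],
              [0,0,0,0,1,1,1,1],
              [1,0,1,0,1,0,1,0],
              [0,1,0,1,0,1,0,1]], dtype=np.uint8)
# Noisy LLRs for the codeword 1 1 1 1 0 0 0 0; the third bit is only weakly received.
llr = np.array([-2.1, -1.8, +0.4, -2.5, +1.9, +2.2, +1.5, +2.8])
print(min_sum_decode(H, llr))   # recovers 1 1 1 1 0 0 0 0 and reports success
```

On this input the weakly received third bit is corrected after a single iteration; production decoders layer scheduling, early-termination logic, and fixed-point arithmetic on top of the same message-passing core.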
The asymptotic performance approaches the theoretical limits established by Claude Shannon and studied in depth in the IEEE Transactions on Information Theory and at conferences such as the International Symposium on Information Theory. Finite-length scaling laws link to finite-blocklength results from researchers at Princeton University and ETH Zurich, and the role of code ensembles and degree profiles was characterized by teams at Université Paris-Sud and Tel Aviv University. Comparisons with turbo codes connect to the work of their inventors, Claude Berrou and Alain Glavieux of ENST Bretagne, working with France Télécom, while union bounds and error-floor analyses were refined by groups at the University of Arizona and the Georgia Institute of Technology.
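The reference points in question are straightforward to compute. The sketch below evaluates two standard benchmarks that LDPC performance curves are plotted against: the binary symmetric channel capacity C = 1 - H2(p) and the minimum Eb/N0 implied by the unconstrained-input AWGN capacity, Eb/N0 >= (2^(2R) - 1) / (2R). The crossover probability and code rate used are illustrative values only.

```python
# Back-of-the-envelope Shannon-limit figures used as benchmarks for LDPC codes.
import math

def binary_entropy(p):
    """H2(p) in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

def awgn_shannon_limit_db(rate):
    """Minimum Eb/N0 (dB) for reliable rate-R transmission over the real AWGN channel."""
    ebn0 = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * math.log10(ebn0)

print(bsc_capacity(0.11))            # ~0.50: a rate-1/2 code can in principle tolerate p ~ 0.11
print(awgn_shannon_limit_db(0.5))    # 0.0 dB: the Shannon limit for rate 1/2 on the AWGN channel
```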
Applications span satellite communications developed by the European Space Agency and Intelsat, wireless standards defined by 3GPP and implemented by companies such as Qualcomm and Samsung Electronics, and deep-space missions supported by NASA and the Jet Propulsion Laboratory. Storage systems integrate these codes into solid-state devices engineered by Seagate Technology and Western Digital, and optical transport networks employ them in designs standardized by the ITU-T and the Optical Internetworking Forum. Research deployments in quantum error correction and network coding link to laboratories at MIT, Caltech, and QuTech.
The codes were first proposed by Robert G. Gallager in his doctoral research at the Massachusetts Institute of Technology in the early 1960s, but early reception among contemporary engineers at Bell Labs and in publishing venues such as the IEEE was muted in favor of algebraic schemes such as BCH and Reed–Solomon codes. Renewed interest during the 1990s followed breakthroughs by researchers such as David J. C. MacKay and Radford M. Neal, together with groups associated with MIT, CNRS, and the University of Cambridge, who demonstrated the codes' practical viability with iterative decoders and powerful hardware implementations by companies such as Texas Instruments and Broadcom. Subsequent standardization and commercial adoption were driven by collaborative efforts across 3GPP, the DVB Project, ETSI, and aerospace agencies including ESA and NASA.
Category:Coding theory
Category:Error detection and correction