| Low-density parity-check code | |
|---|---|
| Name | Low-density parity-check code |
| Type | Linear code |
| Block length | Variable |
| Message length | Variable |
| Rate | Variable |
| Notation | [n, k] code |
| Distance | Good asymptotic distance |
| Decoder | Iterative decoding |
| Discovered | Robert G. Gallager |
| Year | 1960 |
A low-density parity-check (LDPC) code is an error-correcting code defined by a sparse parity-check matrix, enabling highly efficient iterative decoding algorithms. These codes are celebrated for approaching the Shannon limit, the theoretical maximum rate for reliable communication over a noisy channel, making them foundational in modern digital communications. Their structure and performance bridge concepts from information theory, coding theory, and graph theory.
An LDPC code is a linear block code whose parity-check matrix contains very few 1s relative to its dimensions. This sparsity is formalized by requiring the number of 1s per row and per column to remain small and bounded as the block length grows, so that the total number of 1s grows only linearly with the block length. The code can be graphically represented using a Tanner graph, a bipartite graph connecting variable nodes (corresponding to codeword bits) to check nodes (corresponding to parity constraints). Key properties include low decoding complexity, a direct consequence of the sparse structure, and the ability to achieve performance extremely close to channel capacity under iterative decoding. The design often involves optimizing the degree distribution of the nodes in the Tanner graph to approach the theoretical limits established by Claude Shannon.
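As a concrete illustration, the following Python sketch uses a hypothetical toy parity-check matrix (far shorter than any practical LDPC code, but with the same fixed per-row and per-column weights) to check a codeword via its syndrome and to enumerate the edges of the corresponding Tanner graph:

```python
import numpy as np

# Toy (2,3)-regular parity-check matrix: every column has weight 2 and
# every row weight 3. A practical LDPC code keeps such small weights
# while the block length runs into the thousands.
H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])

def syndrome(H, c):
    """H c^T mod 2; the all-zero vector means c satisfies every check."""
    return H @ c % 2

def tanner_edges(H):
    """Tanner-graph edges: one (check node, variable node) pair per 1 in H."""
    return list(zip(*np.nonzero(H)))

c = np.array([1, 1, 0, 0, 1, 1])      # a codeword of this toy code
print(syndrome(H, c))                  # [0 0 0 0]: all checks satisfied
print(len(tanner_edges(H)))            # 12 edges, one per 1 in H
```

The edge list is exactly the sparse structure that message-passing decoders exploit: the work per iteration is proportional to the number of edges, which stays linear in the block length.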
Early constructions were largely random, as proposed by Robert G. Gallager in his doctoral thesis at the Massachusetts Institute of Technology. Modern systematic approaches include algebraic constructions and those based on finite geometries, such as codes derived from Euclidean geometry or projective geometry. A highly influential method is the use of protographs, small template graphs that are copied and permuted to create larger codes, which simplifies design and analysis. Irregular LDPC constructions allow for varying node degrees, optimized using tools such as density evolution to surpass the performance of regular constructions. Many standards, including those by the Consultative Committee for Space Data Systems for deep-space communication and the IEEE 802.11 family for Wi-Fi, specify carefully constructed LDPC codes.
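The protograph "copy and permute" step can be sketched in a few lines. This is a minimal illustration with a hypothetical 2 × 4 base matrix and random circulant-free liftings; real protographs and lifting permutations are chosen far more carefully (e.g. to avoid short cycles):

```python
import numpy as np

rng = np.random.default_rng(0)

def lift(B, N):
    """Expand protograph base matrix B by a factor N: each 1 becomes a
    random N x N permutation matrix, each 0 an N x N zero block, so the
    lifted matrix inherits the protograph's node degrees."""
    m, n = B.shape
    H = np.zeros((m * N, n * N), dtype=int)
    for j in range(m):
        for i in range(n):
            if B[j, i]:
                # Row-permuted identity = a random permutation matrix.
                H[j*N:(j+1)*N, i*N:(i+1)*N] = np.eye(N, dtype=int)[rng.permutation(N)]
    return H

B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])   # hypothetical 2 x 4 base graph
H = lift(B, 8)                 # 16 x 32 lifted parity-check matrix
# Row and column weights of H match those of B: all rows have weight 3,
# columns have weight 1 or 2, exactly as in the protograph.
```

Because the lifted matrix preserves the base graph's degree profile, analysis (such as density evolution) performed on the small protograph carries over to the much larger code.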
Decoding is performed using efficient message-passing algorithms on the Tanner graph. The foundational algorithm is the belief propagation algorithm, also known as the sum-product algorithm, which passes probabilistic messages between variable and check nodes. A common simplification for binary symmetric channels is the bit-flipping algorithm, a hard-decision method. For practical implementations, the min-sum algorithm offers a lower-complexity approximation to belief propagation, replacing the costly nonlinear check-node computations with a simple minimum operation. These algorithms are iterative, allowing performance to improve with each cycle until a valid codeword is found or a maximum number of iterations is reached. The efficiency of these decoders is a direct consequence of the sparsity enforced by the parity-check matrix.
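The hard-decision bit-flipping idea can be sketched directly. This is a toy illustration, not a production decoder: it uses a hypothetical small (2,3)-regular parity-check matrix and the simplest flipping rule (flip every bit involved in the most failed checks):

```python
import numpy as np

# Toy (2,3)-regular code as a stand-in for a real LDPC code.
H = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])

def bit_flip_decode(H, r, max_iters=50):
    """Hard-decision bit-flipping: repeatedly flip the bits that appear
    in the largest number of unsatisfied parity checks, stopping when
    the syndrome is zero or the iteration budget runs out."""
    c = r.copy()
    for _ in range(max_iters):
        s = H @ c % 2                  # syndrome: which checks fail
        if not s.any():
            break                      # valid codeword reached
        unsat = s @ H                  # per-bit count of failed checks
        c[unsat == unsat.max()] ^= 1   # flip the most-suspect bits
    return c

sent = np.array([1, 1, 0, 0, 1, 1])              # transmitted codeword
received = sent ^ np.array([0, 0, 1, 0, 0, 0])   # channel flips bit 2
print(bit_flip_decode(H, received))              # recovers [1 1 0 0 1 1]
```

In this toy code every bit participates in two checks and no two bits share more than one check, so any single flipped bit is the unique bit touching two failed checks and is corrected in one iteration; soft-decision belief propagation follows the same message-passing pattern but exchanges probabilities rather than hard counts.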
When combined with advanced modulation like bit-interleaved coded modulation, these codes deliver performance within a fraction of a decibel of the Shannon limit on channels such as the additive white Gaussian noise channel. This near-optimal performance has led to their widespread adoption in numerous international standards and commercial systems. They are specified in digital video broadcasting standards like DVB-S2 for satellite television, in data storage formats for hard disk drives, and in modern wireless standards including 5G NR developed by the 3rd Generation Partnership Project. Their application extends to flash memory systems and deep-space communication protocols managed by agencies like the National Aeronautics and Space Administration.
The codes were invented by Robert G. Gallager in his 1960 doctoral dissertation at the Massachusetts Institute of Technology, but they were largely ignored for decades due to the computational limitations of the era. They were independently rediscovered in the 1990s by David J. C. MacKay and Radford M. Neal, who demonstrated their near-capacity performance with practical iterative decoders. Concurrently, work by Michael Luby and others on Tornado codes, a class of erasure codes, reinforced the power of sparse graph-based codes. This revival catalyzed intensive research, linking LDPC codes to related iteratively decoded codes such as turbo codes and leading to their incorporation into major global telecommunications standards.
Category:Error detection and correction Category:Coding theory Category:Information theory