LLMpedia: the first transparent, open encyclopedia generated by LLMs

Recursive least squares filter

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Name: Recursive least squares filter
Class: Adaptive filter
Data structure: Array data structure
Average time: O(n²)
Space: O(n²)

In signal processing and system identification, the recursive least squares (RLS) filter is a fundamental adaptive filter algorithm used to recursively find the coefficients that minimize a weighted linear least squares cost function of the input signals. Unlike its batch counterpart, the least squares method, it efficiently updates the estimate with each new data point, making it crucial for real-time applications. The algorithm is characterized by its fast convergence and is a cornerstone of adaptive control, communications engineering, and time series analysis.

Overview

The core principle is to update the parameter estimates for a linear system as new observations arrive, without reprocessing the entire dataset. The algorithm maintains a recursively updated estimate of the inverse of the (weighted) input correlation matrix, often denoted *P*. This approach was significantly advanced by the work of Plackett and others in the mid-20th century, building upon foundational statistical concepts from Carl Friedrich Gauss. Key advantages include its rapid tracking of time-varying systems compared to simpler algorithms like the least mean squares filter, though it requires greater computational resources. The filter is extensively documented in texts by Simon Haykin and forms a basis for many algorithms in Kalman filter theory.
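The single-step update described above can be sketched as follows; a minimal illustration of the standard exponentially weighted RLS recursion (variable names and the forgetting-factor default are illustrative choices, not from the article):

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One recursive least squares update.

    w   : current coefficient estimate, shape (n,)
    P   : current inverse-correlation matrix estimate, shape (n, n)
    x   : new input (regressor) vector, shape (n,)
    d   : new desired (reference) sample
    lam : forgetting factor, 0 < lam <= 1
    """
    # A priori error: how wrong the current coefficients are on the new sample.
    e = d - w @ x
    # Gain vector via the matrix inversion lemma (no explicit matrix inverse).
    Px = P @ x
    k = Px / (lam + x @ Px)
    # Update the coefficients and the inverse-correlation matrix in place of
    # re-solving the full least squares problem.
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    return w, P
```

Note that each call costs O(n²) operations and touches only the new sample, which is precisely what makes the method suitable for streaming data.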

Algorithm derivation

The derivation begins with the standard weighted least squares cost function. The goal is to minimize a sum of squared errors in which a forgetting factor, λ, is often introduced to weight recent data more heavily. Applying the Woodbury matrix identity, also known as the matrix inversion lemma, yields a recursive update for the gain vector and the inverse correlation matrix. The lemma allows *P* to be updated efficiently without a direct matrix inversion, which is computationally expensive. The update equations involve the innovation (the a priori estimation error) and a gain vector directly analogous to the Kalman gain, drawing a close parallel to the Kalman filter. The mathematical rigor is supported by principles from linear algebra and estimation theory.
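In the notation above (forgetting factor λ, input vector x(n), desired response d(n), inverse correlation matrix P), the recursion is commonly written as follows; this is a standard formulation consistent with the texts cited in the article:

```latex
J(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\bigl(d(i) - \mathbf{w}^{T}(n)\,\mathbf{x}(i)\bigr)^{2}
\qquad\text{(cost function)}

\mathbf{k}(n) = \frac{\mathbf{P}(n-1)\,\mathbf{x}(n)}
                     {\lambda + \mathbf{x}^{T}(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}
\qquad\text{(gain vector)}

e(n) = d(n) - \mathbf{w}^{T}(n-1)\,\mathbf{x}(n)
\qquad\text{(innovation / a priori error)}

\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\,e(n)
\qquad\text{(coefficient update)}

\mathbf{P}(n) = \lambda^{-1}\bigl(\mathbf{P}(n-1)
              - \mathbf{k}(n)\,\mathbf{x}^{T}(n)\,\mathbf{P}(n-1)\bigr)
\qquad\text{(matrix inversion lemma)}
```

The last equation is where the Woodbury identity enters: it updates the inverse directly, avoiding the O(n³) cost of inverting the correlation matrix at every step.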

Variants and extensions

Several important variants have been developed to address specific limitations. The Exponentially Weighted RLS uses a constant forgetting factor to discount older data, ideal for tracking non-stationary processes. The Sliding Window RLS maintains a fixed window of the most recent data points, discarding older ones completely. For improved numerical stability, the QR-RLS algorithm uses QR decomposition and was notably advanced by researchers at Stanford University. The Fast Transversal Filter (FTF) and Lattice RLS are computationally efficient structures derived from the core algorithm. Extensions to nonlinear systems include the Kernel RLS, which applies the kernel trick from machine learning.
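As an illustrative rule of thumb (not stated in the article), a constant forgetting factor λ gives the exponentially weighted variant an effective memory of roughly 1/(1 − λ) samples, since the geometric weights λ^k sum to that value. This is how λ trades tracking speed against estimation variance:

```python
def effective_window(lam):
    """Approximate effective memory (in samples) of exponential forgetting.

    The weights lam**k applied to past samples sum to 1 / (1 - lam), so
    the filter behaves roughly like a sliding window of that length.
    """
    assert 0 < lam < 1, "requires a forgetting factor strictly between 0 and 1"
    return 1.0 / (1.0 - lam)
```

For example, λ = 0.99 corresponds to a memory of about 100 samples, while the sliding-window variant makes this cutoff exact by discarding old data outright.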

Applications

This filter is pivotal in modern digital signal processing. In communications engineering, it is used for channel equalization in systems like the Global System for Mobile Communications and for echo cancellation in teleconferencing equipment. Within adaptive control, it facilitates real-time parameter estimation for controllers in aerospace systems, such as those developed by NASA. It is also fundamental in beamforming for sensor arrays and radar systems, including those used by the United States Department of Defense. Furthermore, it serves as a benchmark algorithm in financial engineering for time series prediction and in biomedical engineering for analyzing electroencephalography signals.

Comparison with other methods

Compared to the least mean squares filter, it offers significantly faster convergence and lower misadjustment error, but at the cost of O(n²) computational complexity versus LMS's O(n). The Kalman filter is a more general state estimator but reduces to a similar form for certain parameter estimation problems; RLS is often viewed as a special case. The Affine projection algorithm provides a compromise between convergence speed and complexity. In terms of robustness, the H-infinity methods in control theory can offer better performance in the presence of worst-case disturbances, whereas RLS is optimal for minimizing squared error. The choice between these methods, including those developed at institutions like the Massachusetts Institute of Technology, depends on the specific requirements for convergence, stability, and computational load in applications ranging from seismology to wireless networking.
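The convergence-versus-complexity trade-off can be seen on a toy noiseless system-identification task; a minimal sketch in which the problem size, step size, and sample count are illustrative assumptions:

```python
import numpy as np

# Both filters estimate true_w from streaming input/output pairs.
rng = np.random.default_rng(1)
n, steps = 4, 50
true_w = rng.standard_normal(n)

w_rls, P = np.zeros(n), np.eye(n) * 1e6   # RLS keeps an n-by-n matrix: O(n^2) per step
w_lms, mu = np.zeros(n), 0.05             # LMS keeps only the weights:  O(n) per step

for _ in range(steps):
    x = rng.standard_normal(n)
    d = true_w @ x
    # RLS update (forgetting factor 1), gain via the matrix inversion lemma.
    Px = P @ x
    k = Px / (1.0 + x @ Px)
    w_rls = w_rls + k * (d - w_rls @ x)
    P = P - np.outer(k, Px)
    # LMS update: a single stochastic-gradient step.
    w_lms = w_lms + mu * (d - w_lms @ x) * x

err_rls = np.linalg.norm(w_rls - true_w)
err_lms = np.linalg.norm(w_lms - true_w)
```

After the same 50 samples, the RLS estimate is essentially exact while the LMS estimate is still converging, illustrating the faster convergence bought at quadratic per-step cost.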

Category:Digital signal processing Category:Estimation theory Category:Algorithms