
QR algorithm

Name: QR algorithm
Introduced: 1961
Inventors: John G. F. Francis; Vera Kublanovskaya
Field: Numerical linear algebra
Related: QR decomposition, Hessenberg matrix, eigenvalue problem

The QR algorithm is an iterative method for computing the eigenvalues and eigenvectors of a matrix. Developed independently by John G. F. Francis and Vera Kublanovskaya in the early 1960s, it quickly displaced earlier eigenvalue methods and remains the standard approach for dense problems. The method builds on orthogonal transformations and matrix factorizations to produce numerically stable results, and it is widely employed in scientific computing, engineering, and applied mathematics.

Overview

The algorithm first reduces a general matrix to a near-triangular form, typically upper Hessenberg, using orthogonal similarity transformations built from Householder reflectors or Givens rotations; it then iterates QR factorizations that drive the matrix toward (quasi-)triangular form, revealing the eigenvalues on the diagonal. Early implementations shaped the eigenvalue routines of software libraries such as EISPACK, developed at Argonne National Laboratory, and its successor LAPACK. Variants incorporate shifts, following ideas of Francis and Wilkinson, to accelerate convergence for real and complex spectra.
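
As a concrete illustration, the following is a minimal sketch of the unshifted iteration in Python with NumPy; the function name qr_iteration and the test matrix are illustrative choices, not part of any standard library.

    import numpy as np

    def qr_iteration(A, iterations=200):
        """Unshifted QR iteration: factor A_k = Q_k R_k, then form
        A_{k+1} = R_k Q_k, which is orthogonally similar to A_k."""
        Ak = np.array(A, dtype=float)
        for _ in range(iterations):
            Q, R = np.linalg.qr(Ak)   # Householder-based QR factorization
            Ak = R @ Q                # similarity step: Q_k^T A_k Q_k
        return Ak                     # approaches (quasi-)triangular form

    # For a symmetric matrix with well-separated eigenvalues the diagonal
    # of the iterate approximates the spectrum.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(np.sort(np.diag(qr_iteration(A))))   # ~ [1.27, 3.00, 4.73]
    print(np.linalg.eigvalsh(A))               # reference values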

Mathematical Background

At its foundation is the factorization of a square matrix A into an orthogonal matrix Q and an upper triangular matrix R, computed via Gram–Schmidt orthogonalization, Householder reflectors, or Givens rotations. The algorithm exploits the similarity transformation A_{k+1} = R_k Q_k, which preserves eigenvalues, a consequence of the spectral theory associated with Hilbert, von Neumann, and Schur. Reducing the matrix to Hessenberg form (tridiagonal in the symmetric case) before iterating lowers the arithmetic cost per step and connects the method to the Schur decomposition. Convergence theory rests on estimates by Wilkinson and on later asymptotic analyses.
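
Written out, the eigenvalue-preservation argument is a one-line computation (standard textbook material, not specific to any one source):

\[
A_k = Q_k R_k \;\Longrightarrow\; A_{k+1} = R_k Q_k = Q_k^{\mathsf T}(Q_k R_k)\,Q_k = Q_k^{\mathsf T} A_k Q_k,
\]

so every iterate is orthogonally similar to A and the spectrum is invariant throughout the iteration.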

Algorithm Variants and Implementation

Practical implementations use several variants: the simple unshifted iteration, single-shift and double-shift strategies, and the implicitly shifted form due to Francis. Real arithmetic handles complex conjugate pairs via the double-shift technique popularized in EISPACK and extended in LAPACK by contributors at Oak Ridge National Laboratory and Argonne National Laboratory. Bulge-chasing implementations manage the fill-in introduced by implicit shifts, an approach with roots in Wilkinson's work at the National Physical Laboratory (NPL). For symmetric or Hermitian matrices, reduction to tridiagonal form followed by specialized routines yields efficient eigenpair computation, a strategy long used in large simulation codes at national laboratories and aerospace agencies. Parallel and blocked implementations target modern architectures from Intel and NVIDIA, with research contributions from Lawrence Livermore National Laboratory and the University of Illinois Urbana–Champaign; a sketch of the shift-and-deflate pattern follows below.
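
Here is a hedged sketch of that pattern for real symmetric matrices: a single-shift QR loop with the Wilkinson shift and deflation. The function names are illustrative, and production codes (e.g., LAPACK) work on tridiagonal or Hessenberg form with bulge chasing rather than full QR factorizations.

    import numpy as np

    def wilkinson_shift(T):
        """Eigenvalue of the trailing 2x2 block closest to T[-1, -1];
        yields rapid local convergence for symmetric matrices."""
        a, b, c = T[-2, -2], T[-2, -1], T[-1, -1]
        d = (a - c) / 2.0
        s = np.sign(d) if d != 0 else 1.0
        return c - s * b**2 / (abs(d) + np.hypot(d, b))

    def shifted_qr_eigvals(A, tol=1e-12, max_sweeps=500):
        """Single-shift QR with deflation on a full symmetric matrix."""
        T = np.array(A, dtype=float)
        n = T.shape[0]
        eigs = []
        while n > 1:
            for _ in range(max_sweeps):
                # Deflate once the last off-diagonal entry is negligible.
                if abs(T[n-1, n-2]) <= tol * (abs(T[n-1, n-1]) + abs(T[n-2, n-2])):
                    break
                mu = wilkinson_shift(T[:n, :n])
                Q, R = np.linalg.qr(T[:n, :n] - mu * np.eye(n))
                T[:n, :n] = R @ Q + mu * np.eye(n)   # shifted similarity step
            eigs.append(T[n-1, n-1])   # converged eigenvalue; shrink problem
            n -= 1
        eigs.append(T[0, 0])
        return np.sort(np.array(eigs))

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    print(shifted_qr_eigvals(A))        # matches the reference below
    print(np.linalg.eigvalsh(A))

With the Wilkinson shift, each eigenvalue typically deflates after only a handful of sweeps, which is why shifting is essential in practice.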

Convergence and Complexity

Convergence behavior depends on spectral separation and conditioning; analyses by Wilkinson, and later expositions by Trefethen and Bau, quantify convergence rates and sensitivity to perturbation. Single-shift iterations can stagnate on clustered eigenvalues, which motivated double-shift and multi-shift strategies. For a dense n×n matrix the total cost is O(n^3) arithmetic: the Hessenberg reduction is a one-time O(n^3) step, after which each implicitly shifted sweep costs O(n^2) and O(n) sweeps typically suffice, so preprocessing and implicit shifting mainly reduce the constant factors. Backward stability, central to the method's acceptance in numerical libraries, was formalized in the error-analysis tradition begun by Wilkinson and continued by researchers at the University of Manchester and ETH Zurich.
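
The standard rate statement for the unshifted iteration (as presented in textbooks such as Trefethen and Bau) can be summarized in one display:

\[
\bigl|a^{(k)}_{i+1,\,i}\bigr| = O\!\left(\left|\frac{\lambda_{i+1}}{\lambda_i}\right|^{k}\right), \qquad |\lambda_1| > |\lambda_2| > \cdots > |\lambda_n|,
\]

so convergence is linear with ratio set by eigenvalue separation. A shift \mu close to \lambda_n replaces each \lambda_i by \lambda_i - \mu, driving the trailing ratio |\lambda_n - \mu| / |\lambda_{n-1} - \mu| toward zero; this is the mechanism behind the rapid local convergence of the Wilkinson shift.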

Practical Applications and Numerical Issues

The method is standard for eigenvalue problems arising in aerospace and structural engineering at firms such as Boeing and Airbus, in vibrational analysis, and in quantum mechanics computations at laboratories such as CERN and Lawrence Berkeley National Laboratory. Numerical issues include sensitivity to round-off, loss of orthogonality in computed eigenvectors, and slow convergence for nearly defective matrices; mitigation strategies involve careful shifting, balancing transformations of the kind introduced by Parlett and Reinsch, and refinement via inverse iteration or Rayleigh quotient iteration. Implementations in scientific computing environments such as MATLAB, SciPy, and Octave trace their lineage to the EISPACK and LAPACK codes distributed through Netlib and refined through collaborative efforts across the SIAM community.
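
In practice one calls library routines rather than hand-rolling the iteration. A short SciPy example (assuming NumPy and SciPy are available) shows the two stages: Hessenberg reduction, then the implicitly shifted QR algorithm, which LAPACK's hseqr routines run inside scipy.linalg.schur.

    import numpy as np
    from scipy.linalg import hessenberg, schur

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))

    # Stage 1: orthogonal reduction to upper Hessenberg form, A = Q H Q^T.
    H, Q = hessenberg(A, calc_q=True)
    print(np.allclose(Q @ H @ Q.T, A))          # True

    # Stage 2: real Schur form A = Z T Z^T via the implicitly shifted
    # QR algorithm; 1x1 diagonal blocks of T are real eigenvalues and
    # 2x2 blocks encode complex conjugate pairs.
    T, Z = schur(A, output='real')
    print(np.allclose(Z @ T @ Z.T, A))          # True
    print(np.sort_complex(np.linalg.eigvals(A)))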

Category:Numerical linear algebra