LLMpedia: The first transparent, open encyclopedia generated by LLMs

Krylov subspace

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Ivo Babuška (Hop 5)
Krylov subspace
Name: Krylov subspace
Field: Numerical linear algebra
Introduced: 1931
Inventor: Aleksei Krylov

A Krylov subspace is a vector subspace generated by successive applications of a linear operator to a vector; the resulting nested sequence of subspaces is used to approximate solutions of large linear systems and eigenvalue problems. Originating in a 1931 paper by Aleksei Krylov on computing characteristic polynomials, the concept was developed by numerical analysts over the following decades into the foundation of most modern iterative methods in computational science. Krylov techniques link classical spectral theory with practical implementations in libraries such as ARPACK, PETSc, and Trilinos.

Definition and basic properties

The Krylov subspace of order m generated by a linear operator A and a vector b is K_m(A, b) = span{b, Ab, A^2 b, ..., A^{m-1} b}. This ties Krylov methods directly to polynomial approximation theory: every element of K_m(A, b) has the form p(A)b for a polynomial p of degree less than m. The basic algebraic properties follow immediately from the definition: the subspaces are nested, K_m ⊆ K_{m+1}; applying A maps K_m into K_{m+1}; and the dimension grows by one at each step until it stabilizes. The maximal Krylov dimension equals the degree of the minimal polynomial of A with respect to b, at which point the subspace becomes invariant under A, a relation familiar from companion matrix constructions and classical spectral decompositions.
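The stabilization of the Krylov dimension at the degree of the minimal polynomial can be observed numerically. The following sketch (function name and example matrix are illustrative, not from any particular library) builds the raw Krylov matrix with NumPy and checks its rank:

```python
import numpy as np

def krylov_basis(A, b, m):
    """Return the n x m matrix whose columns are b, Ab, ..., A^{m-1} b."""
    cols = [b]
    for _ in range(m - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

# A 4x4 diagonal matrix and a vector b touching only the eigenvalues 2 and 3,
# so the minimal polynomial of A with respect to b is (x - 2)(x - 3), degree 2.
A = np.diag([2.0, 2.0, 3.0, 3.0])
b = np.array([1.0, 0.0, 1.0, 0.0])

for m in range(1, 5):
    rank = np.linalg.matrix_rank(krylov_basis(A, b, m))
    print(f"order m = {m}: Krylov dimension = {rank}")
```

Because the minimal polynomial with respect to b has degree 2, the rank stabilizes at 2 from m = 2 onward, even though A is 4 x 4.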

Krylov subspace methods

Krylov subspace methods include iterative solvers and eigenvalue algorithms such as the Arnoldi iteration, the Lanczos algorithm, the conjugate gradient method of Hestenes and Stiefel, the generalized minimal residual method (GMRES) of Saad and Schultz, and the stabilized biconjugate gradient method (BiCGSTAB) of van der Vorst. These methods were advanced by researchers such as Yousef Saad, Gene Golub, William Kahan, and Cleve Moler, and are implemented in widely used software packages, including PETSc from Argonne National Laboratory and Trilinos from Sandia National Laboratories. Implementations exploit sparse matrix representations, since each iteration requires only matrix-vector products rather than matrix factorizations.
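As a concrete illustration of the common core of these methods, here is a minimal NumPy sketch of the Arnoldi iteration (the function name and breakdown tolerance are choices for this example, not a library API). It builds an orthonormal basis Q of the Krylov subspace and the upper Hessenberg matrix H satisfying the Arnoldi relation A Q_m = Q_{m+1} H:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi iteration with modified Gram-Schmidt.

    Returns Q (n x (m+1), orthonormal columns) and H ((m+1) x m,
    upper Hessenberg) such that A @ Q[:, :m] == Q @ H.
    """
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):               # orthogonalize against basis
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # breakdown: subspace is A-invariant
            return Q[:, :j + 1], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
Q, H = arnoldi(A, b, 4)
print(np.allclose(A @ Q[:, :4], Q @ H))      # Arnoldi relation holds
```

GMRES, for instance, runs exactly this iteration and then solves a small least-squares problem with H at each step.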

Construction and numerical stability

Construction uses orthonormalization processes such as Gram–Schmidt and, in practice, modified Gram–Schmidt, whose rounding-error behavior is analyzed in the tradition of James H. Wilkinson's backward error analysis and William Kahan's work on floating-point arithmetic. In finite precision the computed basis vectors gradually lose orthogonality, so practical codes employ selective or full reorthogonalization. Breakdown handling and look-ahead techniques for two-sided (Lanczos-type) methods have been investigated by researchers including Zdeněk Strakoš and Yousef Saad, while preconditioning, which transforms the system so that its spectrum is better clustered, is often the decisive factor in performance on large applications.
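The loss of orthogonality that motivates these strategies is easy to demonstrate. The sketch below (illustrative code, not from any library) compares classical Gram–Schmidt, modified Gram–Schmidt, and modified Gram–Schmidt with one reorthogonalization pass on a Hilbert matrix, whose columns are nearly linearly dependent:

```python
import numpy as np

def cgs(X):
    """Classical Gram-Schmidt: orthogonalize the columns of X."""
    Q = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        v = X[:, j] - Q[:, :j] @ (Q[:, :j].T @ X[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def mgs(X, reorth=False):
    """Modified Gram-Schmidt, optionally with one reorthogonalization pass."""
    Q = X.astype(float).copy()
    for j in range(Q.shape[1]):
        for _ in range(2 if reorth else 1):   # "twice is enough" reorthogonalization
            for i in range(j):
                Q[:, j] -= (Q[:, i] @ Q[:, j]) * Q[:, i]
        Q[:, j] /= np.linalg.norm(Q[:, j])
    return Q

# Hilbert matrix: severely ill-conditioned, so orthogonality loss is visible.
n = 8
X = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

for name, Q in [("CGS", cgs(X)), ("MGS", mgs(X)), ("MGS+reorth", mgs(X, reorth=True))]:
    err = np.linalg.norm(Q.T @ Q - np.eye(n))
    print(f"{name:10s} orthogonality error {err:.1e}")
```

Classical Gram–Schmidt degrades dramatically on this matrix, modified Gram–Schmidt degrades in proportion to the condition number, and the reorthogonalized variant stays orthogonal to roughly machine precision.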

Applications and use cases

Krylov subspace approaches appear across computational physics, computational chemistry, and engineering, supporting large-scale simulation at laboratories such as CERN, NASA, Los Alamos National Laboratory, and Sandia National Laboratories. They underpin electronic structure solvers in quantum chemistry and materials science, enable implicit time integration for stiff differential equations, and drive large eigenvalue computations in network analysis and machine learning pipelines. Iterative Krylov solvers are likewise standard in engineering design, weather and climate modeling, and quantitative finance, in any setting where the systems are too large for direct factorization.

Theoretical results and convergence

Convergence theory builds on polynomial approximation and spectral distribution results: error bounds follow from min-max polynomial problems over the spectrum, with Chebyshev polynomials, studied by Pafnuty Chebyshev, yielding the classical rates, and Ritz value analysis connecting to the Rayleigh–Ritz procedure named for Lord Rayleigh (John William Strutt) and Walther Ritz. Bounds and asymptotic behavior are treated in the textbooks of Trefethen and Bau, Greenbaum, and Saad, and integrate with matrix perturbation theory in the tradition of Tosio Kato. Superlinear convergence and finite termination reflect how Krylov methods adapt to the spectrum: in exact arithmetic the iteration terminates once the degree of the minimal polynomial is reached, and convergence accelerates as extreme eigenvalues are resolved.
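The best-known quantitative result in this vein is the conjugate gradient error bound for a symmetric positive definite matrix A with 2-norm condition number \(\kappa\), obtained from Chebyshev approximation on an interval containing the spectrum:

```latex
\| x - x_k \|_A \;\le\; 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{k} \| x - x_0 \|_A
```

The bound implies that the iteration count needed to reach a fixed tolerance grows like \(\sqrt{\kappa}\), which is why preconditioners that reduce the effective condition number are so valuable in practice.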

Variants and extensions

Extensions include block Krylov methods for multiple right-hand sides, rational Krylov spaces introduced by Axel Ruhe, restarted schemes such as GMRES(m) that bound memory cost, two-sided Lanczos, and harmonic Ritz variants aimed at interior eigenvalues. Rational and extended Krylov approaches leverage shift-and-invert strategies, at the cost of a factorization or inner solve per shift, and are widely used in eigenvalue software and model order reduction. Tensorized Krylov methods extend the framework to multilinear algebra, and hybrid methods combine Krylov ideas with multigrid preconditioning.
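The shift-and-invert idea behind these rational methods can be illustrated in a few lines: working with (A − σI)^{-1} turns the eigenvalue of A nearest the shift σ into the dominant one. The sketch below is deliberately simplified, using a dense inverse and plain power iteration instead of a sparse factorization and a full rational Arnoldi basis:

```python
import numpy as np

def shift_invert_eig(A, sigma, k=30, seed=0):
    """Estimate the eigenvalue of A nearest the shift sigma by running
    power iteration on (A - sigma I)^{-1} and returning a Rayleigh quotient.
    Rational Krylov methods refine this idea with a full basis."""
    n = A.shape[0]
    # Sketch only: production codes factorize (A - sigma I) rather than invert it.
    M = np.linalg.inv(A - sigma * np.eye(n))
    v = np.random.default_rng(seed).standard_normal(n)
    for _ in range(k):
        v = M @ v
        v /= np.linalg.norm(v)
    return v @ A @ v                      # Rayleigh quotient of the converged vector

A = np.diag([1.0, 4.0, 10.0])
# The eigenvalue of A nearest the shift 3.5 is 4.0.
print(shift_invert_eig(A, sigma=3.5))
```

Under the shift σ = 3.5 the transformed eigenvalues are −0.4, 2, and about 0.154, so the iteration converges rapidly to the eigenvector for the eigenvalue 4.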

Category:Numerical linear algebra