| Newton's method | |
|---|---|
| *Illustration: Cham at French Wikipedia, public domain* | |
| Name | Newton's method |
| Also known as | Newton–Raphson method |
| Type | Numerical root-finding |
| Field | Numerical analysis |
| Introduced | 17th century |
| Inventor | Isaac Newton; Joseph Raphson |
Newton's method is an iterative numerical technique for finding zeros of real- or complex-valued functions. It generates a sequence of approximations that, under favorable conditions, converges rapidly to a simple root. The method plays a central role in computational mathematics, with connections to optimization, differential equations, and scientific computing.
Newton's method approximates a root of a function using local linearization and successive correction. Starting from an initial guess, the algorithm forms a sequence using the function and its derivative to produce increasingly accurate estimates. The method is widely used in contexts ranging from algebraic equation solving in Royal Society-era mathematics to modern computational tasks in institutions such as Los Alamos National Laboratory and CERN. Prominent mathematicians and scientists associated with its development and dissemination include Isaac Newton, Joseph Raphson, Leonhard Euler, Augustin-Louis Cauchy, and Carl Friedrich Gauss.
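In the standard notation, given a differentiable function $f$ and an initial guess $x_0$, each step linearizes $f$ at the current iterate and takes the root of that tangent line as the next estimate:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

Applied to $f(x) = x^2 - 2$, for example, the update reduces to $x_{n+1} = \tfrac{1}{2}\left(x_n + 2/x_n\right)$, the ancient Babylonian (Heron's) method for computing $\sqrt{2}$.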
The classical iteration uses the function value and its derivative to update an approximation. Variants replace or augment derivative information, adapt step sizes, or operate in higher dimensions. Derivative-free adaptations include the secant method and various finite-difference schemes developed in the lineage of work by Brook Taylor and Joseph-Louis Lagrange. Multivariate extensions use Jacobian matrices and are prominent in nonlinear systems and optimization problems tackled at organizations such as Bell Labs and IBM Research. Quasi-Newton methods, notably the BFGS update named after Broyden, Fletcher, Goldfarb, and Shanno, trade exact Jacobians for approximations and are central in large-scale problems at institutions like Stanford University and Massachusetts Institute of Technology.
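As a concrete illustration of the multivariate case, the following sketch solves the Newton correction equation $J(x)\,\Delta x = -F(x)$ with dense linear algebra; the function name `newton_system` and the example system are chosen here for illustration and are not drawn from any particular library.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Multivariate Newton: solve J(x) dx = -F(x) for the correction, then update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -Fx)   # Newton correction via a dense solve
    return x

# Example: intersection of the circle x^2 + y^2 = 4 with the hyperbola x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1],       v[0]]])
print(newton_system(F, J, [2.0, 0.5]))
```

Quasi-Newton methods such as BFGS replace the exact Jacobian in this loop with a cheaper approximation updated from successive function evaluations.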
Secant-type methods reduce derivative evaluation cost and are used in applied settings from Princeton University numerical laboratories to Siemens engineering suites. Globalization strategies such as line-search and trust-region frameworks draw on theoretical foundations developed by researchers affiliated with Courant Institute, ETH Zurich, and University of Cambridge. Homotopy and continuation methods, used for difficult polynomial systems by groups at California Institute of Technology and Imperial College London, can incorporate Newton iterations as local correctors.
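A minimal sketch of one such globalization is a backtracking line search on the residual magnitude; the names and parameters below are illustrative assumptions, and production codes use more careful acceptance tests and safeguards.

```python
import math

def damped_newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton iteration with a simple backtracking line search on |f|.

    Assumes fprime(x) is nonzero at the iterates visited.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = fx / fprime(x)                  # full Newton step
        t = 1.0
        # Halve the step until the residual actually decreases.
        while abs(f(x - t * step)) >= abs(fx) and t > 1e-8:
            t *= 0.5
        x -= t * step
    return x

# Plain Newton diverges on arctan from x0 = 3; the damped version converges to 0.
print(damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), 3.0))
```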
Local quadratic convergence for simple roots is a hallmark of the method, a result formalized in the analysis literature by figures like Cauchy and Gauss. Quadratic convergence typically requires a nonvanishing first derivative at the root and a sufficiently close initial guess; proofs employ tools from Augustin-Louis Cauchy's analysis and subsequent abstract formulations in functional analysis associated with Stefan Banach and John von Neumann. For multiple roots, convergence degrades to linear unless the iteration is modified; multiplicity adjustments trace back to investigations by Joseph-Louis Lagrange.
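The standard local analysis makes the rate explicit: if $r$ is a simple root, $f'(r) \neq 0$, $f''$ is continuous near $r$, and $e_n = x_n - r$ denotes the error, a Taylor expansion of the update gives

$$e_{n+1} = \frac{f''(\xi_n)}{2\,f'(x_n)}\, e_n^2 \qquad \text{for some } \xi_n \text{ between } x_n \text{ and } r,$$

so the number of correct digits roughly doubles per iteration once the iterates are close enough.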
Global convergence results rely on stronger assumptions and are studied via merit functions and monotonicity conditions used in optimization theory shaped by researchers at Princeton University and University of Chicago. Stability and sensitivity analyses link to backward error concepts advanced by James Wilkinson and robustness studies in numerical linear algebra at National Institute of Standards and Technology.
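For a system $F(x) = 0$, one common choice of merit function in these globalized analyses is the squared residual norm

$$\phi(x) = \tfrac{1}{2}\,\lVert F(x) \rVert_2^2,$$

with a step accepted only when it produces sufficient decrease in $\phi$.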
Implementations appear throughout scientific computing libraries and commercial packages provided by entities such as Wolfram Research, MathWorks, and Google's scientific toolchains. Applications include solving nonlinear algebraic systems in computational physics at CERN, root-finding in control engineering practiced at Siemens and General Electric, and parameter estimation in statistical models employed by research groups at Harvard University and University of Oxford. In computer graphics, inverse kinematics and intersection problems use Newton-like solvers developed by studios and companies including Pixar.
Robust practical use combines analytic derivatives, automatic differentiation engines pioneered by teams at Google and Stanford University, and sparse linear algebra solvers from projects such as those at Argonne National Laboratory. For large-scale PDE-constrained optimization, Newton–Krylov methods developed at Oak Ridge National Laboratory and Los Alamos National Laboratory couple Newton iterations with Krylov subspace solvers like GMRES.
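As a small illustration, SciPy exposes a Jacobian-free Newton–Krylov solver; the residual below is an arbitrary two-equation system chosen only for the example, and the call assumes a standard SciPy installation.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(x):
    # An arbitrary small nonlinear system, used only to exercise the solver.
    return np.array([x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
                     0.5 * (x[1] - x[0])**3 + x[1]])

# The Jacobian is never formed explicitly; its action on vectors is probed by
# finite differences inside a Krylov (here GMRES) inner solve.
sol = newton_krylov(residual, np.zeros(2), method="gmres")
print(sol, residual(sol))
```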
Newton iterations can fail when derivatives vanish, when initial guesses are poor, or when the function is non-smooth. Chaotic basins of attraction arise in complex dynamics, a line of study inspired by the work of Henri Poincaré and explored through Newton fractals and the Julia sets of Newton maps by researchers at University of Paris-Sud and University of Warwick. Nonconvergence and cycling can be exacerbated by ill-conditioned Jacobians, issues central to numerical analysis concerns raised by Alan Turing and John von Neumann.
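Cycling is easy to exhibit: for $f(x) = x^3 - 2x + 2$ the iteration started at $x_0 = 0$ oscillates between 0 and 1 indefinitely, a classic textbook example.

```python
f  = lambda x: x**3 - 2.0 * x + 2.0
fp = lambda x: 3.0 * x**2 - 2.0

x = 0.0
for n in range(6):
    x = x - f(x) / fp(x)
    print(n, x)   # prints 1.0, 0.0, 1.0, 0.0, ... : a 2-cycle, no convergence
```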
Multiple roots and near-singular Jacobians slow convergence or require specialized deflation techniques developed in computational algebraic geometry at institutions like University of California, Berkeley and ETH Zurich. Floating-point rounding and catastrophic cancellation concerns connect to the legacy of Wilkinson and modern standards from IEEE 754 committees.
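The slowdown at a multiple root, and the classical multiplicity correction $x_{n+1} = x_n - m\,f(x_n)/f'(x_n)$, can be seen on the double root of $f(x) = (x-1)^2$, a deliberately simple illustration.

```python
f  = lambda x: (x - 1.0)**2            # double root at x = 1 (multiplicity m = 2)
fp = lambda x: 2.0 * (x - 1.0)

x = 2.0
for n in range(5):
    x -= f(x) / fp(x)                  # plain Newton: the error merely halves
    print(n, x)                        # 1.5, 1.25, 1.125, ... linear convergence

x = 2.0
x -= 2.0 * f(x) / fp(x)                # one multiplicity-adjusted step with m = 2
print(x)                               # 1.0 exactly for this quadratic
```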
The method's origins are associated with iterative root-finding ideas in the work of Isaac Newton in the 17th century and a later, simpler presentation published by Joseph Raphson in 1690. Subsequent formalizations and analyses were provided by Leonhard Euler, Cauchy, and Gauss. The dual attribution as Newton–Raphson reflects this layered history and the dissemination of ideas across mathematical centers such as Trinity College, Cambridge, and the broader European scientific networks, including Royal Society correspondence. Adoption into computational practice accelerated with the advent of electronic computing at centers like the University of Manchester and research laboratories including Bell Labs during the 20th century.