| Gauss–Markov theorem | |
|---|---|
| Name | Gauss–Markov theorem |
| Field | Statistics |
| Statement | Under the classical assumptions, the ordinary least squares estimator is the best linear unbiased estimator of the regression coefficients |
| Named after | Carl Friedrich Gauss; Andrey Markov |
The Gauss–Markov theorem states that, under specified linearity and error assumptions, the ordinary least squares estimator is the best linear unbiased estimator of the regression coefficients. It is central to linear regression theory and has influenced developments in estimation theory, econometrics, and experimental design. The theorem connects work by Carl Friedrich Gauss and Andrey Markov and has been discussed in contexts involving Ronald A. Fisher, Jerzy Neyman, John von Neumann, and institutions such as Princeton University and the University of Cambridge.
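Stated in the usual matrix notation (a conventional textbook formulation, not a quotation of Gauss or Markov), the model and the claim read:

```latex
% Classical linear model; X is the n x p design matrix with full column rank.
\[
  y = X\beta + \varepsilon, \qquad
  \mathbb{E}[\varepsilon] = 0, \qquad
  \operatorname{Var}(\varepsilon) = \sigma^{2} I_{n}.
\]
% Gauss–Markov claim: for every linear unbiased estimator \tilde{\beta} = Cy (so CX = I),
% the OLS estimator \hat{\beta} is no worse in the positive semidefinite ordering:
\[
  \operatorname{Var}(\tilde{\beta}) - \operatorname{Var}(\hat{\beta}) \succeq 0.
\]
```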
The Gauss–Markov result asserts that, for a linear model whose parameters are estimated by ordinary least squares (OLS), the OLS estimator has minimum variance among all linear unbiased estimators. This statement underpins methods used at Harvard University, the Massachusetts Institute of Technology, the London School of Economics, Columbia University, and the University of Chicago in applied work linked to programs at the World Bank and the International Monetary Fund, to research recognized by the Nobel Memorial Prize in Economic Sciences, and to policy analyses developed at the RAND Corporation. The theorem’s significance extends from Karl Pearson–style regression to modern testing procedures adopted at Stanford University, Yale University, and the University of California, Berkeley.
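Concretely, with the design matrix of full column rank, the OLS estimator and its covariance take the familiar closed forms (standard results under the classical assumptions):

```latex
% OLS as the solution of the normal equations, and its covariance matrix.
\[
  \hat{\beta} = (X^{\top} X)^{-1} X^{\top} y, \qquad
  \operatorname{Var}(\hat{\beta}) = \sigma^{2} (X^{\top} X)^{-1}.
\]
```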
The classical conditions include linearity in parameters, errors with zero expectation, homoscedasticity, and no serial correlation among the errors; these assumptions originate in Gauss's work and were later refined by Andrey Markov and commentators at the University of Göttingen and Imperial College London. Specifically, the design matrix must be nonrandom, or treated conditionally, and have full column rank so that the normal equations have a unique solution, a point emphasized in curricula at Princeton University and ETH Zurich. Homoscedastic and uncorrelated errors are central, as highlighted in critiques by Jerzy Neyman and in applications to datasets used by United Nations agencies and statistical groups at the Office for National Statistics and the U.S. Census Bureau.
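The four conditions are commonly summarized as follows (a standard paraphrase rather than any single historical wording):

```latex
% The classical Gauss–Markov conditions for y = X\beta + \varepsilon.
\begin{align*}
  &\text{(i) linearity / identifiability:} && y = X\beta + \varepsilon, \quad \operatorname{rank}(X) = p;\\
  &\text{(ii) zero-mean errors:}           && \mathbb{E}[\varepsilon_i] = 0 \text{ for all } i;\\
  &\text{(iii) homoscedasticity:}          && \operatorname{Var}(\varepsilon_i) = \sigma^{2} \text{ for all } i;\\
  &\text{(iv) no serial correlation:}      && \operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0 \text{ for } i \neq j.
\end{align*}
```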
A standard proof uses quadratic forms and projection geometry in Euclidean space, tools employed by John von Neumann and elaborated in texts from Princeton University Press and Cambridge University Press. The proof expresses any linear unbiased estimator as the OLS estimator plus a zero-mean linear correction, then compares variances via positive semidefinite matrices, an argument familiar from lectures at Columbia University and the University of Oxford. One shows that the variance difference equals the variance of the correction term, which is nonnegative; related matrix decompositions were developed by Carl Gustav Jacob Jacobi and turned into algorithms at Bell Labs and AT&T research. The geometric view treats OLS as orthogonal projection onto the column space of the design matrix, a perspective used in courses at the Massachusetts Institute of Technology and Stanford University.
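The variance-comparison step can be sketched in a few lines (the standard argument, with D denoting the zero-mean correction):

```latex
% Any linear unbiased estimator \tilde{\beta} = Cy with CX = I can be written as
% OLS plus a correction: C = (X^T X)^{-1} X^T + D with DX = 0.
\[
  \operatorname{Var}(\tilde{\beta})
  = \sigma^{2} C C^{\top}
  = \sigma^{2} (X^{\top} X)^{-1} + \sigma^{2} D D^{\top}
  \succeq \sigma^{2} (X^{\top} X)^{-1}
  = \operatorname{Var}(\hat{\beta}),
\]
% where the cross terms vanish because DX = 0, and DD^T is positive semidefinite.
```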
The theorem justifies OLS in econometric studies by researchers at the National Bureau of Economic Research, in policy evaluations at the World Bank, and in empirical analyses published by The Econometric Society. It underlies textbook examples applied to data from the U.S. Bureau of Labor Statistics and case studies taught at the London School of Economics and Columbia Business School. Practical examples include estimating supply and demand curves in research by scholars influenced by Milton Friedman, modeling growth regressions, and calibration problems arising in engineering projects at General Electric and Siemens. In clinical research, the OLS justification appears in the methodology sections of trials sponsored by the National Institutes of Health and in analyses by teams at Johns Hopkins University.
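A minimal simulation (illustrative synthetic data, not drawn from any study cited above) can check unbiasedness and the covariance formula numerically:

```python
import numpy as np

# Monte Carlo check of the Gauss-Markov setup on synthetic data.
rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design with intercept
beta = np.array([1.0, 2.0, -0.5])                               # true coefficients (hypothetical)
sigma = 1.5                                                     # error standard deviation

estimates = []
for _ in range(5000):
    eps = rng.normal(scale=sigma, size=n)          # homoscedastic, uncorrelated errors
    y = X @ beta + eps                             # linear model y = X beta + eps
    bhat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
    estimates.append(bhat)

estimates = np.asarray(estimates)
print("mean of OLS estimates:", estimates.mean(axis=0))   # close to beta (unbiasedness)
print("empirical covariance:\n", np.cov(estimates.T))     # close to the theoretical value
print("theoretical covariance:\n", sigma**2 * np.linalg.inv(X.T @ X))
```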
Relaxations and extensions include generalized least squares (GLS), which addresses heteroscedasticity and autocorrelation, a development tied to work at Bell Labs and analytic traditions at the University of Chicago; weighted least squares and ridge regression relate to regularization methods advanced at IBM Research and Microsoft Research. The Gauss–Markov framework has been generalized to instrumental variables approaches used by James Heckman and others, to mixed models in the work of Charles Roy Henderson and William G. Cochran, and to robust estimation methods advocated at Carnegie Mellon University. Connections exist to the Gauss–Newton algorithm used at NASA and to shrinkage estimators such as the James–Stein estimator, whose properties were debated in forums including the Institute of Mathematical Statistics and lectures at the University of Michigan.
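For instance, when the error covariance is known up to scale, Aitken's generalization shows that the GLS estimator below, rather than OLS, is the best linear unbiased estimator (standard textbook form, with Ω known and positive definite):

```latex
% Aitken / generalized least squares under Var(\varepsilon) = \sigma^2 \Omega.
\[
  \hat{\beta}_{\mathrm{GLS}} = (X^{\top} \Omega^{-1} X)^{-1} X^{\top} \Omega^{-1} y,
  \qquad
  \operatorname{Var}(\hat{\beta}_{\mathrm{GLS}}) = \sigma^{2} (X^{\top} \Omega^{-1} X)^{-1}.
\]
```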
Category:Theorems in statistics