| Box–Jenkins | |
|---|---|
| Name | Box–Jenkins |
| Field | Time series analysis |
| Introduced | 1970 (*Time Series Analysis: Forecasting and Control*) |
| Developers | George E. P. Box, Gwilym M. Jenkins |
Box–Jenkins is a methodology for modeling and forecasting time series that emphasizes iterative model building, diagnostic checking, and practical forecasting. Codified in Box and Jenkins's 1970 monograph *Time Series Analysis: Forecasting and Control*, it became influential across statistics, econometrics, hydrology, and engineering. The approach provides a structured workflow for developing parsimonious autoregressive integrated moving average (ARIMA) models adaptable to many applied problems.
The development traces to statisticians George E. P. Box and Gwilym M. Jenkins and their collaboration at institutions including the University of Wisconsin–Madison and Lancaster University. Early dissemination occurred through monographs and courses that reached researchers at the Royal Statistical Society, the American Statistical Association, and the International Statistical Institute. Applications and critiques circulated among practitioners at the Bank of England, the Federal Reserve System, and the World Bank, while method refinements were debated at conferences such as the Joint Statistical Meetings and the annual meetings of the Institute of Mathematical Statistics. Subsequent theoretical and computational advances tied the methodology to work by Herman Wold, Norbert Wiener, W. Edwards Deming, Arthur D. Morton, and researchers at Bell Labs and the RAND Corporation. Later textbooks and research integrated ideas from scholars at the Massachusetts Institute of Technology, Princeton University, Harvard University, and Columbia University.
The methodology prescribes iterative stages linking data preparation, model selection, estimation, and validation, practiced in applied settings at organizations such as the United Nations Development Programme and the International Monetary Fund. It relies on stochastic process theory influenced by contributions from Andrey Kolmogorov, Aleksandr Lyapunov, and Wold-type decompositions developed in circles that included researchers at the Steklov Institute of Mathematics and the Max Planck Institute for Mathematics in the Sciences. Implementation has been embedded in software ecosystems pioneered by teams at International Business Machines Corporation, Microsoft Corporation, MathWorks, and the R Project for Statistical Computing, and deployed by analysts from Goldman Sachs, Morgan Stanley, JPMorgan Chase, and central banks including the European Central Bank.
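The data-preparation stage centers on transforming a series to stationarity, typically by differencing. A minimal sketch with simulated data (NumPy only; the series and the `acf1` helper are illustrative, not part of the original methodology's software): a random walk is nonstationary and shows lag-1 autocorrelation near 1, while its first difference behaves like white noise.

```python
import numpy as np

def acf1(x):
    """Sample autocorrelation at lag 1 (illustrative helper)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(42)
walk = np.cumsum(rng.normal(size=1000))   # random walk: nonstationary
diffed = np.diff(walk)                    # first difference: approximately white noise

print(acf1(walk))    # close to 1: strong persistence, differencing needed
print(acf1(diffed))  # close to 0: one difference suffices (d = 1 in ARIMA terms)
```

In Box–Jenkins notation, choosing the differencing order d in ARIMA(p, d, q) corresponds to repeating this step until the transformed series looks stationary.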
Model identification in this framework emphasizes stationary transformations and autocorrelation analysis, techniques taught in graduate courses at Stanford University, Yale University, the University of Chicago, and the University of California, Berkeley. Analysts examine correlograms and partial correlograms, tools stemming from early signal processing work at Bell Labs and the Massachusetts Institute of Technology Radiation Laboratory, and compare candidate autoregressive and moving average orders using information criteria developed in association with researchers at Cornell University and the University of Pennsylvania. The approach has been applied in empirical studies by scholars at the London School of Economics, the Australian National University, and the University of Tokyo addressing macroeconomic series, industrial production, and climate indices used by teams at the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, and the Met Office.
Estimation employs maximum likelihood and least squares algorithms refined through contributions from methodologists at Princeton University, the University of Cambridge, Imperial College London, and ETH Zurich. Diagnostic checking uses residual analysis, the Ljung–Box test developed by Greta Ljung and George Box, and overfitting avoidance strategies discussed in forums at the Royal Society and the National Academy of Sciences. Robustness and model uncertainty considerations link to research by scholars at Columbia Business School, INSEAD, the MIT Sloan School of Management, and the Wharton School, while computational optimizations reflect work from teams at Intel Corporation and NVIDIA Corporation.
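The estimation-then-diagnostics loop can be illustrated on the simplest case. A minimal sketch, assuming a zero-mean AR(1) fitted by conditional least squares and checked with the Ljung–Box portmanteau statistic Q = n(n+2) Σₖ ρ̂ₖ²/(n−k) on the residuals (helper names are illustrative, not a reference implementation):

```python
import numpy as np

def fit_ar1(y):
    """Conditional least-squares estimate of phi in y_t = phi*y_{t-1} + e_t."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    phi_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
    resid = y[1:] - phi_hat * y[:-1]
    return phi_hat, resid

def ljung_box(resid, h):
    """Ljung-Box statistic Q = n(n+2) * sum_{k=1}^{h} rho_k^2 / (n-k)."""
    resid = np.asarray(resid, dtype=float) - np.mean(resid)
    n = len(resid)
    denom = np.dot(resid, resid)
    q = 0.0
    for k in range(1, h + 1):
        rho_k = np.dot(resid[:-k], resid[k:]) / denom
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(1)
n, phi_true = 500, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

phi_hat, resid = fit_ar1(y)
q10 = ljung_box(resid, 10)
# For an adequate model, Q at h lags is compared with a chi-square quantile
# with h minus the number of fitted ARMA parameters degrees of freedom;
# here df = 9, whose 5% critical value is about 16.92.
```

If Q exceeds the critical value, the residuals retain autocorrelation and the analyst returns to the identification stage with a richer candidate model, which is the iterative loop at the heart of the methodology.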
Forecasts produced under this methodology have been applied in central banking by the Federal Reserve Bank of New York and the Bank of England, in energy demand modeling by Shell plc and ExxonMobil, and in inventory control at firms like Toyota Motor Corporation and Walmart Inc. Environmental and climate applications span projects at the Intergovernmental Panel on Climate Change, the European Space Agency, and the United Nations Environment Programme, while epidemiological and health-care forecasting has drawn on these methods in studies at the Centers for Disease Control and Prevention, the World Health Organization, and Johns Hopkins University. Extensions inspired work at the European Central Bank, the Bank for International Settlements, and academic centers at New York University, Duke University, and the University of Michigan.
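The forecasting step itself is a simple recursion once a model is fitted. A minimal sketch for a zero-mean AR(1) with known parameters (an assumption for illustration; in practice the estimates from the previous stage are plugged in): the h-step point forecast is φʰ·y_t, reverting geometrically to the mean, while the forecast-error variance σ² Σⱼ φ²ʲ widens toward the unconditional variance σ²/(1−φ²).

```python
def ar1_forecast(y_last, phi, sigma2, horizon):
    """h-step point forecasts and forecast-error variances for a zero-mean AR(1).

    Point forecast:  y_hat(h) = phi**h * y_last          (mean reversion)
    Error variance:  var(h)   = sigma2 * sum_{j<h} phi**(2*j)  (widening bands)
    """
    forecasts = [phi ** h * y_last for h in range(1, horizon + 1)]
    variances = [sigma2 * sum(phi ** (2 * j) for j in range(h))
                 for h in range(1, horizon + 1)]
    return forecasts, variances

fc, var = ar1_forecast(y_last=4.0, phi=0.5, sigma2=1.0, horizon=3)
# fc  -> [2.0, 1.0, 0.5]      (geometric decay toward the mean 0)
# var -> [1.0, 1.25, 1.3125]  (monotonically widening prediction intervals)
```

The widening variance is what produces the familiar fan-shaped prediction intervals in applied Box–Jenkins forecasts.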
Category:Time series analysis