| Statistical process control | |
|---|---|
| Name | Statistical process control |
| Focus | Quality control |
| Invented by | Walter A. Shewhart |
| Introduced | 1920s |
| Related | Walter A. Shewhart, W. Edwards Deming, Kaoru Ishikawa |
Statistical process control is a set of methods for monitoring, controlling, and improving processes using statistical techniques. It applies quantitative tools to distinguish common-cause variation from special-cause variation, supporting decisions in manufacturing, services, and research. Rooted in early 20th-century work, it influenced industrial practice associated with Walter A. Shewhart, W. Edwards Deming, and the quality movements in Japan and the United States.
Statistical process control (SPC) originated with Walter A. Shewhart at Bell Labs and was advanced by figures such as W. Edwards Deming, Joseph Juran, and Kaoru Ishikawa in postwar Japan. It integrates sampling plans, control limits, and feedback loops, used by organizations such as Toyota Motor Corporation, General Electric, and Ford Motor Company to reduce variation and improve reliability. SPC informs frameworks and standards developed by bodies such as the International Organization for Standardization and the American Society for Quality, and is taught at institutions such as the Massachusetts Institute of Technology and Stanford University. Practitioners use SPC alongside methodologies such as Total Quality Management, Six Sigma, Lean manufacturing, and ISO 9001-aligned quality systems.
Control charts are central SPC tools introduced by Walter A. Shewhart; variants include the X̄ and R charts, the X̄ and s charts, and the p, np, c, and u attribute charts. Control charts feature centerlines and control limits determined by statistical theory and are applied in sectors ranging from Boeing aerospace production to Pfizer pharmaceutical manufacturing. Specialized charts, such as EWMA charts linked to work at Bell Labs and CUSUM charts associated with research by E. S. Page in Royal Statistical Society contexts, detect small shifts in process parameters. Industries apply run rules from sources such as the American Society for Quality, along with research from universities such as the University of Michigan and the University of Cambridge, to interpret runs, trends, and outliers.
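The X̄ and R chart limits described above can be sketched in a few lines. The subgroup measurements below are illustrative values invented for this example, not data from any cited process; the constants A2, D3, and D4 are the standard tabulated Shewhart factors for subgroups of size five.

```python
import statistics

# Hypothetical subgroup data: 6 subgroups of 5 measurements each
# (illustrative numbers only, not from any real process).
subgroups = [
    [10.2, 9.9, 10.1, 10.0, 10.3],
    [10.0, 10.1, 9.8, 10.2, 10.0],
    [9.9, 10.0, 10.2, 10.1, 9.8],
    [10.1, 10.3, 10.0, 9.9, 10.2],
    [10.0, 9.8, 10.1, 10.0, 10.1],
    [10.2, 10.0, 9.9, 10.1, 10.0],
]

# Tabulated Shewhart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [statistics.mean(s) for s in subgroups]   # subgroup means
ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges
xbarbar = statistics.mean(xbars)                  # grand mean (centerline)
rbar = statistics.mean(ranges)                    # average range

# X-bar chart limits: centerline +/- A2 * R-bar.
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
# R chart limits: D3 * R-bar and D4 * R-bar.
ucl_r, lcl_r = D4 * rbar, D3 * rbar

print(f"X-bar chart: CL={xbarbar:.3f}, LCL={lcl_x:.3f}, UCL={ucl_x:.3f}")
print(f"R chart:     CL={rbar:.3f},  LCL={lcl_r:.3f}, UCL={ucl_r:.3f}")

# A subgroup mean outside the limits signals special-cause variation.
signals = [i for i, x in enumerate(xbars) if not lcl_x <= x <= ucl_x]
```

Run rules extend this beyond single out-of-limit points by also flagging runs and trends within the limits.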
Process capability indices (Cp, Cpk, Pp, Ppk) quantify the relationship between process variation and specification limits and are used by firms including Toyota and Siemens. Capability analysis complements performance metrics from organizations such as Institute of Electrical and Electronics Engineers standards committees and feeds into reliability engineering practices at NASA and the European Space Agency. Capability studies often reference statistical distributions studied by pioneers such as Karl Pearson, Ronald A. Fisher, and William Sealy Gosset, and use sampling theory advanced at institutions such as the University of Oxford and Imperial College London. Regulatory agencies such as the U.S. Food and Drug Administration and the European Medicines Agency expect capability evidence in quality submissions for pharmaceuticals and medical devices.
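A minimal sketch of the Cp and Cpk calculations: Cp compares the specification width to the 6σ process spread, while Cpk additionally penalizes an off-center mean. The specification limits and measurements below are hypothetical values chosen for illustration.

```python
import statistics

# Hypothetical specification limits and sample measurements
# (illustrative values, not from any cited study).
LSL, USL = 9.0, 11.0
data = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.3, 10.0, 9.9, 10.1]

mu = statistics.mean(data)
sigma = statistics.stdev(data)   # sample standard deviation

# Cp: ratio of spec width to 6-sigma spread (ignores centering).
cp = (USL - LSL) / (6 * sigma)
# Cpk: distance from the mean to the nearer spec limit, in 3-sigma units.
cpk = min(USL - mu, mu - LSL) / (3 * sigma)

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

By construction Cpk ≤ Cp, with equality only when the process is exactly centered between the specification limits.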
SPC methods include short-run and long-run control strategies, attribute and variable measurement systems, measurement system analysis (MSA), and design of experiments (DOE) techniques popularized by Ronald A. Fisher and industrial adopters such as DuPont. Software tools range from enterprise solutions by SAP SE and IBM to technical computing packages from The MathWorks and open-source platforms influenced by work at the Massachusetts Institute of Technology and Carnegie Mellon University. Complementary tools include process mapping from Kaoru Ishikawa's quality circles, failure modes and effects analysis (FMEA) as used by General Motors, and root-cause analysis methods such as the Five Whys practiced in Toyota Production System contexts.
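The FMEA ranking mentioned above is conventionally driven by a Risk Priority Number, the product of severity, occurrence, and detection scores (each rated 1 to 10). A minimal sketch, with hypothetical failure modes invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10: impact of the failure
    occurrence: int  # 1-10: how often it occurs
    detection: int   # 1-10: 10 = hardest to detect before reaching the customer

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the conventional FMEA ranking score.
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for illustration only.
modes = [
    FailureMode("seal leak", severity=8, occurrence=3, detection=4),
    FailureMode("mislabel", severity=6, occurrence=2, detection=7),
    FailureMode("torque drift", severity=5, occurrence=6, detection=3),
]

# Highest RPN first: these get corrective action priority.
ranked = sorted(modes, key=lambda m: m.rpn, reverse=True)
for m in ranked:
    print(m.name, m.rpn)
```

Teams then target the top-ranked modes with corrective actions and re-score them, closing the feedback loop that SPC monitoring maintains on the shop floor.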
Organizations implement SPC in discrete and process industries, including Procter & Gamble, Boeing, Intel, Samsung Electronics, Toyota Motor Corporation, Pfizer, General Motors, Lockheed Martin, and Siemens. Applications span automotive assembly lines, semiconductor fabrication at Intel Corporation and TSMC, pharmaceutical manufacturing regulated by the U.S. Food and Drug Administration, and service operations at American Express and in Deloitte consulting practices. Implementation programs are often led by quality professionals certified by the American Society for Quality and supported by training at universities such as Pennsylvania State University and the University of California, Berkeley. Large-scale adoption followed demonstrations of productivity gains in postwar Japan, during periods associated with leaders such as Kaoru Ishikawa and corporate transformations at Toyota.
SPC relies on probability theory and statistical inference, drawing on work by Pierre-Simon Laplace, Thomas Bayes, Ronald A. Fisher, Jerzy Neyman, and Egon Pearson. Concepts such as sampling distributions, the central limit theorem, hypothesis testing, and estimation underpin control-chart limits and capability indices; these topics are core to statistics curricula at Harvard University and Princeton University. Advanced statistical developments from groups such as the Royal Statistical Society, the Institute of Mathematical Statistics, and researchers at Bell Labs informed the sequential analysis and change-point detection methods used in modern SPC. Bayesian and frequentist approaches both contribute to modern methodology applied in fields regulated by the U.S. Food and Drug Administration and standardized by the International Organization for Standardization.
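The inferential reasoning behind 3σ limits can be made concrete: under a normal in-control model, a plotted point falls outside the limits with probability about 0.27%, so the chart averages roughly 370 points between false alarms. A minimal sketch using only the standard library:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability that an in-control, normally distributed statistic falls
# outside 3-sigma limits: the per-point false-alarm rate of a Shewhart chart.
alpha = 2 * (1 - normal_cdf(3.0))
print(f"False-alarm probability per point: {alpha:.5f}")

# In-control average run length (ARL0): expected points between false alarms.
arl0 = 1 / alpha
print(f"In-control ARL: about {arl0:.0f} points")
```

This trade-off between false-alarm rate and sensitivity to real shifts is what motivates the EWMA and CUSUM variants, which detect small sustained shifts faster than a plain Shewhart chart at a comparable ARL.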
Category:Quality control