| Angrist–Imbens | |
|---|---|
| Name | Angrist–Imbens |
| Field | Econometrics |
| Introduced | 1990s |
| Key persons | Joshua D. Angrist; Guido W. Imbens |
| Related | Instrumental variables; Local Average Treatment Effect; Rubin Causal Model |
Angrist–Imbens is a framework in econometrics, developed by Joshua D. Angrist and Guido W. Imbens, that formalizes the identification and estimation of causal effects using instrumental variables under treatment-effect heterogeneity. The approach articulates assumptions under which an instrument identifies a Local Average Treatment Effect for compliers, linking ideas from the Rubin Causal Model to classic work by Jerzy Neyman and Ronald Fisher. It has shaped empirical practice in studies published in outlets such as the American Economic Review, the Quarterly Journal of Economics, and the Journal of Political Economy, and informed applied work by researchers affiliated with the National Bureau of Economic Research, Harvard University, and Stanford University.
Angrist and Imbens provided a formal statement of identification conditions for instrumental variables focusing on heterogeneous treatment effects and imperfect compliance, building on earlier contributions by David Card, James J. Heckman, Edward Leamer, and Angus Deaton. Their formulation isolates subpopulations (compliers, always-takers, never-takers, and defiers) and links estimands to policy-relevant causal parameters under assumptions such as independence, exclusion, and monotonicity. The work connects to methodological traditions represented by Paul Rosenbaum, Donald Rubin, Jerzy Neyman, and Ronald Fisher, and has been incorporated into textbooks by Jeffrey Wooldridge, Joshua Angrist, and Guido Imbens, as well as graduate courses at institutions such as MIT, Princeton University, and the University of Chicago.
The Angrist–Imbens framework considers a binary treatment, a binary instrument, and potential outcomes for each unit. Its key assumptions, formalized in the sketch below, are:

1. Random or as-if random assignment of the instrument conditional on covariates, an idea resonant with Jerzy Neyman's work and implemented in studies by Donald Rubin and Paul Rosenbaum.
2. The exclusion restriction, which requires that the instrument affect outcomes only through the treatment, an assumption debated in empirical studies by James J. Heckman and Angus Deaton.
3. Monotonicity, which rules out defiers, a condition discussed in relation to compliance behavior in research by David Card and Alan B. Krueger.
4. A nonzero first-stage effect, echoing the weak-instrument concerns raised by Peter Phillips, James Stock, and Mark Watson.

The framework formalizes latent compliance classes (compliers, always-takers, never-takers), paralleling classification approaches used in work by Imbens's collaborators and critics.
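A compact restatement of these conditions in potential-outcomes notation (the symbol names below are an editorial choice for this sketch, not drawn from a particular paper):

```latex
% Binary instrument Z, potential treatments D(z), potential outcomes Y(z,d).
\begin{align*}
&\text{(1) Independence:}  && \{\,Y(z,d),\, D(z)\,\} \;\perp\; Z \\
&\text{(2) Exclusion:}     && Y(z,d) = Y(d) \quad \text{for } z \in \{0,1\} \\
&\text{(3) Monotonicity:}  && D(1) \ge D(0) \quad \text{for all units (no defiers)} \\
&\text{(4) First stage:}   && \mathbb{E}[\,D(1) - D(0)\,] \neq 0
\end{align*}
```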
Under the Angrist–Imbens assumptions, the instrumental variable estimand identifies the Local Average Treatment Effect (LATE) for compliers. LATE is the average causal effect among units whose treatment status is shifted by the instrument, a notion related to the effect heterogeneity emphasized by James J. Heckman and Angus Deaton. The LATE result clarifies earlier readings of Wald estimators in studies by David Card (returns to schooling) and Angrist and Krueger (quarter-of-birth instrument), explaining why estimated effects need not generalize to always-takers or never-takers. LATE has been contrasted with average treatment effects defined in the Rubin Causal Model literature by Donald Rubin and with policy-targeted parameters discussed by John J. Donohue and Steven D. Levitt.
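The standard identification result can be written as the population Wald ratio:

```latex
% The Wald ratio identifies the complier average causal effect (LATE).
\begin{align*}
\tau_{\text{LATE}}
  &= \mathbb{E}\big[\,Y(1) - Y(0) \mid D(1) > D(0)\,\big] \\
  &= \frac{\mathbb{E}[\,Y \mid Z = 1\,] - \mathbb{E}[\,Y \mid Z = 0\,]}
          {\mathbb{E}[\,D \mid Z = 1\,] - \mathbb{E}[\,D \mid Z = 0\,]}
\end{align*}
```

Under monotonicity the denominator equals the population share of compliers, which is why a weak first stage makes the ratio unstable.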
Estimation within the Angrist–Imbens framework typically proceeds via two-stage least squares, the generalized method of moments, or locally efficient semiparametric estimators advanced by Guido Imbens and colleagues. Inference addresses finite-sample and asymptotic properties, with attention to the weak-instrument problems discussed by James H. Stock, Motohiro Yogo, and Peter Phillips. Variance estimation, bootstrap methods, and sensitivity analyses draw on resampling techniques introduced by Bradley Efron and on work by econometricians such as Kevin Sheppard and Timothy Conley. Empirical researchers implement these procedures using software developed in communities around StataCorp and the R Project for Statistical Computing, as well as languages used at National Bureau of Economic Research workshops.
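To make the mechanics concrete, here is a minimal simulation sketch in Python (the data-generating process, complier share, and effect sizes are illustrative assumptions, not taken from any cited study). With a single binary instrument and no covariates, the two-stage least squares coefficient reduces to the Wald ratio computed below:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent compliance types: compliers, always-takers, never-takers (no defiers).
types = rng.choice(["complier", "always", "never"], size=n, p=[0.5, 0.2, 0.3])

z = rng.integers(0, 2, size=n)                 # randomized binary instrument
d = np.where(types == "always", 1,
    np.where(types == "never", 0, z))          # treatment under monotonicity

# Heterogeneous effects: compliers' true average effect is 2.0, others' 0.5.
tau = np.where(types == "complier", 2.0, 0.5)
y = 1.0 + tau * d + rng.normal(0, 1, size=n)

# Wald estimator: reduced form over first stage (equals 2SLS here).
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"Wald/2SLS estimate: {wald:.3f}  (true complier effect: 2.0)")
```

Computing the ratio directly keeps the example dependency-free; a packaged 2SLS routine would return the same point estimate in this just-identified case, and the estimate recovers the complier effect rather than the population average effect.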
The Angrist–Imbens framework has been applied to policy evaluations, labor economics, health economics, and education research. Prominent examples include analyses of compulsory schooling laws studied by Angrist and Krueger, returns to schooling investigated by David Card, draft-lottery studies tracing the effects of military service on earnings following Angrist's work, and evaluations of job-training programs in the literature associated with Heckman. Other applications feature evaluations of public health interventions in research associated with scholars at the Harvard School of Public Health and field experiments promoted by Abhijit Banerjee and Esther Duflo. The approach has informed causal claims in studies published in the American Economic Review, Econometrica, the Journal of Econometrics, and the Review of Economics and Statistics.
Critiques focus on the external validity of LATE, the strength and plausibility of the exclusion restriction, and the monotonicity assumption. Commentators such as James J. Heckman and Angus Deaton have emphasized its limitations for policy inference when compliers are nonrepresentative. Extensions include frameworks for multi-valued instruments and treatments developed by Kosuke Imai and Jasjeet S. Sekhon, bounding approaches by Charles F. Manski, and structural approaches such as the marginal treatment effects framework of James Heckman and Edward Vytlacil. Recent methodological work integrates machine learning for heterogeneous treatment effects by researchers at Carnegie Mellon University, Stanford University, and Princeton University, and develops robustness diagnostics in the spirit of the sensitivity analyses advanced by Paul Rosenbaum and Donald Rubin.