| Generalized Partial Credit Model | |
|---|---|
| Name | Generalized Partial Credit Model |
| Field | Psychometrics |
| Introduced | 1992 |
| Developer | Eiji Muraki |
| Type | Item response theory model |
The Generalized Partial Credit Model (GPCM) is an item response theory model for polytomous assessment data, used in psychometrics, educational testing, and health outcomes measurement. It extends the Partial Credit Model by allowing discrimination to vary across items, offering flexibility for test development and adaptive testing. The model is central to modern assessments employed by organizations such as the Educational Testing Service, the National Assessment of Educational Progress, and the Organisation for Economic Co-operation and Development.
The model was proposed by Eiji Muraki in 1992 as a generalization of Masters' Partial Credit Model. It generalizes dichotomous frameworks such as the Rasch model and the two-parameter logistic model, and it relates to polytomous frameworks such as the Graded Response Model and the Rating Scale Model. The original formulation appeared in Applied Psychological Measurement, and continued methodological work has been published in outlets associated with the Psychometric Society and the American Educational Research Association. The model has been implemented in software including the R package mirt, Mplus, Stata, and PARSCALE.
The formulation assigns a latent trait theta to each examinee and, to each item i, a discrimination parameter a_i and category-specific step parameters b_ij. The probability of scoring in category k of item i is expressed as a logistic, divide-by-total function of theta, a_i, and the b_ij, placing the model within the likelihood frameworks developed in journals such as Journal of Educational Measurement and Psychometrika. Mathematically the structure parallels exponential-family formulations.
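The category probabilities just described are conventionally written in the following standard form, where $m_i$ denotes the highest category of item $i$ and the summand for $j = 0$ is defined as zero:

```latex
P_{ik}(\theta) \;=\;
\frac{\exp\!\left(\sum_{j=0}^{k} a_i\,(\theta - b_{ij})\right)}
     {\sum_{c=0}^{m_i} \exp\!\left(\sum_{j=0}^{c} a_i\,(\theta - b_{ij})\right)},
\qquad k = 0, 1, \ldots, m_i .
```

Because the $j = 0$ term is zero, the numerator for the lowest category reduces to $\exp(0) = 1$; setting every $a_i = 1$ recovers the Partial Credit Model.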
Estimation typically uses marginal maximum likelihood or Bayesian methods. Software implementations rely on expectation-maximization algorithms for marginal maximum likelihood and on Markov chain Monte Carlo algorithms for Bayesian estimation. Inference procedures test hypotheses about item functioning using likelihood-ratio tests and the information criteria discussed in statistical journals such as Biometrika and the Journal of the Royal Statistical Society.
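The building block that both EM and MCMC routines evaluate repeatedly is the vector of category probabilities for one item at a given trait value. A minimal sketch, using only the standard library (the helper name `gpcm_probs` is illustrative, not from any named package):

```python
import math

def gpcm_probs(theta, a, b):
    """GPCM category probabilities for one item.

    theta : latent trait value for the examinee
    a     : item discrimination parameter
    b     : step parameters b[1..m]; the item has categories 0..m
    """
    # Cumulative sums of a*(theta - b_j); the sum for category 0 is
    # defined as 0, so its unnormalized weight is exp(0) = 1.
    z = [0.0]
    for bj in b:
        z.append(z[-1] + a * (theta - bj))
    denom = sum(math.exp(zk) for zk in z)
    return [math.exp(zk) / denom for zk in z]
```

For example, `gpcm_probs(0.5, 1.2, [-1.0, 0.0, 1.0])` returns four probabilities (categories 0 through 3) that sum to one; the log-likelihood of a response pattern is the sum of the logs of the selected entries across items.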
Model fit is assessed with item-fit statistics and residual analyses. Diagnostics include studies of differential item functioning, guided by standards from the International Test Commission and, when the model is applied to patient-reported outcomes, guidelines from the World Health Organization. Model comparison employs information criteria such as AIC and BIC, as used in evaluation frameworks for national and international assessment programs like the Programme for International Student Assessment.
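For concreteness, the two information criteria reduce to simple formulas in the maximized log-likelihood; a minimal sketch (helper names are illustrative) that could be used to compare a fitted GPCM against the nested Partial Credit Model, which has one fewer free parameter per item:

```python
import math

def aic(loglik, n_params):
    # Akaike information criterion: smaller is better.
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    # Bayesian information criterion: penalizes each parameter
    # by log(sample size) rather than a constant 2.
    return -2.0 * loglik + n_params * math.log(n_obs)
```

A GPCM with 10 four-category items, for instance, has 10 discrimination and 30 step parameters, while the nested PCM fixes all discriminations; the GPCM is preferred only if its improvement in log-likelihood outweighs the penalty on the 10 extra parameters.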
Applications span large-scale assessments run by the Educational Testing Service, competency frameworks at the Association of American Medical Colleges, licensure examinations administered by the National Council of State Boards of Nursing, and patient-reported outcome measures developed in clinical research. The model is also widely used in computerized adaptive testing systems, and example analyses appear in theses, technical reports, and implementation guides across the measurement literature.
Extensions include multidimensional variants, hierarchical formulations, and models that incorporate covariate effects. Related models include the Graded Response Model, the Partial Credit Model, multidimensional item response theory models, and mixture IRT approaches. Continued methodological work is presented at meetings of the Psychometric Society and in journals associated with the American Psychological Association.
Category:Item response theory