LLMpedia: The first transparent, open encyclopedia generated by LLMs

COMPAS

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: LinkedIn Hop 4
Expansion funnel: Raw 48 → Dedup 7 → NER 6 → Enqueued 3
1. Extracted: 48
2. After dedup: 7
3. After NER: 6 (rejected: 1; not a named entity: 1)
4. Enqueued: 3 (similarity rejected: 3)
COMPAS
Name: COMPAS
Type: Risk assessment tool
Developer: Northpointe / Equivant
Initial release: 1998
Programming language: Proprietary
Country: United States


COMPAS is a proprietary risk assessment instrument used to estimate recidivism risk for individuals in criminal justice settings. It has been deployed by courts, probation agencies, and corrections departments across the United States and has attracted attention from journalists, academics, civil rights advocates, and policymakers for its methodology, predictive performance, and social implications. The system's use intersects with high-profile criminal cases, policy debates, and litigation involving algorithmic decision-making, bias, and transparency.

Overview

COMPAS was created to predict the likelihood that a person will commit a new offense or fail to appear in court, producing scores intended to inform sentencing, bail, parole, and supervision decisions. Actors involved in its deployment include state agencies such as the Vermont Department of Corrections, county courts such as those in Broward County, Florida, federal agencies, and private vendors including Equivant and Northpointe. Prominent coverage and critique have come from investigative outlets including ProPublica, from scholars at institutions such as Harvard University and Carnegie Mellon University, and from legal advocates at organizations like the American Civil Liberties Union. Debates over COMPAS have drawn in prosecutors, defense attorneys, and defendants whose sentences attracted scrutiny in media and academic analyses.

Development and algorithmic design

COMPAS originated in the late 1990s and evolved through iterations by private firms, with design input from researchers, consultants, and practitioners in corrections and sentencing. Its development has been compared with historical assessment tools studied by scholars at the University of Pennsylvania and practitioners formerly at the RAND Corporation, and with risk instruments developed in jurisdictions such as New York State and California. The algorithm is reported to combine questionnaire items, criminal history factors, and demographic proxies to produce risk classifications and subscale scores. Methodological discussions have engaged statisticians and computer scientists at institutions including the Massachusetts Institute of Technology, Stanford University, and the University of Chicago over modeling choices such as variable selection, weighting schemes, calibration, and classification thresholds. Proprietary aspects of the software architecture and training data have been central to controversies over disclosure and independent validation, raising questions for bodies such as the National Institute of Justice.
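Because COMPAS's actual model is proprietary, its internals cannot be reproduced here, but the general shape of a questionnaire-based risk instrument (weighted items combined into a latent score, transformed into a probability, then binned into a decile and a coarse risk band, as COMPAS-style tools report) can be sketched as follows. All feature names, weights, and cut points below are hypothetical, chosen only for illustration.

```python
# Illustrative sketch only: the item names, coefficients, and thresholds
# are hypothetical, NOT COMPAS's real (proprietary) parameters.
import math

# Hypothetical item weights for a logistic risk model.
WEIGHTS = {
    "prior_arrests": 0.30,
    "age_at_first_arrest": -0.05,
    "current_charge_severity": 0.40,
    "substance_use_score": 0.25,
}
INTERCEPT = -2.0

def risk_probability(answers: dict) -> float:
    """Logistic transform of a weighted sum of questionnaire items."""
    z = INTERCEPT + sum(WEIGHTS[k] * answers[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def decile_score(p: float) -> int:
    """Map a probability onto a 1-10 decile score."""
    return min(10, max(1, int(p * 10) + 1))

def risk_band(decile: int) -> str:
    """Coarse classification using hypothetical cut points."""
    if decile <= 4:
        return "low"
    if decile <= 7:
        return "medium"
    return "high"

# A made-up respondent.
answers = {
    "prior_arrests": 3,
    "age_at_first_arrest": 19,
    "current_charge_severity": 2,
    "substance_use_score": 1,
}
p = risk_probability(answers)
print(decile_score(p), risk_band(decile_score(p)))
```

The modeling choices debated in the literature correspond directly to pieces of this sketch: which items enter `WEIGHTS` (variable selection), their magnitudes (weighting), whether `risk_probability` matches observed reoffense rates (calibration), and where the band boundaries sit (classification thresholds).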

Use in criminal justice system

Courts and agencies have integrated COMPAS output into pretrial release decisions, sentencing recommendations, parole hearings, and supervision planning. Users have included sentencing judges in jurisdictions such as Wisconsin, public defenders in offices like those in Cook County, Illinois, and parole boards in states including Florida and Ohio. Implementation has involved vendor-provided training for staff from entities such as the Bureau of Justice Assistance and county sheriff's offices. High-profile prosecutions and appeals in which risk assessments were cited drew attention from legal scholars at Yale Law School, practitioners at the Innocence Project, and policymakers in state legislatures such as those of California and New Jersey. Empirical evaluations often cross-reference recidivism statistics compiled by agencies like the Bureau of Justice Statistics.

Transparency, validation, and critiques

Transparency advocates and researchers have critiqued the proprietary nature of the scoring algorithm and limited access to source code and underlying datasets. Analyses by investigative journalists at ProPublica and academic teams at Harvard John A. Paulson School of Engineering and Applied Sciences and Carnegie Mellon University focused on issues including predictive parity, false positive and false negative rates, and disparate impacts across demographic groups such as race and age. Civil rights organizations including the American Civil Liberties Union and advocacy groups like the Brennan Center for Justice raised concerns about fairness, due process, and potential violations of constitutional protections recognized by courts such as the United States Court of Appeals for the Seventh Circuit. Methodological critiques have also come from statisticians affiliated with Columbia University, Princeton University, and Duke University, debating the appropriate metrics for fairness, trade-offs between calibration and equalized error rates, and the role of confounding variables.
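The metric disagreements described above can be made concrete with a small synthetic example. When two groups have different base rates of reoffense, a score can satisfy predictive parity (equal precision of the "high risk" label across groups) while still producing unequal false positive rates, which is the ProPublica/Northpointe dispute in miniature. The data below is fabricated purely to exhibit that tension; it does not describe any real population.

```python
# Synthetic illustration of fairness-metric trade-offs; not real data.
def rates(records):
    """Compute FPR, FNR, and PPV from (predicted_high, reoffended) pairs."""
    tp = sum(1 for p, y in records if p and y)
    fp = sum(1 for p, y in records if p and not y)
    fn = sum(1 for p, y in records if not p and y)
    tn = sum(1 for p, y in records if not p and not y)
    return {
        "FPR": fp / (fp + tn),   # non-reoffenders flagged high risk
        "FNR": fn / (fn + tp),   # reoffenders scored low risk
        "PPV": tp / (tp + fp),   # precision of a "high risk" label
    }

# Two fabricated groups with different base rates of reoffense
# (40% in group A, 20% in group B).
group_a = ([(True, True)] * 30 + [(True, False)] * 20 +
           [(False, True)] * 10 + [(False, False)] * 40)
group_b = ([(True, True)] * 15 + [(True, False)] * 10 +
           [(False, True)] * 5 + [(False, False)] * 70)

for name, g in [("A", group_a), ("B", group_b)]:
    print(name, {k: round(v, 3) for k, v in rates(g).items()})
```

Here both groups see the same PPV (0.6), so the score is "fair" by predictive parity, yet group A's false positive rate is roughly 0.33 against group B's 0.125. With unequal base rates, equalizing one family of metrics generally forces the other apart, which is why statisticians in this debate argue over which metric should govern.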

Legal challenges and legislative reactions have shaped COMPAS's role in policy. Notable litigation referencing or invoking risk assessments has proceeded through courts including the United States District Court for the Northern District of Illinois and appellate panels in circuits such as the Eighth and Eleventh Circuit Courts of Appeals. Policymakers in states like New Jersey, Iowa, and Kentucky considered statutes or guidelines addressing automated decision tools, and executive branches in states including New York issued review directives. Regulatory bodies and commissions, such as panels convened by the National Academy of Sciences and oversight hearings in the United States Congress, have examined transparency, vendor certification, and safeguards. Settlements, court opinions, and administrative rules have influenced disclosure practices, vendor contracts, and the use of algorithmic evidence in sentencing.

Alternatives and reform efforts

Alternatives to proprietary risk instruments have included open-source tools developed by academic teams at Carnegie Mellon University, policy labs at New York University, and civic technology groups such as Mozilla-affiliated initiatives. Reform proposals promoted by organizations such as the Brennan Center for Justice and the ACLU, and by scholars at the Harvard Kennedy School, advocate transparency mandates, independent validation protocols, algorithmic impact assessments, and enhanced legal protections for defendants. Pilot programs in jurisdictions including Vermont, Washington State, and Massachusetts have tested tailored assessment instruments, recidivism-reduction interventions coordinated with agencies such as the Massachusetts Department of Correction, and alternatives to incarceration promoted by reform-minded policymakers and nonprofit providers such as the Sentencing Project.

Category:Criminal justice