| Loomis v. Wisconsin | |
|---|---|
| Case name | State v. Loomis (captioned Loomis v. Wisconsin on petition for certiorari) |
| Court | Wisconsin Supreme Court |
| Date decided | July 13, 2016 |
| Citations | 2016 WI 68, 371 Wis. 2d 235, 881 N.W.2d 749 |
| Judges | Ann Walsh Bradley (majority); Patience D. Roggensack and Shirley S. Abrahamson (concurring) |
| Prior actions | Conviction and sentencing in the La Crosse County Circuit Court; appeal certified by the Wisconsin Court of Appeals |
| Subsequent actions | Petition for certiorari denied by the United States Supreme Court (2017) |
Loomis v. Wisconsin (captioned State v. Loomis in the Wisconsin courts) was a 2016 decision of the Wisconsin Supreme Court addressing the use of proprietary risk-assessment software in criminal sentencing. The defendant, Eric Loomis, was sentenced in La Crosse County after the circuit court considered his scores from the COMPAS risk-assessment tool, and his appeal raised questions at the intersection of algorithmic tools, due process, and statutory sentencing frameworks. The case drew extensive commentary from legal scholars, technologists, journalists, and civil-liberties advocates.
The contested tool was COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a proprietary actuarial risk-assessment instrument developed by Northpointe (now Equivant) and used in many jurisdictions; the vendor treated its scoring methodology as a trade secret. By 2016, empirical research on such algorithms and sustained attention from civil-rights and policy organizations had made algorithmic risk assessment a prominent public issue.
Public discourse about the case connected it to broader debates over transparency, bias, and accountability in algorithmic decision-making, including legislative proposals in several states and commentary from technologists, ethicists, and legal scholars.
Procedural history: Loomis pleaded guilty in the La Crosse County Circuit Court to two charges arising from a drive-by shooting, and the presentence investigation report prepared for his sentencing included COMPAS risk scores. After the circuit court denied his postconviction motion for resentencing, the Wisconsin Court of Appeals certified the appeal to the Wisconsin Supreme Court, which accepted certification; amicus briefs were filed by civil-rights organizations and academic commentators.
At sentencing, the circuit court weighed Loomis's criminal history and the risk of reoffending reflected in the COMPAS scores alongside the factors required by Wisconsin's sentencing statutes. Because Northpointe treated the COMPAS methodology as a trade secret, neither the court nor the defense could examine how the scores were calculated, and that opacity framed the due-process and evidentiary claims on appeal.
Loomis argued that the circuit court's reliance on the COMPAS assessment violated his due-process rights in three ways: it denied him the right to be sentenced on accurate information, because the tool's proprietary nature prevented him from verifying the score; it denied him an individualized sentence, because COMPAS scores are derived from group data; and it improperly took gender into account. The State responded that the score was only one factor among many and that its consideration fell within the sentencing court's broad discretion.
The issues required the court to balance a defendant's interest in disclosure against the vendor's trade-secret protections. Amici and commentators raised technical questions about actuarial and machine-learning models, including training data, feature selection, and predictive validity, and civil-rights briefs referenced ProPublica's 2016 reporting on racial disparities in algorithmic risk scores.
The Wisconsin Supreme Court affirmed. Writing for the court, Justice Ann Walsh Bradley held that the circuit court's consideration of the COMPAS score did not violate due process, while articulating limits on how such scores may be used and procedures to safeguard defendants' rights.
The court held that a risk score may not be the determinative factor in a sentence and may not be used to decide whether to incarcerate or to set the severity of a sentence. It further required that any presentence report containing a COMPAS assessment include a written advisement cautioning the sentencing judge about the tool's limitations, including its undisclosed proprietary methodology, its reliance on group data rather than individualized prediction, and studies questioning its accuracy across demographic groups. Concurring opinions, including one by Justice Shirley S. Abrahamson, expressed concern about courts' limited understanding of such tools.
The decision reverberated across multiple domains: courts and legislatures in other states examined similar questions about algorithmic risk tools; legal scholars published critiques and proposals for statutory frameworks; vendors such as Equivant and academic researchers pursued work on more explainable models; and civil-rights organizations continued policy advocacy against opaque risk assessment.
Loomis petitioned the United States Supreme Court for certiorari; after inviting the views of the Acting Solicitor General, the Court denied the petition in June 2017, leaving the Wisconsin decision in place. Courts, legislatures, and policy researchers continue to grapple with transparency, trade secrets, and due process, and the decision remains a touchstone in debates over algorithmic risk assessment in sentencing.

Category:Wisconsin case law