LLMpedia: The first transparent, open encyclopedia generated by LLMs

Sandra Wachter

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: Raw 63 → Dedup 33 → NER 3 → Enqueued 3
1. Extracted: 63
2. After dedup: 33
3. After NER: 3 (rejected 30: not named entities)
4. Enqueued: 3
Sandra Wachter
Name: Sandra Wachter
Nationality: Austrian
Fields: Law, Technology, Artificial intelligence ethics, Data ethics
Workplaces: University of Oxford, Oxford Internet Institute
Alma mater: University of Vienna, London School of Economics
Known for: Research on algorithmic bias, AI accountability, data protection law
Awards: L'Oréal-UNESCO For Women in Science Award (2020)

Sandra Wachter is an Austrian scholar and professor focusing on the intersection of law, technology, and ethics, particularly concerning artificial intelligence and big data. Based at the University of Oxford's Oxford Internet Institute, her research critically examines issues of algorithmic fairness, discrimination, and transparency in automated systems. Wachter is a prominent voice in global policy debates, advocating for legal and technical frameworks that ensure AI accountability and protect fundamental rights in the digital age.

Early life and education

Sandra Wachter completed her foundational legal studies in her home country, earning a degree in law from the University of Vienna. She then pursued advanced studies in the United Kingdom, obtaining a Master of Laws (LL.M.) degree from the London School of Economics and Political Science. Her academic trajectory continued with doctoral research, culminating in a PhD from the University of Vienna, where she specialized in the burgeoning field of data protection and the ethical implications of emerging technologies. This multidisciplinary educational background, spanning Austrian civil law and international comparative law, equipped her with the tools to analyze complex regulatory challenges posed by digital innovation.

Career and research

Sandra Wachter is a Professor of Technology and Regulation at the Oxford Internet Institute, part of the University of Oxford. Her research career is distinguished by groundbreaking work on algorithmic bias and the "black box" problem in machine learning. She has published influential papers on concepts like "explainable AI" and "counterfactual explanations" as methods to audit automated decisions. A significant portion of her work involves analyzing the General Data Protection Regulation (GDPR) and its provisions for automated decision-making, arguing for their robust application. She frequently collaborates with computer scientists, including researchers at the Alan Turing Institute, to bridge the gap between legal theory and technical implementation. Her expertise is regularly sought by bodies such as the European Commission, the UK Parliament, and the World Economic Forum on matters of AI governance and digital ethics.
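The counterfactual explanations mentioned above answer the question "what is the smallest change to my input that would have flipped the decision?" without requiring access to a model's internals. A minimal sketch of that idea, using an invented toy loan-approval model (the weights, features, and search procedure here are illustrative assumptions, not Wachter, Mittelstadt and Russell's actual formulation, which poses it as a general optimization problem):

```python
# Sketch of a counterfactual explanation: find a nearby input that flips
# a classifier's decision. The linear "loan model" below is hypothetical.

def predict(x, w, b):
    """Linear classifier: approve (1) if w·x + b >= 0, else deny (0)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def counterfactual(x, w, b, step=0.01, max_iter=10_000):
    """Walk x along the weight direction (the shortest route to a linear
    decision boundary) in small steps until the prediction flips."""
    x = list(x)
    norm = sum(wi * wi for wi in w) ** 0.5
    direction = [wi / norm for wi in w]
    target = 1 - predict(x, w, b)          # the opposite decision
    sign = 1 if target == 1 else -1
    for _ in range(max_iter):
        if predict(x, w, b) == target:
            return x
        x = [xi + sign * step * di for xi, di in zip(x, direction)]
    raise RuntimeError("no counterfactual found within max_iter steps")

# Toy scenario: features = (income in 10k units, debt ratio); applicant denied.
w, b = [1.0, -2.0], -3.0
applicant = [4.0, 1.0]                     # w·x + b = -1  →  denied (0)
cf = counterfactual(applicant, w, b)
print(predict(cf, w, b))                   # → 1 (the flipped decision)
print([round(v, 2) for v in cf])           # slightly higher income, lower debt
```

The explanation handed to the applicant is simply the difference between `applicant` and `cf`: the minimal movement needed to cross the decision boundary, which stays meaningful even when the model itself is a black box.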

Awards and recognition

In 2020, Sandra Wachter received the prestigious L'Oréal-UNESCO For Women in Science Award for her exceptional contributions to the field. She has been recognized as a leading young scientist by the World Economic Forum, which named her a Young Global Leader. Her scholarly impact is evidenced by numerous invitations to speak at major forums, including the United Nations Internet Governance Forum and the Royal Society. Her research papers have received best paper awards and are widely cited in academic literature spanning law, computer science, and ethics.

Selected publications

* "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation" (2017), in *International Data Privacy Law*.
* "Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR" (2018), with Brent Mittelstadt and Chris Russell, in *Harvard Journal of Law & Technology*.
* "Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law" (2021), in *West Virginia Law Review*.
* "The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law" (2022), in *Tulane Law Review*.

Views and advocacy

Sandra Wachter is a strong advocate for enforceable legal standards that preemptively address harms from algorithmic systems. She argues that existing tools within the GDPR and EU non-discrimination law, such as the right to human intervention, must be strengthened and practically implemented. She is critical of "ethics washing" and emphasizes that ethical AI principles must be backed by binding regulation and effective oversight mechanisms. Wachter frequently engages with media outlets like the BBC and The Guardian to highlight risks associated with predictive policing, recruitment algorithms, and social media platforms. She supports interdisciplinary collaboration to develop technical standards for algorithmic auditing and champions greater diversity in the tech industry to mitigate embedded biases.

Category:Austrian academics
Category:Artificial intelligence researchers
Category:University of Oxford faculty
Category:Women in technology