By Philipp Hacker (Oxford Business Law Blog)
Advanced machine learning (ML) techniques, such as deep neural networks or random forests, are often said to be powerful but opaque. However, a burgeoning field of computer science is devoted to developing ML tools that are interpretable ex ante or at least explainable ex post. This has implications not only for technological progress but also for the law, as we explain in a recent open-access article.
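To make the distinction concrete, the sketch below (our own illustration, not drawn from the article) trains a random forest as the 'opaque' model and then applies permutation importance, a standard post-hoc technique, to explain which features drive its predictions. The dataset and all parameter values are placeholders chosen purely for illustration.

```python
# Minimal sketch of an ex post ('post-hoc') explanation, assuming scikit-learn
# and its built-in breast-cancer dataset; values are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an accurate but hard-to-inspect model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Explain it after the fact: which features matter most on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```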
On the legal side, algorithmic explainability has so far been discussed mainly in data protection law, where a lively debate has erupted over whether the European Union’s General Data Protection Regulation (GDPR) provides for a ‘right to an explanation’. While the obligations flowing from the GDPR in this respect remain quite uncertain, we show that more concrete incentives to adopt explainable ML tools may arise from contract and tort law.
To this end, we conduct two legal case studies, one on medical and one on corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability, and demonstrate it in a technical case study.
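As a rough sketch of the kind of accuracy-explainability trade-off at issue (our own toy example, not the article's technical case study), one can compare a depth-limited decision tree, which can be read end to end as a rule set, with a random forest, which typically scores higher but resists that kind of inspection. The dataset and hyperparameters below are assumptions chosen for illustration only.

```python
# Toy comparison of an interpretable model and an opaque one, assuming
# scikit-learn and its breast-cancer dataset; not the article's own data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 'Interpretable ex ante': a shallow tree that a human can read in full.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)
# More accurate but opaque: an ensemble of hundreds of trees.
opaque = RandomForestClassifier(n_estimators=300, random_state=0)

for name, model in [("shallow tree", interpretable), ("random forest", opaque)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")

# The interpretable model can be printed as human-readable rules.
interpretable.fit(X, y)
print(export_text(interpretable, feature_names=list(data.feature_names)))
```

The point of the sketch is the gap, not the exact numbers: the forest usually edges out the shallow tree on accuracy, but only the tree yields a complete, human-readable decision rule.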