Supreme Court Justice Roberts Cautions on the Mixed Impact of AI in the Legal Arena
In a thought-provoking year-end report published on Sunday, U.S. Supreme Court Chief Justice John Roberts explored the dual nature of artificial intelligence (AI) within the legal profession. While acknowledging its potential to enhance access to justice and streamline legal processes, Roberts urged “caution and humility” in the face of evolving technology that has both promising benefits and inherent drawbacks.
Roberts, in his 13-page report, adopted an ambivalent stance, emphasizing that AI had the potential to increase access to justice for indigent litigants, revolutionize legal research, and expedite case resolution, all while reducing costs. However, he also highlighted the significant privacy concerns associated with AI and the technology’s current inability to fully replicate human discretion.
“I predict that human judges will be around for a while,” Roberts wrote. “But with equal confidence, I predict that judicial work – particularly at the trial level – will be significantly affected by AI.”
The Chief Justice’s commentary represents his most significant discussion to date on the impact of AI on the legal system. This comes at a time when lower courts grapple with the challenges of adapting to a technology capable of passing the bar exam but prone to generating fictitious content, referred to as “hallucinations.”
Roberts stressed the necessity for caution in deploying AI, referencing instances where AI-generated hallucinations led lawyers to cite non-existent cases in court papers, calling it “always a bad idea.” Although he did not delve into specifics, Roberts mentioned that the phenomenon had made headlines in the past year.
Recent incidents, such as former President Donald Trump’s lawyer Michael Cohen inadvertently including fake case citations in court filings, have raised concerns about the reliability of AI-generated content. In response, a federal appeals court in New Orleans, the 5th U.S. Circuit Court of Appeals, has proposed rules regulating the use of generative AI tools like OpenAI’s ChatGPT by lawyers appearing before it.
The proposed rule aims to ensure transparency and accountability, requiring lawyers to certify that they either did not rely on AI programs to draft briefs or that any text generated by AI underwent human review for accuracy before being included in court filings.
Source: Reuters