
Potential Shifts in AI Accountability: Legal Experts Weigh in on Future Liability Concerns

Amid a growing dialogue among legal scholars, a sense is emerging that companies may soon face increased legal scrutiny for the actions and statements of their artificial intelligence systems.

As lawsuits mount over whether training AI on copyrighted works constitutes fair use, the legal community is navigating the complex terrain of AI liability. With opinions diverging on how far companies should be held accountable for the behavior of their AI technologies, experts are calling attention to the nuanced challenges that lie ahead.

“Vigilance is required because there are many ways AI can negatively impact a company and lead to costly legal liabilities and reputational damage,” Charles Nerko, a lawyer in Barclay Damon’s data security and technology practice area, told PYMNTS.

“Customer service chatbots that miscommunicate information, such as understating a price or misstating a policy, can bind the company to the false data in ways that are costly or disadvantageous,” he added. “AI systems that make decisions based on biased or discriminatory methods can expose companies to legal challenges and reputational damage. Finally, using a training set that causes AI to generate content that infringes on copyright can lead to legal liability.”

AI Legal Challenges

Nvidia, an AI chipmaker, is among the latest AI companies to face a lawsuit from authors for allegedly using their copyrighted books without permission to train its NeMo AI platform. The authors, whose works were part of a dataset of approximately 196,640 books used to train NeMo, are seeking damages in a copyright infringement class action filed in San Francisco federal court.

An ongoing issue is that AI companies like OpenAI haven’t explicitly asked permission to use site content as training data for their models, Andrew Kirkcaldy, CEO and co-founder of Content Guardian, told PYMNTS.

“With Google Search Generative Experience, this will take it to new waters as Google seeks to answer more of the user’s questions within the SERP [search engine results page], and thus, the sites that provided the answers won’t see any value in creating content,” he said. “Google has struck a deal with Reddit to allow its AI to use Reddit to train its models — a clear indication that they have sought permission.”

The Nvidia case and others like it are causing businesses to worry about the potential legal consequences of using AI. Craig Smith, a partner at the Boston law firm Lando & Anastasi who handles AI cases, told PYMNTS that companies using generative AI tools could be held liable for how they use the systems’ outputs.

“For example, if a generative AI tool provided text or an image that included copyrighted material, the company could be liable for copyright infringement for its use of the material,” he said. “Companies could also face liability for using AI systems that generate false or defamatory information.”

Most of the initial lawsuits relating to AI systems have focused on claims against companies that create AI models, such as OpenAI and Google, Smith noted. These cases often allege that the AI model creators engaged in copyright infringement by training their models with copyrighted materials without the authors’ permission.

“The AI companies have defended themselves by arguing that their use of the copyrighted materials constitutes ‘fair use’ and [is], therefore, not an act of infringement,” he added.

Nerko said the law holds organizations accountable for AI-generated content and decisions in the same way it holds them responsible for decisions made by human employees. This accountability extends to a range of AI outputs, from customer service chatbots to complex decision-making processes in operational contexts.

“Thus, organizations should ensure the reliability and ethical standards of their AI systems in the same way they would supervise their human workforce,” he said.

Whether AI-generated content qualifies for intellectual property protection presents a complex legal challenge. In the United Kingdom, the Copyright, Designs and Patents Act 1988 extends copyright protection to computer-generated works that have no human author. The United States, by contrast, has no explicit rule on the question, and a pivotal ruling is expected from the U.S. Court of Appeals for the D.C. Circuit.

This legal disparity could affect the business models of AI companies and individual content creators alike, stirring debate over whether AI-generated works are original and whether an artist’s style can be protected. According to Ryan Abbott, a law and health sciences professor at the University of Surrey who spoke with PYMNTS in October for the “TechREG Talks” series, these issues raise crucial questions about how copyright law must adapt to the age of AI.

How Businesses Can Protect Themselves

To protect themselves against lawsuits, organizations employing AI should supervise the technology as they would their human workforce, Nerko said. Because the default legal rule usually places liability on the organization using the AI, it is imperative to treat the contracting process as a risk management tool when procuring AI services. Contracts should incentivize AI providers to uphold high standards of accuracy and legality and should provide appropriate recourse when AI systems fail to meet those benchmarks.

“Thorough planning, combined with proactive AI governance, can serve as a critical defense against the legal challenges posed by AI,” he said.

Smith said companies that use AI tools should carefully evaluate the systems to determine the potential risks.

“Transparency will be key,” he explained. “It is important to understand how the AI models were trained and tested to ensure reliable results.”

Companies can protect themselves by only using AI tools that have been adequately tested and verified, he said. In addition, companies should train their employees on how to use AI tools.

“For example, employees should not be permitted to enter confidential information into publicly facing AI systems that use this information for training purposes,” he said. “Nor should employees rely on the output generated from an AI system without first verifying the accuracy of the information.”

