Zoom: We Don’t Use Calls to Train AI

Zoom has clarified its artificial intelligence (AI) policy after it made headlines last week.

As noted here, the videoconferencing company had updated its terms of service in a way that suggested customer video calls could be used to train its AI models.

However, Zoom Chief Product Officer Smita Hashim published an updated blog post Friday (Aug. 11) to make the company’s position clear.

“It’s important to us at Zoom to empower our customers with innovative and secure communication solutions,” Hashim wrote. 

“We’ve updated our terms of service to further confirm that Zoom does not use any of your audio, video, chat, screen-sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models. In addition, we have updated our in-product notices to reflect this.”

The news comes as corporate use of user data to train AI has triggered legal action, like the federal lawsuit filed against OpenAI in June, which accuses the company of training its ChatGPT tool using data stolen from millions of people.

The suit alleges the AI giant carried out a strategy to “secretly harvest massive amounts of personal data from the internet” and says this data included information and conversations, medical data and information about children, used without permission.

More recently, authors and other creatives have sought legal redress against AI companies, as seen last month in the lawsuit filed against OpenAI by writers Paul Tremblay and Mona Awad, who say ChatGPT crafts summaries of their work accurate to a degree that would only be possible if the AI had been trained on their novels.

The rapid advance of AI has led to calls for regulation, which might be easier said than done, Cary Coglianese, the Edward B. Shils Professor of Law and professor of political science at the University of Pennsylvania Law School, told PYMNTS recently.

“Trying to regulate AI is a little bit like trying to regulate air or water,” Coglianese, founding director of the Penn Program on Regulation, said during the “TechReg Talks” series presented by AI-ID.

He explained that regulating AI will be a multifaceted activity that varies according to the different types of algorithms and their uses.

“It’s not one static thing. Regulators — and I do mean that plural, we are going to need multiple regulators — they have to be agile, they have to be flexible, and they have to be vigilant,” he said, adding that “a single piece of legislation” won’t fix the problems connected to AI.