OpenAI’s recent content-licensing deals with Vox and The Atlantic have writers for those publications worried.
As Ars Technica wrote Friday (May 31), the deals will let the artificial intelligence (AI) company use works from the two publications to train its models.
However, some writers aren’t thrilled with the idea, and their unions issued statements of concern last week. The News Guild, which represents The Atlantic’s writers, noted that the publication itself had published an article just days earlier on the ways tech-journalism projects have always gone wrong.
“Atlantic staffers deserve to know why the company leadership believes this time is different,” the guild said.
Meanwhile, Vox Editorial Director Bryan Walsh wrote a piece titled “This article is OpenAI training data,” warning that the growth of AI chatbots and generative AI search products could trigger a sharp drop in search engine traffic to publishers, threatening the livelihoods of content creators and the very character of the internet.
PYMNTS has contacted OpenAI for comment but has not yet gotten a reply.
OpenAI and its partner Microsoft are already facing legal action from authors and media companies, including The New York Times, over their use of creative and journalistic work to train their AI models.
And as PYMNTS has written, these ongoing AI-driven concerns came to a head with a recent open letter published in Billboard by the Artist Rights Alliance (ARA), advocating for the ethical and responsible use of AI inside the recording industry while also championing the rights of musicians, performers and songwriters.
Last week, Jonathan Kanter, the U.S. Justice Department’s top antitrust official, expressed concerns about the need for fair compensation for artists and creators, though he did not suggest the department would be doing more than monitoring the situation for now.
As PYMNTS wrote, this is all happening in the midst of increasing tensions between artists and AI companies, with actor Scarlett Johansson recently accusing OpenAI of using a voice similar to hers for the new GPT-4o chatbot without her consent.
“This incident highlights the ongoing conflict surrounding the use of AI-generated voices and imagery in films, television and video games, which has been a contentious issue in labor negotiations within the entertainment industry,” PYMNTS wrote.
Meanwhile, OpenAI’s recent announcement that it had formed a new safety committee has raised eyebrows among some security experts, as noted here last week.
“The board seems to be entirely OpenAI employees or executives,” John Bambenek, president of cybersecurity company Bambenek Consulting, told PYMNTS. “It’ll be difficult to prevent an echo chamber effect from taking hold that may overlook risks from more advanced models.”
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.