Large Language Models Aren’t People. Let’s Stop Testing Them As If They Were
By: Will Douglas Heaven (MIT Technology Review)
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text – a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it – the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”
Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on…