Microsoft’s chief economist says malicious use of artificial intelligence is a matter of “when,” not “if.”
“I am confident AI will be used by bad actors, and yes, it will cause real damage,” Michael Schwarz said while serving on a World Economic Forum panel in Geneva on Wednesday (May 3). “It can do a lot of damage in the hands of spammers with elections and so on.”
The technology must be regulated, but lawmakers should be cautious before acting, added Schwarz, whose comments were reported by Bloomberg News.
“Once we see real harm, we have to ask ourselves the simple question: ‘Can we regulate that in a way where the good things that will be prevented by this regulation are less important?’” Schwarz said. “The principles should be, the benefits from the regulation to our society should be greater than the cost to our society.”
He’s one of many people calling for greater scrutiny of artificial intelligence (AI) tools as their use has mushroomed since the arrival of ChatGPT.
As PYMNTS reported last month, the rapid development of AI capabilities, coupled with the technology’s attractive industry-agnostic use cases, has challenged regulators and lawmakers around the globe as they race to respond.
In recent weeks, U.S. Senate Majority Leader Chuck Schumer (D-N.Y.) introduced a framework of rules designed to chart a path for the U.S. to regulate the AI industry, while the Biden administration issued a formal request for comment to shape U.S. AI policy.
Meanwhile, China’s internet regulator released detailed measures to keep artificial intelligence in check. And last week, a group of U.S. regulatory agencies issued a joint statement telling companies that AI-driven decisions still must follow the law.
“These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” the joint statement said. “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination and produce other harmful outcomes.”
In addition to the danger of election interference that Schwarz mentioned, advances in AI have also been embraced by scam artists and other cybercriminals.
“People are already using ChatGPT and generative AI to write phishing emails, to create fake personas and synthetic IDs,” Gerhard Oosthuizen, chief technology officer of Entersekt, told PYMNTS earlier this year.
He added that scammers could even ask generative AI tools, “How would I defraud a customer?” and have the engine produce a list, or request “10 ways to run a phishing campaign” and get back a set of usable strategies.