Microsoft’s president says Americans should get used to the omnipresence of artificial intelligence (AI).
In fact, Brad Smith told CBS News in an interview Sunday (May 28), the technology isn’t as “mysterious” as some people might believe.
“If you have a Roomba at home, it finds its way around your kitchen using artificial intelligence to learn what to bump into and how to get around it,” Smith said.
He also said the technology needs proper guardrails.
“Something that would ensure not only that these models are developed safely, but they’re deployed in say, large data centers, where they can be protected from cybersecurity, physical security and national security threats,” Smith said.
However, Smith told CBS that he did not agree with the six-month moratorium on developing AI systems more powerful than GPT-4 proposed by critics such as Elon Musk.
“Rather than slow down the pace of technology, which I think is extraordinarily difficult — I don’t think China’s going to jump on that bandwagon,” he said. “Let’s use six months to go faster.”
Smith’s idea: an executive order in which the government commits to purchasing AI services only from companies that establish AI safety protocols.
“The world is moving forward,” Smith said. “Let’s make sure that the United States at least keeps pace with the rest of the world.”
Last week, Smith gave a talk in Washington, D.C., in which he called deepfakes the biggest AI-related threat, particularly when the material is crafted for illicit purposes.
“We’re going to have to address, in particular, what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,” he said.
In that talk, Smith also called for measures to “protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI,” along with licensing of the most critical forms of AI to safeguard physical, cyber and national security.
As noted here last week, AI has seen a surge in investment and media attention since late 2022, following the rise of generative pre-trained transformers (GPT).
“While the world has been discussing potential problems with privacy and copyright created by generative AI, the most serious challenge is likely to be differentiating AI’s creations from the original work of humans,” PYMNTS wrote. “The implications stretch from fraud to something as basic as the value of human creativity.”