Austria says the time has come to regulate artificial intelligence (AI) use in weapons systems.
Left unchecked, such technology could create “killer robots,” officials in the Alpine country said as they hosted a conference on the issue, per a Monday (April 29) Reuters report.
“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control,” Austrian Foreign Minister Alexander Schallenberg told the gathering, titled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation.”
“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines.”
The Reuters report notes that years of talks at the United Nations have yielded few results on the matter, and many participants at the conference — which included representatives from both governmental and non-governmental bodies — said time was running short.
“We have already seen AI making selection errors in ways both large and small, from misrecognizing a referee’s bald head as a football, to pedestrian deaths caused by self-driving cars unable to recognize jaywalking,” Jaan Tallinn, a software programmer and tech investor, said in a keynote speech.
“We must be extremely cautious about relying on the accuracy of these systems, whether in the military or civilian sectors.”
As PYMNTS has written, the question of what types of threats AI might pose is a matter of some debate. For example, a recent State Department report details national security threats posed by fast-developing AI and spotlights the urgent need for federal action to avert a crisis.
However, some AI specialists see reason to dismiss gloomy headlines warning that AI could threaten our existence.
“Simply put, the machine needs humans — and will for quite some time,” Shawn Daly, a professor at Niagara University who was not involved in the report, told PYMNTS in an interview.
“We provide not only the infrastructure but also critical guidance the machine can’t do without. As for evil influences utilizing AI to nefarious ends, we’ve managed the nuclear age pretty well, which I find encouraging,” Daly said.
Meanwhile, the U.S. and Great Britain earlier this month forged an agreement aimed at the safe development of AI systems.
“This new partnership will mean a lot more responsibility being put on companies to ensure their products are safe, trustworthy, and ethical,” AI ethics evangelist Andrew Pery of global intelligent automation company ABBYY told PYMNTS.
“The inclination by innovators of disruptive technologies is to release products with a ‘ship first and fix later’ mentality to gain first-mover advantage. For example, while OpenAI is somewhat transparent about the potential risks of ChatGPT, they released it for broad commercial use, its harmful impacts notwithstanding.”