US Signs Multinational Guidelines for Designing Secure AI Systems

Few innovations throughout history have progressed as rapidly as generative artificial intelligence (AI). 

The technology has advanced to the point where a growing schism has emerged in the field over whether AI’s developing capabilities, should they ever reach full human-level cognition, can be constrained. 

But lost in the debate around both AI and artificial general intelligence (AGI) is the simple fact that, underneath all the hype and apocalyptic hysteria, the innovation remains no more than a piece of software. 

And just like with other software tools, enterprises looking to integrate it into their workflows — and companies looking to develop and ship the latest, greatest version — need to be aware of best practices for anti-fraud and cyber protection. 

This, as the U.S., U.K. and over a dozen other nations on Sunday (Nov. 26) released a detailed international agreement on how to keep AI safe from rogue actors and hackers, pushing for companies developing AI products and systems to ensure they are “secure by design.”

“Co-sealed by 23 domestic and international cybersecurity organizations, this publication marks a significant step in addressing the intersection of artificial intelligence (AI), cybersecurity, and critical infrastructure,” the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said in a statement. 

Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore are among the other signatories on the non-binding “Guidelines for secure AI system development” agreement. 

See also: Amazon Is Building an LLM Twice the Size of OpenAI’s GPT-4

A Global Effort

Outside of Beijing, very few national governments have put in place regulations or laws dedicated to addressing AI and the risks around it. 

The guidelines agreed to by the U.S. and other nations do not seek to impact areas such as copyright protection around AI system training data, or even how that data is collected, and they also avoid tackling questions like which uses of AI are appropriate. 

Rather, the agreement seeks to treat AI the same as any other software tool, and “create a shared set of values, tactics and practices to help creators and distributors use this powerful technology responsibly as it evolves.”

The guidelines are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. 

They establish a framework for monitoring AI systems and keeping them safe from hackers, along with other best practices around data protection and external vendor vetting. The aim is to ensure that companies designing and using AI can develop and deploy it in a way that keeps customers and the wider public safe from misuse. 
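
The guidelines themselves stay at the policy level, but a practice like external vendor vetting can be made concrete. The following minimal Python sketch, which is illustrative rather than drawn from the agreement, shows one such control: verifying that a third-party model artifact matches a pinned checksum before it is ever loaded into production. The file name and digest here are hypothetical placeholders.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the file in chunks so large model artifacts need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_artifact(path: Path, pinned_digest: str) -> None:
        """Refuse to proceed when a downloaded artifact does not match its pin."""
        actual = sha256_of(path)
        if actual != pinned_digest:
            raise RuntimeError(
                f"Integrity check failed for {path}: expected {pinned_digest}, got {actual}"
            )

    if __name__ == "__main__":
        # Hypothetical artifact name and digest: in a real deployment the digest
        # would come from a vendor's signed manifest, not a hard-coded placeholder.
        verify_artifact(Path("vendor-model-v1.bin"), pinned_digest="0" * 64)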

“The Guidelines apply to all types of AI systems, not just frontier models. We provide suggestions and mitigations that will help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their machine learning AI systems,” CISA wrote. 

Read also: Who Will Power the GenAI Operating System?

A Strategic Vision

The multinational agreement is aimed primarily at providers of AI systems, whether those systems are built on models hosted by the organization itself or make use of external application programming interfaces (APIs). It comes after the White House issued an executive order on AI last month.

Western observers believe that for AI regulation to be effectively implemented in the U.S., there must be an ongoing process of interaction among governments, the private sector and other relevant organizations. By treating AI systems as software infrastructure, the agreement takes a first step toward compartmentalizing and addressing the specific vulnerabilities and potential attack vectors that could open the innovation up to abuse when deployed within an enterprise setting. 

PYMNTS has previously covered how a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.

Shaunt Sarkissian, CEO and founder of AI-ID, told PYMNTS that it is important to compartmentalize an AI system’s functions to restrict its scope and purpose, as well as to develop specific rules and regulations for different use cases. 
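
As a loose illustration of what that compartmentalization could look like in code, the hypothetical Python sketch below gives each deployment of an AI assistant access only to the narrow set of functions its declared use case requires. The use-case names and tools are invented for the example.

    from typing import Any, Callable, Dict

    # Hypothetical per-use-case allowlists: each deployment sees only the
    # narrow set of functions its declared purpose requires.
    ALLOWED_TOOLS: Dict[str, set] = {
        "customer_support": {"lookup_order", "create_ticket"},
        "fraud_review": {"lookup_order", "flag_transaction"},
    }

    def dispatch_tool(
        use_case: str,
        tool_name: str,
        registry: Dict[str, Callable[..., Any]],
        **kwargs: Any,
    ) -> Any:
        """Run a model-requested function only if it is allowlisted for this use case."""
        if tool_name not in ALLOWED_TOOLS.get(use_case, set()):
            raise PermissionError(f"{tool_name!r} is out of scope for {use_case!r}")
        return registry[tool_name](**kwargs)

    # Example: a fraud-review deployment can look up orders but cannot open tickets.
    registry = {
        "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
        "create_ticket": lambda summary: {"ticket": summary},
        "flag_transaction": lambda txn_id: {"flagged": txn_id},
    }
    print(dispatch_tool("fraud_review", "lookup_order", registry, order_id="A123"))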

He added that the evolving dynamics between the government and AI innovators underscore the importance of government agencies setting high-level standards and criteria for AI companies hoping to work with them.
