The Biden administration is reportedly set to propose new requirements for cloud computing companies in an effort to enhance security and protect sensitive data.
The proposed “know your customer” regulations aim to identify who is accessing U.S. cloud technology to train artificial intelligence (AI) models, U.S. Commerce Secretary Gina Raimondo told Reuters in a report published Friday (Jan. 26).
The proposal could be released as soon as next week, according to the report.
“We use export controls on chips,” Raimondo said, per the report. “Those chips are in American cloud data centers so we also have to think about closing down that avenue for potential malicious activity.”
Amid growing concerns about the security implications of the AI sector, the U.S. government is taking steps to ensure that non-state actors and unauthorized entities do not gain access to American cloud infrastructure, the report said.
The forthcoming regulation aims to identify and monitor customers who use cloud services to train large AI models, per the report. The requirement would give the U.S. government better visibility into who has access to American cloud infrastructure and how it is being used.
Cloud companies would be obligated to identify their largest customers and the AI models those customers are training, according to the report. That information would help the government assess potential security risks and take appropriate action.
The three largest cloud-computing providers see generative AI as a business driver, with Amazon, Microsoft and Alphabet’s Google putting the technology front and center in their sales pitches since OpenAI’s chatbot became a sensation.
These cloud computing giants have also been upping their spending as generative AI takes off, boosting their capital budgets and earmarking large portions of them for generative AI systems that require vast amounts of computing power and data.
At the same time, the Biden administration has been establishing essential standards and guidance for the secure deployment of generative AI.
For example, the Commerce Department’s National Institute of Standards and Technology (NIST) is creating comprehensive guidelines for evaluating AI, facilitating the development of industry standards and establishing testing requirements for AI systems.
The agency has requested input from both AI companies and the public, focusing particularly on generative AI risk management and mitigating the risks associated with AI-generated misinformation.