California’s recent vote to regulate artificial intelligence (AI) and personal data use by businesses underscores growing concern over the risks posed by these technologies in the absence of clear federal laws.
“Congress and the administration do not operate in a vacuum,” John Hallmark, U.S. political and legislative leader at consulting firm EY, told PYMNTS in an interview. “Often we see other governments and policymakers stake out policy positions first, like we’ve seen with the EU and California on data privacy. State policymakers also represent constituents, and we are seeing these policymakers respond to those they represent. Where there is no federal law, state laws can sometimes set de facto national laws.”
The California Privacy Protection Agency (CPPA) has taken a significant step toward regulating how businesses use AI and collect personal data. In a close 3-2 vote at a recent board meeting in Oakland, the agency moved forward with new rules to protect California consumers, workers and students.
The new rules aim to set guidelines for how AI and personal data can affect Californians in areas like jobs, housing, insurance, healthcare and education. For example, if a company wants to use AI to predict someone’s emotions or personality during a job interview, the candidate can opt out without facing discrimination.
Under the rules, businesses must notify people before using AI on them and cannot discriminate against those who decline to interact with it. If someone agrees to use an AI service or tool, the business must answer questions about how it uses personal data to make predictions. Employers or third-party contractors must also assess the performance of their AI technology.
The proposed regulations would apply to any business generating more than $25 million in annual revenue or handling the personal data of more than 100,000 California residents. Of the world’s leading 50 AI firms, 35 are based in California.
“This is one to watch because California is the home of big tech companies and a population epicenter,” Bob Rogers, the CEO of Oii.ai, a data science company in San Francisco, told PYMNTS in an interview. “What happens there can set the tone for much of the country.”
Rogers noted that the bill requires developers of large AI models to conduct pre-deployment safety testing and implement cybersecurity protections.
He added, “It also gives the attorney general of California the power to hold a developer accountable if an extremely powerful AI model causes harm.”
California is one of many states scrutinizing artificial intelligence. Washington, New York and Massachusetts have also introduced bills to regulate AI in areas like hiring, facial recognition and algorithmic bias. These state actions are putting pressure on the federal government to move faster in crafting a national plan for AI oversight.
States and local governments are often more agile and responsive than the federal government when it comes to addressing their constituents’ specific needs and concerns. These smaller jurisdictions are closer to the people they serve and can more quickly react to pressing issues that directly impact their communities, Cindi Howson, chief data strategy officer at the business intelligence software company ThoughtSpot, said in an interview with PYMNTS.
Howson pointed to the example of the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which was recently introduced in Tennessee. The legislation aims to protect the intellectual property rights of musicians and vocal artists, many of whom are based in Tennessee and contribute significantly to the state’s vibrant music industry and overall economy.
The ELVIS Act seeks to prevent the unauthorized use of an artist’s name, likeness or voice in a sound recording or audiovisual work. This protection is critical in an era when advanced technologies, such as AI and deepfakes, can manipulate or simulate an artist’s voice or image without their consent.
“But legislation at all levels moves much too slowly to keep pace with the rapid pace of AI development, particularly over the last 12-plus months,” Howson said. “Legislation takes years to develop, pass, and implement. Even though the EU’s AI Act has progressed quicker than some expected, it has not kept up with the hypersonic speed of this technology.”
Even with advancements in California and other states, experts remain skeptical about the prospects for a national AI act. Muddu Sudhakar, CEO of AI company Aisera, told PYMNTS in an interview that he wouldn’t bet on Congress acting anytime soon.
“When it comes to federal regulation, there is typically action when there is a crisis or deep concerns about national security,” he said. “Just look at Congress’s move to ban TikTok. For the most part, this was driven by concerns about China, not necessarily about social media per se.”
EY’s Hallmark said Congress already faces difficulties enacting legislation to govern the data that is fundamental to AI technology.
“The White House is moving faster: President Biden has issued an executive order with over 100 work streams that spring from it related to AI,” he said. “We expect much more to come over the next year.”