As artificial intelligence (AI) experts urge greater transparency and oversight in Senate testimony, California is enacting laws protecting actors from unauthorized AI cloning, highlighting the growing push for responsible AI governance. Meanwhile, tech companies like Diligent are stepping up to offer compliance tools for upcoming EU AI regulations, underscoring the global nature of the AI regulation challenge.
Margaret Mitchell, an AI researcher and former Google staff scientist, was among the experts who testified before the U.S. Senate Subcommittee on Privacy, Technology and the Law this week, urging increased transparency and regulation in AI development.
In her testimony, Mitchell highlighted several critical gaps in the tech industry’s approach to AI, including the need for a better understanding of how input data affects model outputs, more rigorous evaluation methods and the implementation of due diligence practices.
Mitchell emphasized the importance of transparency throughout the AI development process, saying: “Transparency is crucial for addressing the ways in which AI systems impact people. This is because transparency is a foundational, extrinsic value — a means for other values to be realized.”
She proposed several policy recommendations, including requiring documentation demonstrating due diligence on potential harms before AI deployment, mandating fair treatment across protected groups in system evaluations and creating stronger protections for tech whistleblowers.
Mitchell also called for increased government funding to address key research gaps in AI development, such as privacy protection, provenance tracking and environmental efficiency. Her testimony underscored the growing need for comprehensive AI governance as the technology continues to evolve.
California Gov. Gavin Newsom signed two laws this week aimed at protecting actors and performers from unauthorized use of AI in the entertainment industry.
One law allows performers to exit existing contracts whose vague language could permit studios to use AI to freely clone their voices and likenesses digitally. This measure, set to take effect in 2025, was inspired by concerns raised during last year's Hollywood actors' strike.
The second law prohibits the commercial use of digitally cloned deceased performers without permission from their estates. This legislation addresses issues such as the recent AI-generated comedy special mimicking the late George Carlin’s style without consent.
Both laws received support from the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA).
Supporters of the laws argued that these measures would encourage responsible AI use without hindering innovation. Critics, including the California Chamber of Commerce, contended that the laws may be unenforceable and could lead to legal challenges.
These laws are part of a broader effort by California lawmakers to regulate the AI industry. Newsom has until Sept. 30 to act on other AI-related bills passed this year.
Governance software firm Diligent is betting big on AI regulation compliance.
The company unveiled AI Act Toolkits on Wednesday (Sept. 18), which aim to help organizations navigate the European Union's AI Act. These toolkits assist companies in identifying AI systems, classifying associated risks and ensuring compliance with the EU's regulations.
Diligent’s offering includes three components: an AI Discovery and Risk Classification Toolkit, an AI Act IT Compliance Toolkit and an AI Act Risk Management Toolkit. These tools are designed to support corporate secretaries, legal teams, technology officers and audit teams in implementing ethical AI practices while staying compliant. Delivered within Diligent’s existing platform, the toolkits provide resources for AI literacy, regulatory compliance mapping and risk assessment.
Diligent’s move reflects the growing importance of AI governance in the corporate world as regulatory scrutiny intensifies.