From Washington to Brussels, policymakers are scrambling to rein in the rapid advance of artificial intelligence (AI). As the Federal Communications Commission (FCC) targets AI-powered robocalls and the European Union's landmark AI Act looms, experts warn that smart regulation is crucial to harnessing AI's potential while guarding against its perils.
FCC Chairwoman Jessica Rosenworcel on Tuesday (July 16) proposed new rules requiring the disclosure of AI use in robocalls to protect consumers from potential scams and misinformation.
The proposal comes as AI tools are increasingly being leveraged for deceptive practices in telecommunications. According to the FCC, fraudsters have been using AI-generated voice cloning and other advanced techniques to create more convincing and potentially harmful robocalls.
This move is part of a broader effort by the Commission to address the challenges posed by rapidly evolving AI technologies in the communications sector, including recent actions against deepfake voice calls used for election misinformation and proposed fines for carriers involved in such practices.
“Bad actors are already using AI technology in robocalls to mislead consumers and misinform the public,” Rosenworcel said in a news release. “That’s why we want to put in place rules that empower consumers to avoid this junk and make informed decisions.”
The proposed rules would define AI-generated calls and mandate disclosure of AI use both when obtaining consumer consent and during each call. This move aims to help consumers identify and avoid potentially fraudulent calls.
The proposal also seeks to safeguard positive applications of AI, particularly in assisting people with disabilities in using telephone networks. Additionally, it calls for comments on technologies that can alert consumers to unwanted AI-generated calls and texts.
This initiative follows a series of actions by the FCC to combat AI-related scams, including fines for illegal robocalls using deepfake technology and requests to carriers about their preventive measures against fraudulent AI-generated political calls.
The full Commission will vote on the proposal in August.
The European Union’s groundbreaking Artificial Intelligence Act, set to take effect Aug. 1, is reportedly poised to hit Chinese tech companies’ wallets.
Industry experts told The South China Morning Post that the comprehensive regulations will significantly boost assessment and compliance costs for firms operating in the bloc’s 27 member states.
“Compliance and assessment requirements will increase R&D and testing costs by around 20 to 40 percent,” said Patrick Tu, CEO of Hong Kong-based Dayta AI. The retail analytics provider said it anticipates higher spending on “additional documentation, audits and technological measures.”
The AI Act, approved by EU lawmakers earlier this year, aims to safeguard fundamental rights and boost innovation while establishing Europe as an AI leader. However, some Chinese firms fear overregulation could stifle creativity.
Tanguy Van Overstraeten, head of technology, media and telecommunications at law firm Linklaters in Brussels, defended the legislation: “What the EU is trying to do with the AI Act is to create an environment of trust.”
The rules reflect a global scramble to regulate AI, spurred by the explosive growth of generative AI services like ChatGPT. As the EU sets the pace, Chinese tech companies must adapt or risk being locked out of a crucial market.
Tech giants and policy watchers are singing from the same hymnal: AI needs a regulatory touch. At a Brookings Institution discussion on Monday (July 15), industry leaders and academics hammered home the need for stronger AI guardrails.
“AI is too important not to regulate and too important not to regulate well,” said David Weller, Google’s senior policy guru, echoing his CEO’s mantra. Weller stressed the need for AI-savvy workforce development and anti-discrimination safeguards in AI-driven hiring.
Brahima Coulibaly, Brookings’ global economy chief, raised the specter of AI-fueled inequality. “Increasing automation of low- and mid-skilled jobs has shifted labor demand toward higher skills,” he cautioned, backing stronger government oversight.
Not all outlooks were gloomy. World Bank Vice President Victoria Kwakwa sees AI as a boon for Africa, potentially boosting productivity and reaching underserved populations.
But Hilary Allen, an American University law professor, struck a cautionary note. Likening AI to “applied statistics,” she warned of its potential to overlook rare events, drawing parallels to pre-2008 financial crisis blind spots.