
AI Regulations: EU Probes, Unilever Prepares, Utah Innovates


Artificial intelligence (AI) regulation takes center stage as the EU investigates X’s (formerly Twitter) data practices, Unilever adopts a proactive stance and Utah targets mental health chatbots.

These moves signal a shifting landscape in AI governance across the tech, business and healthcare sectors.

EU Regulator Probes X’s Covert AI Data Collection

The Irish Data Protection Commission (DPC) has reportedly launched an inquiry into X’s latest privacy setting, which caught European regulators off guard. The social media giant, formerly known as Twitter, quietly introduced the new setting, stirring the data protection community.

At the heart of the controversy is a default-enabled option allowing X to harvest users’ public posts for Grok, an AI system developed by xAI — another venture by Elon Musk. This move, implemented without fanfare, potentially affects data on millions of EU citizens.

Users face significant hurdles when opting out of this data collection scheme. Currently, the option to disable the setting is only available through X’s web interface, leaving mobile users without recourse. While X has promised a mobile solution in the future, critics argue that this vague timeline is inadequate and potentially violates EU data protection principles.

The DPC, X’s primary EU regulator, expressed dismay at the sudden rollout and stressed the lack of proper consultation. This stealth approach has raised serious questions about user consent and data protection practices.

This latest controversy adds to X’s mounting regulatory challenges in the EU. The company is reportedly already the subject of at least five other investigations related to data protection violations. Each case carries the potential for substantial fines, which could significantly impact X’s financial standing.

As the regulatory storm brews, xAI continues its aggressive expansion. Having recently secured a staggering $6 billion in funding, the company is constructing what it claims will be a revolutionary AI training supercomputer. This digital powerhouse, boasting 100,000 GPUs, aims to push the boundaries of AI capabilities.

Unilever Rolls Out AI Governance Program

Consumer goods giant Unilever has implemented an AI assurance process, positioning itself ahead of impending European Union regulations on AI use.

At the core of Unilever’s approach is a cross-functional team of experts who scrutinize potential AI projects before they are greenlit. This team, which includes external partners such as Holistic AI, assesses proposals for potential risks and develops mitigation strategies.

Unilever’s Chief Data Officer Andy Hill wrote on the company’s website that the AI assurance process has become integral to Unilever’s operations, with the company recently surpassing 150 “projects assured.” Unilever currently employs over 500 AI systems worldwide, spanning areas from research and development to inventory management and marketing.

“We see potential in the use of AI to drive productivity, creativity, and growth at Unilever,” Hill wrote. He emphasized the importance of responsible implementation as AI deployments expand within the company.

The program’s development comes as the EU prepares to enforce the AI Act, widely regarded as the world’s first comprehensive AI legislation. Unilever’s Chief Privacy Officer Christine Lee noted that regulatory compliance is a key component of the firm’s framework, with the company actively monitoring and addressing upcoming legal developments that may impact its operations.

Unilever’s initiative addresses various AI-related concerns, including intellectual property rights, data privacy, transparency and potential bias. The company reports that its approach is designed to be adaptable, allowing it to keep pace with evolving regulations in different jurisdictions.

As global discussions on AI governance intensify, Unilever executives said they are committed to aligning with legal developments affecting their businesses and brands. This proactive stance, they said, enables the company to pursue digital innovation while maintaining responsible AI use and proper data governance.

Utah Targets AI Mental Health Chatbots

Utah’s newly minted Office of Artificial Intelligence is taking aim at mental health chatbots, marking a first in state-level AI regulation.

The office plans to introduce legislation by year-end to oversee AI use in mental health services, Fierce Healthcare reported. This initiative focuses on AI chatbots employed in licensed medical practice, addressing concerns over reliability and potential legal pitfalls. Key issues include information accuracy and the risk of unlicensed medical practice.

The office is collaborating with a diverse group of stakeholders, from local health providers to national mental health companies and startups, to shape the proposed regulations. Its primary concern is the impact of AI chatbots on patient well-being and the integrity of mental health care delivery.

Utah’s move could spark a domino effect, potentially leading to a patchwork of state regulations and spurring federal action on AI governance in healthcare. As the first state to establish a permanent AI regulatory body, Utah is setting a precedent in navigating the complex intersection of AI technology and mental health policy.