AI Regulations Need to Target Data Provenance and Protect Privacy

The speed with which AI is radically transforming global economies has not escaped regulators’ attention.

As fears continue to grow over the potential implications of artificial intelligence (AI) use cases, lawmakers around the world are preparing to establish appropriate legal frameworks to contain the technology.

The Biden administration laid out a formal request for comment on Tuesday (April 11) meant to help shape specific policy recommendations around AI, while China’s internet regulator released its own set of detailed measures the same day to keep AI in check, including mandates that would ensure accuracy and privacy, prevent discrimination and guarantee protection of intellectual property rights.

Read more: Generative AI Tools Center of New Regulation-Innovation Tug of War

Still, observers believe the pace at which new AI systems are being updated and released makes any attempt at effective regulation a real challenge for policymakers, who already find themselves on the back foot.

The Microsoft-backed ChatGPT AI tool from OpenAI grew its user base to 100 million within two months of launch, and the underlying technology is moving just as fast: between November 30, 2022, and March 14, 2023, OpenAI launched two new generations of its disruptive large language model (LLM) AI solution.

The U.S. Commerce Department’s National Telecommunications and Information Administration (NTIA) has opened its request for comment on how to regulate AI to 60 days of public feedback, while China’s internet regulator is likewise holding off on implementation to allow time for public feedback on its proposed measures.

Who knows what the AI landscape may look like by then.

Race to Safely Develop the Industry

“We believe that powerful AI systems should be subject to rigorous safety evaluations,” wrote OpenAI in a recent blog post. “Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”

The Microsoft-backed AI company is widely regarded as a market leader, and its ChatGPT tool has captured the public imagination while transforming traditional industries and spurring the creation of new ones.

Complicating the regulatory reality is the fact that, as reported by PYMNTS, U.S. companies are increasingly worried about the threat of government regulatory overreach and the risk it poses to their businesses, according to a Chamber of Commerce report.

That latest Chamber of Commerce briefing follows one from last month calling on the government to regulate AI.

As reported by PYMNTS, Italy became the first Western nation to go so far as to ban OpenAI’s ChatGPT chatbot after the nation’s Data Protection Authority announced a probe of the AI solution’s alleged breach of General Data Protection Regulation (GDPR) privacy rules, claiming “an absence of any legal basis” justifying the massive collection and storage of the personal information used to “train” the chatbot.

Italy’s move has inspired other privacy regulators in Europe and around the world to take a closer look at ChatGPT and other AI tools.

Canada’s Office of the Privacy Commissioner last Tuesday (April 4) launched its own investigation into OpenAI in response to a complaint about ChatGPT’s use of personal information without consent.

“We need to keep up with — and stay ahead of — fast-moving technological advances,” said the agency’s commissioner, Philippe Dufresne.

Per a Reuters report, China’s payment and clearing industry association, which is governed by the nation’s central bank, issued a warning on Monday (April 10) against using ChatGPT and other AI tools due to the risk of “cross-border data leaks.”

See also: Former Google CEO Says Industry Must Develop AI ‘Guardrails’

Protecting Children, Respecting Privacy, Improving Accuracy

Data is the lifeblood of AI models. The ways in which businesses collect and use data to power their AI solutions should be the central focus of any regulatory framework.

By enacting guardrails around the provenance of data used to train LLMs and other AI models, and by making it obvious when a model is generating synthetic content, whether text, images or even voice, and flagging its source, governments and regulators can protect consumer privacy without hampering private sector innovation and growth.
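What such labeling might look like in practice can be sketched in a few lines of code. The Python sketch below attaches a signed provenance manifest to a piece of generated text; the field names, the hard-coded key and the HMAC-based signature are illustrative assumptions for this sketch, not any established standard (industry efforts such as C2PA tackle the same problem for media files).

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; in practice this would be a managed secret
# held by the model operator, not a hard-coded constant.
SIGNING_KEY = b"example-provenance-key"


def build_provenance_manifest(content: str, model_name: str) -> dict:
    """Attach an illustrative provenance record to AI-generated content.

    The manifest records what produced the content and when, plus a hash
    so downstream consumers can detect tampering.
    """
    manifest = {
        "generator": model_name,  # which model produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "synthetic": True,  # an explicit "this is AI output" flag
    }
    # Sign the manifest so the label itself cannot be silently altered.
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: str, manifest: dict) -> bool:
    """Check that content matches its manifest and the signature is intact."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    content_ok = (
        hashlib.sha256(content.encode("utf-8")).hexdigest()
        == manifest.get("content_sha256")
    )
    return content_ok and hmac.compare_digest(manifest.get("signature", ""), expected_sig)


if __name__ == "__main__":
    text = "This paragraph was produced by a language model."
    record = build_provenance_manifest(text, model_name="example-llm-v1")
    print(json.dumps(record, indent=2))
    print("verified:", verify_manifest(text, record))
```

The design choice worth noting is that the hash binds the label to the exact content while the signature protects the label itself, so stripping or editing either one is detectable downstream.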

Some of the biggest concerns around AI should involve protecting children, respecting privacy, and improving the accuracy of results to avoid “hallucinations” and the scalable spread of misinformation. At the heart of all of these is the appropriate use of data and the assurance of that data’s integrity.

“These are all very data-hungry situations. Data is foundational to building the models, training the AI — the quality and integrity of that data is important,” Michael Haney, head of Cyberbank Digital Core at FinTech platform Galileo, the sister company of Technisys, told PYMNTS during an earlier conversation.
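On the training side, a minimal sketch of what enforcing that quality and integrity could mean at data-ingestion time appears below, assuming a hypothetical record schema with a source identifier and an explicit consent flag; the field names and rules are illustrative, not drawn from any particular vendor’s pipeline.

```python
import hashlib
import json

# Hypothetical schema: each training record must carry its text, a source
# identifier for provenance, and an explicit consent flag.
REQUIRED_FIELDS = {"text", "source", "consent_given"}


def record_fingerprint(record: dict) -> str:
    """Stable hash of a record, usable for deduplication and audit trails."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def validate_training_records(records: list[dict]) -> tuple[list[dict], list[str]]:
    """Keep only records that satisfy the integrity rules.

    Returns the accepted records plus human-readable reasons for rejections.
    """
    accepted, rejections = [], []
    seen = set()
    for i, record in enumerate(records):
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            rejections.append(f"record {i}: missing fields {sorted(missing)}")
            continue
        if not record["consent_given"]:
            rejections.append(f"record {i}: no consent for source {record['source']}")
            continue
        fingerprint = record_fingerprint(record)
        if fingerprint in seen:
            rejections.append(f"record {i}: duplicate of an earlier record")
            continue
        seen.add(fingerprint)
        accepted.append(record)
    return accepted, rejections


if __name__ == "__main__":
    sample = [
        {"text": "usable example", "source": "licensed-corpus", "consent_given": True},
        {"text": "scraped without consent", "source": "web", "consent_given": False},
        {"text": "usable example", "source": "licensed-corpus", "consent_given": True},
    ]
    kept, reasons = validate_training_records(sample)
    print(f"kept {len(kept)} of {len(sample)} records")
    for reason in reasons:
        print("rejected:", reason)
```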

Individual U.S. states, including California, Connecticut, Colorado, Utah and Virginia, have recently passed general data privacy legislation, inspired by similar provisions in the European Union’s GDPR, that takes aim at protecting against AI bias and architectural threats.

As PYMNTS has written, areas like healthcare have the opportunity to serve as standard-bearers for best practices around data privacy protections and data set integrity and provenance as the world undergoes a tectonic shift driven by the technical capabilities of AI applications.