Innovations transform the economy and change the realities of daily life.
The sign of a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.
That’s why recent calls to slow down or even freeze the development of generative artificial intelligence (AI) models are so surprising.
While no longer the open-source nonprofit its name suggests, OpenAI offers access to its bleeding-edge generative AI products through its API, including both the closed-source models behind ChatGPT and the open-source Whisper model. This gives a rising generation of developers across industries access to innovative AI capabilities.
Emily Glassberg Sands, Stripe’s head of information and data science, told PYMNTS CEO Karen Webster that at Stripe alone, there are currently 14 GPT-4 prototypes in the works that all leverage OpenAI’s technology.
A quick scan of OpenAI’s own site reveals that many leading companies are already leveraging its tools for a wide range of use cases that enhance both consumer-facing and back-office roles and responsibilities.
“It’s easy to use and genuinely solves a ton of problems — it drives tremendous value for everyone,” said Klarna Co-Founder and CEO Sebastian Siemiatkowski in a release announcing his company’s new ChatGPT plug-in.
As reported by PYMNTS, J.P. Morgan CEO Jamie Dimon went so far as to call AI "an absolute necessity" in his annual shareholder letter, revealing that "AI runs throughout our payments processing and money movement systems across the globe."
And while the bank has restricted its global staff’s use of OpenAI’s own AI chatbot, Dimon wrote in his letter that J.P. Morgan has “more than 300 AI use cases in production today,” going on to emphasize the importance of integrating new technology.
So what would happen if legitimate researchers and their investors really did pause AI development?
The most likely scenario is one where illegitimate researchers and profit-chasing companies fill the gap and advance AI’s velocity for their own gain, with few guardrails around data set integrity.
"Technology moves much faster than regulators do," Saule T. Omarova, a professor of law at Cornell University, told PYMNTS in a discussion last fall. The economic reality of advances in modern AI, and concerns about what the technology can do, obscure the real questions: How will AI be used, and who will get to decide?
Large language models (LLMs), the engines behind chat-based AI interfaces, can generate remarkably persuasive output — and they do so by training on vast reams of personal data.
That access is where regulators should focus their scrutiny, as it has implications for both manipulative behavioral advertising and dangerous evolutions of the online scams that have always existed around the murkier edges of the internet.
As reported by PYMNTS, Italy became the first Western nation to ban OpenAI's Microsoft-backed ChatGPT chatbot after the nation's Data Protection Authority announced a probe into the AI solution's alleged breach of General Data Protection Regulation (GDPR) privacy rules, citing "an absence of any legal basis" for the massive collection and storage of personal data used to "train" the chatbot.
"Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar," OpenAI wrote in its GPT-4 technical report.
Italy’s move has inspired other privacy regulators in Europe to take a closer look at ChatGPT and other AI tools.
Regulation can protect consumers while preserving a hands-off approach to growth and innovation by enacting guardrails around the provenance of data used in LLMs: making it obvious when an AI model is generating synthetic content, including text, images and even voice applications, and flagging its source.
As PYMNTS’ Karen Webster wrote at the beginning of the year, AI’s greatest potential is in creating the knowledge base needed to equip the workforce — any worker in any industry — with the tools to deliver a consistent, high-quality level of service. And quickly and at scale.
New jobs are directly enabled by technology. More than 60% of the jobs held in 2018 did not yet exist in 1940, per an MIT paper.
That’s because the trajectory of work creation directly mirrors the path of innovation.
Even as automation increasingly eliminates human labor from certain tasks, technological change inherently leads to new kinds of work — and that is the AI-driven future we should all look forward to, not try to slow down.