Generative artificial intelligence (AI) may be suffering from recency bias.
The technology, which splashed down into the marketplace last November, represents one of the most significant advances in computing capability since the packet-switching networks and communication protocols of the 1960s that gave birth to the internet itself.
A new study by Purdue University is making waves after finding that OpenAI’s ChatGPT answers software programming questions incorrectly 52% of the time, giving it worse odds than a coin flip.
While generative AI isn’t perfect, the technology is new and continually improving.
That’s where the recency bias comes in. After all, when the iPhone was first introduced, its users were consistently plagued by dropped calls, rapid battery drain, a lack of apps (which were then called “widgets”), and other technical issues.
But where the iPhone succeeded was in changing the form factor of what a connected handheld device could offer to users. The iPhone’s touchscreen moved the user experience away from static buttons into a dynamic field of possibility. And it continually addressed faults through downloadable updates.
The shift offers a direct parallel to the impact AI is having on the ways in which users can access, produce and engage with information.
It is relatively easy for enterprises today to experiment with generative AI use cases, but successfully scaling them up in a way that unlocks business value is more challenging.
Despite its newness, when implemented appropriately and strategically, AI can transform the way firms operate by giving them intelligent tools that streamline internal operations while also improving external customer-facing experiences.
Leaders must view the technology’s potential through a pragmatic lens, grounding its capabilities in strategic use cases first, such as leveraging generative AI as a task-focused enterprise enhancement solution, or research co-pilot and creative assistant.
When redesigning their workflows, firms must also retrain both their own employees and their audiences to take advantage of AI’s capabilities.
According to PYMNTS data, around 40% of executives said there is an urgent necessity to adopt generative AI, and the generative AI market is expected to grow to $1.3 trillion by 2032, up from $40 billion in 2022.
As with any transformative integration, there must be an understanding of the resources needed to capture AI’s benefits at scale, and the feasibility of integrating them beyond simply chasing a trend.
Sixty-two percent of executives surveyed by PYMNTS do not think their companies have the AI skills they need for a successful deployment.
“As exciting as AI technology is, it’s still new for most, and expertise is hard to come by,” Taylor Lowe, CEO and co-founder of large language model (LLM) developer platform Metal, told PYMNTS in July.
And it is only by doing the structural work at an enterprise tech stack level that businesses can capture substantial value from AI integrations.
PYMNTS has been tracking how financial firms and payments industry players have already begun experimenting with the kind of LLMs that power ChatGPT and other AI competitors.
Bank of America is using AI to train over 200,000 employees, while SoFi Technologies has integrated Galileo Financial Technologies’ conversational AI engine into its personal finance app. Lending platform Upstart is tapping AI to automate its unsecured loan product, helping scale the product faster than headcount.
AI can also streamline decision-making, and marketplace leaders are tapping into that capability. BigCommerce is integrating Google Cloud’s AI with its Software-as-a-Service (SaaS) eCommerce platform to give merchants the potential to improve their operational efficiencies, enhance the customer experience and drive more sales.
Apple is beefing up the AI capabilities of its iPhones, and Google wants to leverage generative AI’s knack for personalization to supercharge its own smart assistant.
Developing smarter, more responsive and dynamic customer service chatbots is emerging as one answer to the question of what AI’s near-term utility looks like for larger enterprises.
“This didn’t happen overnight,” i2c CEO and Chairman Amir Wain told PYMNTS in an earlier discussion. “There’s been a lot of work going on in AI, and now the product is at a stage where it can be deployed commercially across various applications.”
And as for AI’s relative newness?
“Pace of innovation is the competitive advantage here, not what has been created so far,” Elon Musk has said.
As PYMNTS CEO Karen Webster’s Triple Clock Theory (TCT) explains, an incumbent business operates at a slower speed than a new entrant when introducing a product or service into the same competitive market.
So as AI development continues to move fast and break things, so to speak, firms must remember that any technical innovation they plan to integrate into their workflows should serve a real, defensible and scalable purpose.