Just a few years back, “every company” was going to be a FinTech company.
Now, artificial intelligence (AI) is the future-fit infrastructure integration du jour, as generative solutions enter the marketplace promising to transform business operations with next-generation efficiencies.
The technology's enterprise applications run the gamut but frequently center on providing a single source of truth for organizations looking to corral fragmented and disparate internal information and data.
And as companies grapple with an ongoing digital transformation, accelerated by a sweeping migration to the cloud, forward-thinking leaders are realizing the importance of infusing their firms' business processes and enterprise applications with hyper-intelligent, responsive capabilities.
“As you may have heard, AI is having a very busy year,” Alphabet and Google CEO Sundar Pichai said at his company’s developer conference last week.
That's because the promise, and premise, of AI's corporate functionality derives from the technology's ability to take over the workload and sunk labor of historically technical, manual tasks in a fraction of the time a human employee would need, while simultaneously putting data reporting and analytics directly into the hands of key decision-makers.
See also: If AI Can Replace Employees, It Can Also Replace Vendors
Data is stored across various silos and systems at many companies, which has historically made finding documents, surfacing information and maintaining processes a challenge riddled with speed bumps.
The end goal of AI integrations is to arm employees and leaders with real-time data and AI-generated ideas that can support better business decisions and drive efficiencies that lead to sustainable and healthy growth.
Legacy players risk being marginalized if they don’t adapt to this new landscape, and recent moves by Google, Microsoft and other vendors to harness the technology underscore the rapid evolution and potential of AI integrations.
“There is a lot of opportunity to build new user-facing products, or those that better delight users in an existing experience, using AI,” Emily Glassberg Sands, head of information and data science at Stripe, told PYMNTS.
Generative AI features can, for example, allow firms to ask questions like “what are the key deadlines or milestones” for a project and receive an answer in real time.
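As a rough sketch of that pattern, the Python snippet below poses such a question against a handful of internal project notes through an LLM API. It assumes the OpenAI Python SDK and an API key in the environment; the notes, model name and prompt are invented for illustration rather than drawn from any vendor's product.

```python
# Minimal sketch of the "ask a question, get an answer in real time" pattern.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY environment
# variable; documents, model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

# In practice these snippets would be retrieved from the firm's own document
# stores (project plans, tickets, contracts) rather than hard-coded.
project_notes = [
    "Vendor integration kickoff: May 22.",
    "Security review must be completed before the June 15 pilot.",
    "General availability is targeted for Q3.",
]

question = "What are the key deadlines or milestones for this project?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer only from the provided notes. If the notes do not "
                    "contain the answer, say so."},
        {"role": "user",
         "content": "Notes:\n" + "\n".join(project_notes)
                    + f"\n\nQuestion: {question}"},
    ],
)

print(response.choices[0].message.content)
```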
Echoing that idea, Andrew Gleiser, chief revenue officer at payments provider Aeropay, told PYMNTS that one future-fit use case he sees for generative AI is integrating the solution into merchant payment portals to surface compliant information to customers about their own best clients, including metrics such as average order value, overall volume and purchase cadence.
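To make those metrics concrete, here is a minimal, hypothetical sketch in Python (using pandas) of how a portal might compute average order value, overall volume and purchase cadence per client before a generative layer summarizes them; the column names and sample data are invented.

```python
# Simplified sketch of the per-client metrics described above:
# average order value, overall volume and purchase cadence.
import pandas as pd

orders = pd.DataFrame({
    "client_id": ["A", "A", "A", "B", "B"],
    "order_date": pd.to_datetime(
        ["2023-01-05", "2023-01-19", "2023-02-02", "2023-01-10", "2023-03-10"]),
    "amount": [120.0, 80.0, 100.0, 500.0, 450.0],
})

def cadence_days(dates: pd.Series) -> float:
    """Average number of days between consecutive orders for one client."""
    gaps = dates.sort_values().diff().dropna()
    return gaps.dt.days.mean() if not gaps.empty else float("nan")

metrics = orders.groupby("client_id").agg(
    average_order_value=("amount", "mean"),
    overall_volume=("amount", "sum"),
    order_count=("amount", "size"),
    purchase_cadence_days=("order_date", cadence_days),
)

# These per-client figures are what a generative layer could then summarize
# in plain language inside a merchant payment portal.
print(metrics)
```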
Companies have long been able to index their own data and make it searchable. And previous generations of AI, such as predictive AI, business rules and automated machine learning (ML), have for years been performing tedious, high-value tasks in areas like accounts payable (AP) and accounts receivable (AR), cash flow forecasting, credit scoring, fraud prevention and compliance.
The difference with today's generative AI models is that their capabilities have evolved to better recognize patterns, draw more complex and multimodal conclusions, and create new content using large language models (LLMs) trained on enormous quantities of media. This means the latest generation of AI can undertake open-ended, iterative tasks in real time.
Read more: Generative vs Predictive AI’s Role Across the Future of Payments
"AI" was mentioned more than 200 times during the most recent earnings calls by Meta, Microsoft and Alphabet, and Robinhood Markets separately told its own investors during the company's first-quarter 2023 earnings call that "every company will have to transition into an AI company."
Yet despite the buzz — and the potential — not every tech giant is advocating for a full-speed ahead approach. As reported by PYMNTS, Apple Chief Executive Tim Cook used his company’s earnings call last week to plead with companies to exercise caution as they race to add AI to their products.
At the center of many enterprise concerns around the use of innovative AI solutions are questions around the data and information fed to the AI models, as well as protections around that data’s provenance and security.
LLMs are prone to hallucination, returning information that is at best inaccurate and at worst misleading. If bad data becomes the source of one response, it can propagate further by serving as the informational foundation for future responses the AI is asked to generate.
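One simplified mitigation, sketched below with hypothetical data structures, is to keep a model's output out of the reusable knowledge base unless it cites sources that resolve to vetted documents.

```python
# Illustrative guard against propagating ungrounded answers: only store a
# model's response for reuse if every cited source ID maps back to a vetted
# document. The structures and check here are hypothetical, not a complete
# hallucination defense.
from typing import Dict, List

vetted_sources: Dict[str, str] = {
    "doc-001": "Q1 revenue grew 8% year over year.",
    "doc-002": "The vendor contract renews on July 1.",
}

knowledge_base: List[str] = list(vetted_sources.values())

def accept_answer(answer: str, cited_ids: List[str]) -> bool:
    """Accept an answer for reuse only if it cites at least one source
    and every citation resolves to a known, vetted document."""
    return bool(cited_ids) and all(cid in vetted_sources for cid in cited_ids)

answer = "Revenue grew 8% last quarter."
if accept_answer(answer, cited_ids=["doc-001"]):
    knowledge_base.append(answer)  # grounded: safe to reuse as context
else:
    print("Rejected ungrounded answer; flag for human review.")
```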
In response to these concerns, Microsoft is reportedly planning to sell a privacy-focused version of OpenAI’s ChatGPT chatbot to business customers concerned about regulatory compliance and data leaks.
As enterprise integrations of the novel solution continue to spread, more care will have to be taken around its applications to ensure that its potential for revolutionary growth is grounded in an auditable and valid foundation.