Artificial intelligence (AI) is changing the world, or at least the way the information economy operates.
And the way the information economy operates is increasingly what defines business success and failure across today’s landscape.
But as a new paper published in the research journal Nature shows, the transformative tech still has a long way to go before it gets things right — at least all on its own.
The researchers behind the paper aimed to explore the capabilities of OpenAI’s ChatGPT as a research co-pilot by creating an “autonomous data-to-paper system.”
That’s right, they wrote a paper about writing a paper written by AI.
What the team found was that there are “many hurdles to overcome before the [AI] tool can be truly helpful [in writing an entire scientific research paper from scratch],” and the step-by-step process the academics took holds important clues for firms looking to enhance their own workflows by integrating AI.
Sapphire Ventures on Tuesday (July 11) committed more than $1 billion to investing in the next generation of AI-powered enterprise technology, a sign that AI's snowball effect of commercialization is just getting started.
That AI has gotten this far, this quickly, is impressive enough.
It was just last November that the technology entered common parlance and dominated the first wave of headlines with its human-like capabilities, and it has been not even six months since GPT-4 launched.
Read more: Moving Past the Shiny Newness of Generative AI
AI's ability to move beyond crunching numbers and reacting linearly to manually entered information is what sets it apart from traditional computing operations. By drawing on transformer models (the GPT in ChatGPT stands for generative pretrained transformer), the tech is able to churn through billions, if not trillions, of data elements and near-instantaneously weigh the complex relationships between them before generating a relevant response.
The response generated is shaped by the AI model's own training, which determines how heavily to weight each of the data elements under consideration.
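To make that concrete, here is a minimal, illustrative sketch in Python (toy numbers, not any production model) of the weighting step at the heart of a transformer: each input element is scored against every other element, and those scores become the weights that shape the output.

```python
# Minimal, illustrative sketch (not any production model) of how a
# transformer-style layer weights input elements against one another.
# The vectors and dimensions are toy values chosen for clarity.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Single-head self-attention over a (seq_len, dim) array of token vectors."""
    dim = tokens.shape[-1]
    # In a real model, queries/keys/values come from learned projections;
    # here we use the raw vectors to keep the example self-contained.
    scores = tokens @ tokens.T / np.sqrt(dim)   # pairwise relevance scores
    weights = softmax(scores, axis=-1)          # how heavily each element counts
    return weights @ tokens                     # each output mixes all inputs by weight

# Three toy "data elements" (e.g., token embeddings)
tokens = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(self_attention(tokens))
```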
This training, as well as the quality and type of data underpinning the training, is where firms need to pay the most attention.
Read more: Growing Enterprise AI Adoption Shows Integration Friction Is No Fiction
As the researchers found, AI works best when the tool is reprompted (in essence, corrected and steered in real time) whenever it produces erroneous or even completely fabricated results.
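In practice, that reprompting tends to be wrapped in a simple verify-and-retry loop. The sketch below is a hypothetical illustration of that pattern; ask_model and looks_grounded are placeholder names standing in for whatever LLM client and validation checks (citation lookups, schema checks, human review) a given firm actually uses.

```python
# Hypothetical sketch of the "reprompt on error" pattern described above.
# ask_model() and looks_grounded() are placeholders, not real library calls.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("swap in your LLM client call here")

def looks_grounded(answer: str) -> bool:
    raise NotImplementedError("swap in citation/schema checks or human review")

def answer_with_reprompts(question: str, max_attempts: int = 3) -> str:
    prompt = question
    for attempt in range(max_attempts):
        answer = ask_model(prompt)
        if looks_grounded(answer):
            return answer
        # Feed the failure back to the model and ask it to correct itself.
        prompt = (
            f"{question}\n\nYour previous answer could not be verified:\n{answer}\n"
            "Revise it, cite only sources you can verify, and say 'unknown' if unsure."
        )
    # Escalate rather than shipping an unverified answer.
    raise RuntimeError("No verifiable answer produced; route to human review.")
```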
“As exciting as AI technology is, it’s still new for most, and expertise is hard to come by,” Taylor Lowe, CEO and co-founder of large language model (LLM) developer platform Metal, told PYMNTS.
PYMNTS' own research in the July 2023 report, "Understanding the Future of Generative AI," a collaboration with AI-ID, revealed that LLMs (the neural networks behind OpenAI's ChatGPT and Google's LaMDA) could impact 40% of all working hours.
That’s why companies need to consider, right now, how they can employ generative AI in jobs to ensure it is a value-add. Once they have a better idea of how to employ the technology efficiently, they will be able to adapt and redesign jobs around that.
“You don’t want to boil the ocean and try to solve for everything at once,” newly appointed Corcentric CEO Matt Clark told PYMNTS. “Firms need to look at [transforming their existing processes] as a kind of crawl-walk-run mentality to get to where they need to go.”
See also: 10 Insiders on Generative AI’s Impact Across the Enterprise
That’s because, as easy as it is to use, AI remains, at least to date, far from a plug-and-play solution.
“No matter the ways and means in which AI is being harnessed, it’s incumbent on firms to mull how they can enhance value rather than just chase a trend,” Shaunt Sarkissian, founder and CEO of AI-ID, told PYMNTS in May.
As PYMNTS wrote earlier, those who see LLMs as a replacement for human labor rather than as a labor-saving device could risk making themselves vulnerable to savvier competition.
New enterprise technologies like AI that look set to be fully commercialized across the business ecosystem one day need to be both pointed toward a definitive business goal with a clearly auditable process and highly interoperable across operational workflows.
Firms can — and should — handhold the technology to get what they want out of it. For now, there is still value in keeping a human in the loop to better proactively avoid the tech’s tendency to hallucinate and fill gaps by making things up.
Beyond that, the cost of doing AI business is something for firms to consider. Maybe integrating AI into accounting and compliance operations to intelligently automate formerly laborious tasks is worth the energy and computing costs, while something like using AI avatars to greet clients might not be (or, depending on the firm, maybe it is the reverse).
Firms that choose the best use case of enterprise AI for their business operations today will be the enterprises that win tomorrow.