Experimenting with the potentially disruptive use cases of generative artificial intelligence (AI) is relatively easy.
It’s taking those use cases and scaling them up across enterprise workflows in a way that unlocks real business value that represents a much more challenging hill to climb.
As the AI landscape becomes increasingly crowded, it will fall on firms themselves to decide which innovative offerings and foundational large language model (LLM) platforms are the right fit for their own purposes.
This, as Llion Jones – one of the eight Google researchers famous for introducing the foundational architecture of transformer neural networks, a piece of research that supports and informs nearly all of the capabilities of today’s AI models – has officially launched his own AI startup, Sakana AI, out of stealth. And he reportedly doesn’t want Sakana to be “just another company with an LLM.”
But LLMs and generative AI more broadly represent just one component of AI – and many organizations are already leveraging the technology’s various other forms, including the use of predictive analytics and other digital-first forecasting and process management tools.
That is in part why Microsoft, which already has an agreement with OpenAI to license the company’s LLM and generative AI capabilities, recently announced it would start selling a version of Databricks software meant to help companies make their own enterprise AI models from scratch.
Read also: Knowing How AI Works Makes It More Likely to Work for You
Determining AI’s Best Strategic Fit
Organizations are typically faced with three options when it comes to enhancing their workflows and go-forward strategies with AI: they can buy, build or subscribe to a solution.
Building a proprietary enterprise model is likely off the table for most organizations, as the staggering costs of developing AI technology remain a tough pill to swallow for even the industry’s pioneers like OpenAI.
That leaves buying or subscribing on the table, and the appropriate decision for a particular enterprise depends on whether it is ready to integrate AI right now and able to absorb the switching costs, or whether it prefers to try before it buys and views AI as a spot fix or replacement for a legacy workflow rather than a complete overhaul.
Not all applications call for the use of the most cutting-edge and costly LLMs in the market. For example, giving customer service a boost with automated chatbots that do little more than generate simple text in response to queries doesn’t need the computing lift that more complex operations might.
While understanding the differences in performance between LLMs can have an incredible amount of nuance, it is imperative for firms to first establish a business need or strategic goal before attempting to evaluate which LLM or AI bundle will perform best for them.
“As exciting as AI technology is, it’s still new for most, and expertise is hard to come by,” Taylor Lowe, CEO and co-founder of LLM developer platform Metal, told PYMNTS.
When implemented properly and appropriately, AI can transform the way organizations operate by streamlining their operations – and AI capabilities are proving to be increasingly valuable across sectors like the industrial economy and telecommunications industry, where they are helping bridge traditional processes with modern workflows.
More here: Enterprise AI’s Biggest Benefits Take Firms Down a Two-Way Street
Executing with Intent and Focus
Optimizing AI’s impact requires that organizations execute with strategic intent and focus.
“No matter the ways and means in which AI is being harnessed, it’s incumbent on firms to mull how they can enhance value rather than just chase a trend,” Shaunt Sarkissian, founder and CEO of AI-ID, told PYMNTS in May – words that ring even more true today.
That’s because the AI landscape is a rapidly evolving one, where staying on top of advances in capability and ensuring that an initial LLM integration choice remains the best fit in terms of performance and viability is critical.
As PYMNTS reported, around 40% of executives said there is an urgent necessity to adopt generative AI, and the generative AI market is expected to grow to $1.3 trillion by 2032, compared to $40 billion in 2022.
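As a quick sanity check on those figures, the implied compound annual growth rate can be computed directly. This is a minimal sketch; the dollar amounts and the 2022–2032 window are taken from the paragraph above, and the result is an implication of those projections, not an independent forecast.

```python
# Implied compound annual growth rate (CAGR) for the generative AI market
# figures cited above: $40 billion in 2022 growing to $1.3 trillion by 2032.
start_value = 40e9    # 2022 market size, USD
end_value = 1.3e12    # 2032 projected market size, USD
years = 2032 - 2022   # 10-year horizon

# CAGR = (end / start)^(1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 42% per year
```

In other words, the projection assumes the market more than doubles roughly every two years over the decade.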
Already, the world’s biggest tech companies have been increasingly leaning on AI applications to stabilize their businesses.
Areas where enterprises could see the most gain by tapping generative AI capabilities include smart chatbots and virtual assistants, recommendation engines and hyper-personalization, fraud detection and prevention, natural language processing (NLP) for customer service and document discovery, as well as the automation of repetitive and previously manual or paper-based tasks.
And while the stakes for keeping internal operations at the status quo are rising, effectively integrating AI into critical workflows requires an understanding of the unit economics of AI, as well as a holistic view on the resources needed to capture those benefits, and the feasibility of executing the work given existing capabilities.