In October 1950, British mathematician Alan Turing proposed what he called the Imitation Game.
The test, introduced in Turing’s article “Computing Machinery and Intelligence,” first published in Mind, a quarterly review of psychology and philosophy, would later become known as the “Turing Test”: a widely popularized measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Ever since then, the concept of an artificial intelligence (AI) system whose intelligence surpasses our own has captured the public imagination. And now, tech companies such as OpenAI, Anthropic, Alphabet and Microsoft have publicly stated that they are trying to build such a system.
OpenAI even included the goal of developing artificial general intelligence (AGI) in its founding charter.
But anthropomorphizing AI systems, or attributing human-like characteristics to them, poses real dangers. For many business use cases, doing so can be a fatal distraction from the very real utility the technology offers.
After all, AI is not nearly as mysterious as people think. AI models are computerized systems that deploy sophisticated probabilistic algorithms at lightning speed to solve complex problems. They are trained to imitate and built to generate. They do not think, believe or emote.
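Conceptually, that generation step can be reduced to a few lines of code. The sketch below is a toy illustration, not how any particular product is built: the vocabulary and probabilities are invented placeholders, standing in for distributions a real model learns from training data across tens of thousands of tokens and billions of parameters.

```python
import random

# Toy next-token model: each candidate word gets a probability, and the
# "model" generates by sampling from that distribution. The vocabulary
# and probabilities here are invented for illustration only.
next_token_probs = {
    "payments": 0.45,
    "models": 0.25,
    "systems": 0.20,
    "magic": 0.10,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("AI is transforming", sample_next_token(next_token_probs))
```

Run repeatedly, the sampling step produces different continuations, which is why the same prompt can yield different answers from the same model.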
Assuming that AI models and products possess human-like understanding, emotions or reasoning abilities can leave firms in the lurch when they try to push the technology beyond what it is currently capable of.
Education and communication about the nature of AI systems can help manage expectations and ensure responsible use. Within an enterprise environment, deploying AI systems with a clear-eyed approach to quantifiable goals and expected return on investment (ROI) is key to success.
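That discipline can start with back-of-the-envelope arithmetic before any deployment. The figures in the sketch below are invented placeholders, not benchmarks:

```python
# Hypothetical ROI check for a proposed AI deployment.
# Every figure below is an invented placeholder, not a benchmark.
annual_labor_cost_saved = 120_000  # e.g. hours of manual invoice review
annual_new_revenue = 30_000        # e.g. faster quote turnaround
annual_ai_cost = 50_000            # licenses, compute, human oversight

benefit = annual_labor_cost_saved + annual_new_revenue
roi = (benefit - annual_ai_cost) / annual_ai_cost
print(f"Expected annual ROI: {roi:.0%}")  # (150,000 - 50,000) / 50,000 = 200%
```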
Read more: Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT
Researchers and government agencies have been working on forms of AI since the 1940s and 1950s, but the availability of big data to train AI models, along with advances in hardware such as AI chips and high-performance computing, has driven major progress in the field over the last few years.
The emergence of generative AI has led to conversational AI interfaces that use billions of data points and advanced probabilistic algorithms to mimic human writing and communication styles.
Chatbots like OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and others can carry on conversations, write code and even generate images as if they were human. But they are not, and their capabilities are much more limited.
“Imagination and new creation is not something AI is capable of … it is just mimicking what it has learned,” Ofir Krakowski, CEO and co-founder at Deepdub, told PYMNTS at the start of the month.
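That mimicry can be made concrete with a toy bigram model, sketched below over invented training text. It can only ever emit word sequences it has already observed, which is the sense in which such systems imitate rather than imagine; a real LLM generalizes far more flexibly, but the principle of reproducing learned patterns is the same.

```python
import random
from collections import defaultdict

# Invented training text, used for illustration only.
training_text = (
    "ai models are trained to imitate and built to generate "
    "ai models do not think and do not emote"
)

# Learn which words follow which word in the training text.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def mimic(start, length=8):
    """Generate text by repeatedly sampling an observed continuation."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # the model has never seen anything after this word
            break
        out.append(random.choice(options))
    return " ".join(out)

print(mimic("ai"))  # e.g. "ai models do not think and do not emote"
```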
That is not to say AI isn’t capable. Rather, it has never been more important to fully grasp the limitations, decision-making processes and potential biases inherent in AI in order to deploy and integrate intelligent software effectively.
For businesses to get the most out of AI, they need to understand how it works and be clear about their desired outcome. This holds true across all areas where AI is applied.
For many tasks, particularly repetitive ones or those that involve parsing large amounts of data, AI can be a much more economically viable solution than human labor.
Already, AI is being deployed across areas like materials science and drug discovery to augment the abilities of human researchers.
AI-powered solutions can be particularly valuable within finance and accounting offices, assisting employees with invoice processing, generating computer code, creating preliminary financial forecasts and budgets, performing audits, streamlining business correspondence, and even brainstorming and researching tax and compliance guidelines.
See also: Demystifying AI’s Capabilities for Use in Payments
Viewing AI as human-like may lead to overestimating its capabilities — and underestimating its weaknesses.
“Technology can be scary in the abstract, but what if I told you a robot is going to start commanding you around all day — that’s Google Maps — or the robot will tell you where to eat — that’s OpenTable — and the robot will even tell you who to mate and date — that’s Tinder,” Adrian Aoun, CEO at Forward, told PYMNTS in December. “When the robots do come about, they are in service of you. AI isn’t in service of its own mission, it is in service of your mission.”
The next phase of AI for enterprise use needs to ensure that models are auditable by humans and that their decision-making processes are transparent and can be fine-tuned.
“Enterprise use of AI has to be accurate and relevant — and it has to be goal oriented. Consumers can have fun with AI, but in a business chat or within an enterprise workflow, the numbers have to be exact, and the answer has to be right,” Beerud Sheth, CEO at conversational AI platform Gupshup, told PYMNTS in November.
As PYMNTS has reported, the generative AI industry is expected to grow to $1.3 trillion by 2032. But rather than one single, all-knowing super-AI that outperforms humans at everything, marketplace growth is likely to be driven by a variety of AIs with different strengths, each fine-tuned for different applications.
“There’s a long way to go before there’s a futuristic version of AI where machines think and make decisions. … Humans will be around for quite a while,” Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March.