It has been about four months since OpenAI brought its ChatGPT system to proverbial life.
Evangelists believe the consumer-facing artificial intelligence (AI) tool has changed the world forever, so observers can be forgiven for feeling as though it has been on the market even longer.
This, as the latest and greatest iteration of the machine learning (ML) system, ChatGPT-4, launched Tuesday (March 14).
GPT stands for Generative Pre-trained Transformer, while Chat simply references the way users can interact with the AI system.
ChatGPT is not OpenAI’s large language model itself. Rather, it is an intuitive, chat-based interface that lets users interact with the ever-growing language models OpenAI is building and push the boundaries of the datasets they are trained on.
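In practice, an application sits in front of the model the same way the chat interface does: it packages the user's words into a structured request and hands it to the model. The sketch below is illustrative only; it mirrors the message schema of OpenAI's chat completions API but builds the request payload without making any API call, and the model name "gpt-4" is an assumption for the example.

```python
# Minimal sketch of how a chat interface frames a conversation for a
# GPT-style model: a system message sets behavior, prior turns supply
# context, and the newest user prompt goes last. No network call is
# made here; this only assembles the request payload.

def build_chat_request(user_prompt, history=None, model="gpt-4"):
    """Assemble a chat-style request dictionary for a language model."""
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    messages.extend(history or [])  # earlier turns, oldest first
    messages.append({"role": "user", "content": user_prompt})
    return {"model": model, "messages": messages}

request = build_chat_request("Summarize GPT-4's new capabilities.")
print(request["model"])                 # gpt-4
print(request["messages"][-1]["role"])  # user
```

Framing each exchange as a growing list of role-tagged messages is what makes the interaction feel conversational: the model sees the whole history, not just the latest prompt.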
The ChatGPT that sparked doomsday predictions about white-collar job markets, stoked fears of academic plagiarism and made all sorts of headlines was a way for users to engage with OpenAI’s GPT-3.5 model.
As of Tuesday (March 14), it is now a way to interact with the GPT-4 upgrade.
However, Bing search engine users were fortunate enough to enjoy a sneak peek. Microsoft confirmed that versions of Bing using GPT were, in fact, using GPT-4 before its official release.
As relayed by PYMNTS, General Motors Vice President Scott Miller last week (March 10) said, “ChatGPT is going to be in everything.”
Read More: How Truly Responsive and Intelligent AI will Change Business
The most noticeable change between GPT-3.5 and GPT-4 is that the latest iteration is multimodal, meaning it can understand more than one modality of information. GPT-3.5 was limited to text. It could read and write, but that was about it.
GPT-4 can read, analyze or generate up to 25,000 words of text, a significant improvement over the earlier GPT-3.5 model and those prior. It can also respond to images and answer questions about visual stimuli. For example, if shown a photograph of a fridge or cabinet, ChatGPT-4 can suggest recipes using the ingredients on hand, although this capability has only been demoed and is not yet available for public use.
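For applications that feed long documents to the model, that roughly 25,000-word capacity is a practical constraint worth checking before sending a request. The sketch below is an approximation on two counts: the figure comes from the launch reporting above, and real model limits are measured in tokens rather than whitespace-separated words.

```python
# Rough pre-check of a document against GPT-4's reported ~25,000-word
# input capacity. Both the limit and the whitespace-based word count
# are approximations; actual model limits are expressed in tokens.

GPT4_WORD_LIMIT = 25_000  # approximate, per launch reporting

def fits_in_context(text, limit=GPT4_WORD_LIMIT):
    """Return True if the text's word count is within the given limit."""
    return len(text.split()) <= limit

short_doc = "A brief memo about quarterly results."
print(fits_in_context(short_doc))  # True
```

A document that exceeds the check would need to be split into chunks and summarized in stages, a common workaround for long inputs.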
As PYMNTS previously reported, this new and disruptive ability of AI tools to move across multilayered data set “worlds” of images, speech, text and more is what makes today’s AI solutions worthy of being described as “intelligent.”
OpenAI’s multimodal model is trained to exhibit human-level performance on various professional and academic benchmarks. It takes a deep learning approach that leverages more data and more computation to create increasingly sophisticated and capable language models adept at responding to similarly sophisticated and capable user queries.
“With iterative alignment and adversarial testing, it’s our best-ever model on factuality, steerability, and safety,” said OpenAI Chief Technology Officer Mira Murati. “We spent six months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”
That’s because GPT-4 was trained on countless malicious prompts and drew on its predecessor’s time in the limelight to perform better when faced with “jailbreak” engagements: prompts meant to coax the language model into exhibiting poor behavior, for example by asking what a “bad AI” might do in a certain situation.
“These are all very data-hungry situations. Data is foundational to building the models, training the AI — the quality and integrity of that data is important,” Michael Haney, head of Cyberbank Digital Core at FinTech platform Galileo, the sister company of Technisys, told PYMNTS during a conversation earlier this month.
ChatGPT-4, which was trained on data as recent as August 2022, still struggles to predict or infer what might happen in the future. It is inherently limited by a foundation of information that, while growing exponentially, will never be truly up to date.
This leads to what AI researchers call “hallucination,” or the generation of text that is false or misleading. Hallucination is not unique to ChatGPT; it affects all chatbots, and AI more broadly, as a technical solution trained on historical data (the only data available in our linear world).
ChatGPT-4 can pass the bar exam with a score in the top decile, performs respectably on various other standardized tests, and can even correctly answer sophisticated medical queries.
OpenAI worked with several companies, including Morgan Stanley and Stripe, to test enterprise integrations of its tool.
Morgan Stanley uses GPT-4 to organize its internal wealth management library and assist advisors in quickly pulling relevant information, while Stripe has tasked the tool with streamlining the user experience and combating fraud across its platform.
“You essentially have the knowledge of the most knowledgeable person in Wealth Management — instantly. We believe that is a transformative capability for our company,” said Jeff McMillan, Head of Analytics, Data & Innovation for Morgan Stanley Wealth Management.
As consumer expectations for personalization grow, organizations are increasingly shifting gears from exploring AI technology to exploiting it at scale, infusing the technology into core business processes, workflows, and customer journeys to optimize decision-making and operations on a day-to-day basis.