
Voice AI Takes the Wheel, Transforming Connected Car Experiences

The relationship between mankind and machine is undergoing a fundamental transformation.

For the first time in history — at least outside of TV shows like “Knight Rider” and movies like “Cars” and “The Love Bug” — cars can now talk back to, and communicate with, their drivers and passengers.

And it is all thanks to generative artificial intelligence (AI).

It’s now possible “to have a much more conversational interaction with your vehicle compared to previous generations of AI,” Christian Mentz, CRO at Cerence, told PYMNTS during an interview for the “AI Effect” series.

He explained that the contextual awareness and natural language processing (NLP) of today’s AI models have transformed voice AI from a mere “taskmaster,” relegated to statically surfacing information, into a natural, human-like experience.

One significant area where generative AI is making a difference is navigation. By integrating large language models (LLMs), existing navigation systems can be infused with reasoning and contextual awareness. This upgrade allows for a significant improvement in interpreting requests, dovetailing nicely with other data, such as vehicle sensor inputs, for a smoother, more intuitive experience.

For example, Mentz said, a user can make a complex request like “I’m hungry for a thin crust pizza” and the system can interpret it, triggering route guidance to an Italian restaurant with thin crust pizza on the menu.
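To make that flow concrete, here is a minimal, hypothetical sketch of how such a request might be handled: an LLM turns the free-form utterance into a structured intent, the intent is matched against nearby points of interest, and route guidance is started. This is not Cerence’s implementation; every function and field name below is an illustrative assumption.

```python
# Hypothetical sketch: turning a free-form utterance into route guidance.
# Names (interpret_with_llm, find_restaurants, start_navigation) are
# illustrative assumptions, not any real vendor API.

from dataclasses import dataclass, field


@dataclass
class Intent:
    action: str                      # e.g., "find_food"
    cuisine: str                     # e.g., "pizza"
    attributes: list = field(default_factory=list)  # e.g., ["thin crust"]


def interpret_with_llm(utterance: str) -> Intent:
    """Stand-in for an LLM call that extracts a structured intent."""
    # A real system would prompt an LLM here; the example is hard-coded.
    return Intent(action="find_food", cuisine="pizza", attributes=["thin crust"])


def find_restaurants(intent: Intent, location: tuple) -> list:
    """Stand-in for a point-of-interest search near the vehicle's position."""
    return [{"name": "Trattoria Roma", "menu": ["thin crust pizza"], "distance_km": 2.1}]


def start_navigation(destination: dict) -> None:
    """Stand-in for handing the chosen destination to the navigation system."""
    print(f"Routing to {destination['name']} ({destination['distance_km']} km away)")


if __name__ == "__main__":
    intent = interpret_with_llm("I'm hungry for a thin crust pizza")
    options = find_restaurants(intent, location=(48.14, 11.58))
    if options:
        start_navigation(options[0])
```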


New Era of AI-Powered Assistants

One of the biggest shifts brought about by generative AI is that AI as a whole has become more applicable, and less abstract, with the advent of tools like OpenAI’s ChatGPT. Consumers are now using AI tools in their daily lives and workplaces, finding value in the improved customer experience — and, increasingly, they want those experiences to transfer over to the digital cabins of their cars.

“Carmakers and customers are looking to bring innovations that they use at their workplaces or at home also into their vehicles,” Mentz said. 

He added that bringing AI into existing vehicles, especially cars that have already been sold, is a huge opportunity. The ability to update and enhance those vehicles over time resonates well with customers.

This is supported by PYMNTS Intelligence, which finds that consumers are embracing hands-free in-car voice technologies as they look to integrate intuitive, simple and connected elements into their everyday routines.

Still, despite recent advances, in-car AI-powered experiences are far from perfect.

Mentz explained that the fragmented nature of the user experience, which is typically built around the smartphone, can lead to frictions, and he emphasized the need for a more holistic and synchronized voice and touch experience within vehicles.

“If you look at how voice interaction is designed in vehicles today, there are two user experience islands. One is through brought-in solutions from your mobile phone. And the other one is a native, built-in OEM-branded [original equipment manufacturer] system. And you have voice and touch experiences that are very often out of sync,” he said.

By leveraging GenAI and LLMs, a single conversational interface can be developed to complete complex tasks across multiple applications. This approach would simplify the user experience and improve task completion, Mentz said.
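A rough sketch of what such a single conversational layer could look like is shown below: one entry point classifies the driver’s request and dispatches it to the appropriate in-car domain (navigation, media, climate). The domain names and the keyword-based classifier are illustrative placeholders for what would, in practice, be an LLM-driven router; none of this reflects a specific product.

```python
# Hypothetical sketch of one conversational layer dispatching requests to
# several in-car domains. All names and the routing logic are assumptions.

def handle_navigation(request: str) -> str:
    return f"Navigation: planning a route for '{request}'"


def handle_media(request: str) -> str:
    return f"Media: queuing playback for '{request}'"


def handle_climate(request: str) -> str:
    return f"Climate: adjusting cabin settings for '{request}'"


HANDLERS = {
    "navigation": handle_navigation,
    "media": handle_media,
    "climate": handle_climate,
}


def classify_domain(utterance: str) -> str:
    """Stand-in for an LLM that maps an utterance to an in-car domain."""
    lowered = utterance.lower()
    if any(word in lowered for word in ("route", "drive", "hungry", "navigate")):
        return "navigation"
    if any(word in lowered for word in ("play", "song", "podcast")):
        return "media"
    return "climate"


if __name__ == "__main__":
    for utterance in ("Play my road-trip playlist", "I'm hungry for pizza", "I'm cold"):
        domain = classify_domain(utterance)
        print(HANDLERS[domain](utterance))
```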

Future of In-Car Experiences

Looking ahead, Mentz envisions multimodal models becoming the next computing engine, acting as specialized AI agents for different areas. The focus will be on refining attractive use cases with smaller, more cost-effective language models, breaking the app paradigm and enabling a holistic conversational interaction with various devices.

This AI-driven leap forward is not just about adding new features; it’s also about enriching existing in-car systems and extending these capabilities to vehicles already on the road.

As for in-car payments and digital wallets, Mentz noted that while these capabilities already exist, they face challenges due to their complexity and high friction.

Mentz said he believes generative AI and a more conversational interface can reduce this friction and drive adoption of in-vehicle payment use cases. However, he also emphasized the need for a fine balance between regulation and innovation, as standardization should not hinder differentiation and progress.

“I believe we can bring value through enabling these experiences by voice and touch but also leverage technologies like our multizone technology, where you can already identify different speaker positions in a vehicle. That combined with voice biometrics, for example, can increase security or add an additional layer of security through identity authentication to facilitate these payments,” he said.
