
Adobe Doubles Down on Voice UX Development

At its annual MAX conference in Los Angeles last week, Adobe had all sorts of things to announce. After a (very) long wait, the full version of Photoshop is finally coming to the iPad, and Adobe offered a first look at its long-rumored augmented reality authoring tool, Project Aero, which is still in private beta.

But for voice ecosystem watchers, the biggest news of the day was the latest version of Adobe XD, the company's user experience design app, which ships with a greatly expanded set of tools for voice integration. Those tools, according to Adobe, will let designers create and test application experiences that are purely voice-based (designed for use with Alexa or Google Home devices) or that pair voice with visual content.

According to Adobe Product Management Lead Andrew Shorten, the new platform was developed around customer feedback that designing experiences for voice was mostly a frustrating and inefficient process. Customers said they needed a set of prototyping tools that would make testing a voice app feel much like test-running a regular mobile app.

To offer that capability, Adobe earlier this year acquired Sayspring, a startup focused on making the voice ecosystem more accessible to developers by letting them more easily prototype and build voice interfaces for Amazon Alexa and Google Assistant apps.

The problem today, Shorten noted, is that without that capability, UX designers have to hand work back to developers and bring in outside help to build prototypes, which is both costly and time-consuming for businesses.

The newly released XD is designed to let designers create experiences for mobile apps, car navigation systems and smart AI assistants like Alexa, Cortana, Siri or Google Assistant, and then easily test those voice commands for flow and function within the prototype products.
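For a sense of what that interaction model looks like, here is a purely illustrative sketch in TypeScript. It is not Adobe XD's actual API (XD exposes these features through its visual canvas); the type names and the handleUtterance helper are hypothetical, meant only to show the idea of wiring a voice trigger to a speech-playback action the same way a tap is wired to a screen transition.

```typescript
// Hypothetical model of a voice-enabled prototype: each transition has a
// trigger (a tap or a spoken utterance) and an action (navigate or speak).
type Trigger = { kind: 'tap' } | { kind: 'voice'; utterance: string };
type Action =
  | { kind: 'goToScreen'; screen: string }
  | { kind: 'speak'; text: string };

interface Transition {
  trigger: Trigger;
  action: Action;
}

// A toy prototype: one voice trigger wired to speech playback, one tap
// trigger wired to a screen transition.
const prototype: Transition[] = [
  {
    trigger: { kind: 'voice', utterance: 'check my balance' },
    action: { kind: 'speak', text: 'Your balance is forty-two dollars.' },
  },
  {
    trigger: { kind: 'tap' },
    action: { kind: 'goToScreen', screen: 'Home' },
  },
];

// "Test-running" the prototype: feed it an utterance and see what the
// prototype would say back (via text-to-speech in a real tool).
function handleUtterance(spoken: string): string | undefined {
  const match = prototype.find(
    (t) =>
      t.trigger.kind === 'voice' &&
      t.trigger.utterance === spoken.trim().toLowerCase(),
  );
  return match && match.action.kind === 'speak' ? match.action.text : undefined;
}

console.log(handleUtterance('Check my balance')); // -> the spoken response text
console.log(handleUtterance('open settings'));    // -> undefined (no trigger matched)
```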

The point, according to Sayspring founder and former CEO Mark Webster, is to start with the idea of voice as a form of interaction, and then figure out how to give designers the tools to make more effective use of it.

“Voice is weird,” Webster said. “It’s both a platform like Amazon Alexa and the Google Assistant, but also a form of interaction […] Our starting point has been to treat it as a form of interaction — and how we give designers access to the medium of voice and speech in order to create all kinds of experiences. A huge use case for that would be designing for platforms like Amazon Alexa, Google Assistant and Microsoft Cortana.”

Adobe users will, however, have an edge in designing for and with Alexa and Echo devices. Shortly before announcing that the voice-enhanced version of XD would enter the market in November, Adobe launched the first Adobe UI kit created specifically for Amazon's recently announced Alexa Presentation Language (APL) framework, a new design language for developers who want to build voice skills that include multimedia experiences.
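To make the pairing of speech and visuals concrete, here is a minimal sketch of an Alexa skill launch handler that renders an APL document alongside a spoken response, written against the ASK SDK for Node.js (ask-sdk-core). This is not Adobe's tooling, and the bare-bones document layout and skill content shown are assumptions for illustration, not an official APL template.

```typescript
// Minimal sketch: an Alexa skill launch handler that pairs spoken output
// with an APL visual on screen-equipped devices (e.g., Echo Show).
import * as Alexa from 'ask-sdk-core';

// A bare-bones APL document: a single full-screen text block. Real APL
// documents describe richer layouts, styles and data bindings.
const welcomeDocument = {
  type: 'APL',
  version: '1.0',
  mainTemplate: {
    items: [
      {
        type: 'Text',
        text: 'Welcome to the demo skill',
        fontSize: '40dp',
        textAlign: 'center',
      },
    ],
  },
};

const LaunchRequestHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    const builder = handlerInput.responseBuilder.speak('Welcome to the demo skill.');

    // Only attach the visual directive when the device actually supports APL.
    const interfaces = Alexa.getSupportedInterfaces(handlerInput.requestEnvelope);
    if (interfaces['Alexa.Presentation.APL']) {
      builder.addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        token: 'welcomeScreen',
        document: welcomeDocument,
      });
    }
    return builder.getResponse();
  },
};

// Wire the handler into a Lambda-style entry point.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler)
  .lambda();
```

Checking for the 'Alexa.Presentation.APL' interface before attaching the directive matters because the same skill can be invoked from a screenless Echo, where only the spoken response applies.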

That means designers will soon be able not only to prototype their voice or voice-enhanced apps, but also, if they are designing for the Alexa and Echo ecosystem, to run those tests directly on Echo devices.

The goal, according to Shorten, is to open up that native testing capability on a host of smart devices, because developers are increasingly interested in building across platforms and user experiences.

“Echo devices and the Alexa experience have been favored, but we are still in the very early days of the voice platform, where the possibilities for the ecosystem are still being explored,” Shorten said. “I think among developers and designers, the ability to work broadly is critical in a developing ecosystem.”

There will be more work to do going forward, particularly around language processing and the ability to build “really organic-feeling conversations between humans and AI.”

“And we think that we are going to see more of those use cases and cutting-edge experiences open up, because we can make a common toolkit available, so that development of these apps and interactions moves out of the realm of specialty programming exercises,” Shorten added.
