Designing for Alexa

5 questions for Phillip Hunter: Designing voice interactions, deconstructing human behavior, and use cases for voice.

By Mary Treseler
March 9, 2017
Waveform (source: Pixabay)

I recently asked Phillip Hunter, head of UX for Alexa Skills at Amazon, to discuss the complexities of designing for voice interactions, common missteps, and the role voice will play in human-device interactions. At the O’Reilly Design Conference, Phillip will be presenting a session, Amazon Alexa: What, why, and why now?

You’re presenting a talk at the O’Reilly Design Conference called Amazon Alexa: What, why, and why now? Tell me what attendees should expect.

We’ve been thrilled to see the customer and developer reaction to Alexa. This talk will address our vision for Alexa, where we are today, and how attendees can take part in its continued growth. There are still hard problems to solve with voice—we’ve spent years of invention on this—but we believe voice interactions will play an ever-increasing role in the future of everyday computing.

What advice do you have for designers and developers interested in building voice-driven products and services?

It can be easy to underestimate how hard it is to separate our tacit knowledge of speech and thought from the practice of designing for it. Good voice interaction design demands that we deconstruct even our own individual behaviors and what we assume about the behavior of others. Human conversation is highly intricate and deeply evolved. Use the design skills you have to dive deeply not just into the functional needs of a voice app, but into how the what and why of interaction differ for voice.

I’ll also touch on the set of tools developers can take advantage of to integrate voice into their products. For Alexa specifically, with the availability of the Alexa Skills Kit and the Alexa Voice Service, you don’t need a background in natural language understanding or speech recognition to build great voice experiences.
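To make that concrete, here is a minimal sketch of a custom skill backend using the ASK SDK for Python. This example is illustrative rather than from the interview: the intent name (HelloIntent), the response text, and the AWS Lambda hosting are assumptions, and the skill's interaction model (intents and sample utterances) would be defined separately in the Alexa developer console.

# Minimal custom Alexa skill sketch using the ASK SDK for Python (ask-sdk-core).
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name

sb = SkillBuilder()

class LaunchRequestHandler(AbstractRequestHandler):
    """Handles the user opening the skill ("Alexa, open ...")."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome. What would you like to know?"
        # speak() sets the spoken response; ask() keeps the session open
        # and supplies the reprompt if the user stays silent.
        return handler_input.response_builder.speak(speech).ask(speech).response

class HelloIntentHandler(AbstractRequestHandler):
    """Handles a hypothetical HelloIntent defined in the interaction model."""
    def can_handle(self, handler_input):
        return is_intent_name("HelloIntent")(handler_input)

    def handle(self, handler_input):
        return handler_input.response_builder.speak("Hello from Alexa!").response

sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(HelloIntentHandler())

# Entry point for AWS Lambda, a common hosting option for skill backends.
handler = sb.lambda_handler()

The SDK routes each incoming request to the first handler whose can_handle returns true, so the developer writes conversational logic rather than speech recognition or language understanding code.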

Do you think voice-driven experiences will be the interaction model of the future?

We believe that voice will fundamentally improve the way people interact with technology. It can make the complex simple—it’s the most natural and convenient user interface. Like all advances in human-device interaction, voice is additive as both a medium and a magnifier of ease and power. It is part of the future of interaction, and will likely replace some of the ways we currently do things. However, voice isn’t meant to replace other forms of input. Voice will be the sole medium in some cases, and augmenting or complementary in others. I expect to see plenty of innovation based on voice-forward interactions, and continued innovation in what is already established.

What are some of the biggest challenges or missteps when developing for voice interaction?

The two most common mistakes are 1) treating voice as an interrogative, form-filling medium driven by impersonal questions and demanding exact answers, and 2) overloading the audio channel with too much information, or information that is too dense. Both tend to happen when the advice above isn’t taken into account.

What other sessions or workshops are you interested in attending at the O’Reilly Design Conference?

Several speakers will be addressing next-generation topics that touch on designing for multi-device environments. I expect a lot of good discussion points to be raised. And I’m always mindful of what and how we’re teaching the next set of UX leaders and practitioners. A couple of talks around those topics look interesting.
