Chapter 1. Design for Voice Interfaces

The way we interact with technology is changing dramatically again. As wearables, homes, and cars become smarter and more connected, we’re beginning to create new interaction modes that no longer rely on keyboards or even screens. Meanwhile, significant improvements in voice input technology are making it possible for users to communicate with devices in a more natural, intuitive way.

Of course, for any of this to work, designers are going to need to learn a few things about creating useful, usable voice interfaces.

A (Very) Brief History of Talking to Computers

Voice input isn’t really new, obviously. We’ve been talking to inanimate objects, and sometimes even expecting them to listen to us, for almost a hundred years. Possibly the first “voice-activated” product was a small toy called Radio Rex, produced in the 1920s (Figure 1-1). It was a spring-activated dog that popped out of a little doghouse when it “heard” a sound in the 500 Hz range. It wasn’t exactly Siri, but it was pretty impressive for the time.

The technology didn’t begin to become even slightly useful to consumers until the late 1980s, when IBM created a computer that could kind of take dictation. It knew a few thousand words, and if you spoke them very slowly and clearly in unaccented English, it would show them to you on the screen. Unsurprisingly, it didn’t really catch on.

Figure 1-1. Radio Rex.

And why would it? We’ve been dreaming about perfect voice interfaces since ...
