Chapter 19

Implementing Speech Activation

What's in this chapter?

Activating speech using Android's sensors and continuous speech recognition

Persistently running speech activation using a Service

The first thing a user must do to use speech recognition is to tell the app to start recognizing. One way to do this, which the previous chapters relied on, is to press a button. However, pressing a button assumes the user is looking at the screen and can touch it, which is not always the case. For certain tasks, such as sending e-mail while driving, users need to activate speech recognition hands-free and eyes-free. In such cases, an app needs speech activation techniques beyond a button. Fortunately, Android's sensors provide a wide variety of ways to implement speech activation.
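For reference, the following is a minimal sketch of the button-based activation that the previous chapters relied on. The Activity name and the layout and view IDs (R.layout.main, R.id.speak_button) are illustrative assumptions, not code from those chapters.

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.view.View;
import android.widget.Button;

public class ButtonActivationActivity extends Activity {

    private static final int REQUEST_SPEECH = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // R.layout.main and R.id.speak_button are assumed resources.
        setContentView(R.layout.main);

        Button speakButton = (Button) findViewById(R.id.speak_button);
        speakButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Launch the platform speech recognizer when the user taps the button.
                Intent intent =
                        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                startActivityForResult(intent, REQUEST_SPEECH);
            }
        });
    }

    // Recognition results arrive in onActivityResult() (omitted here).
}

This works only while the Activity is in the foreground and the user can reach the screen, which is exactly the limitation the rest of this chapter addresses.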

In addition to deciding how your app implements speech activation, you must decide when the user can activate it. Your users may need to activate speech only while using the app, or they may need to activate speech at any time, even if the app is not running.

This chapter presents four speech activation implementations, summarized in Table 19.1, that use the sensor techniques discussed in other chapters of this book. It also describes how to run speech activation persistently using a Service.
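As an orientation for the Service-based approach, here is a minimal sketch of a started Service that could host an activation detector. The class name SpeechActivationService is an illustrative choice, the actual trigger detection is only indicated by comments, and the Service must also be declared in AndroidManifest.xml.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class SpeechActivationService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Start listening for the chosen activation trigger here,
        // for example by registering a sensor listener or beginning
        // audio capture for clap or word detection.
        //
        // START_STICKY asks Android to restart the service if it is
        // killed, so activation keeps working when no Activity is visible.
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        // Release any sensors or the microphone here.
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service, no binding needed
    }
}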

Table 19.1 Four Different Ways to Use Android Sensors for Speech Activation

Name        Technology          How to Activate
Movement    Physical Sensors    Move phone with sufficient acceleration
Clap        Microphone          Make a single ...
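As a preview of the Movement row in Table 19.1, the following sketch shows one way an accelerometer-based trigger could look. The class name, the threshold value, and the onMovementDetected() hook are illustrative assumptions rather than the chapter's actual implementation.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class MovementActivationListener implements SensorEventListener {

    // Assumed threshold in m/s^2; gravity alone contributes about 9.8.
    private static final float ACTIVATION_THRESHOLD = 15f;

    private final SensorManager sensorManager;

    public MovementActivationListener(SensorManager sensorManager) {
        this.sensorManager = sensorManager;
    }

    public void start() {
        Sensor accelerometer =
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accelerometer,
                SensorManager.SENSOR_DELAY_NORMAL);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        // Treat a large overall acceleration magnitude as the activation signal.
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        if (magnitude > ACTIVATION_THRESHOLD) {
            onMovementDetected();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }

    private void onMovementDetected() {
        // Start speech recognition here, for example by launching
        // RecognizerIntent.ACTION_RECOGNIZE_SPEECH from the caller.
    }
}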
