Chapter 7. Motion and Gestures

This chapter explores the practical side of implementing motion- and gesture-related AI features in your Swift apps. Taking a top-down approach, we explore four tasks and how to implement them in Swift using a variety of AI tools.

Practical AI, Motion, and Gestures

Following are the four motion tasks that we explore in this chapter:

Activity recognition

This uses Apple’s built-in frameworks to determine what kind of motion-based activity the user is currently doing.

Gesture classification for drawings

This task builds on the bitmap drawing detection that we looked at in “Task: Drawing Recognition”. We build a drawing classifier that classifies drawings made on an iOS device instead of from photos.

Activity classification

Here we use Turi Create and train our own activity classification model to determine what kind of motion-based activity the user is doing.

Using augmented reality with AI

We look at using one of Apple’s other buzzword-friendly frameworks, ARKit, for combining augmented reality (AR) with AI.

Task: Activity Recognition

Activity classification is hugely popular these days, especially with the proliferation of portable activity trackers like the Apple Watch and Fitbit. Activity classification involves determining what physical action(s) the user is doing with their device(s).

It’s a useful component of lots of apps, including games like Pokémon Go and all manner of fitness apps.
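As a taste of what this task looks like in practice, here is a minimal sketch of querying the user's current activity with Apple's CMMotionActivityManager, part of the Core Motion framework. This is a hedged illustration, not the chapter's full implementation: the `startActivityUpdates()` helper name is our own, and the code only reports real data when run on a physical device with motion hardware.

```swift
import CoreMotion

// A single manager instance; Apple recommends keeping one around
// rather than creating a new one per query.
let activityManager = CMMotionActivityManager()

// Hypothetical helper: subscribes to live activity updates and prints
// a human-readable label for the most likely current activity.
func startActivityUpdates() {
    // Activity data is unavailable in the simulator and on some devices.
    guard CMMotionActivityManager.isActivityAvailable() else {
        print("Motion activity data is not available on this device.")
        return
    }

    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }

        if activity.walking {
            print("User appears to be walking")
        } else if activity.running {
            print("User appears to be running")
        } else if activity.automotive {
            print("User appears to be driving")
        } else if activity.stationary {
            print("User appears to be stationary")
        }
    }
}
```

Note that your app must declare an `NSMotionUsageDescription` entry in its Info.plist before the system will grant it access to motion data.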
