Chapter 6. Computer Vision Apps with ML Kit on iOS
Chapter 3 introduced ML Kit and showed how it can be used for face detection in a mobile app. Chapter 4 then covered more sophisticated scenarios on Android devices: image labeling and classification, and object detection in both still images and video. In this chapter, we'll see how to use ML Kit for the same scenarios, but on iOS using Swift. Let's start with image labeling and classification.
Image Labeling and Classification
A staple of computer vision is the concept of image classification, where you give a computer an image, and the computer will tell you what the image contains. At the highest level, you could give it a picture of a dog, like that in Figure 6-1, and it will tell you that the image contains a dog.
ML Kit's image labeling takes this a bit further, giving you a list of things it "sees" in the image, each with an associated probability. So, for the image in Figure 6-1, not only will it see a dog, it may also see a pet, a room, a jacket, and more. Building an app to do this on iOS is pretty simple, so let's explore it step by step.
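As a preview of where we're headed, here is a minimal sketch of what calling ML Kit's image labeling API from Swift looks like. The helper function name and the confidence threshold value are illustrative choices, not part of ML Kit itself:

```swift
import MLKitImageLabeling
import MLKitVision
import UIKit

// Hypothetical helper: runs ML Kit's on-device image labeler on a UIImage
// and prints each label it finds along with its confidence score.
func labelImage(_ image: UIImage) {
    // Wrap the UIImage so ML Kit can process it, preserving orientation.
    let visionImage = VisionImage(image: image)
    visionImage.orientation = image.imageOrientation

    // Only return labels the model is reasonably confident about
    // (0.4 is an arbitrary threshold chosen for this example).
    let options = ImageLabelerOptions()
    options.confidenceThreshold = 0.4

    let labeler = ImageLabeler.imageLabeler(options: options)
    labeler.process(visionImage) { labels, error in
        guard error == nil, let labels = labels else { return }
        for label in labels {
            // e.g. "Dog: 0.97", "Pet: 0.92", ...
            print("\(label.text): \(label.confidence)")
        }
    }
}
```

The `process` call is asynchronous, so the labels arrive in the completion closure rather than as a return value; in a real app you would update your UI there rather than printing.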
Note
At the time of writing, ML Kit's pods have some issues when running in the iOS Simulator on a Mac. Apps will still run on physical devices, and also with the "My Mac (designed for iPad)" runtime setting in Xcode.
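To pull ML Kit's image labeling library into your project, you add its pod to your Podfile and run `pod install`. A minimal sketch (the target name `ImageLabelingDemo` is a placeholder for your own app target):

```ruby
# Podfile for an app using ML Kit's on-device image labeling.
platform :ios, '12.0'

target 'ImageLabelingDemo' do
  use_frameworks!

  # Google's ML Kit image labeling pod (bundles the on-device model).
  pod 'GoogleMLKit/ImageLabeling'
end
```

After `pod install` completes, open the generated `.xcworkspace` (not the `.xcodeproj`) so the pod is linked into your build.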