Chapter 11. Looking Under the Hood
In Chapter 10, we looked at the underlying—mostly mathematical—principles used to create AI and ML features. In this chapter, we take that a little further toward the practical and look at what’s happening under the hood for many of the practical AI tasks that we covered in Part II.
For each example that we look at, we touch on the underlying machine-learning algorithms, their origins (academic papers, most often), briefly note how they work, and acknowledge some of the other ways in which you could accomplish the same task, algorithm-wise. We begin, though, with a look at how CoreML itself works, because so far we have treated it a bit like a black box. Although you don't strictly need to understand CoreML's internals, it helps to have at least a basic understanding of them, because treating things like magic is a sure-fire way to get stuck when it comes time to fix errors.
A Look Inside CoreML
As we’ve seen many times earlier in the book, the basic steps for inference with CoreML are always the same:
Add your pretrained model to your project.
Load the model file in your app.
Provide the necessary input to the model for it to make a prediction.
Use the prediction output in your app.
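The four steps above map to only a few lines of Swift. Here's a minimal sketch of what that flow looks like, assuming a hypothetical image classifier called `FlowerClassifier.mlmodel` has been added to the project (Xcode generates a Swift class with the same name for any model you add); the model name, its input property `image`, and its output property `classLabel` are illustrative assumptions, not a specific model from this book:

```swift
import CoreML
import Vision

// Step 1 happens in Xcode: drag FlowerClassifier.mlmodel into the project,
// and Xcode generates a FlowerClassifier class for it automatically.

func classify(pixelBuffer: CVPixelBuffer) {
    do {
        // Step 2: load the compiled model file in the app.
        let model = try FlowerClassifier(configuration: MLModelConfiguration())

        // Step 3: provide the necessary input for a prediction.
        // (The input name and type come from the model's definition;
        // here we assume it takes an image as a CVPixelBuffer.)
        let prediction = try model.prediction(image: pixelBuffer)

        // Step 4: use the prediction output in the app.
        print("Predicted label: \(prediction.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```

The generated class wraps the lower-level `MLModel` API, which is why the code stays this short; you can also drop down to `MLModel` and `MLFeatureProvider` directly when you need more control over inputs and outputs.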
So, now that we have seen many working examples of using CoreML (Part II), let's look into what it's actually doing. A great advantage of Xcode handling so much of the work for us is that we can gloss over bits and pieces of the inner workings. But ...