May 2020
TOCO stands for TensorFlow Lite Optimizing Converter. For a detailed understanding of TOCO, please visit the following GitHub page: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/toco.
The following code shows how to convert a TensorFlow model using TOCO. The first part of the code is the same as what we did previously, except that we use toco instead of tflite. The latter part uses a quantized inference type. Quantization is a process that reduces model size and inference latency, which in turn makes hardware acceleration more effective. There are different methods of quantization, as described at https://www.tensorflow.org/lite/performance/post_training_quantization.
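To build intuition for what quantization does before running the converter, the following is a minimal NumPy sketch of affine (asymmetric) int8 quantization, the same scheme TensorFlow Lite uses for full integer quantization. The weight values here are made up for illustration; a real converter derives the scale and zero point from calibration data:

```python
import numpy as np

# Hypothetical float32 weight tensor standing in for one layer's weights.
weights = np.array([-1.2, -0.3, 0.0, 0.5, 2.1], dtype=np.float32)

# Affine quantization maps real values to int8 via: real = scale * (q - zero_point).
qmin, qmax = -128, 127
scale = (weights.max() - weights.min()) / (qmax - qmin)
zero_point = int(round(qmin - weights.min() / scale))

# Quantize to int8, then dequantize to see the reconstruction error.
q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.int8)
dequant = scale * (q.astype(np.float32) - zero_point)

print(q.nbytes, weights.nbytes)            # int8 storage is 4x smaller than float32
print(np.max(np.abs(dequant - weights)))   # error stays within one quantization step
```

Each float32 value shrinks to a single byte, which is where the model-size reduction comes from; the dequantized values differ from the originals by at most one quantization step (the scale), which is why accuracy usually degrades only slightly.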
In this case, we are using full integer ...