November 2019
Intermediate to advanced
296 pages
7h 52m
English
Machine learning models generally need 32-bit floating-point precision during training. However, some WebGL implementations, such as those on mobile devices, support only 16-bit precision. This can cause precision problems when we port a model trained on a higher-precision machine to a lower-precision one. Given how common the quantization of machine learning models has become, this downgrade is not a major problem if the model is used only for inference. However, we still need to keep values within the float16 range of [0.000000059605, 65504] to achieve full compatibility in terms of accuracy and performance.
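To see why that range matters, the following sketch (using NumPy's `float16` type as a stand-in for a 16-bit WebGL backend; the helper `fits_float16` is illustrative, not part of any library) shows what happens to values outside it. The lower bound, 0.000000059605 ≈ 2⁻²⁴, is float16's smallest positive subnormal, and 65504 is its largest finite value:

```python
import numpy as np

# float16 range boundaries from the text above
FLOAT16_MIN_SUBNORMAL = 2.0 ** -24   # ≈ 0.000000059605, smallest positive value
FLOAT16_MAX = 65504.0                # largest finite float16 value

def fits_float16(x: float) -> bool:
    """Return True if the magnitude of x survives a cast to float16
    without overflowing to inf or underflowing to zero."""
    m = abs(x)
    return m == 0.0 or FLOAT16_MIN_SUBNORMAL <= m <= FLOAT16_MAX

# Values outside the range are silently destroyed by the cast:
print(np.float16(70000.0))   # overflows to inf
print(np.float16(1e-8))      # underflows to 0.0
print(fits_float16(65504.0)) # True: still representable
```

In practice this means weights and activations should be scaled (or the model quantized) so that intermediate values stay inside this window before deploying to a 16-bit backend.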