BERT Variants II - Based on Knowledge Distillation

In the previous chapters, we learned how BERT works, and we also looked into different variants of BERT. We learned that we don't have to train BERT from scratch; instead, we can fine-tune the pre-trained BERT model on downstream tasks. However, one challenge with using the pre-trained BERT model is that it is computationally expensive, making it difficult to run with limited resources. The pre-trained BERT model has a large number of parameters and high inference time, which makes it hard to deploy on edge devices such as mobile phones.

To alleviate this issue, we transfer knowledge from a large pre-trained BERT to a small BERT using knowledge distillation. In ...

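As a rough preview of how this works, knowledge distillation typically trains the small student model to match the softened output distribution of the large teacher model. The following is a minimal sketch of such a distillation loss, assuming PyTorch; the function name, the temperature value, and how it is combined with the hard-label loss are illustrative assumptions, not details taken from this chapter.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both output distributions with the temperature (assumed value),
    # then push the student toward the teacher using KL divergence.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so the gradient magnitude stays comparable to the
    # standard cross-entropy loss on the ground-truth labels.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (temperature ** 2)

In practice, this soft-target loss is usually added to the usual cross-entropy loss on the ground-truth labels when training the student model.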