Getting Started with Google BERT

by Sudharsan Ravichandiran
January 2021
Beginner to intermediate
352 pages
10h 17m
English
Packt Publishing
Content preview from Getting Started with Google BERT
BERT Variants II - Based on Knowledge Distillation

In the previous chapters, we learned how BERT works, and we also looked into different variants of BERT. We learned that we don't have to train BERT from scratch; instead, we can fine-tune the pre-trained BERT model on downstream tasks. However, one challenge with the pre-trained BERT model is that it is computationally expensive, making it difficult to run with limited resources. The pre-trained BERT model has a large number of parameters (about 110 million in BERT-base and 340 million in BERT-large) and high inference time, which makes it hard to deploy on edge devices such as mobile phones.

To alleviate this issue, we transfer knowledge from a large pre-trained BERT to a small BERT using knowledge distillation. In ...
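As a rough illustration of this idea, the following is a minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015), assuming PyTorch. The function name distillation_loss and the hyperparameters T (temperature) and alpha (loss weighting) are illustrative choices, not taken from the book:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soften the teacher's output distribution with temperature T
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # Distillation term: KL divergence between the softened distributions,
    # scaled by T^2 to keep its gradients comparable to the hard-label term
    distill = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T ** 2)
    # Student term: ordinary cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard

Minimizing this combined loss trains the small student model to match the teacher's softened output distribution while still fitting the true labels; the temperature exposes the teacher's relative confidence across the wrong classes, which carries more information than the hard labels alone.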


ISBN: 9781838821593