Chapter 8: Knowledge distillation

Nikolaos Passalis, Maria Tzelepi, and Anastasios Tefas
Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece

Abstract

The need for faster, more lightweight, and more flexible Deep Learning (DL) models has driven the development of a wide variety of methods. Among the best-known methods for improving the accuracy of lightweight DL models is knowledge distillation, also known as knowledge transfer. Knowledge distillation improves the effectiveness of the training process by transferring the knowledge encoded in a large and complex neural network into a smaller and faster one. This chapter aims to provide an introduction to knowledge distillation approaches by presenting ...
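To make the core idea concrete, the sketch below shows one common form of knowledge distillation, the soft-target loss of Hinton et al., in which a student network is trained to match the temperature-softened output distribution of a teacher alongside the usual hard labels. This is a minimal illustration written in PyTorch, not code from this chapter; the function name `distillation_loss` and the hyperparameter values `T` (temperature) and `alpha` (loss weighting) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target distillation loss: KL divergence between the
    temperature-softened teacher and student distributions, combined
    with the standard hard-label cross-entropy."""
    # Soften both distributions with temperature T; the T**2 factor keeps
    # the gradient magnitude of the soft term comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T ** 2)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Usage sketch: the teacher is frozen and queried without gradient tracking,
# while the student is optimized on the combined loss.
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
```

The temperature controls how much of the teacher's "dark knowledge" (the relative probabilities it assigns to incorrect classes) is exposed to the student: higher temperatures produce softer distributions that carry more of this inter-class structure.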
