December 2018
In this section, we'll see how we can prevent task bias by maximizing and minimizing entropy. We know that entropy is a measure of randomness, so we maximize entropy by encouraging the model to make a random guess over the predicted labels, assigning each of them equal probability. When the model guesses randomly over the predicted labels, it cannot be biased toward any particular task.
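As an illustrative sketch (not from the book), we can verify numerically that a uniform guess over the labels gives the maximum entropy, while a confident, biased prediction gives a much lower entropy:

```python
import math

def entropy(probs):
    # Shannon entropy: -sum p * log(p), skipping zero-probability labels
    return -sum(p * math.log(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # random guess over 4 labels
biased = [0.97, 0.01, 0.01, 0.01]   # model biased toward one label

print(entropy(uniform))  # log(4) ~ 1.386, the maximum for 4 labels
print(entropy(biased))   # ~ 0.168, much lower
```

This is why pushing the entropy of the initial predictions up toward its maximum is equivalent to pushing the model toward an unbiased, uniform guess.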
How do we compute the entropy? Let's denote the entropy for a task $\mathcal{T}_i$ by $\mathcal{H}_{\mathcal{T}_i}$. It is computed by sampling inputs $x_i$ from the task and summing over the model's output probabilities for the predicted labels:

$$\mathcal{H}_{\mathcal{T}_i}(f_\theta) = -\mathbb{E}_{x_i \sim P_{\mathcal{T}_i}(x)} \sum_{n=1}^{N} \hat{y}_{i,n} \log \hat{y}_{i,n}$$

Here, $\hat{y}_{i,n}$ is the probability the model $f_\theta$ assigns to the $n^{th}$ label for the sampled input $x_i$.
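A minimal sketch of this computation, assuming the model's outputs for a batch of sampled inputs are already softmax probabilities (the function and variable names here are illustrative, not from the book):

```python
import numpy as np

def task_entropy(predicted_probs):
    """Entropy of a task's predictions, averaged over the sampled inputs.

    predicted_probs: array of shape (num_samples, num_labels), where each
    row is a probability distribution over the predicted labels.
    """
    p = np.clip(predicted_probs, 1e-12, 1.0)      # avoid log(0)
    per_sample = -np.sum(p * np.log(p), axis=1)   # -sum_n y_hat * log(y_hat)
    return per_sample.mean()                      # empirical expectation over x_i

# Example: predictions for 3 sampled inputs over 4 labels
probs = np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.70, 0.10, 0.10, 0.10],
    [0.40, 0.30, 0.20, 0.10],
])
print(task_entropy(probs))
```

The mean over the batch stands in for the expectation over inputs sampled from the task; maximizing this quantity pushes each row toward the uniform distribution.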