August 2018
522 pages
This metric is different from the other ones discussed in this section because its goal is to measure the agreement between two raters (for example, a ground-truth human labeling and an estimator's predictions) while accounting for the possibility that the raters agree purely by chance. It is computed as follows:

κ = (pobserved − pchance) / (1 − pchance)

The two values represent, respectively, the observed agreement between the raters and the probability of a chance agreement. Coefficient κ equals 1 for total agreement and 0 for agreement no better than chance (negative values are possible when the raters agree less often than chance would predict). In fact, if pobserved = 1 and pchance = 0, κ = 1, while if pobserved = 0 and pchance = 0, κ = 0. All intermediate values ...
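The definition above can be sketched in a few lines of plain Python: pobserved is the fraction of samples on which the two raters assign the same label, and pchance is the sum, over labels, of the product of each rater's marginal label frequencies. The function name and example labelings below are illustrative, not taken from the text.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences (a minimal sketch)."""
    n = len(a)
    # Observed agreement: fraction of positions where the two raters match
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies
    count_a, count_b = Counter(a), Counter(b)
    p_chance = sum(count_a[k] * count_b.get(k, 0) for k in count_a) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

rater_1 = [0, 0, 1, 1, 1, 0]
rater_2 = [0, 1, 1, 1, 0, 0]
# p_observed = 4/6, p_chance = 0.5, so kappa = (2/3 - 1/2) / (1 - 1/2)
print(round(cohen_kappa(rater_1, rater_2), 4))  # 0.3333
```

In practice, scikit-learn provides the same computation as `sklearn.metrics.cohen_kappa_score`.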