1. Annotating Collaboratively

1.1. The annotation process (re)visited

A simplified representation of the annotation process is shown in Figure 1.4. We detail the different steps of this process later in this section, but we first introduce a theoretical view of consensus building and show how limited the state of the art on the subject is.

1.1.1. Building consensus

The central question when dealing with manual corpus annotation is how to obtain reliable annotations that are both useful (i.e. meaningful) and consistent. In order to achieve this and to solve the “annotation conundrum” [LIB 09], we have to understand the annotation process. As we saw in section I.1.2, annotating consists of identifying the segment(s) to annotate and adding a note (also called a label or a tag) to it or them. In some annotation tasks, segments can be linked by a relation, oriented or not, and the note then applies to this relation. In most cases, the note is in fact a category, taken from a list (the tagset).
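To make this model concrete, the following is a minimal sketch of such an annotation scheme in Python. All names here (Segment, Annotation, Relation, the example tagset) are illustrative assumptions, not taken from any particular annotation tool or from the book itself.

```python
from dataclasses import dataclass

# Illustrative sketch of the annotation model described above:
# segments of text, notes (categories from a tagset) attached to
# segments, and notes attached to relations between segments.

@dataclass(frozen=True)
class Segment:
    """A contiguous span of the source text, identified by character offsets."""
    start: int  # inclusive
    end: int    # exclusive

@dataclass
class Annotation:
    """A note (a category from the tagset) attached to one segment."""
    segment: Segment
    label: str

@dataclass
class Relation:
    """A note attached to a (possibly oriented) link between two segments."""
    source: Segment
    target: Segment
    label: str
    oriented: bool = True

# Hypothetical tagset for a named-entity task.
TAGSET = {"Person", "Location", "Organization"}

def annotate(segment: Segment, label: str) -> Annotation:
    """Attach a category to a segment, enforcing that it belongs to the tagset."""
    if label not in TAGSET:
        raise ValueError(f"{label!r} is not in the tagset")
    return Annotation(segment, label)

# Usage: tag two (made-up) spans, then link them with an oriented relation.
person = annotate(Segment(0, 17), "Person")
org = annotate(Segment(30, 42), "Organization")
works_for = Relation(person.segment, org.segment, label="works_for")
```

Constraining labels to the tagset at creation time mirrors the consistency requirement discussed above: annotators choose from a closed list rather than inventing free-form notes.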

Alain Desrosières, a famous French statistician, worked on the construction of the French socio-professional categories [DES 02] and wrote a number of books on categorization (among them one translated into English [DES 98]). His work is especially relevant to our subject, as he precisely analyzed what categorizing means.

First, and this is fundamental for the annotation process, he makes a clear distinction between measuring and quantifying [DES 14]. Measuring “implies that something already exists ...
