Up until this point, we have been applying traditional classification and clustering algorithms to text. By allowing us to measure distances between terms, assign weights to phrases, and calculate probabilities of utterances, these algorithms enable us to reason about the relationships between documents. However, tasks such as machine translation, question answering, and instruction-following often require more complex semantic reasoning.
For instance, given a large number of news articles, how would you build a model of the narratives they contain—of actions taken by key players or enacted upon others, of the sequence of events, of cause and effect? Using the techniques in Chapter 7, you could extract the entities or keyphrases or look for themes using the topic modeling methods described in Chapter 6. But to model information about the relationships between those entities, phrases, and themes, you would need a different kind of data structure.
Let’s consider how such relationships may be expressed in the headlines of some of our articles:
'FDA approves gene therapy'
'Gene therapy reduces tumor growth'
'FDA recalls pacemakers'
Traditionally, phrases like these are encoded using text meaning representations (TMRs). TMRs take the form of ('subject', 'predicate', 'object') triples (e.g., ('FDA', 'recalls', 'pacemakers')), to which first-order logic or lambda calculus can be applied to achieve semantic reasoning.
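To make the triple representation concrete, here is a minimal sketch of encoding the example headlines as ('subject', 'predicate', 'object') triples and querying the relationships they express. The triples are hand-authored for illustration; in practice they would be extracted from parsed text, and the helper functions `relations` and `connected` are hypothetical names, not from any particular library.

```python
# Hand-authored triples corresponding to the example headlines above.
triples = [
    ("FDA", "approves", "gene therapy"),
    ("gene therapy", "reduces", "tumor growth"),
    ("FDA", "recalls", "pacemakers"),
]

def relations(subject, triples):
    """Return all (predicate, object) pairs asserted about a subject."""
    return [(pred, obj) for subj, pred, obj in triples if subj == subject]

def connected(start, end, triples):
    """Check whether `end` is reachable from `start` by chaining triples,
    treating each triple as a directed subject -> object edge."""
    frontier, seen = [start], set()
    while frontier:
        node = frontier.pop()
        if node == end:
            return True
        seen.add(node)
        frontier.extend(obj for subj, _, obj in triples
                        if subj == node and obj not in seen)
    return False

print(relations("FDA", triples))
# [('approves', 'gene therapy'), ('recalls', 'pacemakers')]
print(connected("FDA", "tumor growth", triples))
# True: FDA -> gene therapy -> tumor growth
```

Even this toy version shows why triples are useful: by chaining the subject-object links, we can follow a path from 'FDA' through 'gene therapy' to 'tumor growth', a relationship stated in no single headline.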
Unfortunately, the ...