It is easy to get our hands on millions of words of text. What can we do with it, assuming we can write some simple programs? In this chapter, we’ll address the following questions:
1. What can we achieve by combining simple programming techniques with large quantities of text?
2. How can we automatically extract key words and phrases that sum up the style and content of a text?
3. What tools and techniques does the Python programming language provide for such work?
4. What are some of the interesting challenges of natural language processing?
This chapter is divided into sections that skip between two quite different styles. In the “computing with language” sections, we will take on some linguistically motivated programming tasks without necessarily explaining how they work. In the “closer look at Python” sections we will systematically review key programming concepts. We’ll flag the two styles in the section titles, but later chapters will mix both styles without being so up-front about it. We hope this style of introduction gives you an authentic taste of what will come later, while covering a range of elementary concepts in linguistics and computer science. If you have basic familiarity with both areas, you can skip to Automatic Natural Language Understanding; we will repeat any important points in later chapters, and if you miss anything you can easily consult the online reference material at http://www.nltk.org/. If the material is completely new to you, this chapter will raise more questions than it answers, questions that are addressed in the rest of this book.
We’re all very familiar with text, since we read and write it every day. Here we will treat text as raw data for the programs we write, programs that manipulate and analyze it in a variety of interesting ways. But before we can do this, we have to get started with the Python interpreter.
One of the friendly things about Python is that it allows you to type directly into the interactive interpreter—the program that will be running your Python programs. You can access the Python interpreter using a simple graphical interface called the Interactive DeveLopment Environment (IDLE). On a Mac you can find this under Applications→MacPython, and on Windows under All Programs→Python. Under Unix you can run Python from the shell by typing idle (if this is not installed, try typing python). The interpreter will print a blurb about your Python version; simply check that you are running Python 2.4 or 2.5 (here it is 2.5.1):
Python 2.5.1 (r251:54863, Apr 15 2008, 22:57:26)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
If you are unable to run the Python interpreter, you probably don’t have Python installed correctly. Please visit http://python.org/ for detailed instructions.
The >>> prompt indicates that the Python interpreter is now waiting for input. Let’s begin by using it as a calculator:

>>> 1 + 5 * 2 - 3
8
>>>
Once the interpreter has finished calculating the answer and displaying it, the prompt reappears. This means the Python interpreter is waiting for another instruction.
Your Turn: Enter a few more expressions of your own. You can use asterisk (*) for multiplication and slash (/) for division, and parentheses for bracketing expressions. Note that division doesn’t always behave as you might expect—it does integer division (with rounding of fractions downwards) when you type 1/3 and “floating-point” (or decimal) division when you type 1.0/3. In order to get the expected behavior of division (standard in Python 3.0), you need to type: from __future__ import division.
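For example, here is the behavior you should see at the prompt (output from Python 2.5; later 2.x releases print the float with slightly fewer digits):

>>> 1/3
0
>>> 1.0/3
0.33333333333333331
>>> from __future__ import division
>>> 1/3
0.33333333333333331
>>>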
The preceding examples demonstrate how you can work interactively with the Python interpreter, experimenting with various expressions in the language to see what they do. Now let’s try a nonsensical expression to see how the interpreter handles it:
>>> 1 +
  File "<stdin>", line 1
    1 +
      ^
SyntaxError: invalid syntax
>>>
This produced a syntax error. In Python, it doesn’t make sense to end an instruction with a plus sign. The Python interpreter indicates the line where the problem occurred (line 1 of <stdin>, which stands for “standard input”).
Now that we can use the Python interpreter, we’re ready to start working with language data.
Before going further you should install NLTK, downloadable for free from http://www.nltk.org/. Follow the instructions there to download the version required for your platform.
Once you’ve installed NLTK, start up the Python interpreter as
before, and install the data required for the book by typing the
following two commands at the Python prompt, then selecting the
book collection as shown in Figure 1-1.
>>> import nltk
>>> nltk.download()
Figure 1-1 shows the downloader window opened by nltk.download(). The Collections tab on the downloader shows how the packages are grouped into sets, and you should select the line labeled book to obtain all data required for the examples and exercises in this book. It consists of about 30 compressed files requiring about 100MB of disk space. The full collection of data (i.e., all in the downloader) is about five times this size (at the time of writing) and continues to expand.
Once the data is downloaded to your machine, you can load some of it using the Python interpreter. The first step is to type a special command at the Python prompt, which tells the interpreter to load some texts for us to explore: from nltk.book import *. This says “from NLTK’s book module, load all items.” The book module contains all the data you will need as you read this chapter. After printing a welcome message, it loads the text of several books (this will take a few seconds). Here’s the command again, together with the output that you will see. Take care to get spelling and punctuation right, and remember that you don’t type the >>>.
>>> from nltk.book import *
*** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
text6: Monty Python and the Holy Grail
text7: Wall Street Journal
text8: Personals Corpus
text9: The Man Who Was Thursday by G . K . Chesterton 1908
>>>
Any time we want to find out about these texts, we just have to enter their names at the Python prompt:
>>> text1
<Text: Moby Dick by Herman Melville 1851>
>>> text2
<Text: Sense and Sensibility by Jane Austen 1811>
>>>
Now that we can use the Python interpreter, and have some data to work with, we’re ready to get started.
There are many ways to examine the context of a text apart from simply reading it. A concordance view shows us every occurrence of a given word, together with some context. Here we look up the word monstrous in Moby Dick by entering text1 followed by a period, then the term concordance, and then placing "monstrous" in parentheses:
>>> text1.concordance("monstrous")
Building index...
Displaying 11 of 11 matches:
ong the former , one was of a most monstrous size . ... This came towards us ,
ON OF THE PSALMS . " Touching that monstrous bulk of the whale or ork we have r
ll over with a heathenish array of monstrous clubs and spears . Some were thick
d as you gazed , and wondered what monstrous cannibal and savage could ever hav
that has survived the flood ; most monstrous and most mountainous ! That Himmal
they might scout at Moby Dick as a monstrous fable , or still worse and more de
th of Radney .'" CHAPTER 55 Of the monstrous Pictures of Whales . I shall ere l
ing Scenes . In connexion with the monstrous pictures of whales , I am strongly
ere to enter upon those still more monstrous stories of them which are to be fo
ght have been rummaged out of this monstrous cabinet there is no telling . But
of Whale - Bones ; for Whales of a monstrous size are oftentimes cast up dead u
>>>
Your Turn: Try searching for other words; to save re-typing, you might be able to use up-arrow, Ctrl-up-arrow, or Alt-p to access the previous command and modify the word being searched. You can also try searches on some of the other texts we have included. For example, search Sense and Sensibility for the word affection, using text2.concordance("affection"). Search the book of Genesis to find out how long some people lived, using text3.concordance("lived"). You could look at text4, the Inaugural Address Corpus, to see examples of English going back to 1789, and search for words like nation, terror, god to see how these words have been used differently over time. We’ve also included text5, the NPS Chat Corpus: search this for unconventional words like im, ur, lol. (Note that this corpus is uncensored!)
Once you’ve spent a little while examining these texts, we hope you have a new sense of the richness and diversity of language. In the next chapter you will learn how to access a broader range of text, including text in languages other than English.
A concordance permits us to see words in context. For example,
we saw that monstrous occurred in contexts such
as the ___ pictures and the ___
size. What other words appear in a similar range of
contexts? We can find out by appending the term
similar to the name of the text in question, then inserting the
relevant word in parentheses:
>>> text1.similar("monstrous")
Building word-context index...
subtly impalpable pitiable curious imperial perilous trustworthy
abundant untoward singular lamentable few maddens horrible loving lazy
mystifying christian exasperate puzzled
>>> text2.similar("monstrous")
Building word-context index...
very exceedingly so heartily a great good amazingly as sweet
remarkably extremely vast
>>>
Observe that we get different results for different texts. Austen uses this word quite differently from Melville; for her, monstrous has positive connotations, and sometimes functions as an intensifier like the word very.
The term common_contexts allows us to examine just the contexts that are shared by two or more words, such as monstrous and very. We have to enclose these words by square brackets as well as parentheses, and separate them with a comma:
>>> text2.common_contexts(["monstrous", "very"])
be_glad am_glad a_pretty is_pretty a_lucky
>>>
It is one thing to automatically detect that a particular word occurs in a text, and to display some words that appear in the same context. However, we can also determine the location of a word in the text: how many words from the beginning it appears. This positional information can be displayed using a dispersion plot. Each stripe represents an instance of a word, and each row represents the entire text. In Figure 1-2 we see some striking patterns of word usage over the last 220 years (in an artificial text constructed by joining the texts of the Inaugural Address Corpus end-to-end). You can produce this plot as shown below. You might like to try more words (e.g., liberty, constitution) and different texts. Can you predict the dispersion of a word before you view it? As before, take care to get the quotes, commas, brackets, and parentheses exactly right.
>>> text4.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America"])
>>>
Important: You need to have Python’s NumPy and Matplotlib packages installed in order to produce the graphical plots used in this book. Please see http://www.nltk.org/ for installation instructions.
Now, just for fun, let’s try generating some random text in the
various styles we have just seen. To do this, we type the name of the
text followed by the term
generate. (We need to include the parentheses, but there’s
nothing that goes between them.)
>>> text3.generate()
In the beginning of his brother is a hairy man , whose top may reach
unto heaven ; and ye shall sow the land of Egypt there was no bread in
all that he was taken out of the month , upon the earth . So shall thy
wages be ? And they made their father ; and Isaac was old , and kissed
him : and Laban with his cattle in the midst of the hands of Esau thy
first born , and Phichol the chief butler unto his son Isaac , she
>>>
Note that the first time you run this command, it is slow because it gathers statistics about word sequences. Each time you run it, you will get different output text. Now try generating random text in the style of an inaugural address or an Internet chat room. Although the text is random, it reuses common words and phrases from the source text and gives us a sense of its style and content. (What is lacking in this randomly generated text?)
When generate produces its output, punctuation is split off from the preceding word. While this is not correct formatting for English text, we do it to make clear that words and punctuation are independent of one another. You will learn more about this in Chapter 3.
The most obvious fact about texts that emerges from the preceding examples is that they differ in the vocabulary they use. In this section, we will see how to use the computer to count the words in a text in a variety of useful ways. As before, you will jump right in and experiment with the Python interpreter, even though you may not have studied Python systematically yet. Test your understanding by modifying the examples, and trying the exercises at the end of the chapter.
Let’s begin by finding out the length of a text from start to
finish, in terms of the words and punctuation symbols that appear. We
use the term
len to get the length
of something, which we’ll apply here to the book of Genesis:
>>> len(text3)
44764
>>>
So Genesis has 44,764 words and punctuation symbols, or
“tokens.” A token is the technical
name for a sequence of characters—such as
:)—that we want to treat as a group. When we
count the number of tokens in a text, say, the phrase to be
or not to be, we are counting occurrences of these
sequences. Thus, in our example phrase there are two occurrences of
to, two of be, and one each
of or and not. But there are
only four distinct vocabulary items in this phrase. How many distinct
words does the book of Genesis contain? To work this out in Python, we
have to pose the question slightly differently. The vocabulary of a
text is just the set of tokens that it uses,
since in a set, all duplicates are collapsed together. In Python we
can obtain the vocabulary items of
text3 with the command:
set(text3). When you do this, many screens
of words will fly past. Now try the following:
>>> sorted(set(text3))
['!', "'", '(', ')', ',', ',)', '.', '.)', ':', ';', ';)', '?', '?)',
'A', 'Abel', 'Abelmizraim', 'Abidah', 'Abide', 'Abimael', 'Abimelech',
'Abr', 'Abrah', 'Abraham', 'Abram', 'Accad', 'Achbor', 'Adah', ...]
>>> len(set(text3))
2789
>>>
By wrapping sorted() around the Python expression set(text3), we obtain a sorted list of vocabulary items, beginning with various punctuation symbols and continuing with words starting with A. All capitalized words precede lowercase words. We discover the size of the vocabulary indirectly, by asking for the number of items in the set, and again we can use len to obtain this number. Although it has 44,764 tokens, this book has only 2,789 distinct words, or “word types.” A word type is the form or spelling of the word independently of its specific occurrences in a text—that is, the word considered as a unique item of vocabulary. Our count of 2,789 items will include punctuation symbols, so we will generally call these unique items types instead of word types.
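As a minimal check of the token/type distinction, here is the phrase discussed above worked through at the prompt (plain Python, no NLTK needed; six tokens, four types):

>>> phrase = ['to', 'be', 'or', 'not', 'to', 'be']
>>> len(phrase)
6
>>> len(set(phrase))
4
>>>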
Now, let’s calculate a measure of the lexical richness of the text. The next example shows us that each word is used 16 times on average (we need to make sure Python uses floating-point division):
>>> from __future__ import division
>>> len(text3) / len(set(text3))
16.050197203298673
>>>
Next, let’s count how often a word occurs in a text, and compute what percentage of the text is taken up by a specific word:

>>> text3.count("smote")
5
>>> 100 * text4.count('a') / len(text4)
1.4643016433938312
>>>
Your Turn: How many times
does the word lol appear in
text5? How much is this as a percentage of
the total number of words in this text?
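One way to phrase these queries, following the count and percentage patterns just shown (we leave the outputs for you to discover):

>>> text5.count('lol')
>>> 100 * text5.count('lol') / len(text5)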
You may want to repeat such calculations on several texts, but it is tedious to keep retyping the formula. Instead, you can come up with your own name for a task, like “lexical_diversity” or “percentage”, and associate it with a block of code. Now you only have to type a short name instead of one or more complete lines of Python code, and you can reuse it as often as you like. The block of code that does a task for us is called a function, and we define a short name for our function with the keyword def. The next example shows how to define two new functions, lexical_diversity() and percentage():
>>> def lexical_diversity(text):
...     return len(text) / len(set(text))
...
>>> def percentage(count, total):
...     return 100 * count / total
...
The Python interpreter changes the prompt from >>> to ... after encountering the colon at the end of the first line. The ... prompt indicates that Python expects an indented code block to appear next. It is up to you to do the indentation, by typing four spaces or hitting the Tab key. To finish the indented block, just enter a blank line.
In the definition of lexical_diversity(), we specify a parameter labeled text. This parameter is a “placeholder” for the actual text whose lexical diversity we want to compute, and reoccurs in the block of code that will run when the function is used. Similarly, percentage() is defined to take two parameters, labeled count and total. Once Python knows that lexical_diversity() and percentage() are the names for specific blocks of code, we can go ahead and use these functions:
>>> lexical_diversity(text3)
16.050197203298673
>>> lexical_diversity(text5)
7.4200461589185629
>>> percentage(4, 5)
80.0
>>> percentage(text4.count('a'), len(text4))
1.4643016433938312
>>>
To recap, we use or call a function such as lexical_diversity() by typing its name, followed by an open parenthesis, the name of the text, and then a close parenthesis. These parentheses will show up often; their role is to separate the name of a task—such as lexical_diversity()—from the data that the task is to be performed on—such as text3. The data value that we place in the parentheses when we call a function is an argument to the function. You have already encountered several functions in this chapter, such as len(), set(), and sorted(). By convention, we will always add an empty pair of parentheses after a function name, as in len(), just to make clear that what we are talking about is a function rather than some other kind of Python expression. Functions are an important concept in programming, and we only mention them at the outset to give newcomers a sense of the power and creativity of programming. Don’t worry if you find it a bit confusing right now.
Later we’ll see how to use functions when tabulating data, as in Table 1-1. Each row of the table will involve the same computation but with different data, and we’ll do this repetitive work using a function.
What is a text? At one level, it is a sequence of symbols on a page such as this one. At another level, it is a sequence of chapters, made up of a sequence of sections, where each section is a sequence of paragraphs, and so on. However, for our purposes, we will think of a text as nothing more than a sequence of words and punctuation. Here’s how we represent text in Python, in this case the opening sentence of Moby Dick:
>>> sent1 = ['Call', 'me', 'Ishmael', '.']
>>>
After the prompt we’ve given a name we made up, sent1, followed by the equals sign, and then some quoted words, separated with commas, and surrounded with brackets. This bracketed material is known as a list in Python: it is how we store a text. We can inspect it by typing its name. We can ask for its length. We can even apply our own lexical_diversity() function to it.
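Here is what those three steps look like at the prompt (lexical_diversity() is the function we defined earlier; with four distinct tokens out of four, the ratio is 1.0):

>>> sent1
['Call', 'me', 'Ishmael', '.']
>>> len(sent1)
4
>>> lexical_diversity(sent1)
1.0
>>>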
Some more lists have been defined for you, one for the opening sentence of each of our texts, sent1 through sent9. We inspect two of them here; you can see the rest for yourself using the Python interpreter (if you get an error saying that sent2 is not defined, you need to first type from nltk.book import *).
>>> sent2
['The', 'family', 'of', 'Dashwood', 'had', 'long', 'been',
'settled', 'in', 'Sussex', '.']
>>> sent3
['In', 'the', 'beginning', 'God', 'created', 'the', 'heaven',
'and', 'the', 'earth', '.']
>>>
Your Turn: Make up a few sentences of your own, by typing a name, equals sign, and a list of words, like this: ex1 = ['Monty', 'Python', 'and', 'the', 'Holy', 'Grail']. Repeat some of the other Python operations we saw earlier in Computing with Language: Texts and Words, e.g., sorted(ex1), len(set(ex1)), ex1.count('the').
A pleasant surprise is that we can use Python’s addition operator on lists:

>>> ['Monty', 'Python'] + ['and', 'the', 'Holy', 'Grail']
['Monty', 'Python', 'and', 'the', 'Holy', 'Grail']
This special use of the addition operation is called concatenation; it combines the lists together into a single list. We can concatenate sentences to build up a text.
We don’t have to literally type the lists either; we can use short names that refer to pre-defined lists.
>>> sent4 + sent1
['Fellow', '-', 'Citizens', 'of', 'the', 'Senate', 'and', 'of', 'the',
'House', 'of', 'Representatives', ':', 'Call', 'me', 'Ishmael', '.']
>>>
What if we want to add a single item to a list? This is known as appending; when we append(), the list itself is updated as a result of the operation:

>>> sent1.append("Some")
>>> sent1
['Call', 'me', 'Ishmael', '.', 'Some']
>>>
As we have seen, a text in Python is a list of words, represented using a combination of brackets and quotes. Just as with an ordinary page of text, we can count up the total number of words in a text using len(text1), and count the occurrences in a text of a particular word—say, heaven—using text1.count('heaven').
With some patience, we can pick out the 1st, 173rd, or even 14,278th word in a printed text. Analogously, we can identify the elements of a Python list by their order of occurrence in the list. The number that represents this position is the item’s index. We instruct Python to show us the item that occurs at an index such as 173 in a text by writing the name of the text followed by the index inside square brackets:
>>> text4[173]
'awaken'
>>>
We can do the converse; given a word, find the index of when it first occurs:
>>> text4.index('awaken')
173
>>>
Indexes are a common way to access the words of a text, or, more generally, the elements of any list. Python permits us to access sublists as well, extracting manageable pieces of language from large texts, a technique known as slicing.
>>> text5[16715:16735]
['U86', 'thats', 'why', 'something', 'like', 'gamefly', 'is', 'so', 'good',
'because', 'you', 'can', 'actually', 'play', 'a', 'full', 'game', 'without',
'buying', 'it']
>>> text6[1600:1625]
['We', "'", 're', 'an', 'anarcho', '-', 'syndicalist', 'commune', '.', 'We',
'take', 'it', 'in', 'turns', 'to', 'act', 'as', 'a', 'sort', 'of',
'executive', 'officer', 'for', 'the', 'week']
>>>
Indexes have some subtleties, and we’ll explore these with the help of an artificial sentence:
>>> sent = ['word1', 'word2', 'word3', 'word4', 'word5',
...         'word6', 'word7', 'word8', 'word9', 'word10']
>>> sent[0]
'word1'
>>> sent[9]
'word10'
>>>
Notice that our indexes start from zero: sent element zero, written sent[0], is the first word, and sent element 9, written sent[9], is 'word10'. The reason is simple: the moment Python accesses the content of a list from the computer’s memory, it is already at the first element; we have to tell it how many elements forward to go. Thus, zero steps forward leaves it at the first element.
This practice of counting from zero is initially confusing, but typical of modern programming languages. You’ll quickly get the hang of it if you’ve mastered the system of counting centuries where 19XY is a year in the 20th century, or if you live in a country where the floors of a building are numbered from 1, and so walking up n-1 flights of stairs takes you to level n.
Now, if we accidentally use an index that is too large, we get an error:

>>> sent[10]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
IndexError: list index out of range
>>>
This time it is not a syntax error, because the program fragment
is syntactically correct. Instead, it is a runtime error, and it produces a
Traceback message that shows the context of
the error, followed by the name of the error,
IndexError, and a brief explanation.
>>> sent[5:8]
['word6', 'word7', 'word8']
>>> sent[5]
'word6'
>>> sent[6]
'word7'
>>> sent[7]
'word8'
>>>
The slice [5:8] includes elements at indexes 5, 6, and 7; by convention, m:n means elements m…n-1. As the next example shows, we can omit the first number if the slice begins at the start of the list, and we can omit the second number if the slice goes to the end:
>>> sent[:3]
['word1', 'word2', 'word3']
>>> text2[141525:]
['among', 'the', 'merits', 'and', 'the', 'happiness', 'of', 'Elinor', 'and',
'Marianne', ',', 'let', 'it', 'not', 'be', 'ranked', 'as', 'the', 'least',
'considerable', ',', 'that', 'though', 'sisters', ',', 'and', 'living',
'almost', 'within', 'sight', 'of', 'each', 'other', ',', 'they', 'could',
'live', 'without', 'disagreement', 'between', 'themselves', ',', 'or',
'producing', 'coolness', 'between', 'their', 'husbands', '.', 'THE', 'END']
>>>
We can modify an element of a list by assigning to one of its index values. In the next example, we put sent[0] on the left of the equals sign. We can also replace an entire slice with new material. A consequence of this last change is that the list only has four elements, and accessing a later value generates an error.
>>> sent[0] = 'First'
>>> sent[9] = 'Last'
>>> len(sent)
10
>>> sent[1:9] = ['Second', 'Third']
>>> sent
['First', 'Second', 'Third', 'Last']
>>> sent[9]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
IndexError: list index out of range
>>>
Your Turn: Take a few minutes to define a sentence of your own and modify individual words and groups of words (slices) using the same methods used earlier. Check your understanding by trying the exercises on lists at the end of this chapter.
From the start of Computing with Language: Texts and Words, you have had access to texts called text1, text2, and so on. It saved a lot of typing to be able to refer to a 250,000-word book with a short name like this! In general, we can make up names for anything we care to calculate. We did this ourselves in the previous sections, e.g., defining a variable sent1, as follows:
>>> sent1 = ['Call', 'me', 'Ishmael', '.']
>>>
Such lines have the form: variable = expression. Python will evaluate the expression, and save its result to the variable. This process is called assignment. It does not generate any output; you have to type the variable on a line of its own to inspect its contents. The equals sign is slightly misleading, since information is moving from the right side to the left. It might help to think of it as a left-arrow. The name of the variable can be anything you like, e.g., my_sent, sentence, xyzzy. It must start with a letter, and can include numbers and underscores. Here are some examples of variables and assignments:
>>> my_sent = ['Bravely', 'bold', 'Sir', 'Robin', ',', 'rode',
... 'forth', 'from', 'Camelot', '.']
>>> noun_phrase = my_sent[1:4]
>>> noun_phrase
['bold', 'Sir', 'Robin']
>>> wOrDs = sorted(noun_phrase)
>>> wOrDs
['Robin', 'Sir', 'bold']
>>>
Notice in the previous example that we split the definition of my_sent over two lines. Python expressions can be split across multiple lines, so long as this happens within any kind of brackets. Python uses the ... prompt to indicate that more input is expected. It doesn’t matter how much indentation is used in these continuation lines, but some indentation usually makes them easier to read.
It is good to choose meaningful variable names to remind you—and to help anyone else who reads your Python code—what your code is meant to do. Python does not try to make sense of the names; it blindly follows your instructions, and does not object if you do something confusing, such as one = 'two' or two = 3. The only restriction is that a variable name cannot be any of Python’s reserved words, such as def, if, not, and import. If you use a reserved word, Python will produce a syntax error:
>>> not = 'Camelot'
  File "<stdin>", line 1
    not = 'Camelot'
        ^
SyntaxError: invalid syntax
>>>
We will often use variables to hold intermediate steps of a computation, especially when this makes the code easier to follow. Thus len(set(text1)) could also be written:
>>> vocab = set(text1)
>>> vocab_size = len(vocab)
>>> vocab_size
19317
>>>
Take care with your choice of names (or identifiers) for Python variables. First, you should start the name with a letter, optionally followed by digits (0 to 9) or letters. Thus, abc23 is fine, but 23abc will cause a syntax error. Names are case-sensitive, which means that myVar and myvar are distinct variables. Variable names cannot contain whitespace, but you can separate words using an underscore, e.g., my_var. Be careful not to insert a hyphen instead of an underscore: my-var is wrong, since Python interprets - as a minus sign.
Some of the operations we have applied to lists also work with individual words, or strings. For example, after assigning name = 'Monty', we can multiply a string and add strings:

>>> name = 'Monty'
>>> name * 2
'MontyMonty'
>>> name + '!'
'Monty!'
>>>
We can join the words of a list to make a single string, or split a string into a list, as follows:
>>> ' '.join(['Monty', 'Python'])
'Monty Python'
>>> 'Monty Python'.split()
['Monty', 'Python']
>>>
Let’s return to our exploration of the ways we can bring our computational resources to bear on large quantities of text. We began this discussion in Computing with Language: Texts and Words, and saw how to search for words in context, how to compile the vocabulary of a text, how to generate random text in the same style, and so on.
In this section, we pick up the question of what makes a text distinct, and use automatic methods to find characteristic words and expressions of a text. As in Computing with Language: Texts and Words, you can try new features of the Python language by copying them into the interpreter, and you’ll learn about these features systematically in the following section.
Before continuing further, you might like to check your understanding of the last section by predicting the output of the following code. You can use the interpreter to check whether you got it right. If you’re not sure how to do this task, it would be a good idea to review the previous section before continuing further.
>>> saying = ['After', 'all', 'is', 'said', 'and', 'done',
...           'more', 'is', 'said', 'than', 'done']
>>> tokens = set(saying)
>>> tokens = sorted(tokens)
>>> tokens[-2:]
what output do you expect here?
>>>
How can we automatically identify the words of a text that are most informative about the topic and genre of the text? Imagine how you might go about finding the 50 most frequent words of a book. One method would be to keep a tally for each vocabulary item, like that shown in Figure 1-3. The tally would need thousands of rows, and it would be an exceedingly laborious process—so laborious that we would rather assign the task to a machine.
The table in Figure 1-3 is known as a frequency distribution, and it tells us the frequency of each vocabulary item in the text. (In general, it could count any kind of observable event.) It is a “distribution” since it tells us how the total number of word tokens in the text are distributed across the vocabulary items. Since we often need frequency distributions in language processing, NLTK provides built-in support for them. Let’s use a FreqDist to find the 50 most frequent words of Moby Dick. Try to work out what is going on here, then read the explanation that follows.
>>> fdist1 = FreqDist(text1)
>>> fdist1
<FreqDist with 260819 outcomes>
>>> vocabulary1 = fdist1.keys()
>>> vocabulary1[:50]
[',', 'the', '.', 'of', 'and', 'a', 'to', ';', 'in', 'that', "'", '-',
'his', 'it', 'I', 's', 'is', 'he', 'with', 'was', 'as', '"', 'all', 'for',
'this', '!', 'at', 'by', 'but', 'not', '--', 'him', 'from', 'be', 'on',
'so', 'whale', 'one', 'you', 'had', 'have', 'there', 'But', 'or', 'were',
'now', 'which', '?', 'me', 'like']
>>> fdist1['whale']
906
>>>
When we first invoke FreqDist, we pass the name of the text as an argument. We can inspect the total number of words (“outcomes”) that have been counted up—260,819 in the case of Moby Dick. The expression keys() gives us a list of all the distinct types in the text, and we can look at the first 50 of these by slicing the list.
Your Turn: Try the
preceding frequency distribution example for yourself, for
text2. Be careful to use the correct
parentheses and uppercase letters. If you get an error message
NameError: name 'FreqDist' is not
defined, you need to start your work with
from nltk.book import *.
Do any words produced in the last example help us grasp the topic or genre of this text? Only one word, whale, is slightly informative! It occurs over 900 times. The rest of the words tell us nothing about the text; they’re just English “plumbing.” What proportion of the text is taken up with such words? We can generate a cumulative frequency plot for these words, using fdist1.plot(50, cumulative=True), to produce the graph in Figure 1-4. These 50 words account for nearly half the book!
If the frequent words don’t help us, how about the words that occur once only, the so-called hapaxes? View them by typing fdist1.hapaxes(). This list contains lexicographer, cetological, contraband, expostulations, and about 9,000 others. It seems that there are too many rare words, and without seeing the context we probably can’t guess what half of the hapaxes mean in any case! Since neither frequent nor infrequent words help, we need to try something else.
Next, let’s look at the long words of a text; perhaps these will be more characteristic and informative. For this we adapt some notation from set theory. We would like to find the words from the vocabulary of the text that are more than 15 characters long. Let’s call this property P, so that P(w) is true if and only if w is more than 15 characters long. Now we can express the words of interest using mathematical set notation as shown in (a). This means “the set of all w such that w is an element of V (the vocabulary) and w has property P.” The corresponding Python expression is given in (b). (Note that it produces a list, not a set, which means that duplicates are possible.)

a. {w | w ∈ V & P(w)}

b. [w for w in V if p(w)]

Observe how similar the two notations are. Let’s go one more step and write executable Python code:
>>> V = set(text1)
>>> long_words = [w for w in V if len(w) > 15]
>>> sorted(long_words)
['CIRCUMNAVIGATION', 'Physiognomically', 'apprehensiveness', 'cannibalistically',
'characteristically', 'circumnavigating', 'circumnavigation', 'circumnavigations',
'comprehensiveness', 'hermaphroditical', 'indiscriminately', 'indispensableness',
'irresistibleness', 'physiognomically', 'preternaturalness', 'responsibilities',
'simultaneousness', 'subterraneousness', 'supernaturalness', 'superstitiousness',
'uncomfortableness', 'uncompromisedness', 'undiscriminating', 'uninterpenetratingly']
>>>
For each word w in the vocabulary V, we check whether len(w) is greater than 15; all other words will be ignored. We will discuss this syntax more carefully later.
Your Turn: Try out the previous statements in the Python interpreter, and experiment with changing the text and changing the length condition. Does it make any difference to your results if you change the variable names, e.g., using [word for word in vocab if len(word) > 15]?
Let’s return to our task of finding words that characterize a text. Notice that the long words in text4 reflect its national focus—constitutionally, transcontinental—whereas those in text5 reflect its informal content: boooooooooooglyyyyyy and yuuuuuuuuuuuummmmmmmmmmmm. Have we succeeded in automatically extracting words that typify a text? Well, these very long words are often hapaxes (i.e., unique) and perhaps it would be better to find frequently occurring long words.
This seems promising since it eliminates frequent short words (e.g.,
the) and infrequent long words (e.g.,
antiphilosophists). Here are all words from the
chat corpus that are longer than seven characters, that occur more
than seven times:
>>> fdist5 = FreqDist(text5)
>>> sorted([w for w in set(text5) if len(w) > 7 and fdist5[w] > 7])
['#14-19teens', '#talkcity_adults', '((((((((((', '........', 'Question',
'actually', 'anything', 'computer', 'cute.-ass', 'everyone', 'football',
'innocent', 'listening', 'remember', 'seriously', 'something', 'together',
'tomorrow', 'watching']
>>>
Notice how we have used two conditions: len(w) > 7 ensures that the words are longer than seven letters, and fdist5[w] > 7 ensures that these words occur more than seven times. At last we have managed to automatically identify the frequently occurring content-bearing words of the text. It is a modest but important milestone: a tiny piece of code, processing tens of thousands of words, produces some informative output.
A collocation is a sequence of words that occur together unusually often. Thus red wine is a collocation, whereas the wine is not. A characteristic of collocations is that they are resistant to substitution with words that have similar senses; for example, maroon wine sounds very odd.
To get a handle on collocations, we start off by extracting from a text a list of word pairs, also known as bigrams. This is easily accomplished with the function bigrams():

>>> bigrams(['more', 'is', 'said', 'than', 'done'])
[('more', 'is'), ('is', 'said'), ('said', 'than'), ('than', 'done')]
>>>
Here we see that the pair of words than-done is a bigram, and we write it in Python as ('than', 'done'). Now, collocations are essentially just frequent bigrams, except that we want to pay more attention to the cases that involve rare words. In particular, we want to find bigrams that occur more often than we would expect based on the frequency of individual words. The collocations() function does this for us (we will see how it works later):
>>> text4.collocations()
Building collocations list
United States; fellow citizens; years ago; Federal Government; General
Government; American people; Vice President; Almighty God; Fellow
citizens; Chief Magistrate; Chief Justice; God bless; Indian tribes;
public debt; foreign nations; political parties; State governments;
National Government; United Nations; public money
>>> text8.collocations()
Building collocations list
medium build; social drinker; quiet nights; long term; age open;
financially secure; fun times; similar interests; Age open; poss
rship; single mum; permanent relationship; slim build; seeks lady;
Late 30s; Photo pls; Vibrant personality; European background; ASIAN
LADY; country drives
>>>
The collocations that emerge are very specific to the genre of the texts. In order to find red wine as a collocation, we would need to process a much larger body of text.
Counting words is useful, but we can count other things too. For example, we can look at the distribution of word lengths in a text, by creating a FreqDist out of a long list of numbers, where each number is the length of the corresponding word in the text:
>>> [len(w) for w in text1]
[1, 4, 4, 2, 6, 8, 4, 1, 9, 1, 1, 8, 2, 1, 4, 11, 5, 2, 1, 7, 6, 1, 3, 4, 5, 2, ...]
>>> fdist = FreqDist([len(w) for w in text1])
>>> fdist
<FreqDist with 260819 outcomes>
>>> fdist.keys()
[3, 1, 4, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20]
>>>
We start by deriving a list of the lengths of words in text1, and the FreqDist then counts the number of times each of these occurs. The result is a distribution containing a quarter of a million items, each of which is a number corresponding to a word token in the text. But there are only 20 distinct items being counted, the numbers 1 through 20, because there are only 20 different word lengths. I.e., there are words consisting of just 1 character, 2 characters, ..., 20 characters, but none with 21 or more characters. One might wonder how frequent the different lengths of words are (e.g., how many words of length 4 appear in the text, are there more words of length 5 than length 4, etc.). We can do this as follows:
>>> fdist.items()
[(3, 50223), (1, 47933), (4, 42345), (2, 38513), (5, 26597), (6, 17111),
(7, 14399), (8, 9966), (9, 6428), (10, 3528), (11, 1873), (12, 1053),
(13, 567), (14, 177), (15, 70), (16, 22), (17, 12), (18, 1), (20, 1)]
>>> fdist.max()
3
>>> fdist[3]
50223
>>> fdist.freq(3)
0.19255882431878046
>>>
From this we see that the most frequent word length is 3, and that words of length 3 account for roughly 50,000 (or 20%) of the words making up the book. Although we will not pursue it here, further analysis of word length might help us understand differences between authors, genres, or languages. Table 1-2 summarizes the functions defined in frequency distributions.
Table 1-2. Functions defined for NLTK’s frequency distributions

Example                      Description
FreqDist(samples)            Create a frequency distribution containing the given samples
fdist.inc(sample)            Increment the count for this sample
fdist['monstrous']           Count of the number of times a given sample occurred
fdist.freq('monstrous')      Frequency of a given sample
fdist.N()                    Total number of samples
fdist.keys()                 The samples sorted in order of decreasing frequency
for sample in fdist:         Iterate over the samples, in order of decreasing frequency
fdist.max()                  Sample with the greatest count
fdist.tabulate()             Tabulate the frequency distribution
fdist.plot()                 Graphical plot of the frequency distribution
fdist.plot(cumulative=True)  Cumulative plot of the frequency distribution
fdist1 < fdist2              Test if samples in fdist1 occur less frequently than in fdist2
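A quick check of a few of these functions on the word-length distribution built above (the values shown are the ones reported earlier for text1):

>>> fdist = FreqDist([len(w) for w in text1])
>>> fdist.N()
260819
>>> fdist.max()
3
>>> fdist.freq(3)
0.19255882431878046
>>>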
Our discussion of frequency distributions has introduced some important Python concepts, and we will look at them systematically in Back to Python: Making Decisions and Taking Control.
So far, our little programs have had some interesting qualities: the ability to work with language, and the potential to save human effort through automation. A key feature of programming is the ability of machines to make decisions on our behalf, executing instructions when certain conditions are met, or repeatedly looping through text data until some condition is satisfied. This feature is known as control, and is the focus of this section.
Python supports a wide range of operators, such as < and >=, for testing the relationship between values. The full set of these relational operators is shown in Table 1-3.
Table 1-3. Relational operators

Operator  Relationship
<         Less than
<=        Less than or equal to
==        Equal to (note this is two “=” signs, not one)
!=        Not equal to
>         Greater than
>=        Greater than or equal to
We can use these to select different words from a sentence of news text. Here are some examples—notice only the operator is changed from one line to the next. They all use sent7, the first sentence from text7 (Wall Street Journal). As before, if you get an error saying that sent7 is undefined, you need to first type from nltk.book import *.
>>> sent7
['Pierre', 'Vinken', ',', '61', 'years', 'old', ',', 'will', 'join', 'the',
'board', 'as', 'a', 'nonexecutive', 'director', 'Nov.', '29', '.']
>>> [w for w in sent7 if len(w) < 4]
[',', '61', 'old', ',', 'the', 'as', 'a', '29', '.']
>>> [w for w in sent7 if len(w) <= 4]
[',', '61', 'old', ',', 'will', 'join', 'the', 'as', 'a', 'Nov.', '29', '.']
>>> [w for w in sent7 if len(w) == 4]
['will', 'join', 'Nov.']
>>> [w for w in sent7 if len(w) != 4]
['Pierre', 'Vinken', ',', '61', 'years', 'old', ',', 'the', 'board', 'as',
'a', 'nonexecutive', 'director', '29', '.']
>>>
There is a common pattern to all of these examples: [w for w in text if condition], where condition is a Python “test” that yields either true or false. In the cases shown in the previous code example, the condition is always a numerical comparison. However, we can also test various properties of words, using the functions listed in Table 1-4.
Table 1-4. Some word comparison operators

Function         Meaning
s.startswith(t)  Test if s starts with t
s.endswith(t)    Test if s ends with t
t in s           Test if t is contained inside s
s.islower()      Test if all cased characters in s are lowercase
s.isupper()      Test if all cased characters in s are uppercase
s.isalpha()      Test if all characters in s are alphabetic
s.isalnum()      Test if all characters in s are alphanumeric
s.isdigit()      Test if all characters in s are digits
s.istitle()      Test if s is titlecased (all words in s have initial capitals)
Here are some examples of these operators being used to select words from our texts: words ending with -ableness; words containing gnt; words having an initial capital; and words consisting entirely of digits.
>>> sorted([w for w in set(text1) if w.endswith('ableness')])
['comfortableness', 'honourableness', 'immutableness', 'indispensableness', ...]
>>> sorted([term for term in set(text4) if 'gnt' in term])
['Sovereignty', 'sovereignties', 'sovereignty']
>>> sorted([item for item in set(text6) if item.istitle()])
['A', 'Aaaaaaaaah', 'Aaaaaaaah', 'Aaaaaah', 'Aaaah', 'Aaaaugh', 'Aaagh', ...]
>>> sorted([item for item in set(sent7) if item.isdigit()])
['29', '61']
>>>
We can also create more complex conditions. If c is a condition, then not c is also a condition. If we have two conditions c1 and c2, then we can combine them to form a new condition using conjunction and disjunction: c1 and c2, c1 or c2.
Your Turn: Run the following examples and try to explain what is going on in each one. Next, try to make up some conditions of your own.
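For instance, each of the following combines conditions with and, or, or not, applied to the texts loaded earlier (outputs omitted; run them to see):

>>> sorted([w for w in set(text7) if '-' in w and 'index' in w])
>>> sorted([wd for wd in set(text3) if wd.istitle() and len(wd) > 10])
>>> sorted([w for w in set(sent7) if not w.islower()])
>>> sorted([t for t in set(text2) if 'cie' in t or 'cei' in t])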
In Computing with Language: Simple Statistics, we saw some examples of counting items other than words. Let’s take a closer look at the notation we used:
>>> [len(w) for w in text1]
[1, 4, 4, 2, 6, 8, 4, 1, 9, 1, 1, 8, 2, 1, 4, 11, 5, 2, 1, 7, 6, 1, 3, 4, 5, 2, ...]
>>> [w.upper() for w in text1]
['[', 'MOBY', 'DICK', 'BY', 'HERMAN', 'MELVILLE', '1851', ']', 'ETYMOLOGY', '.', ...]
>>>
These expressions have the form [f(w) for ...] or [w.f() for ...], where f is a function that operates on a word to compute its length, or to convert it to uppercase. For now, you don’t need to understand the difference between the notations f(w) and w.f(). Instead, simply learn this Python idiom which performs the same operation on every element of a list. In the preceding examples, it goes through each word in text1, assigning each one in turn to the variable w and performing the specified operation on the variable.
The notation just described is called a “list comprehension.” This is our first example of a Python idiom, a fixed notation that we use habitually without bothering to analyze each time. Mastering such idioms is an important part of becoming a fluent Python programmer.
Let’s return to the question of vocabulary size, and apply the same idiom here:
>>> len(text1)
260819
>>> len(set(text1))
19317
>>> len(set([word.lower() for word in text1]))
17231
>>>
Now that we are not double-counting words like This and this, which differ only in capitalization, we’ve wiped 2,000 off the vocabulary count! We can go a step further and eliminate numbers and punctuation from the vocabulary count by filtering out any non-alphabetic items:
>>> len(set([word.lower() for word in text1 if word.isalpha()]))
16948
>>>
This example is slightly complicated: it lowercases all the purely alphabetic items. Perhaps it would have been simpler just to count the lowercase-only items, but this gives the wrong answer (why?).
Don’t worry if you don’t feel confident with list comprehensions yet, since you’ll see many more examples along with explanations in the following chapters.
Most programming languages permit us to execute a block of code when a conditional expression, or if statement, is satisfied. We already saw examples of conditional tests in code like [w for w in sent7 if len(w) < 4]. In the following program, we have created a variable called word containing the string value 'cat'. The if statement checks whether the test len(word) < 5 is true. It is, so the body of the if statement is invoked and the print statement is executed, displaying a message to the user.
>>> word = 'cat'
>>> if len(word) < 5:
...     print 'word length is less than 5'
...
word length is less than 5
>>>
If we change the conditional test to len(word) >= 5, to check whether the length of word is greater than or equal to 5, then the test will no longer be true. This time, the body of the if statement will not be executed, and no message is shown to the user. An if statement is known as a control structure because it controls whether the code in the indented block will be run. Another control structure is the for loop. Try the following, and remember to include the colon and the four spaces:
>>> for word in ['Call', 'me', 'Ishmael', '.']:
...     print word
...
Call
me
Ishmael
.
>>>
This is called a loop because Python executes the code in circular fashion. It starts by performing the assignment word = 'Call', effectively using the word variable to name the first item of the list. Then, it displays the value of word to the user. Next, it goes back to the for statement, and performs the next assignment, word = 'me', before displaying this new value to the user, and so on. It continues in this fashion until every item of the list has been processed.
Now we can combine the if and for statements. We will loop over every item of the list, and print the item only if it ends with the letter l. We’ll pick another name for the variable to demonstrate that Python doesn’t try to make sense of variable names:
>>> sent1 = ['Call', 'me', 'Ishmael', '.']
>>> for xyzzy in sent1:
...     if xyzzy.endswith('l'):
...         print xyzzy
...
Call
Ishmael
>>>
You will notice that if and for statements have a colon at the end of the line, before the indentation begins. In fact, all Python control structures end with a colon. The colon indicates that the current statement relates to the indented block that follows.
We can also specify an action to be taken if the condition of the if statement is not met. Here we see the elif (else if) statement, and the else statement. Notice that these also have colons before the indented code.
>>> for token in sent1:
...     if token.islower():
...         print token, 'is a lowercase word'
...     elif token.istitle():
...         print token, 'is a titlecase word'
...     else:
...         print token, 'is punctuation'
...
Call is a titlecase word
me is a lowercase word
Ishmael is a titlecase word
. is punctuation
>>>
As you can see, even with this small amount of Python knowledge, you can start to build multiline Python programs. It’s important to develop such programs in pieces, testing that each piece does what you expect before combining them into a program. This is why the Python interactive interpreter is so invaluable, and why you should get comfortable using it.
Finally, let’s combine the idioms we’ve been exploring. First, we create a list of cie and cei words, then we loop over each item and print it. Notice the comma at the end of the print statement, which tells Python to produce its output on a single line.
>>> tricky = sorted([w for w in set(text2) if 'cie' in w or 'cei' in w])
>>> for word in tricky:
...     print word,
ancient ceiling conceit conceited conceive conscience
conscientious conscientiously deceitful deceive ...
>>>
We have been exploring language bottom-up, with the help of texts and the Python programming language. However, we’re also interested in exploiting our knowledge of language and computation by building useful language technologies. We’ll take the opportunity now to step back from the nitty-gritty of code in order to paint a bigger picture of natural language processing.
At a purely practical level, we all need help to navigate the universe of information locked up in text on the Web. Search engines have been crucial to the growth and popularity of the Web, but have some shortcomings. It takes skill, knowledge, and some luck, to extract answers to such questions as: What tourist sites can I visit between Philadelphia and Pittsburgh on a limited budget? What do experts say about digital SLR cameras? What predictions about the steel market were made by credible commentators in the past week? Getting a computer to answer them automatically involves a range of language processing tasks, including information extraction, inference, and summarization, and would need to be carried out on a scale and with a level of robustness that is still beyond our current capabilities.
On a more philosophical level, a long-standing challenge within artificial intelligence has been to build intelligent machines, and a major part of intelligent behavior is understanding language. For many years this goal has been seen as too difficult. However, as NLP technologies become more mature, and robust methods for analyzing unrestricted text become more widespread, the prospect of natural language understanding has re-emerged as a plausible goal.
In this section we describe some language understanding technologies, to give you a sense of the interesting challenges that are waiting for you.
In word sense disambiguation we want to work out which sense of a word was intended in a given context. Consider the ambiguous words serve and dish:

serve: help with food or drink; hold an office; put ball into play
dish: plate; course of a meal; communications device
In a sentence containing the phrase: he served the dish, you can detect that both serve and dish are being used with their food meanings. It’s unlikely that the topic of discussion shifted from sports to crockery in the space of three words. This would force you to invent bizarre images, like a tennis pro taking out his frustrations on a china tea-set laid out beside the court. In other words, we automatically disambiguate words using context, exploiting the simple fact that nearby words have closely related meanings. As another example of this contextual effect, consider the word by, which has several meanings, for example, the book by Chesterton (agentive—Chesterton was the author of the book); the cup by the stove (locative—the stove is where the cup is); and submit by Friday (temporal—Friday is the time of the submitting). Observe in Example 1-3 that the meaning of the italicized word helps us interpret the meaning of by.
A deeper kind of language understanding is to work out “who did what to whom,” i.e., to detect the subjects and objects of verbs. You learned to do this in elementary school, but it’s harder than you might think. In the sentence the thieves stole the paintings, it is easy to tell who performed the stealing action. Consider three possible following sentences in Example 1-4, and try to determine what was sold, caught, and found (one case is ambiguous).
The thieves stole the paintings. They were subsequently sold.
The thieves stole the paintings. They were subsequently caught.
The thieves stole the paintings. They were subsequently found.
Answering this question involves finding the antecedent of the pronoun they, either thieves or paintings. Computational techniques for tackling this problem include anaphora resolution—identifying what a pronoun or noun phrase refers to—and semantic role labeling—identifying how a noun phrase relates to the verb (as agent, patient, instrument, and so on).
If we can automatically solve such problems of language understanding, we will be able to move on to tasks that involve generating language output, such as question answering and machine translation. In the first case, a machine should be able to answer a user’s questions relating to a collection of texts:
Text: ... The thieves stole the paintings. They were subsequently sold. ...
Human: Who or what was sold?
Machine: The paintings.
The machine’s answer demonstrates that it has correctly worked out that they refers to paintings and not to thieves. In the second case, the machine should be able to translate the text into another language, accurately conveying the meaning of the original text. In translating the example text into French, we are forced to choose the gender of the pronoun in the second sentence: ils (masculine) if the thieves are sold, and elles (feminine) if the paintings are sold. Correct translation actually depends on correct understanding of the pronoun.
The thieves stole the paintings. They were subsequently found.
Les voleurs ont volé les peintures. Ils ont été trouvés plus tard. (the thieves)
Les voleurs ont volé les peintures. Elles ont été trouvées plus tard. (the paintings)
In all of these examples, working out the sense of a word, the subject of a verb, and the antecedent of a pronoun are steps in establishing the meaning of a sentence, things we would expect a language understanding system to be able to do.
For a long time now, machine translation (MT) has been the holy grail of language understanding, ultimately seeking to provide high-quality, idiomatic translation between any pair of languages. Its roots go back to the early days of the Cold War, when the promise of automatic translation led to substantial government sponsorship, and with it, the genesis of NLP itself.
Today, practical translation systems exist for particular pairs
of languages, and some are integrated into web search engines.
However, these systems have some serious shortcomings. We can explore
them with the help of NLTK’s “babelizer” (which is automatically
loaded when you import this chapter’s materials using
from nltk.book import *). This program
submits a sentence for translation into a specified language, then
submits the resulting sentence for translation back into English. It
stops after 12 iterations, or if it receives a translation that was
produced already (indicating a loop):
>>> babelize_shell()
NLTK Babelizer: type 'help' for a list of commands.
Babel> how long before the next flight to Alice Springs?
Babel> german
Babel> run
0> how long before the next flight to Alice Springs?
1> wie lang vor dem folgenden Flug zu Alice Springs?
2> how long before the following flight to Alice jump?
3> wie lang vor dem folgenden Flug zu Alice springen Sie?
4> how long before the following flight to Alice do you jump?
5> wie lang, bevor der folgende Flug zu Alice tun, Sie springen?
6> how long, before the following flight to Alice does, do you jump?
7> wie lang bevor der folgende Flug zu Alice tut, tun Sie springen?
8> how long before the following flight to Alice does, do you jump?
9> wie lang, bevor der folgende Flug zu Alice tut, tun Sie springen?
10> how long, before the following flight does to Alice, do do you jump?
11> wie lang bevor der folgende Flug zu Alice tut, Sie tun Sprung?
12> how long before the following flight does leap to Alice, does you?
Observe that the system correctly translates Alice Springs from English to German (in the line starting 1>), but on the way back to English, this ends up as Alice jump (line 2). The preposition before is initially translated into the corresponding German preposition vor, but later into the conjunction bevor (line 5). After line 5 the sentences become nonsensical (but notice the various phrasings indicated by the commas, and the change from jump to leap). The translation system did not recognize when a word was part of a proper name, and it misinterpreted the grammatical structure. The grammatical problems are more obvious in the following example. Did John find the pig, or did the pig find John?
>>> babelize_shell()
Babel> The pig that John found looked happy
Babel> german
Babel> run
0> The pig that John found looked happy
1> Das Schwein, das John fand, schaute gl?cklich
2> The pig, which found John, looked happy
Machine translation is difficult because a given word could have several possible translations (depending on its meaning), and because word order must be changed in keeping with the grammatical structure of the target language. Today these difficulties are being faced by collecting massive quantities of parallel texts from news and government websites that publish documents in two or more languages. Given a document in German and English, and possibly a bilingual dictionary, we can automatically pair up the sentences, a process called text alignment. Once we have a million or more sentence pairs, we can detect corresponding words and phrases, and build a model that can be used for translating new text.
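To make the alignment step concrete, here is a minimal sketch, assuming two hypothetical lists of sentence strings (german and english); it pairs sentences position by position and keeps only pairs whose character lengths are roughly compatible, the intuition behind classic length-based aligners. It is an illustration of the idea, not a real alignment system:
def align(source_sents, target_sents, ratio=1.3):
    # Toy aligner: walk the two documents in step and accept a pair
    # only when the sentence lengths are roughly proportional.
    pairs = []
    for s, t in zip(source_sents, target_sents):
        if len(s) <= ratio * len(t) and len(t) <= ratio * len(s):
            pairs.append((s, t))
    return pairs

german = ['Die Diebe haben die Gemaelde gestohlen.']
english = ['The thieves stole the paintings.']
print align(german, english)
Real alignment must also handle sentences that split or merge in translation, which is why statistical models are used in practice.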
In the history of artificial intelligence, the chief measure of intelligence has been a linguistic one, namely the Turing Test: can a dialogue system, responding to a user’s text input, perform so naturally that we cannot distinguish it from a human-generated response? In contrast, today’s commercial dialogue systems are very limited, but still perform useful functions in narrowly defined domains, as we see here:
S: How may I help you?
U: When is Saving Private Ryan playing?
S: For what theater?
U: The Paramount theater.
S: Saving Private Ryan is not playing at the Paramount theater, but it’s playing at the Madison theater at 3:00, 5:30, 8:00, and 10:30.
You could not ask this system to provide driving instructions or details of nearby restaurants unless the required information had already been stored and suitable question-answer pairs had been incorporated into the language processing system.
Observe that this system seems to understand the user’s goals: the user asks when a movie is showing and the system correctly determines from this that the user wants to see the movie. This inference seems so obvious that you probably didn’t notice it was made, yet a natural language system needs to be endowed with this capability in order to interact naturally. Without it, when asked, Do you know when Saving Private Ryan is playing?, a system might unhelpfully respond with a cold Yes. However, the developers of commercial dialogue systems use contextual assumptions and business logic to ensure that the different ways in which a user might express requests or provide information are handled in a way that makes sense for the particular application. So, if you type When is ..., or I want to know when ..., or Can you tell me when ..., simple rules will always yield screening times. This is enough for the system to provide a useful service.
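The “simple rules” alluded to here can be as simple as a handful of regular expressions. The following sketch is purely illustrative (the pattern list and the SHOWTIMES label are invented for this example, not taken from any real system):
import re

# Each alternative phrasing of the request maps to the same request type.
patterns = [
    (r'when is (.*) playing', 'SHOWTIMES'),
    (r'i want to know when (.*) is playing', 'SHOWTIMES'),
    (r'can you tell me when (.*) is playing', 'SHOWTIMES'),
]

def interpret(utterance):
    for pattern, request in patterns:
        match = re.search(pattern, utterance.lower())
        if match:
            return (request, match.group(1))  # request type, movie name
    return ('UNKNOWN', None)

print interpret('Can you tell me when Saving Private Ryan is playing?')
This prints ('SHOWTIMES', 'saving private ryan'); all three phrasings yield the same request, which is what lets the system respond uniformly.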
Dialogue systems give us an opportunity to mention the commonly assumed pipeline for NLP. Figure 1-5 shows the architecture of a simple dialogue system. Along the top of the diagram, moving from left to right, is a “pipeline” of some language understanding components. These map from speech input via syntactic parsing to some kind of meaning representation. Along the middle, moving from right to left, is the reverse pipeline of components for converting concepts to speech. These components make up the dynamic aspects of the system. At the bottom of the diagram are some representative bodies of static information: the repositories of language-related data that the processing components draw on to do their work.
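Viewed as code, such a pipeline is just function composition: each stage consumes the previous stage’s output. The skeleton below is purely illustrative; every stage is a stub standing in for a substantial component:
# Illustrative stubs for the understanding pipeline of Figure 1-5.
def recognize(audio):         # speech recognition: audio -> text
    return 'when is saving private ryan playing'
def parse(text):              # syntactic parsing: text -> structure
    return {'wh': 'when', 'movie': 'saving private ryan'}
def to_meaning(structure):    # semantics: structure -> meaning representation
    return ('SHOWTIMES', structure['movie'])

def understand(audio):
    return to_meaning(parse(recognize(audio)))
The generation side of the figure would run the same idea in reverse, from a meaning representation back out to synthesized speech.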
The challenge of language understanding has been brought into focus in recent years by a public “shared task” called Recognizing Textual Entailment (RTE). The basic scenario is simple. Suppose you want to find evidence to support the hypothesis: Sandra Goudie was defeated by Max Purnell, and that you have another short text that seems to be relevant, for example, Sandra Goudie was first elected to Parliament in the 2002 elections, narrowly winning the seat of Coromandel by defeating Labour candidate Max Purnell and pushing incumbent Green MP Jeanette Fitzsimons into third place. Does the text provide enough evidence for you to accept the hypothesis? In this particular case, the answer will be “No.” You can draw this conclusion easily, but it is very hard to come up with automated methods for making the right decision. The RTE Challenges provide data that allow competitors to develop their systems, but not enough data for “brute force” machine learning techniques (a topic we will cover in Chapter 6). Consequently, some linguistic analysis is crucial. In the previous example, it is important for the system to note that Sandra Goudie names the person being defeated in the hypothesis, not the person doing the defeating in the text. As another illustration of the difficulty of the task, consider the following text-hypothesis pair:
Text: David Golinkin is the editor or author of 18 books, and over 150 responsa, articles, sermons and books
Hypothesis: Golinkin has written 18 books
In order to determine whether the hypothesis is supported by the text, the system needs the following background knowledge: (i) if someone is an author of a book, then he/she has written that book; (ii) if someone is an editor of a book, then he/she has not written (all of) that book; (iii) if someone is editor or author of 18 books, then one cannot conclude that he/she is author of 18 books.
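For contrast, here is a minimal sketch of the naive word-overlap baseline that knowledge-free systems reduce to (the function and threshold are invented for illustration): it accepts a hypothesis whenever most of its words also appear in the text, and so it wrongly accepts the Goudie pair above, where the roles of defeater and defeated are reversed:
def word_overlap_entails(text, hypothesis, threshold=0.8):
    # Accept the hypothesis if at least `threshold` of its word types
    # also occur in the text -- no grammar, no roles, no knowledge.
    text_words = set(text.lower().split())
    hyp_words = set(hypothesis.lower().split())
    overlap = len(hyp_words & text_words) / float(len(hyp_words))
    return overlap >= threshold
A genuine RTE system needs the kind of background knowledge itemized above, which simple overlap cannot supply.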
Despite the research-led advances in tasks such as RTE, natural language systems that have been deployed for real-world applications still cannot perform common-sense reasoning or draw on world knowledge in a general and robust manner. We can wait for these difficult artificial intelligence problems to be solved, but in the meantime it is necessary to live with some severe limitations on the reasoning and knowledge capabilities of natural language systems. Accordingly, right from the beginning, an important goal of NLP research has been to make progress on the difficult task of building technologies that “understand language,” using superficial yet powerful techniques instead of unrestricted knowledge and reasoning capabilities. Indeed, this is one of the goals of this book, and we hope to equip you with the knowledge and skills to build useful NLP systems, and to contribute to the long-term aspiration of building intelligent machines.
Texts are represented in Python using lists:
['Monty', 'Python']. We can use indexing,
slicing, and the len() function on lists.
A word “token” is a particular appearance of a given word in a
text; a word “type” is the unique form of the word as a particular
sequence of letters. We count word tokens using
len(text) and word types using len(set(text)).
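For example, with a tiny made-up sentence:
>>> sent = ['to', 'be', 'or', 'not', 'to', 'be']
>>> len(sent)         # tokens: every occurrence counts
6
>>> len(set(sent))    # types: 'to' and 'be' are counted once each
4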
We obtain the vocabulary of a text t using sorted(set(t)).
We operate on each item of a text using
[f(x) for x in text].
To derive the vocabulary, collapsing case distinctions and
ignoring punctuation, we can write
set([w.lower() for w in text if w.isalpha()]).
We process each word in a text using a
for statement, such as
for w in t: or
for word in text:. This must be followed
by the colon character and an indented block of code, to be executed
each time through the loop.
We test a condition using an if statement:
if len(word) < 5:. This must be followed by the colon
character and an indented block of code, to be executed only if the
condition is true.
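Combining the two, a small illustrative loop over a hypothetical list of words:
>>> sent = ['call', 'me', 'Ishmael']
>>> for w in sent:
...     if len(w) < 5:
...         print w
call
me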
A frequency distribution is a collection of items along with their frequency counts (e.g., the words of a text and their frequency of appearance).
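In NLTK this is provided by the FreqDist class. For example, counting a short list of words:
>>> from nltk import FreqDist
>>> fdist = FreqDist(['to', 'be', 'or', 'not', 'to', 'be'])
>>> fdist['to']
2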
A function is a block of code that has been assigned a name
and can be reused. Functions are defined using the
def keyword, as in def mult(x, y); x and y are
parameters of the function, and act as placeholders for actual data values.
A function is called by specifying its name followed by one or
more arguments inside parentheses, like this:
mult(3, 4), e.g., len(text1).
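For example, defining and then calling the mult function mentioned above:
>>> def mult(x, y):
...     return x * y
...
>>> mult(3, 4)
12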
This chapter has introduced new concepts in programming, natural language processing, and linguistics, all mixed in together. Many of them are consolidated in the following chapters. However, you may also want to consult the online materials provided with this chapter (at http://www.nltk.org/), including links to additional background materials, and links to online NLP systems. You may also like to read up on some linguistics and NLP-related concepts in Wikipedia (e.g., collocations, the Turing Test, the type-token distinction).
You should acquaint yourself with the Python documentation available at http://docs.python.org/, including the many tutorials and comprehensive reference materials linked there. A Beginner’s Guide to Python is available at http://wiki.python.org/moin/BeginnersGuide. Miscellaneous questions about Python might be answered in the FAQ at http://www.python.org/doc/faq/general/.
As you delve into NLTK, you might want to subscribe to the mailing list where new releases of the toolkit are announced. There is also an NLTK-Users mailing list, where users help each other as they learn how to use Python and NLTK for language analysis work. Details of these lists are available at http://www.nltk.org/.
For more information on the topics covered in Automatic Natural Language Understanding, and on NLP more generally, you might like to consult one of the following excellent books:
Indurkhya, Nitin and Fred Damerau (eds., 2010) Handbook of Natural Language Processing (second edition), Chapman & Hall/CRC.
Jurafsky, Daniel and James Martin (2008) Speech and Language Processing (second edition), Prentice Hall.
Mitkov, Ruslan (ed., 2002) The Oxford Handbook of Computational Linguistics. Oxford University Press. (second edition expected in 2010).
The Association for Computational Linguistics is the international organization that represents the field of NLP. The ACL website hosts many useful resources, including: information about international and regional conferences and workshops; the ACL Wiki with links to hundreds of useful resources; and the ACL Anthology, which contains most of the NLP research literature from the past 50 years, fully indexed and freely downloadable.
Some excellent introductory linguistics textbooks are: (Finegan, 2007), (O’Grady et al., 2004), (OSU, 2007). You might like to consult LanguageLog, a popular linguistics blog with occasional posts that use the techniques described in this book.
○ Try using the Python interpreter as a calculator, and typing
expressions like 12 / (4 + 1).
○ Given an alphabet of 26 letters, there are 26 to the power
10, or 26 ** 10, 10-letter
strings we can form. That works out to
141167095653376L (the
L at the end just indicates that this is
Python’s long-number format). How many hundred-letter strings are
possible?
○ The Python multiplication operation can be applied to lists.
What happens when you type
['Monty', 'Python'] * 20, or
3 * sent1?
○ Review Computing with Language: Texts and Words on
computing with language. How many words are there in
text2? How many distinct words are there?
○ Compare the lexical diversity scores for humor and romance fiction in Table 1-1. Which genre is more lexically diverse?
○ Produce a dispersion plot of the four main protagonists in Sense and Sensibility: Elinor, Marianne, Edward, and Willoughby. What can you observe about the different roles played by the males and females in this novel? Can you identify the couples?
○ Find the collocations in text5.
○ Consider the following Python expression:
len(set(text4)). State the purpose of this
expression. Describe the two steps involved in performing this computation.
○ Review A Closer Look at Python: Texts as Lists of Words on lists and strings.
Define a string and assign it to a variable, e.g.,
my_string = 'My String' (but
put something more interesting in the string). Print the
contents of this variable in two ways, first by simply typing
the variable name and pressing Enter, then by using the print statement.
Try adding the string to itself using
my_string + my_string, or multiplying
it by a number, e.g.,
my_string * 3. Notice that the strings are joined together without
any spaces. How could you fix this?
○ Define a variable my_sent
to be a list of words, using the syntax
my_sent = ["My", "sent"] (but with your
own words, or a favorite saying).
Use ' '.join(my_sent) to convert this into a string.
Use split() to split
the string back into the list form you had to start with.
○ Define several variables containing lists of words, e.g.,
phrase1, phrase2, and so on. Join them together in
various combinations (using the plus operator) to form whole
sentences. What is the relationship between
len(phrase1 + phrase2) and
len(phrase1) + len(phrase2)?
○ Consider the following two expressions, which have the same value. Which one will typically be more relevant in NLP? Why?
"Monty Python"[6:12]
["Monty", "Python"][1]
○ We have seen how to represent a sentence as a list of words,
where each word is a sequence of characters. What does
sent1[2][2] do? Why? Experiment with other index values.
○ The first sentence of
text3 is provided to you in the variable
sent3. The index of the in
sent3 is 1, because
sent3[1] gives us
'the'. What are the indexes of
the two other occurrences of this word in sent3?
○ Review the discussion of conditionals in Back to Python: Making Decisions and Taking Control. Find all words in the Chat Corpus
(text5) starting with the letter
b. Show them in alphabetical order.
○ Type the expression
range(10) at the interpreter prompt. Now
try range(10, 20), range(10, 20, 2), and
range(20, 10, -2). We will see a variety
of uses for this built-in function in later chapters.
○ Use text9.index() to find
the index of the word sunset. You’ll need to
insert this word as an argument between the parentheses. By a
process of trial and error, find the slice for the complete sentence
that contains this word.
○ What is the difference between the following two lines? Which one will give a larger value? Will this be the case for other texts?
>>> sorted(set([w.lower() for w in text1]))
>>> sorted([w.lower() for w in set(text1)])
○ Review the discussion of looping with conditions in Back to Python: Making Decisions and Taking Control. Use a combination of
for and if statements to loop over the words of
the movie script for Monty Python and the Holy
Grail (text6) and print all the uppercase words, one per line.
○ Write expressions for finding all words in text6 that meet the following conditions. The result should be in the form of a list of words: ['word1', 'word2', ...].
Ending in ize
Containing the letter z
Containing the sequence of letters pt
All lowercase letters except for an initial capital (i.e., titlecase)
○ Define sent to be the list of words ['she', 'sells', 'sea', 'shells', 'by', 'the', 'sea', 'shore']. Now write code to perform the following tasks:
Print all words beginning with sh.
Print all words longer than four characters.
○ We have been using sets to store vocabularies. Try the
following Python expression:
set(sent3) < set(text1). Experiment with this using different
arguments to set(). What does it
do? Can you think of a practical application for this?