Chapter 1. Getting Started
Introduction
This book is about using Spark NLP to build natural language processing (NLP) applications. Spark NLP is an NLP library built on top of Apache Spark. In this book I’ll cover how to use Spark NLP, as well as fundamental natural language processing topics. Hopefully, at the end of this book you’ll have a new software tool for working with natural language and Spark NLP, as well as a suite of techniques and some understanding of why these techniques work.
Let’s begin by talking about the structure of this book. In the first part, we’ll go over the technologies and techniques we’ll be using with Spark NLP throughout this book. After that we’ll talk about the building blocks of NLP. Finally, we’ll talk about NLP applications and systems.
When working on an application that requires NLP, there are three perspectives you should keep in mind: the software developer’s perspective, the linguist’s perspective, and the data scientist’s perspective. The software developer’s perspective focuses on what your application needs to do; this grounds the work in terms of the product you want to create. The linguist’s perspective focuses on what it is in the data that you want to extract. The data scientist’s perspective focuses on how you can extract the information you need from your data.
Following is a more detailed overview of the book.
- Part I, Basics
- Chapter 1 covers setting up your environment so you can follow along with the examples and exercises in the book.
- Chapter 2, Natural Language Basics is a survey of some of the linguistic concepts that help in understanding why NLP techniques work, and how to use NLP techniques to get the information you need from language.
- Chapter 3, NLP on Apache Spark is an introduction to Apache Spark and, most germane, the Spark NLP library.
- Chapter 4, Deep Learning Basics is a survey of some of the deep learning concepts that we’ll be using in this book. This book is not a tutorial on deep learning, but we’ll try to explain these techniques when necessary.
- Part II, Building Blocks
- Chapter 5, Processing Words covers the classic text-processing techniques. Since NLP applications generally require a pipeline of transformations, understanding the early steps well is a necessity.
- Chapter 6, Information Retrieval covers the basic concepts of search engines. Not only is this a classic example of an application that uses text, but many NLP techniques used in other kinds of applications ultimately come from information retrieval.
- Chapter 7, Classification and Regression covers some well-established techniques of using text features for classification and regression tasks.
- Chapter 8, Sequence Modeling with Keras introduces techniques used in modeling natural language text data as sequences. Since natural language is a sequence, these techniques are fundamental.
- Chapter 9, Information Extraction shows how we can extract facts and relationships from text.
- Chapter 10, Topic Modeling demonstrates techniques for finding topics in documents. Topic modeling is a great way to explore text.
- Chapter 11, Word Embeddings discusses one of the most popular modern techniques for creating features from text.
- Part III, Applications
- Chapter 12, Sentiment Analysis and Emotion Detection covers some basic applications that require identifying the sentiment of a text’s author—for example, whether a movie review is positive or negative.
- Chapter 13, Building Knowledge Bases explores creating an ontology, a collection of facts and relationships organized in a graph-like manner, from a corpus.
- Chapter 14, Search Engine goes deeper into what can be done to improve a search engine. Improving is not just about improving the ranker; it’s also about helping users with features like facets.
- Chapter 15, Chatbot demonstrates how to create a chatbot, a fun and interesting kind of application that is becoming more and more popular.
- Chapter 16, Optical Character Recognition introduces converting text stored as images into text data. Not all text is stored as text data: handwriting and old documents are examples of texts we may receive as images, and we sometimes also have to deal with nonhandwritten text stored in images, such as PDF images and scans of printed documents.
- Part IV, Building NLP Systems
- Chapter 17, Supporting Multiple Languages explores topics that an application creator should consider when preparing to work with multiple languages.
- Chapter 18, Human Labeling covers ways to use humans to gather data about texts. Being able to efficiently use humans to augment data can make an otherwise impossible project feasible.
- Chapter 19, Productionizing NLP Applications covers creating models, Spark NLP pipelines, and TensorFlow graphs, and publishing them for use in production; some of the performance concerns that developers should keep in mind when designing a system that uses text; and the quality and monitoring concerns that are unique to NLP applications.
Other Tools
In addition to Spark NLP, Apache Spark, and TensorFlow, we’ll make use of a number of other tools:
- Python is one of the most popular programming languages used in data science. Although Spark NLP is implemented in Scala, we will be demonstrating its use through Python.
- Anaconda is an open source distribution of Python (and R, which we are not using). It is maintained by Anaconda, Inc., who also offer an enterprise platform and training courses. We’ll use the Anaconda package manager, conda, to create our environment.
- Jupyter Notebook is a tool for executing code in the browser. Jupyter Notebook also allows you to write markdown and display visualizations, all in the browser. In fact, this book was written as a Jupyter notebook before being converted to a publishable format. Jupyter Notebook is maintained by Project Jupyter, which is a nonprofit dedicated to supporting interactive data-science tools.
- Docker is a tool for easily creating containers, which are lightweight virtual-machine-like environments. We’ll use Docker as an alternative to setting up Anaconda directly. It is maintained by Docker, Inc.
Setting Up Your Environment
In this book, almost every chapter has exercises, so it is useful to make sure your environment is working from the beginning. We’ll use Jupyter notebooks throughout, and the kernel we’ll use is the baseline Python 3.6 kernel. The instructions here use Anaconda to set up a Python virtual environment.
You can also use the Docker image to get the necessary environment.
These instructions were created from the set-up process for Ubuntu. There are additional set-up instructions online at the project’s GitHub page.
Prerequisites
- Anaconda
  - To set up Anaconda, follow the instructions.
- Apache Spark
  - To set up Apache Spark, follow the instructions. Make sure that SPARK_HOME is set to the location of your Apache Spark installation.
    - If you are on Linux or macOS, you can put export SPARK_HOME="/path/to/spark" in your shell profile (for example, .bash_profile).
    - If you are on Windows, you can use System Properties to set an environment variable named SPARK_HOME to "/path/to/spark".
  - This was written against Apache Spark 2.4.
Optional: Set up a password for your Jupyter notebook server.
Starting Apache Spark
$ echo $SPARK_HOME
/path/to/your/spark/installation
$ spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
...
Spark context Web UI available at localhost:4040
Spark context available as 'sc' (master = local[*], app id = ...).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_102)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
Checking Out the Code
- Go to the GitHub repo for this project
- Check out the code, and run the following code examples in a terminal:
- Clone the repo: git clone https://github.com/alexander-n-thomas/spark-nlp-book.git
- Create the Anaconda environment (this will take a while): conda env create -f environment.yml
- Activate the new environment: source activate spark-nlp-book
- Create the kernel for this environment: ipython kernel install --user --name=sparknlpbook
- Launch the notebook server: jupyter notebook
- Go to your notebook page at localhost:8888
Getting Familiar with Apache Spark
Now that we’re all set up, let’s start using Spark NLP! We will be using the 20 Newsgroups Data Set from the University of California–Irvine Machine Learning Repository. For this first example we use the mini_newsgroups data set. Download the TAR file and extract it into the data folder for this project.
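If you prefer to script this step, the following is a minimal sketch using only the Python standard library; the download URL is a placeholder that you should replace with the actual link from the UCI repository page:

import os
import tarfile
import urllib.request

# Placeholder: replace with the download link from the UCI Machine Learning Repository page.
MINI_NEWSGROUPS_URL = 'https://example.org/mini_newsgroups.tar.gz'
archive_path = os.path.join('data', 'mini_newsgroups.tar.gz')

os.makedirs('data', exist_ok=True)
urllib.request.urlretrieve(MINI_NEWSGROUPS_URL, archive_path)

# Extracting creates data/mini_newsgroups/<newsgroup>/<message files>.
with tarfile.open(archive_path, 'r:gz') as tar:
    tar.extractall('data')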
! ls ./data/mini_newsgroups
Starting Apache Spark with Spark NLP
There are many ways we can use Apache Spark from Jupyter notebooks. We could use a specialized kernel, but I generally prefer using a simple kernel. Fortunately, Spark NLP gives us an easy way to start up.
import sparknlp
import pyspark
from pyspark import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql import functions as fun
from pyspark.sql.types import *

%matplotlib inline
import matplotlib.pyplot as plt

packages = ','.join([
    "JohnSnowLabs:spark-nlp:1.6.3",
])

spark_conf = SparkConf()
spark_conf = spark_conf.setAppName('spark-nlp-book-p1c1')
# run Spark locally, using all available cores
spark_conf = spark_conf.setMaster('local[*]')
spark_conf = spark_conf.set("spark.jars.packages", packages)

spark = SparkSession.builder.config(conf=spark_conf).getOrCreate()
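As an aside, newer releases of Spark NLP ship a small helper that builds a suitably configured SparkSession for you. If you are on a more recent version than the 1.6.x used here, starting up can be as short as the following sketch:

import sparknlp

# Builds and returns a SparkSession with the Spark NLP jars already configured
# (available in newer Spark NLP releases, not in 1.6.x).
spark = sparknlp.start()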
Loading and Viewing Data in Apache Spark
Let’s look at how we can load data with Apache Spark and then at some ways we can view the data.
import os

mini_newsgroups_path = os.path.join('data', 'mini_newsgroups', '*')
texts = spark.sparkContext.wholeTextFiles(mini_newsgroups_path)

schema = StructType([
    StructField('filename', StringType()),
    StructField('text', StringType()),
])

texts_df = spark.createDataFrame(texts, schema)
texts_df.show()
+--------------------+--------------------+
|            filename|                text|
+--------------------+--------------------+
|file:/home/alext/...|Path: cantaloupe....|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Xref: cantaloupe....|
|file:/home/alext/...|Path: cantaloupe....|
|file:/home/alext/...|Xref: cantaloupe....|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Xref: cantaloupe....|
|file:/home/alext/...|Path: cantaloupe....|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Path: cantaloupe....|
|file:/home/alext/...|Path: cantaloupe....|
|file:/home/alext/...|Path: cantaloupe....|
|file:/home/alext/...|Xref: cantaloupe....|
|file:/home/alext/...|Path: cantaloupe....|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Newsgroups: sci.e...|
|file:/home/alext/...|Newsgroups: sci.e...|
+--------------------+--------------------+
only showing top 20 rows
Looking at the data is important in any data-science project. When working with structured data, especially numerical data, it is common to explore data with aggregates. This is necessary because data sets are large, and looking at a small number of examples can easily lead to misrepresentation of the data. Natural language data complicates this. On one hand, humans are really good at interpreting language; on the other, humans are also really good at jumping to conclusions and making hasty generalizations. So we still have the problem of creating a representative summary for large data sets. We’ll talk about some techniques to do this in Chapters 10 and 11.
For now, let’s talk about ways we can look at a small amount of data in DataFrames. As you can see in the preceding code example, we can show the output of a DataFrame using .show().
Let’s look at the arguments:
- n: the number of rows to show.
- truncate: if set to True, truncates strings longer than 20 characters by default. If set to a number greater than one, truncates long strings to length truncate and aligns cells right.
- vertical: if set to True, prints output rows vertically (one line per column value).
Let’s try using some of these arguments:
texts_df.show(n=5, truncate=100, vertical=True)
-RECORD 0--------------------------------------------------------------------------------------------------------
 filename | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/54165
 text     | Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cmu.edu!cis.ohio-state.edu!zap...
-RECORD 1--------------------------------------------------------------------------------------------------------
 filename | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/54057
 text     | Newsgroups: sci.electronics Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cm...
-RECORD 2--------------------------------------------------------------------------------------------------------
 filename | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/53712
 text     | Newsgroups: sci.electronics Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!noc.near.net!how...
-RECORD 3--------------------------------------------------------------------------------------------------------
 filename | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/53529
 text     | Newsgroups: sci.electronics Path: cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!bb3.andrew.c...
-RECORD 4--------------------------------------------------------------------------------------------------------
 filename | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/54042
 text     | Xref: cantaloupe.srv.cs.cmu.edu comp.os.msdos.programmer:23292 alt.msdos.programmer:6797 sci.elec...
only showing top 5 rows
The .show() method is good for a quick view of data, but if the data is complicated, it doesn’t work as well. In the Jupyter environment, there are some special integrations with pandas, and pandas DataFrames are displayed a little more nicely. Table 1-1 is an example.
texts_df.limit(5).toPandas()
| | filename | text |
|---|---|---|
| 0 | file:/home/alext/projects/spark-nlp-book/data... | Path: cantaloupe.srv.cs.cmu.edu!magne... |
| 1 | file:/home/alext/projects/spark-nlp-book/data... | Newsgroups: sci.electronics\nPath: cant... |
| 2 | file:/home/alext/projects/spark-nlp-book/data... | Newsgroups: sci.electronics\nPath: cant... |
| 3 | file:/home/alext/projects/spark-nlp-book/data... | Newsgroups: sci.electronics\nPath: cant... |
| 4 | file:/home/alext/projects/spark-nlp-book/data... | Xref: cantaloupe.srv.cs.cmu.edu comp.o... |
Notice the use of .limit(). The .toPandas() method pulls the Spark DataFrame into memory to create a pandas DataFrame. Converting to pandas can also be useful for using tools available in Python, since the pandas DataFrame is widely supported in the Python ecosystem.
For other types of visualizations, we’ll primarily use the Python libraries matplotlib and seaborn. In order to use these libraries we will need to create pandas DataFrames, so we will either aggregate or sample Spark DataFrames down to a manageable size.
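As a minimal sketch of that workflow (assuming seaborn is installed; the plot itself is illustrative and not part of the running example), we can compute a small per-document summary in Spark, convert it to pandas, and hand it to seaborn:

import seaborn as sns

# Compute a small summary in Spark: the character length of each document.
lengths_pd = texts_df.select(fun.length('text').alias('text_length')).toPandas()

# Plot the distribution with seaborn (histplot requires seaborn 0.11 or later).
sns.histplot(lengths_pd['text_length'])
plt.xlabel('characters per document')
plt.show()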
Hello World with Spark NLP
We have some data, so let’s use Spark NLP to process it. First, let’s extract the newsgroup name from the filename. We can see the newsgroup as the last folder in the filename. Table 1-2 shows the result.
texts_df = texts_df.withColumn(
    'newsgroup',
    fun.split('filename', '/').getItem(7)
)
texts_df.limit(5).toPandas()
| | filename | text | newsgroup |
|---|---|---|---|
| 0 | file:/home/alext/projects/spark... | Path: cantaloupe.srv.cs.cmu.edu!mag... | sci.electronics |
| 1 | file:/home/alext/projects/spark... | Newsgroups: sci.electronics\nPath: ca... | sci.electronics |
| 2 | file:/home/alext/projects/spark... | Newsgroups: sci.electronics\nPath: ca... | sci.electronics |
| 3 | file:/home/alext/projects/spark... | Newsgroups: sci.electronics\nPath: ca... | sci.electronics |
| 4 | file:/home/alext/projects/spark... | Xref: cantaloupe.srv.cs.cmu.edu comp... | sci.electronics |
Let’s look at how many documents are in each newsgroup. Figure 1-1 shows the bar chart.
newsgroup_counts = texts_df.groupBy('newsgroup').count().toPandas()
newsgroup_counts.plot(kind='bar', figsize=(10, 5))
plt.xticks(
    ticks=range(len(newsgroup_counts)),
    labels=newsgroup_counts['newsgroup']
)
plt.show()
Because the mini_newsgroups data set is a subset of the Twenty Newsgroups Data Set, we have the same number of documents in each newsgroup. Now, let’s use the Explain Document ML:
from sparknlp.pretrained import PretrainedPipeline
The explain_document_ml is a pretrained pipeline that performs basic text-processing steps. In order to understand what the explain_document_ml is doing, we need a brief description of what annotators are. An annotator is a representation of a specific NLP technique; we will go into more depth in Chapter 3.
The annotators work on a document, which is the text, associated metadata, and any previously discovered annotations. This design lets annotators reuse the work of previous annotators. The downside is that it is more complex than libraries like NLTK, which are uncoupled collections of NLP functions.
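To make the design concrete, here is a minimal sketch of a hand-built pipeline in which each annotator consumes the annotations produced before it. The exact import paths and parameters can differ between Spark NLP versions, so treat this as illustrative rather than as the pipeline we use below:

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer

# Wraps the raw text column in document annotations.
document_assembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

# Consumes the document annotations and produces token annotations.
tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# Each stage reuses the work of the stages before it.
custom_pipeline = Pipeline(stages=[document_assembler, tokenizer])
tokens_df = custom_pipeline.fit(texts_df).transform(texts_df)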
The explain_document_ml has one Transformer and six annotators:
- DocumentAssembler: A Transformer that creates a column that contains documents.
- Sentence Segmenter: An annotator that produces the sentences of the document.
- Tokenizer: An annotator that produces the tokens of the sentences.
- SpellChecker: An annotator that produces the spelling-corrected tokens.
- Stemmer: An annotator that produces the stems of the tokens.
- Lemmatizer: An annotator that produces the lemmas of the tokens.
- POS Tagger: An annotator that produces the parts of speech of the associated tokens.
There are some new terms introduced here that we’ll discuss more in upcoming chapters:
pipeline = PretrainedPipeline('explain_document_ml', lang='en')
The .annotate() method of the pretrained pipeline can be used to annotate single strings as well as DataFrames. Let’s look at what it produces.
pipeline.annotate('Hellu wrold!')
{'document': ['Hellu wrold!'], 'spell': ['Hello', 'world', '!'], 'pos': ['UH', 'NN', '.'], 'lemmas': ['Hello', 'world', '!'], 'token': ['Hellu', 'wrold', '!'], 'stems': ['hello', 'world', '!'], 'sentence': ['Hellu wrold!']}
This is a good amount of additional information, which brings up something you will want to keep in mind: annotations can produce a lot of extra data.
Let’s look at the schema of the raw data.
texts_df.printSchema()
root
 |-- filename: string (nullable = true)
 |-- text: string (nullable = true)
 |-- newsgroup: string (nullable = true)
Now, let’s annotate our DataFrame and look at the new schema.
procd_texts_df = pipeline.annotate(texts_df, 'text')
procd_texts_df.printSchema()
root
 |-- filename: string (nullable = true)
 |-- text: string (nullable = true)
 |-- newsgroup: string (nullable = true)
 |-- document: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- annotatorType: string (nullable = true)
 |    |    |-- begin: integer (nullable = false)
 |    |    |-- end: integer (nullable = false)
 |    |    |-- result: string (nullable = true)
 |    |    |-- metadata: map (nullable = true)
 |    |    |    |-- key: string
 |    |    |    |-- value: string (valueContainsNull = true)
 |    |    |-- embeddings: array (nullable = true)
 |    |    |    |-- element: float (containsNull = false)
 |    |    |-- sentence_embeddings: array (nullable = true)
 |    |    |    |-- element: float (containsNull = false)
 |-- sentence: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- annotatorType: string (nullable = true)
...
That schema is quite complex! To break it down, let’s look at the token column. It is an Array-type column, and each element is a Struct. Each element has the following fields:
- annotatorType: The type of annotation.
- begin: The starting character position of the annotation.
- end: The character position after the end of the annotation.
- result: The output of the annotator.
- metadata: A Map from String to String containing additional, potentially helpful, information about the annotation.
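Because annotations are ordinary Spark structs, you can pull individual fields out with standard DataFrame operations. For example, this hypothetical snippet (not part of the book’s pipeline) selects just the token strings for each document:

# 'token.result' extracts the result field from every struct in the token array,
# giving one array of token strings per document.
procd_texts_df.select('filename', fun.col('token.result').alias('tokens')).show(n=2, truncate=80)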
Let’s look at some of the data using .show().
procd_texts_df.show(n=2)
+--------------------+--------------------+---------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
|            filename|                text|      newsgroup|            document|            sentence|               token|               spell|              lemmas|               stems|                 pos|
+--------------------+--------------------+---------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
|file:/home/alext/...|Path: cantaloupe....|sci.electronics|[[document, 0, 90...|[[document, 0, 46...|[[token, 0, 3, Pa...|[[token, 0, 3, Pa...|[[token, 0, 3, Pa...|[[token, 0, 3, pa...|[[pos, 0, 3, NNP,...|
|file:/home/alext/...|Newsgroups: sci.e...|sci.electronics|[[document, 0, 19...|[[document, 0, 40...|[[token, 0, 9, Ne...|[[token, 0, 9, Ne...|[[token, 0, 9, Ne...|[[token, 0, 9, ne...|[[pos, 0, 9, NNP,...|
+--------------------+--------------------+---------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
only showing top 2 rows
This is not very readable. Not only is the automatic formatting doing poorly with this data, but we can hardly see our annotations. Let’s try using some other arguments.
procd_texts_df.show(n=2, truncate=100, vertical=True)
-RECORD 0---------------------------------------------------------------------------------------------------------
 filename  | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/54165
 text      | Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cmu.edu!cis.ohio-state.edu!zap...
 newsgroup | sci.electronics
 document  | [[document, 0, 903, Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cmu.edu!ci...
 sentence  | [[document, 0, 468, Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cmu.edu!ci...
 token     | [[token, 0, 3, Path, [sentence -> 0], [], []], [token, 4, 4, :, [sentence -> 0], [], []], [token,...
 spell     | [[token, 0, 3, Path, [sentence -> 0], [], []], [token, 4, 4, :, [sentence -> 0], [], []], [token,...
 lemmas    | [[token, 0, 3, Path, [sentence -> 0], [], []], [token, 4, 4, :, [sentence -> 0], [], []], [token,...
 stems     | [[token, 0, 3, path, [sentence -> 0], [], []], [token, 4, 4, :, [sentence -> 0], [], []], [token,...
 pos       | [[pos, 0, 3, NNP, [word -> Path], [], []], [pos, 4, 4, :, [word -> :], [], []], [pos, 6, 157, JJ,...
-RECORD 1---------------------------------------------------------------------------------------------------------
 filename  | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/54057
 text      | Newsgroups: sci.electronics Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cm...
 newsgroup | sci.electronics
 document  | [[document, 0, 1944, Newsgroups: sci.electronics Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.c...
 sentence  | [[document, 0, 408, Newsgroups: sci.electronics Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc...
 token     | [[token, 0, 9, Newsgroups, [sentence -> 0], [], []], [token, 10, 10, :, [sentence -> 0], [], []],...
 spell     | [[token, 0, 9, Newsgroups, [sentence -> 0], [], []], [token, 10, 10, :, [sentence -> 0], [], []],...
 lemmas    | [[token, 0, 9, Newsgroups, [sentence -> 0], [], []], [token, 10, 10, :, [sentence -> 0], [], []],...
 stems     | [[token, 0, 9, newsgroup, [sentence -> 0], [], []], [token, 10, 10, :, [sentence -> 0], [], []], ...
 pos       | [[pos, 0, 9, NNP, [word -> Newsgroups], [], []], [pos, 10, 10, :, [word -> :], [], []], [pos, 12,...
only showing top 2 rows
Better, but this is still not useful for getting a general understanding of our corpus. We at least have a glimpse of what our pipeline is doing.
Now we need to pull out the information we might want to use in other processes; this is what the Finisher Transformer is for. The Finisher takes annotations and pulls out the pieces of data that we will be using in downstream processes. This allows us to use the results of our NLP pipeline in generic Spark. For now, let’s pull out all the lemmas and put them into a String, separated by spaces.
from sparknlp import Finisher
finisher = Finisher()
# taking the lemma column
finisher = finisher.setInputCols(['lemmas'])
# separating lemmas by a single space
finisher = finisher.setAnnotationSplitSymbol(' ')
finished_texts_df = finisher.transform(procd_texts_df)
finished_texts_df.show(n=1, truncate=100, vertical=True)
-RECORD 0---------------------------------------------------------------------------------------------------------------
 filename        | file:/home/alext/projects/spark-nlp-book/data/mini_newsgroups/sci.electronics/54165
 text            | Path: cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cmu.edu!cis.ohio-state.edu!zap...
 newsgroup       | sci.electronics
 finished_lemmas | [Path, :, cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cmu.edu!cis.ohio-state.edu...
only showing top 1 row
Normally, we’ll be using the .setOutputAsArray(True) option so that the output is an Array instead of a String.
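For example, a finisher configured that way might look like the following sketch (shown for illustration only; the rest of this chapter keeps the finisher defined above):

array_finisher = Finisher() \
    .setInputCols(['lemmas']) \
    .setOutputAsArray(True)  # output an array of lemmas rather than a single string

finished_array_df = array_finisher.transform(procd_texts_df)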
Let’s look at the final result on the first document.
finished_texts_df.select('finished_lemmas').take(1)
[Row(finished_lemmas=['Path', ':', 'cantaloupe.srv.cs.cmu.edu!magnesium.club.cc.cmu.edu!news.sei.cmu.edu!cis.ohio-state.edu!zaphod.mps.ohio-state.edu!news.acns.nwu.edu!uicvm.uic.edu!u19250', 'Organization', ':', 'University', 'of', 'Illinois', 'at', 'Chicago', ',', 'academic', 'Computer', 'Center', 'Date', ':', 'Sat', ',', '24', 'Apr', '1993', '14:28:35', 'CDT', 'From', ':', '<U19250@uicvm.uic.edu>', 'Message-ID', ':', '<93114.142835U19250@uicvm.uic.edu>', 'Newsgroups', ':', 'sci.electronics', 'Subject', ':', 'multiple', 'input', 'for', 'PC', 'Lines', ':', '8', 'Can', 'anyone', 'offer', 'a', 'suggestion', 'on', 'a', 'problem', 'I', 'be', 'have', '?', 'I', 'have', 'several', 'board', 'whose', 'sole', 'purpose', 'be', 'to', 'decode', 'DTMF', 'tone', 'and', 'send', 'the', 'resultant', 'in', 'ASCII', 'to', 'a', 'PC', '.', 'These', 'board', 'run', 'on', 'the', 'serial', 'interface', '.', 'I', 'need', 'to', 'run', '*', 'of', 'the', 'board', 'somewhat', 'simultaneously', '.', 'I', 'need', 'to', 'be', 'able', 'to', 'ho', 'ok', 'they', 'up', 'to', 'a', 'PC', '>', 'The', 'problem', 'be', ',', 'how', 'do', 'I', 'hook', 'up', '8', '+', 'serial', 'device', 'to', 'one', 'PC', 'inexpensively', ',', 'so', 'that', 'all', 'can', 'send', 'data', 'simultaneously', '(', 'or', 'close', 'to', 'it', ')', '?', 'Any', 'help', 'would', 'be', 'greatly', 'appreciate', '!', 'Achin', 'Single'])]
It doesn’t look like much has been done here, but there is still a lot to unpack. In the next chapter, we will explain some basics of linguistics that will help us understand what these annotators are doing.