
Python 3 Text Processing with NLTK 3 Cookbook - Second Edition

by Jacob Perkins
August 2014
Content level: Beginner to intermediate
304 pages
7h 10m
English
Packt Publishing
Content preview from Python 3 Text Processing with NLTK 3 Cookbook - Second Edition

Measuring precision and recall of a classifier

In addition to accuracy, there are a number of other metrics used to evaluate classifiers. Two of the most common are precision and recall. To understand these two metrics, we must first understand false positives and false negatives. A false positive happens when a classifier assigns a label to a feature set that shouldn't have it. A false negative happens when a classifier fails to assign a label to a feature set that should have it. In a binary classifier, these errors happen at the same time: every false positive for one label is also a false negative for the other.

Here's an example: the classifier classifies a movie review as pos when it should have been neg. This counts as a false positive for the pos label, and a false negative for the neg label. If the classifier ...
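The preview cuts off before the recipe's code, but the idea can be sketched as follows. This is a minimal illustration rather than the book's own listing: it assumes a trained NLTK classifier with a classify() method and a list of (featureset, label) pairs called test_feats, and the helper name precision_recall is made up here. For each label it collects the set of instances that truly carry the label (the reference set) and the set the classifier assigned it to (the test set), then scores them with nltk.metrics.precision and nltk.metrics.recall.

import collections
from nltk.metrics import precision, recall

def precision_recall(classifier, test_feats):
    # For each label, track which instances truly have it (reference)
    # and which instances the classifier assigned it to (test).
    refsets = collections.defaultdict(set)
    testsets = collections.defaultdict(set)

    for i, (feats, label) in enumerate(test_feats):
        refsets[label].add(i)
        observed = classifier.classify(feats)
        testsets[observed].add(i)

    # precision(ref, test) = |ref & test| / |test|
    # recall(ref, test)    = |ref & test| / |ref|
    precisions = {}
    recalls = {}
    for label in refsets:
        precisions[label] = precision(refsets[label], testsets[label])
        recalls[label] = recall(refsets[label], testsets[label])
    return precisions, recalls

For a binary movie-review classifier, precisions['pos'] would then answer "of the reviews labeled pos, how many really were pos?", while recalls['pos'] answers "of the truly pos reviews, how many did the classifier find?".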



ISBN: 9781782167853