Natural Language Annotation for Machine Learning

Book description

Create your own natural language training corpus for machine learning. Whether you’re working with English, Chinese, or any other natural language, this hands-on book guides you through a proven annotation development cycle—the process of adding metadata to your training corpus to help ML algorithms work more efficiently. You don’t need any programming or linguistics experience to get started.

Table of contents

  1. Preface
    1. Natural Language Annotation for Machine Learning
    2. Audience
    3. Organization of This Book
    4. Software Requirements
    5. Conventions Used in This Book
    6. Using Code Examples
    7. Safari® Books Online
    8. How to Contact Us
    9. Acknowledgments
      1. James Adds:
      2. Amber Adds:
  2. 1. The Basics
    1. The Importance of Language Annotation
      1. The Layers of Linguistic Description
      2. What Is Natural Language Processing?
    2. A Brief History of Corpus Linguistics
      1. What Is a Corpus?
      2. Early Use of Corpora
      3. Corpora Today
      4. Kinds of Annotation
    3. Language Data and Machine Learning
      1. Classification
      2. Clustering
      3. Structured Pattern Induction
    4. The Annotation Development Cycle
      1. Model the Phenomenon
      2. Annotate with the Specification
      3. Train and Test the Algorithms over the Corpus
      4. Evaluate the Results
      5. Revise the Model and Algorithms
    5. Summary
  3. 2. Defining Your Goal and Dataset
    1. Defining Your Goal
      1. The Statement of Purpose
      2. Refining Your Goal: Informativity Versus Correctness
        1. The scope of the annotation task
        2. What will the annotation be used for?
        3. What will the overall outcome be?
        4. Where will the corpus come from?
        5. How will the result be achieved?
    2. Background Research
      1. Language Resources
      2. Organizations and Conferences
      3. NLP Challenges
    3. Assembling Your Dataset
      1. The Ideal Corpus: Representative and Balanced
      2. Collecting Data from the Internet
      3. Eliciting Data from People
        1. Read speech
        2. Spontaneous speech
    4. The Size of Your Corpus
      1. Existing Corpora
      2. Distributions Within Corpora
    5. Summary
  4. 3. Corpus Analytics
    1. Basic Probability for Corpus Analytics
      1. Joint Probability Distributions
      2. Bayes Rule
    2. Counting Occurrences
      1. Zipf’s Law
      2. N-grams
    3. Language Models
    4. Summary
  5. 4. Building Your Model and Specification
    1. Some Example Models and Specs
      1. Film Genre Classification
      2. Adding Named Entities
      3. Semantic Roles
    2. Adopting (or Not Adopting) Existing Models
      1. Creating Your Own Model and Specification: Generality Versus Specificity
      2. Using Existing Models and Specifications
      3. Using Models Without Specifications
    3. Different Kinds of Standards
      1. ISO Standards
        1. Annotation format standards
        2. Annotation specification standards
      2. Community-Driven Standards
      3. Other Standards Affecting Annotation
    4. Summary
  6. 5. Applying and Adopting Annotation Standards
    1. Metadata Annotation: Document Classification
      1. Unique Labels: Movie Reviews
      2. Multiple Labels: Film Genres
    2. Text Extent Annotation: Named Entities
      1. Inline Annotation
      2. Stand-off Annotation by Tokens
      3. Stand-off Annotation by Character Location
    3. Linked Extent Annotation: Semantic Roles
    4. ISO Standards and You
    5. Summary
  7. 6. Annotation and Adjudication
    1. The Infrastructure of an Annotation Project
    2. Specification Versus Guidelines
    3. Be Prepared to Revise
    4. Preparing Your Data for Annotation
      1. Metadata
      2. Preprocessed Data
      3. Splitting Up the Files for Annotation
    5. Writing the Annotation Guidelines
      1. Example 1: Single Labels—Movie Reviews
      2. Example 2: Multiple Labels—Film Genres
      3. Example 3: Extent Annotations—Named Entities
      4. Example 4: Link Tags—Semantic Roles
    6. Annotators
    7. Choosing an Annotation Environment
    8. Evaluating the Annotations
      1. Cohen’s Kappa (κ)
      2. Fleiss’s Kappa (κ)
      3. Interpreting Kappa Coefficients
      4. Calculating κ in Other Contexts
    9. Creating the Gold Standard (Adjudication)
    10. Summary
  8. 7. Training: Machine Learning
    1. What Is Learning?
    2. Defining Our Learning Task
    3. Classifier Algorithms
      1. Decision Tree Learning
      2. Gender Identification
      3. Naïve Bayes Learning
        1. Movie genre identification
        2. Sentiment classification
      4. Maximum Entropy Classifiers
      5. Other Classifiers to Know About
    4. Sequence Induction Algorithms
    5. Clustering and Unsupervised Learning
    6. Semi-Supervised Learning
    7. Matching Annotation to Algorithms
    8. Summary
  9. 8. Testing and Evaluation
    1. Testing Your Algorithm
    2. Evaluating Your Algorithm
      1. Confusion Matrices
      2. Calculating Evaluation Scores
        1. Percentage accuracy
        2. Precision and recall
        3. F-measure
        4. Other evaluation metrics
      3. Interpreting Evaluation Scores
    3. Problems That Can Affect Evaluation
      1. Dataset Is Too Small
      2. Algorithm Fits the Development Data Too Well
      3. Too Much Information in the Annotation
    4. Final Testing Scores
    5. Summary
  10. 9. Revising and Reporting
    1. Revising Your Project
      1. Corpus Distributions and Content
      2. Model and Specification
      3. Annotation
        1. Guidelines
        2. Annotators
        3. Tools
      4. Training and Testing
    2. Reporting About Your Work
      1. About Your Corpus
      2. About Your Model and Specifications
      3. About Your Annotation Task and Annotators
      4. About Your ML Algorithm
      5. About Your Revisions
    3. Summary
  11. 10. Annotation: TimeML
    1. The Goal of TimeML
    2. Related Research
    3. Building the Corpus
    4. Model: Preliminary Specifications
      1. Times
      2. Signals
      3. Events
      4. Links
    5. Annotation: First Attempts
    6. Model: The TimeML Specification Used in TimeBank
      1. Time Expressions
      2. Events
      3. Signals
      4. Links
      5. Confidence
    7. Annotation: The Creation of TimeBank
    8. TimeML Becomes ISO-TimeML
    9. Modeling the Future: Directions for TimeML
      1. Narrative Containers
      2. Expanding TimeML to Other Domains
      3. Event Structures
    10. Summary
  12. 11. Automatic Annotation: Generating TimeML
    1. The TARSQI Components
      1. GUTime: Temporal Marker Identification
      2. EVITA: Event Recognition and Classification
      3. GUTenLINK
      4. Slinket
      5. SputLink
      6. Machine Learning in the TARSQI Components
    2. Improvements to the TTK
      1. Structural Changes
      2. Improvements to Temporal Entity Recognition: BTime
      3. Temporal Relation Identification
      4. Temporal Relation Validation
      5. Temporal Relation Visualization
    3. TimeML Challenges: TempEval-2
      1. TempEval-2: System Summaries
      2. Overview of Results
    4. Future of the TTK
      1. New Input Formats
      2. Narrative Containers/Narrative Times
      3. Medical Documents
      4. Cross-Document Analysis
    5. Summary
  13. 12. Afterword: The Future of Annotation
    1. Crowdsourcing Annotation
      1. Amazon’s Mechanical Turk
      2. Games with a Purpose (GWAP)
      3. User-Generated Content
    2. Handling Big Data
      1. Boosting
      2. Active Learning
      3. Semi-Supervised Learning
    3. NLP Online and in the Cloud
      1. Distributed Computing
      2. Shared Language Resources
      3. Shared Language Applications
    4. And Finally...
  14. A. List of Available Corpora and Specifications
    1. Corpora
    2. Specifications, Guidelines, and Other Resources
    3. Representation Standards
  15. B. List of Software Resources
    1. Annotation and Adjudication Software
      1. Multipurpose Tools
      2. Corpus Creation and Exploration Tools
      3. Manual Annotation Tools
      4. Automated Annotation Tools
        1. Multipurpose tools
        2. Phonetic annotation
        3. Part-of-speech taggers/syntactic parsers
        4. Tokenizers/chunkers/stemmers
        5. Other
    2. Machine Learning Resources
  16. C. MAE User Guide
    1. Installing and Running MAE
    2. Loading Tasks and Files
      1. Loading a Task
      2. Loading a File
      3. Annotating Entities
        1. Attribute information
        2. Nonconsuming tags
      4. Annotating Links
      5. Deleting Tags
    3. Saving Files
    4. Defining Your Own Task
      1. Task Name
      2. Elements (a.k.a. Tags)
      3. Attributes
        1. id attributes
        2. start attribute
        3. Attribute types
        4. Default attribute values
    5. Frequently Asked Questions
  17. D. MAI User Guide
    1. Installing and Running MAI
    2. Loading Tasks and Files
      1. Loading a Task
      2. Loading Files
    3. Adjudicating
      1. The MAI Window
      2. Adjudicating a Tag
      3. Extent Tags
      4. Link Tags
      5. Nonconsuming Tags
      6. Adding New Tags
      7. Deleting Tags
    4. Saving Files
  18. E. Bibliography
    1. References for Using Amazon’s Mechanical Turk/Crowdsourcing
  19. Index
  20. About the Authors
  21. Colophon
  22. Copyright

Product information

  • Title: Natural Language Annotation for Machine Learning
  • Author(s): James Pustejovsky, Amber Stubbs
  • Release date: October 2012
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781449306663