Data Analytics with Hadoop

Book description

Ready to use statistical and machine-learning techniques across large data sets? This practical guide shows you why the Hadoop ecosystem is perfect for the job. Instead of the deployment, operations, or software development usually associated with distributed computing, you’ll focus on the particular analyses you can build, the data warehousing techniques that Hadoop provides, and the higher-order data workflows this framework can produce.

Data scientists and analysts will learn how to apply a wide range of techniques, from writing MapReduce and Spark applications with Python to advanced modeling and data management with Spark MLlib, Hive, and HBase. You’ll also learn about the analytical processes and data systems available to build and empower data products that can handle—and actually require—huge amounts of data.
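To give a flavor of the MapReduce-with-Python style the book teaches, here is a minimal word-count sketch in plain Python. It is only an illustration of the map and reduce phases (the `mapper`, `reducer`, and sample lines are hypothetical, not taken from the book); in a real Hadoop Streaming job each phase would read from stdin and write to stdout, with the framework performing the shuffle and sort in between.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every token."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum counts per word. Assumes pairs arrive
    sorted by key, as Hadoop's shuffle/sort guarantees."""
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Simulate the shuffle/sort between the two phases in-process.
    lines = ["big data big ideas", "big cluster"]
    counts = dict(reducer(sorted(mapper(lines))))
    print(counts)  # {'big': 3, 'cluster': 1, 'data': 1, 'ideas': 1}
```

The same two functions, wrapped in small scripts that read stdin line by line, are all Hadoop Streaming needs to run this at cluster scale.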

  • Understand core concepts behind Hadoop and cluster computing
  • Use design patterns and parallel analytical algorithms to create distributed data analysis jobs
  • Learn about data management, mining, and warehousing in a distributed context using Apache Hive and HBase
  • Use Sqoop to ingest data from relational databases, and Apache Flume to ingest streaming data
  • Program complex Hadoop and Spark applications with Apache Pig and Spark DataFrames
  • Perform machine learning techniques such as classification, clustering, and collaborative filtering with Spark’s MLlib
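As a taste of the clustering technique in the last bullet, here is a toy single-machine k-means in plain Python. This is a sketch of the algorithm's idea only, with made-up sample points; the book's Chapter 9 uses Spark MLlib's distributed implementation rather than anything like this.

```python
import math
import random

def kmeans(points, k, iters=10, seed=42):
    """Toy k-means: repeatedly assign each point to its nearest
    centroid, then recompute each centroid as its cluster's mean."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

if __name__ == "__main__":
    # Two well-separated blobs; k-means recovers their means.
    pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
    print(sorted(kmeans(pts, 2)))
```

MLlib's `KMeans` follows the same assign-and-update loop, but parallelizes the assignment step across a cluster's partitions.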

Table of contents

  1. Preface
    1. What to Expect from This Book
    2. Who This Book Is For
    3. How to Read This Book
    4. Overview of Chapters
    5. Programming and Code Examples
      1. GitHub Repository
      2. Executing Distributed Jobs
      3. Permissions and Citation
    6. Feedback and How to Contact Us
    7. Safari® Books Online
    8. How to Contact Us
    9. Acknowledgments
  2. I. Introduction to Distributed Computing
  3. 1. The Age of the Data Product
    1. What Is a Data Product?
    2. Building Data Products at Scale with Hadoop
      1. Leveraging Large Datasets
      2. Hadoop for Data Products
    3. The Data Science Pipeline and the Hadoop Ecosystem
      1. Big Data Workflows
    4. Conclusion
  4. 2. An Operating System for Big Data
    1. Basic Concepts
    2. Hadoop Architecture
      1. A Hadoop Cluster
      2. HDFS
      3. YARN
    3. Working with a Distributed File System
      1. Basic File System Operations
      2. File Permissions in HDFS
      3. Other HDFS Interfaces
    4. Working with Distributed Computation
      1. MapReduce: A Functional Programming Model
      2. MapReduce: Implemented on a Cluster
      3. Beyond a Map and Reduce: Job Chaining
    5. Submitting a MapReduce Job to YARN
    6. Conclusion
  5. 3. A Framework for Python and Hadoop Streaming
    1. Hadoop Streaming
      1. Computing on CSV Data with Streaming
      2. Executing Streaming Jobs
    2. A Framework for MapReduce with Python
      1. Counting Bigrams
      2. Other Frameworks
    3. Advanced MapReduce
      1. Combiners
      2. Partitioners
      3. Job Chaining
    4. Conclusion
  6. 4. In-Memory Computing with Spark
    1. Spark Basics
      1. The Spark Stack
      2. Resilient Distributed Datasets
      3. Programming with RDDs
    2. Interactive Spark Using PySpark
    3. Writing Spark Applications
      1. Visualizing Airline Delays with Spark
    4. Conclusion
  7. 5. Distributed Analysis and Patterns
    1. Computing with Keys
      1. Compound Keys
      2. Keyspace Patterns
      3. Pairs versus Stripes
    2. Design Patterns
      1. Summarization
      2. Indexing
      3. Filtering
    3. Toward Last-Mile Analytics
      1. Fitting a Model
      2. Validating Models
    4. Conclusion
  8. II. Workflows and Tools for Big Data Science
  9. 6. Data Mining and Warehousing
    1. Structured Data Queries with Hive
      1. The Hive Command-Line Interface (CLI)
      2. Hive Query Language (HQL)
      3. Data Analysis with Hive
    2. HBase
      1. NoSQL and Column-Oriented Databases
      2. Real-Time Analytics with HBase
    3. Conclusion
  10. 7. Data Ingestion
    1. Importing Relational Data with Sqoop
      1. Importing from MySQL to HDFS
      2. Importing from MySQL to Hive
      3. Importing from MySQL to HBase
    2. Ingesting Streaming Data with Flume
      1. Flume Data Flows
      2. Ingesting Product Impression Data with Flume
    3. Conclusion
  11. 8. Analytics with Higher-Level APIs
    1. Pig
      1. Pig Latin
      2. Data Types
      3. Relational Operators
      4. User-Defined Functions
      5. Wrapping Up
    2. Spark’s Higher-Level APIs
      1. Spark SQL
      2. DataFrames
    3. Conclusion
  12. 9. Machine Learning
    1. Scalable Machine Learning with Spark
      1. Collaborative Filtering
      2. Classification
      3. Clustering
    2. Conclusion
  13. 10. Summary: Doing Distributed Data Science
    1. Data Product Lifecycle
      1. Data Lakes
      2. Data Ingestion
      3. Computational Data Stores
    2. Machine Learning Lifecycle
    3. Conclusion
  14. A. Creating a Hadoop Pseudo-Distributed Development Environment
    1. Quick Start
    2. Setting Up Linux
      1. Creating a Hadoop User
      2. Configuring SSH
      3. Installing Java
      4. Disabling IPv6
    3. Installing Hadoop
      1. Unpacking
      2. Environment
      3. Hadoop Configuration
      4. Formatting the Namenode
      5. Starting Hadoop
      6. Restarting Hadoop
  15. B. Installing Hadoop Ecosystem Products
    1. Packaged Hadoop Distributions
    2. Self-Installation of Apache Hadoop Ecosystem Products
      1. Basic Installation and Configuration Steps
      2. Sqoop-Specific Configurations
      3. Hive-Specific Configuration
      4. HBase-Specific Configurations
      5. Installing Spark
  16. Glossary
  17. Index

Product information

  • Title: Data Analytics with Hadoop
  • Author(s): Benjamin Bengfort, Jenny Kim
  • Release date: June 2016
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781491913703