Hadoop For Dummies

Book description

Let Hadoop For Dummies help harness the power of your data and rein in the information overload

Big data has become big business, and companies and organizations of all sizes are struggling to find ways to retrieve valuable information from their massive data sets without becoming overwhelmed. Enter Hadoop and this easy-to-understand For Dummies guide. Hadoop For Dummies helps readers understand the value of big data, make a business case for using Hadoop, navigate the Hadoop ecosystem, and build and manage Hadoop applications and clusters.

  • Explains the origins of Hadoop, its economic benefits, and its functionality and practical applications

  • Helps you find your way around the Hadoop ecosystem, program MapReduce, utilize design patterns, and get your Hadoop cluster up and running quickly and easily

  • Details how to use Hadoop applications for data mining, web analytics and personalization, large-scale text processing, data science, and problem-solving

  • Shows you how to improve the value of your Hadoop cluster, maximize your investment in Hadoop, and avoid common pitfalls when building your Hadoop cluster

  • From programmers challenged with building and maintaining affordable, scalable data systems to administrators who must deal with huge volumes of information effectively and efficiently, this how-to has something to help you with Hadoop.

    Table of contents

      1. Introduction
        1. About this Book
        2. Foolish Assumptions
        3. How This Book Is Organized
          1. Part I: Getting Started with Hadoop
          2. Part II: How Hadoop Works
          3. Part III: Hadoop and Structured Data
          4. Part IV: Administering and Configuring Hadoop
          5. Part V: The Part of Tens: Getting More Out of Your Hadoop Cluster
        4. Icons Used in This Book
        5. Beyond the Book
        6. Where to Go from Here
      2. Part I: Getting Started with Hadoop
        1. Chapter 1: Introducing Hadoop and Seeing What It’s Good For
          1. Big Data and the Need for Hadoop
            1. Exploding data volumes
            2. Varying data structures
            3. A playground for data scientists
          2. The Origin and Design of Hadoop
            1. Distributed processing with MapReduce
            2. Apache Hadoop ecosystem
          3. Examining the Various Hadoop Offerings
            1. Comparing distributions
            2. Working with in-database MapReduce
            3. Looking at the Hadoop toolbox
        2. Chapter 2: Common Use Cases for Big Data in Hadoop
          1. The Keys to Successfully Adopting Hadoop (Or, “Please, Can We Keep Him?”)
          2. Log Data Analysis
          3. Data Warehouse Modernization
          4. Fraud Detection
          5. Risk Modeling
          6. Social Sentiment Analysis
          7. Image Classification
          8. Graph Analysis
          9. To Infinity and Beyond
        3. Chapter 3: Setting Up Your Hadoop Environment
          1. Choosing a Hadoop Distribution
          2. Choosing a Hadoop Cluster Architecture
            1. Pseudo-distributed mode (single node)
            2. Fully distributed mode (a cluster of nodes)
          3. The Hadoop For Dummies Environment
            1. The Hadoop For Dummies distribution: Apache Bigtop
            2. Setting up the Hadoop For Dummies environment
            3. The Hadoop For Dummies Sample Data Set: Airline on-time performance
          4. Your First Hadoop Program: Hello Hadoop!
      3. Part II: How Hadoop Works
        1. Chapter 4: Storing Data in Hadoop: The Hadoop Distributed File System
          1. Data Storage in HDFS
            1. Taking a closer look at data blocks
            2. Replicating data blocks
            3. Slave node and disk failures
          2. Sketching Out the HDFS Architecture
            1. Looking at slave nodes
            2. Keeping track of data blocks with NameNode
            3. Checkpointing updates
          3. HDFS Federation
          4. HDFS High Availability
        2. Chapter 5: Reading and Writing Data
          1. Compressing Data
          2. Managing Files with the Hadoop File System Commands
          3. Ingesting Log Data with Flume
        3. Chapter 6: MapReduce Programming
          1. Thinking in Parallel
          2. Seeing the Importance of MapReduce
          3. Doing Things in Parallel: Breaking Big Problems into Many Bite-Size Pieces
            1. Looking at MapReduce application flow
            2. Understanding input splits
            3. Seeing how key/value pairs fit into the MapReduce application flow
          4. Writing MapReduce Applications
          5. Getting Your Feet Wet: Writing a Simple MapReduce Application
            1. The FlightsByCarrier driver application
            2. The FlightsByCarrier mapper
            3. The FlightsByCarrier reducer
            4. Running the FlightsByCarrier application
        4. Chapter 7: Frameworks for Processing Data in Hadoop: YARN and MapReduce
          1. Running Applications Before Hadoop 2
            1. Tracking JobTracker
            2. Tracking TaskTracker
            3. Launching a MapReduce application
          2. Seeing a World beyond MapReduce
            1. Scouting out the YARN architecture
            2. Launching a YARN-based application
          3. Real-Time and Streaming Applications
        5. Chapter 8: Pig: Hadoop Programming Made Easier
          1. Admiring the Pig Architecture
          2. Going with the Pig Latin Application Flow
          3. Working through the ABCs of Pig Latin
            1. Uncovering Pig Latin structures
            2. Looking at Pig data types and syntax
          4. Evaluating Local and Distributed Modes of Running Pig scripts
          5. Checking Out the Pig Script Interfaces
          6. Scripting with Pig Latin
        6. Chapter 9: Statistical Analysis in Hadoop
          1. Pumping Up Your Statistical Analysis
            1. The limitations of sampling
            2. Factors that increase the scale of statistical analysis
            3. Running statistical models in MapReduce
          2. Machine Learning with Mahout
            1. Collaborative filtering
            2. Clustering
            3. Classifications
          3. R on Hadoop
            1. The R language
            2. Hadoop Integration with R
        7. Chapter 10: Developing and Scheduling Application Workflows with Oozie
          1. Getting Oozie in Place
          2. Developing and Running an Oozie Workflow
            1. Writing Oozie workflow definitions
            2. Configuring Oozie workflows
            3. Running Oozie workflows
          3. Scheduling and Coordinating Oozie Workflows
            1. Time-based scheduling for Oozie coordinator jobs
            2. Time and data availability-based scheduling for Oozie coordinator jobs
            3. Running Oozie coordinator jobs
      4. Part III: Hadoop and Structured Data
        1. Chapter 11: Hadoop and the Data Warehouse: Friends or Foes?
          1. Comparing and Contrasting Hadoop with Relational Databases
            1. NoSQL data stores
            2. ACID versus BASE data stores
            3. Structured data storage and processing in Hadoop
          2. Modernizing the Warehouse with Hadoop
            1. The landing zone
            2. A queryable archive of cold warehouse data
            3. Hadoop as a data preprocessing engine
            4. Data discovery and sandboxes
        2. Chapter 12: Extremely Big Tables: Storing Data in HBase
          1. Say Hello to HBase
            1. Sparse
            2. It’s distributed and persistent
            3. It has a multidimensional sorted map
          2. Understanding the HBase Data Model
          3. Understanding the HBase Architecture
            1. RegionServers
            2. MasterServer
            3. Zookeeper and HBase reliability
          4. Taking HBase for a Test Run
            1. Creating a table
            2. Working with Zookeeper
          5. Getting Things Done with HBase
            1. Working with an HBase Java API client example
          6. HBase and the RDBMS world
            1. Knowing when HBase makes sense for you
            2. ACID Properties in HBase
            3. Transitioning from an RDBMS model to HBase
          7. Deploying and Tuning HBase
            1. Hardware requirements
            2. Deployment Considerations
            3. Tuning prerequisites
            4. Understanding your data access patterns
            5. Pre-Splitting your regions
            6. The importance of row key design
            7. Tuning major compactions
        3. Chapter 13: Applying Structure to Hadoop Data with Hive
          1. Saying Hello to Hive
          2. Seeing How the Hive is Put Together
          3. Getting Started with Apache Hive
          4. Examining the Hive Clients
            1. The Hive CLI client
            2. The web browser as Hive client
            3. SQuirreL as Hive client with the JDBC Driver
          5. Working with Hive Data Types
          6. Creating and Managing Databases and Tables
            1. Managing Hive databases
            2. Creating and managing tables with Hive
          7. Seeing How the Hive Data Manipulation Language Works
            1. LOAD DATA examples
            2. INSERT examples
            3. Create Table As Select (CTAS) examples
          8. Querying and Analyzing Data
            1. Joining tables with Hive
            2. Improving your Hive queries with indexes
            3. Windowing in HiveQL
            4. Other key HiveQL features
        4. Chapter 14: Integrating Hadoop with Relational Databases Using Sqoop
          1. The Principles of Sqoop Design
          2. Scooping Up Data with Sqoop
            1. Connectors and Drivers
            2. Importing Data with Sqoop
            3. Importing data into HDFS
            4. Importing data into Hive
            5. Importing data into HBase
            6. Importing incrementally
            7. Benefiting from additional Sqoop import features
          3. Sending Data Elsewhere with Sqoop
            1. Exporting data from HDFS
            2. Sqoop exports using the Insert approach
            3. Sqoop exports using the Update and Update Insert approach
            4. Sqoop exports using call stored procedures
            5. Sqoop exports and transactions
          4. Looking at Your Sqoop Input and Output Formatting Options
            1. Getting down to brass tacks: An example of output line-formatting and input-parsing
          5. Sqoop 2.0 Preview
        5. Chapter 15: The Holy Grail: Native SQL Access to Hadoop Data
          1. SQL’s Importance for Hadoop
          2. Looking at What SQL Access Actually Means
          3. SQL Access and Apache Hive
          4. Solutions Inspired by Google Dremel
            1. Apache Drill
            2. Cloudera Impala
          5. IBM Big SQL
          6. Pivotal HAWQ
          7. Hadapt
          8. The SQL Access Big Picture
      5. Part IV: Administering and Configuring Hadoop
        1. Chapter 16: Deploying Hadoop
          1. Working with Hadoop Cluster Components
            1. Rack considerations
            2. Master nodes
            3. Slave nodes
            4. Edge nodes
            5. Networking
          2. Hadoop Cluster Configurations
            1. Small
            2. Medium
            3. Large
          3. Alternate Deployment Form Factors
            1. Virtualized servers
            2. Cloud deployments
          4. Sizing Your Hadoop Cluster
        2. Chapter 17: Administering Your Hadoop Cluster
          1. Achieving Balance: A Big Factor in Cluster Health
          2. Mastering the Hadoop Administration Commands
          3. Understanding Factors for Performance
            1. Hardware
            2. MapReduce
            3. Benchmarking
          4. Tolerating Faults and Data Reliability
          5. Putting Apache Hadoop’s Capacity Scheduler to Good Use
          6. Setting Security: The Kerberos Protocol
          7. Expanding Your Toolset Options
            1. Ambari
            2. Hadoop User Experience (Hue)
            3. The Hadoop shell
          8. Basic Hadoop Configuration Details
      6. Part V: The Part of Tens
        1. Chapter 18: Ten Hadoop Resources Worthy of a Bookmark
          1. Central Nervous System: Apache.org
          2. Tweet This
          3. Hortonworks University
          4. Cloudera University
          5. BigDataUniversity.com
          6. Planet Big Data Blog Aggregator
          7. Quora’s Apache Hadoop Forum
          8. The IBM Big Data Hub
          9. Conferences Not to Be Missed
          10. The Google Papers That Started It All
          11. The Bonus Resource: What Did We Ever Do B.G.?
        2. Chapter 19: Ten Reasons to Adopt Hadoop
          1. Hadoop Is Relatively Inexpensive
          2. Hadoop Has an Active Open Source Community
          3. Hadoop Is Being Widely Adopted in Every Industry
          4. Hadoop Can Easily Scale Out As Your Data Grows
          5. Traditional Tools Are Integrating with Hadoop
          6. Hadoop Can Store Data in Any Format
          7. Hadoop Is Designed to Run Complex Analytics
          8. Hadoop Can Process a Full Data Set (As Opposed to Sampling)
          9. Hardware Is Being Optimized for Hadoop
          10. Hadoop Can Increasingly Handle Flexible Workloads (No Longer Just Batch)
        3. About the Authors
        4. Cheat Sheet
        5. More Dummies Products

    Product information

    • Title: Hadoop For Dummies
    • Author(s):
    • Release date: April 2014
    • Publisher(s): For Dummies
    • ISBN: 9781118607558