Hadoop Fundamentals for Data Scientists

Video description

Get a practical introduction to Hadoop, the framework that made big data and large-scale analytics possible by combining distributed computing techniques with distributed storage. In this video tutorial, hosts Benjamin Bengfort and Jenny Kim discuss the core concepts behind distributed computing and big data, and then show you how to work with a Hadoop cluster and program analytical jobs. You’ll also learn how to use higher-level tools such as Hive and Spark.

Hadoop is a cluster computing technology that has many moving parts, including distributed systems administration, data engineering and warehousing methodologies, software engineering for distributed computing, and large-scale analytics. With this video, you’ll learn how to operationalize analytics over large datasets and rapidly deploy analytical jobs with a variety of toolsets.

Once you’ve completed this video, you’ll understand how the different parts of Hadoop combine to form an end-to-end data pipeline managed by teams of data engineers, programmers, researchers, and business stakeholders.

  • Understand the Hadoop architecture and set up a pseudo-distributed development environment
  • Learn how to develop distributed computations with MapReduce and the Hadoop Distributed File System (HDFS)
  • Work with Hadoop via the command-line interface
  • Use the Hadoop Streaming utility to execute MapReduce jobs in Python
  • Explore data warehousing, higher-order data flows, and other projects in the Hadoop ecosystem
  • Learn how to use Hive to query and analyze relational data stored in Hadoop
  • Use summarization, filtering, and aggregation to move big data toward last mile computation
  • Understand how analytical workflows, including iterative machine learning, feature analysis, and data modeling, work in a big data context
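The MapReduce and Hadoop Streaming topics above center on the classic word-count job, in which the mapper and reducer are plain Python programs that exchange tab-separated key/value lines. The following is a minimal local sketch of that flow; the `mapper`/`reducer` function names and the in-memory `sorted()` standing in for Hadoop's shuffle-and-sort phase are illustrative assumptions, not the course's own code.

```python
def mapper(lines):
    # Map phase: emit a tab-separated "word<TAB>1" line for every word,
    # mimicking what a streaming mapper writes to stdout.
    for line in lines:
        for word in line.strip().split():
            yield f"{word}\t1"

def reducer(sorted_pairs):
    # Reduce phase: input arrives sorted by key (Hadoop's shuffle/sort
    # guarantees this), so we can sum each consecutive run of a word.
    current, total = None, 0
    for pair in sorted_pairs:
        word, count = pair.split("\t")
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

# Simulate map -> shuffle/sort -> reduce on a toy corpus.
corpus = ["hello world", "hello hadoop"]
counts = list(reducer(sorted(mapper(corpus))))
print(counts)  # ['hadoop\t1', 'hello\t2', 'world\t1']
```

In a real Hadoop Streaming job, the two functions would live in separate scripts reading stdin and writing stdout, with the framework handling partitioning and sorting between them.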

Benjamin Bengfort is a data scientist and programmer in Washington, DC, who prefers technology to politics but sees the value of data in every domain. Alongside his work teaching, writing, and developing large-scale analytics with a focus on statistical machine learning, he is finishing his PhD at the University of Maryland, where he studies machine learning and artificial intelligence.

Jenny Kim, a software engineer in the San Francisco Bay Area, develops, teaches, and writes about big data analytics applications. She specializes in large-scale, distributed computing infrastructures and machine-learning algorithms that support recommendation systems.

Table of contents

  1. Overview of the Video Course 00:08:24
  2. A Distributed Computing Environment
    1. The Motivation for Hadoop 00:09:23
    2. A Brief History of Hadoop 00:05:34
    3. Understanding the Hadoop Architecture 00:12:24
    4. Setting Up A Pseudo-Distributed Environment 00:03:47
    5. The Distributed File System (HDFS) 00:11:16
    6. Distributed Computing with MapReduce 00:07:45
    7. Word Count - the "Hello, World" of Hadoop! 00:08:02
  3. Computing with Hadoop
    1. How a MapReduce Job Works 00:10:27
    2. Mappers and Reducers in Detail 00:19:17
    3. Working with Hadoop via the Command Line: Starting HDFS and Yarn 00:07:54
    4. Working with Hadoop via the Command Line: Loading Data into HDFS 00:07:05
    5. Working with Hadoop via the Command Line: Running a MapReduce Job 00:07:55
    6. How To Use Our Github Goodies 00:00:38
    7. Working in Python with Hadoop Streaming 00:21:55
    8. Common MapReduce Tasks 00:13:54
    9. Spark on Hadoop 2 00:18:26
    10. Creating a Spark Application with Python 00:22:31
  4. The Hadoop Ecosystem
    1. The Hadoop Ecosystem 00:03:01
    2. Data Warehousing with Hadoop 00:17:15
    3. Higher Order Data Flows 00:11:21
    4. Other Notable Projects 00:08:31
  5. Working with Data on Hive
    1. Introduction to Hive 00:04:29
    2. Interacting with Data via the Hive Console 00:10:40
    3. Creating Databases, Tables, and Schemas for Hive 00:08:20
    4. Loading Data into Hive from HDFS 00:09:26
    5. Querying Data and Performing Aggregations With Hive 00:12:07
  6. Towards Last Mile Computing
    1. Decomposing Large Data Sets to a Computational Space 00:07:56
    2. Linear Regressions 00:20:11
    3. Summarizing Documents with TF-IDF 00:14:11
    4. Classification of Text 00:15:45
    5. Parallel Canopy Clustering 00:11:03
    6. Computing Recommendations via Linear Log-Likelihoods 00:14:51

Product information

  • Title: Hadoop Fundamentals for Data Scientists
  • Author(s): Benjamin Bengfort, Jenny Kim
  • Release date: January 2015
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781491913161