Video description
Understanding Hadoop is a highly valuable skill for anyone working with large amounts of data. Companies such as Amazon, eBay, Facebook, Google, LinkedIn, IBM, Spotify, Twitter, and Yahoo all use Hadoop in some way to process enormous volumes of data. This video course will familiarize you with Hadoop's ecosystem and help you understand how to apply Hadoop skills in the real world.
The course starts by taking you through the installation of Hadoop on your desktop. Next, you will manage big data on a cluster with the Hadoop Distributed File System (HDFS) and MapReduce, and use Pig and Spark to analyze data on Hadoop. Moving on, you will learn how to store and query your data using applications such as Sqoop, Hive, MySQL, Phoenix, and MongoDB. You will then design real-world systems using the Hadoop ecosystem and learn how to manage clusters with Yet Another Resource Negotiator (YARN), Mesos, ZooKeeper, Oozie, Zeppelin, and Hue. Toward the end, you will discover techniques for handling and streaming data in real time using Kafka, Flume, Spark Streaming, Flink, and Storm.
By the end of this course, you will become well-versed with the Hadoop ecosystem and will develop the skills required to store, analyze, and scale big data using Hadoop.
What You Will Learn
- Become familiar with Hortonworks and the Ambari User Interface (UI)
- Use Pig and Spark to create scripts to process data on a Hadoop cluster
- Analyze non-relational data using HBase, Cassandra, and MongoDB
- Query data interactively with Drill, Phoenix, and Presto
- Publish data to your Hadoop cluster using Kafka, Sqoop, and Flume
- Consume streaming data using Spark Streaming, Flink, and Storm
Audience
This video course is designed for people at every level: software engineers and programmers who want to understand the Hadoop ecosystem, project managers who want to become familiar with Hadoop lingo, and system architects who want to understand the components available in the Hadoop ecosystem. To get started with this course, a basic understanding of Python or Scala and ground-level knowledge of the Linux command line are recommended.
About The Author
Frank Kane: Frank Kane spent nine years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers around the clock. He holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology and teaches others about big data analysis.
Table of contents
- Chapter 1 : Learning All the Buzzwords and Installing the Hortonworks Data Platform Sandbox
- Chapter 2 : Using Hadoop's Core: Hadoop Distributed File System (HDFS) and MapReduce
- Hadoop Distributed File System (HDFS): What it is and How it Works
- Installing the MovieLens Dataset
- Activity - Installing the MovieLens Dataset into Hadoop's Distributed File System (HDFS) using the Command Line
- MapReduce: What it is and How it Works
- How MapReduce Distributes Processing
- MapReduce Example: Breaking Down the Movie Ratings by Rating Score
- Activity - Installing Python, MRJob, and Nano
- Activity - Coding Up and Running the Ratings Histogram MapReduce Job
- Exercise – Ranking Movies by Their Popularity
- Activity - Checking Results
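The ratings-histogram job this chapter builds can be sketched in plain Python by modeling the map and reduce phases directly, with no cluster or MRJob required. This is an illustrative sketch only: the tab-separated field layout (user, movie, rating, timestamp) follows the MovieLens u.data file the chapter installs, and the function names are my own, not taken from the course.

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (rating, 1) pair for each u.data line
    # (format: user_id \t movie_id \t rating \t timestamp)
    for line in lines:
        user_id, movie_id, rating, timestamp = line.split('\t')
        yield rating, 1

def reduce_phase(pairs):
    # Reducer: sum the counts for each rating key
    counts = defaultdict(int)
    for rating, one in pairs:
        counts[rating] += one
    return dict(counts)

sample = ["196\t242\t3\t881250949",
          "186\t302\t3\t891717742",
          "22\t377\t1\t878887116"]
histogram = reduce_phase(map_phase(sample))
print(histogram)  # {'3': 2, '1': 1}
```

On a real cluster, MapReduce runs many mapper and reducer instances in parallel and shuffles the keyed pairs between them; the single-process pipeline above only mirrors the logic of each phase.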
- Chapter 3 : Programming Hadoop with Pig
- Chapter 4 : Programming Hadoop with Spark
- Why Spark?
- The Resilient Distributed Datasets (RDD)
- Activity – Finding the Movie with the Lowest Average Rating with the Resilient Distributed Datasets (RDD)
- Datasets and Spark 2.0
- Activity – Finding the movie with the Lowest Average Rating with DataFrames
- Activity – Recommending a Movie with Spark's Machine Learning Library (MLLib)
- Exercise – Filtering the Lowest-Rated Movies by Number of Ratings
- Activity - Checking Results
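The lowest-rated-movies exercise in this chapter can be approximated in plain Python. The sketch below shows only the aggregation logic (average rating per movie, filtered by a minimum number of ratings); it is not the Spark RDD or DataFrame code from the course, and `lowest_rated` is an invented helper name.

```python
from collections import defaultdict

def lowest_rated(ratings, min_count=10):
    # ratings: iterable of (movie_id, rating) pairs
    totals = defaultdict(lambda: [0.0, 0])  # movie_id -> [rating sum, count]
    for movie_id, rating in ratings:
        totals[movie_id][0] += rating
        totals[movie_id][1] += 1
    # Keep only movies with enough ratings, then pick the lowest average
    averages = {m: s / c for m, (s, c) in totals.items() if c >= min_count}
    return min(averages, key=averages.get)

data = [("m1", 1.0), ("m1", 2.0), ("m2", 5.0), ("m2", 4.0), ("m3", 1.0)]
print(lowest_rated(data, min_count=2))  # m1 (avg 1.5 beats m2's 4.5; m3 is excluded)
```

The min-count filter is the point of the chapter's final exercise: a movie with a single 1-star rating would otherwise dominate the result.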
- Chapter 5 : Using Relational Datastores with Hadoop
- What is Hive?
- Activity – Using Hive to Find the Most Popular Movie
- How Hive Works
- Exercise – Using Hive to Find the Movie with the Highest Average Rating
- Comparing Solutions
- Integrating MySQL with Hadoop
- Activity – Installing MySQL and Importing Movie Data
- Activity - Using Sqoop to Import Data from MySQL to HDFS/Hive
- Activity – Using Sqoop to Export Data from Hadoop to MySQL
- Chapter 6 : Using Non-Relational Data Stores with Hadoop
- Why NoSQL?
- What is HBase?
- Activity – Importing Movie Ratings into HBase
- Activity – Using HBase with Pig to Import Data at Scale
- Cassandra – Overview
- Activity - Installing Cassandra
- Activity - Writing Spark Output into Cassandra
- MongoDB - Overview
- Activity - Installing MongoDB and Integrating Spark with MongoDB
- Activity - Using the MongoDB Shell
- Choosing Database Technology
- Exercise - Choosing a Database for a Given Problem
- Chapter 7 : Querying Data Interactively
- Overview of Drill
- Activity - Setting Up Drill
- Activity - Querying Across Multiple Databases with Drill
- Overview of Phoenix
- Activity - Installing Phoenix and Querying HBase
- Activity - Integrating Phoenix with Pig
- Overview of Presto
- Activity - Installing Presto and Querying Hive
- Activity - Querying Both Cassandra and Hive Using Presto
- Chapter 8 : Managing Your Cluster
- Yet Another Resource Negotiator (YARN)
- Tez
- Activity - Using Hive on Tez and Measuring the Performance Benefit
- Mesos
- ZooKeeper
- Activity - Simulating a Failing Master with ZooKeeper
- Oozie
- Activity – Setting Up a Simple Oozie Workflow
- Zeppelin - Overview
- Activity - Using Zeppelin to Analyze Movie Ratings - Part 1
- Activity - Using Zeppelin to Analyze Movie Ratings - Part 2
- Hue - Overview
- Other Technologies Worth Mentioning
- Chapter 9 : Feeding Data to Your Cluster
- Chapter 10 : Analyzing Streams of Data
- Spark Streaming: Introduction
- Activity - Analyzing Web Logs Published with Flume using Spark Streaming
- Exercise - Monitor Flume-Published Logs for Errors in Real Time
- Exercise Solution: Aggregating the Hypertext Transfer Protocol (HTTP) Access Codes with Spark Streaming
- Apache Storm: Introduction
- Activity - Counting Words with Storm
- Flink: Overview
- Activity - Counting Words with Flink
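As a rough illustration of the log-analysis theme in this chapter, the sketch below tallies HTTP status codes from Apache-style access log lines in plain Python. It stands in for the Flume-to-Spark-Streaming pipeline the course builds; `count_status_codes` and the regex are my own assumptions, not course code.

```python
import re
from collections import Counter

# In combined/common log format, the status code follows the quoted request,
# e.g.: 10.0.0.1 - - [29/Nov/2015:03:51:00 +0000] "GET /missing HTTP/1.1" 404 209
LOG_PATTERN = re.compile(r'" (\d{3}) ')

def count_status_codes(log_lines):
    # Tally each HTTP status code seen across the given log lines
    counter = Counter()
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            counter[match.group(1)] += 1
    return counter

logs = [
    '66.249.75.168 - - [29/Nov/2015:03:50:05 +0000] "GET /robots.txt HTTP/1.1" 200 55',
    '10.0.0.1 - - [29/Nov/2015:03:51:00 +0000] "GET /missing HTTP/1.1" 404 209',
    '10.0.0.2 - - [29/Nov/2015:03:52:10 +0000] "GET / HTTP/1.1" 200 1024',
]
print(count_status_codes(logs))  # Counter({'200': 2, '404': 1})
```

A streaming framework applies the same per-line logic continuously over micro-batches or event streams instead of a fixed list.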
- Chapter 11 : Designing Real-World Systems
- The Best of the Rest
- Review: How the Pieces Fit Together
- Understanding Your Requirements
- Sample Application: Consuming Web Server Logs and Keeping Track of Top-Sellers
- Sample Application: Serving Movie Recommendations to a Website
- Exercise - Designing a System to Report Web Sessions Per Day
- Exercise Solution: Designing a System to Count Daily Sessions
- Chapter 12 : Learning More
Product information
- Title: The Ultimate Hands-On Hadoop
- Author(s): Frank Kane
- Release date: December 2020
- Publisher(s): Packt Publishing
- ISBN: 9781788478489