Video description
Apache Hadoop is a freely available, open-source toolset that enables big data analysis. This Hadoop Fundamentals LiveLessons tutorial demonstrates the core components of Hadoop, including the Hadoop Distributed File System (HDFS) and MapReduce. In addition, the tutorial demonstrates how to use Hadoop at several levels, including the native Java interface, C++ pipes, and the universal streaming program interface. Examples of higher-level tools include the Pig scripting language and the Hive SQL-like interface. Finally, the tutorial presents the steps for installing Hadoop on a desktop virtual machine, in a cloud environment, and on a local stand-alone cluster. The topics covered apply to Hadoop version 2 (i.e., MR2 or YARN).
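As a flavor of the Hive SQL-like interface mentioned above, the short query below is a minimal sketch of a word count; the table war_and_peace and its single STRING column line are illustrative names, not examples taken from the lessons themselves:

    -- Hive word count (illustrative): split each line on whitespace,
    -- explode the words into rows, and report the ten most frequent.
    SELECT word, COUNT(*) AS freq
    FROM (SELECT explode(split(line, '\\s+')) AS word FROM war_and_peace) w
    GROUP BY word
    ORDER BY freq DESC
    LIMIT 10;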
About the Author:
Douglas Eadline, PhD, began his career as a practitioner and a chronicler of the Linux Cluster HPC revolution and now documents big data analytics. Starting with the first Beowulf How-To document, Dr. Eadline has written hundreds of articles, white papers, and instructional documents covering virtually all aspects of HPC computing. Prior to starting and editing the popular ClusterMonkey.net website in 2005, he served as Editor-in-Chief of ClusterWorld Magazine and was Senior HPC Editor for Linux Magazine. Currently, he is a consultant to the HPC industry and writes a monthly column in HPC Admin Magazine. Both clients and readers have recognized Dr. Eadline's ability to present a "technological value proposition" in a clear and accurate style. He has practical, hands-on experience in many aspects of HPC, including hardware and software design, benchmarking, storage, GPU, cloud, and parallel computing.
Table of contents
- Introduction
- Lesson 1: Background Concepts
- Lesson 2: Running Hadoop on a Desktop or Laptop
- Lesson 3: The Hadoop Distributed File System
- Lesson 4: Hadoop MapReduce
- Lesson 5: Hadoop Examples
- Lesson 6: Higher Level Tools
- Learning objectives
- 6.1 Use Pig
- 6.2 Use Hive
- 6.3 Demonstrate an Apache Flume example—Part 1
- 6.3 Demonstrate an Apache Flume example—Part 2
- 6.4 Demonstrate an Apache Sqoop example—Part 1
- 6.4 Demonstrate an Apache Sqoop example—Part 2
- 6.5 Demonstrate an Apache Oozie example—Part 1
- 6.5 Demonstrate an Apache Oozie example—Part 2
- Lesson 7: Setting Up Hadoop in the Cloud
- Lesson 8: Set Up Hadoop on a Local Cluster
- Learning objectives
- 8.1 Specify and prepare servers
- 8.2 Install and configure Hadoop Core
- 8.3 Install and configure Pig and Hive
- 8.4 Install and configure Ganglia
- 8.5 Perform simple administration and monitoring
- 8.6 Install and configure Hadoop using Ambari
- 8.7 Perform simple administration and monitoring with Ambari—Part 1
- 8.7 Perform simple administration and monitoring with Ambari—Part 2
- Summary
Product information
- Title: Hadoop Fundamentals LiveLessons (Video Training), 2/e
- Author(s): Douglas Eadline
- Release date: November 2014
- Publisher(s): Addison-Wesley Professional
- ISBN: 013405248X