Book description
Over 90 hands-on recipes to help you learn and master the intricacies of Apache Hadoop 2.X, YARN, Hive, Pig, Oozie, Flume, Sqoop, Apache Spark, and Mahout
About This Book
- Implement outstanding machine learning use cases on your own analytics models and processes.
- Solve common problems encountered when working with the Hadoop ecosystem.
- Build step-by-step implementations of end-to-end big data use cases.
Who This Book Is For
Readers who have a basic knowledge of big data systems and want to advance their skills through hands-on recipes.
What You Will Learn
- Install and maintain a Hadoop 2.X cluster and its ecosystem
- Write advanced Map Reduce programs and understand design patterns
- Perform advanced data analysis using Hive, Pig, and Map Reduce programs
- Import and export data from various sources using Sqoop and Flume
- Store data in file formats such as Text, Sequence, Parquet, ORC, and RC files
- Apply machine learning principles with libraries such as Mahout
- Process batch and streaming data using Apache Spark
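Several of the skills listed above revolve around the same map-shuffle-reduce pattern that underpins both Map Reduce and Spark. As a rough illustration only (plain Python standing in for a real Hadoop job, with a hypothetical two-line input instead of an HDFS file), the canonical word-count example can be sketched as:

```python
from collections import defaultdict

def map_phase(lines):
    """Mapper: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Hypothetical sample input standing in for an HDFS text file.
lines = ["big data is big", "data is everywhere"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

In a real Hadoop job the mapper and reducer run as distributed tasks and the shuffle is handled by the framework; the book's recipes show the full Java implementations.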
In Detail
Big data is now a core requirement for most organizations, which produce huge amounts of data every day. With the arrival of tools such as Hadoop, it has become easier for everyone to solve big data problems efficiently and at minimal cost. Grasping machine learning techniques will help you greatly in building predictive models and using this data to make the right decisions for your organization.
Hadoop Real-World Solutions Cookbook gives readers insights into learning and mastering big data through recipes. The book not only explains most of the big data tools on the market but also provides best practices for using them. It offers recipes based on the latest versions of Apache Hadoop 2.X, YARN, Hive, Pig, Sqoop, Flume, Apache Spark, Mahout, and many other ecosystem tools. This real-world-solutions cookbook is packed with handy recipes you can apply to your own everyday issues, and each chapter provides in-depth recipes that can be referenced easily, with detailed coverage of the latest technologies such as YARN and Apache Spark. On completing this book, readers will be well on their way to becoming big data experts.
This guide is an invaluable tutorial if you are planning to implement a big data warehouse for your business.
Style and approach
An easy-to-follow guide that walks you through the world of big data. Each tool in the Hadoop ecosystem is explained in detail, and the recipes are ordered so that readers can implement them sequentially. Plenty of reference links are provided for advanced reading.
Table of contents
- Hadoop Real-World Solutions Cookbook Second Edition
- Table of Contents
- Hadoop Real-World Solutions Cookbook Second Edition
- Credits
- About the Author
- Acknowledgements
- About the Reviewer
- www.PacktPub.com
- Preface
- 1. Getting Started with Hadoop 2.X
- Introduction
- Installing a single-node Hadoop cluster
- Installing a multi-node Hadoop cluster
- Adding new nodes to existing Hadoop clusters
- Executing the balancer command for uniform data distribution
- Entering and exiting from the safe mode in a Hadoop cluster
- Decommissioning DataNodes
- Performing benchmarking on a Hadoop cluster
- 2. Exploring HDFS
- Introduction
- Loading data from a local machine to HDFS
- Exporting HDFS data to a local machine
- Changing the replication factor of an existing file in HDFS
- Setting the HDFS block size for all the files in a cluster
- Setting the HDFS block size for a specific file in a cluster
- Enabling transparent encryption for HDFS
- Importing data from another Hadoop cluster
- Recycling deleted data from trash to HDFS
- Saving compressed data in HDFS
- 3. Mastering Map Reduce Programs
- Introduction
- Writing the Map Reduce program in Java to analyze web log data
- Executing the Map Reduce program in a Hadoop cluster
- Adding support for a new writable data type in Hadoop
- Implementing a user-defined counter in a Map Reduce program
- Map Reduce program to find the top X
- Map Reduce program to find distinct values
- Map Reduce program to partition data using a custom partitioner
- Writing Map Reduce results to multiple output files
- Performing Reduce side Joins using Map Reduce
- Unit testing the Map Reduce code using MRUnit
- 4. Data Analysis Using Hive, Pig, and HBase
- Introduction
- Storing and processing Hive data in a sequential file format
- Storing and processing Hive data in the RC file format
- Storing and processing Hive data in the ORC file format
- Storing and processing Hive data in the Parquet file format
- Performing FILTER By queries in Pig
- Performing Group By queries in Pig
- Performing Order By queries in Pig
- Performing JOINS in Pig
- Writing a user-defined function in Pig
- Analyzing web log data using Pig
- Performing HBase operations in the CLI
- Performing HBase operations in Java
- Executing MapReduce programming with an HBase table
- 5. Advanced Data Analysis Using Hive
- Introduction
- Processing JSON data in Hive using JSON SerDe
- Processing XML data in Hive using XML SerDe
- Processing Hive data in the Avro format
- Writing a user-defined function in Hive
- Performing table joins in Hive
- Executing map side joins in Hive
- Performing context Ngram in Hive
- Call Data Record Analytics using Hive
- Twitter sentiment analysis using Hive
- Implementing Change Data Capture using Hive
- Multiple table inserting using Hive
- 6. Data Import/Export Using Sqoop and Flume
- Introduction
- Importing data from RDBMS to HDFS using Sqoop
- Exporting data from HDFS to RDBMS
- Using query operator in Sqoop import
- Importing data using Sqoop in compressed format
- Performing Atomic export using Sqoop
- Importing data into Hive tables using Sqoop
- Importing data into HDFS from Mainframes
- Incremental import using Sqoop
- Creating and executing a Sqoop job
- Importing data from RDBMS to HBase using Sqoop
- Importing Twitter data into HDFS using Flume
- Importing data from Kafka into HDFS using Flume
- Importing web logs data into HDFS using Flume
- 7. Automation of Hadoop Tasks Using Oozie
- Introduction
- Implementing a Sqoop action job using Oozie
- Implementing a Map Reduce action job using Oozie
- Implementing a Java action job using Oozie
- Implementing a Hive action job using Oozie
- Implementing a Pig action job using Oozie
- Implementing an e-mail action job using Oozie
- Executing parallel jobs using Oozie (fork)
- Scheduling a job in Oozie
- 8. Machine Learning and Predictive Analytics Using Mahout and R
- Introduction
- Setting up the Mahout development environment
- Creating an item-based recommendation engine using Mahout
- Creating a user-based recommendation engine using Mahout
- Predictive analytics on Bank Data using Mahout
- Text data clustering with K-Means using Mahout
- Population Data Analytics using R
- Twitter Sentiment Analytics using R
- Performing Predictive Analytics using R
- 9. Integration with Apache Spark
- Introduction
- Running Spark standalone
- Running Spark on YARN
- Performing Olympics Athletes analytics using the Spark Shell
- Creating Twitter trending topics using Spark Streaming
- Analyzing Parquet files using Spark
- Analyzing JSON data using Spark
- Processing graphs using GraphX
- Conducting predictive analytics using Spark MLlib
- 10. Hadoop Use Cases
- Index
Product information
- Title: Hadoop Real-World Solutions Cookbook - Second Edition
- Author(s):
- Release date: March 2016
- Publisher(s): Packt Publishing
- ISBN: 9781784395506