If you want to start working on big data projects fast, this is the guide you've been looking for. Delve deep into Talend and discover just how easily you can revolutionize your data handling and presentation.
Talend, a successful open source data integration solution, accelerates the adoption of new big data technologies and integrates them efficiently into your existing IT infrastructure. It achieves this through its intuitive graphical language, its many connectors to the Hadoop ecosystem, and its array of tools for data integration, quality, management, and governance.
This is a concise, pragmatic book that will guide you through designing and implementing big data transfers and performing big data analytics jobs using Hadoop technologies such as HDFS, HBase, Hive, Pig, and Sqoop. You will learn how to write complex processing jobs and how to leverage the power of Hadoop projects by designing graphical Talend jobs using the business modeler, the metadata repository, and a palette of configurable components.
You will start by learning how to process large amounts of data using Talend's big data components and how to write jobs that read from and write to HDFS. You will then look at how to use Hadoop projects to process data and how to export the results to your favourite relational database system.
You will learn how to implement Hive ELT jobs, Pig aggregation and filtering jobs, and simple Sqoop jobs using the Talend big data component palette. You will also learn the basics of Twitter sentiment analysis and how to format data with Apache Hive.
Talend for Big Data will enable you to start working on big data projects immediately, from simple processing projects to complex projects using common big data patterns.
What You Will Learn
- Discover the structure of the Talend Unified Platform
- Work with Talend HDFS components
- Implement ELT processing jobs using Talend Hive components
- Load, filter, aggregate, and store data using Talend Pig components
- Integrate HDFS with RDBMS using Sqoop components
- Use the streaming pattern for big data
- Reuse the partitioning pattern for big data
Table of contents
- About the Author
- About the Reviewers
- 1. Getting Started with Talend Big Data
- 2. Building Our First Big Data Job
- 3. Formatting Data
- 4. Processing Tweets with Apache Hive
- 5. Aggregate Data with Apache Pig
- 6. Back to the SQL Database
- 7. Big Data Architecture and Integration Patterns
- A. Installing Your Hadoop Cluster with Cloudera CDH VM
- Title: Talend for Big Data
- Release date: February 2014
- Publisher(s): Packt Publishing
- ISBN: 9781782169499