Fast Data Processing with Spark 2 - Third Edition

Book description

Learn how to use Spark to process big data at speed and scale for sharper analytics. Put the principles into practice for faster, slicker big data projects.

About This Book

  • A quick way to get started with Spark – and reap the rewards

  • From analytics to engineering your big data architecture, we’ve got it covered

  • Bring your Scala and Java knowledge – and put it to work on new and exciting problems

    Who This Book Is For

    This book is for developers with little to no knowledge of Spark, but with a background in Scala/Java programming. It’s recommended that you have experience working with big data and a strong interest in data science.

    What You Will Learn

  • Install and set up Spark in your cluster

  • Prototype distributed applications with Spark's interactive shell

  • Perform data wrangling using the new DataFrame APIs

  • Get to know the different ways to interact with Spark's distributed representation of data (RDDs)

  • Query Spark with a SQL-like query syntax

  • See how Spark works with big data

  • Implement machine learning systems with highly scalable algorithms

  • Use R, the popular statistical language, to work with Spark

  • Apply interesting graph algorithms and graph processing with GraphX

    In Detail

    When people want a way to process big data at speed, Spark is invariably the solution. With its ease of development (in comparison to the relative complexity of Hadoop), it’s unsurprising that it’s becoming popular with data analysts and engineers everywhere.

    Beginning with the fundamentals, we’ll show you how to get set up with Spark with minimum fuss. You’ll then get to grips with some simple APIs before investigating machine learning and graph processing – throughout we’ll make sure you know exactly how to apply your knowledge.

    You will also learn how to use the Spark shell and how to load data, before finding out how to build and run your own Spark applications. Discover how to manipulate your RDDs and get stuck into a range of DataFrame APIs. As if that’s not enough, you’ll also learn some useful machine learning algorithms with the help of Spark MLlib, and how to integrate Spark with R. We’ll also make sure you’re confident and prepared for graph processing, as you learn more about the GraphX API.

    Style and approach

    This book is a basic, step-by-step tutorial that will help you take advantage of all that Spark has to offer.

    Table of contents

    1. Fast Data Processing with Spark 2 Third Edition
      1. Fast Data Processing with Spark 2 Third Edition
      2. Credits
      3. About the Author
      4. About the Reviewers
        1. Why subscribe?
      6. Preface
        1. What this book covers
        2. What you need for this book
        3. Who this book is for
        4. Conventions
        5. Reader feedback
        6. Customer support
          1. Downloading the example code
          2. Errata
          3. Piracy
          4. Questions
      7. 1. Installing Spark and Setting Up Your Cluster
        1. Directory organization and convention
        2. Installing the prebuilt distribution
        3. Building Spark from source
          1. Downloading the source
          2. Compiling the source with Maven
          3. Compilation switches
          4. Testing the installation
        4. Spark topology
        5. A single machine
        6. Running Spark on EC2
          1. Downloading the EC2 scripts
          2. Running Spark on EC2 with the scripts
          3. Deploying Spark on Elastic MapReduce
        7. Deploying Spark with Chef (Opscode)
        8. Deploying Spark on Mesos
        9. Spark on YARN
        10. Spark standalone mode
        11. References
        12. Summary
      8. 2. Using the Spark Shell
        1. The Spark shell
          1. Exiting out of the shell
          2. Using Spark shell to run the book code
        2. Loading a simple text file
        3. Interactively loading data from S3
          1. Running the Spark shell in Python
        4. Summary
      9. 3. Building and Running a Spark Application
        1. Building Spark applications
        2. Data wrangling with iPython
        3. Developing Spark with Eclipse
        4. Developing Spark with other IDEs
        5. Building your Spark job with Maven
        6. Building your Spark job with something else
        7. References
        8. Summary
      10. 4. Creating a SparkSession Object
        1. SparkSession versus SparkContext
        2. Building a SparkSession object
        3. SparkContext - metadata
        4. Shared Java and Scala APIs
        5. Python
        6. iPython
        7. Reference
        8. Summary
      11. 5. Loading and Saving Data in Spark
        1. Spark abstractions
          1. RDDs
        2. Data modalities
        3. Data modalities and Datasets/DataFrames/RDDs
        4. Loading data into an RDD
        5. Saving your data
        6. References
        7. Summary
      12. 6. Manipulating Your RDD
        1. Manipulating your RDD in Scala and Java
          1. Scala RDD functions
          2. Functions for joining the PairRDD classes
          3. Other PairRDD functions
          4. Double RDD functions
          5. General RDD functions
          6. Java RDD functions
            1. Spark Java function classes
            2. Common Java RDD functions
            3. Methods for combining JavaRDDs
            4. Functions on JavaPairRDDs
        2. Manipulating your RDD in Python
          1. Standard RDD functions
          2. The PairRDD functions
        3. References
        4. Summary
      13. 7. Spark 2.0 Concepts
        1. Code and Datasets for the rest of the book
          1. Code
          2. IDE
          3. iPython startup and test
          4. Datasets
            1. Car-mileage
            2. Northwind industries sales data
            3. Titanic passenger list
            4. State of the Union speeches by POTUS
            5. MovieLens Dataset
        2. The data scientist and Spark features
          1. Who is this data scientist DevOps person?
          2. The Data Lake architecture
            1. Data Hub
            2. Reporting Hub
            3. Analytics Hub
        3. Spark v2.0 and beyond
        4. Apache Spark - evolution
        5. Apache Spark - the full stack
        6. The art of a big data store - Parquet
          1. Column projection and data partition
          2. Compression
          3. Smart data storage and predicate pushdown
          4. Support for evolving schema
          5. Performance
        7. References
        8. Summary
      14. 8. Spark SQL
        1. The Spark SQL architecture
        2. Spark SQL how-to in a nutshell
          1. Spark SQL with Spark 2.0
        3. Spark SQL programming
          1. Datasets/DataFrames
          2. SQL access to a simple data table
            1. Handling multiple tables with Spark SQL
            2. Aftermath
        4. References
        5. Summary
      15. 9. Foundations of Datasets/DataFrames – The Proverbial Workhorse for Data Scientists
        1. Datasets - a quick introduction
        2. Dataset APIs - an overview
          1. org.apache.spark.sql.SparkSession/pyspark.sql.SparkSession
          2. org.apache.spark.sql.Dataset/pyspark.sql.DataFrame
          3. org.apache.spark.sql.{Column,Row}/pyspark.sql.(Column,Row)
            1. org.apache.spark.sql.Column
            2. org.apache.spark.sql.Row
          4. org.apache.spark.sql.functions/pyspark.sql.functions
        3. Dataset interfaces and functions
          1. Read/write operations
          2. Aggregate functions
          3. Statistical functions
          4. Scientific functions
          5. Data wrangling with Datasets
            1. Reading data into the respective Datasets
            2. Aggregate and sort
            3. Date columns, totals, and aggregations
              1. The OrderTotal column
              2. Date operations
            4. Final aggregations for the answers we want
        4. References
        5. Summary
      16. 10. Spark with Big Data
        1. Parquet - an efficient and interoperable big data format
          1. Saving files in the Parquet format
          2. Loading Parquet files
          3. Saving processed RDDs in the Parquet format
        2. HBase
          1. Loading from HBase
          2. Saving to HBase
          3. Other HBase operations
        3. Reference
        4. Summary
      17. 11. Machine Learning with Spark ML Pipelines
        1. Spark's machine learning algorithm table
        2. Spark machine learning APIs - ML pipelines and MLlib
        3. ML pipelines
        4. Spark ML examples
        5. The API organization
        6. Basic statistics
          1. Loading data
          2. Computing statistics
        7. Linear regression
          1. Data transformation and feature extraction
          2. Data split
          3. Predictions using the model
          4. Model evaluation
        8. Classification
          1. Loading data
          2. Data transformation and feature extraction
          3. Data split
          4. The regression model
          5. Prediction using the model
          6. Model evaluation
        9. Clustering
          1. Loading data
          2. Data transformation and feature extraction
          3. Data split
          4. Predicting using the model
          5. Model evaluation and interpretation
          6. Clustering model interpretation
        10. Recommendation
          1. Loading data
          2. Data transformation and feature extraction
          3. Data splitting
          4. Predicting using the model
          5. Model evaluation and interpretation
        11. Hyperparameters
        12. The final thing
        13. References
        14. Summary
      18. 12. GraphX
        1. Graphs and graph processing - an introduction
        2. Spark GraphX
        3. GraphX - computational model
        4. The first example - graph
        5. Building graphs
        6. The GraphX API landscape
        7. Structural APIs
          1. What's wrong with the output?
        8. Community, affiliation, and strengths
        9. Algorithms
          1. Graph parallel computation APIs
            1. The aggregateMessages() API
              1. The first example - the oldest follower
              2. The second example - the oldest followee
              3. The third example - the youngest follower/followee
              4. The fourth example - inDegree/outDegree
        10. Partition strategy
        11. Case study - AlphaGo tweets analytics
          1. Data pipeline
          2. GraphX modeling
          3. GraphX processing and algorithms
        12. References
        13. Summary

    Product information

    • Title: Fast Data Processing with Spark 2 - Third Edition
    • Author(s): Krishna Sankar
    • Release date: October 2016
    • Publisher(s): Packt Publishing
    • ISBN: 9781785889271