Large Scale Machine Learning with Python

Book description

Learn to build powerful machine learning models quickly and deploy large-scale predictive applications

About This Book

  • Design, engineer and deploy scalable machine learning solutions with the power of Python
  • Take command of Hadoop and Spark with Python for effective machine learning on a MapReduce framework
  • Build state-of-the-art models and develop personalized recommendations to perform machine learning at scale

Who This Book Is For

This book is for anyone who intends to work with large and complex data sets. Familiarity with basic Python and machine learning concepts is recommended. A working knowledge of statistics and computational mathematics would also be helpful.

What You Will Learn

  • Apply the most scalable machine learning algorithms
  • Work with modern state-of-the-art large-scale machine learning techniques
  • Increase predictive accuracy with deep learning and scalable data-handling techniques
  • Improve your work by combining the MapReduce framework with Spark
  • Build powerful ensembles at scale
  • Use data streams to train linear and non-linear predictive models from extremely large datasets on a single machine (see the sketch after this list)
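
As a taste of the streaming approach the last bullet refers to, here is a minimal sketch of out-of-core learning with scikit-learn's SGDClassifier and pandas chunked I/O. The file name train.csv, its target column, and the chunk size are illustrative assumptions, not taken from the book.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import SGDClassifier

    # The default hinge loss trains a linear SVM incrementally, one chunk at a time
    clf = SGDClassifier(random_state=1)
    classes = np.array([0, 1])  # all class labels must be declared before the first partial_fit

    # Stream a (hypothetical) CSV in chunks so the full dataset never sits in memory at once
    for chunk in pd.read_csv('train.csv', chunksize=10000):
        y = chunk['target'].values
        X = chunk.drop('target', axis=1).values
        clf.partial_fit(X, y, classes=classes)

The same pattern extends to non-linear models by transforming each chunk (for example with feature hashing or kernel approximations) before the partial_fit call.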

In Detail

Large Python machine learning projects raise problems around specialized architectures and designs that many data scientists have yet to tackle, yet the need to find algorithms and to design and build platforms that handle large datasets keeps growing. Data scientists have to manage and maintain increasingly complex data projects, and with the rise of big data comes an increasing demand for computational and algorithmic efficiency. Large Scale Machine Learning with Python uncovers a new wave of machine learning algorithms that meet scalability demands together with high predictive accuracy.

Dive into scalable machine learning and the three forms of scalability. Speed up algorithms that can be used on a desktop computer with tips on parallelization and memory allocation. Get to grips with new algorithms that are specifically designed for large projects and can handle bigger files, and learn about machine learning in big data environments. We will also cover the most effective machine learning techniques on a MapReduce framework with Hadoop and Spark in Python.
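
To give a flavour of the Spark side mentioned above, the following is a minimal sketch of fitting a model with pySpark's ML library. The HDFS path and the column names f1, f2, and label are assumptions for illustration only, not the book's own example.

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("large-scale-ml").getOrCreate()

    # Hypothetical CSV on HDFS with numeric features f1, f2 and a binary label column
    df = spark.read.csv("hdfs:///data/train.csv", header=True, inferSchema=True)

    # Assemble the raw columns into the single vector column Spark ML expects
    assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
    train = assembler.transform(df)

    # Fit a logistic regression distributed across the cluster's executors
    model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)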

Style and Approach

This efficient and practical title is packed with the techniques, tips, and tools you need to ensure your large-scale Python machine learning runs swiftly and seamlessly.

Large-scale machine learning tackles different issues from those addressed by most titles currently on the market. Those working with Hadoop clusters and in data-intensive environments can now learn effective ways of building powerful machine learning models from prototype to production.

This book is written in a style that programmers coming from other languages (R, Julia, Java, MATLAB) can follow.

Table of contents

  1. Large Scale Machine Learning with Python
    1. Table of Contents
    2. Large Scale Machine Learning with Python
    3. Credits
    4. About the Authors
    5. About the Reviewer
    6. www.PacktPub.com
      1. eBooks, discount offers, and more
        1. Why subscribe?
    7. Preface
      1. What this book covers
      2. What you need for this book
      3. Who this book is for
      4. Conventions
      5. Reader feedback
      6. Customer support
        1. Downloading the example code
        2. Downloading the color images of this book
        3. Errata
        4. Piracy
        5. Questions
    8. 1. First Steps to Scalability
      1. Explaining scalability in detail
        1. Making large scale examples
        2. Introducing Python
        3. Scale up with Python
        4. Scale out with Python
      2. Python for large scale machine learning
        1. Choosing between Python 2 and Python 3
        2. Installing Python
        3. Step-by-step installation
        4. The installation of packages
        5. Package upgrades
        6. Scientific distributions
        7. Introducing Jupyter/IPython
      3. Python packages
        1. NumPy
        2. SciPy
        3. Pandas
        4. Scikit-learn
          1. The matplotlib package
          2. Gensim
          3. H2O
          4. XGBoost
          5. Theano
          6. TensorFlow
          7. The sknn library
          8. Theanets
          9. Keras
          10. Other useful packages to install on your system
      4. Summary
    9. 2. Scalable Learning in Scikit-learn
      1. Out-of-core learning
        1. Subsampling as a viable option
        2. Optimizing one instance at a time
        3. Building an out-of-core learning system
      2. Streaming data from sources
        1. Datasets to try the real thing yourself
        2. The first example – streaming the bike-sharing dataset
        3. Using pandas I/O tools
        4. Working with databases
        5. Paying attention to the ordering of instances
      3. Stochastic learning
        1. Batch gradient descent
        2. Stochastic gradient descent
        3. The Scikit-learn SGD implementation
        4. Defining SGD learning parameters
      4. Feature management with data streams
        1. Describing the target
        2. The hashing trick
        3. Other basic transformations
        4. Testing and validation in a stream
        5. Trying SGD in action
      5. Summary
    10. 3. Fast SVM Implementations
      1. Datasets to experiment with on your own
        1. The bike-sharing dataset
        2. The covertype dataset
      2. Support Vector Machines
        1. Hinge loss and its variants
        2. Understanding the Scikit-learn SVM implementation
        3. Pursuing nonlinear SVMs by subsampling
        4. Achieving SVM at scale with SGD
      3. Feature selection by regularization
      4. Including non-linearity in SGD
        1. Trying explicit high-dimensional mappings
      5. Hyperparameter tuning
        1. Other alternatives for SVM fast learning
          1. Nonlinear and faster with Vowpal Wabbit
          2. Installing VW
          3. Understanding the VW data format
          4. Python integration
          5. A few examples using reductions for SVM and neural nets
          6. Faster bike-sharing
          7. The covertype dataset crunched by VW
      6. Summary
    11. 4. Neural Networks and Deep Learning
      1. The neural network architecture
        1. What and how neural networks learn
        2. Choosing the right architecture
          1. The input layer
          2. The hidden layer
          3. The output layer
        3. Neural networks in action
        4. Parallelization for sknn
      2. Neural networks and regularization
      3. Neural networks and hyperparameter optimization
      4. Neural networks and decision boundaries
      5. Deep learning at scale with H2O
        1. Large scale deep learning with H2O
        2. Gridsearch on H2O
      6. Deep learning and unsupervised pretraining
      7. Deep learning with theanets
      8. Autoencoders and unsupervised learning
        1. Autoencoders
      9. Summary
    12. 5. Deep Learning with TensorFlow
      1. TensorFlow installation
        1. TensorFlow operations
          1. GPU computing
          2. Linear regression with SGD
          3. A neural network from scratch in TensorFlow
      2. Machine learning on TensorFlow with SkFlow
        1. Deep learning with large files – incremental learning
      3. Keras and TensorFlow installation
      4. Convolutional Neural Networks in TensorFlow through Keras
        1. The convolution layer
        2. The pooling layer
        3. The fully connected layer
      5. CNNs with an incremental approach
      6. GPU Computing
      7. Summary
    13. 6. Classification and Regression Trees at Scale
      1. Bootstrap aggregation
      2. Random forest and extremely randomized forest
      3. Fast parameter optimization with randomized search
        1. Extremely randomized trees and large datasets
      4. CART and boosting
        1. Gradient Boosting Machines
          1. max_depth
          2. learning_rate
          3. Subsample
          4. Faster GBM with warm_start
            1. Speeding up GBM with warm_start
          5. Training and storing GBM models
      5. XGBoost
        1. XGBoost regression
          1. XGBoost and variable importance
        2. XGBoost streaming large datasets
        3. XGBoost model persistence
      6. Out-of-core CART with H2O
        1. Random forest and gridsearch on H2O
        2. Stochastic gradient boosting and gridsearch on H2O
      7. Summary
    14. 7. Unsupervised Learning at Scale
      1. Unsupervised methods
      2. Feature decomposition – PCA
        1. Randomized PCA
        2. Incremental PCA
        3. Sparse PCA
      3. PCA with H2O
      4. Clustering – K-means
        1. Initialization methods
        2. K-means assumptions
        3. Selection of the best K
        4. Scaling K-means – mini-batch
      5. K-means with H2O
      6. LDA
        1. Scaling LDA – memory, CPUs, and machines
      7. Summary
    15. 8. Distributed Environments – Hadoop and Spark
      1. From a standalone machine to a bunch of nodes
        1. Why do we need a distributed framework?
      2. Setting up the VM
        1. VirtualBox
        2. Vagrant
        3. Using the VM
      3. The Hadoop ecosystem
        1. Architecture
        2. HDFS
        3. MapReduce
        4. YARN
      4. Spark
        1. pySpark
      5. Summary
    16. 9. Practical Machine Learning with Spark
      1. Setting up the VM for this chapter
      2. Sharing variables across cluster nodes
        1. Broadcast read-only variables
        2. Accumulators write-only variables
        3. Broadcast and accumulators together – an example
      3. Data preprocessing in Spark
        1. JSON files and Spark DataFrames
        2. Dealing with missing data
        3. Grouping and creating tables in-memory
        4. Writing the preprocessed DataFrame or RDD to disk
        5. Working with Spark DataFrames
      4. Machine learning with Spark
        1. Spark on the KDD99 dataset
        2. Reading the dataset
        3. Feature engineering
        4. Training a learner
        5. Evaluating a learner's performance
        6. The power of the ML pipeline
        7. Manual tuning
        8. Cross-validation
          1. Final cleanup
      5. Summary
    17. A. Introduction to GPUs and Theano
      1. GPU computing
      2. Theano – parallel computing on the GPU
      3. Installing Theano
    18. Index

Product information

  • Title: Large Scale Machine Learning with Python
  • Author(s): Bastiaan Sjardin, Luca Massaron, Alberto Boschetti
  • Release date: August 2016
  • Publisher(s): Packt Publishing
  • ISBN: 9781785887215