Data Science Projects with Python

Book description

Gain hands-on experience with industry-standard data analysis and machine learning tools in Python

Key Features

  • Tackle data science problems by identifying the problem to be solved
  • Illustrate patterns in data using appropriate visualizations
  • Implement suitable machine learning algorithms to gain insights from data

Book Description

Data Science Projects with Python is designed to give you practical guidance on industry-standard data analysis and machine learning tools in Python, by applying them to realistic data problems. You will learn how to use pandas and Matplotlib to critically examine datasets with summary statistics and graphs, and to extract the insights you seek. You will build your knowledge as you prepare data using the scikit-learn package and feed it to machine learning algorithms such as regularized logistic regression and random forest. You'll discover how to tune algorithms to provide the most accurate predictions on new and unseen data. As you progress, you'll gain insights into the workings and output of these algorithms, building your understanding of both the predictive capabilities of the models and why they make the predictions they do.
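The exploratory workflow described above can be sketched in a few lines. This is a minimal illustration, not code from the book: it uses a small synthetic DataFrame (the column names `credit_limit` and `age` are placeholders inspired by the credit case study, not the actual dataset) to show summary statistics with pandas and histograms with Matplotlib.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Synthetic stand-in for the case study data (column names are illustrative)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "credit_limit": rng.integers(10_000, 500_000, size=200),
    "age": rng.integers(21, 70, size=200),
})

# Summary statistics: count, mean, std, and quartiles for each column
print(df.describe())

# One histogram per feature, saved to a file for inspection
df.hist(bins=20)
plt.tight_layout()
plt.savefig("feature_histograms.png")
```

In a Jupyter notebook, the call to `plt.savefig` would typically be replaced by inline display of the figure.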

By the end of this book, you will have the necessary skills to confidently use machine learning algorithms to perform detailed data analysis and extract meaningful insights from unstructured data.

What you will learn

  • Install the required packages to set up a data science coding environment
  • Load data into a Jupyter notebook running Python
  • Use Matplotlib to create data visualizations
  • Fit machine learning models using scikit-learn
  • Use lasso and ridge regression to regularize your models
  • Compare performance between models to find the best outcomes
  • Use k-fold cross-validation to select model hyperparameters
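Several of the skills listed above fit together in a single short script. The sketch below is an assumption-laden illustration, not material from the book: it uses scikit-learn's synthetic `make_classification` data in place of the case study dataset, fits an L2-regularized (ridge-style) logistic regression, and selects the regularization strength `C` by k-fold cross-validation.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic binary classification data (stand-in for the case study data)
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# L2 regularization is the default penalty for LogisticRegression;
# smaller C means stronger regularization.
grid = GridSearchCV(
    LogisticRegression(penalty="l2", solver="liblinear"),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=4,  # 4-fold cross-validation on the training set
)
grid.fit(X_train, y_train)

print("Best C:", grid.best_params_["C"])
print("Held-out accuracy:", grid.score(X_test, y_test))
```

Scoring the fitted grid search on a held-out test set, as in the last line, is how the "compare performance between models" step is usually made honest: the test data plays no part in hyperparameter selection.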

Who this book is for

If you are a data analyst, data scientist, or business analyst who wants to get started using Python and machine learning techniques to analyze data and predict outcomes, this book is for you. Basic knowledge of Python and data analytics will help you get the most from this book. Familiarity with mathematical concepts such as algebra and basic statistics will also be useful.

Table of contents

  1. Preface
    1. About the Book
      1. About the Author
      2. Objectives
      3. Audience
      4. Approach
      5. Hardware Requirements
      6. Software Requirements
      7. Installation and Setup
      8. Conventions
  2. Chapter 1: Data Exploration and Cleaning
    1. Introduction
    2. Python and the Anaconda Package Management System
      1. Indexing and the Slice Operator
      2. Exercise 1: Examining Anaconda and Getting Familiar with Python
    3. Different Types of Data Science Problems
    4. Loading the Case Study Data with Jupyter and pandas
      1. Exercise 2: Loading the Case Study Data in a Jupyter Notebook
      2. Getting Familiar with Data and Performing Data Cleaning
      3. The Business Problem
      4. Data Exploration Steps
      5. Exercise 3: Verifying Basic Data Integrity
      6. Boolean Masks
      7. Exercise 4: Continuing Verification of Data Integrity
      8. Exercise 5: Exploring and Cleaning the Data
    5. Data Quality Assurance and Exploration
      1. Exercise 6: Exploring the Credit Limit and Demographic Features
      2. Deep Dive: Categorical Features
      3. Exercise 7: Implementing OHE for a Categorical Feature
    6. Exploring the Financial History Features in the Dataset
      1. Activity 1: Exploring Remaining Financial Features in the Dataset
    7. Summary
  3. Chapter 2: Introduction to Scikit-Learn and Model Evaluation
    1. Introduction
    2. Exploring the Response Variable and Concluding the Initial Exploration
    3. Introduction to Scikit-Learn
      1. Generating Synthetic Data
      2. Data for a Linear Regression
      3. Exercise 8: Linear Regression in Scikit-Learn
    4. Model Performance Metrics for Binary Classification
      1. Splitting the Data: Training and Testing Sets
      2. Classification Accuracy
      3. True Positive Rate, False Positive Rate, and Confusion Matrix
      4. Exercise 9: Calculating the True and False Positive and Negative Rates and Confusion Matrix in Python
      5. Discovering Predicted Probabilities: How Does Logistic Regression Make Predictions?
      6. Exercise 10: Obtaining Predicted Probabilities from a Trained Logistic Regression Model
      7. The Receiver Operating Characteristic (ROC) Curve
      8. Precision
      9. Activity 2: Performing Logistic Regression with a New Feature and Creating a Precision-Recall Curve
    5. Summary
  4. Chapter 3: Details of Logistic Regression and Feature Exploration
    1. Introduction
    2. Examining the Relationships between Features and the Response
      1. Pearson Correlation
      2. F-test
      3. Exercise 11: F-test and Univariate Feature Selection
      4. Finer Points of the F-test: Equivalence to t-test for Two Classes and Cautions
      5. Hypotheses and Next Steps
      6. Exercise 12: Visualizing the Relationship between Features and Response
    3. Univariate Feature Selection: What It Does and Doesn't Do
      1. Understanding Logistic Regression with Function Syntax in Python and the Sigmoid Function
      2. Exercise 13: Plotting the Sigmoid Function
      3. Scope of Functions
      4. Why Is Logistic Regression Considered a Linear Model?
      5. Exercise 14: Examining the Appropriateness of Features for Logistic Regression
      6. From Logistic Regression Coefficients to Predictions Using the Sigmoid
      7. Exercise 15: Linear Decision Boundary of Logistic Regression
      8. Activity 3: Fitting a Logistic Regression Model and Directly Using the Coefficients
    4. Summary
  5. Chapter 4: The Bias-Variance Trade-off
    1. Introduction
    2. Estimating the Coefficients and Intercepts of Logistic Regression
      1. Gradient Descent to Find Optimal Parameter Values
      2. Exercise 16: Using Gradient Descent to Minimize a Cost Function
      3. Assumptions of Logistic Regression
      4. The Motivation for Regularization: The Bias-Variance Trade-off
      5. Exercise 17: Generating and Modeling Synthetic Classification Data
      6. Lasso (L1) and Ridge (L2) Regularization
    3. Cross-Validation: Choosing the Regularization Parameter and Other Hyperparameters
      1. Exercise 18: Reducing Overfitting on the Synthetic Data Classification Problem
      2. Options for Logistic Regression in Scikit-Learn
      3. Scaling Data, Pipelines, and Interaction Features in Scikit-Learn
      4. Activity 4: Cross-Validation and Feature Engineering with the Case Study Data
    4. Summary
  6. Chapter 5: Decision Trees and Random Forests
    1. Introduction
    2. Decision Trees
      1. The Terminology of Decision Trees and Connections to Machine Learning
      2. Exercise 19: A Decision Tree in scikit-learn
      3. Training Decision Trees: Node Impurity
      4. Features Used for the First Splits: Connections to Univariate Feature Selection and Interactions
      5. Training Decision Trees: A Greedy Algorithm
      6. Training Decision Trees: Different Stopping Criteria
      7. Using Decision Trees: Advantages and Predicted Probabilities
      8. A More Convenient Approach to Cross-Validation
      9. Exercise 20: Finding Optimal Hyperparameters for a Decision Tree
    3. Random Forests: Ensembles of Decision Trees
      1. Random Forest: Predictions and Interpretability
      2. Exercise 21: Fitting a Random Forest
      3. Checkerboard Graph
      4. Activity 5: Cross-Validation Grid Search with Random Forest
    4. Summary
  7. Chapter 6: Imputation of Missing Data, Financial Analysis, and Delivery to Client
    1. Introduction
    2. Review of Modeling Results
    3. Dealing with Missing Data: Imputation Strategies
      1. Preparing Samples with Missing Data
      2. Exercise 22: Cleaning the Dataset
      3. Exercise 23: Mode and Random Imputation of PAY_1
      4. A Predictive Model for PAY_1
      5. Exercise 24: Building a Multiclass Classification Model for Imputation
      6. Using the Imputation Model and Comparing it to Other Methods
      7. Confirming Model Performance on the Unseen Test Set
      8. Financial Analysis
      9. Financial Conversation with the Client
      10. Exercise 25: Characterizing Costs and Savings
      11. Activity 6: Deriving Financial Insights
    4. Final Thoughts on Delivering the Predictive Model to the Client
    5. Summary
  8. Appendix
    1. Chapter 1: Data Exploration and Cleaning
      1. Activity 1: Exploring Remaining Financial Features in the Dataset
    2. Chapter 2: Introduction to Scikit-Learn and Model Evaluation
      1. Activity 2: Performing Logistic Regression with a New Feature and Creating a Precision-Recall Curve
    3. Chapter 3: Details of Logistic Regression and Feature Exploration
      1. Activity 3: Fitting a Logistic Regression Model and Directly Using the Coefficients
    4. Chapter 4: The Bias-Variance Trade-off
      1. Activity 4: Cross-Validation and Feature Engineering with the Case Study Data
    5. Chapter 5: Decision Trees and Random Forests
      1. Activity 5: Cross-Validation Grid Search with Random Forest
    6. Chapter 6: Imputation of Missing Data, Financial Analysis, and Delivery to Client
      1. Activity 6: Deriving Financial Insights

Product information

  • Title: Data Science Projects with Python
  • Author(s): Stephen Klosterman
  • Release date: April 2019
  • Publisher(s): Packt Publishing
  • ISBN: 9781838551025