Practical Statistics for Data Scientists, 2nd Edition

Book Description

Statistical methods are a key part of data science, yet few data scientists have formal statistical training. Courses and books on basic statistics rarely cover the topic from a data science perspective. The second edition of this popular guide adds comprehensive examples in Python, provides practical guidance on applying statistical methods to data science, tells you how to avoid their misuse, and gives you advice on what’s important and what’s not.

Many data science resources incorporate statistical methods but lack a deeper statistical perspective. If you’re familiar with the R or Python programming languages and have some exposure to statistics, this quick reference bridges the gap in an accessible, readable format.

With this book, you’ll learn:

  • Why exploratory data analysis is a key preliminary step in data science
  • How random sampling can reduce bias and yield a higher-quality dataset, even with big data
  • How the principles of experimental design yield definitive answers to questions
  • How to use regression to estimate outcomes and detect anomalies
  • Key classification techniques for predicting which category a record belongs to
  • Statistical machine learning methods that “learn” from data
  • Unsupervised learning methods for extracting meaning from unlabeled data

Table of Contents

  1. Preface
    1. Conventions Used in This Book
    2. Using Code Examples
    3. O’Reilly Online Learning
    4. How to Contact Us
    5. Acknowledgments
  2. 1. Exploratory Data Analysis
    1. Elements of Structured Data
      1. Further Reading
    2. Rectangular Data
      1. Data Frames and Indexes
      2. Nonrectangular Data Structures
      3. Further Reading
    3. Estimates of Location
      1. Mean
      2. Median and Robust Estimates
      3. Example: Location Estimates of Population and Murder Rates
      4. Further Reading
    4. Estimates of Variability
      1. Standard Deviation and Related Estimates
      2. Estimates Based on Percentiles
      3. Example: Variability Estimates of State Population
      4. Further Reading
    5. Exploring the Data Distribution
      1. Percentiles and Boxplots
      2. Frequency Tables and Histograms
      3. Density Plots and Estimates
      4. Further Reading
    6. Exploring Binary and Categorical Data
      1. Mode
      2. Expected Value
      3. Probability
      4. Further Reading
    7. Correlation
      1. Scatterplots
      2. Further Reading
    8. Exploring Two or More Variables
      1. Hexagonal Binning and Contours (Plotting Numeric Versus Numeric Data)
      2. Two Categorical Variables
      3. Categorical and Numeric Data
      4. Visualizing Multiple Variables
      5. Further Reading
    9. Summary
  3. 2. Data and Sampling Distributions
    1. Random Sampling and Sample Bias
      1. Bias
      2. Random Selection
      3. Size Versus Quality: When Does Size Matter?
      4. Sample Mean Versus Population Mean
      5. Further Reading
    2. Selection Bias
      1. Regression to the Mean
      2. Further Reading
    3. Sampling Distribution of a Statistic
      1. Central Limit Theorem
      2. Standard Error
      3. Further Reading
    4. The Bootstrap
      1. Resampling Versus Bootstrapping
      2. Further Reading
    5. Confidence Intervals
      1. Further Reading
    6. Normal Distribution
      1. Standard Normal and QQ-Plots
    7. Long-Tailed Distributions
      1. Further Reading
    8. Student’s t-Distribution
      1. Further Reading
    9. Binomial Distribution
      1. Further Reading
    10. Chi-Square Distribution
      1. Further Reading
    11. F-Distribution
      1. Further Reading
    12. Poisson and Related Distributions
      1. Poisson Distributions
      2. Exponential Distribution
      3. Estimating the Failure Rate
      4. Weibull Distribution
      5. Further Reading
    13. Summary
  4. 3. Statistical Experiments and Significance Testing
    1. A/B Testing
      1. Why Have a Control Group?
      2. Why Just A/B? Why Not C, D,…?
      3. Further Reading
    2. Hypothesis Tests
      1. The Null Hypothesis
      2. Alternative Hypothesis
      3. One-Way Versus Two-Way Hypothesis Tests
      4. Further Reading
    3. Resampling
      1. Permutation Test
      2. Example: Web Stickiness
      3. Exhaustive and Bootstrap Permutation Tests
      4. Permutation Tests: The Bottom Line for Data Science
      5. Further Reading
    4. Statistical Significance and p-Values
      1. p-Value
      2. Alpha
      3. Type 1 and Type 2 Errors
      4. Data Science and p-Values
      5. Further Reading
    5. t-Tests
      1. Further Reading
    6. Multiple Testing
      1. Further Reading
    7. Degrees of Freedom
      1. Further Reading
    8. ANOVA
      1. F-Statistic
      2. Two-Way ANOVA
      3. Further Reading
    9. Chi-Square Test
      1. Chi-Square Test: A Resampling Approach
      2. Chi-Square Test: Statistical Theory
      3. Fisher’s Exact Test
      4. Relevance for Data Science
      5. Further Reading
    10. Multi-Arm Bandit Algorithm
      1. Further Reading
    11. Power and Sample Size
      1. Sample Size
      2. Further Reading
    12. Summary
  5. 4. Regression and Prediction
    1. Simple Linear Regression
      1. The Regression Equation
      2. Fitted Values and Residuals
      3. Least Squares
      4. Prediction Versus Explanation (Profiling)
      5. Further Reading
    2. Multiple Linear Regression
      1. Example: King County Housing Data
      2. Assessing the Model
      3. Cross-Validation
      4. Model Selection and Stepwise Regression
      5. Weighted Regression
      6. Further Reading
    3. Prediction Using Regression
      1. The Dangers of Extrapolation
      2. Confidence and Prediction Intervals
    4. Factor Variables in Regression
      1. Dummy Variables Representation
      2. Factor Variables with Many Levels
      3. Ordered Factor Variables
    5. Interpreting the Regression Equation
      1. Correlated Predictors
      2. Multicollinearity
      3. Confounding Variables
      4. Interactions and Main Effects
    6. Regression Diagnostics
      1. Outliers
      2. Influential Values
      3. Heteroskedasticity, Non-Normality, and Correlated Errors
      4. Partial Residual Plots and Nonlinearity
    7. Polynomial and Spline Regression
      1. Polynomial
      2. Splines
      3. Generalized Additive Models
      4. Further Reading
    8. Summary
  6. 5. Classification
    1. Naive Bayes
      1. Why Exact Bayesian Classification Is Impractical
      2. The Naive Solution
      3. Numeric Predictor Variables
      4. Further Reading
    2. Discriminant Analysis
      1. Covariance Matrix
      2. Fisher’s Linear Discriminant
      3. A Simple Example
      4. Further Reading
    3. Logistic Regression
      1. Logistic Response Function and Logit
      2. Logistic Regression and the GLM
      3. Generalized Linear Models
      4. Predicted Values from Logistic Regression
      5. Interpreting the Coefficients and Odds Ratios
      6. Linear and Logistic Regression: Similarities and Differences
      7. Assessing the Model
      8. Further Reading
    4. Evaluating Classification Models
      1. Confusion Matrix
      2. The Rare Class Problem
      3. Precision, Recall, and Specificity
      4. ROC Curve
      5. AUC
      6. Lift
      7. Further Reading
    5. Strategies for Imbalanced Data
      1. Undersampling
      2. Oversampling and Up/Down Weighting
      3. Data Generation
      4. Cost-Based Classification
      5. Exploring the Predictions
      6. Further Reading
    6. Summary
  7. 6. Statistical Machine Learning
    1. K-Nearest Neighbors
      1. A Small Example: Predicting Loan Default
      2. Distance Metrics
      3. One Hot Encoder
      4. Standardization (Normalization, z-Scores)
      5. Choosing K
      6. KNN as a Feature Engine
    2. Tree Models
      1. A Simple Example
      2. The Recursive Partitioning Algorithm
      3. Measuring Homogeneity or Impurity
      4. Stopping the Tree from Growing
      5. Predicting a Continuous Value
      6. How Trees Are Used
      7. Further Reading
    3. Bagging and the Random Forest
      1. Bagging
      2. Random Forest
      3. Variable Importance
      4. Hyperparameters
    4. Boosting
      1. The Boosting Algorithm
      2. XGBoost
      3. Regularization: Avoiding Overfitting
      4. Hyperparameters and Cross-Validation
    5. Summary
  8. 7. Unsupervised Learning
    1. Principal Components Analysis
      1. A Simple Example
      2. Computing the Principal Components
      3. Interpreting Principal Components
      4. Correspondence Analysis
      5. Further Reading
    2. K-Means Clustering
      1. A Simple Example
      2. K-Means Algorithm
      3. Interpreting the Clusters
      4. Selecting the Number of Clusters
    3. Hierarchical Clustering
      1. A Simple Example
      2. The Dendrogram
      3. The Agglomerative Algorithm
      4. Measures of Dissimilarity
    4. Model-Based Clustering
      1. Multivariate Normal Distribution
      2. Mixtures of Normals
      3. Selecting the Number of Clusters
      4. Further Reading
    5. Scaling and Categorical Variables
      1. Scaling the Variables
      2. Dominant Variables
      3. Categorical Data and Gower’s Distance
      4. Problems with Clustering Mixed Data
    6. Summary
  9. Bibliography
  10. Index

Product Information

  • Title: Practical Statistics for Data Scientists, 2nd Edition
  • Author(s): Peter Bruce, Andrew Bruce, Peter Gedeck
  • Release date: May 2020
  • Publisher(s): O’Reilly Media, Inc.
  • ISBN: 9781492072942