The Kaggle Book

Book description

Get a step ahead of your competitors with insights from over 30 Kaggle Masters and Grandmasters. Discover tips, tricks, and best practices for competing effectively on Kaggle and becoming a better data scientist. Purchase of the print or Kindle book includes a free eBook in PDF format.

Key Features

  • Learn how Kaggle works and how to make the most of competitions from over 30 expert Kagglers
  • Sharpen your modeling skills with ensembling, feature engineering, adversarial validation, and AutoML
  • Explore a concise collection of smart data handling techniques for modeling and parameter tuning

Book Description

Millions of data enthusiasts from around the world compete on Kaggle, the most famous data science competition platform of them all. Participating in Kaggle competitions is a surefire way to improve your data analysis skills, network with an amazing community of data scientists, and gain valuable experience to help grow your career.

The first book of its kind, The Kaggle Book assembles in one place the techniques and skills you’ll need for success in competitions, data science projects, and beyond. Two Kaggle Grandmasters walk you through modeling strategies you won’t easily find elsewhere and share the knowledge they’ve accumulated along the way. As well as Kaggle-specific tips, you’ll learn more general techniques for approaching tasks based on image, tabular, and textual data, as well as reinforcement learning. You’ll design better validation schemes and work more comfortably with different evaluation metrics.

Whether you want to climb the ranks of Kaggle, broaden your data science skills, or improve the accuracy of your existing models, this book is for you.

Plus, join our Discord Community to learn along with more than 1,000 members and meet like-minded people!

What you will learn

  • Get acquainted with Kaggle as a competition platform
  • Make the most of Kaggle Notebooks, Datasets, and Discussion forums
  • Create a portfolio of projects and ideas to get further in your career
  • Design k-fold and probabilistic validation schemes
  • Get to grips with common and never-before-seen evaluation metrics
  • Understand binary and multi-class classification and object detection
  • Approach NLP and time series tasks more effectively
  • Handle simulation and optimization competitions on Kaggle

Who this book is for

This book is suitable for everyone from Kaggle newcomers to veteran users. Data analysts and scientists who want to do better in Kaggle competitions and secure jobs with tech giants will find this book useful. A basic understanding of machine learning concepts will help you make the most of it.

Table of contents

  1. Preface
  2. Part I: Introduction to Competitions
  3. Introducing Kaggle and Other Data Science Competitions
    1. The rise of data science competition platforms
      1. The Kaggle competition platform
        1. A history of Kaggle
      2. Other competition platforms
    2. Introducing Kaggle
      1. Stages of a competition
      2. Types of competitions and examples
      3. Submission and leaderboard dynamics
        1. Explaining the Common Task Framework paradigm
        2. Understanding what can go wrong in a competition
      4. Computational resources
        1. Kaggle Notebooks
      5. Teaming and networking
      6. Performance tiers and rankings
      7. Criticism and opportunities
    3. Summary
  4. Organizing Data with Datasets
    1. Setting up a dataset
    2. Gathering the data
    3. Working with datasets
    4. Using Kaggle Datasets in Google Colab
    5. Legal caveats
    6. Summary
  5. Working and Learning with Kaggle Notebooks
    1. Setting up a Notebook
    2. Running your Notebook
    3. Saving Notebooks to GitHub
    4. Getting the most out of Notebooks
      1. Upgrading to Google Cloud Platform (GCP)
      2. One step beyond
    5. Kaggle Learn courses
    6. Summary
  6. Leveraging Discussion Forums
    1. How forums work
    2. Example discussion approaches
    3. Netiquette
    4. Summary
  7. Part II: Sharpening Your Skills for Competitions
  8. Competition Tasks and Metrics
    1. Evaluation metrics and objective functions
    2. Basic types of tasks
      1. Regression
      2. Classification
      3. Ordinal
    3. The Meta Kaggle dataset
    4. Handling never-before-seen metrics
    5. Metrics for regression (standard and ordinal)
      1. Mean squared error (MSE) and R squared
      2. Root mean squared error (RMSE)
      3. Root mean squared log error (RMSLE)
      4. Mean absolute error (MAE)
    6. Metrics for classification (label prediction and probability)
      1. Accuracy
      2. Precision and recall
      3. The F1 score
      4. Log loss and ROC-AUC
      5. Matthews correlation coefficient (MCC)
    7. Metrics for multi-class classification
    8. Metrics for object detection problems
      1. Intersection over union (IoU)
      2. Dice
    9. Metrics for multi-label classification and recommendation problems
      1. MAP@{K}
    10. Optimizing evaluation metrics
      1. Custom metrics and custom objective functions
      2. Post-processing your predictions
        1. Predicted probability and its adjustment
    11. Summary
  9. Designing Good Validation
    1. Snooping on the leaderboard
    2. The importance of validation in competitions
      1. Bias and variance
    3. Trying different splitting strategies
      1. The basic train-test split
      2. Probabilistic evaluation methods
        1. k-fold cross-validation
        2. Subsampling
        3. The bootstrap
    4. Tuning your model validation system
    5. Using adversarial validation
      1. Example implementation
      2. Handling different distributions of training and test data
    6. Handling leakage
    7. Summary
  10. Modeling for Tabular Competitions
    1. The Tabular Playground Series
    2. Setting a random state for reproducibility
    3. The importance of EDA
      1. Dimensionality reduction with t-SNE and UMAP
    4. Reducing the size of your data
    5. Applying feature engineering
      1. Easily derived features
      2. Meta-features based on rows and columns
      3. Target encoding
      4. Using feature importance to evaluate your work
    6. Pseudo-labeling
    7. Denoising with autoencoders
    8. Neural networks for tabular competitions
    9. Summary
  11. Hyperparameter Optimization
    1. Basic optimization techniques
      1. Grid search
      2. Random search
      3. Halving search
    2. Key parameters and how to use them
      1. Linear models
      2. Support-vector machines
      3. Random forests and extremely randomized trees
      4. Gradient tree boosting
        1. LightGBM
        2. XGBoost
        3. CatBoost
        4. HistGradientBoosting
    3. Bayesian optimization
      1. Using Scikit-optimize
      2. Customizing a Bayesian optimization search
      3. Extending Bayesian optimization to neural architecture search
      4. Creating lighter and faster models with KerasTuner
      5. The TPE approach in Optuna
    4. Summary
  12. Ensembling with Blending and Stacking Solutions
    1. A brief introduction to ensemble algorithms
    2. Averaging models into an ensemble
      1. Majority voting
      2. Averaging of model predictions
      3. Weighted averages
      4. Averaging in your cross-validation strategy
      5. Correcting averaging for ROC-AUC evaluations
    3. Blending models using a meta-model
      1. Best practices for blending
    4. Stacking models together
      1. Stacking variations
    5. Creating complex stacking and blending solutions
    6. Summary
  13. Modeling for Computer Vision
    1. Augmentation strategies
      1. Keras built-in augmentations
        1. ImageDataGenerator approach
        2. Preprocessing layers
      2. albumentations
    2. Classification
    3. Object detection
    4. Semantic segmentation
    5. Summary
  14. Modeling for NLP
    1. Sentiment analysis
    2. Open domain Q&A
    3. Text augmentation strategies
      1. Basic techniques
      2. nlpaug
    4. Summary
  15. Simulation and Optimization Competitions
    1. Connect X
    2. Rock-paper-scissors
    3. Santa competition 2020
    4. The name of the game
    5. Summary
  16. Part III: Leveraging Competitions for Your Career
  17. Creating Your Portfolio of Projects and Ideas
    1. Building your portfolio with Kaggle
      1. Leveraging Notebooks and discussions
      2. Leveraging Datasets
    2. Arranging your online presence beyond Kaggle
      1. Blogs and publications
      2. GitHub
    3. Monitoring competition updates and newsletters
    4. Summary
  18. Finding New Professional Opportunities
    1. Building connections with other competition data scientists
    2. Participating in Kaggle Days and other Kaggle meetups
    3. Getting spotted and other job opportunities
      1. The STAR approach
    4. Summary (and some parting words)
  19. Other Books You May Enjoy
  20. Index

Product information

  • Title: The Kaggle Book
  • Author(s): Konrad Banachewicz, Luca Massaron
  • Release date: April 2022
  • Publisher(s): Packt Publishing
  • ISBN: 9781801817479