Modern Time Series Forecasting with Python

Book description

Build real-world time series forecasting systems that scale to millions of time series by applying modern machine learning and deep learning concepts

Key Features

  • Explore industry-tested machine learning techniques used to forecast millions of time series
  • Get started with the revolutionary paradigm of global forecasting models
  • Get to grips with new concepts by applying them to a real-world energy forecasting dataset

Book Description

We live in a serendipitous era where the explosion in the quantum of data collected and a renewed interest in data-driven techniques such as machine learning (ML) have changed the landscape of analytics, and with it, time series forecasting. This book, filled with industry-tested tips and tricks, takes you beyond commonly used classical statistical methods such as ARIMA and introduces you to the latest techniques from the world of ML.

This is a comprehensive guide to analyzing, visualizing, and creating state-of-the-art forecasting systems, covering common topics such as ML and deep learning (DL) as well as rarely touched-upon topics such as global forecasting models, cross-validation strategies, and forecast metrics. You'll begin by exploring the basics of data handling, data visualization, and classical statistical methods before moving on to ML and DL models for time series forecasting. This book takes you on a hands-on journey in which you'll develop state-of-the-art ML models (from linear regression to gradient-boosted trees) and DL models (feed-forward neural networks, LSTMs, and transformers) on a real-world dataset, while exploring practical topics such as interpretability.

By the end of this book, you'll be able to build world-class time series forecasting systems and tackle problems in the real world.

What you will learn

  • Find out how to manipulate and visualize time series data like a pro
  • Set strong baselines with popular models such as ARIMA
  • Discover how time series forecasting can be cast as regression
  • Engineer features for machine learning models for forecasting
  • Explore the exciting world of ensembling and stacking models
  • Get to grips with the global forecasting paradigm
  • Understand and apply state-of-the-art DL models such as N-BEATS and Autoformer
  • Explore multi-step forecasting and cross-validation strategies
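
The third bullet above — casting time series forecasting as regression — comes down to time delay embedding: turning past values into feature columns so any regressor can learn the mapping. Here is a minimal sketch with pandas; the function name `time_delay_embed` and the toy series are illustrative, not taken from the book:

```python
import pandas as pd

def time_delay_embed(series: pd.Series, n_lags: int) -> pd.DataFrame:
    """Convert a univariate series into a supervised regression table:
    each row holds the previous `n_lags` values as features and the
    current value as the target. Rows with incomplete history are dropped."""
    frame = {f"lag_{i}": series.shift(i) for i in range(1, n_lags + 1)}
    frame["target"] = series
    return pd.DataFrame(frame).dropna()

# Toy energy-consumption series (hypothetical values)
y = pd.Series([10.0, 12.0, 11.0, 13.0, 14.0], name="energy_kwh")
table = time_delay_embed(y, n_lags=2)
# `table` now has columns lag_1, lag_2, target and can be fed to any
# regressor, from a linear model to gradient-boosted trees.
print(table)
```

The same idea, extended with rolling-window aggregations and calendar features, is what the feature-engineering chapters build on.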

Who this book is for

The book is for data scientists, data analysts, machine learning engineers, and Python developers who want to build industry-ready time series models. Since the book explains most concepts from the ground up, basic proficiency in Python is all you need. Prior understanding of machine learning or forecasting will help speed up your learning. For experienced machine learning and forecasting practitioners, this book offers advanced techniques and a tour of the latest research frontiers in time series forecasting.

Table of contents

  1. Modern Time Series Forecasting with Python
  2. Contributors
  3. About the author
  4. About the reviewers
  5. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Setting up an environment
      2. Download the data
    4. Download the example code files
    5. Download the color images
    6. Conventions used
    7. Get in touch
    8. Share Your Thoughts
    9. Download a free PDF copy of this book
  6. Part 1 – Getting Familiar with Time Series
  7. Chapter 1: Introducing Time Series
    1. Technical requirements
    2. What is a time series?
      1. Types of time series
      2. Main areas of application for time series analysis
    3. Data-generating process (DGP)
      1. Generating synthetic time series
      2. Stationary and non-stationary time series
    4. What can we forecast?
    5. Forecasting terminology
    6. Summary
    7. Further reading
  8. Chapter 2: Acquiring and Processing Time Series Data
    1. Technical requirements
    2. Understanding the time series dataset
      1. Preparing a data model
    3. pandas datetime operations, indexing, and slicing – a refresher
      1. Converting the date columns into pd.Timestamp/DatetimeIndex
      2. Using the .dt accessor and datetime properties
      3. Slicing and indexing
      4. Creating date sequences and managing date offsets
    4. Handling missing data
      1. Converting the half-hourly block-level data (hhblock) into time series data
      2. Compact, expanded, and wide forms of data
      3. Enforcing regular intervals in time series
      4. Converting the London Smart Meters dataset into a time series format
    5. Mapping additional information
    6. Saving and loading files to disk
    7. Handling longer periods of missing data
      1. Imputing with the previous day
      2. Hourly average profile
      3. The hourly average for each weekday
      4. Seasonal interpolation
    8. Summary
  9. Chapter 3: Analyzing and Visualizing Time Series Data
    1. Technical requirements
    2. Components of a time series
      1. The trend component
      2. The seasonal component
      3. The cyclical component
      4. The irregular component
    3. Visualizing time series data
      1. Line charts
      2. Seasonal plots
      3. Seasonal box plots
      4. Calendar heatmaps
      5. Autocorrelation plot
    4. Decomposing a time series
      1. Detrending
      2. Deseasonalizing
      3. Implementations
    5. Detecting and treating outliers
      1. Standard deviation
      2. Interquartile range (IQR)
      3. Isolation Forest
      4. Extreme studentized deviate (ESD) and seasonal ESD (S-ESD)
      5. Treating outliers
    6. Summary
    7. References
    8. Further reading
  10. Chapter 4: Setting a Strong Baseline Forecast
    1. Technical requirements
    2. Setting up a test harness
      1. Creating holdout (test) and validation datasets
      2. Choosing an evaluation metric
    3. Generating strong baseline forecasts
      1. Naïve forecast
      2. Moving average forecast
      3. Seasonal naïve forecast
      4. Exponential smoothing (ETS)
      5. ARIMA
      6. Theta Forecast
      7. Fast Fourier Transform forecast
      8. Evaluating the baseline forecasts
    4. Assessing the forecastability of a time series
      1. Coefficient of Variation (CoV)
      2. Residual variability (RV)
      3. Entropy-based measures
      4. Kaboudan metric
    5. Summary
    6. References
    7. Further reading
  11. Part 2 – Machine Learning for Time Series
  12. Chapter 5: Time Series Forecasting as Regression
    1. Understanding the basics of machine learning
      1. Supervised machine learning tasks
      2. Overfitting and underfitting
      3. Hyperparameters and validation sets
    2. Time series forecasting as regression
      1. Time delay embedding
      2. Temporal embedding
    3. Global forecasting models – a paradigm shift
    4. Summary
    5. References
    6. Further reading
  13. Chapter 6: Feature Engineering for Time Series Forecasting
    1. Technical requirements
    2. Feature engineering
    3. Avoiding data leakage
    4. Setting a forecast horizon
    5. Time delay embedding
      1. Lags or backshift
      2. Rolling window aggregations
      3. Seasonal rolling window aggregations
      4. Exponentially weighted moving averages (EWMA)
    6. Temporal embedding
      1. Calendar features
      2. Time elapsed
      3. Fourier terms
    7. Summary
  14. Chapter 7: Target Transformations for Time Series Forecasting
    1. Technical requirements
    2. Handling non-stationarity in time series
    3. Detecting and correcting for unit roots
      1. Unit roots
      2. The Augmented Dickey-Fuller (ADF) test
      3. Differencing transform
    4. Detecting and correcting for trends
      1. Deterministic and stochastic trends
      2. Kendall’s Tau
      3. Mann-Kendall test (M-K test)
      4. Detrending transform
    5. Detecting and correcting for seasonality
      1. Detecting seasonality
      2. Deseasonalizing transform
    6. Detecting and correcting for heteroscedasticity
      1. Detecting heteroscedasticity
      2. Log transform
      3. Box-Cox transform
    7. AutoML approach to target transformation
    8. Summary
    9. References
    10. Further reading
  15. Chapter 8: Forecasting Time Series with Machine Learning Models
    1. Technical requirements
    2. Training and predicting with machine learning models
    3. Generating single-step forecast baselines
    4. Standardized code to train and evaluate machine learning models
      1. FeatureConfig
      2. MissingValueConfig
      3. ModelConfig
      4. MLForecast
      5. Helper functions for evaluating models
      6. Linear regression
      7. Regularized linear regression
      8. Decision trees
      9. Random forest
      10. Gradient boosting decision trees
    5. Training and predicting for multiple households
      1. Using AutoStationaryTransformer
    6. Summary
    7. References
    8. Further reading
  16. Chapter 9: Ensembling and Stacking
    1. Technical requirements
    2. Combining forecasts
      1. Best fit
      2. Measures of central tendency
      3. Simple hill climbing
      4. Stochastic hill climbing
      5. Simulated annealing
      6. Optimal weighted ensemble
    3. Stacking or blending
    4. Summary
    5. References
    6. Further reading
  17. Chapter 10: Global Forecasting Models
    1. Technical requirements
    2. Why Global Forecasting Models (GFMs)?
      1. Sample size
      2. Cross-learning
      3. Multi-task learning
      4. Engineering complexity
    3. Creating GFMs
    4. Strategies to improve GFMs
      1. Increasing memory
      2. Using time series meta-features
      3. Tuning hyperparameters
      4. Partitioning
    5. Bonus – interpretability
    6. Summary
    7. References
    8. Further reading
  18. Part 3 – Deep Learning for Time Series
  19. Chapter 11: Introduction to Deep Learning
    1. Technical requirements
    2. What is deep learning and why now?
      1. Why now?
      2. What is deep learning?
      3. Perceptron – the first neural network
    3. Components of a deep learning system
      1. Representation learning
      2. Linear transformation
      3. Activation functions
      4. Output activation functions
      5. Loss function
      6. Forward and backward propagation
    4. Summary
    5. References
    6. Further reading
  20. Chapter 12: Building Blocks of Deep Learning for Time Series
    1. Technical requirements
    2. Understanding the encoder-decoder paradigm
    3. Feed-forward networks
    4. Recurrent neural networks
      1. The RNN layer in PyTorch
    5. Long short-term memory (LSTM) networks
      1. The LSTM layer in PyTorch
    6. Gated recurrent unit (GRU)
      1. The GRU layer in PyTorch
    7. Convolution networks
      1. Convolution
      2. Padding, stride, and dilations
      3. The convolution layer in PyTorch
    8. Summary
    9. References
    10. Further reading
  21. Chapter 13: Common Modeling Patterns for Time Series
    1. Technical requirements
    2. Tabular regression
    3. Single-step-ahead recurrent neural networks
    4. Sequence-to-sequence (Seq2Seq) models
      1. RNN-to-fully connected network
      2. RNN-to-RNN
    5. Summary
    6. Reference
    7. Further reading
  22. Chapter 14: Attention and Transformers for Time Series
    1. Technical requirements
    2. What is attention?
    3. The generalized attention model
      1. Alignment functions
      2. The distribution function
    4. Forecasting with sequence-to-sequence models and attention
    5. Transformers – Attention is all you need
      1. Attention is all you need
      2. Transformers in time series
    6. Forecasting with Transformers
    7. Summary
    8. References
    9. Further reading
  23. Chapter 15: Strategies for Global Deep Learning Forecasting Models
    1. Technical requirements
    2. Creating global deep learning forecasting models
      1. Preprocessing the data
      2. Understanding TimeSeriesDataset from PyTorch Forecasting
      3. Building the first global deep learning forecasting model
    3. Using time-varying information
    4. Using static/meta information
      1. One-hot encoding and why it is not ideal
      2. Embedding vectors and dense representations
      3. Defining a model with categorical features
    5. Using the scale of the time series
    6. Balancing the sampling procedure
      1. Visualizing the data distribution
      2. Tweaking the sampling procedure
      3. Using and visualizing the dataloader with WeightedRandomSampler
    7. Summary
    8. Further reading
  24. Chapter 16: Specialized Deep Learning Architectures for Forecasting
    1. Technical requirements
    2. The need for specialized architectures
    3. Neural Basis Expansion Analysis for Interpretable Time Series Forecasting (N-BEATS)
      1. The architecture of N-BEATS
      2. Forecasting with N-BEATS
      3. Interpreting N-BEATS forecasting
    4. Neural Basis Expansion Analysis for Interpretable Time Series Forecasting with Exogenous Variables (N-BEATSx)
      1. Handling exogenous variables
      2. Exogenous blocks
    5. Neural Hierarchical Interpolation for Time Series Forecasting (N-HiTS)
      1. The architecture of N-HiTS
      2. Forecasting with N-HiTS
    6. Informer
      1. The architecture of the Informer model
      2. Forecasting with the Informer model
    7. Autoformer
      1. The architecture of the Autoformer model
      2. Forecasting with Autoformer
    8. Temporal Fusion Transformer (TFT)
      1. The architecture of TFT
      2. Forecasting with TFT
      3. Interpreting TFT
    9. Interpretability
    10. Probabilistic forecasting
      1. Probability Density Function (PDF)
      2. Quantile functions
      3. Other approaches
    11. Summary
    12. References
    13. Further reading
  25. Part 4 – Mechanics of Forecasting
  26. Chapter 17: Multi-Step Forecasting
    1. Why multi-step forecasting?
    2. Recursive strategy
      1. Training regime
      2. Forecasting regime
    3. Direct strategy
      1. Training regime
      2. Forecasting regime
    4. Joint strategy
      1. Training regime
      2. Forecasting regime
    5. Hybrid strategies
      1. DirRec strategy
      2. Iterative block-wise direct strategy
      3. Rectify strategy
      4. RecJoint
    6. How to choose a multi-step forecasting strategy?
    7. Summary
    8. References
  27. Chapter 18: Evaluating Forecasts – Forecast Metrics
    1. Technical requirements
    2. Taxonomy of forecast error measures
      1. Intrinsic metrics
      2. Extrinsic metrics
    3. Investigating the error measures
      1. Loss curves and complementarity
      2. Bias towards over- or under-forecasting
    4. Experimental study of the error measures
      1. Using Spearman’s rank correlation
    5. Guidelines for choosing a metric
    6. Summary
    7. References
    8. Further reading
  28. Chapter 19: Evaluating Forecasts – Validation Strategies
    1. Technical requirements
    2. Model validation
    3. Holdout strategies
      1. Window strategy
      2. Calibration strategy
      3. Sampling strategy
    4. Cross-validation strategies
    5. Choosing a validation strategy
    6. Validation strategies for datasets with multiple time series
    7. Summary
    8. References
    9. Further reading
  29. Index
    1. Why subscribe?
  30. Other Books You May Enjoy
    1. Packt is searching for authors like you
    2. Share Your Thoughts
    3. Download a free PDF copy of this book

Product information

  • Title: Modern Time Series Forecasting with Python
  • Author(s): Manu Joseph
  • Release date: November 2022
  • Publisher(s): Packt Publishing
  • ISBN: 9781803246802