Statistical Reinforcement Learning

Book description

Reinforcement learning (RL) is a mathematical framework for developing computer agents that can learn optimal behavior by relating generic reward signals to their past actions. With numerous successful applications in business intelligence, plant control, and gaming, the RL framework is ideal for decision making in unknown environments with large amounts of data.
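
As a point of reference (this is the standard formulation of the RL objective, not notation quoted from the book), the agent's goal in a Markov decision process can be written as finding a policy that maximizes the expected discounted sum of rewards:

    \[
      \pi^{*} \;=\; \arg\max_{\pi}\;
      \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big],
      \qquad 0 \le \gamma < 1,
    \]

where s_t and a_t denote the state and action at time t, r is the reward function, and the discount factor gamma trades off immediate against future rewards.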

Supplying an up-to-date and accessible introduction to the field, Statistical Reinforcement Learning: Modern Machine Learning Approaches presents fundamental concepts and practical algorithms of statistical reinforcement learning from the modern machine learning viewpoint. It covers a range of RL approaches, including model-based and model-free methods, policy iteration, and policy search.
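
For readers who want a concrete picture of the policy iteration family mentioned above, the following is a minimal, self-contained sketch of tabular policy iteration in Python. It is a generic illustration rather than an algorithm taken from the book, and the transition tensor P, reward matrix R, and discount factor gamma are hypothetical placeholders.

    # Minimal tabular policy iteration on a generic finite MDP (illustrative sketch,
    # not code from the book). P[s, a, s'] holds transition probabilities and
    # R[s, a] holds expected immediate rewards; both are assumed to be given.
    import numpy as np

    def policy_iteration(P, R, gamma=0.95):
        n_states, n_actions, _ = P.shape
        policy = np.zeros(n_states, dtype=int)        # start from an arbitrary policy
        while True:
            # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
            P_pi = P[np.arange(n_states), policy]     # (n_states, n_states)
            R_pi = R[np.arange(n_states), policy]     # (n_states,)
            V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
            # Policy improvement: act greedily with respect to the action values.
            Q = R + gamma * np.einsum("sat,t->sa", P, V)
            new_policy = Q.argmax(axis=1)
            if np.array_equal(new_policy, policy):
                return policy, V                      # greedy policy is stable: done
            policy = new_policy

Model-free methods such as those in Parts II and III estimate the required value functions or policy gradients from sampled trajectories instead of assuming that P and R are known.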

  • Covers the range of reinforcement learning algorithms from a modern perspective
  • Lays out the associated optimization problems for each reinforcement learning scenario covered
  • Provides thought-provoking statistical treatment of reinforcement learning algorithms

The book covers approaches recently introduced in the data mining and machine learning fields to provide a systematic bridge between the RL and data mining/machine learning communities. It presents state-of-the-art results, including dimensionality reduction in RL and risk-sensitive RL. Numerous illustrative examples are included to help readers understand the intuition and usefulness of reinforcement learning techniques.

This book is an ideal resource for graduate-level students in computer science and applied statistics programs, as well as researchers and engineers in related fields.

Table of contents

  1. Cover
  2. Contents
  3. Foreword
  4. Preface
  5. Author
  6. Part I: Introduction
    1. Chapter 1: Introduction to Reinforcement Learning
  7. Part II: Model-Free Policy Iteration
    1. Chapter 2: Policy Iteration with Value Function Approximation
    2. Chapter 3: Basis Design for Value Function Approximation
    3. Chapter 4: Sample Reuse in Policy Iteration
    4. Chapter 5: Active Learning in Policy Iteration
    5. Chapter 6: Robust Policy Iteration
  8. Part III: Model-Free Policy Search
    1. Chapter 7: Direct Policy Search by Gradient Ascent
    2. Chapter 8: Direct Policy Search by Expectation-Maximization
    3. Chapter 9: Policy-Prior Search
  9. Part IV: Model-Based Reinforcement Learning
    1. Chapter 10: Transition Model Estimation
    2. Chapter 11: Dimensionality Reduction for Transition Model Estimation
  10. References

Product information

  • Title: Statistical Reinforcement Learning
  • Author(s): Masashi Sugiyama
  • Release date: March 2015
  • Publisher(s): CRC Press
  • ISBN: 9781439856901