Decision Theory

Book Description

Decision theory provides a formal framework for making logical choices in the face of uncertainty. Given a set of alternatives, a set of consequences, and a correspondence between those sets, decision theory offers conceptually simple procedures for choice. This book presents an overview of the fundamental concepts and outcomes of rational decision making under uncertainty, highlighting the implications for statistical practice.

The authors have developed a series of self-contained chapters that bridge the gaps between the different fields contributing to rational decision making, presenting ideas in a unified framework and notation while respecting and highlighting their different, and sometimes conflicting, perspectives.

This book:

  • Provides a rich collection of techniques and procedures.
  • Discusses both foundational aspects and modern-day practice.
  • Links foundations to practical applications in biostatistics, computer science, engineering and economics.
  • Presents different perspectives and controversies to encourage readers to form their own opinions about decision making and statistics.

Decision theory is fundamental to all scientific disciplines, including biostatistics, computer science, economics, and engineering. Anyone interested in the whys and wherefores of statistical science will find much to enjoy in this book.

Table of Contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Contents
  5. Preface
  6. Acknowledgments
  7. Chapter 1: Introduction
    1. 1.1 Controversies
    2. 1.2 A guided tour of decision theory
  8. Part One: Foundations
    1. Chapter 2: Coherence
      1. 2.1 The “Dutch Book” theorem
        1. 2.1.1 Betting odds
        2. 2.1.2 Coherence and the axioms of probability
        3. 2.1.3 Coherent conditional probabilities
        4. 2.1.4 The implications of Dutch Book theorems
      2. 2.2 Temporal coherence
      3. 2.3 Scoring rules and the axioms of probabilities
      4. 2.4 Exercises
    2. Chapter 3: Utility
      1. 3.1 St. Petersburg paradox
      2. 3.2 Expected utility theory and the theory of means
        1. 3.2.1 Utility and means
        2. 3.2.2 Associative means
        3. 3.2.3 Functional means
      3. 3.3 The expected utility principle
      4. 3.4 The von Neumann–Morgenstern representation theorem
        1. 3.4.1 Axioms
        2. 3.4.2 Representation of preferences via expected utility
      5. 3.5 Allais’ criticism
      6. 3.6 Extensions
      7. 3.7 Exercises
    3. Chapter 4: Utility in action
      1. 4.1 The “standard gamble”
      2. 4.2 Utility of money
        1. 4.2.1 Certainty equivalents
        2. 4.2.2 Risk aversion
        3. 4.2.3 A measure of risk aversion
      3. 4.3 Utility functions for medical decisions
        1. 4.3.1 Length and quality of life
        2. 4.3.2 Standard gamble for health states
        3. 4.3.3 The time trade-off methods
        4. 4.3.4 Relation between QALYs and utilities
        5. 4.3.5 Utilities for time in ill health
        6. 4.3.6 Difficulties in assessing utility
      4. 4.4 Exercises
    4. Chapter 5: Ramsey and Savage
      1. 5.1 Ramsey’s theory
      2. 5.2 Savage’s theory
        1. 5.2.1 Notation and overview
        2. 5.2.2 The sure thing principle
        3. 5.2.3 Conditional and a posteriori preferences
        4. 5.2.4 Subjective probability
        5. 5.2.5 Utility and expected utility
      3. 5.3 Allais revisited
      4. 5.4 Ellsberg paradox
      5. 5.5 Exercises
    5. Chapter 6: State independence
      1. 6.1 Horse lotteries
      2. 6.2 State-dependent utilities
      3. 6.3 State-independent utilities
      4. 6.4 Anscombe–Aumann representation theorem
      5. 6.5 Exercises
  9. Part Two: Statistical Decision Theory
    1. Chapter 7: Decision functions
      1. 7.1 Basic concepts
        1. 7.1.1 The loss function
        2. 7.1.2 Minimax
        3. 7.1.3 Expected utility principle
        4. 7.1.4 Illustrations
      2. 7.2 Data-based decisions
        1. 7.2.1 Risk
        2. 7.2.2 Optimality principles
        3. 7.2.3 Rationality principles and the Likelihood Principle
        4. 7.2.4 Nuisance parameters
      3. 7.3 The travel insurance example
      4. 7.4 Randomized decision rules
      5. 7.5 Classification and hypothesis tests
        1. 7.5.1 Hypothesis testing
        2. 7.5.2 Multiple hypothesis testing
        3. 7.5.3 Classification
      6. 7.6 Estimation
        1. 7.6.1 Point estimation
        2. 7.6.2 Interval inference
      7. 7.7 Minimax–Bayes connections
      8. 7.8 Exercises
    2. Chapter 8: Admissibility
      1. 8.1 Admissibility and completeness
      2. 8.2 Admissibility and minimax
      3. 8.3 Admissibility and Bayes
        1. 8.3.1 Proper Bayes rules
        2. 8.3.2 Generalized Bayes rules
      4. 8.4 Complete classes
        1. 8.4.1 Completeness and Bayes
        2. 8.4.2 Sufficiency and the Rao–Blackwell inequality
        3. 8.4.3 The Neyman–Pearson lemma
      5. 8.5 Using the same α level across studies with different sample sizes is inadmissible
      6. 8.6 Exercises
    3. Chapter 9: Shrinkage
      1. 9.1 The Stein effect
      2. 9.2 Geometric and empirical Bayes heuristics
        1. 9.2.1 Is x too big for θ?
        2. 9.2.2 Empirical Bayes shrinkage
      3. 9.3 General shrinkage functions
        1. 9.3.1 Unbiased estimation of the risk of x + g(x)
        2. 9.3.2 Bayes and minimax shrinkage
      4. 9.4 Shrinkage with different likelihood and losses
      5. 9.5 Exercises
    4. Chapter 10: Scoring rules
      1. 10.1 Betting and forecasting
      2. 10.2 Scoring rules
        1. 10.2.1 Definition
        2. 10.2.2 Proper scoring rules
        3. 10.2.3 The quadratic scoring rules
        4. 10.2.4 Scoring rules that are not proper
      3. 10.3 Local scoring rules
      4. 10.4 Calibration and refinement
        1. 10.4.1 The well-calibrated forecaster
        2. 10.4.2 Are Bayesians well calibrated?
      5. 10.5 Exercises
    5. Chapter 11: Choosing models
      1. 11.1 The “true model” perspective
        1. 11.1.1 Model probabilities
        2. 11.1.2 Model selection and Bayes factors
        3. 11.1.3 Model averaging for prediction and selection
      2. 11.2 Model elaborations
      3. 11.3 Exercises
  10. Part Three: Optimal Design
    1. Chapter 12: Dynamic programming
      1. 12.1 History
      2. 12.2 The travel insurance example revisited
      3. 12.3 Dynamic programming
        1. 12.3.1 Two-stage finite decision problems
        2. 12.3.2 More than two stages
      4. 12.4 Trading off immediate gains and information
        1. 12.4.1 The secretary problem
        2. 12.4.2 The prophet inequality
      5. 12.5 Sequential clinical trials
        1. 12.5.1 Two-armed bandit problems
        2. 12.5.2 Adaptive designs for binary outcomes
      6. 12.6 Variable selection in multiple regression
      7. 12.7 Computing
      8. 12.8 Exercises
    2. Chapter 13: Changes in utility as information
      1. 13.1 Measuring the value of information
        1. 13.1.1 The value function
        2. 13.1.2 Information from a perfect experiment
        3. 13.1.3 Information from a statistical experiment
        4. 13.1.4 The distribution of information
      2. 13.2 Examples
        1. 13.2.1 Tasting grapes
        2. 13.2.2 Medical testing
        3. 13.2.3 Hypothesis testing
      3. 13.3 Lindley information
        1. 13.3.1 Definition
        2. 13.3.2 Properties
        3. 13.3.3 Computing
        4. 13.3.4 Optimal design
      4. 13.4 Minimax and the value of information
      5. 13.5 Exercises
    3. Chapter 14: Sample size
      1. 14.1 Decision-theoretic approaches to sample size
        1. 14.1.1 Sample size and power
        2. 14.1.2 Sample size as a decision problem
        3. 14.1.3 Bayes and minimax optimal sample size
        4. 14.1.4 A minimax paradox
        5. 14.1.5 Goal sampling
      2. 14.2 Computing
      3. 14.3 Examples
        1. 14.3.1 Point estimation with quadratic loss
        2. 14.3.2 Composite hypothesis testing
        3. 14.3.3 A two-action problem with linear utility
        4. 14.3.4 Lindley information for exponential data
        5. 14.3.5 Multicenter clinical trials
      4. 14.4 Exercises
    4. Chapter 15: Stopping
      1. 15.1 Historical note
      2. 15.2 A motivating example
      3. 15.3 Bayesian optimal stopping
        1. 15.3.1 Notation
        2. 15.3.2 Bayes sequential procedure
        3. 15.3.3 Bayes truncated procedure
      4. 15.4 Examples
        1. 15.4.1 Hypotheses testing
        2. 15.4.2 An example with equivalence between sequential and fixed sample size designs
      5. 15.5 Sequential sampling to reduce uncertainty
      6. 15.6 The stopping rule principle
        1. 15.6.1 Stopping rules and the Likelihood Principle
        2. 15.6.2 Sampling to a foregone conclusion
      7. 15.7 Exercises
    5. Appendix
      1. A.1 Notation
      2. A.2 Relations
      3. A.3 Probability (density) functions of some distributions
      4. A.4 Conjugate updating
    6. References
    7. Index
    8. Wiley Series in Probability and Statistics