American-Type Options

Book description

The series is devoted to the publication of monographs and high-level textbooks in mathematics, mathematical methods and their applications. Apart from covering important areas of current interest, a major aim is to make topics of an interdisciplinary nature accessible to the non-specialist.

The works in this series are addressed to advanced students and researchers in mathematics and theoretical physics. In addition, they can serve as guides for lectures and seminars at the graduate level.

The series de Gruyter Studies in Mathematics was founded some 30 years ago by the late Professor Heinz Bauer and Professor Peter Gabriel with the aim of establishing a series of monographs and textbooks of high standard, written by scholars with an international reputation, presenting current fields of research in pure and applied mathematics.
While the editorial board of the Studies has changed over the years, the aspirations of the Studies are unchanged. In times of rapid growth of mathematical knowledge, carefully written monographs and textbooks by experts are needed more than ever, not least to pave the way for the next generation of mathematicians. In this sense, the editorial board and the publisher of the Studies are committed to continuing the Studies as a service to the mathematical community.

Please submit any book proposals to Niels Jacob.

Table of contents

  1. American-Type Options
  2. De Gruyter Studies in Mathematics
  3. Title Page
  4. Copyright Page
  5. Preface
  6. Table of Contents
  7. 1 Reward approximations for autoregressive log-price processes (LPP)
    1. 1.1 Markov Gaussian LPP
      1. 1.1.1 Upper bounds for rewards of Markov Gaussian log-price processes with linear drift and bounded volatility
      2. 1.1.2 Space skeleton approximations for option rewards of Markov Gaussian log-price processes with linear drift and constant bounded coefficients
      3. 1.1.3 Convergence of option reward functions for space skeleton approximations for Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
      4. 1.1.4 Convergence of optimal expected rewards for space skeleton approximations of Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
    2. 1.2 Autoregressive LPP
      1. 1.2.1 Upper bounds for rewards of autoregressive log-price processes
      2. 1.2.2 Space-skeleton approximations for option rewards of autoregressive log-price processes
      3. 1.2.3 Convergence of option reward functions for space skeleton approximations for option rewards of autoregressive log-price processes
      4. 1.2.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive log-price processes
    3. 1.3 Autoregressive moving average LPP
      1. 1.3.1 Upper bounds for rewards of autoregressive moving average type log-price processes
      2. 1.3.2 Space skeleton approximations for option reward functions for autoregressive moving average log-price processes
      3. 1.3.3 Convergence of option reward functions for space skeleton approximations for option reward functions for autoregressive moving average log-price processes with Gaussian noise terms
      4. 1.3.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive moving average log-price processes
    4. 1.4 Modulated Markov Gaussian LPP
      1. 1.4.1 Upper bounds for rewards of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
      2. 1.4.2 Space-skeleton approximations for option rewards of modulated Markov Gaussian log-price processes with linear drift and constant diffusion coefficients
      3. 1.4.3 Convergence of option reward functions for space skeleton approximations for modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
      4. 1.4.4 Convergence of optimal expected rewards for space skeleton approximations of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
    5. 1.5 Modulated autoregressive LPP
      1. 1.5.1 Upper bounds for rewards of modulated autoregressive type log-price processes
      2. 1.5.2 Space skeleton approximations for option rewards of modulated autoregressive log-price processes
      3. 1.5.3 Convergence of space skeleton approximations for option rewards of modulated autoregressive log-price processes with Gaussian noise terms
      4. 1.5.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive log-price processes
    6. 1.6 Modulated autoregressive moving average LPP
      1. 1.6.1 Upper bounds for rewards of mixed modulated autoregressive moving average type log-price processes with Gaussian noise terms
      2. 1.6.2 Space skeleton approximations for option rewards of modulated autoregressive moving average log-price processes
      3. 1.6.3 Convergence of space skeleton approximations for option rewards of modulated autoregressive moving average log-price processes
      4. 1.6.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive moving average log-price processes
  8. 2 Reward approximations for autoregressive stochastic volatility LPP
    1. 2.1 Nonlinear autoregressive stochastic volatility LPP
      1. 2.1.1 Upper bounds for rewards of nonlinear autoregressive stochastic volatility log-price processes
      2. 2.1.2 Space-skeleton approximations for option rewards of nonlinear autoregressive stochastic volatility log-price processes
      3. 2.1.3 Convergence of option reward functions for space skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes
      4. 2.1.4 Convergence of optimal expected rewards for space skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes
    2. 2.2 Autoregressive conditional heteroskedastic LPP
      1. 2.2.1 Upper bounds for rewards of autoregressive conditional heteroskedastic log-price processes
      2. 2.2.2 Space skeleton approximations for option rewards of autoregressive conditional heteroskedastic log-price processes
      3. 2.2.3 Convergence of option reward functions for space-skeleton approximations of autoregressive conditional heteroskedastic log-price processes
      4. 2.2.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive conditional heteroskedastic log-price processes
    3. 2.3 Generalized autoregressive conditional heteroskedastic LPP
      1. 2.3.1 Upper bounds for rewards of generalized autoregressive conditional heteroskedastic log-price processes
      2. 2.3.2 Space-skeleton approximations for option rewards of generalized autoregressive conditional heteroskedastic log-price processes
      3. 2.3.3 Convergence of option reward functions for generalized autoregressive conditional heteroskedastic log-price processes
      4. 2.3.4 Convergence of optimal expected rewards for space skeleton approximations of generalized autoregressive conditional heteroskedastic log-price processes
    4. 2.4 Modulated nonlinear autoregressive stochastic volatility LPP
      1. 2.4.1 Upper bound for rewards of modulated nonlinear autoregressive stochastic volatility log-price processes
      2. 2.4.2 Space-skeleton approximations for option rewards of modulated nonlinear autoregressive stochastic volatility log-price processes
      3. 2.4.3 Convergence of option reward functions for modulated nonlinear autoregressive stochastic volatility log-price processes
      4. 2.4.4 Convergence of optimal expected rewards for space skeleton approximations of modulated nonlinear autoregressive stochastic volatility log-price processes
    5. 2.5 Modulated autoregressive conditional heteroskedastic LPP
      1. 2.5.1 Space-skeleton approximations for modulated autoregressive conditional heteroskedastic log-price processes
      2. 2.5.2 Convergence of option reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
      3. 2.5.3 Convergence of reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
      4. 2.5.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
    6. 2.6 Modulated generalized autoregressive conditional heteroskedastic LPP
      1. 2.6.1 Upper bounds for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes
      2. 2.6.2 Space-skeleton approximations for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes
      3. 2.6.3 Convergence of option reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
      4. 2.6.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
  9. 3 American-type options for continuous time Markov LPP
    1. 3.1 Markov LPP
      1. 3.1.1 Continuous time log-price and price processes
      2. 3.1.2 Spaces of trajectories for log-price and price processes
      3. 3.1.3 Information filtrations generated by log-price and price processes
      4. 3.1.4 Markov log-price and price processes
      5. 3.1.5 Modulated Markov log-price and price processes
    2. 3.2 LPP with independent increments
      1. 3.2.1 Log-price and price processes with independent increments
      2. 3.2.2 Step-wise log-price and price processes with independent increments
      3. 3.2.3 Lévy log-price and price processes
      4. 3.2.4 Log-price and price processes with independent increments modulated by semi-Markov indices
    3. 3.3 Diffusion LPP
      1. 3.3.1 Diffusion log-price and price processes
      2. 3.3.2 Diffusion log-price processes modulated by semi-Markov indices
    4. 3.4 American-type options for Markov LPP
      1. 3.4.1 American-type options for continuous time price processes
      2. 3.4.2 Optimal expected rewards, reward functions and optimal stopping times
      3. 3.4.3 Pay-off functions
      4. 3.4.4 Pay-off functions for call and put type options
      5. 3.4.5 Nonlinear pay-off functions for call- and put-type options
      6. 3.4.6 Pay-off functions for exchange of assets contracts
  10. 4 Upper bounds for option rewards for Markov LPP
    1. 4.1 Upper bounds for rewards for Markov LPP
      1. 4.1.1 Upper bounds for supremums of log-price processes
      2. 4.1.2 Upper bounds for reward functions
      3. 4.1.3 Upper bounds for optimal expected rewards
      4. 4.1.4 Asymptotically uniform upper bounds for supremums of log-price processes
      5. 4.1.5 Asymptotically uniform upper bounds for reward functions
      6. 4.1.6 Asymptotically uniform upper bounds for optimal expected rewards
    2. 4.2 Asymptotically uniform conditions of compactness for Markov LPP
      1. 4.2.1 The first-type conditions of moment compactness for log-price processes
      2. 4.2.2 The second-type conditions of moment compactness for log-price processes
      3. 4.2.3 Conditions of moment compactness for index processes
      4. 4.2.4 Time-skeleton approximations for log-price processes
    3. 4.3 Upper bounds for rewards for LPP with independent increments
      1. 4.3.1 Upper bounds for option rewards for log-price processes with independent increments
      2. 4.3.2 Asymptotically uniform upper bounds for option rewards for perturbed log-price and price processes with independent increments
    4. 4.4 Upper bounds for rewards for diffusion LPP
      1. 4.4.1 Skeleton approximations for diffusion log-price processes
      2. 4.4.2 Upper bounds for option rewards for diffusion log-price processes with bounded characteristics and their time-skeleton approximations
      3. 4.4.3 Upper bounds for option rewards for diffusion log-price processes with bounded characteristics and their martingale-type approximations
      4. 4.4.4 Upper bounds for option rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations
    5. 4.5 Upper bounds for rewards for mean-reverse diffusion LPP
      1. 4.5.1 Univariate mean-reverse diffusion log-price processes
      2. 4.5.2 Upper bounds for exponential moments for supremums for univariate mean-reverse diffusion log-price processes
      3. 4.5.3 Upper bounds for option rewards for univariate mean-reverse diffusion log-price processes
  11. 5 Time-skeleton reward approximations for Markov LPP
    1. 5.1 Lipschitz-type conditions for pay-off functions
      1. 5.1.1 Asymptotically uniform Lipschitz-type conditions for pay-off functions expressed in terms of price arguments
      2. 5.1.2 Asymptotically uniform Lipschitz-type conditions for pay-off functions expressed in terms of log-price arguments
      3. 5.1.3 Asymptotically uniform Lipschitz-type conditions and rates of growth for pay-off functions
      4. 5.1.4 Weakened Lipschitz-type conditions for pay-off functions
    2. 5.2 Time-skeleton approximations for optimal expected rewards
      1. 5.2.1 Inequalities connecting reward functionals for American and Bermudan options
      2. 5.2.2 Time-skeleton approximations for optimal expected rewards for multivariate modulated Markov log-price processes
    3. 5.3 Time-skeleton approximations for reward functions
      1. 5.3.1 Time-skeleton approximations for reward functions for multivariate modulated Markov log-price processes
      2. 5.3.2 Time-skeleton approximations for reward functions for multivariate modulated Markov log-price processes
    4. 5.4 Time-skeleton reward approximations for LPP with independent increments
      1. 5.4.1 Time-skeleton approximations for optimal expected rewards for multivariate log-price processes with independent increments
      2. 5.4.2 Time-skeleton approximations for reward functions for multivariate log-price processes with independent increments
    5. 5.5 Time-skeleton reward approximations for diffusion LPP
      1. 5.5.1 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their time-skeleton approximations
      2. 5.5.2 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their martingale-type approximations
      3. 5.5.3 Time-skeleton reward approximations for optimal expected rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations
  12. 6 Time-space-skeleton reward approximations for Markov LPP
    1. 6.1 Time-space-skeleton reward approximations for Markov LPP
      1. 6.1.1 Convergence of time-skeleton reward approximations based on a given partition of time interval, for multivariate modulated Markov log-price processes
      2. 6.1.2 Convergence of time-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate modulated Markov log-price processes
      3. 6.1.3 Time-space-skeleton reward approximations for multivariate modulated Markov log-price processes
      4. 6.1.4 Convergence of time-space-skeleton reward approximations based on a given partition of time interval for multivariate modulated Markov log-price processes
      5. 6.1.5 Convergence of time-space-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate modulated Markov log-price processes
    2. 6.2 Time-space-skeleton reward approximations for LPP with independent increments
      1. 6.2.1 Convergence of time-skeleton reward approximations based on a given partition of time interval, for multivariate log-price processes with independent increments
      2. 6.2.2 Convergence of time-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate log-price processes with independent increments
      3. 6.2.3 Time-space-skeleton reward approximations with fixed space-skeleton structure, for multivariate log-price processes with independent increments
      4. 6.2.4 Time-space-skeleton reward approximations with an additive space-skeleton structure, for multivariate log-price processes with independent increments
      5. 6.2.5 Convergence of time-space-skeleton reward approximations for a given partition of time interval, for multivariate log-price processes with independent increments
      6. 6.2.6 Convergence of time-space-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate log-price processes with independent increments
    3. 6.3 Time-space-skeleton reward approximations for diffusion LPP
      1. 6.3.1 Convergence of time-skeleton reward approximations for multivariate diffusion log-price processes
      2. 6.3.2 Convergence of martingale-type reward approximations for diffusion type log-price processes with bounded characteristics
      3. 6.3.3 Convergence of trinomial-tree reward approximations for univariate diffusion log-price processes
      4. 6.3.4 Time-space-skeleton reward approximations for diffusion log-price processes
      5. 6.3.5 Convergence of time-space-skeleton reward approximations with fixed skeleton structure based on arbitrary partition of time interval, for diffusion log-price processes
  13. 7 Convergence of option rewards for continuous time Markov LPP
    1. 7.1 Convergence of rewards for continuous time Markov LPP
      1. 7.1.1 Convergence of optimal expected rewards for multivariate modulated Markov log-price processes
      2. 7.1.2 Convergence of reward functions for multivariate modulated Markov log-price processes
    2. 7.2 Convergence of rewards for LPP with independent increments
      1. 7.2.1 Convergence of rewards for multivariate log-price processes with independent increments
      2. 7.2.2 Convergence of rewards for time-skeleton approximations of multivariate log-price processes with independent increments
      3. 7.2.3 Convergence of rewards for time-space approximations of multivariate log-price processes with independent increments
      4. 7.2.4 Convergence of time-space-skeleton reward approximations for multivariate Lévy log-price processes
    3. 7.3 Convergence of rewards for univariate Gaussian LPP with independent increments
      1. 7.3.1 Convergence of binomial-tree reward approximations for univariate Gaussian log-price processes with independent increments
      2. 7.3.2 Fitting of parameters for binomial-tree reward approximations for univariate Wiener log-price processes
      3. 7.3.3 Fitting of parameters for binomial-tree reward approximations for univariate inhomogeneous in time Gaussian log-price processes
      4. 7.3.4 Convergence of time-space skeleton reward approximations for univariate Gaussian log-price processes with independent increments
    4. 7.4 Convergence of rewards for multivariate Gaussian LPP with independent increments
      1. 7.4.1 Convergence of trinomial-tree reward approximations for multivariate Gaussian log-price processes
      2. 7.4.2 Fitting of parameters for binomial-tree reward approximations for multivariate Wiener log-price processes
      3. 7.4.3 Fitting of parameters for trinomial-tree reward approximations for multivariate inhomogeneous in time Gaussian log-price processes with independent increments
      4. 7.4.4 Convergence of time-space skeleton reward approximations for multivariate Gaussian log-price processes with independent increments
  14. 8 Convergence of option rewards for diffusion LPP
    1. 8.1 Convergence of rewards for time-skeleton approximations of diffusion LPP
      1. 8.1.1 Convergence of rewards for multivariate diffusion log-price processes and their time-skeleton approximations
      2. 8.1.2 Convergence of rewards for embedded time-skeleton approximations of multivariate diffusion log-price processes
    2. 8.2 Convergence of rewards for martingale-type approximations of diffusion LPP
      1. 8.2.1 Convergence of rewards for diffusion log-price processes and their martingale-type time-skeleton approximations
      2. 8.2.2 Convergence of rewards for embedded martingale-type approximations for multivariate diffusion log-price processes
    3. 8.3 Convergence of rewards for trinomial-tree approximations of diffusion LPP
      1. 8.3.1 Convergence of rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations
      2. 8.3.2 Convergence of rewards for embedded trinomial-tree approximations for univariate diffusion log-price processes
    4. 8.4 Reward approximations for mean-reverse diffusion LPP
      1. 8.4.1 Trinomial tree reward approximations for mean-reverse diffusion log-price processes
      2. 8.4.2 Approximation of rewards for diffusion log-price processes based on space truncation of drift and diffusion functional coefficients
      3. 8.4.3 Asymptotic reward approximations for diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients
      4. 8.4.4 Asymptotic reward approximations for mean-reverse diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients
  15. 9 European, knockout, reselling and random pay-off options
    1. 9.1 Reward approximations for European-type options
      1. 9.1.1 Reward approximation for European- and American-type options for multivariate modulated Markov log-price processes
      2. 9.1.2 Convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes
      3. 9.1.3 Other results about convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes
    2. 9.2 Reward approximations for knockout American-type options
      1. 9.2.1 Knockout American-type options
      2. 9.2.2 Imbedding into the model of ordinary American-type options
      3. 9.2.3 Imbedding of discrete time knockout American-type options into the model of ordinary discrete time American-type options
      4. 9.2.4 Convergence of reward approximations for knockout American-type options for multivariate modulated Markov log-price processes
    3. 9.3 Reward approximations for reselling options
      1. 9.3.1 Reselling of European options
      2. 9.3.2 Convergence of optimal expected rewards for binomial-trinomial-tree approximations in the model of reselling for European options
      3. 9.3.3 Convergence of binomial-trinomial-tree reward approximations in the model of reselling of European options
    4. 9.4 Reward approximations for American-type options with random pay-off
      1. 9.4.1 American-type options with random pay-off
      2. 9.4.2 Convergence of rewards for American-type options with random pay-off
      3. 9.4.3 Approximation of rewards based on skeleton approximations for a random pay-off
      4. 9.4.4 Convergence of means for rewards of American-type options with random pay-off
  16. 10 Results of experimental studies
    1. 10.1 Binomial- and trinomial-tree reward approximations for discrete time models
      1. 10.1.1 Binomial and trinomial reward approximations for log-price processes represented by Gaussian random walks
      2. 10.1.2 Experimental results for binomial and trinomial reward approximations
    2. 10.2 Skeleton reward approximations for discrete time models
      1. 10.2.1 Skeleton reward approximations for log-price processes represented by random walks
      2. 10.2.2 Experimental results for space-skeleton reward approximations for the Gaussian and compound Gaussian models
      3. 10.2.3 Rate of convergence for skeleton reward approximation models with pay-off functions generated by a compound Poisson-gamma process
    3. 10.3 Reward approximations for continuous time models
      1. 10.3.1 Bivariate binomial-tree reward approximations for a model of exchange of assets
      2. 10.3.2 A numerical example for the model of exchange of assets
      3. 10.3.3 A numerical example for the Schwartz model
      4. 10.3.4 Numerical examples for the model of reselling of a European option
    4. 10.4 Reward approximation algorithms for Markov LPP
      1. 10.4.1 A theoretical scheme for three-step time-space-skeleton reward approximations for Markov log-price processes
      2. 10.4.2 Three-step time-space-skeleton reward approximation algorithms for Markov log-price processes
      3. 10.4.3 Modified three-step time-space-skeleton reward approximation algorithms for Markov log-price processes
  17. Bibliographical Remarks
  18. Bibliography
  19. Index
  20. De Gruyter Studies in Mathematics
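The binomial-tree reward approximations treated in Chapters 7 and 10 have a familiar textbook prototype: backward induction on a Cox-Ross-Rubinstein lattice, taking at each node the larger of the continuation value and the immediate exercise pay-off. The following sketch (a standard illustration assuming the usual CRR parametrization, not code or notation taken from this book) prices an American put that way:

```python
import math

def american_put_binomial(s0, strike, r, sigma, t, n):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree.

    Standard scheme: up factor u = exp(sigma*sqrt(dt)), down factor
    d = 1/u, risk-neutral probability p = (exp(r*dt) - d) / (u - d).
    """
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)

    # Terminal pay-offs max(K - S, 0) at the n-th time step.
    values = [max(strike - s0 * u**j * d**(n - j), 0.0)
              for j in range(n + 1)]

    # Backward induction: at each node take the larger of the
    # discounted continuation value and the exercise pay-off.
    for i in range(n - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                strike - s0 * u**j * d**(i - j))
            for j in range(i + 1)
        ]
    return values[0]
```

Refining the lattice (increasing `n`) is the discrete-time analogue of the convergence results for binomial-tree reward approximations established in Section 7.3.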

Product information

  • Title: American-Type Options
  • Author(s): Dmitrii S. Silvestrov
  • Release date: March 2015
  • Publisher(s): De Gruyter
  • ISBN: 9783110389906