Book description
The book gives a systematic presentation of stochastic approximation methods for discrete time Markov price processes. Advanced methods combining backward recurrence algorithms for computing option rewards with general results on the convergence of stochastic space-skeleton and tree approximations for option rewards are applied to a variety of models of multivariate modulated Markov price processes. The principal novelty of the presented results lies in the consideration of multivariate modulated Markov price processes and general payoff functions, which can depend not only on the price but also on an additional stochastic modulating index component, and in the use of minimal smoothness conditions for transition probabilities and payoff functions, compactness conditions for log-price processes, and rate of growth conditions for payoff functions. The volume presents results on structural studies of optimal stopping domains, Monte Carlo based approximation reward algorithms, and convergence of American-type options for autoregressive and continuous time models, as well as results of the corresponding experimental studies.
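The backward recurrence algorithms for computing option rewards mentioned in the description can be illustrated, in the simplest special case, by pricing an American put via backward induction on a Cox-Ross-Rubinstein binomial tree. This is a minimal sketch only; the function name and parameters are illustrative and do not follow the book's notation or its more general modulated Markov setting.

```python
import math

def american_put_binomial(s0, strike, r, sigma, maturity, n_steps):
    """Price an American put by backward recurrence on a CRR binomial tree."""
    dt = maturity / n_steps
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1.0 / u                            # down factor
    disc = math.exp(-r * dt)               # one-step discount factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability

    # Terminal payoffs at maturity: max(K - S, 0) at each node.
    values = [max(strike - s0 * u**j * d**(n_steps - j), 0.0)
              for j in range(n_steps + 1)]

    # Backward recurrence: at every earlier node the reward is the maximum
    # of the immediate exercise payoff and the discounted continuation value.
    for step in range(n_steps - 1, -1, -1):
        for j in range(step + 1):
            continuation = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(strike - s0 * u**j * d**(step - j), 0.0)
            values[j] = max(exercise, continuation)

    return values[0]
```

The max-of-exercise-and-continuation step is the discrete optimal stopping recursion; the book's space-skeleton and tree approximations generalize this scheme to multivariate modulated Markov log-price processes and general payoff functions.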
Table of contents
 American-Type Options
 De Gruyter Studies in Mathematics
 Title Page
 Copyright Page
 Preface
 Table of Contents

1 Reward approximations for autoregressive log-price processes (LPP)

1.1 Markov Gaussian LPP
 1.1.1 Upper bounds for rewards of Markov Gaussian log-price processes with linear drift and bounded volatility
 1.1.2 Space-skeleton approximations for option rewards of Markov Gaussian log-price processes with linear drift and constant bounded coefficients
 1.1.3 Convergence of option reward functions for space-skeleton approximations for Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
 1.1.4 Convergence of optimal expected rewards for space-skeleton approximations of Markov Gaussian log-price processes with linear drift and bounded volatility coefficients

1.2 Autoregressive LPP
 1.2.1 Upper bounds for rewards of autoregressive log-price processes
 1.2.2 Space-skeleton approximations for option rewards of autoregressive log-price processes
 1.2.3 Convergence of option reward functions for space-skeleton approximations for option rewards of autoregressive log-price processes
 1.2.4 Convergence of optimal expected rewards for space-skeleton approximations of autoregressive log-price processes

1.3 Autoregressive moving average LPP
 1.3.1 Upper bounds for rewards of autoregressive moving average type log-price processes
 1.3.2 Space-skeleton approximations for option reward functions for autoregressive moving average log-price processes
 1.3.3 Convergence of option reward functions for space-skeleton approximations for option reward functions for autoregressive moving average log-price processes with Gaussian noise terms
 1.3.4 Convergence of optimal expected rewards for space-skeleton approximations of autoregressive moving average log-price processes

1.4 Modulated Markov Gaussian LPP
 1.4.1 Upper bounds for rewards of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
 1.4.2 Space-skeleton approximations for option rewards of modulated Markov Gaussian log-price processes with linear drift and constant diffusion coefficients
 1.4.3 Convergence of option reward functions for space-skeleton approximations for modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
 1.4.4 Convergence of optimal expected rewards for space-skeleton approximations of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients

1.5 Modulated autoregressive LPP
 1.5.1 Upper bounds for rewards of modulated autoregressive type log-price processes
 1.5.2 Space-skeleton approximations for option rewards of modulated autoregressive log-price processes
 1.5.3 Convergence of space-skeleton approximations for option rewards of modulated autoregressive log-price processes with Gaussian noise terms
 1.5.4 Convergence of optimal expected rewards for space-skeleton approximations of modulated autoregressive log-price processes

1.6 Modulated autoregressive moving average LPP
 1.6.1 Upper bounds for rewards of mixed modulated autoregressive moving average type log-price processes with Gaussian noise terms
 1.6.2 Space-skeleton approximations for option rewards of modulated autoregressive moving average log-price processes
 1.6.3 Convergence of space-skeleton approximations for option rewards of modulated autoregressive moving average log-price processes
 1.6.4 Convergence of optimal expected rewards for space-skeleton approximations of modulated autoregressive moving average log-price processes

2 Reward approximations for autoregressive stochastic volatility LPP

2.1 Nonlinear autoregressive stochastic volatility LPP
 2.1.1 Upper bounds for rewards of nonlinear autoregressive stochastic volatility log-price processes
 2.1.2 Space-skeleton approximations for option rewards of nonlinear autoregressive stochastic volatility log-price processes
 2.1.3 Convergence of option reward functions for space-skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes
 2.1.4 Convergence of optimal expected rewards for space-skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes

2.2 Autoregressive conditional heteroskedastic LPP
 2.2.1 Upper bounds for rewards of autoregressive conditional heteroskedastic log-price processes
 2.2.2 Space-skeleton approximations for option rewards of autoregressive conditional heteroskedastic log-price processes
 2.2.3 Convergence of option reward functions for space-skeleton approximations of autoregressive conditional heteroskedastic log-price processes
 2.2.4 Convergence of optimal expected rewards for space-skeleton approximations of autoregressive conditional heteroskedastic log-price processes

2.3 Generalized autoregressive conditional heteroskedastic LPP
 2.3.1 Upper bounds for rewards of generalized autoregressive conditional heteroskedastic log-price processes
 2.3.2 Space-skeleton approximations for option rewards of generalized autoregressive conditional heteroskedastic log-price processes
 2.3.3 Convergence of option reward functions for generalized autoregressive conditional heteroskedastic log-price processes
 2.3.4 Convergence of optimal expected rewards for space-skeleton approximations of generalized autoregressive conditional heteroskedastic log-price processes

2.4 Modulated nonlinear autoregressive stochastic volatility LPP
 2.4.1 Upper bounds for rewards of modulated nonlinear autoregressive stochastic volatility log-price processes
 2.4.2 Space-skeleton approximations for option rewards of modulated nonlinear autoregressive stochastic volatility log-price processes
 2.4.3 Convergence of option reward functions for modulated nonlinear autoregressive stochastic volatility log-price processes
 2.4.4 Convergence of optimal expected rewards for space-skeleton approximations of modulated nonlinear autoregressive stochastic volatility log-price processes

2.5 Modulated autoregressive conditional heteroskedastic LPP
 2.5.1 Space-skeleton approximations for modulated autoregressive conditional heteroskedastic log-price processes
 2.5.2 Convergence of option reward functions for space-skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
 2.5.3 Convergence of reward functions for space-skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
 2.5.4 Convergence of optimal expected rewards for space-skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes

2.6 Modulated generalized autoregressive conditional heteroskedastic LPP
 2.6.1 Upper bounds for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes
 2.6.2 Space-skeleton approximations for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes
 2.6.3 Convergence of option reward functions for space-skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
 2.6.4 Convergence of optimal expected rewards for space-skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes

3 American-type options for continuous time Markov LPP
 3.1 Markov LPP
 3.2 LPP with independent increments
 3.3 Diffusion LPP

3.4 American-type options for Markov LPP
 3.4.1 American-type options for continuous time price processes
 3.4.2 Optimal expected rewards, reward functions and optimal stopping times
 3.4.3 Payoff functions
 3.4.4 Payoff functions for call and put-type options
 3.4.5 Nonlinear payoff functions for call and put-type options
 3.4.6 Payoff functions for exchange of assets contracts

4 Upper bounds for option rewards for Markov LPP

4.1 Upper bounds for rewards for Markov LPP
 4.1.1 Upper bounds for supremums of log-price processes
 4.1.2 Upper bounds for reward functions
 4.1.3 Upper bounds for optimal expected rewards
 4.1.4 Asymptotically uniform upper bounds for supremums of log-price processes
 4.1.5 Asymptotically uniform upper bounds for reward functions
 4.1.6 Asymptotically uniform upper bounds for optimal expected rewards
 4.2 Asymptotically uniform conditions of compactness for Markov LPP
 4.3 Upper bounds for rewards for LPP with independent increments

4.4 Upper bounds for rewards for diffusion LPP
 4.4.1 Skeleton approximations for diffusion log-price processes
 4.4.2 Upper bounds for option rewards for diffusion log-price processes with bounded characteristics and their time-skeleton approximations
 4.4.3 Upper bounds for option rewards for diffusion log-price processes with bounded characteristics and their martingale-type approximations
 4.4.4 Upper bounds for option rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations
 4.5 Upper bounds for rewards for mean-reverse diffusion LPP

5 Time-skeleton reward approximations for Markov LPP

5.1 Lipschitz-type conditions for payoff functions
 5.1.1 Asymptotically uniform Lipschitz-type conditions for payoff functions expressed in terms of price arguments
 5.1.2 Asymptotically uniform Lipschitz-type conditions for payoff functions expressed in terms of log-price arguments
 5.1.3 Asymptotically uniform Lipschitz-type conditions and rates of growth for payoff functions
 5.1.4 Weakened Lipschitz-type conditions for payoff functions
 5.2 Time-skeleton approximations for optimal expected rewards
 5.3 Time-skeleton approximations for reward functions
 5.4 Time-skeleton reward approximations for LPP with independent increments

5.5 Time-skeleton reward approximations for diffusion LPP
 5.5.1 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their time-skeleton approximations
 5.5.2 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their martingale-type approximations
 5.5.3 Time-skeleton reward approximations for optimal expected rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations

6 Time-space-skeleton reward approximations for Markov LPP

6.1 Time-space-skeleton reward approximations for Markov LPP
 6.1.1 Convergence of time-skeleton reward approximations based on a given partition of the time interval, for multivariate modulated Markov log-price processes
 6.1.2 Convergence of time-skeleton reward approximations based on arbitrary partitions of the time interval, for multivariate modulated Markov log-price processes
 6.1.3 Time-space-skeleton reward approximations for multivariate modulated Markov log-price processes
 6.1.4 Convergence of time-space-skeleton reward approximations based on a given partition of the time interval, for multivariate modulated Markov log-price processes
 6.1.5 Convergence of time-space-skeleton reward approximations based on arbitrary partitions of the time interval, for multivariate modulated Markov log-price processes

6.2 Time-space-skeleton reward approximations for LPP with independent increments
 6.2.1 Convergence of time-skeleton reward approximations based on a given partition of the time interval, for multivariate log-price processes with independent increments
 6.2.2 Convergence of time-skeleton reward approximations based on arbitrary partitions of the time interval, for multivariate log-price processes with independent increments
 6.2.3 Time-space-skeleton reward approximations with fixed space-skeleton structure, for multivariate log-price processes with independent increments
 6.2.4 Time-space-skeleton reward approximations with an additive space-skeleton structure, for multivariate log-price processes with independent increments
 6.2.5 Convergence of time-space-skeleton reward approximations for a given partition of the time interval, for multivariate log-price processes with independent increments
 6.2.6 Convergence of time-space-skeleton reward approximations based on arbitrary partitions of the time interval, for multivariate log-price processes with independent increments

6.3 Time-space-skeleton reward approximations for diffusion LPP
 6.3.1 Convergence of time-skeleton reward approximations for multivariate diffusion log-price processes
 6.3.2 Convergence of martingale-type reward approximations for diffusion type log-price processes with bounded characteristics
 6.3.3 Convergence of trinomial-tree reward approximations for univariate diffusion log-price processes
 6.3.4 Time-space-skeleton reward approximations for diffusion log-price processes
 6.3.5 Convergence of time-space-skeleton reward approximations with fixed skeleton structure based on an arbitrary partition of the time interval, for diffusion log-price processes

7 Convergence of option rewards for continuous time Markov LPP
 7.1 Convergence of rewards for continuous time Markov LPP

7.2 Convergence of rewards for LPP with independent increments
 7.2.1 Convergence of rewards for multivariate log-price processes with independent increments
 7.2.2 Convergence of rewards for time-skeleton approximations of multivariate log-price processes with independent increments
 7.2.3 Convergence of rewards for time-space approximations of multivariate log-price processes with independent increments
 7.2.4 Convergence of time-space-skeleton reward approximations for multivariate Lévy log-price processes

7.3 Convergence of rewards for univariate Gaussian LPP with independent increments
 7.3.1 Convergence of binomial-tree reward approximations for univariate Gaussian log-price processes with independent increments
 7.3.2 Fitting of parameters for binomial-tree reward approximations for univariate Wiener log-price processes
 7.3.3 Fitting of parameters for binomial-tree reward approximations for univariate inhomogeneous in time Gaussian log-price processes
 7.3.4 Convergence of time-space-skeleton reward approximations for univariate Gaussian log-price processes with independent increments

7.4 Convergence of rewards for multivariate Gaussian LPP with independent increments
 7.4.1 Convergence of trinomial-tree reward approximations for multivariate Gaussian log-price processes
 7.4.2 Fitting of parameters for binomial-tree reward approximations for multivariate Wiener log-price processes
 7.4.3 Fitting of parameters for trinomial-tree reward approximations for multivariate inhomogeneous in time Gaussian log-price processes with independent increments
 7.4.4 Convergence of time-space-skeleton reward approximations for multivariate Gaussian log-price processes with independent increments

8 Convergence of option rewards for diffusion LPP
 8.1 Convergence of rewards for time-skeleton approximations of diffusion LPP
 8.2 Convergence of rewards for martingale-type approximations of diffusion LPP
 8.3 Convergence of rewards for trinomial-tree approximations of diffusion LPP

8.4 Reward approximations for mean-reverse diffusion LPP
 8.4.1 Trinomial-tree reward approximations for mean-reverse diffusion log-price processes
 8.4.2 Approximation of rewards for diffusion log-price processes based on space truncation of drift and diffusion functional coefficients
 8.4.3 Asymptotic reward approximations for diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients
 8.4.4 Asymptotic reward approximations for mean-reverse diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients

9 European, knockout, reselling and random payoff options

9.1 Reward approximations for European-type options
 9.1.1 Reward approximation for European and American-type options for multivariate modulated Markov log-price processes
 9.1.2 Convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes
 9.1.3 Other results about convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes

9.2 Reward approximations for knockout American-type options
 9.2.1 Knockout American-type options
 9.2.2 Imbedding into the model of ordinary American-type options
 9.2.3 Imbedding of discrete time knockout American-type options into the model of ordinary discrete time American-type options
 9.2.4 Convergence of reward approximations for knockout American-type options for multivariate modulated Markov log-price processes
 9.3 Reward approximations for reselling options
 9.4 Reward approximations for American-type options with random payoff

10 Results of experimental studies
 10.1 Binomial- and trinomial-tree reward approximations for discrete time models

10.2 Skeleton reward approximations for discrete time models
 10.2.1 Skeleton reward approximations for log-price processes represented by random walks
 10.2.2 Experimental results for space-skeleton reward approximations for the Gaussian and compound Gaussian models
 10.2.3 Rate of convergence for skeleton reward approximation models with payoff functions generated by a compound Poisson-gamma process
 10.3 Reward approximations for continuous time models

10.4 Reward approximation algorithms for Markov LPP
 10.4.1 A theoretical scheme for three-step time-space-skeleton reward approximations for Markov log-price processes
 10.4.2 Three-step time-space-skeleton reward approximation algorithms for Markov log-price processes
 10.4.3 Modified three-step time-space-skeleton reward approximation algorithms for Markov log-price processes
 Bibliographical Remarks
 Bibliography
 Index
 De Gruyter Studies in Mathematics
Product information
 Title: American-Type Options
 Author(s):
 Release date: March 2015
 Publisher(s): De Gruyter
 ISBN: 9783110389906