Book description
The book gives a systematic presentation of stochastic approximation methods for discrete time Markov price processes. Advanced methods combining backward recurrence algorithms for computing option rewards with general results on the convergence of stochastic space-skeleton and tree approximations of option rewards are applied to a variety of models of multivariate modulated Markov price processes. The principal novelty of the presented results lies in the treatment of multivariate modulated Markov price processes and general pay-off functions, which can depend not only on the price but also on an additional stochastic modulating index component, and in the use of minimal smoothness conditions for transition probabilities and pay-off functions, compactness conditions for log-price processes, and rate-of-growth conditions for pay-off functions. The volume presents results on structural studies of optimal stopping domains, Monte Carlo based reward approximation algorithms, and convergence of rewards of American-type options for autoregressive and continuous time models, as well as the results of the corresponding experimental studies.
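To give a concrete feel for the backward recurrence algorithms mentioned in the description, here is a minimal sketch (not taken from the book) of backward induction on a discrete time binomial tree for an American-type put option. The CRR tree, the put pay-off, and all parameter values are illustrative assumptions only.

```python
# Minimal sketch: backward recurrence for an American-type option reward on a
# discrete time binomial (CRR) tree. Illustrative assumptions, not the book's code.
import math

def american_put_crr(s0, strike, r, sigma, maturity, n_steps):
    """Price an American put by backward induction on a CRR binomial tree."""
    dt = maturity / n_steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    disc = math.exp(-r * dt)              # one-step discount factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability

    # terminal pay-offs max(K - S_N, 0) over all nodes at maturity
    rewards = [max(strike - s0 * u**j * d**(n_steps - j), 0.0)
               for j in range(n_steps + 1)]

    # backward recurrence: reward = max(immediate pay-off, discounted continuation)
    for step in range(n_steps - 1, -1, -1):
        for j in range(step + 1):
            continuation = disc * (p * rewards[j + 1] + (1.0 - p) * rewards[j])
            exercise = max(strike - s0 * u**j * d**(step - j), 0.0)
            rewards[j] = max(exercise, continuation)

    return rewards[0]

if __name__ == "__main__":
    # illustrative parameter values only
    print(american_put_crr(s0=100.0, strike=100.0, r=0.03,
                           sigma=0.2, maturity=1.0, n_steps=200))
```

The book develops this backward recurrence idea in far more general settings (multivariate modulated Markov log-price processes, general pay-off functions, space-skeleton and tree approximations); the sketch above only illustrates the univariate binomial case.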
Table of contents
- American-Type Options
- De Gruyter Studies in Mathematics
- Title Page
- Copyright Page
- Preface
- Table of Contents
- 1 Reward approximations for autoregressive log-price processes (LPP)
- 1.1 Markov Gaussian LPP
- 1.1.1 Upper bounds for rewards of Markov Gaussian log-price processes with linear drift and bounded volatility
- 1.1.2 Space skeleton approximations for option rewards of Markov Gaussian log-price processes with linear drift and constant bounded coefficients
- 1.1.3 Convergence of option reward functions for space skeleton approximations for Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
- 1.1.4 Convergence of optimal expected rewards for space skeleton approximations of Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
- 1.2 Autoregressive LPP
- 1.2.1 Upper bounds for rewards of autoregressive log-price processes
- 1.2.2 Space-skeleton approximations for option rewards of autoregressive log-price processes
- 1.2.3 Convergence of option reward functions for space skeleton approximations for option rewards of autoregressive log-price processes
- 1.2.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive log-price processes
- 1.3 Autoregressive moving average LPP
- 1.3.1 Upper bounds for rewards of autoregressive moving average type log-price processes
- 1.3.2 Space skeleton approximations for option reward functions for autoregressive moving average log-price processes
- 1.3.3 Convergence of option reward functions for space skeleton approximations for option reward functions for autoregressive moving average log-price processes with Gaussian noise terms
- 1.3.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive moving average log-price processes
- 1.4 Modulated Markov Gaussian LPP
- 1.4.1 Upper bounds for rewards of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
- 1.4.2 Space-skeleton approximations for option rewards of modulated Markov Gaussian log-price processes with linear drift and constant diffusion coefficients
- 1.4.3 Convergence of option reward functions for space skeleton approximations for modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
- 1.4.4 Convergence of optimal expected rewards for space skeleton approximations of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients
- 1.5 Modulated autoregressive LPP
- 1.5.1 Upper bounds for rewards of modulated autoregressive type log-price processes
- 1.5.2 Space skeleton approximations for option rewards of modulated autoregressive log-price processes
- 1.5.3 Convergence of space skeleton approximations for option rewards of modulated autoregressive log-price processes with Gaussian noise terms
- 1.5.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive log-price processes
- 1.6 Modulated autoregressive moving average LPP
- 1.6.1 Upper bounds for rewards of mixed modulated autoregressive moving average type log-price processes with Gaussian noise terms
- 1.6.2 Space skeleton approximations for option rewards of modulated autoregressive moving average log-price processes
- 1.6.3 Convergence of space skeleton approximations for option rewards of modulated autoregressive moving average log-price processes
- 1.6.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive moving average log-price processes
- 2 Reward approximations for autoregressive stochastic volatility LPP
- 2.1 Nonlinear autoregressive stochastic volatility LPP
- 2.1.1 Upper bounds for rewards of nonlinear autoregressive stochastic volatility log-price processes
- 2.1.2 Space-skeleton approximations for option rewards of nonlinear autoregressive stochastic volatility log-price processes
- 2.1.3 Convergence of option reward functions for space skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes
- 2.1.4 Convergence of optimal expected rewards for space skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes
- 2.2 Autoregressive conditional heteroskedastic LPP
- 2.2.1 Upper bounds for rewards of autoregressive conditional heteroskedastic log-price processes
- 2.2.2 Space skeleton approximations for option rewards of autoregressive conditional heteroskedastic log-price processes
- 2.2.3 Convergence of option reward functions for space-skeleton approximations of autoregressive conditional heteroskedastic log-price processes
- 2.2.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive conditional heteroskedastic log-price processes
- 2.3 Generalized autoregressive conditional heteroskedastic LPP
- 2.3.1 Upper bounds for rewards of generalized autoregressive conditional heteroskedastic log-price processes
- 2.3.2 Space-skeleton approximations for option rewards of generalized autoregressive conditional heteroskedastic log-price processes
- 2.3.3 Convergence of option reward functions for generalized autoregressive conditional heteroskedastic log-price processes
- 2.3.4 Convergence of optimal expected rewards for space skeleton approximations of generalized autoregressive conditional heteroskedastic log-price processes
- 2.4 Modulated nonlinear autoregressive stochastic volatility LPP
- 2.4.1 Upper bound for rewards of modulated nonlinear autoregressive stochastic volatility log-price processes
- 2.4.2 Space-skeleton approximations for option rewards of modulated nonlinear autoregressive stochastic volatility log-price processes
- 2.4.3 Convergence of option reward functions for modulated nonlinear autoregressive stochastic volatility log-price processes
- 2.4.4 Convergence of optimal expected rewards for space skeleton approximations of modulated nonlinear autoregressive stochastic volatility log-price processes
- 2.5 Modulated autoregressive conditional heteroskedastic LPP
- 2.5.1 Space-skeleton approximations for modulated autoregressive conditional heteroskedastic log-price processes
- 2.5.2 Convergence of option reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
- 2.5.3 Convergence of reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
- 2.5.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
- 2.6 Modulated generalized autoregressive conditional heteroskedastic LPP
- 2.6.1 Upper bounds for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes
- 2.6.2 Space-skeleton approximations for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes
- 2.6.3 Convergence of option reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
- 2.6.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes
- 3 American-type options for continuous time Markov LPP
- 3.1 Markov LPP
- 3.2 LPP with independent increments
- 3.3 Diffusion LPP
- 3.4 American-type options for Markov LPP
- 3.4.1 American-type options for continuous time price processes
- 3.4.2 Optimal expected rewards, reward functions and optimal stopping times
- 3.4.3 Pay-off functions
- 3.4.4 Pay-off functions for call and put type options
- 3.4.5 Nonlinear pay-off functions for call- and put-type options
- 3.4.6 Pay-off functions for exchange of assets contracts
- 4 Upper bounds for option rewards for Markov LPP
- 4.1 Upper bounds for rewards for Markov LPP
- 4.1.1 Upper bounds for supremums of log-price processes
- 4.1.2 Upper bounds for reward functions
- 4.1.3 Upper bounds for optimal expected rewards
- 4.1.4 Asymptotically uniform upper bounds for supremums of log-price processes
- 4.1.5 Asymptotically uniform upper bounds for reward functions
- 4.1.6 Asymptotically uniform upper bounds for optimal expected rewards
- 4.2 Asymptotically uniform conditions of compactness for Markov LPP
- 4.3 Upper bounds for rewards for LPP with independent increments
- 4.4 Upper bounds for rewards for diffusion LPP
- 4.4.1 Skeleton approximations for diffusion log-price processes
- 4.4.2 Upper bounds for option rewards for diffusion log-price processes with bounded characteristics and their time-skeleton approximations
- 4.4.3 Upper bounds for option rewards for diffusion log-price processes with bounded characteristics and their martingale-type approximations
- 4.4.4 Upper bounds for option rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations
- 4.5 Upper bounds for rewards for mean-reverse diffusion LPP
- 5 Time-skeleton reward approximations for Markov LPP
- 5.1 Lipschitz-type conditions for pay-off functions
- 5.1.1 Asymptotically uniform Lipschitz-type conditions for pay-off functions expressed in terms of price arguments
- 5.1.2 Asymptotically uniform Lipschitz-type conditions for pay-off functions expressed in terms of log-price arguments
- 5.1.3 Asymptotically uniform Lipschitz-type conditions and rates of growth for pay-off functions
- 5.1.4 Weakened Lipschitz-type conditions for pay-off functions
- 5.2 Time-skeleton approximations for optimal expected rewards
- 5.3 Time-skeleton approximations for reward functions
- 5.4 Time-skeleton reward approximations for LPP with independent increments
- 5.5 Time-skeleton reward approximations for diffusion LPP
- 5.5.1 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their time-skeleton approximations
- 5.5.2 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their martingale-type approximations
- 5.5.3 Time-skeleton reward approximations for optimal expected rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations
- 6 Time-space-skeleton reward approximations for Markov LPP
- 6.1 Time-space-skeleton reward approximations for Markov LPP
- 6.1.1 Convergence of time-skeleton reward approximations based on a given partition of time interval, for multivariate modulated Markov log-price processes
- 6.1.2 Convergence of time-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate modulated Markov log-price processes
- 6.1.3 Time-space-skeleton reward approximations for multivariate modulated Markov log-price processes
- 6.1.4 Convergence of time-space-skeleton reward approximations based on a given partition of time interval for multivariate modulated Markov log-price processes
- 6.1.5 Convergence of time-space-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate modulated Markov log-price processes
- 6.2 Time-space-skeleton reward approximations for LPP with independent increments
- 6.2.1 Convergence of time-skeleton reward approximations based on a given partition of time interval, for multivariate log-price processes with independent increments
- 6.2.2 Convergence of time-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate log-price processes with independent increments
- 6.2.3 Time-space-skeleton reward approximations with fixed space-skeleton structure, for multivariate log-price processes with independent increments
- 6.2.4 Time-space-skeleton reward approximations with an additive space-skeleton structure, for multivariate log-price processes with independent increments
- 6.2.5 Convergence of time-space-skeleton reward approximations for a given partition of time interval, for multivariate log-price processes with independent increments
- 6.2.6 Convergence of time-space-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate log-price processes with independent increments
- 6.3 Time-space-skeleton reward approximations for diffusion LPP
- 6.3.1 Convergence of time-skeleton reward approximations for multivariate diffusion log-price processes
- 6.3.2 Convergence of martingale-type reward approximations for diffusion type log-price processes with bounded characteristics
- 6.3.3 Convergence of trinomial-tree reward approximations for univariate diffusion log-price processes
- 6.3.4 Time-space-skeleton reward approximations for diffusion log-price processes
- 6.3.5 Convergence of time-space-skeleton reward approximations with fixed skeleton structure based on arbitrary partition of time interval, for diffusion log-price processes
- 7 Convergence of option rewards for continuous time Markov LPP
- 7.1 Convergence of rewards for continuous time Markov LPP
- 7.2 Convergence of rewards for LPP with independent increments
- 7.2.1 Convergence of rewards for multivariate log-price processes with independent increments
- 7.2.2 Convergence of rewards for time-skeleton approximations of multivariate log-price processes with independent increments
- 7.2.3 Convergence of rewards for time-space approximations of multivariate log-price processes with independent increments
- 7.2.4 Convergence of time-space-skeleton reward approximations for multivariate Lévy log-price processes
- 7.3 Convergence of rewards for univariate Gaussian LPP with independent increments
- 7.3.1 Convergence of binomial-tree reward approximations for univariate Gaussian log-price processes with independent increments
- 7.3.2 Fitting of parameters for binomial-tree reward approximations for univariate Wiener log-price processes
- 7.3.3 Fitting of parameters for binomial-tree reward approximations for univariate inhomogeneous in time Gaussian log-price processes
- 7.3.4 Convergence of time-space skeleton reward approximations for univariate Gaussian log-price processes with independent increments
- 7.4 Convergence of rewards for multivariate Gaussian LPP with independent increments
- 7.4.1 Convergence of trinomial-tree reward approximations for multivariate Gaussian log-price processes
- 7.4.2 Fitting of parameters for binomial-tree reward approximations for multivariate Wiener log-price processes
- 7.4.3 Fitting of parameters for trinomial-tree reward approximations for multivariate inhomogeneous in time Gaussian log-price processes with independent increments
- 7.4.4 Convergence of time-space skeleton reward approximations for multivariate Gaussian log-price processes with independent increments
- 8 Convergence of option rewards for diffusion LPP
- 8.1 Convergence of rewards for time-skeleton approximations of diffusion LPP
- 8.2 Convergence of rewards for martingale-type approximations of diffusion LPP
- 8.3 Convergence of rewards for trinomial-tree approximations of diffusion LPP
- 8.4 Reward approximations for mean-reverse diffusion LPP
- 8.4.1 Trinomial tree reward approximations for mean-reverse diffusion log-price processes
- 8.4.2 Approximation of rewards for diffusion log-price processes based on space truncation of drift and diffusion functional coefficients
- 8.4.3 Asymptotic reward approximations for diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients
- 8.4.4 Asymptotic reward approximations for mean-reverse diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients
- 9 European, knockout, reselling and random pay-off options
- 9.1 Reward approximations for European-type options
- 9.1.1 Reward approximation for European- and American-type options for multivariate modulated Markov log-price processes
- 9.1.2 Convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes
- 9.1.3 Other results about convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes
- 9.2 Reward approximations for knockout American-type options
- 9.2.1 Knockout American-type options
- 9.2.2 Imbedding into the model of ordinary American-type options
- 9.2.3 Imbedding of discrete time knockout American-type options into the model of ordinary discrete time American-type options
- 9.2.4 Convergence of reward approximations for knockout American-type options for multivariate modulated Markov log-price processes
- 9.3 Reward approximations for reselling options
- 9.4 Reward approximations for American-type options with random pay-off
- 10 Results of experimental studies
- 10.1 Binomial- and trinomial-tree reward approximations for discrete time models
- 10.2 Skeleton reward approximations for discrete time models
- 10.2.1 Skeleton reward approximations for log-price processes represented by random walks
- 10.2.2 Experimental results for space-skeleton reward approximations for the Gaussian and compound Gaussian models
- 10.2.3 Rate of convergence for skeleton reward approximation models with pay-off functions generated by a compound Poisson-gamma process
- 10.3 Reward approximations for continuous time models
- 10.4 Reward approximation algorithms for Markov LPP
- 10.4.1 A theoretical scheme for three-steps time-space-skeleton reward approximations for Markov log-price processes
- 10.4.2 Three-steps time-space-skeleton reward approximation algorithms for Markov log-price processes
- 10.4.3 Modified three-steps time-space-skeleton reward approximation algorithms for Markov log-price processes
- Bibliographical Remarks
- Bibliography
- Index
- De Gruyter Studies in Mathematics
Product information
- Title: American-Type Options
- Author(s):
- Release date: March 2015
- Publisher(s): De Gruyter
- ISBN: 9783110389906