3  Dynamic Optimization Techniques and Optimal Control

3.1  Introduction

Dynamic optimization techniques are a development of the classical optimization (mathematical programming) techniques that allows time-variant problems to be handled. Optimization over time in a single- or multi-stage decision process is generally formulated as dynamic programming (DP), involving a large number of variables across the different stages [1,2,3,5,7].
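As a brief illustration of the multi-stage formulation (the state, control, and cost symbols below are chosen here for illustration and are not taken from the chapter), a finite-horizon DP problem and its backward recursion can be sketched as

\[
\min_{u_0,\dots,u_{N-1}} \; J = \phi(x_N) + \sum_{k=0}^{N-1} L_k(x_k, u_k),
\qquad x_{k+1} = f_k(x_k, u_k),
\]

which DP solves stage by stage through the recursion

\[
V_N(x) = \phi(x), \qquad
V_k(x) = \min_{u} \big[ L_k(x, u) + V_{k+1}\big(f_k(x, u)\big) \big],
\quad k = N-1, \dots, 0,
\]

where \(V_k(x)\) is the optimal cost-to-go from state \(x\) at stage \(k\).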

Here, an overview of optimal control, dynamic programming, and underlying concepts such as the generalized Hamilton-Jacobi equation, Pontryagin's principle, and Bellman's optimality conditions is presented [3,4]. Extension of DP to handle nondeterministic or random processes has led to the development of stochastic DP ...
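In continuous time, and keeping the illustrative notation introduced above, these conditions take the standard forms sketched below (a textbook statement rather than a derivation from this chapter). Bellman's optimality condition leads to the Hamilton-Jacobi-Bellman equation for the value function \(V(x,t)\),

\[
-\frac{\partial V}{\partial t}(x,t) = \min_{u} \Big[ L(x,u,t) + \frac{\partial V}{\partial x}(x,t)\, f(x,u,t) \Big],
\qquad V(x,T) = \phi(x),
\]

while Pontryagin's principle requires, along an optimal trajectory with costate \(\lambda(t)\) and Hamiltonian \(H(x,u,\lambda,t) = L(x,u,t) + \lambda^{\top} f(x,u,t)\),

\[
\dot{\lambda} = -\frac{\partial H}{\partial x},
\qquad
u^{*}(t) = \arg\min_{u} H\big(x^{*}(t), u, \lambda(t), t\big).
\]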
