7 Calculus of Variations
So far we have discussed optimization when the variables belong to finite-dimensional vector spaces. However, it is often of interest to optimize over infinite-dimensional vector spaces as well. The simplest case is when the variable is a real-valued function of a real variable. This has applications in optimal control in continuous time, where the optimal control signal is a real-valued function of time. Another important application is the derivation of probability density functions from the principle of maximizing entropy subject to moment constraints; in this case, the variable is the probability density function itself, and its argument is often a vector. The theory of how to solve these types of optimization problems is called the calculus of variations. Its origins go back to Newton's minimal resistance problem, and major contributions were made by Euler and Lagrange. The generalization to optimal control was made by Pontryagin. We will present the theory in this general form, but we will not be able to prove all of the results. The interested reader is referred to the vast literature on optimal control for most of the proofs, especially for the most general results. However, we will provide proofs for some special cases to build intuition.
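To make the maximum-entropy application concrete, consider the classical textbook instance (stated here only for illustration; the specific formulation treated later in the chapter may differ): among all densities on $\mathbb{R}$ with a given mean $\mu$ and variance $\sigma^2$, the differential entropy is maximized by the Gaussian. The problem reads

$$
\begin{aligned}
\underset{p}{\text{maximize}}\quad & -\int_{-\infty}^{\infty} p(x)\ln p(x)\,dx \\
\text{subject to}\quad & \int_{-\infty}^{\infty} p(x)\,dx = 1,\qquad
\int_{-\infty}^{\infty} x\,p(x)\,dx = \mu,\qquad
\int_{-\infty}^{\infty} (x-\mu)^2\,p(x)\,dx = \sigma^2,
\end{aligned}
$$

and its optimal solution is $p^\star(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$. Note that the decision variable here is a function, not a finite-dimensional vector, which is exactly the setting this chapter develops.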
7.1 Extremum of Functionals
We consider a normed linear space of continuously differentiable real-valued functions defined on $[a, b]$, which we denote by $\mathcal{C}^1[a, b]$. The norm is defined as

$$
\|x\| = \max_{a \le t \le b} |x(t)| + \max_{a \le t \le b} |\dot{x}(t)|.
$$
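As a quick numerical sanity check of this norm, here is a minimal sketch, assuming the standard $\mathcal{C}^1$ norm written above (the original symbols did not survive extraction, so the interval $[a, b]$ and the norm expression are reconstructions); a grid-based maximum only approximates the true supremum:

```python
import numpy as np

def c1_norm(x, dx, a=0.0, b=1.0, n=1001):
    """Approximate the C^1[a, b] norm  max|x(t)| + max|x'(t)|
    by evaluating on a uniform grid (an approximation of the
    supremum, not an exact value)."""
    t = np.linspace(a, b, n)
    return np.max(np.abs(x(t))) + np.max(np.abs(dx(t)))

# Example: x(t) = sin(t) on [0, 1].
# max|sin| = sin(1) and max|cos| = 1, so the norm is sin(1) + 1.
print(c1_norm(np.sin, np.cos, a=0.0, b=1.0))  # ~1.8415
```

The two max terms matter: two functions can be uniformly close in value while their derivatives differ greatly, and the $\mathcal{C}^1$ norm treats such functions as far apart.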
We then ...