6 Optimization Methods
We now turn our attention to numerical methods that can be used to solve different classes of optimization problems. The methods are mostly iterative: given some initial point x₀, they generate a sequence of points x₁, x₂, … that converges to a local or global minimum. Such methods date back to Isaac Newton and Carl Friedrich Gauss. We will start by reviewing some basic principles and properties that we will make use of throughout this chapter. We then discuss first-order methods for unconstrained optimization, which are methods that make use of first-order derivatives of the objective function. Second-order methods require that the Hessian of the objective function exists and is available, and we will see that the use of second-order information can dramatically reduce the number of iterations required to find a solution. However, this typically comes at the expense of more costly iterations, and we will explore the trade-off between the cost per iteration and the number of iterations through the lens of variable metric methods, which use first-order derivatives to approximate second-order derivatives. We will also consider methods for nonlinear least-squares ...
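To make the distinction between first-order and second-order iterations concrete, here is a minimal sketch (not taken from the book) that applies a fixed-step gradient descent and a Newton iteration to a simple quadratic objective; the matrix Q, the vector b, the step size alpha, and the iteration count are illustrative assumptions.

```python
# Minimal sketch: first-order (gradient descent) vs. second-order (Newton)
# iterations on the quadratic f(x) = 0.5 * x^T Q x - b^T x,
# whose gradient is Q x - b and whose Hessian is the constant matrix Q.
import numpy as np

Q = np.array([[3.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite Hessian (assumed for illustration)
b = np.array([1.0, -2.0])

def grad(x):
    return Q @ x - b          # first-order information

def hess(x):
    return Q                  # second-order information

x_gd = np.zeros(2)            # gradient descent iterate
x_nt = np.zeros(2)            # Newton iterate
alpha = 0.1                   # fixed step size for gradient descent

for k in range(50):
    x_gd = x_gd - alpha * grad(x_gd)                          # cheap first-order step
    x_nt = x_nt - np.linalg.solve(hess(x_nt), grad(x_nt))     # costlier second-order step

x_star = np.linalg.solve(Q, b)    # exact minimizer, for comparison
print("gradient descent:", x_gd)
print("Newton's method: ", x_nt)
print("exact minimizer: ", x_star)
```

On a quadratic objective the Newton iteration reaches the minimizer in a single step, whereas gradient descent needs many cheap steps; each Newton step, however, requires forming and solving a linear system with the Hessian, which is exactly the trade-off between cost per iteration and number of iterations described above.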