CHAPTER 11
QUASI-NEWTON METHODS
11.1 Introduction
Newton’s method is one of the more successful algorithms for optimization. If it converges, it has a quadratic order of convergence. However, as pointed out before, for a general nonlinear objective function, convergence to a solution cannot be guaranteed from an arbitrary initial point x^(0). In general, if the initial point is not sufficiently close to the solution, then the algorithm may not possess the descent property [i.e., f(x^(k+1)) < f(x^(k))] for some k.
Recall that the idea behind Newton’s method is to locally approximate the function f being minimized, at every iteration, by a quadratic function. The minimizer for the quadratic approximation is used as the starting point for the next iteration. This leads to Newton’s recursive algorithm

x^(k+1) = x^(k) − F(x^(k))^(−1) g^(k),

where F(x^(k)) is the Hessian of f at x^(k) and g^(k) = ∇f(x^(k)) is the gradient.
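To make the recursion concrete, here is a minimal sketch of the pure Newton iteration in Python with NumPy; the function name newton and the quartic test function are illustrative choices, not taken from the text.

import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    # Newton's recursion x <- x - F(x)^(-1) g(x), stopped when the
    # gradient is small.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Solve F(x) d = g instead of forming the inverse explicitly.
        d = np.linalg.solve(hess(x), g)
        x = x - d
    return x

# Illustrative test: f(x) = x1^4 + x2^2, minimized at the origin.
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])
print(newton(grad, hess, [1.0, 1.0]))  # converges toward [0, 0]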
We may try to guarantee that the algorithm has the descent property by modifying the original algorithm as follows:

x^(k+1) = x^(k) − α_k F(x^(k))^(−1) g^(k),
where α_k is chosen to ensure that

f(x^(k+1)) < f(x^(k)).
For example, we may choose α_k = arg min_{α ≥ 0} f(x^(k) − α F(x^(k))^(−1) g^(k)) (see Theorem 9.2). We can then determine an appropriate value of α_k by performing a line search in the direction −F(x^(k))^(−1) g^(k).
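As a sketch of the modified algorithm (continuing the Python example above), the code below damps the Newton step by a simple step-halving rule until the descent condition f(x^(k+1)) < f(x^(k)) holds; this backtracking rule is an assumed stand-in for the exact minimization over α ≥ 0 described in the text, and the one-variable test function is again an illustrative choice.

def damped_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    # Modified Newton iteration x <- x - alpha * F(x)^(-1) g(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), g)  # step direction is -d
        alpha, fx = 1.0, f(x)
        # Halve alpha until the descent property f(x - alpha*d) < f(x)
        # holds (a stand-in for the exact line search over alpha >= 0).
        while f(x - alpha * d) >= fx and alpha > 1e-12:
            alpha *= 0.5
        x = x - alpha * d
    return x

# Illustrative test: f(x) = sqrt(1 + x^2), minimized at x* = 0.
f    = lambda x: float(np.sqrt(1 + x[0]**2))
grad = lambda x: np.array([x[0] / np.sqrt(1 + x[0]**2)])
hess = lambda x: np.array([[(1 + x[0]**2) ** -1.5]])
print(damped_newton(f, grad, hess, [2.0]))  # approx. [0.]

On this function the undamped recursion satisfies x^(k+1) = −(x^(k))^3 and so diverges from x^(0) = 2 (producing −8, 512, …), whereas the damped iteration backtracks to α = 1/4 on the first step, thereafter accepts full Newton steps, and converges to the minimizer in a handful of iterations.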