
CHAPTER 11

QUASI-NEWTON METHODS

11.1 Introduction

Newton’s method is one of the more successful algorithms for optimization. If it converges, it has a quadratic order of convergence. However, as pointed out before, for a general nonlinear objective function, convergence to a solution cannot be guaranteed from an arbitrary initial point $x^{(0)}$. In general, if the initial point is not sufficiently close to the solution, then the algorithm may not possess the descent property [i.e., we may have $f(x^{(k+1)}) \geq f(x^{(k)})$ for some $k$].

Recall that the idea behind Newton’s method is to locally approximate the function f being minimized, at every iteration, by a quadratic function. The minimizer for the quadratic approximation is used as the starting point for the next iteration. This leads to Newton’s recursive algorithm

$$x^{(k+1)} = x^{(k)} - F(x^{(k)})^{-1} g^{(k)},$$

where $g^{(k)} = \nabla f(x^{(k)})$ is the gradient and $F(x^{(k)})$ is the Hessian of $f$ at $x^{(k)}$.
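To make the recursion concrete, the following is a minimal Python sketch of a single Newton step. The callables grad_f and hess_f, which return the gradient $g^{(k)}$ and the Hessian $F(x^{(k)})$, are illustrative names and not from the text.

```python
import numpy as np

def newton_step(x, grad_f, hess_f):
    """One Newton iteration: x_new = x - F(x)^{-1} g(x).

    grad_f and hess_f are assumed callables returning the gradient
    vector and Hessian matrix of the objective f at x.
    """
    g = grad_f(x)
    F = hess_f(x)
    # Solve the linear system F d = g instead of forming F^{-1} explicitly;
    # this is cheaper and numerically more stable.
    d = np.linalg.solve(F, g)
    return x - d
```

Solving the linear system rather than inverting the Hessian is the usual implementation choice; both give the same iterate in exact arithmetic.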

We may try to guarantee that the algorithm has the descent property by modifying the original algorithm as follows:

$$x^{(k+1)} = x^{(k)} - \alpha_k F(x^{(k)})^{-1} g^{(k)},$$

where αk is chosen to ensure that

$$f(x^{(k+1)}) < f(x^{(k)}).$$

For example, we may choose $\alpha_k = \arg\min_{\alpha \geq 0} f\big(x^{(k)} - \alpha F(x^{(k)})^{-1} g^{(k)}\big)$ (see Theorem 9.2). We can then determine an appropriate value of $\alpha_k$ by performing a line search in the ...
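Putting the modification together, here is a sketch of the damped iteration. Because the exact minimization over $\alpha \geq 0$ calls for a one-dimensional search, this sketch substitutes a simple backtracking loop that only enforces the descent condition $f(x^{(k+1)}) < f(x^{(k)})$; the function names and tolerances are illustrative, not from the text.

```python
import numpy as np

def damped_newton(x0, f, grad_f, hess_f, tol=1e-8, max_iter=100):
    """Modified Newton iteration x_{k+1} = x_k - alpha_k F(x_k)^{-1} g_k.

    A backtracking loop stands in for the exact line search
    alpha_k = argmin_{alpha >= 0} f(x_k - alpha F(x_k)^{-1} g_k).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:        # stop when the gradient is small
            break
        d = np.linalg.solve(hess_f(x), g)  # Newton direction F^{-1} g
        alpha, fx = 1.0, f(x)
        # Halve alpha until the step actually decreases f.
        while f(x - alpha * d) >= fx and alpha > 1e-12:
            alpha *= 0.5
        x = x - alpha * d
    return x
```

For a strictly convex quadratic, the full step $\alpha_k = 1$ already minimizes $f$ along the Newton direction, so the backtracking loop exits immediately and the method reduces to the pure Newton recursion above.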
