3 Unconstrained Optimization
3.1 Introduction
This chapter deals with solution techniques for unconstrained optimization problems in multiple variables. In practice, most optimization problems are constrained, and purely unconstrained problems are few. One example of an unconstrained optimization problem is data fitting, where one fits a curve to measured data. However, the algorithms presented in this chapter can be used to solve constrained optimization problems as well. This is done by suitably modifying the objective function to include a penalty term when constraints are violated.
The solution methods for unconstrained optimization problems can be broadly classified into gradient-based and non-gradient-based search methods. As the name suggests, gradient-based methods require gradient information in determining the search direction. The gradient-based methods discussed in this chapter are the steepest descent, Davidon–Fletcher–Powell (DFP), Broyden–Fletcher–Goldfarb–Shanno (BFGS), Newton, and Levenberg–Marquardt methods. The search direction computed by these methods uses gradient information, Hessian information, or a combination of the two. Some methods also construct an approximation of the Hessian matrix. Once the search direction is identified, one needs to evaluate how far to move in that direction so as to minimize the function. This is a one-dimensional problem. We will be using the golden section method, as discussed in Chapter 2, for solving this one-dimensional problem. Non-gradient-based methods do not require first- or second-derivative information in finding the search direction. The search direction is guided by function evaluations as well as by search directions computed in earlier iterations. Powell's conjugate direction method, a non-gradient-based method, is elaborated in this chapter because it is much superior (it shows quadratic convergence) to other non-gradient methods such as the simplex and pattern search methods. The simplex method (Nelder–Mead algorithm) is also discussed in Section 3.4.9 on the direct search method. In the last section, Powell's method is used to solve a complicated motion design problem of a robot. The road map of this chapter is shown in Figure 3.1.

FIGURE 3.1
Road map of Chapter 3: unconstrained optimization with multivariable optimization methods, comprising gradient-based methods (steepest descent, Newton's, modified Newton's, Marquardt's, conjugate gradient, DFP, and BFGS methods), direct search methods (Powell's method and the Nelder–Mead algorithm), a test problem (spring system), additional test functions (Rosenbrock's, Wood's, quadratic, and nonlinear functions), and an application to robotics.
For a single-variable function, it was discussed earlier that the derivative of the function vanishes at the optimum and the second derivative is greater than zero at the minimum. The same can be extended to a multivariable function. The first-order necessary condition for x* to be a minimum is that
∇f(x*) = 0 (3.1)
and the second-order sufficient condition is that the Hessian matrix H at x* is positive definite (xᵀHx > 0 for every x ≠ 0). To ensure this, the eigenvalues of H are to be positive. Consider a two-variable function
f(x) = x₁² + x₂² − 2x₁ (3.2)
The gradient is
∇f(x) = [2x₁ − 2, 2x₂]ᵀ (3.3)
Equating the gradient to zero, the optimum is at (1, 0). For this function the Hessian is positive definite (xᵀHx > 0 for every x ≠ 0). Hence, the point (1, 0) is the minimum of f(x). The surface-contour plot of this function is shown in Figure 3.2.

FIGURE 3.2
Surface-contour plot of the function, with the minimum point marked.
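These optimality checks are easy to reproduce numerically. The short MATLAB sketch below is illustrative only (the function handle and the hand-derived Hessian are written out here; they are not part of the chapter's code): it evaluates the gradient of Equation 3.2 at (1, 0) and confirms that the eigenvalues of the Hessian are positive.

% f(x) = x1^2 + x2^2 - 2*x1 (Equation 3.2)
f = @(x) x(1)^2 + x(2)^2 - 2*x(1);
% Gradient of Equation 3.3 and the (constant) Hessian, derived by hand
gradf = @(x) [2*x(1) - 2; 2*x(2)];
H = [2 0; 0 2];
xstar = [1; 0];
disp(gradf(xstar))   % [0; 0], so the first-order condition holds
disp(eig(H))         % both eigenvalues are 2 > 0, so H is positive definite
disp(f(xstar))       % minimum value f(1, 0) = -1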
For a two-variable function
f(x) = x₁² − x₂² (3.4)
the optimum is at (0, 0) from the first-order condition. Checking the second-order condition, we find that the Hessian is indefinite (xᵀHx can be positive, negative, or zero depending on x). Therefore, the point (0, 0) represents a saddle point (see Figure 3.3).

FIGURE 3.3
Surface-contour plot of the function with saddle point.
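The same kind of check (again only an illustrative sketch, with the Hessian written out by hand) exposes the saddle point of Equation 3.4: the Hessian has one positive and one negative eigenvalue, so it is indefinite.

% f(x) = x1^2 - x2^2 (Equation 3.4); the Hessian is constant
H = [2 0; 0 -2];
disp(eig(H))           % eigenvalues 2 and -2: mixed signs, H is indefinite
% x'*H*x changes sign with the direction of x, so (0, 0) is a saddle point
disp([1 0]*H*[1; 0])   %  2 (positive along x1)
disp([0 1]*H*[0; 1])   % -2 (negative along x2)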
3.2 Unidirectional Search
The unidirectional search refers to minimizing the value of a multivariable function along a specified direction. For example, if xᵢ is the initial starting point of the design variables for minimizing a multivariable function and Sᵢ
is the search direction, then we need to determine a scalar quantity α such
that the function
f(α) = f(xᵢ + αSᵢ) (3.5)
is minimized. The value of α at which this function reaches a minimum is given by α*. This is a one-dimensional optimization problem, and we can use the golden section technique to minimize this function. The golden section method is modified to handle multivariable functions, and the MATLAB® code golden_funct1.m is given.
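The listing of golden_funct1.m is not reproduced here; the following stand-in (with an assumed file name golden_search_sketch.m, an assumed bracket [a, b] for α, and a fixed tolerance, none of which come from the book) shows how the golden section method of Chapter 2 can be wrapped around the one-dimensional function of Equation 3.5.

function [alpha_star, f_star] = golden_search_sketch(f, xi, Si, a, b)
% Minimize f(xi + alpha*Si) over alpha in the bracket [a, b]
% using the golden section method (assumes the bracket contains the minimum).
tau = (sqrt(5) - 1)/2;             % golden section ratio, about 0.618
phi = @(alpha) f(xi + alpha*Si);   % one-dimensional function of Equation 3.5
c = b - tau*(b - a);
d = a + tau*(b - a);
while (b - a) > 1e-6
    if phi(c) < phi(d)
        b = d;                     % minimum lies in [a, d]
    else
        a = c;                     % minimum lies in [c, b]
    end
    c = b - tau*(b - a);
    d = a + tau*(b - a);
end
alpha_star = (a + b)/2;
f_star = phi(alpha_star);
end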
Let us perform a unidirectional search on the Rosenbrock function given by
f(x) = 100(x₂ − x₁²)² + (1 − x₁)² (3.6)
with different starting values of x and with different search directions. The results are summarized in Table 3.1. It is observed from this table that the minimum attainable value of f along a line depends strongly on both the starting point and the search direction; for example, from (2, 2) the direction (1, 1) with α* = −1 reaches the minimum of the function at (1, 1), whereas from (3, 0.5) neither direction reaches the minimum.
TABLE 3.1
Unidirectional Search for a Multivariable Function

xᵢ          f(xᵢ)    Sᵢ        α*         f(α*)
(3, 0.5)    7229     (2, 1)    −1.3731    88.45
(3, 0.5)    7229     (2, 3)    −1.1249    1181.7
(1, 1)      0        (2, 2)    0          0
(2, 2)      401      (1, 1)    −1         0
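The first row of Table 3.1 can be reproduced approximately with the stand-in search sketched earlier, assuming a bracket of [-5, 5] for α (the bracket is a guess, not a value from the book).

% Rosenbrock function of Equation 3.6
f = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
xi = [3; 0.5];
Si = [2; 1];
fprintf('f(x_i) = %g\n', f(xi));    % 7229
[alpha_star, f_star] = golden_search_sketch(f, xi, Si, -5, 5);
fprintf('alpha* = %.4f, f(alpha*) = %.2f\n', alpha_star, f_star);
% Table 3.1 reports alpha* = -1.3731 and f(alpha*) = 88.45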