2
1-D Optimization Algorithms
2.1 Introduction
The one-dimensional (1-D) optimization problem refers to an objective function with one variable. In practice, optimization problems with many variables are complex, and rarely does one find a problem with a single variable. However, 1-D optimization algorithms form the basic building blocks for multivariable algorithms. As these algorithms form a subproblem of multivariable optimization problems, numerous methods (or algorithms) have been reported in the literature, each with some unique advantage over the others. These algorithms are classified into gradient-based and non–gradient-based algorithms. Some popular algorithms are discussed in this chapter.
As an example, a single-variable objective function could be

f(x) = 2x² − 2x + 8
This is an unconstrained optimization problem in which x has to be determined so as to minimize f(x). If we have to restrict x within a ≤ x ≤ b, where a and b are real numbers, then it becomes a constrained optimization problem. If the function f(x) is either continuously increasing or continuously decreasing between two points a and b, then it is referred to as a monotonic function (see Figure 2.1). In a unimodal function, the function is monotonic on either side of its minimum point (x*). The function f(x) = 2x² − 2x + 8 is plotted in Figure 2.2, in which we observe that f(x) is a unimodal function.
Using the property of the unimodal function that it continuously decreases
or increases on either side of the minimum point, the single-variable search
algorithms can be devised in such a way that they eliminate certain regions
of the function where the minimum is not located.
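To make this region-elimination idea concrete, the following is a minimal MATLAB sketch (an illustration, not code from the text) of a single elimination step applied to the example function above. Two interior points are evaluated, and the subinterval that cannot contain the minimum of a unimodal function is discarded; repeating such steps underlies methods such as the golden section search discussed later in this chapter.

% A minimal sketch (not from the text): one region-elimination step
% for the unimodal example f(x) = 2x^2 - 2x + 8.
f  = @(x) 2*x.^2 - 2*x + 8;   % objective function, minimum at x* = 0.5
a  = -5;  b = 5;              % initial interval known to contain x*
x1 = a + (b - a)/3;           % two interior points with x1 < x2
x2 = b - (b - a)/3;
if f(x1) < f(x2)
    b = x2;    % minimum cannot lie in (x2, b]; eliminate that region
else
    a = x1;    % minimum cannot lie in [a, x1); eliminate that region
end
fprintf('Reduced interval: [%g, %g]\n', a, b)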
In the next section, a test problem in a solar energy system is defined. Both
gradient-based and direct search methods are discussed and tested for this
problem. Subsequently, these solution techniques will also be tested on some
more standard optimization problems. The performances of these methods
are compared toward the end of the chapter. The road map of this chapter is
given in Figure 2.3.
FIGURE 2.1
Monotonic increasing (f(a) < f(b)) and decreasing (f(a) > f(b)) functions on an interval [a, b].
FIGURE 2.2
Unimodal function (f(x) = 2x² − 2x + 8 plotted for −5 ≤ x ≤ 5).
FIGURE 2.3
Road map of Chapter 2: 1-D optimization algorithms are grouped into gradient-based and direct search methods (golden section, bisection, Newton–Raphson, secant, cubic polynomial fit, and other methods); these solution techniques are applied to a test problem (solar energy), and the solution methods are then compared.
2.2 Test Problem
Before we discuss the optimization algorithms, let us set up a problem on which we will be testing these algorithms. The solar energy problem is defined in Problem 8 of Chapter 1. In this cost minimization problem, the cost is a function of the volume of the storage system and the surface area of the collector. The volume and surface area are functions of the design variable, temperature T. Let us rewrite the cost function in terms of T alone as

U = 204,165.5/(330 − 2T) + 10,400/(T − 20)   (2.1)
The variable T is restricted between 40°C and 90°C. The function U is plotted as a function of T in Figure 2.4. The minimum occurs at T* = 55.08, and the minimum value of the function is U* = 1225.166. Observe from the figure that the cost function is unimodal. A MATLAB® code, exhaustive.m, is used to plot the cost function by varying the design variable T from 40 to 90 in steps of 0.01. One may ask why, if this method is able to locate the minimum and is also simple, there is a need to discuss other algorithms. It may be noted that the number of function evaluations required by this particular method is (90 − 40)/0.01 = 5000. For more complex problems, the time required for each function evaluation is at a premium, and it may not be practical to evaluate the function so many times. This necessitates exploring algorithms that require fewer function evaluations to reach the minimum of a function.
On executing this code, the output obtained is
Minimum cost = 1225.17
Occurs at T = 55.08
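The listing of exhaustive.m is not reproduced here; the following is a minimal MATLAB sketch of such an exhaustive search, assuming the cost function of Equation 2.1 and the grid of T values described above.

% Minimal sketch of an exhaustive (grid) search for the test problem;
% this is an illustration, not the book's exhaustive.m listing.
U = @(T) 204165.5./(330 - 2*T) + 10400./(T - 20);   % Equation 2.1
T = 40:0.01:90;                % design variable varied in steps of 0.01
cost = U(T);                   % evaluate the cost at every grid point
[Umin, idx] = min(cost);       % pick the smallest cost on the grid
fprintf('Minimum cost = %.2f\n', Umin)
fprintf('Occurs at T = %.2f\n', T(idx))
plot(T, cost), xlabel('T'), ylabel('U')   % reproduces Figure 2.4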
FIGURE 2.4
Cost function U plotted against T (40 ≤ T ≤ 90) for the test problem.