6 Constrained Optimization
6.1 Introduction
Invariably, all optimization problems carry constraints, and examples can be
given from any area one can think of. The supply of a product is constrained
by the capacity of a machine. The trajectory of a rocket is constrained by the
final target as well as the maximum aerodynamic load it can carry. The range
of an aircraft is constrained by its payload, fuel capacity, and its aerodynamic
characteristics. So how does a constrained optimization problem differ from
an unconstrained problem? In constrained optimization problems, the feasible
region is restricted by the presence of constraints. This makes them more
challenging: for a multivariable problem with several nonlinear constraints,
arriving at any feasible point at all can be a daunting task.
The constrained optimization problem can be mathematically stated as

Minimize
    f(x)    (6.1)
subject to
    g_i(x) ≤ 0    i = 1, 2, …, m < n    (6.2)
    h_j(x) = 0    j = 1, 2, …, r < n    (6.3)
    x_l ≤ x ≤ x_u

where x is a vector of n design variables given by

    x = (x_1, x_2, …, x_n)^T
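A problem in this general form can be solved numerically with an off-the-shelf solver. The following sketch (not from the book) uses SciPy's SLSQP method on a made-up instance; the particular objective, constraints, and bounds are illustrative assumptions only:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of the general form (6.1)-(6.3):
#   minimize  f(x) = (x1 - 2)^2 + (x2 - 3)^2
#   subject to g(x) = x1^2 + x2^2 - 25 <= 0   (inequality constraint)
#              h(x) = x1 + x2 - 4 = 0          (equality constraint)
#   with bounds x_l <= x <= x_u
f = lambda x: (x[0] - 2)**2 + (x[1] - 3)**2

constraints = [
    # SciPy expects inequality constraints as fun(x) >= 0, so g(x) <= 0 is negated
    {"type": "ineq", "fun": lambda x: 25 - x[0]**2 - x[1]**2},
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 4},
]
bounds = [(-5, 5), (-5, 5)]  # x_l and x_u for each variable

res = minimize(f, x0=[0.0, 0.0], method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x)  # approximately [1.5, 2.5]
```

The equality constraint is active here (the unconstrained minimum (2, 3) violates x1 + x2 = 4), which is why the solution moves to (1.5, 2.5).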
The functions f, g_i, and h_j are all differentiable. The design variables are
bounded by x_l and x_u. The constraints g_i are called inequality constraints
and h_j are called equality constraints.
Consider the following constrained optimization problem.

Minimize
    (x_1 − 2)^2 + (x_2 − 3)^2
subject to
    x_1 ≥ 3
If we apply the first-order optimality condition to the objective function,
the function minimum is obtained at (2, 3). However, in the presence of the
constraint, the minimum occurs at (3, 3). See Figure 6.1, where the function
contours are plotted along with the constraint. Note that the gradient of the
function (∇f) and the gradient of the constraint (∇g) are parallel to each other
at the optimum point. At other points on the constraint (say, point A), the
gradients are not parallel to each other. The optimality conditions for
constrained optimization problems are discussed further in the next section.
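The parallel-gradient observation can be checked numerically. In this sketch (illustrative, not the book's code), the constraint x_1 ≥ 3 is written as g(x) = 3 − x_1 ≤ 0, and a 2-D cross product is used as the parallelism test:

```python
import numpy as np

def grad_f(x):
    # gradient of f(x) = (x1 - 2)^2 + (x2 - 3)^2
    return np.array([2 * (x[0] - 2), 2 * (x[1] - 3)])

def grad_g(x):
    # gradient of g(x) = 3 - x1 (the constraint x1 >= 3 rewritten as g <= 0)
    return np.array([-1.0, 0.0])

def cross2d(u, v):
    # zero exactly when u and v are parallel
    return u[0] * v[1] - u[1] * v[0]

print(cross2d(grad_f([3, 3]), grad_g([3, 3])))  # 0.0 -> parallel at the optimum
print(cross2d(grad_f([3, 1]), grad_g([3, 1])))  # -4.0 -> not parallel elsewhere on the constraint
```

The point (3, 1) plays the role of point A: it lies on the constraint boundary, yet ∇f and ∇g there are not parallel, so it cannot be the constrained minimum.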
[Figure 6.1: contour plot of (x_1 − 2)^2 + (x_2 − 3)^2 with the constraint x_1 = 3 separating the feasible and infeasible regions; ∇f and ∇g are drawn at the optimum point, where they are parallel, and at point A, where they are not.]

FIGURE 6.1
Constrained optimization problem.

The road map of this chapter is shown in Figure 6.2. After a discussion on
optimality conditions, different solution techniques such as the penalty function,
augmented Lagrangian, sequential quadratic programming (SQP), and
method of feasible directions are discussed. In the penalty function method,
a constrained optimization problem is transformed into an unconstrained
problem by penalizing the objective function for any violation of the constraints.
The augmented Lagrangian method is a blend of both the penalty
function and Lagrange multiplier methods. In the SQP method, a quadratic
subproblem is solved in every iteration, where the objective function
is approximated by a quadratic function and the constraints are linearized.
Some optimization problems require the constraints to be satisfied in every
iteration to ensure a meaningful value of the objective function. The method
of feasible directions ensures that the constraints are met in every iteration.
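The penalty transformation described above can be sketched in a few lines. This is an illustrative quadratic-penalty implementation (an assumption about the method's simplest form, not the book's code), applied to the earlier example of minimizing (x_1 − 2)^2 + (x_2 − 3)^2 subject to x_1 ≥ 3:

```python
import numpy as np
from scipy.optimize import minimize

def penalized(x, r):
    # original objective
    f = (x[0] - 2)**2 + (x[1] - 3)**2
    # constraint x1 >= 3 written as g(x) = 3 - x1 <= 0
    g = 3 - x[0]
    # quadratic penalty: charged only when the constraint is violated (g > 0)
    return f + r * max(g, 0.0)**2

# solve a sequence of unconstrained problems with a growing penalty parameter,
# warm-starting each solve from the previous solution
x = np.array([0.0, 0.0])
for r in [1, 10, 100, 1000]:
    x = minimize(penalized, x, args=(r,), method="Nelder-Mead").x
print(x)  # approaches the constrained optimum (3, 3) as r grows
```

For any finite r the minimizer sits slightly inside the infeasible region (here x_1 = (2 + 3r)/(1 + r) < 3), which is why the penalty parameter must be driven up over the outer iterations.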
6.2 Optimality Conditions
Let us define the Lagrange function for the constrained optimization
problem with the equality and inequality constraints
    L(x, λ, μ) = f(x) + Σ_{j=1}^{r} λ_j h_j(x) + Σ_{i=1}^{m} μ_i g_i(x)    (6.4)
The optimality conditions are given by
    ∇_x L = 0
    ∇_λ L = 0
    ∇_μ L = 0
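For the earlier example, treating the constraint as active (x_1 = 3, i.e., the equality h(x) = x_1 − 3 = 0 with no inequality term), these stationarity conditions can be solved symbolically. A sketch using SymPy, with variable names chosen for illustration:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = (x1 - 2)**2 + (x2 - 3)**2
h = x1 - 3                     # active constraint written as an equality
L = f + lam * h                # Lagrange function (6.4), equality part only

# Stationarity: partial derivatives of L with respect to x1, x2, and lam all zero
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)
print(sol)  # x1 = 3, x2 = 3, lam = -2
```

The solution recovers the constrained optimum (3, 3); the nonzero multiplier confirms the constraint is active, consistent with ∇f and ∇g being parallel there.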
[Figure 6.2: flowchart — constrained optimization → optimality conditions → solution techniques (penalty function method, augmented Lagrangian method, sequential quadratic programming, method of feasible directions) → application to structural design.]

FIGURE 6.2
Road map of Chapter 6.