In Part II we discussed algorithms for solving *unconstrained* optimization problems. In this chapter we present some simple algorithms for solving special *constrained* optimization problems. The methods here build on those of Part II.

We begin our presentation in the next section with a discussion of *projected methods*, including a treatment of projected gradient methods for problems with linear equality constraints. We then consider *Lagrangian methods*. Finally, we consider *penalty methods*. This chapter is intended as an introduction to ideas underlying methods for solving constrained optimization problems. For an in-depth coverage of the subject, we refer the reader to [11].

The optimization algorithms considered in Part II have the general form

*x*^{(k+1)} = *x*^{(k)} + α_k *d*^{(k)},

where *d*^{(k)} is typically a function of ∇*f*(*x*^{(k)}). The value of *x*^{(k)} is not constrained to lie inside any particular set. Such an algorithm is not immediately applicable to solving constrained optimization problems in which the decision variable is required to lie within a prespecified constraint set.
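As a concrete illustration of this iterative form, the sketch below applies the update with *d*^{(k)} = −∇*f*(*x*^{(k)}) (steepest descent) to a simple one-dimensional objective. The objective, step size, and iteration count are hypothetical choices, not from the text.

```python
# Minimal sketch of the unconstrained update x^{(k+1)} = x^{(k)} + alpha_k * d^{(k)},
# with d^{(k)} = -grad f(x^{(k)}) and a fixed step size (illustrative choices only).

def grad_f(x):
    # Gradient of the example objective f(x) = (x - 3)^2
    return 2.0 * (x - 3.0)

x = 0.0        # initial iterate x^{(0)}
alpha = 0.1    # fixed step size; Part II-style line search could choose alpha_k adaptively
for _ in range(100):
    d = -grad_f(x)     # descent direction from the gradient
    x = x + alpha * d  # iterate update

print(round(x, 6))  # approaches the unconstrained minimizer x* = 3
```

Note that nothing in this loop keeps *x* inside any prescribed set, which is exactly the limitation discussed above.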

Consider the optimization problem

minimize *f*(*x*)
subject to *x* ∈ Ω.

If we use the algorithm above to solve this constrained problem, the iterates *x*^{(k)} may not satisfy the constraints. Therefore, the algorithm must be modified so that each iterate remains in the constraint set Ω.
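One standard remedy, previewed by the projected methods mentioned above, is to follow each unconstrained step with a projection back onto Ω. The sketch below uses an illustrative box constraint Ω = [0, 1] and the same example objective; these specific choices are assumptions for illustration, not from the text.

```python
# Hedged sketch of a projected-gradient iteration for the illustrative set
# Omega = [0, 1]: take a gradient step, then project the result back onto Omega.

def project(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the interval [lo, hi] (clamping)
    return max(lo, min(hi, x))

def grad_f(x):
    # Gradient of f(x) = (x - 3)^2; its unconstrained minimizer x* = 3 lies outside Omega
    return 2.0 * (x - 3.0)

x = 0.5      # feasible initial iterate x^{(0)} in Omega
alpha = 0.1  # fixed step size (illustrative choice)
for _ in range(100):
    x = project(x - alpha * grad_f(x))  # gradient step followed by projection

print(x)  # settles at the boundary point x = 1, the constrained minimizer here
```

Without the `project` call, the iterates would leave Ω on the way toward *x** = 3; the projection keeps every iterate feasible.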
