Chapter Sixteen: Optimization Model Solving

In this final chapter, we provide the interested reader with some background on the most common methods for solving an optimization model. Given the limited space, we can only hope to scratch the surface of this huge body of knowledge, so priority is given to the essential concepts that enable a user of commercial optimization software to select a specific method among those offered.

In Section 16.1, we outline the basic concepts of classical local approaches to nonlinear programming. By “local,” we mean that these methods aim at finding a locally optimal solution, which may depend on an initial solution provided by the user and is not guaranteed to be a globally optimal solution. We first deal with gradient-based as well as derivative-free methods for unconstrained optimization. Then, we outline ideas for dealing with constrained problems, such as penalty functions and Lagrangian methods. Among the most important topics we discuss is duality theory. In Section 16.2, we describe a few simple approaches for the global optimization of nonconvex functions of continuous variables, namely genetic algorithms and particle swarm optimization. Section 16.3 deals with the important topic of linear programming. We describe both the classical simplex algorithm and interior-point methods, and we also show how the general framework of duality theory may be applied to this specific case. In Section 16.4, we illustrate how concepts from linear programming ...
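To make the notion of a “local” method concrete, the following minimal sketch (not taken from the book; the function, step size, and tolerance are illustrative choices) applies plain gradient descent to the nonconvex function f(x) = 0.25·x⁴ − x², whose gradient is x³ − 2x. The function has two local minima, at ±√2, and which one the method finds depends entirely on the initial solution supplied by the user.

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=10_000):
    """Minimize a function of one variable by following the negative gradient.

    grad : callable returning the derivative f'(x)
    x0   : initial solution (determines which local minimum is found)
    lr   : fixed step size (illustrative choice, not tuned)
    """
    x = x0
    for _ in range(max_iter):
        step = lr * grad(x)
        x -= step
        if abs(step) < tol:  # stop when progress becomes negligible
            break
    return x

# f(x) = 0.25*x**4 - x**2 has local minima at x = +sqrt(2) and x = -sqrt(2)
grad_f = lambda x: x**3 - 2 * x

x_right = gradient_descent(grad_f, x0=1.0)   # converges near +sqrt(2)
x_left = gradient_descent(grad_f, x0=-1.0)   # converges near -sqrt(2)
```

Starting from x₀ = 1.0 the iterates drift to the right-hand minimum, while x₀ = −1.0 leads to the left-hand one; neither run can tell the user which of the two is globally better, which is precisely the limitation that motivates the global methods of Section 16.2.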
