
# APPENDIX C: A 15-MINUTE TUTORIAL ON NONLINEAR OPTIMIZATION

## C.1 INTRODUCTION

It is unlikely that the reader will ever write a program that solves a nonlinear optimization problem from scratch, but functions for solving such problems exist in R (and in many other languages), and these functions, in R or otherwise, raise many special issues. In particular, when a function fails to report a useful answer, the user needs to know whether the problem is a bug (poorly written code) or a failure of the called function to converge to a reliable answer. The purpose of this appendix is to explain the difficulties in solving a nonlinear optimization problem, the error messages that are likely to occur, and how to address those errors.

## C.2 NEWTON'S METHOD FOR ONE-DIMENSIONAL NONLINEAR OPTIMIZATION

Consider the following problem: minimize f(x) = e^x + x^2 over all possible values of x. The obvious approach is to take the derivative and set it equal to zero, f′(x) = e^x + 2x = 0, but this equation has no closed-form solution. When an optimization problem produces a first derivative (or set of first derivatives) that cannot be solved as a system of k linear equations in k unknowns, it is a nonlinear optimization problem. Such problems usually require iterative solutions.
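Although f′(x) = e^x + 2x = 0 cannot be solved in closed form, a quick numerical check shows that a solution exists: the derivative changes sign between -1 and 0, so the minimizer lies in that interval. A minimal sketch (shown in Python for concreteness; the function name `fprime` is a choice made here, not part of the text):

```python
import math

def fprime(x):
    # derivative of f(x) = exp(x) + x^2
    return math.exp(x) + 2 * x

# f'(x) changes sign on [-1, 0], so a minimizer lies in that interval
print(fprime(-1.0))  # negative: e^(-1) - 2, about -1.632
print(fprime(0.0))   # positive: e^0 + 0 = 1
```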

A method for solving such problems, often learned in first-year calculus, is Newton's method. The method uses the last guess to produce the next guess and stops when the guesses no longer change (very much); each successive guess is closer to the final answer. In this context, Newton's ...
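The iteration just described can be sketched as follows. Newton's method for minimization repeatedly updates the guess by x_new = x - f′(x)/f″(x) and stops when successive guesses agree to within a small tolerance. This is a minimal illustration in Python; the function name `newton_minimize`, the tolerance, and the iteration cap are choices made here, not part of the text:

```python
import math

def newton_minimize(fprime, fsecond, x0, tol=1e-8, max_iter=100):
    """Newton's method: update x using the first and second derivatives
    until successive guesses change by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - fprime(x) / fsecond(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    # the kind of non-convergence error Section C.1 warns the user to expect
    raise RuntimeError("failed to converge")

# minimize f(x) = exp(x) + x^2, whose derivative is exp(x) + 2x
x_star = newton_minimize(lambda x: math.exp(x) + 2 * x,
                         lambda x: math.exp(x) + 2,
                         x0=0.0)
print(round(x_star, 4))  # approximately -0.3517
```

Starting from x0 = 0, the guesses are 0, -0.3333, -0.3517, ..., and the iteration stops once consecutive guesses differ by less than the tolerance, at the point where f′(x) is essentially zero.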
