6 Stopping and Convergence Criteria: 1‐D Applications

6.1 Stopping versus Convergence Criteria

Optimization searches are iterative. Ideally, they progressively move the trial solution toward the vicinity of the optimum and, once close enough to x*, stop and report their estimate of x*.

However, the algorithm may diverge from the optimum, oscillate about it, or creep toward it so slowly that after excessive iterations it still has not come close enough. Or the convergence criteria may have been set too “tight” by a user expecting too much precision, so that coming adequately close to x* takes excessive iterations. Or each trial solution may be infeasible, with repeated attempts to find a better solution continuing to cross into the infeasible region. Or the algorithm may have encountered an execution error and have no alternate logic to remedy it. In such cases the iterative procedure should stop and report, “Possible failure to find a solution. Giving up on the search.” This acknowledges that continuing the search for an optimum seems futile with those particular choices of optimizer and its initialization, coefficients, thresholds, and so on.

By contrast, if the search has been successful and the trial solution is adequately close to x*, then stop and claim, “A solution has been found. Hooray! It has converged.”

The optimizer needs both: convergence criteria to indicate that the trial solution is in desirable proximity to x*, and stopping criteria to end a search that appears futile.
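To make the distinction concrete, here is a minimal Python sketch (not from the book) of a direct 1-D search. The function name minimize_1d, the step-halving move, and the threshold names x_tol and max_iterations are illustrative assumptions; the point is only that one test claims convergence while the others give up on a search that appears futile.

```python
def minimize_1d(f, x0, step=0.1, x_tol=1e-6, max_iterations=200):
    """Direct 1-D search separating convergence criteria from stopping criteria.

    All parameter names and defaults are illustrative, not from the book.
    """
    x, fx = x0, f(x0)
    for iteration in range(1, max_iterations + 1):
        try:
            # Try one step in each direction and keep the better neighbor.
            candidates = (x + step, x - step)
            x_new = min(candidates, key=f)
            f_new = f(x_new)
        except (ValueError, OverflowError, ZeroDivisionError):
            # Stopping criterion: an execution error with no alternate logic to remedy it.
            return x, fx, iteration, "Possible failure to find a solution. Giving up on the search."

        if f_new < fx:
            x, fx = x_new, f_new   # accept the improvement
        else:
            step *= 0.5            # no improvement: halve the step size

        # Convergence criterion: the trial solution can now move by less than the threshold.
        if step < x_tol:
            return x, fx, iteration, "A solution has been found. It has converged."

    # Stopping criterion: iteration budget exhausted; the search seems futile.
    return x, fx, max_iterations, "Possible failure to find a solution. Giving up on the search."


# Illustrative use on a simple quadratic with its minimum at x = 2.
x_opt, f_opt, n_iter, message = minimize_1d(lambda x: (x - 2.0) ** 2, x0=0.0)
print(message, x_opt, f_opt, n_iter)
```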
