Introduction
Every year, if not every month, a new optimization algorithm is proposed, accompanied by its creator's claim that it is superior to the previous ones. Such claims rest on results obtained on test cases, that is, on a selection of problems to which the algorithm is applied.
In Guided Randomness in Optimization (Clerc 2015), I show that it is easy to choose a set of test cases, and a way of presenting the results, that seemingly "justifies" this superiority.
For example, consider the test case of Table I.1, which was actually used in an article¹. With such a set of problems, it is enough to define an algorithm whose signature (see section A.3) is biased in favor of the center of the search space in order to obtain good results. At the extreme, if an algorithm is seen as a black box into which the test case is fed and which returns a solution, then there exists a "magical" box that finds the exact solution of each of these functions in a time almost equal to zero².
Table I.1. A biased test case in a published article (see footnote 1). The names of the functions have been kept: Sphere, Rosenbrock, Ellipsoid and Rastrigin.
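The center bias can be illustrated with a minimal sketch. Assuming the standard definitions of Sphere and Rastrigin (whose minima sit at the origin) and a search space symmetric about the origin, such as [-10, 10]^D, the "magical" black box needs nothing more than to return the center of the search space, without ever evaluating the function:

```python
import math

def sphere(x):
    # Standard Sphere function: minimum value 0, reached at the origin.
    return sum(v * v for v in x)

def rastrigin(x):
    # Standard Rastrigin function: minimum value 0, reached at the origin.
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def magical_optimizer(f, lower, upper):
    # Ignores f entirely: simply returns the center of the search space.
    return [(lo + hi) / 2 for lo, hi in zip(lower, upper)]

D = 5
lower, upper = [-10.0] * D, [10.0] * D
x = magical_optimizer(sphere, lower, upper)
print(sphere(x), rastrigin(x))  # prints 0.0 0.0
```

On a test case biased toward the center, this "optimizer" attains the exact optimum of Sphere and Rastrigin in zero evaluations, which is precisely why such a test case cannot justify any claim of superiority.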