Free Preprint: “Expected Improvement versus Predicted Value in Surrogate-Based Optimization” Available on Cologne Open Science

The publication “Expected Improvement versus Predicted Value in Surrogate-Based Optimization”, written by Frederik Rehbach, Martin Zaefferer, Boris Naujoks, and Thomas Bartz-Beielstein, deals with the correct parameterization for model-based optimization algorithms. The publication is available on Cologne Open Science.

Abstract: Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. A few results from the literature provide evidence that, under certain conditions, expected improvement may perform worse than something as simple as the predicted value of the surrogate model. We benchmark both infill criteria in an extensive empirical study on the ‘BBOB’ function set. This investigation includes a detailed study of the impact of problem dimensionality on algorithm performance. The results support the hypothesis that exploration loses importance with increasing problem dimensionality. A statistical analysis reveals that the purely exploitative search with the predicted value criterion performs better on most problems of five or more dimensions. Possible reasons for these results are discussed. In addition, we give an in-depth guide for choosing the infill criterion based on prior knowledge about the problem at hand, its dimensionality, and the available budget.
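To illustrate the two infill criteria compared in the paper, here is a minimal sketch (not the authors' implementation) of expected improvement versus the purely exploitative predicted value criterion for a minimization problem. The surrogate's posterior mean and standard deviation (`mu`, `sigma`) at some candidate points, and the incumbent best value `f_min`, are assumed given, e.g. from a Kriging model:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Standard EI for minimization: balances low predicted mean (exploitation)
    against high predictive uncertainty (exploration)."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def predicted_value(mu):
    """Predicted value criterion: simply prefer the lowest surrogate mean
    (negated so that argmax selects the best candidate)."""
    return -mu

# Toy candidate points: one with low mean and low uncertainty,
# one with higher mean but large uncertainty, one confidently bad.
mu = np.array([0.5, 1.0, 2.0])
sigma = np.array([0.1, 1.5, 0.01])
f_min = 0.8  # best observed value so far

next_ei = int(np.argmax(expected_improvement(mu, sigma, f_min)))
next_pv = int(np.argmax(predicted_value(mu)))
print(next_ei)  # EI favors the uncertain candidate (index 1)
print(next_pv)  # predicted value picks the lowest mean (index 0)
```

The toy example shows the behavioral difference the study benchmarks: EI is drawn to uncertain regions, while the predicted value criterion exploits the current model optimum, which the abstract reports is often preferable in five or more dimensions.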