Category Archives: Benchmarking

Free Preprint: “Expected Improvement versus Predicted Value in Surrogate-Based Optimization” Available on Cologne Open Science

The publication “Expected Improvement versus Predicted Value in Surrogate-Based Optimization”, written by Frederik Rehbach, Martin Zaefferer, Boris Naujoks, and Thomas Bartz-Beielstein, deals with the correct parameterization of model-based optimization algorithms. It is available on Cologne Open Science.


New on arXiv: “Benchmarking in Optimization: Best Practice and Open Issues”

The most recent version of the article “Benchmarking in Optimization: Best Practice and Open Issues”, written by Thomas Bartz-Beielstein, Carola Doerr, Jakob Bossek, Sowmya Chandrasekaran, Tome Eftimov, Andreas Fischbach, Pascal Kerschke, Manuel Lopez-Ibanez, Katherine M. Malan, Jason H. Moore, Boris Naujoks, Patryk Orzechowski, Vanessa Volz, Markus Wagner, and Thomas Weise, can be downloaded from arXiv: http://arxiv.org/abs/2007.03488.

Survey “Benchmarking in Optimization: Best Practice and Open Issues” available

This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds, from institutes around the world. Its main goal is to promote best practice in benchmarking. The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective and efficient designs, comprehensible presentations, and guaranteed reproducibility. The ultimate aim is to provide well-accepted guidelines (rules) that may be useful for both authors and reviewers. As benchmarking in optimization is an active and evolving field of research, this manuscript is meant to co-evolve over time by means of periodic updates.

The PDF version of this survey is available here and will be published on arXiv soon.