Information about the “Benchmarking in Optimization: Best Practice and Open Issues” project
The survey “Benchmarking in Optimization: Best Practice and Open Issues” is the first result of a joint initiative of several researchers in evolutionary computation (EC) that was established during Dagstuhl seminar 19431 on Theory of Randomized Optimization Heuristics, which took place in October 2019. Since then, we have been compiling ideas covering a broad range of disciplines, all connected to EC.
This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds and from different institutes around the world. Its main goal is to promote best practice in benchmarking. The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective and efficient designs, comprehensible presentations, and guaranteed reproducibility. The ultimate goal is to provide well-accepted guidelines (rules) that may be useful for authors and reviewers alike. As benchmarking in optimization is an active and evolving field of research, this manuscript is meant to co-evolve over time by means of periodic updates.
The most recent version of the article “Benchmarking in Optimization: Best Practice and Open Issues”, written by Thomas Bartz-Beielstein, Carola Doerr, Jakob Bossek, Sowmya Chandrasekaran, Tome Eftimov, Andreas Fischbach, Pascal Kerschke, Manuel Lopez-Ibanez, Katherine M. Malan, Jason H. Moore, Boris Naujoks, Patryk Orzechowski, Vanessa Volz, Markus Wagner, and Thomas Weise, can be downloaded from arXiv: http://arxiv.org/abs/2007.03488
Additional information can be found on the Benchmarking Network Webpage: https://sites.google.com/view/benchmarking-network