New Paper: Entropy-based Adaptive Range Parameter Selection for Evolutionary Algorithms


A. Aleti (Faculty of Information Technology, Monash University, Australia) and I. Moser (Faculty of ICT, Swinburne University of Technology, Australia) present a parameter control method that adjusts parameter values during the optimisation process, using the algorithm’s performance as feedback. They refer to Sequential Parameter Optimization and related techniques as follows:
Unfortunately, the settings of the parameter values are known to be problem-specific [32], often even specific to the problem instance at hand [5, 40, 39, 19], and greatly affect the performance of the algorithm [33, 6, 26, 14]. In cases where the number of parameters and their plausible value ranges are high, investigating all possible combinations of parameter values can itself be an attempt to solve a combinatorially complex problem [8, 46, 7, 34].
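The core idea described above — steering parameter values online based on performance feedback rather than tuning them up front — can be illustrated with a toy sketch. The following is a simplified stand-in, not the authors' actual entropy-based method: a (1+1) EA on OneMax chooses its mutation rate from a small set of candidate ranges, and each range's selection weight is increased when sampling from it yields a fitness improvement. The discretisation into three ranges and the additive reward rule are illustrative assumptions.

```python
import random

def onemax(bits):
    # Toy fitness function: count of ones in the bit string.
    return sum(bits)

def adaptive_ea(n_bits=50, generations=200, seed=0):
    """Toy (1+1) EA with feedback-driven mutation-rate range selection.

    Illustrative sketch only: the three candidate ranges and the
    additive reward rule are assumptions, not the paper's method.
    """
    rng = random.Random(seed)
    # Hypothetical discretisation of the mutation-rate parameter.
    ranges = [(0.001, 0.01), (0.01, 0.05), (0.05, 0.2)]
    quality = [1.0] * len(ranges)  # feedback-driven selection weights

    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    best = onemax(parent)

    for _ in range(generations):
        # Pick a range with probability proportional to its weight,
        # then sample a concrete mutation rate uniformly within it.
        i = rng.choices(range(len(ranges)), weights=quality)[0]
        lo, hi = ranges[i]
        rate = rng.uniform(lo, hi)

        # Bit-flip mutation at the sampled rate.
        child = [b ^ (rng.random() < rate) for b in parent]
        f = onemax(child)

        if f >= best:
            # Reward the range that produced an improvement.
            quality[i] += f - best
            parent, best = child, f

    return best, quality
```

Running `adaptive_ea()` returns the best fitness found together with the final range weights; over time, ranges that produced improvements accumulate weight and are sampled more often, which is the feedback loop the quoted passage alludes to.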

The article (PDF) is available on researchgate.net.

References

The publications cited in the excerpt above:

[5] T. Bäck. The interaction of mutation rate, selection, and self-adaptation within a genetic algorithm. In Parallel Problem Solving from Nature 2, PPSN-II, pages 87–96. Elsevier, 1992.
[6] T. Bäck, A. E. Eiben, and N. A. L. van der Vaart. An empirical study on GAs without parameters. In Parallel Problem Solving from Nature – PPSN VI (6th PPSN’2000), volume 1917 of Lecture Notes in Computer Science (LNCS), pages 315–324. Springer-Verlag (New York), 2000.
[7] T. Bartz-Beielstein, C. Lasarczyk, and M. Preuss. Sequential parameter optimization. In IEEE Congress on Evolutionary Computation, pages 773–780. IEEE, 2005.
[8] M. Birattari, T. Stützle, L. Paquete, and
K. Varrentrapp. A racing algorithm for configuring metaheuristics. In GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, pages 11–18. Morgan Kaufmann Publishers, 2002.
[14] A. E. Eiben and S. K. Smit. Parameter tuning for configuring and analyzing evolutionary algorithms. Swarm and Evolutionary Computation, 1(1):19–31, 2011.
[19] J. Hesser and R. Männer. Towards an optimal mutation probability for genetic algorithms. Lecture Notes in Computer Science, 496:23–32, 1991.
[26] F. G. Lobo. Idealized dynamic population sizing for uniformly scaled problems. In 13th Annual Genetic and Evolutionary Computation Conference, GECCO 2011, Proceedings, pages 917–924. ACM, 2011.
[33] Z. Michalewicz and M. Schmidt. Parameter control in practice. In F. G. Lobo, C. F. Lima, and Z. Michalewicz, editors, Parameter Setting in Evolutionary Algorithms, volume 54 of Studies in Computational Intelligence, pages 277–294. Springer, 2007.
[34] V. Nannen and A. E. Eiben. Relevance estimation and value calibration of evolutionary algorithm parameters. In M. M. Veloso, editor, IJCAI’07, Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 975–980, 2007.
[39] J. Smith and T. C. Fogarty. Self adaptation of mutation rates in a steady state genetic algorithm. In International Conference on Evolutionary Computation, pages 318–323, 1996.
[40] C. R. Stephens, I. G. Olmedo, J. M. Vargas, and H. Waelbroeck. Self-adaptation in evolving systems. Artificial Life, 4(2):183–201, 1998.
[46] B. Yuan and M. Gallagher. Statistical racing techniques for improved empirical evaluation of evolutionary algorithms. In Parallel Problem Solving from Nature – PPSN VIII, volume 3242 of LNCS, pages 172–181. Springer-Verlag, 2004.