Carpe Diem Report
Optimization Benchmarking for A2 algorithm
Mohamed Abdel Elkhalek
The aim of the COCO platform is to benchmark continuous optimization
algorithms. Here we deal with single-objective functions, and we
benchmark two implementations of the A2 evolution strategy (coded in
Python and C) against two other well-performing algorithms,
BIPOP-CMA-ES and BFGS.
Keywords: Benchmarking, Black-box optimization, A2
Inspired by biology and Darwinian evolution, evolution strategies
(ES) are stochastic optimization algorithms designed for continuous
search spaces. Unlike gradient-based algorithms, which are local
search methods, ES are global optimization algorithms and perform
very well on difficult problems such as badly scaled,
non-continuously differentiable, or even not completely defined
(black-box) functions [2, 12]. ES are heuristic, population-based
search algorithms that incorporate random variation and selection.
In each iteration, called a generation, an ES generates offspring
from µ parents by adding a mutation vector to each parent. The
mutation vectors are Gaussian distributed with zero mean and
standard deviation σ, called the step size. Environmental selection
then reduces the population back to its original size [2, 12]. The
important features of ES are:
● Environmental selection: survival of the fittest. Unlike genetic
algorithms, where individuals are selected at random, the selection
process in ES is deterministic.
● Unbiasedness: the mutation is based on normally distributed
vectors, so the information injected at each generation is unbiased.
● Self-adaptation: strategy parameter control (here, the step size).
Various step-size adaptation concepts have been proposed since the
creation of ES algorithms, and they perform well; but when the
scaling of the parameters to be optimized is unknown, individual
step sizes have to be introduced. Schwefel and Rechenberg introduced
the idea of mutative step-size control. This concept is based on the
idea that both the object parameters (the solution, i.e. the
individual) and the strategy parameters (here, the step size)
undergo mutation and selection. One shortcoming of the adaptation of
individual step sizes is that it does not work for small
populations. This is because the size of the realized parameter
variation is not taken into account: an object parameter can undergo
a large variation even if the step-size variation is small, because
the randomly generated vector (the one added to mutate the
offspring) happens to be large. Another reason is that the step-size
variation between the offspring of one generation is the same as the
variation between generations, which makes the individual step sizes
irrelevant. This problem has been addressed by introducing
derandomized mutative step-size control; the algorithm studied here,
A2, is a good example of this derandomization.
In this part, we use the same notations as in [ ] and we refer to
some equations of that paper. A2 is a (µ, λ) evolution strategy. The
algorithm proceeds in three steps: the mutation of the object
variables, the adaptation of the strategy parameters, and the
selection.
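The mutative self-adaptation concept described above can be sketched as a minimal (µ, λ)-ES in Python. This is our own illustrative sketch (the function name `mutative_es` and the learning rate τ are assumptions, not the benchmarked A2 implementation):

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def mutative_es(f, x0, sigma0=1.0, mu=5, lam=20, generations=200, seed=1):
    """Minimal (mu, lam)-ES with mutative step-size control:
    each offspring first mutates its own step size, then uses it
    to mutate the object variables; selection acts on both."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(2 * len(x0))  # assumed learning rate for the step size
    parents = [(list(x0), sigma0)] * mu
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            x, sigma = parents[rng.randrange(mu)]
            s = sigma * math.exp(tau * rng.gauss(0, 1))  # mutate strategy parameter
            y = [xi + s * rng.gauss(0, 1) for xi in x]   # mutate object variables
            offspring.append((y, s))
        # environmental selection: deterministic, survival of the fittest
        offspring.sort(key=lambda pair: f(pair[0]))
        parents = offspring[:mu]
    return parents[0]

best, best_sigma = mutative_es(sphere, [3.0, -2.0, 1.0])
```

Each offspring carries its own step size, so selection acts on the pair (solution, step size) at once; it is exactly this coupling that fails for small populations and motivates the derandomized variant.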
The mutation is performed on a randomly chosen parent (Ek) to
produce a new individual (Nk). For each component i, the mutation
reads:
x^i_{Nk} = x^i_{Ek} + ξ · δ^i_{Ek} · z^i_k + δ^r_{Ek} · z^r_k · r^i
where z^i_k and z^r_k are standard normally distributed samples.
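Reading δ^i as the per-component step sizes, δ^r as the step size along the global direction r, and ξ as a scalar factor, this mutation step can be sketched in Python (illustrative names of our own, not the paper's code):

```python
import random

def a2_style_mutation(x, delta, delta_r, r, rng, xi=1.0):
    """One A2-style mutation of a parent x: a component-wise
    ('individual') Gaussian part plus a Gaussian step along a
    single global direction r. All names are illustrative."""
    z = [rng.gauss(0, 1) for _ in x]  # one standard normal sample per component
    z_r = rng.gauss(0, 1)             # single sample shared by the global direction
    return [x[i] + xi * delta[i] * z[i] + delta_r * z_r * r[i]
            for i in range(len(x))]

rng = random.Random(0)
child = a2_style_mutation([0.0, 0.0], [0.5, 0.5], 0.3, [1.0, 0.0], rng)
```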
GECCO’13, July 6-10, 2013, Amsterdam, The Netherlands.
This mutation can be decomposed into two parts: the individual
mutation (independent for each component of the vector) and the
global mutation along one direction (r). We apply this mutation to
the µ parents at each generation.
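One generation of such a (µ, λ) scheme, mutating from randomly chosen parents and then selecting deterministically, can be sketched as follows (our own illustration, with a simple isotropic mutation standing in for the A2 operator):

```python
import random

def one_generation(f, parents, mutate, lam, rng):
    """One (mu, lam) generation: each offspring is mutated from a
    randomly chosen parent, then deterministic environmental
    selection keeps the mu best. `mutate` is any mutation operator."""
    mu = len(parents)
    offspring = [mutate(parents[rng.randrange(mu)], rng) for _ in range(lam)]
    return sorted(offspring, key=f)[:mu]

# usage on the sphere function with a placeholder isotropic mutation
rng = random.Random(2)
mut = lambda x, rng: [xi + 0.3 * rng.gauss(0, 1) for xi in x]
pop = [[2.0, 2.0]] * 4
for _ in range(50):
    pop = one_generation(lambda x: sum(v * v for v in x), pop, mut, lam=16, rng=rng)
```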