Introduction
In scientific applications, an optimization problem can often be reformulated as finding the minimum of a real-valued function. Therefore, given a function f : E → ℝ, we define the solution of an optimization problem for f in the following way:
Definition 1. x_0 ∈ E is a solution of the optimization problem if and only if:
∀x ∈ E, f(x_0) ≤ f(x) (minimization problem) or f(x_0) ≥ f(x) (maximization problem).
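For instance, with E = ℝ and f(x) = x², the point x_0 = 0 solves the minimization problem, since x² ≥ 0 for every real x.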
To solve this problem, there exist many different methods. Many deterministic methods are based on the differential properties of the function and evaluate the gradient, an approximation of the gradient, or other mathematical operators to update the candidate solution. For instance, in the gradient descent method, the new point x_{n+1} is computed from x_n using:
xn+1 = xn − δn ∇f (xn ),

(1)

where δ_n is a variable step size and ∇ denotes the gradient operator. Under certain assumptions on the function and with an appropriate choice of δ_n, convergence to a local minimum can be guaranteed [1].
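As an illustration, the following is a minimal Python sketch of the update rule (1); the quadratic test function, the constant step size delta, and the iteration budget are assumptions chosen for this example rather than choices made in the report.

```python
import numpy as np

def gradient_descent(grad_f, x0, delta=0.1, n_iter=100):
    """Iterate x_{n+1} = x_n - delta * grad_f(x_n), as in equation (1).

    A constant step size is used here for simplicity; the report
    allows the step size delta_n to vary at each iteration.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - delta * grad_f(x)
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
x_min = gradient_descent(lambda x: 2.0 * x, x0=[3.0, -4.0])
print(x_min)  # converges toward the global minimum at the origin
```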
Particle swarm optimization algorithms take a totally different approach. They do not require any information about the gradient of the function and use only elementary mathematical operators. Consequently, they are an interesting alternative for solving optimization problems, with a fairly easy implementation and a very low computational cost [2].
Introduced by Eberhart and Kennedy in 1995 [2], the particle swarm optimization algorithm is a heuristic method that belongs to the family of swarm intelligence methods. Its main idea is to represent the candidate solutions of the optimization problem by an evolving population of particles deployed over the search space of the function.
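To make this idea concrete, here is a minimal Python sketch of a particle swarm in the spirit of the canonical algorithm; note that the inertia weight w is a later refinement rather than part of the original 1995 formulation, and the coefficients c1 and c2, the swarm size, the bounds, and the test function are all assumptions chosen for illustration.

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f with a basic particle swarm.

    Each particle remembers its best visited position (pbest), and the
    swarm shares the best position found overall (gbest). Velocities
    pull every particle toward both, with random per-dimension weights.
    """
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Example: the sphere function, whose minimum lies at the origin.
print(pso(lambda z: float(np.sum(z**2)), dim=2))
```

Because each particle is attracted both toward its own best visited position and toward the best position found by the whole swarm, the population can converge without using any gradient information, which is exactly the property that distinguishes these methods from the deterministic ones above.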