Abstract
MDMA affects the brain by increasing the activity of at least three neurotransmitters (the chemical messengers of brain cells): serotonin, dopamine,
and norepinephrine.5 Like other amphetamines, MDMA causes these neurotransmitters to be released from their storage sites in neurons, resulting in
increased neurotransmitter activity. Compared to the very potent stimulant,
methamphetamine, MDMA causes greater serotonin release and somewhat
lesser dopamine release.18 Serotonin is a neurotransmitter that plays an important role in the regulation of mood, sleep, pain, appetite, and other behaviors.
The excess release of serotonin by MDMA likely causes the mood elevating
effects experienced by MDMA users. However, by releasing large amounts of
serotonin, MDMA causes the brain to become significantly depleted of this important neurotransmitter, contributing to the negative behavioral aftereffects
that users often experience for several days after taking MDMA.19

Figure 1: MDMA users in the middle of a trip.
Numerous studies in animals have demonstrated that MDMA can damage
serotonin-containing neurons;1,3 some of these studies have shown these effects to be long lasting. This suggests that such damage may occur in humans
as well; however, measuring serotonin damage in humans is more difficult.
Studies have shown that some heavy MDMA users experience long-lasting confusion, depression, and selective impairment of working memory and attention
processes.20,21,22,23,24 Such memory impairments have been associated with a decrease in serotonin metabolites or other markers of serotonin function.
Imaging studies in MDMA users20,22,25 have shown changes in brain activity
in regions involved in cognition, emotion, and motor function.26,27,28 However, improved imaging technologies and more research are needed to confirm
these findings and to elucidate the exact nature of the effects of MDMA on
the human brain.
It is also important to keep in mind that many users of ecstasy may unknowingly be taking other drugs that are sold as ecstasy, and/or they may
intentionally use other drugs, such as marijuana, which could contribute to
these behavioral effects. Additionally, most studies in people do not have behavioral measures from before the users began taking drugs, making it difficult
to rule out pre-existing conditions.21,29,30 Factors such as gender, dosage, frequency and intensity of use, age at which use began, the use of other drugs,
as well as genetic and environmental factors all may play a role in some of
the cognitive deficits that result from MDMA use and should be taken into
consideration when studying the effects of MDMA in humans.
Given that most MDMA users are young and in their reproductive years,
it is possible that some female users may be pregnant when they take MDMA,
either inadvertently or intentionally because of the misperception that it is a
safe drug. The potential adverse effects of MDMA on the developing fetus
are of great concern. Behavioral studies in animals have found significant adverse effects on tests of learning and memory from exposure to MDMA during
a developmental period equivalent to the third trimester in humans.31 However, the effects of MDMA on animals earlier in development are unclear;32,33
therefore, more research is needed to determine what the effects of MDMA are
on the developing human nervous system.

Contents

1 Classic Particle Swarm Optimization Algorithms
  1.1 Discrete Particle Swarm Optimization
    1.1.1 Optimization problem, basic definitions
    1.1.2 Presentation
    1.1.3 First examples of the PSO
    1.1.4 PSO parameters
  1.2 Continuous PSO
    1.2.1 Presentation of the C-PSO
    1.2.2 CPSO Parameters
  1.3 CPSO and PSO results and limitation

2 Modified PSO Algorithms
  2.1 Literature overview of PSO variants
  2.2 Completely smooth C-PSO
    2.2.1 Presentation of the CSC-PSO
    2.2.2 Results
  2.3 Random Jump C-PSO
  2.4 Leaders Follower

3 Noisy PSO Algorithms
  3.1 Noise on the velocity
    3.1.1 Presentation of the NVC-PSO
    3.1.2 NVC-PSO results
  3.2 Noise on the position
    3.2.1 The NPC-PSO
    3.2.2 NPC-PSO's results

A Function benchmark

B Matlab codes
  B.1 PSO Algorithm
  B.2 C-PSO Algorithm
  B.3 NVC-PSO Algorithm

Introduction
In scientific applications, we can often recast an optimization problem as the search for the minimum of a real-valued function. Therefore, given a function f : E → F,
we define the solution of an optimization problem for f in the following way:
Definition 1. x0 ∈ E is a solution of the optimization problem if and only if:
∀x ∈ E, f(x0) ≤ f(x) (minimization problem) or f(x0) ≥ f(x) (maximization problem).
To solve this problem, there exist a wide number of different methods. Many deterministic methods are based on the differential properties of the function and evaluate the gradient, an approximation of the gradient, or other mathematical operators to iterate the candidate solution. For instance, in the gradient descent method, the new point x_{n+1} is obtained from x_n using

x_{n+1} = x_n − δ_n ∇f(x_n),   (1)

where δ_n is a variable step size and ∇ represents the gradient operator. Under certain assumptions on the function and with a suitable choice of δ, convergence to a local minimum can be guaranteed [1].
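As an illustration, here is a minimal MATLAB sketch of this iteration with a fixed step size; the cost function, its gradient, the starting point and the step size below are arbitrary choices for the example and are not taken from this report.

% Gradient descent sketch: minimize f(x) = ||x||^2 with a fixed step size.
f     = @(x) sum(x.^2);        % example cost function (illustrative only)
gradf = @(x) 2*x;              % its gradient
x     = [4; -3];               % arbitrary starting point
delta = 0.1;                   % step size delta_n, kept constant here
for n = 1:100
    x = x - delta*gradf(x);    % iteration (1): x_{n+1} = x_n - delta_n * grad f(x_n)
end
disp(x)                        % ends up close to the minimizer (0, 0)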
Particle swarm optimization algorithms take a totally different approach. They do not require any information about the gradient of the function and only use elementary mathematical operators. Consequently, they are an interesting alternative for solving optimization problems, with a fairly easy implementation and a very low computational cost [2].
Introduced by Eberhart and Kennedy in 1995 [2], the particle swarm optimization algorithm is a heuristic method belonging to the family of swarm intelligence methods. Its main idea is to represent the candidate solutions of the optimization problem by an evolving population of particles deployed on the function space.

This algorithm derives directly from previously introduced algorithms used to simulate social behavior or artificial life, such as the flocking of birds, the schooling of fish or swarming populations.
Indeed, a flock of birds can be simulated, for instance, by assuming that each bird follows very simple individual rules [5]. To avoid collisions, each bird is repelled by its close neighbors: this is modeled by a short-range force and simulates an individual behavior. Moreover, birds want to remain close to the flock. This is modeled by a long-range attraction toward the barycenter of the flock, a piece of information shared by all the birds. It is also interesting to add a heading rule which steers the heading of each bird toward the average heading of the flock. The result of these simple individual interactions simulates the flock in a very complex and realistic fashion. Furthermore, the resulting flock seems to have its own intelligence, referred to as swarm intelligence. The particle swarm optimization algorithms are very similar to this flock simulator: they use the birds as a way to probe the function space. Similarly to the flock algorithm, where the birds exchange information about the flock's average position and heading, the particles of the PSO share information about the function values and use this information to modify their behavior and locate the global minimum.

Figure 3: a flock of birds
In this paper, after presenting results on the classic versions of the discrete and continuous PSO algorithm, ways to improve the algorithm's efficiency and the particles' mobility are presented.


1 Classic Particle Swarm Optimization Algorithms

1.1 Discrete Particle Swarm Optimization

1.1.1 Optimization problem, basic definitions
The Particle Swarm Algorithm (PSO) is used to solve optimization problems on a cost function f : E → F as defined in definition 1. We also define:
Definition 2. x0 ∈ E is the global minimum and solution of the optimization problem if and only if:
∀x ∈ E, f(x0) ≤ f(x) (minimization problem) or f(x0) ≥ f(x) (maximization problem).
Definition 3. xl ∈ E is a local minimum if and only if:
∃ γ > 0 such that: ∀x ∈ E verifying |x − xl| ≤ γ we have f(xl) ≤ f(x).
1.1.2 Presentation

The particle swarm optimization algorithm (PSO) uses a population of individuals to explore the function space. The position of each individual, called a particle, represents a potential solution of the optimization problem. Particles are governed by equations of motion and are therefore able to move on the function space. They also have a memory: they remember the position of the best value achieved so far and update this value by evaluating the cost function at their position. Finally, the particles share their individual information with the whole population and therefore have access to the best position found so far by the entire swarm of particles. The velocity of each individual is updated using this information.
Let us suppose that we want to find the global minimum of the function (minimization problem):
f : R^N → R, x ↦ f(x).
The PSO randomly deploys on the function space a population of Ns particles. Each particle can be described at discrete time n ∈ N by an N-dimensional position vector X_i^n = (x_i1, ..., x_iN) and a velocity vector V_i^n = (v_i1, ..., v_iN), i = 1, ..., Ns. The memory of the i-th particle is represented by the vector Xlb_i^n = (xlb_i1, xlb_i2, ..., xlb_iN). Xlb_i is the position of the point with the best fitness found so far by the i-th particle and is called the local best:

Definition 4. The local best of the i-th particle at time n ∈ N is the position of the best value found by the i-th particle so far:
Xlb_i^n = X_i^k with k ≤ n and ∀l ≤ n, f(X_i^k) ≤ f(X_i^l).
Definition 5. At time n, the global best Xgb^n of the swarm is the local best with the best fitness:
Xgb^n = Xlb_k^n with f(Xlb_k^n) ≤ f(Xlb_i^n) ∀i ≤ Ns.
The equations of motion governing the i-th particle take this information into account. Indeed, each particle is attracted by its own local best as well as by the global best of the swarm. Consequently, the equations of motion governing the i-th bird at iteration n+1 are:

V_i^{n+1} = α φ1 V_i^n + β φ2 (Xlb_i^n − X_i^n) + δ φ3 (Xgb^n − X_i^n),   (2)

X_i^{n+1} = X_i^n + V_i^{n+1}.   (3)

In equation 2, α, β and δ are real parameters and φ_i, i = 1..3, are random variables uniformly distributed on [0, 1]. α is called the self momentum, β is the local attraction coefficient and δ the global attraction coefficient. These coefficients will be discussed in more detail in section 1.1.4.
During each iteration the particles evaluate the function at their position and update their local best if relevant. Then the swarm best is updated as the best value of all the local bests of the population. Hence, the algorithm can be briefly described by the following pseudo code:
set randomly the population of particles
while convergence has not happened and the stopping criterion is not met do
    for each particle do
        update velocity
        update position
        update local best
    end for
    update global best
end while
return global best
Finding a suitable stopping criterion is important. A possible choice is to define a parameter evaluating the average distance between the particles of the population, representing the swarm's density. Once this parameter falls below a certain value, it indicates that the swarm has converged and the simulation stops. The particles are usually initially randomly distributed on the function space with a randomly distributed initial velocity. Finally, particles are reflected on the boundaries of the search domain.
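To make the pseudo code concrete, the following is a minimal MATLAB sketch of equations (2) and (3); it is a simplification of the algorithm described above (a fixed iteration budget replaces the swarm-density stopping criterion, there is no boundary reflection, and the random coefficients are drawn per dimension, a common variant), with the parameter values of section 1.1.4 and the square function as an example cost.

% Minimal discrete PSO sketch on the two-dimensional square function.
f     = @(P) sum(P.^2, 2);             % cost function, evaluated row-wise
Ns    = 7;  dim = 2;                   % swarm size and dimension
alpha = 0.7; beta = 1.6; delta = 1.6;  % coefficients from section 1.1.4
X  = -10 + 20*rand(Ns, dim);           % random initial positions in [-10, 10]^2
V  = -1  +  2*rand(Ns, dim);           % random initial velocities
Xlb = X;  flb = f(X);                  % local bests and their fitness
[~, k] = min(flb);  Xgb = Xlb(k, :);   % global best
for n = 1:200                                          % fixed iteration budget (simplification)
    V = alpha*rand(Ns,dim).*V ...
      + beta *rand(Ns,dim).*(Xlb - X) ...
      + delta*rand(Ns,dim).*(repmat(Xgb, Ns, 1) - X);  % velocity update, equation (2)
    X = X + V;                                         % position update, equation (3)
    fX = f(X);
    better = fX < flb;                                 % update local bests where improved
    Xlb(better, :) = X(better, :);  flb(better) = fX(better);
    [~, k] = min(flb);  Xgb = Xlb(k, :);               % update global best
end
disp(Xgb)                                              % should end up close to (0, 0)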
1.1.3 First examples of the PSO

Square function As a first simple example, the discrete PSO algorithm is applied to the two-dimensional square function, which presents a global minimum at x = (0, 0) and is defined by:
f : R² → R, (x, y) ↦ x² + y².

Figure 4: The graph of the two-dimensional square function
We use a population of 4 particles, randomly distributed on the function domain −10 ≤ x, y ≤ 10 with randomized initial velocities. Figure 5 displays the convergence of the particles, plotted as black circles. Each image is taken every two increments of the algorithm and the colored lines represent the contour lines of the function.
On this fairly easy example, the algorithm finds the minimum value with a precision of 10^-5 in 20.98 increments (average value over 1000 simulations).
Ackley function It is especially interesting to use the PSO on functions which present many local minima. This is illustrated using the two-dimensional Ackley function, defined by:

f(x, y) = −a exp(−b √((x² + y²)/2)) − exp((cos(c x) + cos(c y))/2) + a + e,   (4)
Figure 5: Convergence of the PSO algorithm used on the square function with a population of 4 particles. Particles are plotted as black circles. Each snapshot is taken every two increments of the algorithm, from left to right and from top to bottom.
where a = 20, b = 0.2 and c = 2.

Figure 6: Plot of the Ackley function.
Commonly used in optimization problems, it has the particularity of presenting many local minima (400 inside the function domain −10 ≤ x, y ≤ 10). The global minimum is x = [0, 0]^T with f(x) = 0. Figure 7 displays the convergence of the PSO using ten particles.

Figure 7: Convergence of the PSO on the Ackley function with ten particles.
On this more complex example, the particle swarm optimization algorithm finds the global minimum in an average of 39 increments (average over 1000 simulations). However, in 0.6% of the cases the algorithm failed to spot the global minimum and converged instead to one of the local minima.
1.1.4 PSO parameters

As we can see from (2), the PSO's parameters are α, β and δ. The real β is the local attraction coefficient and determines the intensity of the attraction toward the local best, while δ, the global coefficient, sets the intensity of the attraction toward the global best. Setting these parameters to the right values is critical to improve the efficiency or even to allow convergence. If δ ≫ β, particles might converge faster to the global best but lose some mobility and might not explore the function space efficiently. On the other hand, if δ ≪ β, particles are too individual and the swarm loses its coherence. Setting α to a suitable value is also important. Indeed, a high value gives particles a large momentum, which can cause them to skip over some parts of the function space, while a low value can make the particle behavior inconsistent.


Stability analysis Taking a similar approach as in [2], the analysis is conducted on a single particle with real parameters. The local and global bests therefore coincide and are assumed constant. The equations governing the particle motion are:

V^{n+1} = α V^n + β (Xlb^n − X^n),   (5)

X^{n+1} = X^n + V^{n+1}.   (6)

They can be rewritten as:

V^{n+1} = α V^n + β (Xlb − X^n),   (7)

(X^{n+1} − Xlb) = α V^n + (1 − β)(X^n − Xlb).   (8)

The stability of the system defined by equation 7 and equation 8 depends on the roots of the polynomial expression of equation 9:

X² − (α − β + 1)X + α = 0.   (9)
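For completeness, here is one way to recover equation 9: writing the system (7)-(8) in matrix form with the shifted variable Y^n = X^n − Xlb (this intermediate notation is introduced only for this remark and is not used elsewhere in the report), the characteristic polynomial of the iteration matrix is exactly the left-hand side of (9).

\[
\begin{pmatrix} V^{n+1} \\ Y^{n+1} \end{pmatrix}
=
\begin{pmatrix} \alpha & -\beta \\ \alpha & 1-\beta \end{pmatrix}
\begin{pmatrix} V^{n} \\ Y^{n} \end{pmatrix},
\qquad
\det\begin{pmatrix} \alpha - X & -\beta \\ \alpha & 1-\beta - X \end{pmatrix}
= X^2 - (\alpha - \beta + 1)X + \alpha .
\]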

Roots of equation 9 are:

X± = [(α − β + 1) ± √((α − β + 1)² − 4α)] / 2.   (10)

According to [2], the roots should be imaginary and close to the unit circle. Hence, we need (α − β + 1)² − 4α to be negative and α ≤ 1. Choosing

β = α + 1   (11)

allows us to meet the requirements and to have the imaginary roots

X± = ±i √α.   (12)

Finally, α should be taken such that:

α ≃ 1 and β = α + 1.   (13)

Experimental study To evaluate how well the algorithm performs, we define the accuracy, the hit ratio and the hit time.
Definition 6. We say that the algorithm has found the global best x0 of a function with the accuracy A ∈ R if the global best of the swarm Xgb returned at the end of the simulation verifies ‖Xgb − x0‖ ≤ A, where ‖·‖ is the Euclidean norm.

Definition 7. For N ∈ N the number of simulations run and Nhit ∈ N, Nhit ≤ N, the number of simulations that have found the global best with the accuracy A, we define the hit ratio hA as hA = Nhit / N.
Definition 8. We define the hit time of a simulation with the accuracy A as the minimum k ∈ N such that ‖Xgb^k − x0‖ ≤ A.
Definition 9. We define the computational hit time of a simulation with the accuracy A as the computational time in seconds used to find the global minimum with the accuracy A. It is an interesting criterion to compare algorithms whose iterations have different computing times (for example two PSOs with a different number of particles).
Local and global coefficient We work on the Ackley function defined in equation 4 with the search space −10 ≤ x, y ≤ 10. According to equation 13, α should be close to the value 1, and the simulations are run with α = 0.7, 7 particles and an accuracy of 10^-5. The variables of the simulations are β and δ. Simulation results are displayed in figure 8, where the hit ratio is plotted as a function of the local and global coefficients. The X axis displays the global coefficient and the Y axis the local coefficient. The best values found from these simulations are β = 1.6 and δ = 1.6. This is similar to the values in the literature: in [3] and [12] the recommended values are β = 1.7 and δ = 1.7, and in cite15 the authors use β = 1.5 and δ = 1.5.

Figure 8: Plot of the hit ratio for the Ackley function as a function of the local (β) and global (δ) coefficients. The X axis displays δ and the Y axis β. Accuracy is set at 10^-5 and a swarm of 7 particles is used. 100 simulations have been run for each point. The values which return the best hit ratio are 1.6 for both the local and global coefficients.

Figure 9 displays the hit ratio as a function of the global swarm attraction coefficient for fixed values of β and α. Results show that the hit ratio increases with the global coefficient δ until the value 2.5, where the ratio suddenly drops. Indeed, in the discrete case high values of the global coefficient prevent the convergence of the algorithm. Moreover, the algorithm fails at finding the minimum efficiently for low values of δ (less than 1). However, figure 9 shows that the coefficient δ can be chosen freely between 1 and 2.25 with very little consequence on the algorithm's performance.
Figure 10 displays the hit ratio as a function of the local attraction coefficient for fixed values of δ and α. The hit ratio slightly increases with the local coefficient. However, the algorithm still performs fairly well for low values of β. As for the global coefficient, the algorithm does not converge for values over 2.5.
Figure 9: Average hit ratio (blue curve) and hit time (red curve) as a function of the
global attraction coefficient δ with β = 1.6. Best value is δ = 1.6.
Figure 10: Average hit ratio (blue curve) and hit time (red curve) as a function of the local attraction coefficient β with δ = 1.6.

Self Attraction and number of birds With β and δ set to their best values, simulations are run for different values of α using the same parameters. Figure 11 displays the hit ratio as a function of α. The optimal value is α = 0.7 ([2] recommends 0.6).
Figure 11: Average Hit time and Hit ratio as a function of the friction coefficient α
Finally, setting α = 0.7, β = 1.6 and δ = 1.6, simulations are run for different numbers of particles. Figure 12 displays the results. The hit ratio quickly increases with the number of particles. However, increasing the number of particles has an impact on the computational hit time, and setting the number of particles is a compromise between the hit ratio and the computational hit time.
Figure 12: Average computation hit time and hit ratio as a function of number of
particles
Remark 1. We assume that the parameters α, β and δ do not depend on the function. However, the optimum number of birds depends on how complex the function is and on how wide the search space is.

1.2 Continuous PSO

1.2.1 Presentation of the C-PSO

The PSO is directly inspired by living swarms, which are continuous by nature. In order to imitate more closely the behavior of a swarming population, we smooth the algorithm and introduce a continuous version of the PSO (C-PSO, for Continuous Particle Swarm Optimization). The continuous equations equivalent to the discrete equations 2 and 3 for the i-th particle are:

V̇_i = −α V_i + β (Xlb_i − X_i) + δ (Xgb − X_i),   (14)

Ẋ_i = V_i.   (15)

V_i represents the velocity of the i-th bird at time t, X_i its position, Xlb_i its local best, Xgb the global best, and α, β and δ are respectively the friction, local and global coefficients. Equations 14 and 15 are differential equations that are solved numerically using Euler's method or a Runge-Kutta method with an adequate step dt.
Remark 2. To simplify the notation, V_i(t), X_i(t) and Xlb_i(t) are denoted by V_i, X_i and Xlb_i.
Remark 3. In the continuous version of the PSO, the equations governing the particles' movement are no longer randomized. However, the initial positions and velocities remain randomly distributed.
The update of the local best of each bird also needs to be smoothed. Indeed, testing at each iteration whether the cost function at the current position is better than the local best, and instantly updating it if relevant, introduces discontinuities in the differential equation. To tackle this issue, a continuous update of the local best as implemented in [3] is introduced:

Ẋlb_i = γ (X_i − Xlb_i) · (1 + sign(f(Xlb_i) − f(X_i))),   (16)

where γ is a real coefficient, f the cost function and sign the function defined by:

∀x ∈ R, sign(x) = −1 if x < 0, 0 if x = 0, 1 if x > 0.   (17)
Consequently, if the fitness of the current position is worse than that of the local best (that is, if f(X_i) ≥ f(Xlb_i)), we have sign(f(Xlb_i) − f(X_i)) = −1 and:
Ẋlb_i = 0.
On the other hand, if the current value is better than the local best, f(X_i) ≤ f(Xlb_i), then sign(f(Xlb_i) − f(X_i)) = 1 and
Ẋlb_i = 2γ (X_i − Xlb_i),
and the local best is continuously updated. In the classic version of the C-PSO as described in [3], the global best is not continuously updated: the newly found global best instantly replaces the previous position (a discrete jump). Therefore the update of the global best is discrete. However, in section 2.2 a completely smooth version of the continuous algorithm with a smooth global update is discussed. Figure 13 shows the trajectories of 3 particles converging on the two-dimensional Ackley function using the C-PSO with dt = 1/30.

Figure 13: Trajectories of 3 particles with continuous time PSO, dt = 1/30, on the Ackley function.

1.2.2 CPSO Parameters

Smoothing the PSO algorithm introduces new parameters: dt, the step size used to solve the ordinary differential equation, and γ from equation 16, which is the coefficient on the derivative of the local best of the particles. Continuous values of α, β and δ from equation 14 must also be found. Conducting a study similar to the one done in section 1.1.4 on the CPSO leads to the value α = 0.3 for the friction coefficient, β = 1 for the local swarm attraction and δ = 1 for the global swarm attraction. Simulation results give the optimum value γ = 4. Step sizes between 1/10 and 1/100 are usually used.
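To make the time discretization concrete, here is a minimal MATLAB sketch of one explicit Euler step of equations (14)-(16) for a single particle, using the parameter values above; it is a simplification (a single particle, with the global best held fixed during the step, and illustrative numeric values for the state), not the code of appendix B.

% One explicit Euler step of the C-PSO equations (14)-(16), single particle.
f     = @(p) sum(p.^2);               % example cost function (illustrative only)
alpha = 0.3; beta = 1; delta = 1;     % C-PSO coefficients (section 1.2.2)
gamma = 4;   dt = 1/30;               % local-best smoothing coefficient and step size
X   = [3; -2];   V   = [0; 0];        % current position and velocity (illustrative)
Xlb = [2; -1];   Xgb = [0.5; 0.5];    % current local best and (fixed) global best
dV   = -alpha*V + beta*(Xlb - X) + delta*(Xgb - X);    % right-hand side of (14)
dXlb = gamma*(X - Xlb) * (1 + sign(f(Xlb) - f(X)));    % right-hand side of (16)
V   = V   + dt*dV;                    % Euler update of the velocity
X   = X   + dt*V;                     % equation (15)
Xlb = Xlb + dt*dXlb;                  % smoothed local-best update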
Figure 14 displays the plot of the hit ratio as a function of the coefficients β and
δ for the Ackley function.

1.3 CPSO and PSO results and limitation

Simulations are conducted for both algorithms on the benchmark functions defined in the appendix. Main results for the PSO and the C-PSO are reported in figure 15

Figure 14: Plot of the hit ratio as a function of the local and global coefficient for
the Continuous PSO. X axis is β, Y axis is δ.
where Ns is the number of particles in the population, accuracy, hit ratio and hit time are defined in section 1.1.4, Nb of sim is the number of simulations conducted and C. time the computational hit time.
Simulations show that the C-PSO's performance is considerably better than that of its discrete version. The hit ratio achieved with the continuous version is better for all the functions tested except for the Egg holder function. However, the hit time is worse and the computational cost higher. The results displayed in figure 15 will be used as references when testing new algorithms.
Limitations Limitations arise on very complex functions presenting many local minima. Indeed, both PSO and C-PSO have very poor results, for instance, on the Egg Holder function, which is defined by:

f(x, y) = −(y + 47) sin(√|y + x/2 + 47|) − x sin(√|x − (y + 47)|),   (18)

where −512 ≤ x, y ≤ 512 and f(512, 404.2319) = −959.6407 is the global minimum.
Indeed, the best hit ratio achieved by the PSO is very low: 0.61 for the discrete version with 400 particles. This ratio cannot be increased by adding more particles. The main issue is the convergence of the swarm population to a local minimum instead of the global minimum. The quick convergence of the population does not give the particles enough time to explore the function space efficiently and find the global minimum. In the following section, modified C-PSO algorithms are discussed to improve the PSO's performance and notably to increase the exploration of the function space.

Name    Ns    Accuracy   Nb of sim   Hit ratio   Hit time   C. time   Function
PSO     7     10^-5      1000        0.9610      43.1290    0.0150    Ackley
C-PSO   7     10^-5      1000        1           644.9110   0.2460    Ackley
PSO     7     10^-5      1000        0.3500      53.8457    0.0168    Rastagrin
C-PSO   7     10^-5      1000        0.4870      367.0164   0.0293    Rastagrin
PSO     16    10^-5      1000        0.7670      42.2164    0.0268    Rastagrin
C-PSO   16    10^-5      300         0.8200      302.5244   0.0530    Rastagrin
PSO     7     10^-5      1000        0.8660      30.2252    0.0083    Levy
C-PSO   7     10^-5      300         1           234.1400   0.0418    Levy
PSO     400   10^-2      100         0.6100      54.5902    1.0504    Egg Holder
C-PSO   400   10^-2      100         0.3400      328.5000   7.0820    Egg Holder
PSO     7     10^-5      1000        0.4840      54.2955    0.0161    Shaffer
C-PSO   7     10^-5      1000        0.9830      154.2146   0.0115    Shaffer
PSO     50    10^-5      300         0.7167      89.6605    0.1756    Griewank
C-PSO   50    10^-5      300         0.7567      311.3568   0.1778    Griewank

Figure 15: Results of the PSO and C-PSO algorithms on the optimization functions.
Figure 16: Graph of the EggHolder function

2 Modified PSO Algorithms

2.1 Literature overview of PSO variants

This section presents a quick overview of different variants of the particle swarm algorithm that can be found in the literature. The aim is to introduce some of the main concepts and areas of research; a more exhaustive list can be found in [11]. In the original paper presenting the PSO algorithm in 1995 [2], Eberhart and Kennedy already introduced a modified version of the PSO called L-best/PSO, where the global best of the swarm is no longer a global value shared by the entire swarm but rather a local value shared by connected individuals. Indeed, particles can only compare their local best with the values of their closest neighbors. Hence, their global attraction is toward the best local best of the particles in the neighborhood. According to [2], this version is less vulnerable to local minima and particles tend to spread efficiently through the function space, aggregated in groups. However, the number of iterations required to find the global minimum is drastically increased.
The topology of how the local and global bests should be shared through the swarm has been an intense area of research. Different communication structures have been tested [7], [8]; notably, a structure called Von Neumann [9] has interesting results. In this structure, particles are less connected (the global best is shared by fewer individuals) than in the classic PSO but more than in the L-best/PSO, and an increase in performance has been noticed. Kennedy claims in [8] that highly connected swarm populations tend to perform better on simple problems, while swarms with few connections have better results on complex functions. Other approaches have been tried, such as the fully informed swarm (FI-PSO) [9], where the attraction force results from all the weighted local bests of a particle's neighbors; the discrete equations for the i-th particle are:
V_i = V_i + (1/N) Σ_{n=1}^{N} (Xlb_n − X_i),   (19)

X_i = X_i + V_i,   (20)

where N is the number of neighbors used, and Xlb_n is the local best of the n-th neighbor. According to [9], the FI-PSO has better performance but relies more on the population topology. Finally, in [6] a strategy where the global best can potentially be picked from the local best of any particle is introduced to prevent premature convergence.
Modifying the swarm connectivity is not the only way to improve the PSO algorithm. Adaptive PSO (A-PSO), presented in [10], defines different states: exploration, exploitation, convergence and jumping out. The current state is estimated by evaluating the population distribution, and each state changes the algorithm parameters (attraction coefficients, friction value, etc.). APSO also includes a learning strategy that allows the algorithm to jump out of local minima. The authors of [10] report very good results in terms of mobility and convergence speed. In [4], after convergence to a local minimum, the algorithm modifies the cost function by "stretching", elevating the local minimum and its neighborhood. The local minimum is therefore eliminated from the stretched function and the algorithm can use the modified function as its new cost function.

Nowadays, Particle Swarm Optimization is an extremely active field of research with many open questions.

2.2 Completely smooth C-PSO

2.2.1 Presentation of the CSC-PSO

One limitation of the continuous PSO in terms of smoothness is the discrete update of the global best. Indeed, at each iteration of the C-PSO, the global best of the swarm is instantly updated and replaced, if relevant, by the best value of the local bests. This discrete jump in the continuous differential equations governing the particles' movement can have destabilizing effects according to [3]. To solve this issue, a smooth update has been introduced and the global best is taken as the barycenter of the weighted local bests of the entire swarm. The cost function is assumed positive, without loss of generality, and each bird is weighted by the inverse of the cost function at its local best. The new equation describing the evolution of the swarm global best is:
x_gb = (1/K) Σ_k (1 / f(x_lb_k)) x_lb_k,   (21)

where K = Σ_k 1 / f(x_lb_k) is a normalization coefficient.
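As a concrete illustration, the weighted barycenter of equation (21) can be computed in a couple of lines of MATLAB. The sketch below assumes the local bests are stored row-wise in a matrix and that the cost function is positive and evaluated row-wise; the numeric values are illustrative only.

% Smooth global best of equation (21): barycenter of the local bests,
% each weighted by the inverse of its (positive) cost.
f   = @(P) sum(P.^2, 2) + 1;        % example positive cost, evaluated row-wise
Xlb = [1 2; -0.5 0.3; 4 -1];        % example 3-by-2 matrix of local bests
w   = 1 ./ f(Xlb);                  % weights 1/f(xlb_k)
K   = sum(w);                       % normalization coefficient K
xgb = (w' * Xlb) / K                % weighted barycenter, equation (21)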
A discontinuity remains in the local best derivative Ẋlb described by equation 16. Indeed, the sign function has a discontinuity at 0. In order to completely smooth the algorithm, we replace the sign function by the continuous function arctan. The new equation is:
ẋ_lb = γ (x − x_lb) · (1 + (2/π) arctan(f(x_lb) − f(x))).   (22)

2.2.2 Results

The performance of the CSC-PSO is tested and compared with the results of the C-PSO introduced in section 1.2. Figure 18 displays the results: Hit ratio ref and Hit time ref are the hit ratio and hit time of the C-PSO, used as a reference and tested with the same parameters on the same test functions (complete C-PSO results are displayed in figure 15). Overall, performances are slightly worse and the hit ratio is noticeably lower for the CSC-PSO (except for the Griewank and the Rastagrin functions). However, the algorithm still performs fairly well and has the merit of being completely smooth, which allows a proper mathematical study.


Figure 17: Graph of the function f(x) = 0.5 + sign(x) in red and g(x) = 0.5 + (1/π) arctan(x) in black. g(x) is used in the Completely Smooth C-PSO to prevent any discontinuities in the equation arising from the discontinuity at 0 of the sign function.
Function Tested   Ns    Nb of sim   Hit ratio   Hit ratio ref   Hit time   Hit time ref   C. time
Ackley            7     1000        0.9960      1               441.6416   644.9110       0.2185
Egg Holder        400   25          0           0.2667          -          330.8750       -
Levy              7     1000        0.7560      1               262.0357   234.1400       0.0291
Griewank          50    1000        0.7850      0.7567          299.3376   311.3568       0.1919
Rastagrin         7     1000        0.6260      0.4870          283.5607   367.0164       0.0304
Shaffer           7     1000        0.3060      0.4840          187.9412   54.2955        0.0194

Figure 18: Results of the CSC-PSO. Hit ratio ref is the hit ratio of the classic version of the C-PSO previously introduced, used as reference. Overall performances are slightly worse.

2.3 Random Jump C-PSO

The Random Jump continuous particle swarm optimization algorithm (RJC-PSO) is a first attempt to increase the mobility of the particles and their capacity to explore the function space efficiently. At any time, any particle has a probability of being reinitialized and reset randomly on the function domain with a random velocity. The purpose is to spread the swarm population and force it to explore the function space longer before converging to the global minimum. However, this algorithm fails to perform efficiently: reinitialized particles converge back to the global best without exploring the function space.
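A minimal MATLAB sketch of the reinitialization rule follows; the per-step reset probability p_reset and the search domain are illustrative assumptions, since the report does not specify them.

% Random-jump rule of the RJC-PSO: at each step, any particle may be reset.
Ns = 7;  dim = 2;
X = -10 + 20*rand(Ns, dim);  V = -1 + 2*rand(Ns, dim);   % current swarm state
p_reset = 0.001;                       % illustrative per-step reset probability (assumption)
for i = 1:Ns
    if rand < p_reset
        X(i, :) = -10 + 20*rand(1, dim);   % reset to a random position on the domain
        V(i, :) = -1  +  2*rand(1, dim);   % with a random velocity
    end
end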



Function Tested   Ns    Nl    Hit ratio   Hit time    C. time   Num of simul
Rastagrin         30    0     0.948       274.69      0.0985    500
Rastagrin         30    8     0.968       293.4855    0.1093    500
Rastagrin         30    10    0.974       296.4103    0.1051    500
Rastagrin         30    14    0.962       306.99      0.108     500
Rastagrin         30    30    0.714       702.33      0.253     500
Rastagrin         30    30    0.714       702.33      0.253     500
Egg holder        180   0     0.1650      333.5455    3.1730    200
Egg holder        180   50    0.1875      398.1       3.73      80
Egg holder        180   75    0.2750      394.4364    3.6853    200
Egg holder        400   0     0.3400      328.5000    7.0820    100
Egg holder        400   140   0.5000      375.2800    7.9326    100

Figure 19: Results of the LFC-PSO.

2.4 Leaders Follower

The Leaders and Followers C-PSO algorithm introduces two different populations of particles. One population, designated as leaders, has a lower value of the global attraction coefficient as well as a lower friction coefficient. Consequently, the leaders have a better mobility. Their task is to probe the function space and provide useful information to the rest of the swarm. The other population, called followers, has the classic values of the parameters. Their task is to ensure a precise convergence of the algorithm to the global best. Coefficient values used for the leading population are α = 0.1, β = 1 and δ = 0.3. Values for the following population are the usual values: α = 0.3, β = 1 and δ = 1. Figure 19 displays the results of the algorithm. Ns is the total number of particles deployed on the function space and Nl is the number of leaders. Nf = Ns − Nl is then the number of followers.
Adding the right number of leaders to the particle population increases the algorithm's performance. Indeed, with 30 particles the hit ratio of the classic C-PSO is 0.948 on the Rastagrin function. It increases to 0.974 with 10 leaders and 20 followers. On the Egg holder function, the hit ratio is 0.2750 with 105 followers and 75 leaders instead of 0.1875 with the C-PSO. Finally, on the same function the hit ratio is 0.5 with 400 particles and 140 leaders instead of 0.34 with the C-PSO.


3 Noisy PSO Algorithms

In this section, the exploration of the function space is improved by adding Brownian motion to the particles' equations. We recall that Brownian motion is the stochastic process W(t), 0 ≤ t, defined by:
Definition 10. The standard Brownian motion W(t), 0 ≤ t, is the real-valued stochastic process such that:
W(0) = 0,
W(t) − W(s) follows the normal law N(0, t − s),
non-overlapping increments are independent random variables.
Furthermore, we call a stochastic differential equation (in the Ito sense) an equation of the form

dX_t = b(t, X_t) dt + σ(t, X_t) dW_t,   (23)

where X_t is a stochastic process, b(t, X_t) its drift coefficient and σ(t, X_t) its diffusion (noise) coefficient.

3.1 Noise on the velocity

3.1.1 Presentation of the NVC-PSO

The main idea of the Noisy Velocity C-PSO is to add a noisy component with a variable variance to the velocity differential equation. The velocity differential equation for the i-th particle is turned into the stochastic differential equation

dV_i = (−α V_i + β (Xlb_i − X_i) + δ (Xgb − X_i)) dt + σ(t, X_i, Xgb) dW,   (24)

where α, β, δ are the previously defined real coefficients, V_i the i-th particle's velocity, X_i its position, Xlb_i its local best, dW the Ito differential of the Brownian motion and σ its variance. The choice of a suitable σ is critical. Indeed, by adding noise to the equation we increase the particles' mobility and improve their capacity to explore the function space efficiently. However, too much noise tends to prevent convergence. Consequently, setting σ to a fixed real value does not improve the performance. In order to have interesting results and allow convergence, σ for the i-th particle needs to tend toward zero when the particle is near Xgb. On the other hand, σ should increase with the particle's distance to the global best in order to increase the particle's mobility. Therefore, a suitable σ(t, X_i, Xgb) needs to verify:

σ(X_i, Xgb) → 0 when (X_i − Xgb) → 0,   (25)

σ(X_i, Xgb) → ∞ or σmax when (X_i − Xgb) → ∞.   (26)


In the NVC-PSO we use the function

σ(X_i, Xgb) = σ0 |X_i − Xgb|,   (27)

where σ0 is a real parameter and |X_i − Xgb| the Euclidean distance between the current position of the particle and the global best of the swarm. Finally, it is interesting in some cases to bound the function σ by a maximum value σmax to prevent chaotic behaviors.

Figure 20: Plot of the variance σ of the stochastic process as a function of (X_i − Xgb) with σ0 = 1 and σmax = 3.

Remark 4. In the algorithm, the Brownian motion is simulated by drawing at each time step an independent Gaussian increment of zero mean and variance dt, i.e. dW ≈ √dt N(0, 1).
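A minimal MATLAB sketch of one Euler-Maruyama step of equation (24) for a single particle follows; the variance is the bounded form of equation (27), and all numeric values (positions, bests, σmax, dt) are illustrative assumptions, not values from the report.

% One Euler-Maruyama step of the NVC-PSO velocity equation (24), single particle.
alpha = 0.3; beta = 1; delta = 1;            % C-PSO coefficients (section 1.2.2)
sigma0 = 0.5; sigmamax = 3; dt = 1/30;       % noise parameters and step size (illustrative)
X   = [3; -2];   V   = [0; 0];               % current position and velocity
Xlb = [2; -1];   Xgb = [0.5; 0.5];           % local and global best
sigma = min(sigma0*norm(X - Xgb), sigmamax); % variance function (27), bounded by sigmamax
drift = -alpha*V + beta*(Xlb - X) + delta*(Xgb - X);   % deterministic part of (24)
dW    = sqrt(dt)*randn(size(V));             % simulated Brownian increment
V = V + drift*dt + sigma*dW;                 % stochastic velocity update (24)
X = X + V*dt;                                % position update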
3.1.2 NVC-PSO results

The NVC-PSO has excellent results on the benchmark functions, especially on the complex ones. Figure 21 displays the results; as previously, Hit ratio ref and Hit time ref are the results of the C-PSO. For the Rastagrin function with 7 particles, the hit ratio goes from 0.461 with the classic C-PSO (no noise, σ0 = 0) up to 0.855 with the NVC-PSO (σ0 = 0.5). In this case, adding noise almost doubles the hit ratio. However, due to the noise, the average hit time is slightly longer (573 iterations on average instead of 367). Results are also very good on the Egg holder function. With 300 particles, the C-PSO has an extremely low hit ratio of 0.27. Using the same number of birds with the NVC-PSO (with σ0 = 0.5 and σmax = 30) gives a hit ratio of 0.63.
Figure 22 displays the hit ratio and the hit time of the NVC-PSO as a function of σ0, defined in equation 27. The hit ratio rises with σ0 until the optimum value σ0 = 0.5, where it starts to drop. The hit time rises linearly with σ0.

σ0    Ns    Hit ratio   Hit ratio ref   Hit time    Hit time ref   C. time   n of sim   Function
0.5   7     0.8550      0.4610          573.4094    366.6421       0.1173    1000       Rastagrin
0.5   300   0.633       0.2700          481.2632    344.2593       10.3484   30         Egg Holder
0.5   400   0.8133      0.340           4472.8811   328.50         11.6073   300        Egg Holder
0.5   7     1           0.983           211.5320    154.21         0.0363    1000       Shaffer
0.5   7     0.8933      0.7567          490.6791    311.3568       0.6211    300        Griewank

Figure 21: Results of the NVC-PSO algorithm for the different functions of the
optimization benchmark. Performances are increased drastically.

Figure 22: Hit ratio and hit time of the NVC-PSO as a function of σ0. Each point: 150 simulations, 7 particles.

3.2 Noise on the position

3.2.1 The NPC-PSO

Adding noise to the velocity has the advantage of keeping the particles' trajectories relatively smooth. However, adding noise directly to the position is a more direct

way to increase the population's mobility. In the NPC-PSO's case (Noise on Position - Continuous PSO), the stochastic differential equation governing the i-th particle is

dX_t = V_t dt + σ(X_t, t) dW_t,   (28)

where X_t is the position of the particle, V_t its velocity, σ the variance function of the process and dW the Brownian motion. The choice of the function σ is subject to the same constraints and is defined as previously:

σ(X_i, Xgb) = σ0 |X_i − Xgb|.   (29)

The function σ can also be bounded by a maximum value σmax.
3.2.2 NPC-PSO's results

The NPC-PSO algorithm has excellent results. On the Rastagrin function with 7 particles and σ0 = 0.5, the hit ratio is 0.8920. It is noticeably better than the NVC-PSO, which had a best hit ratio of 0.8550. Results on the Egg Holder function are also very good. With 300 particles and σ0 = 0.5, the hit ratio (accuracy = 10^-2) is 1. It is better than the NVC-PSO, which with the same number of particles had a hit ratio of 0.63.
Figure 23: Hit ratio and hit time of the NPC-PSO as a function of σ0 (data from 150 simulations with 7 particles).


Name      σ0    Ns    Accuracy   Number of sim   Hit ratio   Hit time   C. time   Function
NPC-PSO   0.5   7     10^-5      1000            0.8920      506.6816   0.0869    Rastagrin
NPC-PSO   0.5   300   10^-2      30              0.9667      212.5172   4.5844    Egg Holder
NPC-PSO   0.5   300   10^-2      100             1           209.3100   4.2697    Egg Holder
NPC-PSO   0.5   50    10^-5      300             0.9967      232.77     0.259     Griewank

Figure 24: Results for the different functions of the optimization benchmark.

Conclusion
Particle Swarm Optimization is an extremely interesting alternative for solving minimization problems on a cost function. Easy to implement, this algorithm does not use any advanced mathematical operators and consequently has a very low computational cost. Based on swarming theory, algorithms from the PSO family deploy a population of moving particles on the function space. By evaluating the function at their current position and keeping track of the positions of the values with the best fitness achieved, the particles communicate with each other to track down the global minimum. The discrete version of the algorithm presented in section 1 already performs fairly well on complex functions with many local minima. The continuous version, introduced to imitate more closely real swarm behavior, increases the performance. However, a fast swarm convergence prevents in some cases the particles from exploring the function space efficiently, and the PSO algorithms have relatively poor results on very complex functions. To tackle this issue, more complex versions of the PSO algorithm have been presented. The RJC-PSO randomly reinitializes particles to increase the exploration of the search space. The LFC-PSO introduces two different populations with different mobility and has encouraging results. Real improvements in performance come from the NVC-PSO and NPC-PSO, which add noise to their governing equations. The first one adds noise to the velocity and presents very good results on all the functions tested. The second one adds noise directly to the position and achieves excellent results.

A Function benchmark

Ackley function

f(x, y) = −a exp(−b √((x² + y²)/2)) − exp((cos(c x) + cos(c y))/2) + a + e,   (30)

where a = 20, b = 0.2 and c = 2, and the domain is −10 ≤ x, y ≤ 10. f(0, 0) = 0 is the global minimum.


Figure 25: Plot of the Ackley function
Egg Holder function

f(x, y) = −(y + 47) sin(√|y + x/2 + 47|) − x sin(√|x − (y + 47)|),   (31)

where −512 ≤ x, y ≤ 512 and f(512, 404.2319) = −959.6407 is the global minimum.
Figure 26: Graph of the EggHolder function

Griewank function

f(x, y) = (x² + y²)/4000 − cos(x/√2) cos(y/√2) + 1,   (32)
Domain search is −600 ≤ x, y ≤ 600. f(0,0) = 0 is the global minimum.
Figure 27: Plot of the Griewank function



Levy function

f(x, y) = sin²(3πx) + (x − 1)²(1 + sin²(3πy)) + (y − 1)²(1 + sin²(2πy)),   (33)

Domain search is −10 ≤ x, y ≤ 10. f(1,1) = 0 is the global minimum.

Figure 28: Plot of the Levy function

Rastagrin function

f(x, y) = 20 + (x² − 10 cos(2πx)) + (y² − 10 cos(2πy)).   (34)

Domain search is −10 ≤ x, y ≤ 10. f(0,0) = 0 is the global minimum.

Figure 29: Plot of the Rastagrin function

Shaffer function

f(x, y) = 0.5 + (sin²(x² − y²) − 0.5) / (1 + 0.001 (x² + y²))².   (35)

Domain search is −100 ≤ x, y ≤ 100. f(0,0) = 0 is the global minimum.
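For convenience, the two-dimensional benchmark functions above can be written as MATLAB anonymous functions. The following definitions simply transcribe equations (30)-(35); they are a sketch for quick experimentation, not the code of appendix B.

% Benchmark functions (30)-(35) as anonymous functions of a point (x, y).
a = 20; b = 0.2; c = 2;
ackley    = @(x,y) -a*exp(-b*sqrt((x.^2 + y.^2)/2)) ...
                   - exp((cos(c*x) + cos(c*y))/2) + a + exp(1);
eggholder = @(x,y) -(y + 47).*sin(sqrt(abs(y + x/2 + 47))) ...
                   - x.*sin(sqrt(abs(x - (y + 47))));
griewank  = @(x,y) (x.^2 + y.^2)/4000 - cos(x/sqrt(2)).*cos(y/sqrt(2)) + 1;
levy      = @(x,y) sin(3*pi*x).^2 + (x - 1).^2.*(1 + sin(3*pi*y).^2) ...
                   + (y - 1).^2.*(1 + sin(2*pi*y).^2);
rastagrin = @(x,y) 20 + (x.^2 - 10*cos(2*pi*x)) + (y.^2 - 10*cos(2*pi*y));
shaffer   = @(x,y) 0.5 + (sin(x.^2 - y.^2).^2 - 0.5) ./ (1 + 0.001*(x.^2 + y.^2)).^2;
ackley(0, 0)   % example: value at the global minimum, should be 0 (up to rounding)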

B Matlab codes

B.1 PSO Algorithm

B.2 C-PSO Algorithm


Figure 30: Plot of the Shaffer function

B.3 NVC-PSO Algorithm

References
[1] Mordecai Avriel, Nonlinear Programming: Analysis and Methods. Dover Publishing, 2003
[2] Russel Eberhart and James Kennedy, A New Optimizer Using Particle Swarm
Theory. Proceedings Sixth Symposium on Micro Machine and Human Science;
1995
[3] Hassan M. Emara and Hossam A. Abdel Fattah, Continuous Swarm Optimization
Technique with Stability Analysis. Boston, Massachusetts, 2004
[4] K.E. Parsopoulos and M.N. Vrahatis, Recent approaches to global optimization
problems through Particle Swarm Optimization. Kluwer Academic Publishers,
2002
[5] C. W. Reynolds, Flocks, herds and schools: a distributed behavioral model. Computer Graphics, 1987.
[6] J. J. Liang, A. K. Qin, Ponnuthurai Nagaratnam Suganthan and S. Baskar,
Comprehensive Learning Particle Swarm Optimizer for Global Optimization of
Multimodal Functions. IEEE Transactions on Evolutionary Computation, 2006
[7] Watts and Strogatz, Collective dynamics of small-world networks. Nature, 1998
[8] James Kennedy, Small worlds and mega-minds: effects of neighborhood topology
on particle swarm performance. IEEE, 1999
[9] Kennedy, J., Mendes, R., Population structure and particle swarm performance.
IEEE, 2002

[10] Zhi-Hui Zhan, Jun Zhang, Yun Li, and Henry Shu-Hung Chung, Adaptive Particle Swarm Optimization. IEEE Transactions on Systems, Man, and Cybernetics, 2009
[11] Riccardo Poli, James Kennedy and Tim Blackwell, Particle swarm optimization.
An overview. Springer Science and Business Media, 2007
[12] I.C. Trelea, The particle swarm optimization algorithm: Convergence analysis
and parameter selection. Information Processing Letters, 2003
[13] M. Clerc, J. Kennedy, The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 2002



