
Available online at www.sciencedirect.com

European Journal of Operational Research 193 (2009) 73–85
www.elsevier.com/locate/ejor

Discrete Optimization

A new discrete particle swarm optimization approach for the
single-machine total weighted tardiness scheduling problem
with sequence-dependent setup times
Davide Anghinolfi, Massimo Paolucci *

Department of Communication, Computer and Systems Sciences, University of Genova, Via Opera Pia 13, Genova, Italy

* Corresponding author. Tel.: +39 010 3532996; fax: +39 010 3532948. E-mail addresses: anghinolfi@dist.unige.it (D. Anghinolfi), paolucci@dist.unige.it (M. Paolucci).

Received 23 March 2007; accepted 25 October 2007
Available online 5 November 2007

Abstract
In this paper we present a new Discrete Particle Swarm Optimization (DPSO) approach to face the NP-hard single machine total weighted tardiness scheduling problem in the presence of sequence-dependent setup times. Differently from previous approaches, the proposed DPSO uses a discrete model for both particle position and velocity and a coherent sequence metric. We tested the proposed DPSO mainly on a benchmark originally proposed by Cicirello in 2003 and available online. The results obtained show the competitiveness of our DPSO, which is able to outperform the best known results for the benchmark. In addition, we also tested the DPSO on a set of benchmark instances from ORLIB for the single machine total weighted tardiness problem, and we analysed the role of the DPSO swarm intelligence mechanisms as well as of the local search intensification phase included in the algorithm.
© 2007 Elsevier B.V. All rights reserved.
Keywords: Metaheuristics; Particle swarm optimization; Scheduling

1. Introduction

Particle swarm optimization (PSO) is a population-based metaheuristic, introduced in Kennedy and Eberhart (1995) and originally designed for continuous optimization problems, whose discrete version (discrete PSO, DPSO) has recently been the subject of a growing number of studies. In this paper we propose a new DPSO approach to face the single machine total weighted tardiness scheduling with sequence-dependent setup times (STWTSDS) problem. Scheduling with performance criteria involving due dates, such as (weighted) total tardiness or total earliness and tardiness (E–T), and taking sequence-dependent setups into account, is a reference problem in many real industrial contexts. Meeting due dates is in fact recognized as the most important objective in surveys on manufacturing
practice, e.g., in Wisner and Siferd (1995). The objective
of minimizing the total weighted tardiness has been the subject of a very large amount of literature on scheduling, even if sequence-dependent setups have not been so frequently considered. Setups usually correspond to preparing the production resources (e.g., the machines) for the execution of the next job, and when the duration of such operations depends on the type of the last completed job, the setups are called sequence-dependent. The presence of sequence-dependent setups greatly increases the problem difficulty, since it prevents the application of the dominance conditions used for simpler tardiness problems (Rubin and Ragatz, 1995). The choice of the STWTSDS problem as the reference application for the proposed DPSO approach has two main motivations: first, the fact that the solution of single machine problems is often required even in more complex environments (Pinedo, 1995), and second, the absence, to the best of the authors' knowledge, of any other DPSO approach in the literature for the STWTSDS problem. Regarding the latter point, note that the approach in

Tasgetiren et al. (2004) seems to be the only previous
DPSO application to the single machine total weighted tardiness (STWT) problem.
The rest of the paper is organized as follows. Section 2 introduces a formal problem definition and provides a general review of the relevant literature. Section 3 illustrates the basic aspects of the PSO algorithm, analysing in particular the DPSO approaches previously proposed in the literature. Section 4 then describes the proposed DPSO approach, discussing how it can be applied to the STWTSDS problem and highlighting the new features introduced. Section 5 presents the extensive experimental campaign performed, which is mainly based on the benchmark set generated by Cicirello (2003) and available on the web, whose best known results have recently been improved by the algorithms proposed in Liao and Juan (2007), Cicirello (2006), Lin and Ying (2006) and Anghinolfi and Paolucci (in press), and on a subset of the ORLIB benchmark for the STWT problem. Finally, Section 6 draws some conclusions.
2. The STWTSDS problem
The STWTSDS problem corresponds to the scheduling of n independent jobs on a single machine. All the jobs are released simultaneously, i.e., they are ready at time zero; the machine is continuously available and it can process only one job at a time. For each job j = 1, ..., n, the following quantities are given: a processing time $p_j$, a due date $d_j$ and a weight $w_j$. A sequence-dependent setup time $s_{ij}$ must be waited before starting the processing of job j if it is immediately sequenced after job i. The tardiness of a job j is defined as $T_j = \max(0, C_j - d_j)$, $C_j$ being the completion time of job j. The scheduling objective is the minimization of the total weighted tardiness, expressed as $\sum_{j=1}^{n} w_j T_j$.
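To make the objective concrete, the following minimal Python sketch (ours, not the authors' code) evaluates the total weighted tardiness of a given job sequence under sequence-dependent setups; for simplicity it assumes no setup before the first job.

```python
# Illustrative sketch (not from the paper): total weighted tardiness of a sequence
# with sequence-dependent setup times; jobs are indexed 0..n-1 and no setup is
# assumed before the first job (an assumption of this sketch).
def total_weighted_tardiness(seq, p, d, w, s):
    """p, d, w: processing times, due dates, weights; s[i][j]: setup time of job j
    when it immediately follows job i."""
    completion, prev, total = 0.0, None, 0.0
    for j in seq:
        completion += (s[prev][j] if prev is not None else 0.0) + p[j]  # C_j
        total += w[j] * max(0.0, completion - d[j])                     # w_j * T_j
        prev = j
    return total

# Tiny example with three jobs.
p, d, w = [2, 4, 3], [3, 6, 5], [1, 2, 1]
s = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(total_weighted_tardiness([0, 2, 1], p, d, w, s))
```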
This problem, denoted as $1/s_{ij}/\sum w_j T_j$, is strongly NP-hard since it is a special case of $1//\sum w_j T_j$, which has been proven to be strongly NP-hard in Lawler (1977) (note that the $1//\sum T_j$ special case is still NP-hard (Du and Leung, 1990)). In the literature both exact algorithms and heuristic
algorithms have been proposed for the STWTSDS problem or for a slightly different version disregarding the job weights. However, since only instances of small dimensions can be solved by exact approaches, recent research efforts have focused on the design of heuristics. The apparent tardiness cost with setups (ATCS) heuristic (Lee et al., 1997) is currently the best constructive approach for the STWTSDS problem. Constructive heuristics require a small computational effort, but they are generally outperformed by improvement approaches, based on local search algorithms, and by metaheuristics, which on the other hand are much more demanding in computation time. The effectiveness of such approaches has been largely demonstrated: for example, Potts and van Wassenhove (1991) show how simple pair-wise interchange methods outperform dispatching rules for the STWT problem, and more recently constructive heuristics appear dominated by a memetic algorithm in França et al. (2001) and by a hybrid metaheuristic in Anghinolfi and Paolucci (2007), where a similar parallel machine case is considered. The effectiveness of stochastic search procedures for the STWTSDS is shown in Cicirello and Smith (2005), where the authors compare a value-biased stochastic sampling (VBSS), a VBSS with hill-climbing (VBSS-HC) and a simulated annealing (SA) with limited discrepancy search (LDS) and heuristic-biased stochastic sampling (HBSS) on the 120 benchmark problem instances for the STWTSDS problem defined by Cicirello (2003).
The literature on applications of metaheuristics to scheduling is quite extensive. In Liao and Juan (2007) an ant colony optimization (ACO) algorithm for the STWTSDS is proposed, which is able to improve about 86% of the best known results for Cicirello's benchmark previously found by the stochastic search procedures in Cicirello and Smith (2005). Recently, Cicirello's best known results have been further and independently improved in Cicirello (2006) by means of a GA approach, in Lin and Ying (2006) with three SA, GA and tabu search (TS) algorithms, and in Anghinolfi and Paolucci (in press) using an ACO approach; in particular, the new set of best known results established by Lin and Ying (2006), which improved more than 71% of the previous best known solutions, was lastly updated by the ACO in Anghinolfi and Paolucci (in press), which was able to improve 72.5% of the Lin and Ying solutions.
3. Overview of the basic PSO algorithm and its discrete variants

Particle Swarm Optimization (PSO) is a recent metaheuristic approach motivated by the observation of the social behaviour of composed organisms, such as bird flocking and fish schooling, and it tries to exploit the idea that the knowledge driving the search for the optimum is amplified by social interaction. PSO executes a population-based search procedure in which the exploring agents, called particles, adjust their positions over time (the particles fly) according not only to their own experience, but also to the experience of other particles: in particular, a particle may modify its position with a velocity that in general includes a component moving the particle towards the best position achieved so far by the particle itself, to take into account its personal experience, and a component moving the particle towards the best solution achieved so far by any particle in a set of neighbouring particles (local neighbourhood) or by any of the exploring particles (global neighbourhood). Note that a local instead of a global neighbourhood generally increases the exploration capability of the swarm but also the convergence time. Differently from genetic algorithms, the PSO population is maintained and not filtered. PSO is based on the Swarm Intelligence (SI) concept (Kennedy and Eberhart, 2001): the agents are able to exchange information in order to
share experiences, and the performance of the overall multi-agent system (the swarm) emerges from the collection of the simple agents' interactions and actions. PSO was originally developed for continuous nonlinear optimization (Kennedy and Eberhart, 1995; Abraham et al., 2006). The basic algorithm for a global optimization problem, corresponding to the minimization of a real objective function f(x) of a variable vector x defined on an n-dimensional space, uses a population (swarm) of m particles; each particle i of the swarm is associated with a position in the continuous n-dimensional search space, $x_i = (x_{i1}, \ldots, x_{in})$, and with the corresponding objective value $f(x_i)$ (fitness). For each particle i, the best previous position, i.e. the one where the particle found the lowest objective value (personal best), and the last particle position change (velocity) are recorded and represented respectively as $p_i = (p_{i1}, \ldots, p_{in})$ and $v_i = (v_{i1}, \ldots, v_{in})$. The position associated with the smallest function value found so far is denoted as $g = (g_1, \ldots, g_n)$ (global best). Denoting by $x_i^k$ and $v_i^k$ respectively the position and velocity of particle i at iteration k of the PSO algorithm, the following equations are used to iteratively modify the particles' velocities and positions:
$$v_i^{k+1} = w\,v_i^k + c_1 r_1 (p_i - x_i^k) + c_2 r_2 (g - x_i^k), \quad (1)$$
$$x_i^{k+1} = x_i^k + v_i^{k+1}, \quad (2)$$
where w is the inertia parameter that weights the previous particle velocity; $c_1$ and $c_2$, respectively called the cognitive and the social parameter, multiplied by two random numbers $r_1$ and $r_2$ uniformly distributed in [0, 1], are used to weight the velocity towards the particle's personal best, $(p_i - x_i^k)$, and towards the global best solution, $(g - x_i^k)$, found so far by the whole swarm. The new particle position is determined in (2) by adding the new velocity computed in (1) to the particle's current position. The PSO velocity model given by (1) and (2) is called gbest in Kennedy and Eberhart (2001), where also an lbest model is introduced: in this latter model the information about the global best position found so far by the whole group of particles is replaced by the local best position for each particle i, $l_i = (l_{i1}, \ldots, l_{in})$, i.e., the position of the best particle found so far among a subset of particles nearest to i. The PSO parameters that must be fixed are the inertia w, the cognitive and social parameters $c_1$ and $c_2$, and finally the dimension of the swarm m; taking into account that in the standard PSO for continuous optimization $c_1 + c_2 = 4.1$ (Clerc and Kennedy, 2002), the number of parameters needed by this metaheuristic is quite small.
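As a reference point for the discrete variant introduced later, a minimal Python sketch of the continuous gbest update of Eqs. (1) and (2) is given below (our illustration; the parameter values and function names are placeholders, not the paper's).

```python
import random

# Minimal sketch (ours) of one continuous gbest PSO step, Eqs. (1)-(2).
def pso_step(x, v, p_best, g_best, w=0.7, c1=2.05, c2=2.05):
    """x, v, p_best: lists of n-dimensional lists (one per particle);
    g_best: the best position found so far by the whole swarm."""
    for i in range(len(x)):
        for k in range(len(x[i])):
            r1, r2 = random.random(), random.random()
            v[i][k] = (w * v[i][k]
                       + c1 * r1 * (p_best[i][k] - x[i][k])   # cognitive component
                       + c2 * r2 * (g_best[k] - x[i][k]))     # social component
            x[i][k] += v[i][k]                                # Eq. (2)
    return x, v
```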
In recent years several studies applying the PSO approach to discrete combinatorial optimization problems have appeared in the literature; however, to the best of the authors' knowledge, none of them faced the STWTSDS problem. PSO has been applied to combinatorial optimization problems such as the traveling salesman problem (TSP) (Pang et al., 2004), the vehicle routing problem (Chen et al., 2006), and scheduling problems (Tasgetiren et al., 2004; Liao et al., 2007; Lian et al., 2006a,b; Allahverdi and Al-Anzi, 2006; Parsopoulos and Vrahatis, 2006). DPSO approaches differ both in the way they associate a particle position with a discrete solution and in the velocity model used; in particular, as the solutions to the kind of combinatorial problem considered in this paper correspond to permutations, we can classify the DPSO approaches in the literature according to three kinds of solution-particle mapping, i.e., binary, real-valued and permutation-based, and three kinds of velocity model, i.e., real-valued, stochastic or based on a list of moves. The first DPSO algorithm, proposed in Kennedy and Eberhart (1997), is characterized by a binary solution representation and a stochastic velocity model, since it associates the particles with n-dimensional binary variables and the velocity with the probability for each binary dimension to take value one. A variation of this DPSO to face flow shop scheduling problems is defined in Liao et al. (2007). Several binary solution representation models are proposed and tested in Mohan and Al-kazemi (2001), but no conclusive indication was drawn. A different model is used in Tasgetiren et al. (2004) to develop a PSO algorithm for the STWT problem and in Tasgetiren et al. (2007) for total flowtime minimization in permutation flow shop problems: using a technique similar to the random key representation (Bean, 1994), real values are associated with the particle dimensions to represent the job places in the scheduling sequence, and the smallest position value (SPV) rule is exploited to transform the particle positions into job permutations. More recently the STWT problem has been faced in Parsopoulos and Vrahatis (2006) by a unified PSO (UPSO) adopting the same particle and velocity models of Tasgetiren et al. (2004) and using the SPV rule. The UPSO (Parsopoulos and Vrahatis, 2004) combines different exploitation and exploration features of PSO variants; in particular, a unification factor u ∈ [0, 1] is used to compute the particles' velocity as a convex combination of the velocities (possibly modified by a stochastic mutation mechanism) obtained according to the gbest and lbest models. Tests performed on the smaller size instances of the ORLIB benchmark without local search intensification showed that the unified approach outperforms both the gbest and lbest PSO variants. Similar real-valued solution-particle mappings are also used for priority or rank based representations, e.g., for job shop (Sha and Hsu, 2006), no-wait flow shop (Liu et al., 2005), and resource constrained project scheduling (Zhang et al., 2006), together with appropriate rules similar to SPV or heuristic procedures to derive feasible schedules from the particle positions; note that in these approaches a real-valued velocity vector is adopted, with components usually bounded in a $[v_{\min}, v_{\max}]$ interval. Permutation-based solution-particle mappings are used in Hu et al. (2003) for the n-queens problem together with a stochastic velocity model, representing the probability of swapping items between two permutation places, and a mutation operator, consisting of a random swap executed whenever a particle coincides with the local (global) best one. In Lian et al.

(2006a) particles are associated with job sequences and velocities are implemented as crossover and mutation operators, borrowed from the genetic algorithm approach, to minimize the makespan in permutation flowshops; similarly, the same authors propose in Lian et al. (2006b) a DPSO approach to job-shop scheduling, showing in both cases that the DPSO outperforms a standard GA approach. Recently, Allahverdi and Al-Anzi (2006) compare a TS, a DPSO and the earliest due date (EDD) heuristic for the minimization of the maximum lateness in assembly flowshop scheduling, finding that the DPSO significantly outperforms the other approaches for the instances characterized by the tightest due dates. In that approach particles are associated with job sequences, and two velocities, V1 and V2, corresponding to probabilities of changing job places in a sequence, move the particles respectively towards their personal best and the global best position.
The velocity models used in all the DPSO approaches mentioned above are either stochastic or real-valued. To the best of the authors' knowledge, the unique example of a velocity model based on a list of moves can be found in the DPSO approach for the TSP in Clerc (2004). The reason why this kind of model has not been investigated in the scheduling literature may be the difficulty of defining new appropriate sum and multiplication operators for Eqs. (1) and (2) to make them work in a discrete solution space. Nevertheless, in the following section we propose a new DPSO approach to single machine scheduling based both on a permutation solution-particle representation and on a list-of-moves velocity model.
4. The proposed DPSO approach
Let us first introduce some notation. In general, a solution x to the problem of scheduling n independent jobs on a single machine is associated with a sequence $\sigma = ([1], \ldots, [n])$. In addition, we denote by $u_\sigma: \{1, \ldots, n\} \to \{1, \ldots, n\}$ the mapping between the places in a sequence $\sigma$ and the indexes of the sequenced jobs; for example, if job j is sequenced in the hth place of $\sigma$ we have $u_\sigma(h) = j$. In the proposed DPSO we consider a set of m particles; each particle i is associated with a sequence $\sigma_i$, i.e., a schedule $x_i$, and it has a fitness given by the cost value $Z(x_i)$. Thus, the space explored by the flying particles is that of the sequences, or permutations. In the following we introduce a metric for such a space, called the sequence metric, that is, a set of operators to compute velocities and to update particles' positions consistently.
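In code, a particle position can be stored simply as the permutation σ itself (place → job), with the job → place mapping obtained as its inverse; a small illustration (ours) follows.

```python
# Illustration (ours): sigma lists the job in each place (u_sigma), and place_of
# is its inverse, giving the place occupied by each job.
sigma = [2, 3, 1, 4]                                   # u_sigma(1)=2, u_sigma(2)=3, ...
place_of = {job: h for h, job in enumerate(sigma, 1)}  # job -> place (1-based)
assert sigma[place_of[1] - 1] == 1                     # consistency check
```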
4.1. The particle velocity and the sequence metric operators
Given a pair of particles p and q, we define the distance between them as the difference between the associated sequences (position difference), i.e., $\sigma_q - \sigma_p$, which corresponds to a list of moves that we call pseudo-insertion (PI) moves. We denote a PI move as (j, d), where d is the integer displacement that must be applied to job j to direct particle p towards q. Roughly speaking, assuming for example that $u_\sigma(h) = j$, a positive displacement d corresponds to a towards-right move that extracts job j from its current place h and reinserts it in place $\min(h + d, n)$ of the sequence, so generating a new sequence $\sigma'$ such that $u_{\sigma'}(\min(h + d, n)) = j$, and a corresponding solution x'; analogously, a negative displacement $-d$ corresponds to a towards-left extraction and reinsertion move that generates a new sequence $\sigma'$ such that $u_{\sigma'}(\max(h - d, 1)) = j$.
The difference between the positions of two particles p and q defines a velocity v, which consequently is a set of PI moves; then, applying the PI moves in v to p we can move this particle to the position of particle q. The following example illustrates this concept. Let the number of jobs be n = 4 and the sequences corresponding to the positions of two particles p and q be respectively $\sigma_p = (1, 2, 3, 4)$ and $\sigma_q = (2, 3, 1, 4)$; then, the velocity associated with the difference between the two positions is $v = \sigma_q - \sigma_p = \{(1, 2), (2, -1), (3, -1)\}$; here the PI move (1, 2) denotes that job 1 must be delayed (moved towards-right) by 2 places in the sequence to direct particle p towards q. Note that a velocity can include at most a single PI move for a given job. The reason why we call such moves ''pseudo-insertion'' is that, as detailed in the following, the rule used to apply the PI moves in a velocity to a sequence may in general fail to produce a feasible sequence; it may instead produce a so-called pseudo-sequence, and we need to introduce a final sequence completion procedure to correctly implement Eq. (2) in the sequence metric.
Summing a velocity to a particle position (position-velocity sum), we move that particle to a different point in the sequence space; in the previous example, we must obtain the sequence $\sigma_q$ as $\sigma_p + v$. The position-velocity sum operator applies the PI moves composing the velocity one at a time, first to the initial sequence and then to the pseudo-sequences successively obtained, hereafter denoted by $\pi$. Let $\pi_0 = \sigma$ be the initial sequence and $v = \{(j, d)_k, k = 1, \ldots, pm\}$ a velocity made of pm PI moves; then $\sigma + v$ is computed as
$$\pi_k = \pi_{k-1} \odot (j, d)_k, \quad k = 1, \ldots, pm, \quad (3)$$
$$\sigma + v = \rho(\pi_{pm}), \quad (4)$$
where $\odot$ in (3) denotes the extract-reinsert operator that generates the pseudo-sequence $\pi_k$ by displacing job j by d positions, and $\rho$ in (4) is the sequence completion procedure defined in the following. In general, the pseudo-sequences produced by the extract-reinsert operator do not correspond to feasible sequences, since some sequence places may be left empty whereas some others may contain a list of jobs. If, for example, we apply the move (1, 2) to $\sigma_p = (1, 2, 3, 4)$, we obtain the pseudo-sequence $\pi = (\cdot, 2, [3, 1], 4)$, where ''$\cdot$'' denotes that no job is assigned to the first place of $\pi$, whereas [3, 1] represents the ordered set of jobs assigned to the third place of $\pi$. Let us denote by $\pi(h)$ the ordered set of items in the hth place of the pseudo-sequence $\pi$, by pull(s) the function that extracts the first element from an ordered set s, and by push(i, s) the function that inserts the element i at the bottom of the set s. Then, in order to convert a pseudo-sequence into a feasible sequence, the sequence completion procedure manages $\pi(h)$ as a first-in-first-out (FIFO) list, as reported in Fig. 1. As an example, the behaviour of such a procedure for the pseudo-sequence $\pi = ([1, 3], \cdot, \cdot, [4, 2])$ is shown in Fig. 2. The procedure considers one place of $\pi$ at a time, starting from the first one on the left; since an ordered set of jobs is encountered in place h = 1, the first job is extracted and reinserted in the first following empty position (h = 2), and the pseudo-sequence is updated as $(3, 1, \cdot, [4, 2])$; then place h = 2 is skipped because it contains just one job. In h = 3 an empty place is encountered, so the procedure extracts a job from the next non-empty place, here place 4 containing the FIFO list [4, 2], and reinserts it in place 3; after this step the final feasible sequence (3, 1, 4, 2) is obtained.
It is easy to verify that the iterated application of the extract-reinsert operator in (3) to compute $\sigma_p + v$ in the case of the first example, where $\sigma_p = (1, 2, 3, 4)$, $\sigma_q = (2, 3, 1, 4)$ and $v = \{(1, 2), (2, -1), (3, -1)\}$, directly gives the target sequence $\sigma_q$, since $\pi_0 = (1, 2, 3, 4)$, $\pi_1 = (\cdot, 2, [3, 1], 4)$, $\pi_2 = (2, \cdot, [3, 1], 4)$ and finally $\pi_3 = (2, 3, 1, 4)$.
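The extract-reinsert operator and the FIFO-based sequence completion just described can be sketched in Python as follows (our reconstruction from the description above, not the authors' code); a pseudo-sequence is represented as a list of FIFO lists, one per place.

```python
# Our reconstruction (not the authors' code) of the position-velocity sum:
# a pseudo-sequence is a list of FIFO lists, one per place; empty places are [].
def extract_reinsert(pseudo, job, d):
    """Apply one PI move (job, d): remove the job from its place and append it
    to the FIFO list of the target place, clamped to the sequence bounds."""
    n = len(pseudo)
    h = next(i for i, slot in enumerate(pseudo) if job in slot)
    pseudo[h].remove(job)
    pseudo[min(max(h + d, 0), n - 1)].append(job)
    return pseudo

def complete(pseudo):
    """Sequence completion: scan places left to right, pulling a job from the
    next non-empty place into an empty one and pushing surplus jobs forward."""
    n = len(pseudo)
    for h in range(n):
        if not pseudo[h]:                            # empty place
            k = h + 1
            while k < n and not pseudo[k]:
                k += 1
            pseudo[h].append(pseudo[k].pop(0))       # pull from next non-empty place
        while len(pseudo[h]) > 1:                    # surplus jobs
            pseudo[h + 1].append(pseudo[h].pop(0))   # push onto the next place
    return [slot[0] for slot in pseudo]

# sigma_p + v with v = {(1, +2), (2, -1), (3, -1)} reproduces sigma_q = (2, 3, 1, 4).
pseudo = [[j] for j in (1, 2, 3, 4)]
for move in [(1, 2), (2, -1), (3, -1)]:
    pseudo = extract_reinsert(pseudo, *move)
print(complete(pseudo))   # [2, 3, 1, 4]
```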
A velocity v can be summed to another velocity v', producing a new velocity $w = v \oplus v'$. This is a different sum
input:  π, a pseudo-sequence
output: σ, a feasible sequence

for each h = 1, ..., n
{
  if |π(h)| = 1
    skip;
  else if |π(h)| = 0
  {
    k = h + 1;
    while k < n and |π(k)| = 0
      k = k + 1;
    π(h) = pull(π(k));
  }
  else if |π(h)| > 1
  {
    while |π(h)| > 1
      push(pull(π(h)), π(h+1));
  }
}
σ = π;

Fig. 1. The sequence completion procedure.

Fig. 2. An example of sequence completion procedure execution (the pseudo-sequence ([1, 3], ·, ·, [4, 2]) is progressively transformed into the feasible sequence (3, 1, 4, 2)).


operator (velocity sum), which generates the resulting velocity as the union of the moves in v and v'. Any job can appear only once in the set of pseudo-insertion moves defining a velocity; therefore, if v and v' include respectively (j, d) and (j, d'), then the resulting sum w must include the pseudo-insertion move (j, d + d'). Note that if d + d' = 0 the move is removed from the list.
Finally, a velocity v can be multiplied by a real positive constant c (constant-velocity multiplication), generating a new velocity $v' = c \cdot v$. We devised the following constant-velocity multiplication rule, according to which the constant c modifies the displacement values of the pseudo-insertion moves included in $v = \{(j_1, d_1), \ldots, (j_s, d_s)\}$; in particular, this rule produces a velocity $v' = \{(j_1, rr(c \cdot d_1)), \ldots, (j_s, rr(c \cdot d_s))\}$, where the function rr(a) returns the random-round of the real number a, i.e., it randomly returns one of the two values $\lfloor a \rfloor$ and $\lceil a \rceil$, corresponding respectively to the largest integer not greater than a and the smallest integer not smaller than a. We introduced the random-round because we considered it preferable to avoid a fixed rule for obtaining integer displacements from fractional ones.
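A compact Python sketch of the three sequence-metric operators follows (ours; a velocity is represented as a dictionary mapping each job to its signed displacement, and the random-round is implemented as a uniform choice between floor and ceiling, which is our reading of the rule above).

```python
import math
import random

# Our sketch of the sequence-metric operators; a velocity is {job: displacement}.
def position_difference(sigma_p, sigma_q):
    """sigma_q - sigma_p: one PI move for every job whose place differs."""
    place_p = {job: h for h, job in enumerate(sigma_p)}
    place_q = {job: h for h, job in enumerate(sigma_q)}
    return {j: place_q[j] - place_p[j] for j in sigma_p if place_q[j] != place_p[j]}

def velocity_sum(v1, v2):
    """Velocity sum: union of the moves, adding the displacements of a job that
    appears in both and dropping moves whose total displacement is zero."""
    w = dict(v1)
    for j, d in v2.items():
        w[j] = w.get(j, 0) + d
        if w[j] == 0:
            del w[j]
    return w

def velocity_scale(c, v):
    """Constant-velocity multiplication with random-round of each displacement."""
    rr = lambda a: math.floor(a) if random.random() < 0.5 else math.ceil(a)
    return {j: rr(c * d) for j, d in v.items()}

v = position_difference([1, 2, 3, 4], [2, 3, 1, 4])   # {1: 2, 2: -1, 3: -1}
print(velocity_sum(v, {1: -2}), velocity_scale(0.5, v))
```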
4.2. The overall DPSO algorithm
The high-level structure of the developed DPSO algorithm is given in Fig. 3. In the following we describe several options for the proposed DPSO. In particular, we introduce three velocity models, i.e., a gbest, an lbest and a unified glbest model, according to which, similarly to the approach in Parsopoulos and Vrahatis (2006), both the global and the local best solutions for any considered group of particles are used to update the particle velocities at each iteration. In addition, we present two alternative procedures to execute the position-velocity sum in Eq. (2).
4.2.1. Initialization
An initial sequence $\sigma_i^0$, i = 1, ..., m (i.e., an initial solution $x_i^0$), is assigned to each of the m particles. In particular, we use three different constructive heuristics, the earliest due date (EDD), the shortest processing time (SPT), and the apparent tardiness cost with setups (ATCS), to generate three different starting sequences. Then, a set of initial velocities $v_i^0$, i = 1, ..., m, is randomly generated and
initialization;
while <termination condition not met>
{
  for each particle p ∈ P
  {
    update particle velocity;
    update particle position;
    compute particle fitness;
  }
  intensification phase;
  update best references;
}

Fig. 3. The overall DPSO algorithm.


associated with the particles. Finally, the initial position of each particle i is produced by first randomly selecting one of the three starting sequences and then summing the corresponding initial velocity $v_i^0$. The initial velocities are generated as follows: the number of PI moves composing a velocity is randomly extracted from the discrete uniform distribution $U[\lfloor n/4 \rfloor, \lfloor n/2 \rfloor]$; then, for each move, the involved job and the integer displacement are randomly generated respectively from U[1, n] and from $U[-\lfloor n/3 \rfloor, \lfloor n/3 \rfloor]$. We adopt this procedure to sufficiently spread the initial set of particles in the solution space and to diversify their initial velocities. The personal best sequence of each particle is initialized as $p_i = \sigma_i^0$ (whose associated solution is $x_{p_i}$) and, for the gbest and glbest models, the particle with the global best sequence is associated with g (whose related solution is $x_g$). In the case of the lbest and glbest models, $n_c$ clusters $G_c$ of particles are formed, randomly associating each particle with one of them; then, a local best position $l_i$ (whose related solution is $x_{l_i}$) is associated with each particle $i \in G_c$ such that $l_i = \arg\min_{j \in G_c} Z(x_j^0)$. The quantity $n_c$ is an input parameter of the algorithm, and if $n_c = 1$ the lbest model reduces to the gbest one.
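The random generation of the initial velocities described above can be sketched as follows (ours; drawing the jobs without replacement is our reading of the requirement that a velocity contains at most one move per job).

```python
import random

# Our sketch of the initial-velocity generation: the number of PI moves is drawn
# from U[floor(n/4), floor(n/2)], each move involves a distinct random job and an
# integer displacement drawn from U[-floor(n/3), floor(n/3)].
def random_initial_velocity(n):
    n_moves = random.randint(n // 4, n // 2)
    jobs = random.sample(range(1, n + 1), n_moves)
    return {j: random.randint(-(n // 3), n // 3) for j in jobs}

print(random_initial_velocity(60))   # e.g. a velocity for a 60-job instance
```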
4.2.2. Velocity and position update
At iteration k, each particle i first computes the following components: the inertial velocity (iv), the velocity directed to the personal best (pv), the velocity directed to the local best (lv), and the velocity directed to the global best (gv), according to the selected model, as follows:
$$iv_i^k = w \cdot v_i^{k-1}, \quad (5)$$
$$pv_i^k = c_1 r_1 (p_i - \sigma_i^{k-1}), \quad (6)$$
$$lv_i^k = c_2 r_2 (l_i - \sigma_i^{k-1}), \quad (7)$$
$$gv_i^k = c_2 r_3 (g - \sigma_i^{k-1}). \quad (8)$$

Component $iv_i^k$ is always computed, whereas $pv_i^k$ is computed only with the lbest or gbest models, $lv_i^k$ with the lbest or glbest models, and $gv_i^k$ with the gbest or glbest models. Note that in (7) and (8) we adopt the same social parameter $c_2$; $r_1$, $r_2$ and $r_3$ are independent random numbers extracted from U[0, 1]. Then the particle velocity at iteration k is updated as in (9), (10) or (11), respectively according to the selected gbest, lbest or glbest model:
$$v_i^k = iv_i^k \oplus pv_i^k \oplus gv_i^k, \quad (9)$$
$$v_i^k = iv_i^k \oplus pv_i^k \oplus lv_i^k, \quad (10)$$
$$v_i^k = iv_i^k \oplus lv_i^k \oplus gv_i^k. \quad (11)$$
Note that if the velocity of a particle becomes null, it is reinitialized by a random restart. We devised two alternative procedures to update the particle position at each iteration: the first update position procedure (UP1) simply uses the following standard PSO expression:
$$\sigma_i^k = \sigma_i^{k-1} + v_i^k. \quad (12)$$

Differently, the second position update procedure (UP2) separately sums to the particle position each velocity component required by the adopted model, one at a time in the order iv, pv, lv, gv, thus moving the particle through a set of intermediate sequences. For example, if we use the gbest model, denoting the intermediate sequences by is, we must execute three sums, $is_1 = \sigma_i^{k-1} + iv_i^k$, $is_2 = is_1 + pv_i^k$ and $\sigma_i^k = is_2 + gv_i^k$, in order to update the position of a particle. It should be noted that UP1 and UP2 typically move a particle to two different final positions (i.e., sequences) even if they start from the same position and apply the same velocity components, since UP1 executes a single whole set of pseudo-insertion moves and, at the end, a single sequence completion procedure, whereas UP2 performs the sequence completion procedure for each intermediate sequence. Finally, the schedule $x_i^k$ associated with the updated particle position $\sigma_i^k$ is determined by a straightforward timetabling procedure, and the fitness $Z(x_i^k)$ is computed.
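Putting the pieces together, one velocity and position update under the gbest model (Eqs. (5), (6), (8), (9) and the UP2 procedure) could look like the sketch below; it is our illustration and assumes the operator functions of the previous sketches (position_difference, velocity_sum, velocity_scale, extract_reinsert, complete) are in scope.

```python
import random

# Our sketch of one gbest-model update; the operator functions from the previous
# sketches are assumed to be available.
def apply_velocity(sigma, v):
    """sigma + v: apply all PI moves, then run the sequence completion."""
    pseudo = [[j] for j in sigma]
    for job, d in v.items():
        pseudo = extract_reinsert(pseudo, job, d)
    return complete(pseudo)

def gbest_update(sigma, v_prev, p_best, g_best, w=0.5, c1=1.5, c2=2.0):
    iv = velocity_scale(w, v_prev)                                    # Eq. (5)
    pv = velocity_scale(c1 * random.random(),
                        position_difference(sigma, p_best))           # Eq. (6)
    gv = velocity_scale(c2 * random.random(),
                        position_difference(sigma, g_best))           # Eq. (8)
    v = velocity_sum(velocity_sum(iv, pv), gv)                        # Eq. (9)
    # UP2: sum the components to the position one at a time, completing the
    # sequence after each intermediate step.
    for component in (iv, pv, gv):
        sigma = apply_velocity(sigma, component)
    return sigma, v
```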

4.2.3. Intensification phase
After all the particles have updated their position and computed their fitness at iteration k, an intensification phase is performed, consisting of a local search (LS) exploration that starts from the best solution found by the particles in the current iteration, i.e., $x_{i^*}^k$ such that $i^* = \arg\min_{i=1,\ldots,m} Z(x_i^k)$. This LS execution rule is called Best in Iteration (BI). We adopt a stochastic LS (S-LS) algorithm similar to the one in Tasgetiren et al. (2004), which in turn is based on a variant of the variable neighbourhood search (VNS) (Mladenovic and Hansen, 1997), whose structure is reported in Fig. 4.
The S-LS algorithm performs a random neighbourhood exploration alternating random insert and swap moves. Random moves consist of picking two sequence positions at random in the current solution and
x = x0;
restart_counter = 0;
repeat
{
  x1 = random_insert_move(x);
  exploration_counter = 0;
  repeat
  {
    neighbourhood_counter = 1;
    while neighbourhood_counter ≤ 2
    {
      if neighbourhood_counter = 1
        x2 = random_insert_move(x1);
      else
        x2 = random_swap_move(x1);
      if Z(x2) < Z(x1)
        x1 = x2;
      else
        neighbourhood_counter++;
    }
    exploration_counter++;
  } until (exploration_counter ≥ n*(n-1))
  x = x1;
  restart_counter++;
} until (restart_counter ≥ n/5)

Fig. 4. The S-LS algorithm.


inserting (swapping) the job in the first position after (with) the job in the second one. The algorithm first executes an exploration using a series of random insert moves until no improvement is found, and then tries a series of swap moves: whenever a swap move is not able to find an improved solution, a new series of random insert moves is started and the exploration counter is incremented. After $n \cdot (n - 1)$ explorations have been completed, the algorithm executes a random restart from the current best solution. The maximum number of allowed random restarts is bounded by n/5, thus the overall complexity of the LS algorithm is $O(n^3)$. After the intensification phase, the solution obtained by the S-LS algorithm replaces the starting $x_{i^*}^k$ of particle $i^*$, whose position and fitness are updated accordingly.
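The two random moves used inside the S-LS can be sketched as follows (our illustration of the description above).

```python
import random

# Our sketch of the S-LS moves: pick two sequence positions at random, then either
# reinsert the job of the first position right after the second one, or swap them.
def random_insert_move(seq):
    s = list(seq)
    a, b = random.sample(range(len(s)), 2)
    job = s.pop(a)
    if a < b:
        b -= 1                 # account for the shift caused by the removal
    s.insert(b + 1, job)       # reinsert after the job in the second position
    return s

def random_swap_move(seq):
    s = list(seq)
    a, b = random.sample(range(len(s)), 2)
    s[a], s[b] = s[b], s[a]
    return s
```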
4.2.4. Update of the best references
After the completion of the intensification phase, the global or local best positions and the personal best positions of the particles may be updated. In particular,
• for the gbest and glbest models, if $Z(x_g) > \min\{Z(x_i^k): i = 1, \ldots, m\}$ then $g = \arg\min_{i=1,\ldots,m} Z(x_i^k)$;
• for the lbest and glbest models, for each cluster c and each $i \in G_c$, if $Z(x_{l_i}) > \min\{Z(x_j^k): j \in G_c\}$ then $l_i = \arg\min_{j \in G_c} Z(x_j^k)$.
In addition, each particle i compares the fitness of its associated current solution with its current personal best, and if $Z(x_i^k) < Z(x_{p_i})$ then it updates $p_i = \sigma_i^k$ (see the sketch below).
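For the gbest case, the reference update just listed amounts to the following sketch (ours); the per-cluster local bests of the lbest and glbest models are updated analogously and are omitted here.

```python
# Our sketch of the best-reference update for the gbest case; Z maps a sequence
# to its fitness, sigmas holds the current particle positions.
def update_bests(sigmas, Z, p_best, g_best):
    for i, sigma in enumerate(sigmas):
        if Z(sigma) < Z(p_best[i]):      # personal best of particle i
            p_best[i] = sigma
    iteration_best = min(sigmas, key=Z)
    if Z(iteration_best) < Z(g_best):    # global best
        g_best = iteration_best
    return p_best, g_best
```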
4.2.5. Termination conditions
Several alternative termination conditions can be used for the proposed DPSO; in particular, the algorithm is stopped when it reaches (a) a maximum number of iterations, (b) a maximum number of iterations without improvement, (c) a maximum number of fitness function evaluations, or finally (d) a maximum CPU time.
5. Experimental analysis of the DPSO algorithm
We coded the DPSO algorithm in C++ and ran it on a Pentium IV, 2.8 GHz, 512 MB PC. We extensively tested the behaviour of the proposed DPSO through an experimental campaign mainly based on the benchmark due to Cicirello (2003), which is available on the web at http://www.cs.drexel.edu/~cicirello/benchmarks.html. This benchmark is made of a set of 120 STWTSDS problem instances with 60 jobs, and it was produced by randomly generating 10 instances for each combination of three different factors usually referenced in the literature (Pinedo, 1995): the due date tightness d, the due date range R, and the setup time severity η. In particular, the instances were generated combining the following values: d ∈ {0.3, 0.6, 0.9}, R ∈ {0.25, 0.75}, η ∈ {0.25, 0.75}. First we used a subset of 36 instances to carry out both a preliminary test and a configuration analysis, finally determining four DPSO trial configurations. Next, we compared the performance of such DPSO configurations with the following three sets of best reference solutions for the considered benchmark: (a) a set including the overall aggregated best known results, denoted OBK, mostly composed of the solutions yielded by the SA, GA and TS algorithms in Lin and Ying (2006), with the addition of a few best solutions (15 over 120) from the ant colony optimization (ACO) algorithm in Liao and Juan (2007), also taking into account the best solutions from the GA in Cicirello (2006); (b) the set of best results obtained by the ACO algorithm in Liao and Juan (2007), denoted ACO-LJ; (c) the most recent set of best results produced by the authors with an ACO approach, denoted ACO-AP (Anghinolfi and Paolucci, in press). As the considered STWTSDS problem is a variation of the single machine total weighted tardiness (STWT) problem, for which a well known benchmark is available via ORLIB (http://www.ms.ic.ac.uk/info.html), we also tested the effectiveness of the proposed DPSO approach in the absence of setups. Finally, we performed two further tests in order to evaluate the importance both of the intensification phase and of the interaction and learning mechanisms of the DPSO, that is, the relevance of using a ''swarm intelligence'' in driving the exploration of the solution space. During the whole experimental campaign, with the exception of some cases specified in the following, we set the number of particles m = 120 and we adopted the same termination criterion used in Lin and Ying (2006), fixing the maximum number of fitness function evaluations to 20,000,000. The details of the experimental campaign are given in the rest of this section.
5.1. Preliminary tests and DPSO configuration analysis
In order to identify a suitable DPSO configuration with respect to Cicirello's benchmark, we randomly extracted 3 instances for each combination of the benchmark parameters d, R and η, identifying a subset of 36 instances. Then, we divided the DPSO configuration analysis into two sequential phases: in the first phase we aimed at defining the best basic DPSO configuration, i.e., the best position update procedure and the best values for the parameters w, c1 and c2, having fixed gbest as the velocity model; in the second phase we evaluated the performance of different velocity model configurations, keeping fixed the best basic DPSO settings determined in the first phase. We followed such a procedure since we wanted to focus on the analysis of the different velocity models, taking into account that several pilot experiments had revealed in general a small sensitivity of the algorithm performance to the values of the w, c1 and c2 parameters, and that we wanted to keep the computational effort of the preliminary analysis within acceptable bounds. In the first analysis phase we tested the two procedures UP1 and UP2 with the following sets of parameter values: w ∈ {0.2, 0.5, 1.0}, c1 ∈ {0.5, 1.0, 1.5} and c2 ∈ {1.0, 1.5, 2.0}, executing 5 DPSO runs with the gbest model for each combination. We compared the obtained results using the well-known non-parametric
80

D. Anghinolfi, M. Paolucci / European Journal of Operational Research 193 (2009) 73–85

Friedman's test (Devore, 1991), performing multiple comparisons in order to verify whether the differences among the results produced with the various DPSO settings were statistically significant. This analysis revealed that UP2 always dominated UP1, whereas there was no significant difference among the tested w, c1 and c2 parameter combinations for a fixed position update procedure. We then selected for the next phase w ∈ {0.5, 1.0}, c1 = 1.5 and c2 = 2.0, since UP2 with both these combinations produced the best average outcomes. In the second phase we tested the DPSO with the lbest and glbest models, selecting nc ∈ {3, 6, 12}, and with the gbest model, executing 5 runs on the same subset of 36 instances. Then we determined for each instance the best result, which was taken as reference, and we computed the average percentage deviation from such reference as 100 · ((result − reference)/reference) only for the instances with both reference and result greater than zero, setting the deviation to 0 if both reference and result are zero. The obtained results are shown in Table 1; here the tested configurations are denoted as CodeX, where Code is L, GL or G to indicate respectively the lbest, glbest and gbest models, whereas X corresponds to the number of groups of particles nc used (note that we set X = 1 for configuration G).
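The deviation rule used throughout the comparisons can be written as follows (our reading; the treatment of instances where only one of the two values is zero is not stated in the paper and is left undefined here).

```python
# Our sketch of the percentage-deviation rule: 100*((result - reference)/reference),
# with zero-cost instances counted as 0; the mixed case (exactly one value zero)
# is not specified in the paper and is returned as None here.
def percentage_deviation(result, reference):
    if result == 0 and reference == 0:
        return 0.0
    if result > 0 and reference > 0:
        return 100.0 * (result - reference) / reference
    return None
```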
Table 1 reports for each tested configuration the average percentage deviation (Avg dev) and the related 95% confidence (Conf), showing also the results obtained after the elimination of outliers: as a matter of fact, since the objective values in the benchmark differ by several orders of magnitude (see Table 7), the elimination of the outliers reduces the possible influence of very slight absolute differences in the objectives for
Table 1
Comparison of the results for the tested DPSO configurations

Deviation from best (%)                     Without outliers
         Avg dev   Conf           Avg dev   Conf
G1       8.85      2.24           3.79      0.66
L3       9.40      2.52           3.82      0.67
L6       10.74     3.03           3.44      0.59
L12      11.70     3.21           3.73      0.62
GL3      9.46      2.54           3.76      0.63
GL6      9.02      2.42           3.79      0.63
GL12     9.02      2.29           3.90      0.64

instances with small reference values. In particular, to exclude outliers we eliminated from the computation of the averages the instances with a percentage deviation outside the interval (−40%, 40%). From Table 1 we identified G1 and GL6 as the most promising algorithm configurations, due to their good overall behaviour also after the elimination of the outliers. Nevertheless, as can also be noted considering the 95% confidence results, the statistical tests confirmed that there is not a dominating configuration.
5.2. The comparison with Cicirello’s benchmark results
At the end of the previous experimental phase we identified the following four trial DPSO configurations: c1 = 1.5, c2 = 2.0, position update procedure = UP2, and the glbest model with nc = 6 or the gbest model, combined with the selection of the inertia parameter w ∈ {0.5, 1.0}. Hereinafter we code such configurations as M_W, where M ∈ {GL6, G1} corresponds to the chosen model, and W ∈ {05, 10} denotes the w value. We executed 10 runs for each DPSO configuration and then computed the best and the average results for each instance. We first compared the DPSO best solutions with the OBK ones, obtaining the results summarized in Table 2, which reports the average percentage deviations (Avg dev), the related 95% confidence (Conf), and the percentage of improved (Impr sol) and identical (Ident sol) solutions found by the DPSO with respect to OBK, both including and excluding the (−40%, 40%) outliers. Observing that the average DPSO CPU time was 22.6 seconds and that most of the OBK solutions (in particular 105 over 120) were obtained by the three SA, GA and TS algorithms presented in Lin and Ying (2006) with a 27 second average CPU time per run, but on a slower PC, we can consider the CPU times comparable (as we expected, having used the same termination criterion). Table 2 clearly shows that the best DPSO solutions outperform the OBK ones on average.
The dominance of DPSO, also witnessed by the 95% confidence results, was confirmed by statistical significance tests. In addition, note that 15% of the instances for which the DPSO found a best solution identical to the OBK one correspond to zero cost instances, and DPSO was always able to find a zero cost solution on every run apart from a few cases for instance 27.

Table 2
The comparison between the best DPSO and OBK results

Best DPSO results compared to OBK (%)
                                                   Without outliers (4 over 120)
         Avg dev  Conf   Impr sol  Ident sol       Avg dev  Conf   Impr sol  Ident sol
G1_05    2.80     1.84   67.50     18.33           1.46     0.77   65.00     18.33
G1_10    2.80     1.72   70.83     18.33           1.57     0.82   68.33     18.33
GL6_05   2.71     1.84   66.67     18.33           1.60     0.96   65.00     18.33
GL6_10   2.31     1.24   69.17     17.50           1.61     0.79   67.50     17.50

Table 3
The comparison between the average DPSO results and the best OBK ones

Average DPSO results compared to OBK (%)
                           Without outliers (1 over 120)
         Avg dev  Conf     Avg dev  Conf
G1_05    0.60     1.22     1.07     0.82
G1_10    0.64     1.22     1.11     0.80
GL6_05   0.45     1.33     0.98     0.81
GL6_10   0.53     1.45     1.15     0.80

The good behaviour of the DPSO is also verified by the comparison of the average DPSO results over 10 runs with the OBK best solutions reported in Table 3. In particular, this latter comparison highlights that the DPSO performances are not only good but also quite stable and not produced by chance.
Then, since the best ACO-LJ solutions were obtained in Liao and Juan (2007) with a different experimental setting, i.e., determining the best solutions over 10 runs using 30 ants, a maximum number of iterations of 1000 and a maximum number of non-improving iterations of 50 as termination criteria, we executed a new set of runs in order to fairly compare the DPSO results with the ACO-LJ ones. In particular, we imposed the number of particles m = 30 and we adopted the same termination criteria used in Liao and Juan (2007); in addition, for this test we also changed the local search procedure used in the DPSO intensification, substituting the S-LS with the procedure described in Liao and Juan (2007). Then, we executed 10 runs of the DPSO configured with w = 1.0, c1 = 1.5, c2 = 2.0, position update procedure = UP2 and the gbest model, and we summarize in the two distinct rows of Table 4 the comparison of the best (Best DPSO) and average (Avg DPSO) results over 10 runs with the ACO-LJ best solutions.
As we can observe, the DPSO best results dominate the ACO-LJ ones (this fact was confirmed by Friedman's sta-


tistical test), and again the average DPSO behaviour is quite appreciable, considering also that we found no statistically significant difference between the DPSO average results and the ACO-LJ best ones. The average CPU time needed by the DPSO in this test was 2.9 seconds, whereas the one reported for the ACO in Liao and Juan (2007) is 4.3 seconds with the same kind of PC. This fact further underlines the superiority of the DPSO approach. After this test we can appreciate the effectiveness of the proposed DPSO, noting that (a) its good behaviour does not seem directly connected to the kind of local search procedure adopted and (b) the elimination of outliers in Tables 2–4 produces a worsening of the DPSO performances.
In Tables 5 and 6 we compare the first set of DPSO results, obtained with the S-LS procedure and 20,000,000 fitness function evaluations as termination condition, respectively with the best and average ACO-AP results presented in Anghinolfi and Paolucci (in press). The effectiveness of the DPSO is once again highlighted, since its best results are statistically comparable to the best ACO-AP ones (note that to obtain its peak results ACO-AP needed a 65.9 second average CPU time). However, from Table 6 we could consider the DPSO slightly better than ACO-AP, since at least for the GL6_05 configuration the average DPSO results appear to statistically dominate the corresponding ACO-AP ones.
Finally, considering the average outcomes obtained from this set of tests for the different configurations, we can observe a slight prevalence of the glbest model with nc = 6 and w = 0.5, even if no relevant difference emerges in general between the tested models, so that again we cannot identify a dominant configuration. In Table 7 we provide comprehensive results for the considered benchmark, reporting for each instance the best result produced by the DPSO over all tested configurations and comparing it both with the previous ACO-AP and with the OBK best result. Table 7 shows that the DPSO algorithm was able

Table 4
The comparison between the DPSO results and the best ACO-LJ ones

DPSO results compared with ACO-LJ best solutions
                                                             Without outliers
            Avg dev  Conf   Impr sol  Ident sol  Outliers    Avg dev  Conf   Impr sol  Ident sol
Best DPSO   4.60%    1.91%  65.00%    13.33%     5           3.51%    1.40%  62.50%    13.33%
Avg DPSO    0.64%    2.00%  -         -          3           0.56%    1.46%  -         -

Table 5
The comparison between the best DPSO results and the best ACO-AP ones

Comparison between the best DPSO and ACO-AP results (%)
                                                   Without outliers (3 over 120)
         Avg dev  Conf   Impr sol  Ident sol       Avg dev  Conf   Impr sol  Ident sol
G1_05    0.23     1.79   40.83     26.67           0.32     0.62   39.17     26.67
G1_10    0.24     1.42   31.67     26.67           0.17     0.44   30.00     26.67
GL6_05   0.63     1.22   32.50     24.17           0.12     0.63   30.83     24.17
GL6_10   0.12     0.45   35.00     27.50           0.14     0.44   34.17     27.50


Table 6
The comparison between the average DPSO results and the average ACO-AP ones

Comparison between the average DPSO and ACO-AP results (%)
                           Without outliers (1 over 120)
         Avg dev  Conf     Avg dev  Conf
G1_05    1.12     1.45     0.55     0.90
G1_10    0.50     0.91     0.49     0.92
GL6_05   1.24     1.20     0.65     0.79
GL6_10   0.80     1.08     0.46     0.86

Table 7
The best results for the DPSO compared with the previous known results for the Cicirello's (2003) benchmark

Inst   DPSO     ACO-AP   OBK      OBK alg
1      531      513      684      GA3
2      5088     5082     5082     TS3
3      1609     1769     1792     SA3
4      6146     6286     6526     GA3
5      4339     4263     4662     GA3
6      6832     7027     5788     ACO-LJ2
7      3514     3598     3693     GA3
8      132      129      142      GA3
9      6153     6094     6349     GA3
10     1895     1931     2021     SA3
11     3649     3853     3867     GA3
12     0        0        0        ALL
13     4430     4597     5685     GA3
14     2749     2901     3045     GA3
15     1250     1245     1458     GA3
16     4127     4482     4940     GA3
17     75       128      204      SA3
18     971      1237     1610     GA3
19     0        0        208      GA3
20     2675     2545     2967     GA3
21     0        0        0        ALL
22     0        0        0        ALL
23     0        0        0        ALL
24     1043     1047     1063     GA3
25     0        0        0        ALL
26     0        0        0        ALL
27     0        0        0        SA3,GA3,TS3
28     0        0        0        GA4,SA3,GA3,TS3
29     0        0        0        ALL
30     0        130      165      SA3
31     0        0        0        ALL
32     0        0        0        ALL
33     0        0        0        ALL
34     0        0        0        ALL
35     0        0        0        ALL
36     0        0        0        ALL
37     186      400      755      SA3
38     0        0        0        ALL
39     0        0        0        ALL
40     0        0        0        ALL
41     69102    70253    71186    TS3
42     57487    57847    58199    TS3
43     145883   146697   147211   SA3
44     35331    35331    35648    SA3
45     59175    58935    59307    GA3
46     34805    35317    35320    TS3
47     73378    73787    73984    SA3
48     64612    65261    65164    SA3
49     77771    78424    79055    TS3
50     31810    31826    32797    TS3
51     49907    50770    52639    GA3
52     94175    95951    99200    GA3
53     86891    87317    91302    SA3
54     118809   120782   123558   VBSS1
55     68649    68843    69776    GA3
56     75490    76503    78960    GA3
57     64575    66534    67447    ACO-LJ2
58     45680    47038    48081    TS3
59     52001    54037    55396    SA3
60     63342    62828    68851    GA3
61     75916    75916    76396    SA3
62     44769    44869    44769    TS3
63     75317    75317    75317    SA3

to find 102 best results over the 120 instances (85.0%), in particular improving the best known ACO-AP results for 69 over 120 instances (57.5%) and finding the same ACO-AP best known result for 33 instances (27.5%). Table 7 also reports, in the OBK alg column, the type of algorithm that produced the OBK result; in that column the superscript to the algorithm acronym denotes the reference where the associated result is presented, i.e., (1) Cicirello and Smith (2005), (2) Liao and Juan (2007), (3) Lin and Ying (2006), and (4) Cicirello (2006). Note that in that column ''ALL'' is used to denote the zero cost instances for which all the referred algorithms produced the same zero cost result. The DPSO results improving the previous best known values then constitute a new set of best known results for Cicirello's benchmark.
5.3. The comparison with the ORLIB benchmark
We further tested the effectiveness of the proposed DPSO considering a well-known benchmark for the STWT problem. The purpose of this experiment was mainly to evaluate the robustness of our DPSO when applied to a slightly different single machine scheduling problem. The ORLIB benchmark for the STWT consists of three sets of 125 randomly generated instances with 40, 50 and 100 jobs; optimal solutions are known for the 40 and 50 job instances, whereas for the 100 job instances only the best known ones are available. This benchmark was also used in Tasgetiren et al. (2004) to analyse the behaviour of a PSO approach for the STWT. We did not perform any specific parameter tuning in order to better adapt our DPSO to the STWT, but we used the GL6 configuration with m = 120, w = 0.5, c1 = 1.5 and c2 = 2.0. In addition, we considered only the subset of the ORLIB benchmark composed of the biggest, 100 job, instances. We executed 10 runs of the proposed DPSO, finding that our algorithm was able to determine the best known solutions for all the 100 job instances on every run with an acceptable average computation time (2.1 seconds). In addition, we used the same ORLIB benchmark to analyse the behaviour of the DPSO without local search intensification (the so-called pure DPSO). In particular, we executed 10 runs fixing as termination criterion a maximum CPU time of 100 seconds, to make this test almost comparable with the ones in Tasgetiren et al. (2004). We observed that the pure DPSO was able to find on average 26 best solutions over the 125 instances, which is a worse result than
Table 7 (continued)

Inst   DPSO     ACO-AP   OBK      OBK alg
64     92572    92572    92572    SA3
65     126696   126696   127912   SA3
66     59685    59685    59832    SA3
67     29390    29390    29390    GA3
68     22120    22120    22148    TS3
69     71118    71118    64632    ACO-LJ2
70     75102    75102    75102    GA3
71     145771   145825   150709   GA3
72     43994    45810    46903    TS3
73     28785    28909    29408    SA3
74     30734    32406    33375    TS3
75     21602    22728    21863    TS3
76     53899    55296    55055    SA3
77     31937    32742    34732    SA3
78     19660    20520    21493    TS3
79     114999   117908   121118   GA3
80     18157    18826    20335    GA3
81     383703   383485   384996   GA3
82     409544   409982   410979   SA3
83     458787   458879   460978   TS3
84     329670   329670   330384   SA3
85     555130   554766   555106   SA3
86     361417   361685   364381   SA3
87     398551   398670   399439   GA3
88     433519   434410   434948   GA3
89     410092   410102   410966   SA3
90     401653   401959   402233   GA3
91     343029   340030   344988   TS3
92     361152   361407   365129   GA3
93     406728   408560   410462   VBSS1
94     332983   333047   335550   ACO-LJ2
95     521208   517170   521512   GA3
96     459321   461479   461484   ACO-LJ2
97     410889   411291   413109   TS3
98     522630   526856   532519   VBSS1
99     365149   368415   370080   ACO-LJ2
100    432714   436933   439944   GA3
101    352990   352990   353408   TS3
102    493069   493936   493889   TS3
103    378602   378602   379913   ACO-LJ2
104    357963   358033   358222   TS3
105    450806   450806   450808   SA3
106    455152   455093   455849   GA3
107    352867   353368   353371   SA3
108    460793   461452   462737   TS3
109    413004   413408   413205   SA3
110    418769   418769   419481   TS3
111    342752   346763   347233   ACO-LJ2
112    369237   373140   373238   ACO-LJ2
113    260176   260400   261239   GA3
114    464136   464734   470327   ACO-LJ2
115    457874   457782   459194   ACO-LJ2
116    532456   532840   527459   ACO-LJ2
117    503199   506724   512286   ACO-LJ2
118    350729   355922   352118   ACO-LJ2
119    573046   573910   579462   TS3
120    396183   397520   398590   ACO-LJ2

51 over 125 obtained in Tasgetiren et al. (2004). However, the average percentage deviation from the best solutions was 5% (with a maximum of 324%), which is reduced to 3% (with a maximum of 39%) if we exclude on average 4 (−40%, 40%) outliers. As such deviations are similar to the 4% (with a maximum of 127%) reported in Tasgetiren et al. (2004), we consider the behaviour of the pure DPSO acceptable for this ORLIB benchmark.
5.4. The evaluation of the importance of the swarm intelligence mechanisms and intensification phase for the DPSO

In order to finally verify the effectiveness of using swarm intelligence mechanisms in exploring the solution space, we developed a modified version of our DPSO, denoted random particle search (RPS), removing from the algorithm every memory and particle interaction mechanism. The RPS, starting from the set of solutions initially associated with the m particles, executes at each iteration a random position update for each particle and an intensification step with the S-LS for the particle position corresponding to the best solution found in the iteration. Differently from the DPSO, the RPS updates the particle positions by computing a random velocity as follows: for each particle dimension, i.e., job j in the sequence of the associated solution, a pseudo-insertion move (j, d) is determined by stochastically generating the job displacement d from a normal distribution $N(\mu, \sigma^2)$, with mean $\mu = 0$ and standard deviation $\sigma$ fixed as an algorithm parameter. The developed RPS can be viewed as a sort of multiple iterated local search method that uses the velocity concept from the DPSO in order to perturb the current solutions at each iteration, but that does not include any ''swarm'' interaction mechanism or PSO memory structures (personal, local or global best). We tested three RPS configurations characterized by different values of the parameter $\sigma$, fixing $\sigma \in \{4, 6, 18\}$, executing 10 runs for each configuration on Cicirello's benchmark, and then computing for each instance the best average result over the three RPS configurations. Then we compared the RPS results with the average DPSO solutions generated by the GL6_10 configuration, finding that the RPS produced an average percentage deviation from the DPSO of 12.15%, with a 95% confidence of 9.45% (3.33% with 1.12% confidence excluding the outliers). From such results the RPS appears dominated by the DPSO, and this fact clearly confirms the fundamental role of the DPSO swarm intelligence mechanisms. Finally, we analysed the behaviour of the pure DPSO also on Cicirello's benchmark, executing 10 runs with the GL6_05 configuration and fixing 100 seconds as termination criterion. In this case the DPSO behaved badly, as the overall average percentage deviation from the DPSO results with the same configuration but including intensification exceeded 250% (note that the average deviation is 5.33% for the instances with the tightest due dates, i.e., d = 0.9). This fact puts into evidence that, at least for Cicirello's benchmark, the role of a local search phase is very important. However, considering the outcomes of the test using the local search procedure from Liao and Juan (2007) and the ones produced by the RPS, i.e., by the S-LS iterated without PSO memory and interaction mechanisms,

84

D. Anghinolfi, M. Paolucci / European Journal of Operational Research 193 (2009) 73–85

we believe that the effectiveness of our DPSO approach
does not depend on the use of a powerful S-LS procedure
but on the synergy between particle swarm exploration and
local intensification.
6. Conclusions
In this paper we describe a new DPSO algorithm that we used to face the NP-hard STWTSDS problem. To the best of our knowledge, this is the first application of a discrete PSO metaheuristic to this class of scheduling problems. Differently from previous approaches in the literature in which PSO has been applied to scheduling problems, our DPSO adopts a discrete model both for particles and velocities, corresponding respectively to job sequences and to lists of so-called pseudo-insertion moves, and, similarly to the UPSO in Parsopoulos and Vrahatis (2006), it allows the selection among several velocity update models, i.e., the gbest, the lbest and the combined glbest one. The experimental tests performed on Cicirello's benchmark demonstrate the competitiveness of the proposed DPSO; in particular, we can highlight the ability of the DPSO to generate excellent average results, as well as its very limited dependence on the parameter values, which makes the algorithm tuning not critical. In addition, the DPSO behaves well regardless of the kind of local search procedure used, even if the intensification phase is a fundamental component for allowing the algorithm to reach its peak performance.
References
Abraham, A., Guo, H., Liu, H., 2006. Swarm intelligence: Foundations,
perspectives and applications. In: Abraham, A., Grosan, C., Ramos,
V. (Eds.), Swarm Intelligence in Data Mining, Studies in Computational Intelligence (Series). Springer-Verlag, Berlin.
Allahverdi, A., Al-Anzi, F.S., 2006. A PSO and a Tabu search heuristics for
the assembly scheduling problem of the two-stage distributed database
application. Computers & Operations Research 33, 1056–1080.
Anghinolfi, D., Paolucci, M., 2007. Parallel machine total tardiness
scheduling with a new hybrid metaheuristic approach. Computers &
Operations Research 34, 3471–3490.
Anghinolfi, D., Paolucci, M., in press. A new ant colony optimization
approach for the single machine total weighted tardiness scheduling
problem. International Journal of Operations Research.
Bean, J.C., 1994. Genetic algorithm and random keys for sequencing and
optimization. ORSA Journal on Computing 6, 154–160.
Chen, A., Yang, G., Wu, Z., 2006. Hybrid discrete particle swarm
optimization algorithm for capacitated vehicle routing problem.
Journal of Zhejiang University Science A 7, 607–614.
Cicirello, V.A., 2003. Weighted tardiness scheduling with sequence-dependent setups: A benchmark library. Technical Report, Intelligent Coordination and Logistics Laboratory, Robotics Institute, Carnegie Mellon University, USA.
Cicirello, V.A., 2006. Non-wrapping order crossover: An order preserving
crossover operator that respects absolute position. In: Proc. of
GECCO’06 Conference, Seattle, WA, USA, pp. 1125–1131.
Cicirello, V.A., Smith, S.F., 2005. Enhancing stochastic search performance by value-based randomization of heuristics. Journal of Heuristics 11, 5–34.
Clerc, M., Kennedy, J., 2002. The particle swarm: Explosion, stability, and
convergence in a multi-dimensional complex space. IEEE Transactions
on Evolutionary Computation 6, 58–73.

Clerc, M., 2004. Discrete Particle Swarm Optimization. In: Onwubolu,
G.C., Babu, B.V. (Eds.), New Optimization Techniques in Engineering. Springer-Verlag, Berlin, pp. 219–240.
Devore, J.L., 1991. Probability and Statistics for Engineering and the
Sciences. Brooks/Cole Publishing Company, Pacific Grove,
CA.
Du, J., Leung, J.Y.-T., 1990. Minimizing total tardiness on one machine is
NP-hard. Mathematics of Operations Research 15, 483–495.
França, P.M., Mendes, A., Moscato, P., 2001. A memetic algorithm for
the total tardiness single machine scheduling problem. European
Journal of Operational Research 132, 224–242.
Hu, X., Eberhart, R., Shi, Y., 2003. Swarm intelligence for permutation
optimization: a case study of n-queens problem. Proceedings of the
2003 IEEE Conference on Swarm Intelligence Symposium (SIS’03).
IEEE Press, pp. 243–246.
Kennedy, J., Eberhart, R., 1995. Particle Swarm Optimization. In: Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948.
Kennedy, J., Eberhart, R., 1997. A discrete binary version of the
particle swarm algorithm. In: Proceedings of the International
Conference on Systems, Man and Cybernetics, vol. 5. IEEE Press,
pp. 4104–4108.
Kennedy, J., Eberhart, R.C., 2001. Swarm Intelligence. Morgan Kaufmann, San Francisco.
Lawler, E.L., 1977. A 'pseudopolynomial' algorithm for sequencing jobs to minimize total tardiness. Annals of Discrete Mathematics 1, 331–342.
Lee, Y.H., Bhaskaran, K., Pinedo, M., 1997. A heuristic to minimize the
total weighted tardiness with sequence-dependent setups. IIE Transactions 29, 45–52.
Lian, Z., Gu, X., Jiao, B., 2006a. A similar particle swarm optimization
algorithm for permutation flowshop scheduling to minimize makespan. Applied Mathematics and Computation 175, 773–785.
Lian, Z., Gu, X., Jiao, B., 2006b. A similar particle swarm optimization
algorithm for job-shop scheduling to minimize makespan. Applied
Mathematics and Computation 183, 1008–1017.
Liao, C.-J., Juan, H.C., 2007. An ant colony optimization for single-machine tardiness scheduling with sequence-dependent setups. Computers & Operations Research 34, 1899–1909.
Liao, C.-J., Tseng, C.-T., Luarn, P., 2007. A discrete version of particle
swarm optimization for flowshop scheduling problems. Computers &
Operations Research 34, 3099–3111.
Lin, S.-W., Ying, K.-C., 2006. Solving single-machine total weighted
tardiness problems with sequence-dependent setup times by metaheuristics. The International Journal of Advanced Manufacturing
Technology. Available from: www.springerlink.com.
Liu, B., Wang, L., Jin, Y.-J., 2005. An effective hybrid particle swarm
optimization for no-wait flow shop scheduling. International Journal
of Advanced Manufacturing Technology.
Mladenovic, N., Hansen, P., 1997. Variable neighbourhood search.
Computers & Operations Research 24, 1097–1100.
Mohan, C.K., Al-kazemi, B., 2001. Discrete particle swarm optimization. In: Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis.
Pang, W., Wang, K.P., Zhou, C.G., Dong, L.-J., 2004. Fuzzy discrete
particle swarm optimization for solving traveling salesman problem.
Proceedings of the 4th International Conference on Computer and
Information Technology. IEEE CS Press, pp. 796–800.
Parsopoulos, K.E., Vrahatis, M.N., 2004. UPSO: A unified particle swarm
optimization scheme. In: Proceedings of ICCMSE 2004 Conference,
Vouliagmeni-Kavouri, Greece, pp. 868–873.
Parsopoulos, K.E., Vrahatis, M.N., 2006. Studying the performance of
unified particle swarm optimization on the single machine total
weighted tardiness problem. Lecture Notes in Artificial Intelligence
(LNAI) 4304, 760–769.
Pinedo, M., 1995. Scheduling: Theory, Algorithms, and Systems. Prentice
Hall, Englewood Cliffs, NJ.
Potts, C.N., van Wassenhove, L.N., 1991. Single machine tardiness
sequencing heuristics. IIE Transactions 23, 346–354.

Rubin, P.A., Ragatz, G.L., 1995. Scheduling in a sequence dependent
setup environment with genetic search. Computers & Operations
Research 22, 85–99.
Sha, D.Y., Hsu, C.-Y., 2006. A hybrid particle swarm optimization for job
shop scheduling problem. Computers & Industrial Engineering 51,
791–808.
Tasgetiren, M.F., Liang, Y.-C., Sevkli, M., Gencyilmaz, G., 2007. A
particle swarm optimization algorithm for makespan and total
flowtime minimization in the permutation flowshop sequencing problem. European Journal of Operational Research 177, 1930–1947.


Tasgetiren, M.F., Sevkli, M., Liang, Y.C., Gencyilmaz, G., 2004. Particle
swarm optimization algorithm for single machine total weighted
tardiness problem. In: Proceedings of the IEEE Congress on Evolutionary Computation, Portland, vol. 2, pp. 1412–1419.
Wisner, J.D., Siferd, S.P., 1995. A survey of U.S. manufacturing practices
in make-to-order machine shops. Production and Inventory Management Journal 36, 1–7.
Zhang, H., Li, H., Tam, C.M., 2006. Particle swarm optimization for
resource-constrained project scheduling. International Journal of
Project Management 24, 83–92.


