Ant colony optimization

The ant colony optimization algorithm (ACO), introduced by Marco Dorigo in 1992 in his PhD thesis, is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. It is inspired by the behaviour of ants finding paths from the colony to food.

Overview

In the real world, ants (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely to stop traveling at random and instead follow the trail, returning and reinforcing it if they eventually find food (see Ant communication).

Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, and thus its pheromone density remains high, as pheromone is laid on the path as fast as it can evaporate. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones, and the exploration of the solution space would be constrained.

Thus, when one ant finds a good (i.e. short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads all the ants to follow a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.

Ant colony optimization algorithms have been used to produce near-optimal solutions to the traveling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches when the graph may change dynamically; the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.


Pseudo-code & Formulas

 procedure ACO_MetaHeuristic
   while(not_termination)
      generateSolutions()     // each ant constructs a candidate solution
      pheromoneUpdate()       // evaporate pheromone, then deposit on used arcs
      daemonActions()         // optional centralized actions, e.g. local search
   end while
 end procedure
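
As a concrete illustration, the following is a minimal Python sketch of this loop applied to a toy traveling salesman instance. It uses a fixed iteration budget as the termination condition and omits daemonActions(); the instance, the parameter values and all function and variable names are illustrative assumptions, not part of the metaheuristic itself.

 import math, random

 cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]   # toy symmetric TSP instance
 n = len(cities)
 dist = [[math.dist(a, b) for b in cities] for a in cities]

 alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0            # assumed parameter values
 tau = [[1.0] * n for _ in range(n)]                 # uniform initial pheromone

 def generate_solution():
     # one ant builds a tour using the arc-selection rule described below
     tour = [random.randrange(n)]
     while len(tour) < n:
         i = tour[-1]
         allowed = [j for j in range(n) if j not in tour]
         weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta) for j in allowed]
         tour.append(random.choices(allowed, weights=weights)[0])
     return tour

 def tour_length(tour):
     return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

 def pheromone_update(tours):
     # evaporation, then each ant deposits Q / L_k on the arcs of its tour
     for i in range(n):
         for j in range(n):
             tau[i][j] *= (1.0 - rho)
     for tour in tours:
         deposit = Q / tour_length(tour)
         for k in range(n):
             i, j = tour[k], tour[(k + 1) % n]
             tau[i][j] += deposit
             tau[j][i] += deposit

 best = None
 for _ in range(100):                                    # termination: fixed iteration budget
     tours = [generate_solution() for _ in range(10)]    # 10 ants per iteration
     pheromone_update(tours)
     candidate = min(tours, key=tour_length)
     if best is None or tour_length(candidate) < tour_length(best):
         best = candidate

 print(best, tour_length(best))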


Arc Selection:

An ant will move from node i to node j with probability


p_{i,j} =
\frac
{ (\tau_{i,j}^{\alpha}) (\eta_{i,j}^{\beta}) }
{ \sum_{l \in \mathrm{allowed}} (\tau_{i,l}^{\alpha}) (\eta_{i,l}^{\beta}) }

where

τ_{i,j} is the amount of pheromone on arc (i, j),

α is a parameter to control the influence of τ_{i,j},

η_{i,j} is the desirability of arc (i, j) (a priori knowledge, typically 1/d_{i,j}, where d_{i,j} is the length of the arc),

β is a parameter to control the influence of η_{i,j},

and the sum in the denominator runs over the nodes the ant is still allowed to visit from i.
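
As an illustration, here is a minimal Python sketch of this selection rule, assuming the pheromone and desirability values are stored in matrices tau and eta and that allowed lists the nodes the ant may still visit (all names and default parameter values are assumptions):

 import random

 def choose_next(i, allowed, tau, eta, alpha=1.0, beta=2.0):
     # weight of each candidate arc (i, j): tau^alpha * eta^beta
     weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in allowed]
     total = sum(weights)
     probs = [w / total for w in weights]    # the probabilities p_{i,j}
     return random.choices(allowed, weights=probs)[0]

 # example: 3 nodes, ant currently at node 0, nodes 1 and 2 still unvisited
 tau = [[1.0] * 3 for _ in range(3)]
 eta = [[0.0, 0.5, 0.2], [0.5, 0.0, 0.3], [0.2, 0.3, 0.0]]   # e.g. 1/d_{i,j}
 print(choose_next(0, [1, 2], tau, eta))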


Pheromone Update

\tau_{i,j} = (1 - \rho) \tau_{i,j} + \sum_{k} \Delta \tau^{k}_{i,j}

where

τ_{i,j} is the amount of pheromone on a given arc (i, j),

ρ is the rate of pheromone evaporation,

and Δτ^{k}_{i,j} is the amount of pheromone deposited by the kth ant, typically given by


\Delta \tau^{k}_{i,j} =
\begin{cases}
1/L_k & \mbox{if ant } k \mbox{ travels on arc } (i, j) \\
0 & \mbox{otherwise}
\end{cases}

where L_k is the cost of the kth ant's tour (typically its length).
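
For illustration, here is a minimal Python sketch of this update, assuming tau is a square pheromone matrix and each tour is a list of node indices visited in order (the names and the default evaporation rate are assumptions):

 def update_pheromone(tau, tours, lengths, rho=0.5):
     n = len(tau)
     # evaporation: every arc loses a fraction rho of its pheromone
     for i in range(n):
         for j in range(n):
             tau[i][j] *= (1.0 - rho)
     # deposit: ant k adds 1/L_k to every arc of its tour
     for tour, L in zip(tours, lengths):
         for k in range(len(tour)):
             i, j = tour[k], tour[(k + 1) % len(tour)]
             tau[i][j] += 1.0 / L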

Common Extensions

  1. Elitist Ant System
    • The global-best solution deposits pheromone on every iteration, along with all the other ants.
  2. Max-Min Ant System (MMAS)
    • Adds maximum and minimum pheromone bounds [τ_min, τ_max] (see the sketch after this list).
    • Only the global-best or iteration-best tour deposits pheromone.
    • All edges are initialized to τ_max and reinitialized to τ_max when the search nears stagnation.
  3. Rank-Based Ant System (ASrank)
    • All solutions are ranked according to their fitness. The amount of pheromone each solution deposits is then weighted by its rank, so that better solutions deposit more pheromone than worse ones.
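
To illustrate the bounds used by MMAS (item 2 above), here is a minimal sketch of the clamping step; the bound values and the function name are arbitrary placeholders:

 def clamp_pheromone(tau, tau_min=0.01, tau_max=10.0):
     # keep every pheromone value inside the MMAS interval [tau_min, tau_max]
     n = len(tau)
     for i in range(n):
         for j in range(n):
             tau[i][j] = max(tau_min, min(tau_max, tau[i][j]))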


Related methods

Genetic Algorithms (GA) maintain a pool of solutions rather than just one. The process of finding superior solutions mimics that of evolution: solutions are combined or mutated to alter the pool, and inferior solutions are discarded.

Simulated Annealing (SA) is a related global optimization technique which traverses the search space by generating neighbouring solutions of the current solution. A superior neighbour is always accepted. An inferior neighbour is accepted probabilistically based on the difference in quality and a temperature parameter. The temperature parameter is modified as the algorithm progresses to alter the nature of the search.
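
For instance, a minimal sketch of such an acceptance rule for a minimization problem (a Metropolis-style criterion; the function name is an assumption):

 import math, random

 def accept(current_cost, candidate_cost, temperature):
     # always accept an improvement; otherwise accept with probability exp(-delta / T)
     delta = candidate_cost - current_cost
     if delta <= 0:
         return True
     return random.random() < math.exp(-delta / temperature)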

Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the best of those generated. To prevent cycling and to encourage greater movement through the solution space, a tabu list of partial or complete solutions is maintained. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the search traverses the solution space.
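
A minimal sketch of one such step for a minimization problem, assuming a bounded tabu list of complete solutions (all names are assumptions):

 from collections import deque

 def tabu_step(current, neighbours, cost, tabu):
     # tabu is a bounded deque, e.g. deque(maxlen=10); old entries fall off automatically
     candidates = [s for s in neighbours(current) if s not in tabu]
     if not candidates:
         return current                     # all neighbours are tabu; stay put
     best = min(candidates, key=cost)       # best non-tabu neighbour
     tabu.append(best)
     return best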

Harmony search (HS) is an algorithm based on an analogy between music improvisation and optimization: the musicians (decision variables) together seek better harmonies (solution vectors).

Publications (selected)

  • M. Dorigo, 1992. Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy.
  • M. Dorigo, V. Maniezzo & A. Colorni, 1996. "Ant System: Optimization by a Colony of Cooperating Agents", IEEE Transactions on Systems, Man, and Cybernetics–Part B, 26 (1): 29–41.
  • M. Dorigo & L. M. Gambardella, 1997. "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem". IEEE Transactions on Evolutionary Computation, 1 (1): 53–66.
  • M. Dorigo, G. Di Caro & L. M. Gambardella, 1999. "Ant Algorithms for Discrete Optimization". Artificial Life, 5 (2): 137–172.
  • E. Bonabeau, M. Dorigo & G. Theraulaz, 1999. Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press. ISBN 0-19-513159-2
  • M. Dorigo & T. Stützle, 2004. Ant Colony Optimization, MIT Press. ISBN 0-262-04219-3
  • M. Dorigo, 2007. "Ant Colony Optimization". Scholarpedia.
  • C. Blum, 2005. "Ant Colony Optimization: Introduction and Recent Trends". Physics of Life Reviews, 2: 353–373.
