
Ant Stigmergy on the Grid - Computer Systems @ JSI - IJS

...based on stigmergy that will be used on the Grid. Stigmergy is a method of communication in decentralized systems where the individual parts of the system communicate with one another by modifying their local environment. For example, ants communicate by laying down pheromone along their trails, so an ant colony is a stigmergic system. The term stigmergy (from the Greek stigma = sting, and ergon = work) was originally defined in 1959 by the French biologist Grassé [12]. Because of the nature of ant-based algorithms, we first have to discretize a continuous multi-parameter problem and translate it into a graph representation (a search graph). Then we use a selected optimization technique to find the cheapest path in the constructed search graph; this path consists of the values of the optimized parameters. For this purpose, we use an optimization algorithm whose roots can be found in the ant colony optimization (ACO) metaheuristic [5,6,7]. We considered the multilevel approach and its potential to enhance the optimization procedure. The multilevel approach in its most basic form involves recursive coarsening to create a hierarchy of approximations to the original problem. An initial solution is found (sometimes for the original problem, sometimes at the coarsest level) and then iteratively refined at each level. As a general solution strategy the multilevel procedure has been in use for many years and has been applied to various problem areas [22]. We merge stigmergy and the multilevel approach into one method, called the Multilevel Ant Stigmergy Algorithm (MASA). Like other metaheuristic approaches, the MASA admits direct parallelization schemes, and parallelism can be exploited on one or more scales [20]. We applied it on the largest scale, where entire search procedures can be performed concurrently.
Such an implementation, called the Distributed Multilevel Ant Stigmergy Algorithm (DMASA), is based on parallel interacting ant colonies [16].

2.1 Search graph construction

Search graph construction consists of translating the discrete parameter values of the problem into a search graph $G = (V, E)$ with a set of vertices $V$ and a set $E$ of edges between the vertices. For each parameter $p_d$, $1 \le d \le D$, each parameter value $v_{\langle d,i\rangle}$, $1 \le i \le n_d$, $n_d = |p_d|$, represents a vertex in the search graph, and each vertex is connected to all the vertices that belong to the next parameter $p_{d+1}$. A so-called start vertex is also added to the graph. This vertex is connected to all vertices of the parameter $p_1$ and is used as a starting point for all ants. This way we transform the multi-parameter optimization problem into a problem of finding the cheapest path. Once this is done, we can deploy the initial pheromone values on all the vertices.

2.2 Multilevel approach

Coarsening of the problem representation is done by merging two or more neighboring vertices into one vertex; this is done in $L$ iterations (we call them levels $l = 1, 2, \ldots, L$). Let us consider coarsening from level $l$ to level $l+1$ at distance $d$. Here $V_d^l = \{v_{\langle d,1\rangle}^l, \ldots, v_{\langle d,n_d^l\rangle}^l\}$ is the set of vertices at level $l$ and distance $d$ of the search graph $G$, where $1 \le d \le D$. If $n_d^1$ is the number of vertices at the initial level of coarsening and distance $d$, then for every level $l$ the equation $n_d^{l+1} = \lceil n_d^l / s_d^l \rceil$ holds, where $s_d^l$ is the number of vertices at level $l$ which merge into one vertex at level $l+1$. We therefore divide $V_d^l$ into $n_d^{l+1}$ subsets, where

$$V_d^l = \bigcup_{k=1}^{n_d^{l+1}} V_{\langle d,k\rangle}^l, \quad (1)$$

$$\forall i, j \in \{1, \ldots, n_d^{l+1}\} \wedge i \ne j : \; V_{\langle d,i\rangle}^l \cap V_{\langle d,j\rangle}^l = \emptyset.$$

Each subset is defined as follows:

$$V_{\langle d,1\rangle}^l = \{v_{\langle d,1\rangle}^l, \ldots, v_{\langle d,s_d^l\rangle}^l\}, \quad (2)$$
$$V_{\langle d,2\rangle}^l = \{v_{\langle d,s_d^l+1\rangle}^l, \ldots, v_{\langle d,2s_d^l\rangle}^l\}, \quad (3)$$
$$\vdots$$
$$V_{\langle d,n_d^{l+1}\rangle}^l = \{v_{\langle d,(n_d^{l+1}-1)s_d^l+1\rangle}^l, \ldots, v_{\langle d,n_d^l\rangle}^l\}. \quad (4)$$

The set

$$V_d^{l+1} = \{v_{\langle d,1\rangle}^{l+1}, \ldots, v_{\langle d,n_d^{l+1}\rangle}^{l+1}\} \quad (5)$$

is the set of vertices at distance $d$ at level $l+1$, where $v_{\langle d,k\rangle}^{l+1} \in V_{\langle d,k\rangle}^l$ is selected on some predetermined principle. Examples are a random pick, the leftmost/rightmost/centered vertex in the subset, etc. The outline of the coarsening procedure pseudo code is as follows:

  for k = 1 to $n_d^{l+1}$ do
    $v_{\langle d,k\rangle}^{l+1}$ = SelectOneVertex($V_{\langle d,k\rangle}^l$)
  end for

The goal of optimization is to find the minimum-cost path from the start vertex to an ending vertex (one of the vertices at distance $D$) in the search graph. There are a number of ants in a colony that all simultaneously start from the start vertex. The probability with which
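To make the construction and coarsening concrete, the following is a minimal sketch in Python. The function names (`build_search_graph`, `coarsen`) and the two-parameter example problem are illustrative choices, not part of the original method; the graph is represented only by its vertex layers, since edges are fully determined (every vertex at distance d connects to all vertices at distance d+1), and the leftmost vertex of each subset is used as the representative, one of the selection principles mentioned above.

```python
import math

def build_search_graph(parameters):
    """Represent the search graph by its vertex layers: layer d holds the
    discretized values of parameter p_d. Edges are implicit: every vertex
    at distance d connects to all vertices at distance d+1, and a start
    vertex connects to all of layer 1."""
    return [list(p) for p in parameters]

def coarsen(layers, s):
    """One coarsening step (level l -> l+1): merge groups of s neighboring
    vertices in each layer into one vertex.  Here the leftmost vertex of
    each group is kept as its representative (an illustrative choice)."""
    coarse = []
    for layer in layers:
        n_next = math.ceil(len(layer) / s)      # n_d^{l+1} = ceil(n_d^l / s_d^l)
        coarse.append([layer[k * s] for k in range(n_next)])
    return coarse

# Hypothetical problem with two parameters, discretized into 6 and 4 values.
levels = [build_search_graph([[0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
                              [1, 2, 3, 4]])]
levels.append(coarsen(levels[0], s=2))  # level 2: layers of 3 and 2 vertices
```

With s = 2, the six values of the first parameter collapse to three representatives and the four values of the second to two, matching the ceiling formula above.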

they choose the next vertex depends on the amount of pheromone in the vertices. This continues until they reach an ending vertex. More specifically, ant $k$ in step $t$ moves from vertex $i$ to vertex $j$ with a probability given by

$$p_{ij,k}(t) = \begin{cases} \dfrac{\tau_j^\alpha(t)}{\sum_{l \in N_{i,k}} \tau_l^\alpha(t)} & j \in N_{i,k} \\ 0 & j \notin N_{i,k} \end{cases} \quad (6)$$

where $\alpha$ is a parameter that determines the relative influence of the pheromone trail $\tau_j(t)$ in vertex $j$, and $N_{i,k}$ is the feasible neighbourhood of vertex $i$. The parameter values gathered by each ant on its path are now evaluated. Then each ant returns to the start vertex, and on its way it deposits pheromone in the vertices according to the evaluation result: the better the result, the more pheromone is deposited, and vice versa.

Client:
  graph = GraphConstruction(parameters)
  for l = 1 to L do
    Coarsening(graph[l])
  end for
  GraphInitialization(initial pheromone amount)
  while not ending condition do
    for all ants in sub-colony do
      path = FindPath(probability rule)
      Evaluate(path)
    end for
    SendEvaluatedPathsToServer(all ants)
    ReceivePathsFromServer(from all other clients)
    UpdatePheromone(all found and received path vertices)
    DaemonAction(best path)
    EvaporatePheromone(all vertices)
  end while
  SendSolution(bestSolution)

Server:
  for l = L downto 1 do
    StartAllClients()
    while not current level ending condition do
      if ReceivedEvaluatedPaths(client) then
        BroadcastPaths(all other clients)
      end if
    end while
    solutions = ReceiveSolutions & StopAllClients()
    bestSolution = Best(solutions)
    LocalSearch(bestSolution)
    Refinement(graph[l])
  end for

Figure 1. The Distributed Multilevel Ant Stigmergy Algorithm (DMASA).
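The probability rule of equation (6) amounts to weighted sampling over the feasible neighborhood, with weights $\tau_j^\alpha$. A minimal sketch in Python, where the function name `choose_next_vertex` and the toy pheromone values are illustrative assumptions:

```python
import random

def choose_next_vertex(tau, neighborhood, alpha=1.0, rng=random):
    """Pick the next vertex j with probability proportional to tau_j^alpha,
    restricted to the feasible neighborhood N_{i,k} (equation (6)).
    `tau` maps vertex -> current pheromone amount in that vertex."""
    weights = [tau[j] ** alpha for j in neighborhood]
    total = sum(weights)
    r = rng.uniform(0.0, total)       # roulette-wheel selection
    acc = 0.0
    for j, w in zip(neighborhood, weights):
        acc += w
        if r <= acc:
            return j
    return neighborhood[-1]           # guard against floating-point round-off

# Toy example: vertex 'c' holds no pheromone, so it is (almost) never chosen.
tau = {'a': 3.0, 'b': 1.0, 'c': 0.0}
picked = choose_next_vertex(tau, ['a', 'b', 'c'])
```

With these weights, 'a' is chosen with probability 3/4 and 'b' with probability 1/4, mirroring the normalization over $N_{i,k}$ in (6).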
After all the ants return to the start vertex, we perform a so-called daemon action, which in our case consists of depositing additional pheromone on the currently best path, and also a smaller amount on its left and right neighbor paths. Afterwards the pheromone evaporates in all vertices, i.e., the amount of pheromone is decreased by some predetermined percentage in each vertex. The whole procedure is repeated until some ending condition is met. Because of the simplicity of the coarsening, the refinement itself is trivial. Let us consider refinement from level $l$ to level $l-1$ at distance $d$. The outline of the refinement pseudo code is as follows:

  for k = 1 to $n_d^l$ do
    for each $v_{\langle d,i\rangle}^{l-1} \in V_{\langle d,k\rangle}^{l-1}$ do
      $v_{\langle d,i\rangle}^{l-1} = v_{\langle d,k\rangle}^l$
    end for
  end for

2.3 Distributed implementation

The non-distributed MASA approach presented in [13] is based on a single ant colony. In the distributed approach, however, the colony is split into N sub-colonies, where N is the number of processors. Each sub-colony searches for a solution according to the DMASA algorithm (see Figure 1). With the SendEvaluatedPathsToServer() function, the paths with updated pheromone amounts are sent from a client to the server, which then broadcasts this information to all other clients (sub-colonies) with the BroadcastPaths() function. The amount of updated pheromone is determined by the Evaluate() function. In turn, the information (paths with updated pheromone amounts) gathered from the other sub-colonies is received with the ReceivePathsFromServer() function. The function UpdatePheromone() deposits pheromone on found and received paths. Each sub-colony has its own pheromone matrix.
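Evaporation and refinement can both be sketched in a few lines. The code below is a minimal illustration, not the original implementation: the function names (`evaporate`, `refine`), the evaporation rate `rho`, and the toy pheromone values are assumptions. Refinement follows the pseudo code above: every fine-level vertex inherits the pheromone of the coarse-level vertex whose subset it belongs to.

```python
def evaporate(tau, rho=0.1):
    """Evaporation: decrease the pheromone in every vertex by a fixed
    percentage rho (rho = 0.1 means 10% of the pheromone evaporates)."""
    for v in tau:
        tau[v] *= (1.0 - rho)

def refine(tau_coarse, subsets):
    """Refinement step (level l -> l-1): each fine vertex takes the value
    of the coarse vertex whose subset it belongs to.  `subsets` maps a
    coarse vertex to the list of fine vertices it was merged from."""
    tau_fine = {}
    for coarse_v, fine_vs in subsets.items():
        for v in fine_vs:
            tau_fine[v] = tau_coarse[coarse_v]
    return tau_fine

# Toy example with two coarse vertices, each covering two fine vertices.
tau = {'a': 2.0, 'b': 6.0}
evaporate(tau, rho=0.5)                               # halve all pheromone
fine = refine(tau, {'a': ['a1', 'a2'], 'b': ['b1', 'b2']})
```

This spreading of coarse-level pheromone back over the finer graph is what lets the search at level l−1 start from the information accumulated at level l.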
With the use of the function UpdatePheromone() a
