Computing k shortest paths using modified pulse-coupled neural network


Neurocomputing (article in press)

Contents lists available at ScienceDirect

Neurocomputing journal homepage: www.elsevier.com/locate/neucom

Guisong Liu, Zhao Qiu, Hong Qu*, Luping Ji

Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China

Article info

Abstract

Article history: Received 6 January 2014 Received in revised form 2 July 2014 Accepted 6 September 2014 Communicated by Long Cheng

The K Shortest Paths (KSP) problem, which has innumerable applications, has been widely studied; it aims to compute the k shortest paths between two nodes in non-decreasing order of length. However, less effort has been devoted to the single-source KSP problem than to single-pair KSP computation, especially using parallel methods. This paper proposes a Modified Continued Pulse Coupled Neural Network (MCPCNN) model to solve both kinds of KSP problems. A theoretical analysis of MCPCNN and two algorithms for KSP computation are presented. By exploiting the parallel pulse-transmission characteristic of pulse-coupled neural networks, the method finds k shortest paths quickly, and its computational complexity is related only to the length of the longest shortest path. Simulation results for route planning show that the proposed MCPCNN method for KSP computation outperforms many other efficient algorithms. © 2014 Elsevier B.V. All rights reserved.

Keywords: k shortest paths; pulse-coupled neural network; single-pair KSP; single-source KSP

1. Introduction

For a given pair of nodes s and t in a directed weighted graph G = (V, E), the K-Shortest-Paths (KSP) problem consists of finding the k shortest paths in non-decreasing order with respect to their lengths. Application domains for KSPs include object tracking [1], sequence alignment [2], scheduling [3], dynamic routing [4], systems biology [5] and many other areas in which optimization problems need to be solved [6].

Since it was first proposed by Hoffman and Pavley [7] in the 1950s, the KSP problem has received much attention, and several variants of it have been studied in the literature. Many authors [6,8,12,16,17] aim to find KSPs whose solution paths are not required to be simple, i.e., loops are allowed in the paths; in other works [5,9–11,13–15], the solution paths are restricted to be simple, with no node repeated along any solution. From another perspective, the goal of most algorithms is to compute KSPs between two given nodes, the so-called single-pair KSP problem [8–11,13–17], whereas the single-source KSP problem aims to find KSPs from a given node to every other node [5,6].

☆ This work was supported by the National Science Foundation of China under Grant 61273308.
* Corresponding author. E-mail address: [email protected] (H. Qu).

The famous algorithm EA [6], proposed by Eppstein, can be used to solve both the single-pair and the single-source KSP problems. It computes KSPs using an implicit representation of paths,

with a time complexity of O(m + n log n + k) for the single-pair KSP and O(m + n log n + kn log k) for the single-source problem, where n and m are the numbers of nodes and edges, respectively, and k is the number of paths to be computed. The EA algorithm first runs Dijkstra's algorithm to compute a shortest-path tree rooted at the source node, then creates two kinds of heaps for each node, which are used to construct a path graph, and finally searches the path graph to obtain the k shortest paths. Jimenez and Marzal [17] presented a further optimization of Eppstein's algorithm, the "lazy variant of Eppstein's algorithm" (LVEA). It maintains the same asymptotic worst-case complexity as EA but improves its practical performance as a result of its optimizations. To the best of our knowledge, among the state-of-the-art single-pair algorithms, K* [8] is the most efficient single-pair KSP algorithm according to its authors' experiments. It is devised as a heuristics-guided, on-the-fly method (meaning that the full problem graph need not reside in main memory, since nodes are generated as needed), and hence is particularly well suited to very large instances of the single-pair KSP problem. From the literature, less effort has been devoted to single-source KSP approaches than to single-pair KSP methods; meanwhile, to the authors' knowledge, no parallel approach for the KSP problem has been published. Neural networks are a class of classic methods for processing complex problems such as combinatorial optimization. Since the original work of Hopfield in 1985 [18], much research has been done on using neural networks to solve combinatorial optimization problems. The major drawbacks of traditional neural networks in

http://dx.doi.org/10.1016/j.neucom.2014.09.012
0925-2312/© 2014 Elsevier B.V. All rights reserved.

Please cite this article as: G. Liu, et al., Computing k shortest paths using modified pulse-coupled neural network, Neurocomputing (2014), http://dx.doi.org/10.1016/j.neucom.2014.09.012


Nomenclature

ϝi      the set of all firing times of ni
θi(t)   threshold function of ni
Pi(k)f  the set of fires that may trigger ni to fire
Fi(t)   feeding field of ni
L       ratio between path length and firing time
Lmax    the maximum path length
Pi(t)   current parent set of ni at time t
Pi(k)   the parents of the kth fire of ni
M       total iteration number to compute Ui(t)
Tmax    maximum time between two fire events
Wmax    maximum weight between two nodes

optimization problems are the invalidity of the obtained solutions, the trial-and-error process needed to set the network parameters, and the low computational efficiency. In contrast, the spatio-temporal dynamics of Pulse Coupled Neural Networks (PCNNs) provide a seminal computational capability for solving optimization problems. PCNN is the result of research on developing an artificial neuron model capable of emulating the behavior of cortical neurons observed in the visual cortex of the cat [19]. After Johnson's work [20], there has been increasing interest in using PCNNs for various applications, such as image processing [21,22], target recognition [23], motion matching [24], pattern recognition [25,26] and optimization [27–37]; a review of PCNN and its applications can be found in [38]. In 1999, Caulfield and Kinser [27] presented the idea of utilizing the autowave in PCNNs to solve the maze problem. Their model could find the shortest path quickly, with an effort related only to the length of the shortest path and independent of the complexity of the graph. However, many neurons are required to find the shortest path in large graphs, since one pulse of a coupled neuron corresponds to a unit length of path [29].

This paper proposes a modified PCNN model to solve two kinds of KSP problems in which loops are allowed in the solution paths: the single-pair and the single-source KSP problems. The proposed model is topologically organized with only local lateral connections among neurons. The spiking wave generated in the network spreads outward with travel times proportional to the connection weights between neurons, which guarantees that the wave propagates along the paths from the source to all destinations. The computational complexity of the algorithm is related only to the length of the longest shortest path, independent of the number of existing paths in the graph. Each neuron in the proposed model works as a sensor which propagates the fire event to its neighbor neurons without any comparison computations. The proposed model is also applied step by step to generate k shortest paths for a given real graph. The effectiveness and efficiency of the approach are demonstrated through comparative simulations.

The rest of the paper is organized as follows: some preliminaries are introduced in Section 2. Section 3 presents the proposed model and its theoretical analysis. The algorithms for KSP computation by MCPCNN are stated in Section 4. Finally, simulations and conclusions are given in Sections 5 and 6, respectively.

2. Preliminaries

2.1. The K Shortest Paths problem

Let G = (V, E) be a directed weighted graph, where V is the set of nodes and E ⊆ V × V is the set of edges. Given an edge e = (u, v) ∈ E, we call u the tail(e) and v the head(e). Let w: E → R>0 be a function

Wmin    minimum weight between two nodes
A       positive constant
B       positive constant
C       positive constant
fik     the kth fire of ni
Li(t)   linking field of ni
tfi     latest fire time of ni
pci     current parent of ni
Ri      neighbors of ni
ti(k)   the kth fire time of ni
wij     weight from ni to nj
Yi(t)   output of ni
Ui(t)   internal activity of ni

mapping each edge to a positive real-valued weight or length. Let s, t ∈ V denote the source and the target node, respectively. A path in G is denoted by P. Without loss of generality, we denote by P^n the nth shortest s–t path in G. The length of a path p = v1 → v2 → ⋯ → vn is defined as the sum of its edge lengths:

l(P) = ∑_{i=1}^{n-1} w(vi, vi+1)    (1)
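Eq. (1) can be sketched directly in code. The dictionary-of-edge-weights representation below is our own illustration, not a structure from the paper:

```python
# Sketch of Eq. (1): the length of a path is the sum of its edge weights.
# The edge-weight dictionary is an illustrative choice of representation.

def path_length(path, w):
    """l(P) = sum of w(v_i, v_{i+1}) over consecutive node pairs of the path."""
    return sum(w[(u, v)] for u, v in zip(path, path[1:]))

# Example with two edges of weights 4.1 and 0.5 (values are arbitrary).
w = {(1, 2): 4.1, (2, 3): 0.5}
assert abs(path_length([1, 2, 3], w) - 4.6) < 1e-9
```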

For an arbitrary pair of nodes u and v, d(u, v) denotes the length of the shortest path from u to v, and d(u, s) is abbreviated to d(u). If there is no path from u to v, then d(u, v) = +∞. The KSP problem consists of finding the k shortest paths in a directed weighted graph in non-decreasing order, i.e., enumerating the paths from s to t in non-decreasing order with respect to their lengths.

2.2. Typical PCNN model

A typical neuron of a PCNN consists of three parts, as shown in Fig. 1: the receptive fields, the modulation fields and the pulse generator. The neuron receives input signals from neighbor neurons and external sources in the receptive fields. The receptive fields can be divided into two channels: the feeding inputs and the linking inputs. The modulation fields generate the internal activity of the neuron. The pulse generator receives the total internal activity Ui and determines the firing events. Let K be the total number of iterations and k = 0, 1, …, K-1 be the current iteration; then the PCNN can be described by the following equations:

Fi(k) = e^{-αF} Fi(k-1) + ∑_{j=1}^{N} Mij Yj(k-1) + Ii    (2)

Li(k) = e^{-αL} Li(k-1) + ∑_{j=1}^{N} Wij Yj(k-1) + Ji    (3)

Ui(k) = Fi(k) (1 + β Li(k))    (4)

θi(k) = e^{-αθ} θi(k-1) + Vθ Yi(k-1)    (5)

Yi(k) = step(Ui(k) - θi(k-1)) = { 1, if Ui(k) > θi(k-1); 0, otherwise }    (6)

Here, i stands for the position of the neuron in the map and N is the total number of neurons. F and L are the feeding and linking inputs, respectively. U is the internal activity generated by the modulation fields, and Y is the pulse output. αF and αL are time constants for feeding and linking, respectively, and β is the strength of the linking. M and W represent the constant synaptic weights; Ii and Ji are constant inputs. θi(k) represents the dynamic threshold of neuron i. If Ui is greater than the threshold, the output of neuron i turns to 1 and the neuron fires. Yi then feeds back to raise θi above Ui immediately, and the output of neuron i turns back to 0. Thus, a pulse output is produced. Clearly, the pulse generator is responsible for modeling the refractory period.

Fig. 1. The network structure of PCNN.
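One discrete update of Eqs. (2)-(6) can be sketched as follows. The two-neuron network, the weight matrices and all parameter values below are illustrative assumptions, not settings from the paper:

```python
import numpy as np

# Minimal sketch of one discrete PCNN iteration, Eqs. (2)-(6).
# All parameter values here are illustrative assumptions.

def pcnn_step(F, L, theta, Y, M, W, I, J,
              aF=0.5, aL=0.5, at=0.2, beta=0.1, Vt=20.0):
    F = np.exp(-aF) * F + M @ Y + I          # Eq. (2): feeding input
    L = np.exp(-aL) * L + W @ Y + J          # Eq. (3): linking input
    U = F * (1.0 + beta * L)                 # Eq. (4): internal activity
    Y_new = (U > theta).astype(float)        # Eq. (6): compare U(k) with theta(k-1)
    theta = np.exp(-at) * theta + Vt * Y     # Eq. (5): threshold decays, jumps after a fire
    return F, L, theta, U, Y_new

# Two neurons; only neuron 0 receives external input, so it fires first.
M = np.array([[0.0, 1.0], [1.0, 0.0]])
W = np.array([[0.0, 1.0], [1.0, 0.0]])
F, L, Y = np.zeros(2), np.zeros(2), np.zeros(2)
theta = np.ones(2)
I, J = np.array([2.0, 0.0]), np.zeros(2)
F, L, theta, U, Y = pcnn_step(F, L, theta, Y, M, W, I, J)
assert Y.tolist() == [1.0, 0.0]
```

Note the update order: Y(k) is computed against the previous threshold θ(k-1), exactly as Eq. (6) prescribes, and θ(k) uses the previous output Y(k-1) as in Eq. (5).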

3. The modified MCPCNN model 3.1. Notions and definitions

Fig. 2. An example graph.

The neural network architecture of the modified model can be regarded as a graph G = (V, E), where V is the set of nodes (neurons) and E is the set of edges (connections between neurons). An example graph is shown in Fig. 2. There is a one-to-one correspondence between the edges of graph G and the connections between neurons in the network architecture. The numbers of neurons and connections are assumed to be finite. To present the modified model more clearly, some notations and definitions are given in this subsection.

In the network architecture, Ri denotes the set of neurons which can be reached from neuron ni directly; in other words, for every j ∈ Ri there is an edge from i to j. The connection weight between ni and nj is denoted by wij. If nj is not in the neighbor set Ri, then wij = ∞. Notice that the weights between neurons may not be symmetric, i.e., wij ≠ wji in most cases. For example, for the graph shown in Fig. 2, the neighbor sets of the neurons are

R1 = {2, 4},  R2 = {1, 3, 5},  R3 = {2, 6},  R4 = {1, 5, 7},  R5 = {2, 4, 6, 8},  R6 = {3, 5, 9},  R7 = {4, 8},  R8 = {5, 7, 9},  R9 = {6, 8}

The connection weights from n2 to the other neurons are

w2j = { 4.1, j = 1;  0.5, j = 3;  2.3, j = 5;  ∞, j ∈ {4, 6, 7, 8, 9} }    (7)
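The neighbor sets and the weights of Eq. (7) can be encoded as plain dictionaries; `math.inf` stands in for the "infinite" weight of missing edges. The representation is our own illustration:

```python
import math

# The example graph of Fig. 2: neighbor sets R_i, and the weights from n2
# given in Eq. (7). math.inf models w_ij = infinity for non-neighbors.

R = {1: {2, 4}, 2: {1, 3, 5}, 3: {2, 6}, 4: {1, 5, 7}, 5: {2, 4, 6, 8},
     6: {3, 5, 9}, 7: {4, 8}, 8: {5, 7, 9}, 9: {6, 8}}

w2 = {j: {1: 4.1, 3: 0.5, 5: 2.3}.get(j, math.inf)
      for j in range(1, 10) if j != 2}

# The finite entries of w2 are exactly the neighbors R_2 of n2.
assert {j for j in w2 if not math.isinf(w2[j])} == R[2]
assert w2[3] == 0.5 and all(math.isinf(w2[j]) for j in (4, 6, 7, 8, 9))
```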

In our model, each neuron has a single output Y(t).

Definition 1. A neuron is said to fire at time T ≥ 0 if there exists ε > 0 such that

Y(t) = { 0, T - ε ≤ t < T;  1, t = T;  0, T < t ≤ T + ε }    (8)

We denote this time by ti(k), the kth fire time of ni. The set of all firing times of ni is denoted by

ϝi = {ti(f) : f = 1, 2, 3, …} = {t | Ui(t) = θi(t)}    (9)

Definition 2. We use fik to denote the kth fire of ni. Considering that a fire may be triggered directly by more than one neuron simultaneously, we use Pi(k) to denote the set of neurons that trigger the kth fire of ni, and Pi(k)f to denote the corresponding set of fires that trigger the kth fire of ni. If fjk (j ∈ Ri) directly triggers the mth fire of ni, we call nj a parent neuron of fim and fjk a parent fire of fim; in this case, nj ∈ Pi(m) and fjk ∈ Pi(m)f. For ni, any fire fjk (j ∈ Ri) could potentially become a parent fire of ni. We use pci ∈ Ri to denote the neuron that will directly determine the next fire of ni. No neuron fires until one of its neighbors has fired, except for the source neuron. For any given nj with j ∈ Ri, if the kth fire time ti(k) of ni is determined directly by tfj, the most recent fire time of nj, we say that the kth fire of ni occurs on the stimulation of nj, and we call j the parent of the kth fire of ni. Sometimes the parent of ni is changed by the fires of other neurons in Ri. For example, suppose neuron k is the parent of ni at time tfk and ni is going to fire at some future time τ, so that pci = k. If a neighbor nj of ni fires and the firing of nj could stimulate ni to fire before time τ, then j replaces k and becomes the new parent of ni, i.e., pci = j.
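The bookkeeping of Definition 2 amounts to storing, for each fire, the set of parent fires that triggered it. A minimal record for this, entirely our own illustration of the notation, could look like:

```python
from dataclasses import dataclass, field

# A record for one fire event, following Definition 2: each fire f_i^k keeps
# the list of parent fires that triggered it, so that paths can later be
# traced back to the source fire. The class itself is illustrative only.

@dataclass
class Fire:
    neuron: int                 # index i of the firing neuron n_i
    k: int                      # this is the kth fire of n_i
    time: float                 # firing time t_i^(k)
    parents: list = field(default_factory=list)  # parent fires, P_i^(k)f in the text

src = Fire(neuron=1, k=1, time=0.0)              # the source fire has no parents
f31 = Fire(neuron=3, k=1, time=0.5, parents=[src])
assert f31.parents[0].neuron == 1 and src.parents == []
```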

3.2. Neuron structure model

The proposed model is topologically organized with only local lateral connections among neurons. The source neuron fires first; the firing event then spreads out through the lateral connections among the neurons, like the propagation of a wave. Each neuron records its parent, i.e., the neighbor which caused it to fire. It can be proved that the wave generated in the network spreads outward with travel times proportional to the connection weights between neurons. The modified MCPCNN model is shown in Fig. 3. In the model, each ni has one output Yi:

Yi(t) = Step(Ui(t) - θi(t)) = { 1, Ui(t) ≥ θi(t);  0, otherwise }    (10)

For i = 1, 2, …, N, Ui(t) and θi(t) are the internal activity and the threshold function, respectively, t denotes the time, and N = |V| is the total number of neurons. The threshold function of ni can be expressed


as

θi(t) = { AInit, pci = NULL;  f(wi,pci), pci ≠ NULL and t < ti(k);  Vθ, t ≥ ti(k) }    (11)

Here, AInit and Vθ are positive constants; Vθ is set to a very large value, and f(wij) is defined as a monotonically decreasing function:

f(wij) = { A/wij, i ∈ Rj;  ∞, otherwise }    (12)

where A is a positive constant. The linking field and the feeding field are expressed as

Li(t) = l(Yri1, Yri2, …, Yrik, t) = { 0, pci = NULL;  1/w²i,pci, otherwise }    (13)

Fi(t) = g(wiri1, wiri2, …, wirik, t) Ui²(t)    (14)

where ri1, ri2, …, rik are the neighbor neurons of ni, and wiri1, wiri2, …, wirik are the link strengths from ni to its k neighbor neurons. The internal activity Ui(t) of each neuron determines the firing events and is given by the initial value problem

Ui(t) = 0, pci = NULL;  dUi(t)/dt = -Fi(t) + C Li(t), otherwise    (15)

Here, C is a positive constant and g(·) is a function of the outputs and connection weights of the neighbors and the time t:

g(wiri1, wiri2, …, wirik, t) = { 0, pci = NULL;  B, otherwise }    (16)

where B is a positive constant. In general, the proposed model is expressed by Eqs. (10)-(17). One special case remains: if more than one neuron triggers ni to fire at time tpci, the neuron with the minimum linking strength to ni is selected as pci. We use Pi(t) to denote the set of neurons that can cause ni to fire at its next fire time; clearly, pci ∈ Pi(t) and

pci = arg min {wik | k ∈ Pi(t)}    (17)

We record all neurons in Pi(t) in the proposed model in order to track back the fire traces. The number of neurons in Pi(t) is the number of neurons that trigger the same fire at the same time, and we regard all of these simultaneous fires as just one fire. As we will illustrate later, a fire represents a path; thus, the simultaneous paths are merged into one path. That is why all neurons in Pi(t) are recorded.

Fig. 3. The modified PCNN model.

3.3. Theoretical analysis

In this subsection, some theoretical results are derived to show the performance of the modified network model when it is used to solve the K Shortest Paths problem.

Theorem 1. Consider neurons i, j and k, where i ∈ Rj and i ∈ Rk. Suppose ni fires at time ti, and nj and nk will fire at future times tj and tk, respectively, and ni remains the parent of both nj and nk before they fire, that is, pcj = i for all ti < t < tj and pck = i for all ti < t < tk. If both of the following conditions hold:

0 < A < √(C/B)    (18)

(1/Wmin) √(C/B) < Vθ    (19)

then both nj and nk fire exactly once at some time after ti, and the firing times of these two neurons satisfy

tj - ti = (wij / wik)(tk - ti)    (20)

Proof. According to Eqs. (11)-(16), for any ni ∈ Rj with i = pcj, Uj is determined by

Uj(ti) = 0;  dUj(t)/dt = -B Uj²(t) + C/wij²    (21)

for all ti ≤ t < tj, from which we obtain

Uj(t) = (1/wij) √(C/B) · (1 - e^{-2√(BC)(t-ti)/wij}) / (1 + e^{-2√(BC)(t-ti)/wij})    (22)

Clearly, Uj is strictly increasing from zero. If both Conditions (18) and (19) hold, we get

lim_{t→∞} Uj(t) = (1/wij) √(C/B) < (1/Wmin) √(C/B) < Vθ    (23)

0 < θj(t) = A/wij < (1/wij) √(C/B) = lim_{t→∞} Uj(t)    (24)

Thus, by the continuity of Uj, there must exist tj > ti satisfying

Uj(t) < θj(t) = A/wij, t < tj;  Uj(t) = θj(t) = A/wij, t = tj;  Vθ > Uj(t) > θj(t) = A/wij, t > tj    (25)

Thus, the output of nj can be expressed as

Yj(t) = Step(Uj(t) - θj(t)) = { 0, t < tj;  1, t = tj;  0, t > tj }    (26)

This shows that nj fires exactly once under the theorem conditions if pcj does not change. The next step of the proof establishes the firing-time relationship between ni and nj stated in Eq. (20). Since i ∈ Rj and i ∈ Rk, and both Conditions (18) and (19) hold, we conclude from the above that nj and nk will fire at some time in the future. For nj and nk, from Eqs. (22) and (25) we get

Uj(tj) = (1/wij) √(C/B) · (1 - e^{-2√(BC)(tj-ti)/wij}) / (1 + e^{-2√(BC)(tj-ti)/wij}) = A/wij    (27)

Uk(tk) = (1/wik) √(C/B) · (1 - e^{-2√(BC)(tk-ti)/wik}) / (1 + e^{-2√(BC)(tk-ti)/wik}) = A/wik    (28)


Then

(1 - e^{-2√(BC)(tj-ti)/wij}) / (1 + e^{-2√(BC)(tj-ti)/wij}) = (1 - e^{-2√(BC)(tk-ti)/wik}) / (1 + e^{-2√(BC)(tk-ti)/wik})    (29)

thus we get

e^{-2√(BC)(tj-ti)/wij} = e^{-2√(BC)(tk-ti)/wik}    (30)

This means that

tj - ti = (wij / wik)(tk - ti)    (31)

This completes the proof. □

Theorem 1 shows that the time between the fire of pci and the fire of ni is proportional to the connection weight between these two neurons: smaller connection weights lead to earlier firing times. Thus, the modified network model can be used to solve the K Shortest Paths problem.

Theorem 2. Assume that i0 = j0 is the first fired neuron, and that PATHi and PATHj are the propagated paths of the wave from ni0 to any two other neurons, PATHi: i0 → i1 → ⋯ → ik → ik+1 → ⋯ → im and PATHj: j0 → j1 → ⋯ → jk → jk+1 → ⋯ → jn. Let L(PATHi) and L(PATHj) be the path lengths of PATHi and PATHj, respectively:

L(PATHi) = ∑_{k=0}^{m-1} w_{ik,ik+1}    (32)

L(PATHj) = ∑_{k=0}^{n-1} w_{jk,jk+1}    (33)

If both Conditions (18) and (19) hold, then

(tim - ti0) / (tjn - tj0) = ∑_{k=0}^{m-1} w_{ik,ik+1} / ∑_{k=0}^{n-1} w_{jk,jk+1} = L(PATHi) / L(PATHj)    (34)

i.e., the path length can be computed as the firing time along the path multiplied by a constant ratio L, where

L = 2√(BC) / ln( (1 + A√(B/C)) / (1 - A√(B/C)) )    (35)

Proof. Assume that neurons nik and nik+1 fire at times tik and tik+1, respectively, and that ik = pcik+1 for tik ≤ t < tik+1, i.e., nik remains the parent of nik+1 until nik+1 fires. From Eqs. (11), (12) and (15) we get

dUik+1(t)/dt = -B U²ik+1(t) + C/w²_{ik,ik+1}    (36)

θik+1(t) = f(w_{ik,ik+1}) = A/w_{ik,ik+1}    (37)

For tik ≤ t < tik+1, solving the differential equation for Uik+1 yields

Uik+1(t) = (1/w_{ik,ik+1}) √(C/B) · (1 - e^{-2√(BC)(t-tik)/w_{ik,ik+1}}) / (1 + e^{-2√(BC)(t-tik)/w_{ik,ik+1}})    (38)

Thus, from Definition 1, we obtain

Uik+1(tik+1) = θik+1(tik+1) = A/w_{ik,ik+1}    (39)

and therefore

tik+1 - tik = (w_{ik,ik+1} / (2√(BC))) · ln( (1 + A√(B/C)) / (1 - A√(B/C)) )    (40)

that is,

tik+1 - tik = w_{ik,ik+1} / L    (41)

with L given by Eq. (35). Then, by telescoping,

tim - ti0 = (ti1 - ti0) + (ti2 - ti1) + ⋯ + (tim - tim-1) = (1/L) ∑_{k=0}^{m-1} w_{ik,ik+1}    (42)

and

tjn - tj0 = (tj1 - tj0) + (tj2 - tj1) + ⋯ + (tjn - tjn-1) = (1/L) ∑_{k=0}^{n-1} w_{jk,jk+1}    (43)

Thus,

(tim - ti0) / (tjn - tj0) = ∑_{k=0}^{m-1} w_{ik,ik+1} / ∑_{k=0}^{n-1} w_{jk,jk+1}    (44)

Then, according to Eqs. (32) and (33), we get

(tim - ti0) / (tjn - tj0) = L(PATHi) / L(PATHj)    (45)

This completes the proof. □
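The closed-form firing time of Eq. (40) can be checked numerically against the dynamics of Eq. (36): integrating dU/dt = -BU² + C/w² from U = 0 until the threshold A/w is reached should take exactly (w/(2√(BC)))·ln((1+A√(B/C))/(1-A√(B/C))). The parameter values below are arbitrary choices satisfying Condition (18):

```python
import math

# Numerical sanity check of Eqs. (36)-(40): Euler-integrate the neuron
# dynamics until the threshold A/w is crossed, and compare the elapsed time
# with the closed-form firing time. Parameters are illustrative only.

def fire_time_numeric(A, B, C, w, dt=1e-6):
    U, t = 0.0, 0.0
    while U < A / w:                          # fire when U reaches theta = A/w
        U += (-B * U * U + C / w ** 2) * dt   # Eq. (36)
        t += dt
    return t

def fire_time_closed(A, B, C, w):
    q = A * math.sqrt(B / C)                  # Condition (18) requires 0 < q < 1
    return w / (2.0 * math.sqrt(B * C)) * math.log((1 + q) / (1 - q))

A, B, C, w = 5.0, 1.0, 100.0, 2.0             # A < sqrt(C/B) = 10
assert abs(fire_time_numeric(A, B, C, w) - fire_time_closed(A, B, C, w)) < 1e-3

# Theorem 1 / Eq. (20): the firing delay is proportional to the edge weight.
ratio = fire_time_closed(A, B, C, 3.0) / fire_time_closed(A, B, C, 2.0)
assert abs(ratio - 1.5) < 1e-9
```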

According to Theorem 2, the propagation time of a wave is proportional to the length of the path it has traversed, so we can calculate the length of a path from the propagation time.

Lemma 1. If Conditions (18) and (19) hold, each neuron fires at most k times after the first neuron fires. Moreover, except for the source neuron, if a neuron has fired m times, there exist at least m shortest paths, with m distinct lengths, from the source neuron to that neuron.

Proof. For any neuron ni, Theorem 1 ensures that if any neuron nj ∈ Ri fires, ni must fire in the future. According to Eq. (11), after a neuron has fired k times its threshold is set to the very large value Vθ, which cannot be reached (Condition (19)). This ensures that each neuron fires at most k times. If a neuron fires, we can obtain the corresponding path by stepping back along the fire trace using the recorded parent fires: one fire yields one path. In this way, we obtain at least m paths from the source neuron to this neuron. Notice that the parent set Pi(t) of a fire need not contain just one neuron; recall that we choose the neuron with the minimum connection weight as pci in Eq. (17). Hence, a fire represents at least one path, and the paths deduced from different fires have distinct lengths. Thus, if a neuron has fired m times, there exist at least m shortest paths from the source neuron to that neuron. This completes the proof. □

Lemma 2. If both Conditions (18) and (19) hold, the maximum time Tmax between any two successive fire events is

Tmax = (Wmax / (2√(BC))) · ln( (1 + A√(B/C)) / (1 - A√(B/C)) )    (46)

where Wmax is the maximum connection weight between two nodes.

Proof. According to Theorem 1, the time needed to propagate a fire from one neuron to another is proportional to the connection


weight between the two neurons. Notice that the fire only propagates between neurons; thus, the maximum time between two fire events is the time needed to transfer a fire between two nodes that have the maximum connection weight. This completes the proof. □

Obviously, the notion of Tmax can also be used to terminate the network: if no fire event happens within Tmax time, we can safely terminate the network and complete the proposed algorithm.

4. K-Shortest Paths computation

4.1. The algorithms by MCPCNN

Suppose i1 is the first fired neuron in the network, and both Conditions (18) and (19) hold. The MCPCNN model then guarantees that the firing wave propagates from i1 to the other neurons along the shortest paths through the local connectivity of the neurons. For each neuron ni, if the waves of firing events arrive at it from several neurons nj at the same time, we record each such nj as a parent of ni, denoted by P(i); note that P(i) may contain more than one neuron. If ni fires for the kth time, there exist k shortest paths with distinct lengths from i1 to i. Thus, each neuron either fires k times, in which case there exist k shortest paths with distinct lengths from the start neuron i1 to that neuron, or fires fewer than k times, in which case there are fewer than k shortest paths from i1 to that neuron. In order to terminate the network, we wait for Tmax time, which can be computed by Eq. (46). We use Pi(t) to denote the current set of parents of ni at time t:

Pi(t) = { j ∈ Ri | j minimizes tj(f) + (wj,i / (2√(BC))) · ln( (1 + A√(B/C)) / (1 - A√(B/C)) ) }    (47)

The following algorithm shows the working steps of MCPCNN to find the k shortest paths in a network. Furthermore, this algorithm can be executed in a parallel manner, which can greatly improve the performance of path computation. Assuming that i1 is the source node in the network and Pi(t) is the parent set of ni, the algorithm to compute KSPs from a given node to every other node can be described as follows.

Algorithm 1. Single-source KSP algorithm using the MCPCNN model.

1. Initialize the network: set the constant parameters A, B, C and Vθ according to Conditions (18) and (19). Calculate Tmax according to Eq. (46). Set Uik(0) = 0 for all k = 2, 3, …, N and Ui1(0) = θi1(0) = AInit, which makes ni1 fire at time 0.
2. Run the network: for each neuron ni in the network, perform the following steps in parallel:
3. do
   Calculate Pi(t) and pci according to Eqs. (47) and (17).
   Calculate Ui(t) according to Eq. (15).
   Calculate θi(t) according to Eq. (11).
   Calculate Yi(t) according to Eq. (10).
   if Yi(t) = 1 and ni has fired no more than k times then
     (1) let ni fire and add t to ϝi;
     (2) record the parent fire set Pif which triggered ni to fire.
   If no neuron fires within Tmax time, stop the network.
4. while a neuron fires within Tmax time

After the first neuron fires, Theorem 1 ensures that the neurons reachable from a fired neuron will fire in the future, like a wave propagating through the network. Theorem 2 ensures that the propagation time is proportional to the length of the path propagated by the wave. From Lemma 1, each neuron fires at most k times, and the paths are obtained in non-decreasing order. From Lemma 2, the network terminates correctly on a finite graph. Thus, the modified MCPCNN model can compute the k shortest paths in a finite graph.
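Since Theorems 1 and 2 make firing times proportional to path lengths, the continuous dynamics of Algorithm 1 can be emulated sequentially by a discrete-event simulation: each fire schedules candidate fires at its neighbors, delayed by the edge weight, and each neuron accepts at most k fires. The sketch below is our own sequential emulation (with the ratio L of Eq. (35) taken as 1), not the paper's parallel implementation; the graph and k are illustrative:

```python
import heapq

# Discrete-event sketch of the firing wave of Algorithm 1. Events are
# (time, neuron) pairs in a priority queue; by Theorem 1 a fire crosses an
# edge in time proportional to its weight (ratio L taken as 1 here).

def single_source_ksp(adj, src, k):
    """Return, for every node, its (at most k) distinct firing times, i.e.
    the k shortest loop-allowed path lengths from src (Lemma 1)."""
    fired = {u: [] for u in adj}                 # firing times of each neuron
    events = [(0.0, src)]                        # the source fires at time 0
    while events:
        t, u = heapq.heappop(events)
        if len(fired[u]) >= k:                   # threshold raised to V_theta, Eq. (11)
            continue
        if fired[u] and t == fired[u][-1]:       # simultaneous fires merge into one
            continue
        fired[u].append(t)
        for v, w in adj[u].items():              # schedule fires at the neighbors
            heapq.heappush(events, (t + w, v))
    return fired

adj = {1: {2: 1.0, 3: 4.0}, 2: {3: 1.0}, 3: {2: 1.0}}
times = single_source_ksp(adj, 1, 3)
assert times[3] == [2.0, 4.0, 6.0]   # 1-2-3, then 1-3 merged with 1-2-3-2-3, then longer loops
assert times[2] == [1.0, 3.0, 5.0]
```

Note how the second fire of n3 at time 4.0 merges two arrivals (the direct edge 1-3 and the looping path 1-2-3-2-3), mirroring the merging of simultaneous parent fires in Pi(t).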

With a small change to Algorithm 1, we obtain another algorithm that solves the single-pair KSP problem. The main difference is the stopping criterion of the network: stop when no neuron fires within Tmax time, or when the goal node has fired k times. For any ni, when a firing wave reaches it, there exists a firing sequence starting from ni1 and ending at ni. Each neuron in this sequence holds the index of its fire's parent. Through this parent information, the path from ni1 to ni can be determined readily: we simply track the parent information back to the start neuron ni1. To compute the paths represented by fjk (the kth fire of nj), we start from nj and add nj to the path. Assume that fik is the only parent fire of fjk; then we add ni to the path. We continue in this way until we encounter the original fire f01 (the first fire of the source neuron). If a fire has more than one parent fire, the number of paths increases: we copy the previous paths and step back through all parent fires. An example showing how to compute KSPs using the algorithms is presented in Appendix A.

4.2. Algorithm analysis

In the modified model, each neuron works as a leaky integrator, which can be implemented with VLSI technologies. A VLSI implementation of pulse-coupled neural networks can be found in [39]; it allows rapid calculation of Ui(t) and hence fast KSP computation. In our implementation, we use an iterative method to calculate Ui(t), which is governed by a differential equation. Assume that the total number of iterations needed to calculate Ui(t) is M; the complexity of the proposed algorithm is then O(nKM), where n is the number of neurons, i.e., the total number of nodes in the graph, and K is the predefined number of shortest paths. The total iteration number M for the differential equation of Ui(t) is determined by the structure of the network and the value of ΔU in each


iteration. Assume that Lmax is the maximum length among the computed paths; then M can be expressed as

M = ⌈Lmax / ΔU⌉    (48)

We can conclude from the above expression that the proposed algorithm is more efficient when the weights of the edges are small.
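The parent back-tracking described in Section 4.1 can be sketched as a small recursion: starting from a fire, follow its parent fires back to the source fire, branching whenever a fire has more than one parent. The tuple-based fire identifiers below are our own illustration of the stored parent information:

```python
# Sketch of the path back-tracking of Section 4.1: follow parent-fire links
# back to the source fire, branching on multiple parents. Fires are (neuron, k)
# pairs; the parent map is an illustrative stand-in for the recorded P_i^f sets.

def backtrack(fire, parents):
    """Return all paths (node lists, source first) represented by `fire`."""
    if not parents.get(fire):                    # the source fire: path starts here
        return [[fire[0]]]
    paths = []
    for pf in parents[fire]:                     # one branch per parent fire
        for p in backtrack(pf, parents):
            paths.append(p + [fire[0]])          # copy the path, append this neuron
    return paths

# f_3^1 has two parent fires, so it represents two merged paths (cf. Pi(t)).
parents = {(3, 1): [(2, 1), (1, 1)], (2, 1): [(1, 1)], (1, 1): []}
assert backtrack((3, 1), parents) == [[1, 2, 3], [1, 3]]
```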

5. Simulations

The goal of the simulations is to verify the theoretical results and the effectiveness of the modified MCPCNN model for solving the KSP problem. We first study the influence of the wave speed in the modified model, and then show the performance of the proposed algorithms when applied to single-pair and single-source KSP problems. The test graphs are randomly generated using the SPRAND generator of Cherkassky et al. [41]. The C code of this generator is contained in the SPLIB-1.4 library, available from the personal web page of Goldberg at http://www.avglab.com/andrew/. In our simulations, we restrict all edge weights to integers, simply to ease the simulation of the neurons. A fixed time step ∇T for iterating the network is also used, chosen so that the propagation time of a wave over a unit-length edge is exactly ∇T (see Eq. (49)). Therefore, all firing times of neurons are integer multiples of ∇T, which easily ensures the correctness of our simulations. We set Vθ = 100,000 and AInit = 10 for all neurons. The internal activity Ui(0) is initialized to 0 except for the starting neuron, whose internal activity is set to UStart(0) = AInit; this guarantees that the starting neuron fires first. For example, when n1 is the starting neuron, we set Ui(0) = 0 (i ≠ 1) and U1(0) = 10.

∇T = (1 / (2√(BC))) · ln( (1 + A√(B/C)) / (1 - A√(B/C)) )    (49)

The implementations are written in C++ and compiled with the g++ compiler; the simulations are carried out on a machine equipped with an Intel Pentium CPU (3.2 GHz) and 4 GB RAM, running Ubuntu Desktop 12.04 (64 bits).

5.1. Wave speed in the network

The spreading speed of the wave in MCPCNN is determined by L in Eq. (35), which represents the ratio between path lengths and firing times. L can be maximized by tuning A, B and C under the restrictions of Conditions (18) and (19). For all positive A, B, C, we construct the input space for L, as shown in Fig. 4.
From Condition (19), we get C < V_θ²W_min²B. Because V_θ and W_min are constants, we obtain the boundary line in the BC plane: the valid region lies between axis B and the line C′ defined by C = V_θ²W_min²B.

Fig. 4. Illustration of input space of A, B and C for computing L.

For an arbitrary point (0, b, c) between axis B and line C′ in the BC plane, we set tan(β) = c/b; then we can draw a vertical line A′ with the same values of B and C and varying A. From Condition (18), we get

0 < A < √(C/B) = √(tan β),    (50)

so the valid input of A on line A′ satisfies a < √(tan β). Let r = √(a² + b²), b = r cos β and c = r sin β. From Eq. (35), the speed L can then be expressed in terms of A, r and β as follows:

L = 2√(BC) / ln( (1 + A√(B/C)) / (1 − A√(B/C)) ) = √2 r √(sin 2β) / ln( (1 + A√(cot β)) / (1 − A√(cot β)) ).    (51)

Clearly, these substitutions make the input space easier to understand when computing L under the restrictions, since we can investigate the influence of A, r and β by fixing the other two variables. The results are shown in Fig. 5. From the results, it can be concluded that:

(1) For both L = f(a, b, c) and L = f(a, r, β), L has no extreme value: its lower bound is zero and it has no upper bound. A larger L can be obtained with a smaller A, as shown in Fig. 5(a).
(2) The value of L grows linearly as r increases (Fig. 5(b)), and a larger β also leads to a higher speed L, as shown in Fig. 5(c).

Therefore, a relatively optimized speed L can be obtained by adjusting A, B and C. First, an appropriately small A is essential; then we should set a smaller B and a relatively larger C, which guarantees that a larger r and β are obtained. The results in Fig. 6 demonstrate this conclusion.

In order to further address this problem, we conduct a practical KSP simulation to study the speed while tuning A, B and C. The SPRAND generator is applied to generate a graph with 1000 nodes and 20,000 edges, with edge weights limited between 1 and 30. We set V_θ = 100,000 to satisfy Condition (19). The value of A_Init is set to 10 for all neurons. For simplicity, we only find 3 KSPs from n1 to each other neuron. First, we fix B = 1 and C = 1000 and change A from 1 to 31. Under this scenario, we measure how long the network takes to compute the first shortest path. The simulation result in Table 1 shows that the computation time of MCPCNN increases monotonically with A: the wave needs only 12.00 time units to spread through the network when A = 1, and the spreading time grows as A increases.
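The two forms of L in Eq. (51) can be checked numerically. The sketch below (Python for illustration; the function names are ours) evaluates both and confirms the trends above, namely that L falls as A grows and rises linearly with r:

```python
import math

def speed_bc(A, B, C):
    """Wave speed L = 2*sqrt(BC) / ln((1 + A*sqrt(B/C)) / (1 - A*sqrt(B/C)))."""
    s = A * math.sqrt(B / C)
    return 2 * math.sqrt(B * C) / math.log((1 + s) / (1 - s))

def speed_polar(A, r, beta):
    """Equivalent form of Eq. (51) with B = r*cos(beta), C = r*sin(beta);
    valid while A < sqrt(tan(beta)) (Condition (18))."""
    s = A * math.sqrt(1 / math.tan(beta))
    return (math.sqrt(2) * r * math.sqrt(math.sin(2 * beta))
            / math.log((1 + s) / (1 - s)))

beta = 2 * math.pi / 5
print(speed_polar(0.1, 10, beta))  # faster wave for a small A
print(speed_polar(0.5, 10, beta))  # slower wave for a larger A
```

Since r factors out of the numerator, doubling r exactly doubles L, matching conclusion (2) above.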
The value of B has a similar effect on the spreading speed of the network. When A and C are fixed, e.g., A = 1, C = 1000, the same procedure is repeated for parameter B (from 0.1 to 999.1); only 12.00 time units are needed when B is 0.1. Therefore, we can conclude that the spreading speed is more sensitive to A than to B. By fixing A and B both to 1, we can see from Table 1 that the spreading time of the wave decreases as C grows. The practical simulations also confirm the above analysis of the speed L: smaller A and B lead to a faster-spreading wave in the network, while a smaller C slows down the wave propagation. Fig. 7 shows the related spreading speed when the values of A, B or C change. Meanwhile, if the values of A and B lie in a small range, the spreading time of the wave is linearly proportional to the change of A and B, while A affects the spreading time more strongly than B and C, as shown in Fig. 7(d). In the following simulations, for convenience of calculation, we set the constant parameters A = 1, B = 1 and C = 1000. Clearly, this parameter setting meets Conditions (18) and (19) in Theorem 1, ensuring that the starting neuron fires first.

Please cite this article as: G. Liu, et al., Computing k shortest paths using modified pulse-coupled neural network, Neurocomputing (2014), http://dx.doi.org/10.1016/j.neucom.2014.09.012i

G. Liu et al. / Neurocomputing ∎ (∎∎∎∎) ∎∎∎–∎∎∎

8

Fig. 5. The influence of A, r, β to L. (a) The value of L on the vertical line A0 with varying A as shown in Fig. 4, β ¼ 2π=5, r¼ 10. (b) The value of L when r varies, A ¼ 0.1 and β ¼ 2π=5. (c) The value of L when β varies, A ¼ 0:1 and r ¼ 10.

Fig. 6. Computing L with varying B and C, A ¼ 0:1. The value of L is denoted by different colors. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Table 1
Firing time (ms) of the network when A, B and C are changed.

  B = 1, C = 1000        A = 1, C = 1000        A = 1, B = 1
  A       Time           B         Time         C       Time
  1       12.00          0.10      12.00        1       1.12
  4       48.25          100.10    12.42        3       0.70
  7       85.41          200.10    12.91        31      0.20
  10      124.25         300.10    13.47        61      0.14
  13      165.80         400.10    14.14        91      0.11
  16      211.47         500.10    14.95        121     0.10
  19      263.52         600.10    15.98        151     0.09
  22      325.93         700.10    17.35        181     0.08
  25      407.15         800.10    19.37        211     0.07
  28      531.41         900.10    23.01        241     0.07
  31      874.81         999.10    49.78        271     0.06

5.2. Single-pair KSPs computation

For the single-source KSP problem, Ref. [5] proposed a heuristic single-source KSP algorithm to address the pathway-inference problem in gene networks, but the authors only discuss simple paths, as required by the application, where loops are not allowed. The proposed method has two features: one is to compute a set of shortest paths with k distinct path lengths in non-decreasing order; the other is to compute all shortest paths with lengths not larger than a given threshold. To the best of our knowledge, no published method solves exactly this problem. In order to show the efficiency of MCPCNN, we compare it with traditional methods that compute k shortest paths (possibly containing duplicate path lengths) where loops are allowed in the solution paths. As far as we know, the EA algorithm [6] is the most famous one that solves both kinds of KSP computation under the same constraints. LVEA [17] and K* [8] can be regarded as optimizations of EA, and K* is better than LVEA according to their experiments. Therefore, we compare our method with EA and K* on the single-pair KSP problem.

In this simulation, we applied the three algorithms to graphs randomly generated by the SPRAND generator, with the maximum connection weight set to 50 and the node number fixed at 1000 for each graph. The graphs differ in their edge numbers, which range from 2000 to 499,500 (fully connected 1000 nodes). We randomly selected 100 pairs of nodes as source and goal, and the average runtime is calculated for comparison. Notice that no heuristic estimates are used for K* in our simulation, since there is no additional information about the graph to build a heuristic estimate from. We let the number of explored nodes or explored edges grow by 20 percent in each run of A* for the K* algorithm, just as [8] did.

From Fig. 8 we can reach the following conclusions:

(1) The spreading time of MCPCNN grows slightly when k increases from zero, and is not sensitive to relatively large k. This is because the spreading time of MCPCNN is proportional to the longest path among all paths computed by MCPCNN: the running time grows with k in the small-k region and becomes steady when k is larger.
(2) The performance of K* and EA is nearly independent of k; this can be explained by the identical asymptotic time complexity of K* and EA, O(m + n log n + k), which is not sensitive to k.
(3) MCPCNN is inferior to K* and EA only when the graph is very sparse (the edge number is less than 20 times the node number); a denser graph favors MCPCNN over K* and EA. The sparser a graph is, the longer its shortest paths tend to be, and a longer path length leads to more spreading time for MCPCNN.
(4) As an optimization of EA, K* outperforms EA; Ref. [8] reached a similar conclusion.


Fig. 7. The spreading speed with varying A, B and C. (a) A changes with fixed B and C. (b) B changes with fixed A and C. (c) C changes with fixed A and B. (d) Sensitivity comparison of A, B and C.

Fig. 8. Runtime comparison between MCPCNN and K* for single-pair KSPs computation.

Fig. 9 shows the spreading time of MCPCNN for single-pair KSP computation on graphs with 1000 nodes and varying edge numbers. We can see that the spreading time of MCPCNN grows slightly when k increases from zero, and the time is not sensitive to k when k is larger, as explained above. The spreading time of MCPCNN drops drastically as the graph becomes more complex. This is because a complex graph has a relatively shorter longest path between nodes, which leads to less spreading time for MCPCNN. Since MCPCNN computes KSPs in a parallel way, the complexity of the graph does not increase the runtime of MCPCNN; only the path length and the wave speed determine its runtime.


It should be clear that when MCPCNN is applied to the single-pair KSP problem, the algorithm terminates either when the goal neuron has fired k times, in which case exactly k shortest paths with distinct lengths have been found, or when no neuron fires within T_wait time, in which case there exist fewer than k shortest paths with distinct lengths. Notice that whenever a neuron fires, there exists a path from this neuron back to the source neuron. In other words, in the process of computing k shortest paths from the source to the goal, the algorithm has also found many shortest paths from the source node to many other nodes. Thus, the proposed algorithm is especially advantageous for computing k shortest paths from the source node to every other node (single-source KSP computation).
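Since the edge weights are integers and the iteration step is fixed, the behaviour described above can be mimicked sequentially: each neuron may fire at up to k distinct times, a firing time equals a path length, and the wave front is played by a priority queue. The following Python sketch illustrates what the single-pair computation returns (the function and variable names are ours; this is not the paper's parallel implementation):

```python
import heapq

def k_distinct_lengths(adj, source, goal, k):
    """Return the k smallest distinct lengths of (possibly looping)
    source-goal paths, mirroring the MCPCNN termination rule: stop when
    the goal neuron has fired k times or no firing event remains."""
    fired = {u: [] for u in adj}          # distinct firing times per neuron
    events = [(0, source)]                # (arrival time = path length, neuron)
    while events and len(fired[goal]) < k:
        d, u = heapq.heappop(events)
        if len(fired[u]) >= k or d in fired[u]:
            continue                      # already fired k times, or at time d
        fired[u].append(d)                # the neuron fires ...
        for v, w in adj[u]:               # ... and stimulates its neighbours
            heapq.heappush(events, (d + w, v))
    return fired[goal]                    # fewer than k entries if no more paths

g = {0: [(1, 2), (2, 1)], 1: [(3, 1)], 2: [(1, 1), (3, 3)], 3: [(2, 1)]}
print(k_distinct_lengths(g, 0, 3, 3))  # [3, 4, 6]
```

On this small hypothetical graph the three distinct 0-3 path lengths are 3 (via 0-1-3 or 0-2-1-3), 4 (0-2-3) and 6 (a looping path); note that when the loop ends, many other neurons have also recorded their firing times, as remarked above.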

5.3. Single-source KSPs computation

As mentioned before, K* and LVEA can be regarded as optimizations of EA, with K* the most efficient; all of them can be used for the single-source KSP problem. Although the authors of K* did not state how to apply it to the single-source KSP problem, we extend K* for further comparison. Our extension of K* (named e-K*) works as follows: first we run A* to compute the complete shortest path tree; there is no need to stop A* and resume it later as in single-pair KSP computation, because we need KSPs from the source node to every other node. Then we create tree heaps for each node in the graph. Finally, using these tree heaps, we compute the KSPs node by node. Furthermore, all graph data needs to be loaded into main memory for e-K* because of the single-source computation; the on-the-fly feature used by K* for the single-pair KSP problem is no longer applicable here. As we need to compute a complete shortest path tree (containing all nodes), the performance of A* [43] and Dijkstra's algorithm [44] is exactly the same in this case; thus we do not need a heuristic search strategy in e-K* when solving the single-source KSP problem.

Fig. 9. The spreading times of the wave on different edge-number graphs.

From the above analysis, the time complexity of e-K* is O(m + n log n + nk): O(n log n) for A*, O(n log n) for the construction of the heaps, and O(nk) to compute the k shortest paths for every other node. We can also deduce the space complexity of e-K*, i.e., O(m + n log n + k).

We use graph data randomly generated by the SPRAND generator attributed to Cherkassky et al. [41]. As in the previous simulation, we fix the node number at 1000 with different edge numbers. Six source nodes are randomly selected for each graph, and the average time is calculated for comparison. We compute all k shortest paths from these source nodes to every other node. The simulation result is shown in Fig. 10. We can see that the runtime of e-K* is nearly proportional to k, which is explained by its asymptotic time complexity O(m + n log n + nk). When k increases by 1, e-K* needs one more step of searching the path graph per node, i.e., n more steps in total, which shows that this part of the complexity of e-K* depends on n and is proportional to k. As shown in Fig. 10, the runtime of MCPCNN is much smaller than that of e-K*, and the larger the edge number, the greater the advantage of MCPCNN.
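The single-source quantity that MCPCNN computes, i.e., for every node the k smallest distinct lengths of possibly looping paths from the source, can also be mimicked sequentially, with a priority queue standing in for the wave front. This is an illustrative Python sketch under integer weights (names are ours; it is not e-K* itself):

```python
import heapq

def single_source_k_distinct(adj, source, k):
    """For every node, the k smallest distinct lengths of (possibly looping)
    paths from source; the run ends when no firing event remains."""
    fired = {u: [] for u in adj}
    events = [(0, source)]
    while events:                          # "no neuron fires" <=> queue empty
        d, u = heapq.heappop(events)
        if len(fired[u]) >= k or d in fired[u]:
            continue
        fired[u].append(d)
        for v, w in adj[u]:
            heapq.heappush(events, (d + w, v))
    return fired

g = {0: [(1, 2), (2, 1)], 1: [(3, 1)], 2: [(1, 1), (3, 3)], 3: [(2, 1)]}
print(single_source_k_distinct(g, 0, 3))
```

On this hypothetical graph the source node 0 has no incoming edge, so it ends with fewer than k firing times, illustrating the "fewer than k shortest paths" case mentioned earlier.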

Fig. 10. Runtime comparison between MCPCNN and e-K* for single-source KSPs computation.


5.4. Route planning case on New York graph

Fig. 11. Propagation time of MCPCNN for single-source KSPs on different node-to-edge ratios.

Fig. 12. Runtime comparison on the New York map.

Route planning and its methods have been extensively researched in recent years; Ref. [42] presents a very comprehensive overview of route planning algorithms. The original route planning problem is to find an optimal, or even sub-optimal, route from a start to a goal location. KSP algorithms are used when alternative routes are required or when additional constraints on the routes need to be satisfied. Notice that in route planning, loops in a route are usually not desired, so algorithms for finding k shortest loopless paths, like Yen's algorithm [14], would seem more suitable for this application. However, this restriction makes the problem significantly harder [6]. For example, the computational complexity of Yen's algorithm is O(kn(m + n log n)), far from the O(m + n log n + k) obtained by EA [6], LVEA [17] and K* [8] without the restriction. For this reason, applying algorithms that find k shortest loop-containing paths and discarding the routes that contain loops represents an efficient solution to this problem.

In our experiments, we use the road map of New York City, available on the homepage of the 9th DIMACS Implementation Challenge [40]. This map consists of 264,346 nodes and 733,846 edges. We apply MCPCNN and K* to the graph in order to find the first 100 optimal routes between two randomly selected nodes. As the heuristic estimate for K*, the cosine law that computes the airline distance between two points is used, as in [8]. We assume that the earth radius is 6350 km, slightly smaller than the minimal actual value, in order to ensure that the airline-distance heuristic used in K* is admissible. In this simulation, we set V_θ = 10,000, A = 0.1, B = 1 and C = 1000, which satisfy Conditions (18) and (19). The value of A_Init is set to 10 for all neurons (Fig. 11).

We then compute the mean effort required by each algorithm to find the 100 optimal routes. The results are illustrated in Fig. 12, which shows the mean runtime required by K* and MCPCNN. It is clear that neither K* nor MCPCNN is sensitive to k, while MCPCNN is much more efficient than K* in our experiment: MCPCNN requires just one-tenth of the time taken by K*. A similar analysis can be found in Section 5.2. Notice that in our experiments A* was not resumed in K*, which is why the runtime curve of K* does not look like a step function. The above simulation shows the promising efficiency of MCPCNN, due to its parallel computation feature. As a matter of fact, MCPCNN works even faster when proper parameters A, B and C are set under Conditions (18) and (19), following the analysis in Section 5.1.

Fig. 13 reports the actual path computations between 629 W Lake Ave, Rahway, NJ, USA and 5456 Arthur Kill Rd, Staten Island, NY, USA obtained in our experiments. The ten best paths are plotted in Fig. 13(a); they appear as a single path because of their high similarity. Using the actual longitude-latitude coordinates, the obtained shortest path is shown on the real map in Fig. 13(b). We also give the best path found by Google Maps in Fig. 13(c), for a comparison that shows the usability of our method.

Fig. 13. Actual paths obtained using MCPCNN on the New York graph. (a) The 10 shortest paths by actual longitude and latitude coordinates. (b) The shortest path (red line) by MCPCNN presented on the real map. (c) The best path (blue line) obtained using Google Maps. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)
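The airline-distance heuristic mentioned above is the spherical law of cosines. A Python sketch for illustration (the function name is ours) with the deliberately shrunken 6350 km radius reads:

```python
import math

def airline_distance_km(lat1, lon1, lat2, lon2, radius_km=6350.0):
    """Great-circle distance by the spherical law of cosines. Using a radius
    below the true minimum keeps the heuristic admissible for K*."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    cos_c = (math.sin(p1) * math.sin(p2)
             + math.cos(p1) * math.cos(p2) * math.cos(math.radians(lon2 - lon1)))
    # clamp against floating-point rounding before acos
    return radius_km * math.acos(max(-1.0, min(1.0, cos_c)))

# one tenth of a degree of latitude is roughly 11 km on this sphere
print(airline_distance_km(40.7, -74.0, 40.8, -74.0))
```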

Fig. A1. An example graph to illustrate the process of Algorithm 1.

6. Conclusions

We presented a modified neural network model to solve two kinds of k shortest paths (not required to be simple) problems: single-pair and single-source KSP computation. The MCPCNN model is topologically organized with only local lateral connections among neurons. We have proved that the spiking waves generated in the proposed network spread at a constant speed, which guarantees that the waves propagate along the shortest paths from a given node to every other node. By using the parallel pulse transmission characteristic of pulse-coupled neural networks, the method is able to find k shortest paths quickly. The computational complexity is only related to the length of the longest shortest path. Each neuron in the model works as a sensor that propagates the firing event to its neighbors without any comparative computations. Simulation results show the superior performance of MCPCNN over other methods.

Acknowledgments

The authors wish to thank the referees for their valuable comments and suggestions. This work was supported by the Fundamental Research Funds for the Central Universities under Grant ZYGX2013J076, and partially supported by the National Science Foundation of China under Grants 61273308 and 61175061.

Appendix A. An example using the proposed algorithm

Using this simple example, we show how our algorithms work by manual computation. As shown in Fig. A1, we set n0 as the start neuron, so n0 fires first. We intend to find 3 shortest paths from n0 to each other node. We set A = 1, B = 1, C = 100 and V_θ = 1000, which satisfy Conditions (18) and (19). We execute Algorithm 1 and consider the dynamics of each neuron step by step as follows (for brevity, we write dU_i = −U_i² + c for dU_i(t − t^(f)_{p^c_i}) = −U_i²(t − t^(f)_{p^c_i}) + c):

1. At time 0, n0 fires, thus t^(1)_0 = 0. Then R_0 = {1, 2, 3}, and for t ≥ t^(1)_0 = 0: dU_1 = −U_1² + 18.9, θ_1(t) = A/w_{0,1} = 0.43; dU_2 = −U_2² + 34.6, θ_2(t) = A/w_{0,2} = 0.58; dU_3 = −U_3² + 25.0, θ_3(t) = A/w_{0,3} = 0.50. The current parents of the above neurons are p^c_1 = 0, p^c_2 = 0, p^c_3 = 0, and P_1(t) = {0}, P_2(t) = {0}, P_3(t) = {0}.

2. When t = t^(1)_2 = t^(1)_0 + (w_{0,2}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.1760, then

   U_2(t) = (1/w_{0,2}) √(C/B) (1 − e^{−2√(BC)(t^(1)_2 − t^(1)_0)/w_{0,2}}) / (1 + e^{−2√(BC)(t^(1)_2 − t^(1)_0)/w_{0,2}}) = θ_2(t) = A/w_{0,2} = 0.5882.

   Thus Y_2(t) = 1 and n2 fires. Then R_2 = {1, 3, 4}, and for t ≥ t^(1)_2 = 0.1760: dU_1 = −U_1² + 123.4, θ_1(t) = A/w_{2,1} = 1.111; dU_3 = −U_3² + 25.0, θ_3(t) = A/w_{0,3} = 0.5000; dU_4 = −U_4² + 6.925, θ_4(t) = A/w_{2,4} = 0.2631. The current parents of the above neurons are p^c_1 = 2, p^c_3 = 0, p^c_4 = 2, and P_1(t) = {2}, P_3(t) = {0}, P_4(t) = {2}. After the fire, p^c_2 = NULL; set θ_2(t) = A_Init and U_2(t) = 0.

3. When t = t^(1)_3 = t^(1)_0 + (w_{0,3}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.2071, then U_3(t) = θ_3(t) = A/w_{0,3} = 0.5000.
   Thus Y_3(t) = 1, n3 fires, and R_3 = {4}; for t ≥ t^(1)_3 = 0.2071: dU_4 = −U_4² + 69.44, θ_4(t) = A/w_{3,4} = 0.8333; so p^c_4 = 3 and P_4(t) = {3}. After the fire, p^c_3 = 2, and dU_3 = −U_3² + 82.64, θ_3(t) = A/w_{2,3} = 0.9090.

4. When t = t^(1)_1 = t^(1)_0 + (w_{0,1}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.2382, then U_1(t) = θ_1(t) = A/w_{0,1} = 0.4348.
   Thus Y_1(t) = 1, n1 fires, and R_1 = {2}; for t ≥ t^(1)_1 = 0.2382: dU_2 = −U_2² + 5.669, θ_2(t) = A/w_{1,2} = 0.2381; so p^c_2 = 1 and P_2(t) = {1}. After the fire, p^c_1 = 2, and dU_1 = −U_1² + 123.4, θ_1(t) = A/w_{2,1} = 1.111.

5. When t = t^(2)_1 = t^(1)_2 + (w_{2,1}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.2692, then U_1(t) = θ_1(t) = A/w_{2,1} = 1.111.
   Thus Y_1(t) = 1, n1 fires, and R_1 = {2}; dU_2 = −U_2² + 5.669, θ_2(t) = A/w_{1,2} = 0.2381, for t ≥ t^(2)_1 = 0.2692; so p^c_2 = 1 and P_2(t) = {1}. After the fire, p^c_1 = NULL; set θ_1(t) = A_Init and U_1(t) = 0.


Table A1
The results of the single-source Algorithm 1 applied to Fig. A1.

  Firing sequence   Firing neuron[fire index]   Firing time   Parent neuron[index]
  1                 0[1]                        0             NULL[NULL]
  2                 2[1]                        0.1760        0[1]
  3                 3[1]                        0.2071        0[1]
  4                 1[1]                        0.2382        0[1]
  5                 1[2]                        0.2692        2[1]
  6                 3[2]                        0.2899        2[1]
  7                 4[1]                        0.3310        3[1]
  8                 4[2]                        0.4142        3[2]
  9                 1[3]                        0.5488        4[1]
  10                4[3]                        0.5695        2[1]
  11                2[2]                        0.6420        4[1]
  12                2[3]                        0.6730        1[1]
  13                3[3]                        0.7556        2[2]

Table A2
The obtained 3 KSPs in Fig. A1 (node 0 is the source neuron).

  Destination node   Path sequence   Path         Path length
  1                  1               0-1          2.3
  1                  2               0-2-1        2.6
  1                  3               0-3-4-1      5.3
  2                  1               0-2          1.7
  2                  2               0-3-4-2      6.2
  2                  3               0-1-2        6.5
  3                  1               0-3          2.0
  3                  2               0-2-3        2.8
  3                  3               0-3-4-2-3    7.3
  4                  1               0-3-4        3.2
  4                  2               0-2-3-4      4.0
  4                  3               0-2-4        5.5

6. When t = t^(2)_3 = t^(1)_2 + (w_{2,3}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.2899, then U_3(t) = θ_3(t) = A/w_{2,3} = 0.9090 (here and below, dU_i and U_i² are written with the argument t − t^(f)_{p^c_i} omitted).
   Thus Y_3(t) = 1, n3 fires, and R_3 = {4}; dU_4 = −U_4² + 69.44, θ_4(t) = A/w_{3,4} = 0.8333, p^c_4 = 3. After the fire, p^c_3 = NULL; set θ_3(t) = A_Init and U_3(t) = 0.

7. When t = t^(1)_4 = t^(1)_3 + (w_{3,4}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.3313, then U_4(t) = θ_4(t) = A/w_{3,4} = 0.8333.
   Thus Y_4(t) = 1, n4 fires, then R_4 = {1, 2, 3}; for t ≥ t^(1)_4 = 0.3313: dU_1 = −U_1² + 22.67, θ_1(t) = A/w_{4,1} = 0.4761; dU_2 = −U_2² + 11.11, θ_2(t) = A/w_{4,2} = 0.3333. The current parents of the above neurons are p^c_1 = 4, p^c_2 = 4, p^c_3 = 4, and P_1(t) = {4}, P_2(t) = {4}, P_3(t) = {4}. After the fire, p^c_4 = 3, and dU_4 = −U_4² + 69.44, θ_4(t) = A/w_{3,4} = 0.8333.

8. When t = t^(2)_4 = t^(2)_3 + (w_{3,4}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.4142, then U_4(t) = θ_4(t) = A/w_{3,4} = 0.8333.
   Thus Y_4(t) = 1, n4 fires, and R_4 = {1, 2, 3}. After the fire, p^c_4 = 2, and dU_4 = −U_4² + 6.925, θ_4(t) = A/w_{2,4} = 0.2631.

9. When t = t^(3)_1 = t^(1)_4 + (w_{4,1}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.5488, then U_1(t) = θ_1(t) = A/w_{4,1} = 0.4761.
   Thus Y_1(t) = 1, n1 fires, then R_1 = {2}; dU_2 = −U_2² + 11.11, θ_2(t) = A/w_{4,2} = 0.3333, for t ≥ t^(1)_4 = 0.3313; so p^c_2 = 4 and P_2(t) = {4}. After the fire, n1 has fired k = 3 times; thus p^c_1 = NULL, set θ_1(t) = V_θ = 1000, and n1 will not change any more.

10. When t = t^(3)_4 = t^(1)_2 + (w_{2,4}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.5695, then U_4(t) = θ_4(t) = A/w_{2,4} = 0.2631.
   Thus Y_4(t) = 1, n4 fires, then R_4 = {1, 2, 3}; θ_1(t) = V_θ = 1000; dU_2 = −U_2² + 11.11, θ_2(t) = A/w_{4,2} = 0.3333; dU_3 = −U_3² + 2.872, θ_3(t) = A/w_{4,3} = 0.1694. Thus p^c_1 = NULL, p^c_2 = 4, p^c_3 = 4, and P_1(t) = {NULL}, P_2(t) = {4}, P_3(t) = {4}. After the fire, n4 has fired k = 3 times; thus p^c_4 = NULL, set θ_4(t) = V_θ = 1000, and n4 will not change any more.

11. When t = t^(2)_2 = t^(1)_4 + (w_{4,2}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.6420, then U_2(t) = θ_2(t) = A/w_{4,2} = 0.3333.
   Thus Y_2(t) = 1, n2 fires, and R_2 = {1, 3, 4}; θ_1(t) = V_θ = 1000; dU_3 = −U_3² + 82.64, θ_3(t) = A/w_{2,3} = 0.9090; θ_4(t) = V_θ = 1000, for t ≥ t^(2)_2 = 0.6420; so p^c_1 = NULL, p^c_3 = 2, p^c_4 = NULL, and P_1(t) = {NULL}, P_3(t) = {2}, P_4(t) = {NULL}. After the fire, p^c_2 = 1, and dU_2 = −U_2² + 5.669, θ_2(t) = A/w_{1,2} = 0.2381.

12. When t = t^(3)_2 = t^(1)_1 + (w_{1,2}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.6730, then U_2(t) = θ_2(t) = A/w_{1,2} = 0.2381.
   Thus Y_2(t) = 1, n2 fires, then R_2 = {1, 3, 4}; θ_1(t) = V_θ = 1000; dU_3 = −U_3² + 82.64, θ_3(t) = A/w_{2,3} = 0.9090; θ_4(t) = V_θ = 1000, for t ≥ t^(3)_2 = 0.6730. The current parents of the above neurons are p^c_1 = NULL, p^c_3 = 2, p^c_4 = NULL, and P_1(t) = {NULL}, P_3(t) = {2}, P_4(t) = {NULL}. After the fire, n2 has fired k = 3 times; thus p^c_2 = NULL, set θ_2(t) = V_θ = 1000, and n2 will not change any more.



13. When t = t^(3)_3 = t^(2)_2 + (w_{2,3}/(2√(BC))) ln((1 + A√(B/C))/(1 − A√(B/C))) ≈ 0.7556, then U_3(t) = θ_3(t) = A/w_{2,3} = 0.9090.
   Thus Y_3(t) = 1, n3 fires, then R_3 = {4}; θ_4(t) = V_θ = 1000, p^c_4 = NULL. After the fire, n3 has fired 3 times; thus p^c_3 = NULL, set θ_3(t) = V_θ = 1000, and n3 will not change any more.

14. When t = 1.366, no neuron has fired for T_max = 0.6109 time units, so the network can be terminated.

In this example, we described how the network works rather than the detailed calculation of the parent neurons p^c_i. The final firing results and the obtained 3 shortest paths are listed in Tables A1 and A2, respectively. In Table A1, the first two columns give the firing sequence and the firing neuron; the last two columns record the firing time and the parent neuron, i.e., the neuron that stimulated it to fire. The number in brackets is the fire index of the neuron. In Table A2, the shortest paths from node n0 to each other node are listed. Notice that we obtain the shortest paths from the information recorded for each fire, by walking from the fire of a neuron along the parent fires back to the original fire. For example, to obtain the shortest path P1 from node 0 to node 4, we start from fire f41, set the path P1 to empty, and add 4 to P1; because f31 is the parent fire of f41, we add 3 to P1; then we add 0 to P1, since f01 is the parent fire of f31; as f01 is the original fire, the path is complete, and the result P1 is 0-3-4. The length of P1 can be computed from the firing time of f41: by Theorem 3.3 the path length is proportional to the firing time, with the ratio L given by Eq. (44); here L = 9.657, and the length of P1 is 3.2.
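The backtracking just described can be spelled out on the firing records of Table A1 (a Python transcription for illustration; the dictionary encoding is ours):

```python
# (neuron, fire index) -> (firing time, parent fire or None); the first
# firings transcribed from Table A1.
fires = {
    (0, 1): (0.0,    None),
    (2, 1): (0.1760, (0, 1)),
    (3, 1): (0.2071, (0, 1)),
    (1, 1): (0.2382, (0, 1)),
    (4, 1): (0.3310, (3, 1)),
}

def path_of(fire):
    """Walk parent fires back to the original fire and reverse the result."""
    nodes = []
    while fire is not None:
        nodes.append(fire[0])
        fire = fires[fire][1]
    return nodes[::-1]

print(path_of((4, 1)))  # [0, 3, 4], i.e. the path P1 = 0-3-4
```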

References

[1] J. Berclaz, F. Fleuret, E. Turetken, et al., Multiple object tracking using k-shortest paths optimization, IEEE Trans. Pattern Anal. Mach. Intell. 33 (9) (2011) 1806–1819.
[2] B. Ozer, G. Gezici, C. Meydan, et al., Multiple sequence alignment based on structural properties, in: The Fifth International Symposium on Health Informatics and Bioinformatics (HIBIT), 2010, pp. 39–44.
[3] W. Xu, S. He, R. Song, et al., Finding the K shortest paths in a schedule-based transit network, Comput. Oper. Res. 39 (8) (2012) 1812–1826.
[4] X. Wan, L. Wang, N. Hua, et al., Dynamic routing and spectrum assignment in flexible optical path networks, in: Optical Fiber Communication Conference, 2011, Optical Society of America, New York.
[5] Y.K. Shih, S. Parthasarathy, A single source k-shortest paths algorithm to infer regulatory pathways in a gene network, Bioinformatics 28 (12) (2012) 49–58.
[6] D. Eppstein, Finding the k shortest paths, SIAM J. Comput. 28 (2) (1998) 652–673.
[7] W. Hoffman, R. Pavley, A method of solution of the Nth best path problem, J. ACM 6 (1959) 506–514.
[8] H. Aljazzar, S. Leue, K*: a heuristic search algorithm for finding the k shortest paths, Artif. Intell. 175 (2011) 2129–2154.
[9] A. Sedeno-Noda, An efficient time and space K point-to-point shortest simple paths algorithm, Appl. Math. Comput. 218 (20) (2012) 10244–10257.
[10] A. Sedeno-Noda, J.J. Espino-Martin, On the K best integer network flows, Comput. Oper. Res. 40 (2) (2013) 616–626.
[11] J. Hershberger, M. Maxel, S. Suri, Finding the k shortest simple paths: a new algorithm and its implementation, ACM Trans. Algorithms 3 (4) (2007) 45.
[12] H.H. Yang, Y.L. Chen, Finding K shortest looping paths in a traffic-light network, Comput. Oper. Res. 32 (3) (2005) 571–581.
[13] E.Q. Martins, M.M. Pascoal, A new implementation of Yen's ranking loopless paths algorithm, Q. J. Belg. Fr. Ital. Oper. Res. Soc. 1 (2) (2003) 121–133.
[14] J.Y. Yen, Finding the K shortest loopless paths in a network, Manag. Sci. 17 (11) (1971) 712–716.
[15] J.Y. Yen, Another algorithm for finding the k shortest-loopless network paths, in: Proceedings of the 41st Meeting on Operations Research Society of America, vol. 20, 1972, p. B/185.
[16] V.M. Jimenez, A. Marzal, Computing the k shortest paths: a new algorithm and an experimental comparison, in: The Third International Workshop on Algorithm Engineering (WAE'99), 1999, pp. 15–19.
[17] V.M. Jimenez, A. Marzal, A lazy version of Eppstein's shortest paths algorithm, Lecture Notes in Computer Science, vol. 2647, 2003, pp. 179–190.
[18] J.J. Hopfield, D.W. Tank, "Neural" computation of decisions in optimization problems, Biol. Cybern. 52 (3) (1985) 141–152.
[19] R. Eckhorn, H.J. Reitboeck, M. Arndt, P.W. Dicke, Feature linking via synchronization among distributed assemblies: simulations of results from cat visual cortex, Neural Comput. 2 (1990) 293–307.
[20] J.L. Johnson, D. Ritter, Observation of periodic waves in a pulse-coupled neural network, Opt. Lett. 18 (15) (1993) 1253–1255.
[21] L. Ji, Z. Yi, L. Shang, X. Pu, Binary fingerprint image thinning using template-based PCNNs, IEEE Trans. Syst. Man Cybern. Part B 37 (5) (2007) 1407–1413.
[22] Z. Wang, Y. Ma, J. Gu, Multi-focus image fusion using PCNN, Pattern Recognit. 43 (6) (2010) 2003–2016.
[23] S.H. Ranganath, G. Kuntimad, Object detection using pulse coupled neural networks, IEEE Trans. Neural Netw. 10 (1999) 615–620.
[24] X. Zhang, A. Minai, Temporally sequenced intelligent block-matching and motion-segmentation using locally coupled networks, IEEE Trans. Neural Netw. 15 (5) (2004) 1202–1214.
[25] R.C. Muresan, Pattern recognition using pulse-coupled neural networks and discrete Fourier transforms, Neurocomputing 51 (2003) 487–493.
[26] V. Ravi, C. Pramodh, Threshold accepting trained principal component neural network and feature subset selection: application to bankruptcy prediction in banks, Appl. Soft Comput. 8 (4) (2008) 1539–1548.
[27] H.J. Caulfield, M. Kinser, Finding the path in the shortest time using PCNNs, IEEE Trans. Neural Netw. 10 (3) (1999) 604–606.
[28] X. Wang, H. Qu, Z. Yi, A modified pulse coupled neural network for shortest-path problem, Neurocomputing 72 (13–15) (2009) 3028–3033.
[29] X. Gu, L. Zhang, D. Yu, Delay PCNN and its application for optimization, Lecture Notes in Computer Science, vol. 3173, 2004, pp. 413–418.
[30] J.A. Bednar, A. Kelkar, R. Miikkulainen, Modeling large cortical networks with growing self-organizing maps, Neurocomputing 44–46 (2002) 315–321.
[31] H. Ritter, T. Martinetz, K. Schulten, Topology-conserving maps for learning visuo-motor-coordination, Neural Netw. 2 (1989) 159–168.
[32] D.V. Lebedev, J.J. Steil, H.J. Ritter, The dynamic wave expansion neural network model for robot motion planning in time-varying environments, Neural Netw. 18 (2005) 267–285.
[33] H. Qu, Z. Yi, A new algorithm for finding the shortest paths using PCNNs, Chaos Solitons Fractals 33 (2007) 1220–1229.
[34] H. Qu, S.X. Yang, A.R. Willms, Z. Yi, Real-time robot path planning based on a modified pulse coupled neural network model, IEEE Trans. Neural Netw. 20 (11) (2009) 1724–1739.
[35] H. Qu, S.X. Yang, Z. Yi, X. Wang, A novel neural network method for shortest path tree computation, Appl. Soft Comput. 12 (10) (2012) 3246–3259.
[36] H. Qu, Z. Yi, S.M. Yang, Efficient shortest path tree computation in network routing based on pulse coupled neural networks, IEEE Trans. Syst. Man Cybern. Part B 43 (3) (2012) 995–1010.
[37] X. Li, Y. Ma, X. Feng, Self-adaptive autowave pulse-coupled neural network for shortest-path problem, Neurocomputing 115 (2013) 63–71.
[38] Z. Wang, Y. Ma, F. Cheng, et al., Review of pulse-coupled neural networks, Image Vis. Comput. 28 (1) (2010) 5–13.
[39] Y.
[26] V. Ravi, C. Pramodh, Threshold accepting trained principal component neural network and feature subset selection: application to bankruptcy prediction in banks, Appl. Soft Comput. 8 (4) (2008) 1539–1548.
[27] H.J. Caulfield, M. Kinser, Finding the path in the shortest time using PCNNs, IEEE Trans. Neural Netw. 10 (3) (1999) 604–606.
[28] X. Wang, H. Qu, Z. Yi, A modified pulse coupled neural network for shortest-path problem, Neurocomputing 72 (13–15) (2009) 3028–3033.
[29] X. Gu, L. Zhang, D. Yu, Delay PCNN and its application for optimization, Lecture Notes in Computer Science, vol. 3173, 2004, pp. 413–418.
[30] J.A. Bednar, A. Kelkar, R. Miikkulainen, Modeling large cortical networks with growing self-organizing maps, Neurocomputing 44–46 (2002) 315–321.
[31] H. Ritter, T. Martinetz, K. Schulten, Topology-conserving maps for learning visuo-motor coordination, Neural Netw. 2 (1989) 159–168.
[32] D.V. Lebedev, J.J. Steil, H.J. Ritter, The dynamic wave expansion neural network model for robot motion planning in time-varying environments, Neural Netw. 18 (2005) 267–285.
[33] H. Qu, Z. Yi, A new algorithm for finding the shortest paths using PCNNs, Chaos Solitons Fractals 33 (2007) 1220–1229.
[34] H. Qu, S.X. Yang, A.R. Willms, Z. Yi, Real-time robot path planning based on a modified pulse coupled neural network model, IEEE Trans. Neural Netw. 20 (11) (2009) 1724–1739.
[35] H. Qu, S.X. Yang, Z. Yi, X. Wang, A novel neural network method for shortest path tree computation, Appl. Soft Comput. 12 (10) (2012) 3246–3259.
[36] H. Qu, Z. Yi, S.M. Yang, Efficient shortest path tree computation in network routing based on pulse coupled neural networks, IEEE Trans. Syst. Man Cybern. Part B 43 (3) (2012) 995–1010.
[37] X. Li, Y. Ma, X. Feng, Self-adaptive autowave pulse-coupled neural network for shortest-path problem, Neurocomputing 115 (2013) 63–71.
[38] Z. Wang, Y. Ma, F. Cheng, et al., Review of pulse-coupled neural networks, Image Vis. Comput. 28 (1) (2010) 5–13.
[39] Y. Ota, VLSI structure for static image processing with pulse-coupled neural network, in: Proceedings of the Industrial Electronics Society, vol. 4, 2012, pp. 3221–3226.
[40] C. Demetrescu, A.V. Goldberg, D.S. Johnson, The Shortest Path Problem: Ninth DIMACS Implementation Challenge, vol. 74, American Mathematical Society, Providence, Rhode Island, 2009.
[41] B.V. Cherkassky, A.V. Goldberg, T. Radzik, Shortest paths algorithms: theory and experimental evaluation, Math. Program. 73 (1996) 129–174.
[42] P. Sanders, D. Schultes, Engineering fast route planning algorithms, Lecture Notes in Computer Science, vol. 4525, 2007, pp. 23–26.
[43] J. Pearl, Heuristics: Intelligent Search Strategies for Computer Problem Solving, Addison-Wesley Publishing Company, Massachusetts, 1984.
[44] E.W. Dijkstra, A note on two problems in connexion with graphs, Numer. Math. 1 (1959) 269–271.

Guisong Liu received his B.S. degree in Mechanics from Xi'an Jiaotong University, Xi'an, China, in 1995, and the M.S. degree in Automatics and the Ph.D. degree in Computer Science, both from the University of Electronic Science and Technology of China (UESTC, Chengdu, China), in 2000 and 2007, respectively. He is now an associate professor in the Computational Intelligence Laboratory, School of Computer Science and Engineering, UESTC. His research interests include computational intelligence, pattern recognition and machine learning.

Please cite this article as: G. Liu, et al., Computing k shortest paths using modified pulse-coupled neural network, Neurocomputing (2014), http://dx.doi.org/10.1016/j.neucom.2014.09.012

Zhao Qiu received his B.S. degree in Chemistry from Wuhan Textile University, Wuhan, China, in 2011. He is currently pursuing his M.S. degree in the Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China. His research interests include neural networks and combinatorial optimization.

Luping Ji received his B.S. degree in Mechanical & Electronic Engineering from Beijing Institute of Technology, Beijing, PR China, in 1999. He then received his M.S. degree in Computer Application & Technology in 2005 and his Ph.D. degree in Computer Software & Theory in 2008, both from the University of Electronic Science and Technology of China, Chengdu, PR China. Currently, he is working as an associate professor in the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, PR China. His current research interests include neural networks and pattern recognition.

Hong Qu received the B.S., M.S. and Ph.D. degrees in computer science from the University of Electronic Science and Technology of China, Chengdu, China, in 2000, 2003 and 2006, respectively. From 2007 to 2008, he was a Postdoctoral Fellow at the Advanced Robotics and Intelligent Systems Lab, School of Engineering, University of Guelph, Guelph, ON, Canada. Currently, he is a professor in the School of Computer Science, University of Electronic Science and Technology of China. His current research interests include neural networks, deep learning, robotics and optimization.
