Neural Networks, Vol. 7, No. 2, pp. 397-404, 1994
Copyright © 1994 Elsevier Science Ltd. Printed in the USA. All rights reserved. 0893-6080/94 $6.00 + .00

CONTRIBUTED ARTICLE

Optimal Path Determination in a Graph by Hopfield Neural Network

S. CAVALIERI, A. DI STEFANO, AND O. MIRABELLA

Università di Catania

(Received 20 July 1992; revised and accepted 4 October 1993)

Abstract--Recurrent stable neural networks seem to represent an interesting alternative to classical algorithms for the search for optimal paths in a graph. In this paper a Hopfield neural network is adopted to solve the problem of finding the shortest path between two nodes of a graph. The results obtained point out the validity of the solution proposed and its capability to adapt itself dynamically to the variations in the costs of the graph, acquiring an "awareness" of its structure.

Keywords--Optimization, Path searching in a graph, Hopfield net.

Requests for reprints should be sent to A. Di Stefano, Istituto di Informatica e Telecomunicazioni, Facoltà di Ingegneria, Università di Catania, V.le A. Doria 6, 95125 Catania, Italy.

1. INTRODUCTION

As is well known, arbitrary relationships between objects that occur in a great number of problems in the real world (road and railway networks, networks of computers, etc.) can be naturally represented by means of directed or undirected graphs. A graph is a set of objects called nodes, which are related to each other by weighted or unweighted arcs. The weight of the arc, which is also defined as the cost function, can express several parameters of the problem. Nodes and arcs assume a different meaning according to the scenario being represented: a railway network, for example, can be represented with a graph in which the nodes are the stations, the arcs are the stretches of railway between them, and the costs are the lengths of these stretches. The problems related to a number of practical applications are reduced to fundamental problems in graphs, such as the visiting and search for closed or open optimal paths (where an optimal path is a sequence of nodes connected by arcs with a minimum global cost). The algorithms available for the solution of these problems have different levels of complexity that are almost always high.

Recurrent stable neural networks represent an interesting alternative to classical algorithms for the solution of optimization problems because of the parallelism inherent in the neural approach and their accretive behaviour, that is, the fact that they are capable of converging on a finite number of solutions that are as close as possible to the optimal solutions being sought. However, the neural approach does not ensure convergence deterministically; it has to be sought for by means of a delicate, appropriate characterization of the problem and its surrounding conditions. This paper presents the application of the Hopfield network to the problem of finding the shortest path between two nodes in a graph (Cavalieri, Di Stefano, & Mirabella, 1993), for the solution of which the literature provides examples of polynomial complexity (Dijkstra, 1959; Floyd, 1962). The difficulty of finding an effective definition and weighting for the numerous coefficients associated with the Hopfield network, which are essential to achieve good behaviour, has so far limited its application to optimization problems. The solution to the Travelling Salesman Problem (the determination of an optimal path connecting all the nodes in a graph) proposed by Hopfield and Tank (Hopfield & Tank, 1985) is an interesting example of application of the Hopfield network to an NP-complete optimization problem on graphs. This solution will be taken as a term of comparison with the approach proposed here. The advantage of the neural network is not to be looked for in the computational complexity of the parallel architecture that realizes it. The potential of a neural network, in fact, is assessed in terms of the inherent parallelism and implementability of the nodes and communication by means of analog/digital components that greatly affect the effectiveness of the solution.



So the problem of searching for the optimal path, which presents polynomial complexity, can be conveniently dealt with using a Hopfield-based approach. We will show the advantages the Hopfield solution offers and will point out the validity of the solution proposed, which manages to acquire an "awareness" of the structure of the graph, adapting itself dynamically to variations in its costs.

2. THE HOPFIELD NEURAL NETWORK

The Hopfield network is topologically characterized by the presence of feedback between each pair of neurons (Fig. 1). Below we will give a brief description of the network, specifying, where necessary, the characteristics of the neural model used during the tests carried out. Attention will be focused on the continuous Hopfield network model (Hopfield, 1984). Each feedback is associated with a weight that expresses the influence of one neuron on another, and each neuron is supplied with an external bias current. The feedbacks and bias currents determine a global input for each neuron, which will be indicated as U_x (where x is the index of the generic neuron). It is equal to:

U_x^new = U_x^old + ΔU_x,  where

ΔU_x = Δt · ( Σ_i W_xi · OUT_i + I_x - U_x/τ )

where τ is a user-selected decay constant, Δt is the time step size, OUT_i is the output of the generic neuron i, I_x is the bias current of each neuron, and W_xi is the weight for the connection between neurons x and i. The output of each neuron is then obtained through a transfer function, which is based on the hyperbolic tangent function. The output of the generic neuron x is therefore:

OUT_x = ( 1 + tanh(U_x/u_0) ) / 2

FIGURE 1. Hopfield neural network.

FIGURE 2. An example of the Hopfield output state when node 1 is the source and node 8 is the destination. (Rows index the nodes of the graph and columns the next node on the path; in the 10 × 10 matrix shown, OUT_1,9 = 1 and OUT_9,8 = 1, and all other entries are 0.)

where u_0 is a user-selected parameter that controls the effective steepness of the transfer function. This parameter has been fixed so that the outputs supplied by each neuron will assume values as close as possible to binary ones. Updating of neurons can be either synchronous or asynchronous; we will assume that it is synchronous.

Although nonstable recurrent networks have interesting properties (e.g., they are being studied as models of chaotic systems), in several applications, such as those relating to associative memory or problems of optimization, stability is a necessary condition. Cohen and Grossberg (1983) have shown that a recurrent network is stable if the weights matrix is symmetrical and has only zeros on the main diagonal. Of course this is only a sufficient condition: there are, in fact, stable systems that do not satisfy these conditions. In the case of stable recurrent networks it is possible to verify that the energy function (Lyapunov function) given by

E = -(1/2) · Σ_x Σ_i W_xi · OUT_x · OUT_i - Σ_x I_x · OUT_x

necessarily admits local minima, corresponding to some vertices of the n-dimensional hypercube (where n is the number of neurons) defined by the condition OUT_x = 0 or 1 (Hecht-Nielsen, 1990). This guarantees that when an input vector is applied, the state of the network moves from one vertex to another until a stable state is reached.

The methodology usually adopted to solve a specific optimization problem by means of the Hopfield network (which is the method used by the authors in determining the optimal path between two nodes in a graph) entails constructing an energy function in the form given by the Lyapunov function, starting from the surrounding conditions of the problem itself. In this way the local minima of the energy function determined correspond to its possible solutions. From a comparison between the energy function constructed and the Lyapunov function, it is possible to obtain the parameters of the neural network (the weights of the connections and external bias currents) according to the surrounding conditions imposed. Convergence on a certain state thus corresponds to the determination of a solution to the problem.
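To make these dynamics concrete, the following is a minimal runnable sketch (our illustration, not the authors' code) of the synchronous update rule and the Lyapunov energy above; the weight matrix W, bias vector I, and the constants dt, tau, and u0 are arbitrary placeholder values:

```python
import numpy as np

def out(U, u0=0.02):
    """Transfer function: OUT_x = (1 + tanh(U_x / u0)) / 2, near-binary for small u0."""
    return (1.0 + np.tanh(U / u0)) / 2.0

def hopfield_step(U, W, I, dt=0.01, tau=1.0, u0=0.02):
    """One synchronous update: U_new = U_old + dt * (W @ OUT + I - U / tau)."""
    return U + dt * (W @ out(U, u0) + I - U / tau)

def energy(U, W, I, u0=0.02):
    """Lyapunov function: E = -1/2 * OUT . W . OUT - I . OUT."""
    O = out(U, u0)
    return -0.5 * O @ W @ O - I @ O

# Toy run: 4 neurons, random symmetric weights with a zero main diagonal,
# i.e., the sufficient stability condition of Cohen and Grossberg (1983).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)); W = (W + W.T) / 2.0; np.fill_diagonal(W, 0.0)
I = rng.normal(size=4)
U = rng.normal(size=4)
for _ in range(500):
    U = hopfield_step(U, W, I)
print(out(U).round(2), energy(U, W, I))  # outputs settle near the 0/1 vertices
```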

3. HOPFIELD NET APPROACH FOR OPTIMAL PATH DETERMINATION

The capacity of recurrent stable neural networks to solve optimization problems suggests applying them to the determination of the optimal path between a source node and a destination node in a graph. The solution based on use of the Hopfield network (Hopfield, 1982, 1984) consists of obtaining, from the stabilized output of the neural network, information about the optimal path between two particular nodes in the graph. An alternative approach to this Hopfield solution could be realized by a network that simultaneously provides all the optimal paths between the source node and all the other nodes, or all the optimal paths between each possible pair of nodes in the graph. This approach was immediately discarded because it would have required a much more complex neural network in terms of the number of neurons.

3.1. The Neural Network Model

Indicating the number of nodes in the graph as n, a Hopfield neural network with n² neurons was considered. The neurons were logically subdivided into n groups of n neurons each. Henceforward we will identify each neuron with a double index, xi (where the index x = 1, …, n relates to the group, whereas the index i = 1, …, n refers to the neurons in each group), its output with OUT_xi, the weight for neurons xi and yj with W_xi,yj, and the external bias current for neuron xi with I_xi. According to this convention, the Lyapunov energy function becomes:

E = -(1/2) · Σ_x Σ_i Σ_y Σ_j W_xi,yj · OUT_xi · OUT_yj - Σ_x Σ_i I_xi · OUT_xi    (1)

If the output of the generic neuron xi, OUT_xi, assumes a value of 1, it indicates that in the optimal path determined node x in the graph is connected with node i. The matrix shown in Figure 2 provides a clearer representation of the output state. Each row in the matrix refers to a node in the graph, whereas each column refers to the next node to which the node is connected, according to the optimal path. The example considered, shown by the matrix in Figure 2, is related to the determination of the optimal path connecting node 1 to node 8 in the graph made up of 10 nodes (Fig. 3). As can be seen, rows 1 and 9 of the matrix have a 1 corresponding to columns 9 and 8, respectively. This corresponds to a path connecting node 1 to node 8 through node 9 (Fig. 4).

FIGURE 3. Structure of the graph and the interconnection costs considered.

FIGURE 4. Optimal path between node 1 and node 8.

Optimization of the path by a Hopfield network requires the definition of a suitable energy function, based on the conditions surrounding the problem. Below we will describe these surrounding conditions and the relative energy function terms. The results given will relate to determination of the optimal path connecting the generic node s to the node d.

1. As each node can only be connected to one other, there can be at most one output in each node, that is, every row can have at most one 1.

2. Likewise, as no node can be crossed twice, the number of ones in each column is at most equal to 1.

These two conditions are identical to those formulated in the TSP. This means that the energy function will be characterized by the two terms present in that approach:

(A/2) · Σ_x Σ_i Σ_{j≠i} OUT_xi · OUT_xj + (B/2) · Σ_i Σ_x Σ_{y≠x} OUT_xi · OUT_yi

3. As the source node s must be connected to another one, there will have to be a 1 in row s; the corresponding term in the energy function penalizes any deviation of row s from containing exactly one 1, and has the form:

C · ( Σ_i OUT_si - 1 )²

4. It is necessary to force the destination to be node d. For this purpose it is sufficient for node d to have no outputs (all zeros in row d, as can be seen in the matrix shown in Figure 2, where d = 8); this requirement is met by an energy function term of the form:

D · ( Σ_j OUT_dj )²

5. As the generic ith node cannot be connected to itself, the diagonal formed by the outputs OUT_ii will have to have all zeros (again seen in the matrix, considering i = 1, …, 10). This condition entails the presence of the term:

E · Σ_i Σ_{j≠i} OUT_ii · OUT_jj

6. As the optimal path can only cross the source node s once, it is necessary to set all the elements of the column s to zero (as can be seen in the matrix, where s = 1). This corresponds to introducing a sum of the form:

G · ( Σ_x OUT_xs )²

7. Finally, it is necessary to introduce a term that will take into account the distance to be covered. It should be pointed out that whereas in the TSP this term assumed that all the nodes had to be touched, here the term will have to allow for the exclusion of some nodes from the optimal path. The approach followed to determine this expression is similar to the one used to solve the TSP: the term represents the overall length of each valid path connecting the source and destination nodes. Validity consists of discarding all the closed paths (i.e., all those connecting the source node with two different nodes), all those in which a node is crossed more than once, and those in which a node reconnects with itself (i.e., OUT_ii = 1). The expression of this term thus becomes:

(F/2) · Σ_x Σ_i Σ_j d_xi · OUT_xi · OUT_ij    (x ≠ i, i ≠ s, j ≠ i, j ≠ x)    (2)

together with the term

F · Σ_x d_xd · OUT_xd    (3)

where each term d_xi · OUT_xi · OUT_ij in eqn (2) represents the generic component of each valid path between node s and node d. This term cannot take into account the last weight of the path because OUT_dj = 0 for all j. Term (3) considers the last link of the path.

From a comparison between the Lyapunov function expressed by eqn (1) and the energy function constructed by summing all the terms seen previously, the weights of the neural network were determined according to the optimization problem being considered. The weights matrix is constructed according to the coefficients of the terms OUT_xi · OUT_yj, such as eqn (2). The remaining terms, such as eqn (3), only contribute to the calculation of the bias current of each neuron. The weights matrix thus obtained was asymmetric and had elements other than zero in the main diagonal, and so the stability conditions established by Grossberg and Cohen were not satisfied. From an analysis of the surrounding conditions it emerged that conditions 3, 4, and 6 determined the presence of nonnull elements on the main diagonal, and the distance condition (2) made the matrix asymmetric. Although it was impossible to make any corrections for the first three conditions, because a correct solution to the problem would otherwise be unreachable, it was possible to do so for the distance condition. In fact, by adding the following term, which mirrors eqn (2) with the roles of the two neurons in each product exchanged:

(F/2) · Σ_x Σ_i Σ_j d_xi · OUT_ij · OUT_xi    (x ≠ i, i ≠ s, j ≠ i, j ≠ x)    (4)

the weights matrix is made symmetrical without altering the surrounding condition relating to distance. It should be pointed out that, although the stability condition is not satisfied because of the presence of nonnull elements in the main weights diagonal, in the tests carried out it was found that term (4) made a substantial contribution to the determination of optimal solutions. In any case, the other three conditions did not involve stability problems. The weights conditions, determined on the basis of the comparison between the energy function obtained by summing all the terms seen previously [including the symmetrization term (4)] and the Lyapunov function, are expressed by:

W_xi,yj =
  -A        if x = y and i ≠ j
  -B        if x ≠ y and i = j
  -C        if x = y and x = s and i = j        (5)
  -2C       if x = y and x = s and i ≠ j
  -D        if x = y and x = d and i = j        (6)
  -2D       if x = y and x = d and i ≠ j
  -E        if x ≠ y and x = i and y = j
  -F · d_xy if x ≠ i and i ≠ s and j ≠ i and j ≠ x and y = i
  -F · d_yx if x = j and i ≠ s and j ≠ i and y ≠ j and i ≠ y
  -G        if i = j and i = s and x = y
  -2G       if i = j and i = s and x ≠ y        (7)
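To illustrate how such conditions translate into connection weights, and how a stabilized output state is read back as a path, the following sketch (our own simplification, not the authors' code) implements only the two TSP-style terms -A and -B from the table above, together with the row-following decoding of the output matrix described at the start of Section 3.1; the source, destination, and distance contributions would be added analogously:

```python
import numpy as np

def constraint_weights(n, A=200.0, B=200.0):
    """Only the first two rows of the weight table: -A if x == y and i != j
    (at most one 1 per row), -B if x != y and i == j (at most one 1 per
    column). Neuron (x, i) is flattened to index x * n + i."""
    W = np.zeros((n * n, n * n))
    for x in range(n):
        for i in range(n):
            for y in range(n):
                for j in range(n):
                    if x == y and i != j:
                        W[x * n + i, y * n + j] -= A
                    elif x != y and i == j:
                        W[x * n + i, y * n + j] -= B
    return W

def decode_path(OUT, s, d):
    """Follow the single 1 in each row of the n x n output matrix from
    source s to destination d (0-based); return None for invalid states."""
    path, x = [s], s
    while x != d:
        nxt = np.flatnonzero(OUT[x] > 0.5)
        if len(nxt) != 1 or int(nxt[0]) in path:
            return None             # row without exactly one 1, or a loop
        x = int(nxt[0])
        path.append(x)
    return path

# The Figure 2 example (0-based): path 1 -> 9 -> 8 becomes 0 -> 8 -> 7.
OUT = np.zeros((10, 10)); OUT[0, 8] = 1; OUT[8, 7] = 1
print(decode_path(OUT, s=0, d=7))    # [0, 8, 7]
print(constraint_weights(10).shape)  # (100, 100): n^2 neurons for 10 nodes
```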

As can be seen, the coefficients -C, -D, and -G [terms (5), (6), and (7)] cause the weights matrix to present nonnull elements in the main diagonal.

3.2. Some Notes on the Advantages of Using a Hopfield Network

The massively parallel architecture of the Hopfield network and the hardware simplicity of each of its processing units are two key points in the use of this type of architecture to solve the optimization problem. Assessment of the performance of the Hopfield-based solution must take its potential hardware realization into account. This is the basic point of departure from the criteria for assessing a classical approach, which are based on computational complexity rather than on the speed of execution of the calculation support used. With reference to the optimization algorithms for the search for the minimum path in a graph, it has been shown that their computational complexity is O(N²) (where N is the number of nodes in the graph; Aho, Hopcroft, & Ullman, 1990). In this section we will show that the processing times of a Hopfield network implemented in VLSI are practically independent of its complexity (i.e., the number of nodes in the graph).

In the neural approach proposed, each processing unit making up the Hopfield network is characterized by N² inputs. The operations it has to perform in each iteration are N² multiplications between the inputs and the weights of each connection, and a sum of the previous multiplications. The massively parallel architecture and the hypothesis adopted, according to which activation of all the neurons is synchronous, allow us to state that the processing time in each single iteration coincides with the time taken by each processing unit, that is:

T_i = f(N²)

In other words, the processing time T_i for each single iteration depends on the number of neurons in the Hopfield network. Indicating the number of iterations required for the network to reach full convergence as K, the overall time T necessary for a solution to be reached is equal to:

T = K · T_i

In a VLSI implementation, where each processing node is realized through operational amplifiers (Amit, 1989; Graf & Jackel, 1989), each time value T_i is practically independent of the number of inputs (i.e., N²). In addition, in Aiyer, Niranjan, and Fallside (1990) it is shown that the number of iterations needed for a Hopfield network to reach a stable solution in an optimization problem does not depend on its complexity (i.e., the number of neurons in the network). From these two considerations it can be deduced that the processing times of the solution proposed are practically independent of its complexity, being linked rather to the operating speed of the analog hardware used, the performance of which depends on the technological evolution of VLSI components.
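As a purely illustrative piece of arithmetic (the numbers below are hypothetical, not measurements from the paper), this is what the independence of T = K · T_i from N means in practice:

```python
# Hypothetical values: K iterations to converge, T_i analog settling time per iteration.
K = 100      # assumed independent of network size (Aiyer, Niranjan, & Fallside, 1990)
T_i = 1e-6   # seconds per iteration in an analog VLSI realization (assumed)
for N in (10, 100, 1000):                              # number of graph nodes
    print(f"N = {N:4d}:  T = K * T_i = {K * T_i:.6f} s")  # same T for every N
```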

FIGURE 5. Optimal paths from node 7.

FIGURE 6. Optimal paths from node 10.

4. COMMENTS ON RESULTS OBTAINED

In order to verify the convergence on an optimal solution (or as close as possible to an optimal one) reached by the neural approach presented, several tests were carried out relating to calculation of the optimal path between certain source nodes and the remaining nodes in the graph. In the tests performed, the authors made use of the Anza Plus neural simulator (Anza Plus, 1989), made up of a neural accelerator board plugged into an IBM PC and software management of the board (user interface subroutine library, UISL). The paths that connect each node with the remaining nodes in the graph in Figure 3, characterized by the indicated interconnection costs, were calculated by the Hopfield network shown previously. From the very first tests carried out, a problem immediately emerged that had already been pointed out by Hopfield and Tank (Wasserman, 1989) and then by Van den Bout and Miller (Van den Bout & Miller, 1988).

FIGURE 7. Paths from source node 7 (number of optimal, valid, and wrong paths for each destination node).

TABLE 1
Hopfield Coefficients

Coefficient | Source Node 10 | Source Node 7
A           | 200            | 200
B           | 200            | 200
C           | 200            | 400
D           | 10             | 10
E           | 50             | 10
F           | 250            | 250
G           | 500            | 600

Convergence of the Hopfield network strictly depends on the choice of the coefficients present in the weights expression. In addition, determination of these values is extremely complex because there is no systematic method. In our problem, for all the sources considered, the number of valid and optimal paths varied greatly according to the choice of the coefficients. It was also seen that when the destination node varied with respect to the same source, it was necessary to vary the coefficients even further in order to determine an optimal solution. However, through continuous adjustments to the coefficients it was possible to fix values for each source in such a way as to ensure a high number of valid solutions. The single set of values found for each source also made it possible to make the neural network converge on optimal or almost optimal paths for all destinations. This means that tuning the whole network consists of identifying a suitable set of coefficient values for each source. These values will be assigned to the network separately, as the source changes.

The results obtained are discussed with reference to sources 7 and 10, chosen because of their different positions in the graph shown in Figure 3 (node 10 is in a central position, node 7 is on the edge). These results are then compared with the optimal paths shown in Figures 5 and 6. Ten tests were carried out for each destination. The coefficients tuned for these nodes are shown in Table 1.

Figure 7 shows the results obtained for source 7. As can be seen, for each destination the number of wrong paths is very low. In addition, for most destinations (nodes 1, 2, 4, 5, 6, 8, 10) the percentages of optimal paths are very high. Only for some nodes (nodes 3 and 9) were paths mainly obtained with a global cost close to the optimum, as can be seen in Table 2, which specifies all the paths, indicating their overall cost as compared with the optimal cost.

TABLE 2
Source Node 7: Path Overall Cost Versus Optimal Cost

Node | Valid Path         | Cost | Optimal Cost | Number of Valid Paths
3    | 7-6-10-5-4-3       | 11   | 9            | 10
4    | 7-6-10-5-1-4       | 15   | 11           | 1
4    | 7-6-10-5-1-2-3-4   | 15   | 11           | 3
8    | 7-6-10-8           | 10   | 8            | 1
9    | 7-8-9              | 11   | 10           | 9

FIGURE 8. Paths from source node 10 (number of optimal, valid, and wrong paths for each destination node).

TABLE 3
Source Node 10: Path Overall Cost Versus Optimal Cost

Node | Valid Path     | Cost | Optimal Cost | Number of Valid Paths
1    | 10-5-4-3-2-1   | 17   | 3            | 3
3    | 10-5-1-2-3     | 10   | 6            | 7
6    | 10-7-6         | 6    | 1            | 4
9    | 10-6-7-8-9     | 14   | 7            | 7

TABLE 4
Benchmark Node 7: Path Overall Cost Versus Optimal Cost

Node | Valid Path       | Cost | Optimal Cost | Number of Valid Paths
2    | 7-8-9-2          | 13   | 8            | 1
2    | 7-8-10-1-9-2     | 15   | 8            | 1
3    | 7-8-10-1-4-3     | 12   | 9            | 10
4    | 7-8-10-5-4       | 9    | 8            | 4
5    | 7-8-10-1-5       | 7    | 5            | 8
6    | 7-8-10-1-5-6     | 9    | 3            | 5
9    | 7-8-10-1-2-9     | 12   | 8            | 10

Figure 8 refers to the paths obtained for source node 10, and Table 3 specifies the paths close to the optimal one and their overall cost, comparing it with the optimal cost as before. These results are as good as those obtained with source 7. However, a slightly higher percentage of wrong paths was found. This may be ascribed to a less efficient tuning of the coefficients of the neural network.

The identification, for each source, of a set of coefficients valid for any destination provides the network with an "awareness" of the relations (arcs and costs) between the nodes in a graph. From an applicational point of view this awareness is not particularly useful because it is bound to a specific graph with a fixed number of nodes and costs of the arcs between them. In many applications, although the structure of the graph remains fixed, there can be variations in the costs of the arcs. In packet-switching networks, for instance, the framework of the graph corresponds to the structure of the network (only variable over long periods of time), whereas the costs are linked to the traffic conditions on the lines (which vary continuously). The subsequent investigation aimed at ascertaining whether, once the optimal coefficients had been determined for a fixed source and graph configuration, the network would continue to provide optimal (or close to optimal) results even when the costs of connections between nodes varied. Obviously, variations in costs have to be confined within suitable ranges to prevent variations (between 0 and ∞) from corresponding to modifications in the graph, which is here considered to be fixed. To this purpose, we used node 7 as a benchmark, setting the neural network with the same coefficients that gave the results shown in Figure 7. Variations were made in the costs (the graph with the changed costs is shown in Fig. 9) that radically modified the optimal paths between the pairs of nodes with respect to the previous cost scenario (as seen from a comparison between Figs. 5 and 10 with reference to source node 7). The same sequence of tests as before was carried out. The results are shown in Figure 11 and Table 4. From a comparison with Figure 7 it emerges that, although no further adjustment has been made to the network parameters, the network continues to converge on a high number of optimal paths. This shows that the awareness acquired by the network is only linked to the framework of the graph and is relatively independent of the values of the costs of the arcs, thus greatly increasing the applicability of the approach proposed.
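The cost-variation experiment just described could be scripted along these lines (a sketch under assumed interfaces; run_hopfield, dijkstra_cost, graph.nodes, and graph.cost are hypothetical names, and nothing here is the authors' code):

```python
import random

def cost_variation_test(run_hopfield, dijkstra_cost, graph, source, coeffs,
                        scale_range=(0.5, 2.0), trials=10, seed=0):
    """Scale every arc cost by a bounded random factor (so the framework of the
    graph stays fixed) and count how often the network, with its coefficients
    left unchanged, still converges on a cost-optimal path."""
    rng = random.Random(seed)
    base = dict(graph.cost)                  # hypothetical {(a, b): cost} map
    hits = total = 0
    for _ in range(trials):
        for arc, c in base.items():          # perturb from the original costs
            graph.cost[arc] = c * rng.uniform(*scale_range)
        for dest in graph.nodes:             # hypothetical node list
            if dest == source:
                continue
            total += 1
            path = run_hopfield(graph, source, dest, coeffs)   # hypothetical solver
            if path is not None and \
               path_cost(graph, path) <= dijkstra_cost(graph, source, dest) + 1e-9:
                hits += 1
    return hits / total

def path_cost(graph, path):
    """Sum of the arc costs along a node sequence."""
    return sum(graph.cost[a, b] for a, b in zip(path, path[1:]))
```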

FIGURE 9. Structure of the graph shown in Figure 3 with a variation in the costs.

FIGURE 10. Optimal paths from node 7.

FIGURE 11. Paths from source node 7 with the varied costs (number of optimal, valid, and wrong paths for each destination node).

CONCLUSIONS

In this paper the authors have presented a solution to the shortest path problem in a graph using a Hopfield network. Convergence on optimal solutions is achieved with an acceptable percentage of error. This error is inevitable in approaches of this kind, as has already been pointed out in similar cases (see, for example, the TSP; Hopfield & Tank, 1985), and is due to the high number of local minima the energy function presents. One of the most interesting results obtained during the experimental tests is the network's capacity to acquire an awareness of the graph topology alone, and its ability to adapt itself to variations in the costs of the arcs in the graph.


Once more the tests pointed out the difficulty in tuning the parameters of the Hopfield network, which is a critical aspect of the approach. This difficulty can, however, be overcome by developing suitable automatic instruments that will iteratively verify convergence as the network parameters vary, according to predefined steps. From this point of view the existence of classical algorithms to determine the shortest path may provide valid support in controlling the validity of the paths found, step by step. The convergence capacity dependent on the graph framework alone shown by the Hopfield network, along with the possibility to automate parameter adjustment, suggest taking investigation further, in the attempt to see whether this convergence can be independent of the structure of the graph and only depend on the number of nodes present in it.
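As a sketch of the kind of automatic tuning instrument envisaged here (our illustration only; the paper does not give one), a grid search over the coefficients A..G could be validated step by step against a classical shortest-path algorithm; run_hopfield and dijkstra_cost are the same hypothetical interfaces as in the earlier sketch:

```python
import itertools

def tune_coefficients(run_hopfield, dijkstra_cost, graph, source, grid):
    """Exhaustive grid search over the coefficient sets A..G: keep the set for
    which the network converges on a cost-optimal path for the most
    destinations, checking each path against a classical algorithm."""
    best_score, best_coeffs = -1, None
    keys = sorted(grid)                  # e.g. {"A": [100, 200], ..., "G": [500, 600]}
    for values in itertools.product(*(grid[k] for k in keys)):
        coeffs = dict(zip(keys, values))
        score = 0
        for dest in graph.nodes:         # hypothetical node list
            if dest == source:
                continue
            path = run_hopfield(graph, source, dest, coeffs)   # hypothetical solver
            if path is None:
                continue
            cost = sum(graph.cost[a, b] for a, b in zip(path, path[1:]))
            if cost <= dijkstra_cost(graph, source, dest) + 1e-9:
                score += 1
        if score > best_score:
            best_score, best_coeffs = score, coeffs
    return best_coeffs
```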

REFERENCES

Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1990). Data structures and algorithms (pp. 202-213). Reading, MA: Addison-Wesley.

Aiyer, S. V. B., Niranjan, M., & Fallside, F. (1990). A theoretical investigation into the performance of the Hopfield model. IEEE Transactions on Neural Networks, 1, 204-215.

Amit, D. J. (1989). Modelling brain function (pp. 461-480). Cambridge: Cambridge University Press.

Anza Plus user's guide and neurosoftware documents, Release 2.2 (15 May, 1989).

Cavalieri, S., Di Stefano, A., & Mirabella, O. (1993). Hopfield neural network for routing. International Workshop on Artificial Neural Networks (IWANN '93), Sitges, Spain, June 9-11.

Cohen, M. A., & Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 13, 815-826.

Dijkstra, E. W. (1959). A note on two problems in connexion with graphs. Numerische Mathematik, 1, 269-271.

Floyd, R. W. (1962). Algorithm 97: Shortest path. Communications of the ACM, 5, 345.

Graf, H. P., & Jackel, L. D. (1989). Analog electronic neural network circuits. IEEE Circuits and Devices Magazine, July, 44-55.

Hecht-Nielsen, R. (1990). Neurocomputing (pp. 147-155). Reading, MA: Addison-Wesley.

Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, 2554-2558.

Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81, 3088-3092.

Hopfield, J. J., & Tank, D. W. (1985). "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52, 141-152.

Van den Bout, D. E., & Miller, T. K. (1988). A traveling salesman objective function that works. IEEE Proceedings of the International Conference on Neural Networks, II, 299-304, San Diego, CA.

Wasserman, P. D. (1989). Neural computing--Theory and practice (pp. 106-109). New York: Van Nostrand Reinhold.