Neural computation network for global routing

P-H Shih, K-E Chang* and W-S Feng

Global routing is a crucial step in circuit layout. Under the constraint of the relative positions of circuit blocks enforced by placement, global routing develops an effective plan such that the interconnections of nets can be completed efficiently. This problem has been proven to be NP-complete, and most of the currently available algorithms are heuristic. The paper proposes a new neural-computation-network architecture based on the Hopfield and Tank model for the global-routing problem. This network is constructed using two layers of neurons. One layer is used for minimizing the total path length and distributing interconnecting wires evenly between channels. The other layer is used for channel-capacity enforcement. This network is proven to converge to a stable state. A set of randomly generated testing examples is used to verify the performance of the approach. A reduction in total path length of about 20% is attained by this network.

Keywords: global routing, neural network, Hopfield and Tank model, circuit layout

During the hierarchical physical design of an integrated circuit (IC), a complex circuit is recursively decomposed into components until a manageable small block called a cell is obtained. Then, the smallest cells are designed and connected together. The wires used for connection are called nets, and the endpoints of nets lying on individual cells are called points. Routing is the process of connecting these cells together, which is usually accomplished in three steps:

• Channel definition: partition the areas reserved for routing into channels.
• Global routing: assign the interconnections of each net to the proper channels.
• Detailed routing: physically determine the exact locations of nets within channels.

An overview of these steps of the routing process is shown in Figure 1. This paper discusses the global-routing problem. This problem has been proven to be NP-complete 1.

Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan * Department of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan Paper received: 25 May 1990. Revised: 6 August 1990

volume 23 number 8 october 1991

Several algorithms have been proposed to handle this problem 2-5. These algorithms can be categorized into two groups. The first of these is that of the sequential router 4, which tries to route one net at a time, and, during its routing, tries to find the shortest path possible. The other group is that of the global-view router, which tries to route all the nets simultaneously. Simulated annealing is a typical example 2,3.

In this paper, a new neural-network architecture based on the Hopfield and Tank model is proposed to solve this problem. This network takes all interconnection requirements into consideration at once. The use of the collective computational capability of neural networks to solve computer-aided design problems has been demonstrated to be an effective approach. For example, for the module orientation/rotation problem and for circuit partitioning, satisfactory results have been reported 6,7. Although these results were obtained by software simulation, several other research reports have proven the feasibility of building a neural network with hardware technology 8,9. With the hardware implementation and the inherent parallel structure of the operation of a neural network, a significant gain in speed can be expected.

Figure 1. Three steps of routing; (a) result from placement process, (b) channel definition, (c) global routing, (d) detailed routing

0010-4485/91/080539-09 © 1991 Butterworth-Heinemann Ltd

NEURAL-NETWORK MODEL

The mathematically formal description of neural activity can be traced back to the work of McCulloch and Pitts 10. In their model, a living nerve cell is abstracted as a neuron, and all the neurons are connected together by synapses. These connections construct a neural network. Each neuron receives a weighted sum of the incoming signals (excitatory signals are taken as positive terms, and inhibitory signals as negative), and sends an output. The output is determined by the weighted sum of the input signals and a threshold value. If the sum is higher than the threshold, the output is 1. Otherwise, the output is 0. A characteristic of this model is that the inputs and outputs of neurons are all in binary form; that is, only the values 0 and 1 are considered.

In recent years, Hopfield and Tank have shown that certain neural networks can be used to solve optimization problems, and that their response time is only a few characteristic time constants of the circuits 11,12. After this result, several applications of neural networks were reported 6,11-13. This paper focuses discussion on the neural application of global routing.

The Hopfield and Tank model is composed of several fully interconnected neurons. The input-output relationship of each neuron can be described by a monotonically increasing sigmoid function, and the neuron provides integrative summation of the currents from the connections to other neurons and a connection to an external bias. Specifically, let u_i and v_i be the input and output of a neuron i, respectively. The motion equation of neuron i can then be described as

C_i(du_i/dt) = Σ_j T_ij v_j - u_i/R_i + I_i   (1)

v_i = g_i(u_i)   (2)

where C_i is the input capacitance of neuron i, and R_i is the input resistance resulting from the input resistance of neuron i and the leakage currents to other neurons. T_ij is the effect of the output of neuron j on neuron i, and I_i is an external bias. The state motion of a network can be described by a set of such equations. A characteristic of this model is that the value of each neuron varies continuously; that is, analog computation is used.

A schematic representation of this model is shown in Figure 2. In this representation, neurons are modeled as dual-output operational amplifiers. Note that, if the output of neuron j is excitatory on neuron i, the synapse connection T_ij is made to the normal output of neuron j. This connection is shown as a solid box in the figure. If it is inhibitory, the connection is made to the negated output. This connection is shown as a tinted box in the figure. Hopfield 14 has proven that, if the T matrix is symmetric, the network converges to a stable state in which the outputs of all the neurons are either 1 or 0.


Figure 2. Hopfield and Tank model
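Equations 1 and 2 can be integrated numerically with a simple Euler scheme. The sketch below (plain Python; the two-neuron weights, gain, step size and iteration count are illustrative choices, not values from the paper) shows a pair of mutually inhibitory neurons settling into a 0/1 state:

```python
import math

def step(u, T, I, R=1.0, C=1.0, dt=0.01, gain=5.0):
    """One Euler step of C du_i/dt = sum_j T[i][j]*v_j - u_i/R + I_i (Eq 1),
    with v_i = g(u_i) a sigmoid (Eq 2)."""
    n = len(u)
    v = [1.0 / (1.0 + math.exp(-gain * ui)) for ui in u]
    du = [(sum(T[i][j] * v[j] for j in range(n)) - u[i] / R + I[i]) * dt / C
          for i in range(n)]
    return [u[i] + du[i] for i in range(n)], v

# Two mutually inhibitory neurons with equal bias: a tiny winner-take-all.
T = [[0.0, -2.0], [-2.0, 0.0]]   # symmetric, zero diagonal
I = [1.0, 1.0]
u = [0.05, -0.05]                # small asymmetry picks the winner
for _ in range(2000):
    u, v = step(u, T, I)
print(v)                         # one output near 1, the other near 0
```

The small initial asymmetry in u decides which neuron wins; a perfectly balanced start would leave the symmetric network undecided, which is the same reason the authors later add noise to the initial values.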

Further, it was also proven that, if the diagonal elements of T are all 0, and the gain curve of the neuron is narrow, the stable state of the network is a local minimum of the following equation:

E = -(1/2) Σ_i Σ_j T_ij v_i v_j - Σ_i I_i v_i   (3)

E is called the computation energy of this network. For the solution of an optimization problem by a neural network, the following steps should be considered:

• Step 1: Formally describe the optimization problem using an energy function. That is, define an energy function for the problem such that, as the minimum value of this function is reached, a local optimum of the problem is found.
• Step 2: Use a specific neuron to represent each possible state of the variables of the problem.
• Step 3: Substitute the representation in Step 2 into the function defined in Step 1, and translate the result into the form of Equation 3.
• Step 4: Map the T_ij and I_i in Equation 3 into the Hopfield and Tank model. Implement or simulate this network, and find its stable state.
• Step 5: Depending on the representation defined in Step 2, the local optimum of the problem to be solved is found.
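As a concrete instance of Steps 1-4, consider the toy constraint "exactly one of two binary variables is 1". Writing it as the energy (v1 + v2 - 1)^2, expanding, and using v^2 = v for 0/1 values yields the T and I below. The sketch (Python; the example problem is illustrative, not from the paper) checks that the mapped network energy matches the original one up to a dropped constant:

```python
from itertools import product

# Step 1: energy for "exactly one of v1, v2 is 1": E = (v1 + v2 - 1)^2.
# Steps 3-4: expand, use v^2 = v for binary v, and match against
# E = -1/2 * sum_ij T_ij v_i v_j - sum_i I_i v_i (Eq 3):
T = [[0, -2], [-2, 0]]   # zero diagonal, symmetric
I = [1, 1]

def hopfield_energy(v):
    n = len(v)
    quad = -0.5 * sum(T[i][j] * v[i] * v[j] for i in range(n) for j in range(n))
    return quad - sum(I[i] * v[i] for i in range(n))

for v in product([0, 1], repeat=2):
    # the two energies agree up to the constant +1 dropped in the mapping
    assert hopfield_energy(list(v)) + 1 == (v[0] + v[1] - 1) ** 2
```

The minima of the mapped energy fall exactly on the one-hot states (0, 1) and (1, 0), which is what the constraint demands.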

NETWORK CONFIGURATION FOR GLOBAL-ROUTING PROBLEM

Simplified model

During the discussion, the following simplified model is assumed:

• The relative positions of circuit blocks are assigned, and their physical positions are not fixed.
• Only 2-point connections and L-shaped paths are allowed.
• A grid model is used.

computer-aided design

During the top-down design of a complex circuit, the positions and shapes of building blocks are only roughly estimated. These blocks are ready to be moved, and even to be reshaped, to make more space for routing purposes, or to squeeze out wasted space. On the basis of this design strategy, the first assumption above is reasonable. Any n-point net can be decomposed into n - 1 connections with two points, which are here called 2-point connections. Further, the interconnection of a multipoint net is completed if the connections of its corresponding 2-point connections are determined. Therefore, it is feasible to consider only 2-point connections. As given, there is a set of m multipoint nets denoted as net_1, net_2, ..., net_m. Each net_i has n_i points to be connected. Then, the total number of 2-point connections to be routed is

η = Σ_i (n_i - 1)   (4)

In addition, only L-shaped paths are used, and this means that only one turn of a path is allowed. During the physical layout of a path, different layers are used for wiring the vertical and horizontal paths. For two paths placed at different layers to be connected, a via should be introduced. The occurrence of vias in the layout causes a decrease in the circuit performance and an increase in the routing space. As the turn of a path usually introduces a via, the reduction of the number of turns of a net minimizes the number of vias in the layout.

In the grid model, the points of nets are lumped into the vertices of a mesh grid, and a link connecting two vertices represents the channel between them. The weight of a link is the capacity of the corresponding channel. Then, the routing problem is that of finding a path constructed by links of this mesh grid such that the vertices of a net are connected. For a structured package, this model is reasonable. Figure 3 shows an example. Figure 3a shows a row-based layout, where several nets and their corresponding points are depicted. These nets and points are mapped into the dark circles and heavy line segments in Figure 3b. Let the grid used have N_x by N_y vertices. The total number of links available is

Z = N_x(N_y - 1) + N_y(N_x - 1)   (5)

Figure 3. Building grid model from row-based layout; (a) layout with circuit cells, (b) grid model obtained

Problem definition

The first objective of global routing is to make the path length as short as possible. For the path to be made short, the possibility of path sharing should be taken into consideration. That is, if two connections are electrically at the same potential, they can be made to share some paths, and thereby reduce the total path length. For example, two connections under consideration are shown in Figure 4a. If connections A_1 and A_2 are decomposed from the same net, they can be routed as shown in Figure 4b to share a portion of a path. Mathematically, this objective can be described as follows:

minimize Σ_i (path length of net_i)   (6)

Another objective of global routing is to make the distribution of paths as uniform as possible, and hence reduce the likelihood of overflow. That is, the density, which is the maximal number of horizontal segments crossing the same vertical line, of individual channels should be kept as uniform as possible. Let p_i denote the density of channel i. This objective can be formally described as

minimize Σ_i Σ_j (p_i - p_j)^2   (7)

For the objective to be satisfied, the density of each individual channel should be made as low as possible. This claim is stated as a lemma below. Its proof is in Appendix A.

Lemma 1: A positive constant quantity K is to be decomposed into m nonnegative components denoted as x_1, x_2, ..., x_m. Then Σ_i Σ_j (x_i - x_j)^2 is a minimum if and only if Σ_i x_i^2 is a minimum.

On the basis of the lemma stated above, the second objective in Equation 7 is described as follows:

minimize Σ_i p_i^2   (8)

which also means

minimize p_i, i = 1, 2, ..., Z   (9)
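The algebraic identity behind Lemma 1 (expanded in Appendix A) can be spot-checked numerically. The vector length and values below are arbitrary illustrative choices:

```python
import random

def pairwise_sq(xs):
    # sum_i sum_j (x_i - x_j)^2
    return sum((a - b) ** 2 for a in xs for b in xs)

# Identity from Appendix A: sum_i sum_j (x_i - x_j)^2 = 2*m*sum(x^2) - 2*K^2
random.seed(1)
xs = [random.random() for _ in range(6)]
m, K = len(xs), sum(xs)
lhs = pairwise_sq(xs)
rhs = 2 * m * sum(x * x for x in xs) - 2 * K * K
assert abs(lhs - rhs) < 1e-9

# With K fixed, sum(x^2) (and hence the pairwise spread) is smallest
# when all components are equal:
equal = [K / m] * m
assert sum(x * x for x in equal) <= sum(x * x for x in xs)
```

Since 2K^2 is constant once K is fixed, minimizing the pairwise spread and minimizing the sum of squares are the same problem, which is exactly how the paper replaces Equation 7 with Equation 8.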


For any 2-point connection n, there are two possible shapes that can be taken by it. If it is connected as one of the shapes shown in Figure 5a, it is said to be in State 1. Otherwise, it is said to be in State 2. The vertical connection is considered to be a degenerate case of State 1, and the horizontal connection is considered to be a degenerate case of State 2.

A cost function is used to describe the merit of each individual state of a 2-point connection. That is, the cost function describes how well a state of a 2-point connection satisfies the two objectives in Equations 6 and 9. For two connection states α and β, the cost function A_α,β is defined as follows:

• If the paths taken by α and β are not overlapped, then the cost is

A_α,β = path length of α + path length of β

as there are no relationships between them.
• If the paths taken by α and β are overlapped, then
  ○ if α and β belong electrically to the same net, the cost is

A_α,β = path length of α + path length of β - path length of overlap

as they can share the overlapped path;
  ○ if α and β do not belong electrically to the same net, the cost is

A_α,β = path length of α + path length of β + (path length of overlap)^2

Here, the overlap is squared, to satisfy the objective of Equation 8. With the cost function defined above, the requirements of Equations 6 and 9 give a routing result that is

minimize Σ_i Σ_j A_i,j   (10)

Figure 4. Two electrically connected connections can share a path; (a) two connections under consideration, (b) path chosen to share portion of path

Network configuration

For a 2-point connection n, two neurons denoted as N_n,1 and N_n,2 are introduced to represent its connecting shape. The meaning of this pair of neurons is shown in Figure 5. If the connection is in State 1, the neuron N_n,1 has the value 1. Otherwise, it is in State 2, and the neuron N_n,2 has the value 1. For a layout with η 2-point connections (see Equation 4), 2η neurons are required to represent the possible states of the layout.

Figure 5. Definition of neurons; (a) neuron N_n,1, (b) neuron N_n,2

Using the values of neurons, the requirements of global routing can be described as follows:

• Each connection can take one route only. No duplicate routes are permitted.
• Each connection should be connected.
• The length of all connections should be kept as short as is possible.
• The distribution of connections should be kept as uniform as is possible.

Let

E1 = Σ_n N_n,1 N_n,2   (11)

This equation is equal to 0 if and only if at least one of N_n,1 and N_n,2 is 0. This meets the first requirement described above. Further, define

E2 = (Σ_n Σ_s N_n,s - η)^2   (12)

As this equation reaches its minimum, the number of neurons with value 1 is equal to the number of connections. If each connection takes one route only (the first requirement above), this condition means that each pair of neurons used to represent the state of a connection has exactly one neuron with value 1. This meets the second requirement described above. Finally, define

E3 = Σ_n Σ_{n'≠n} Σ_s Σ_s' N_n,s N_n',s' A_(n,s),(n',s')   (13)

where A_(n,s),(n',s') is a cost function that takes the conditions of path sharing and net distribution into consideration, just as described in the previous section. If E3 is minimized, the minimum value of the cost function is chosen, and the third and fourth requirements listed above are satisfied.

On the basis of the previous discussion, the energy function is

E = A·E1 + B·E2 + C·E3   (14)

where A, B and C are constant weighting factors. Mapping this equation into Equation 3,

T_(n,s),(n',s') = -A δ_nn'(1 - δ_ss') - B - C(1 - δ_nn') A_(n,s),(n',s')   (15)

I_(n,s) = B·η   (16)

where δ_ij is the Kronecker delta, which is defined as

δ_ij = 1 if i = j, 0 otherwise

The T matrix and the external bias I being known, the neural-network configuration for the global-routing problem can be constructed. The result is shown in Figure 6 (ignore the portion of the network surrounded by the tinted area, which is discussed in the next section).

Figure 6. Network for global routing
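Equations 15 and 16 can be assembled mechanically once the pairwise costs are known. The sketch below builds T and I for a toy instance (Python; η, the weighting factors and the pairwise costs A_(n,s),(n',s') are made-up illustrative values, not derived from a real layout):

```python
import random

eta = 3                                  # three 2-point connections
A_w, B_w, C_w = 100.0, 100.0, 10.0       # weighting factors A, B, C of Eq 14
idx = [(n, s) for n in range(eta) for s in (1, 2)]

# symmetric made-up interaction costs A_(n,s),(n',s') between different connections
random.seed(0)
cost = {}
for a in idx:
    for b in idx:
        if a[0] != b[0]:
            key = tuple(sorted((a, b)))
            if key not in cost:
                cost[key] = random.uniform(1.0, 5.0)

def delta(p, q):                         # Kronecker delta
    return 1.0 if p == q else 0.0

# Eq 15: T = -A*d_nn'(1 - d_ss') - B - C*(1 - d_nn')*A_(n,s),(n',s')
T = {}
for (n, s) in idx:
    for (n2, s2) in idx:
        a_cost = cost.get(tuple(sorted(((n, s), (n2, s2)))), 0.0)
        T[(n, s), (n2, s2)] = (-A_w * delta(n, n2) * (1 - delta(s, s2))
                               - B_w
                               - C_w * (1 - delta(n, n2)) * a_cost)

# Eq 16: I_(n,s) = B * eta
I = {p: B_w * eta for p in idx}

assert all(T[p, q] == T[q, p] for p in idx for q in idx)
```

The symmetry check at the end matters: convergence of the Hopfield and Tank model relies on a symmetric T matrix, and both the delta terms and the sorted-key cost lookup guarantee it here.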

Channel overflow

One fundamental requirement of routing is that the channel density should be lower than, or, at most, equal to, the channel capacity, which is defined as the highest allowable channel density along the channel. Let p_i^max denote the capacity of channel i. Formally, this constraint can be stated as

Σ_n Σ_s σ(n,s,i) ≤ p_i^max   (17)

for all i, where σ is a function that is defined as follows:

σ(n,s,i) = 1 if connection n with State s passes through channel i, 0 otherwise   (18)

Before the discussion, a new terminology is introduced. Neurons that have the same function, and play the same role, in the network can be grouped together to form a structure called a layer. For those networks with multiple layers, the layers are named, from the input of the network to the output, as the first layer, the second layer, and so on. The network obtained in the last section is a single-layer network. For simplicity of notation, the outputs of the first-layer neurons are numbered as v_1, v_2, ..., v_2η, from left to right. With this notation, the σ function in Equation 18 can be restated as

σ(j,i) = 1 if the connection and state represented by v_j passes through channel i, 0 otherwise   (19)

An additional layer of neurons is used to enforce the channel-capacity constraint. As shown in Figure 6, the portion of the network surrounded by the tinted area is used for this purpose. Here, each channel is assigned a neuron to record its status. In total, Z neurons (see Equation 5) are added in this layer, whose outputs are denoted as y_1, y_2, ..., y_Z. In the network, the 2η outputs from the first layer of neurons are fed into the second layer of neurons. The connection matrix S between the first layer and the second layer is defined by the σ function described above. That is,

S_ij = -σ(j,i)   i = 1, ..., Z; j = 1, ..., 2η   (20)
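The σ function of Equations 18-20 can be made concrete by enumerating the channels an L-shaped route crosses. The helper below is a hypothetical sketch (Python; the vertex coordinates, the link representation and the helper name `channels_of` are assumptions for illustration, not the paper's data structures):

```python
def channels_of(p, q, state):
    """Channels (unit grid links) traversed by the L-shaped route from p to q.
    State 1 goes vertical-then-horizontal; State 2 horizontal-then-vertical."""
    (x0, y0), (x1, y1) = p, q
    corner = (x0, y1) if state == 1 else (x1, y0)
    links = []
    for a, b in ((p, corner), (corner, q)):
        (ax, ay), (bx, by) = a, b
        if ax == bx:                       # vertical run
            for y in range(min(ay, by), max(ay, by)):
                links.append(((ax, y), (ax, y + 1)))
        else:                              # horizontal run
            for x in range(min(ax, bx), max(ax, bx)):
                links.append(((x, ay), (x + 1, ay)))
    return links

# one 2-point connection from (0,0) to (2,1); collect every channel either state uses
conns = [((0, 0), (2, 1))]
all_links = sorted({l for (p, q) in conns for s in (1, 2) for l in channels_of(p, q, s)})
link_index = {l: i for i, l in enumerate(all_links)}

# Eq 20: S_ij = -sigma(j, i); rows = channels, columns = (connection, state) pairs
cols = [(n, s) for n in range(len(conns)) for s in (1, 2)]
S = [[0] * len(cols) for _ in all_links]
for j, (n, s) in enumerate(cols):
    for l in channels_of(*conns[n], s):
        S[link_index[l]][j] = -1
```

Each column of S then has one -1 entry per channel that the corresponding connection state occupies, so the matrix-vector product in Equation 21 subtracts exactly the demanded channel usage from each capacity.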


Additionally, the given channel capacity is fed into each corresponding neuron in the second layer as an external bias. Then, the input to neuron i in the second layer is

z_i = p_i^max + Σ_j S_ij v_j   (21)

For the requirement of Equation 17 to be satisfied, Equation 21 should be kept nonnegative. A penalty function is used to enforce this condition; it serves as the transfer function of the neurons in the second layer. The input-output relationship of the second layer of neurons is nonlinear, and is characterized by the following function:

f(z) = 0 if z ≥ 0, D·z otherwise   (22)

where D is a constant weighting factor. The outputs of the second layer of neurons are fed back to the inputs of the first layer, and the connection matrix there is the same as the S matrix. The feedback from the second layer of neurons being taken into consideration, the motion equation of the first layer of neurons (see Equations 1 and 2) is written as

C_i(du_i/dt) = Σ_j T_ij v_j - u_i/R_i + I_i + Σ_j S_ji y_j   (23)

v_i = g_i(u_i)   (24)

y_i = f(z_i)   (25)

Under normal conditions, the inner loop of this network tries to find a solution for the global-routing problem that meets the requirements of minimum total path length and uniform distribution of interconnecting paths. At the same time, the neurons in the second layer are kept inactive. That is, y_i = 0 for all i, and the last term in Equation 23 vanishes. However, if any channel i overflows, i.e. y_i < 0, its corresponding neuron in the second layer sends an inhibitory signal through the outer loop to those connection states that cause this overflow, and forces them to find other ways. This means that the last term of Equation 23 has a negative effect on the movement of neuron i, and makes it converge more slowly, or even change its direction of convergence. This signal is kept active until the problem of channel overflow is corrected. Further, the strength of this signal is proportional to the degree to which the channel-capacity constraint is violated.

This network converges to a stable state. This is the statement of the following theorem. The proof is in Appendix B.

Theorem 1: The network constructed by the T and S matrices illustrated in Figure 6 converges to a stable state.

SIMULATION RESULTS

A software simulation has been implemented to verify the performance of this network. The authors' implementation includes a preprocessor and a Hopfield and Tank model simulator. All the programs are implemented in C, and run on a Sun 386i workstation.

In the preprocessor, a random-number generator is used to determine the number of points in each net, and their corresponding point positions. Meanwhile, a minimum-spanning-tree algorithm is used to decompose a multipoint net into 2-point connections. Then, the cost function described in the problem-definition section is set up on the basis of the 2-point connections obtained. Finally, Equations 15 and 20 are used to build up the connection matrices T and S, respectively.

A direct translation of Equations 23-25 is used for network simulation. That is, the simulation loop can be described as follows.

Simulation loop

Set initial values of u_i, v_i, z_i and y_i
Repeat
  For each neuron i in the first layer, i = 1, 2, ..., 2η
  begin
    du_i = (Σ_j S_ji y_j + Σ_j T_ij v_j - u_i/R_i + I_i) dt/C_i
    u_i = u_i + du_i
  end
  For each neuron i in the first layer, i = 1, 2, ..., 2η
  begin
    dv_i = g_i(u_i) - v_i
    v_i = v_i + dv_i
  end
  For each neuron i in the second layer, i = 1, 2, ..., Z
  begin
    z_i = p_i^max + Σ_j S_ij v_j
    y_i = f(z_i)
  end
  Increase the gain of neurons by a small factor
until |dv_i| < ε, for all i

where ε is a very small positive constant factor used for testing the convergence of the network.

Several parameters should be determined during the simulation. The first set of these are the constant weighting factors A, B and C in Equations 15 and 16, and D in Equation 22. The setting of these parameters is not trivial. The proper values for constants A and B are first set, and then an automatic binary-search tool is used to determine the values of C and D. The result shown in Figure 7 is obtained by the use of the following values:

A = 100
B = 100
C = 10
D = 50
dt = 0.001
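A direct Python transcription of the simulation loop above is sketched below. The sizes, T, S, I and channel capacities are arbitrary toy values (the paper builds them from Equations 15, 16 and 20), and a fixed iteration cap replaces open-ended looping:

```python
import math
import random

random.seed(2)
n1, n2 = 6, 4                         # toy sizes: 2*eta first-layer, Z second-layer neurons
T = [[0.0 if i == j else -1.0 for j in range(n1)] for i in range(n1)]
S = [[-random.choice([0, 1]) for _ in range(n1)] for _ in range(n2)]
I = [2.0] * n1
p_max = [2.0] * n2
R = C = 1.0
dt, D, eps = 0.01, 50.0, 1e-5
gain = 1.0

def g(u_val, G):                      # transfer function g4: arctan(u*G)/pi + 0.5
    return math.atan(u_val * G) / math.pi + 0.5

def f(z):                             # penalty transfer function, Eq 22
    return 0.0 if z >= 0 else D * z

v = [0.5 + (random.randrange(11) - 5) / 100 for _ in range(n1)]   # Eq 26
u = [math.tan((vi - 0.5) * math.pi) / gain for vi in v]           # u = g^-1(v)
y = [0.0] * n2

for _ in range(20000):
    for i in range(n1):               # Euler step of Eq 23
        du = (sum(S[j][i] * y[j] for j in range(n2))
              + sum(T[i][j] * v[j] for j in range(n1))
              - u[i] / R + I[i]) * dt / C
        u[i] += du
    dv = [g(u[i], gain) - v[i] for i in range(n1)]                # Eq 24
    for i in range(n1):
        v[i] += dv[i]
    for i in range(n2):               # Eqs 21 and 25
        z = p_max[i] + sum(S[i][j] * v[j] for j in range(n1))
        y[i] = f(z)
    gain *= 1.001                     # increase the gain by a small factor
    if max(abs(d) for d in dv) < eps:
        break
```

Because each v_i is reset to g(u_i) every pass, the first-layer outputs always stay in (0, 1); the slowly rising gain then sharpens them toward binary values, mirroring the convergence test of the loop above.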

Figure 7. Channel density of Example 1; (a) randomly generated, (b) after processing


As this network converges to a stable state, half of the neurons in the first layer have the value 1, and the other half have the value 0. Hence, the initial values of the neurons in the first layer are assigned to be 0.5. However, a fully balanced network does not converge in any direction. Therefore, a small noise is added to the initial values to break this balance. These values are assigned by

v_i = 0.5 + (random(11) - 5)/100   i = 1, 2, ..., 2η   (26)

where random(i) is a random-number generator whose results fall into the range [0, i - 1]. Meanwhile, the initial value of z_i is set at p_i^max, and the initial value of y_i is set at 0, for i = 1, 2, ..., Z.

Four options for the g_i function are used in this simulation:

• g_1(u) = 1 if u > 0, 0 otherwise
• g_2(u) = 1/(1 + e^(-u·G))
• g_3(u) = (e^(u·G) - e^(-u·G))/(e^(u·G) + e^(-u·G))
• g_4(u) = arctan(u·G)/π + 0.5
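The four transfer-function options can be written directly (Python sketch; the gain value is an illustrative choice):

```python
import math

G = 5.0   # neuron gain (illustrative value)

g1 = lambda u: 1.0 if u > 0 else 0.0         # hard threshold
g2 = lambda u: 1.0 / (1.0 + math.exp(-u * G))                # sigmoid
g3 = lambda u: ((math.exp(u * G) - math.exp(-u * G))
                / (math.exp(u * G) + math.exp(-u * G)))      # tanh form
g4 = lambda u: math.atan(u * G) / math.pi + 0.5              # arctan form

# g2 and g4 map into (0, 1); g3 maps into (-1, 1); all are monotonic in u
for u in (-1.0, -0.1, 0.0, 0.1, 1.0):
    assert 0.0 <= g2(u) <= 1.0 and 0.0 <= g4(u) <= 1.0
    assert -1.0 <= g3(u) <= 1.0
```

The exponential forms g_2 and g_3 overflow for large u·G, which matches the authors' reason for preferring the bounded arctan form g_4 in the reported results.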

G is the gain of the neurons. The function g_1 provides a fast, but mostly infeasible, solution. In some instances, it even drives the system into oscillation. The solutions obtained by g_2 and g_3 are almost the same. However, as an exponential function is used in these functions, the system will be made to overflow if the value of u·G is too large. The results presented here are obtained by using g_4.

A 10 × 10 grid is used for the authors' simulation. The results are shown in Table 1. Within the table, the fields labeled 'length' refer to the total path length, and the fields labeled 'highest density' refer to the highest density found in all the links. The values of the fields labeled 'initial' are randomly generated, and those labeled 'after processing' are obtained after the processing of the network. Example 1 is also shown in Figure 7. In this figure, the line width of a link is proportional to its corresponding channel density. From this figure, it can be seen that the distribution of the wires is made much more uniform. Further, it is interesting to observe that the network results are strongly example-dependent. That is, the percentage of the path-length reduction and the reduction in the highest channel density are not related to the number of nets.

CONCLUSIONS

As the simulation was implemented in software, the computation time was long. However, it was proven that the Hopfield and Tank model can be implemented in a VLSI chip. If it is implemented in hardware, the speed is improved significantly. Further, a VLSI technique was developed to implement programmable interconnection matrices in a neural network. By the use of programmable interconnection matrices, a generalized network can be built, and the connections can be adapted according to the problem in hand.

The current applications of neural networks to the solution of optimization problems are implemented individually, and mostly by observation. That is, a method used to transfer one problem into a neural network is not suitable for the transference of another problem. Thus, an interesting problem is that of finding, if one exists, a systematic method for this work. Further, the network described here can be used to obtain a quick and rough solution, and then other methods can be used to refine this solution in certain smaller regions.

One drawback of the Hopfield and Tank model is that it can find local optima only. That is, the solutions found are not guaranteed to be the best solutions. However, for most practical problems, the finding of a good solution in a reasonable time is more important than the finding of the best solution in an inestimable time. In recent years, a general scheme for finding global optima and its combination with a neural network has been proposed 16. The architecture described here is still applicable to this new model. The only modification is the input-output relationship of the first layer of neurons.

Neural networks are attracting widespread attention as a potential candidate for the next generation of computers. In this paper, a 2-layered neural network is applied to the solution of an optimization problem in computer-aided design. The convergence of this network is proven, and the performance of the network is verified. The result is quite encouraging.

Table 1. Experimental results

Example   n-p*   2-p*   Initial length   Initial highest density   Length after processing   Reduction %   Highest density after processing
1         20     36     116              10                        82                        29            4
2         19     34     118              11                        90                        24            5
3         18     33     103              9                         92                        11            4
4         17     30     95               9                         85                        11            4
5         25     42     124              10                        100                       19            5
6         28     48     136              14                        112                       18            8
7         30     54     148              16                        124                       16            10
8         25     40     122              13                        101                       17            8
9         18     32     105              10                        85                        19            6
10        15     25     86               7                         67                        22            3
11        40     65     183              18                        146                       20            12
12        48     73     214              23                        165                       23            16

*The n-p field represents the number of multipoint nets. The 2-p field represents the number of 2-point connections.

REFERENCES

1 Garey, M and Johnson, D Computers and Intractability: A Guide to the Theory of NP-Completeness Freeman, USA (1979)
2 Vecchi, M P and Kirkpatrick, S 'Global wiring by simulated annealing' IEEE Trans. Comput.-Aided Des. Vol 2 No 4 (1983) pp 215-222
3 Sechen, C VLSI Placement and Global Routing Using Simulated Annealing Kluwer, USA (1988)
4 Kuh, E S and Marek-Sadowska, M 'Global routing' in Ohtsuki, T (Ed.) Layout Design and Verification Elsevier (1986) pp 169-199
5 Karp, R M, Leighton, R L, Rivest, R, Thompson, C D, Vazirani, U and Vazirani, V 'Global wire routing in two dimensional arrays' Ann. Symp. Foundations of Computer Science Vol 24 (1983) pp 453-459
6 Libeskind-Hadas, R and Liu, C L 'Solutions to the module orientation and rotation problems by neural computation networks' Proc. 26th Design Automation Conf. (1988) pp 400-405
7 Yih, J S and Mazumder, P 'A neural network design for circuit partitioning' Proc. 26th Design Automation Conf. (1988) pp 406-411
8 Mead, C Analog VLSI and Neural Systems Addison-Wesley, USA (1989)
9 Graf, H P, Jackel, L D and Hubbard, W E 'VLSI implementation of a neural network model' IEEE Comput. Vol 21 No 3 (1988) pp 41-49
10 McCulloch, W S and Pitts, W H 'A logical calculus of ideas immanent in nervous activity' Bull. Math. Biol. (1943) pp 115-133
11 Tank, D W and Hopfield, J J 'Simple "neural" optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit' IEEE Trans. Circuits & Syst. Vol CAS-33 No 5 (1986) pp 533-541
12 Hopfield, J J and Tank, D W '"Neural" computation of decisions in optimization problems' Biol. Cybern. Vol 52 (1985) pp 141-152
13 Shih, P H and Feng, W S 'Neural computation for global routing' Proc. IASTED Int. Symp. Modeling, Simulation and Optimization (1990)
14 Hopfield, J J 'Neurons with graded response have collective computational properties like those of two-state neurons' Proc. Nat. Acad. Sci. USA Vol 81 (1984) pp 3088-3092
15 Deo, N Graph Theory with Applications to Engineering and Computer Science Prentice-Hall, USA (1974)
16 Aarts, E and Korst, J Simulated Annealing and Boltzmann Machines John Wiley (1989)

BIBLIOGRAPHY

Dijkstra, E 'A note on two problems in connection with graphs' Numerische Mathematik Vol 1 (1959) pp 269-271

APPENDIX A

Proof of Lemma 1

Expanding the double sum,

Σ_i Σ_j (x_i - x_j)^2 = Σ_i Σ_j x_i^2 + Σ_i Σ_j x_j^2 - 2 Σ_i x_i Σ_j x_j
                      = m Σ_i x_i^2 + m Σ_j x_j^2 - 2 (Σ_i x_i)(Σ_j x_j)
                      = 2m Σ_i x_i^2 - 2K^2

As 2K^2 is a constant, Σ_i Σ_j (x_i - x_j)^2 is a minimum if and only if Σ_i x_i^2 is a minimum.

To locate this minimum, let

Q = Σ_{i=1}^{m-1} x_i^2 + x_m^2

where x_m = K - Σ_{j=1}^{m-1} x_j. Then,

∂Q/∂x_1 = 2x_1 + 2(K - Σ_{j=1}^{m-1} x_j)(-1) = 2(x_1 - x_m)

By the same procedure,

∂Q/∂x_2 = 2(x_2 - x_m)
...
∂Q/∂x_{m-1} = 2(x_{m-1} - x_m)

From the derivatives shown above, it is known that the extremum of Q occurs under the condition that

x_1 - x_m = 0, x_2 - x_m = 0, ..., x_{m-1} - x_m = 0

That is, x_1 = x_2 = ... = x_m. Further,

∂^2 Q/∂x_i^2 = 2 > 0, i = 1, ..., m - 1

The conclusion is that the extremum found is a minimum.

APPENDIX B

Proof of Theorem 1

For the proof that a network can converge, an energy function E must be found for the network, and then it must be proven that any state change of the network decreases E. As E reaches a minimum, the network does not change further, and its convergence is proven. Consider the following energy function:

E = -Σ_i I_i v_i + Σ_i (1/R_i) ∫ g_i^{-1}(v) dv - (1/2) Σ_{i,j} T_ij v_i v_j - Σ_i F(z_i)   (27)

where f(z_i) = dF(z_i)/dz_i, and z_i = p_i^max + Σ_j S_ij v_j. Equation 27 can be rewritten as

E = -Σ_i I_i v_i + Σ_i (1/R_i) ∫_0^{v_i} g_i^{-1}(v) dv - (1/2) Σ_{i,j} T_ij v_i v_j - Σ_i F(z_i)   (28)

Then, the time derivative of Equation 28 is

dE/dt = -Σ_i I_i dv_i/dt + Σ_i (u_i/R_i) dv_i/dt - Σ_i Σ_j T_ij v_j dv_i/dt - Σ_i f(z_i) Σ_j S_ij dv_j/dt
      = -Σ_i (dv_i/dt)(I_i - u_i/R_i + Σ_j T_ij v_j + Σ_j y_j S_ji)   (29)

Substituting Equation 23 into the bracketed term above,

dE/dt = -Σ_i (dv_i/dt) C_i (du_i/dt)
      = -Σ_i C_i (dv_i/dt)^2 dg_i^{-1}(v_i)/dv_i

As C_i is positive, and g_i^{-1}(v_i) is a monotonically increasing function, each term of the final sum is nonnegative. This means that

dE/dt ≤ 0

and dE/dt = 0 implies that dv_i/dt = 0, for all i. Thus, any change in E due to a state change is always in the negative direction. Eventually, E must reach a minimum, and stop. Thus, the iteration of the network must lead to a stable state.