Neurocomputing 14 (1997) 209-222
Chaotic neural network with nonlinear self-feedback and its application in optimization

Zhou Chang-song a,*,1, Chen Tian-lun a,b,c, Huang Wu-qun a

a Department of Physics, Nankai University, Tianjin 300071, China
b CCAST (World Laboratory), Beijing 100080, China
c Institute of Theoretical Physics, Academia Sinica, P.O. Box 2735, Beijing 100080, China

* Corresponding author. 1 The project is supported by the National Basic Research Project 'Nonlinear Science' and the National Nature Science Foundation of China.

Received 7 July 1995; accepted 31 January 1996
Abstract

When a special nonlinear self-feedback is introduced into the Hopfield model, the network becomes a chaotic one. Chaotic dynamics of the system can prevent its state from staying at local minima of the energy indefinitely. The system then gains the ability to transfer chaotically among local minima, which can be employed to solve optimization problems. With autonomous adjustment of the parameters, the system can reach the global optimal solution, exactly or approximately, through transient chaos. Simulations on the Travelling Salesman Problem (TSP) have shown that the proposed chaotic neural network can converge to the global minimum or its approximate solutions more efficiently than the Hopfield network.

Keywords: Chaos; Nonlinear self-feedback; Optimization; Local minima
1. Introduction

After Hopfield and Tank's work in 1985 [4], the Hopfield model has been extensively applied in optimization. The advantage of this approach to optimization is that it exploits the massive parallelism of neural networks. However, the results obtained are not so satisfying [9]. The reason is that the Hopfield neural network is a stable system with a gradient-descent mechanism. It has many local minima, and no scheme to escape a local minimum once it is trapped there. If one can limit the number of local
minima or introduce some mechanism to help the system escape commonplace local minima, one can expect a better chance of converging to the optimal solution or its approximation. In this paper, we introduce a special nonlinear self-feedback into the Hopfield network to construct a chaotic neural network model, and use this model to solve the Travelling Salesman Problem (TSP). The model is described in Section 2. In Section 3 we give the energy function of the TSP. The dynamical properties of the model are discussed in Section 4. In Section 5, a method of autonomous control of the parameters is introduced to improve the optimization performance with transient chaos. The results are shown in Section 6 and discussed in Section 7.
2. Chaotic neural network model
The energy function E(V) of an optimization problem, like that of the Hopfield model, can always be written as

$$E(V) = -\tfrac{1}{2}\sum_{i,j} W_{ij} V_i V_j - \sum_i I_i V_i. \tag{1}$$

The dynamical equations of the time-discrete Hopfield neural network are described as

$$U_i(t+1) = \sum_j W_{ij} V_j(t) + I_i, \tag{2}$$

$$V_i(t+1) = f\big(U_i(t+1)\big), \tag{3}$$
where U_i and V_i are the local field and the state of the ith neuron respectively; W_ij is the synaptic connection and I_i is the external input; f is a sigmoid transfer function. It can be proved that the system converges to stable states, which are the minima of the energy, as long as the connection matrix W is symmetric and positive definite. In order to obtain a network with richer dynamics, we introduce a nonlinear self-feedback term into the network, and Eq. (2) becomes
$$U_i(t+1) = \sum_j W_{ij} V_j(t) + I_i + g\big(U_i(t) - U_i(t-1)\big). \tag{4}$$
The self-feedback term in Eq. (4) is switched on at t = 2. The form of the nonlinear function g(x) is chosen under the following considerations. (1) It should not change the fixed points of Eqs. (2) and (3); this demands that g(0) = 0. The stability of these fixed points, however, may be changed. (2) δ = |U_i(t) - U_i(t-1)| may be viewed as the speed at which the system approaches a fixed point; a large δ means the system is not near a fixed point. To make sure that the system retains the tendency to move towards a fixed point, g(δ) should be small enough at large δ. (3) At intermediate values of δ, the system drops into some region around a fixed point; the last term of Eq. (4) is expected to make the system stay in this region for some period of time, then jump out and move towards another region around another fixed point.
Fig. 1. A plot of g(x) with p1 = p2 = 5.
In this paper, g(x) is taken as

$$g(x) = p_1\, x \exp(-p_2 |x|), \tag{5}$$

where p1 and p2 are adjustable parameters. Fig. 1 shows a plot of g(x) with p1 = 5 and p2 = 2, where g_m = g(R) = p1/(e p2) and R = 1/p2. Noting that the self-feedback is a nonlinear response to δU_i = U_i(t) - U_i(t-1) rather than to δV_i = V_i(t) - V_i(t-1), the saturation of the transfer function f can be exploited, for a large δU_i can only lead to a small δV_i in the saturated parts of f. The properties of g(x) enable the system to reach the states corresponding to the minima of E, but not to stay at them indefinitely as in the Hopfield model. The dynamical behavior of the model is discussed in Section 4.
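As a concrete illustration, the following Python sketch implements the self-feedback function of Eq. (5) and one iteration of Eqs. (3) and (4); the weight matrix W, the inputs I and the gain β are placeholders to be derived from a particular energy function, and the sigmoid f(u) = [1 + tanh(βu)]/2 anticipates Eq. (13) below. This is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def g(x, p1, p2):
    # Nonlinear self-feedback of Eq. (5): g(0) = 0, |g| peaks at |x| = 1/p2
    # and decays towards zero for large |x|, as required in Section 2.
    return p1 * x * np.exp(-p2 * np.abs(x))

def step(U, U_prev, V, W, I, p1, p2, beta):
    # One iteration of Eqs. (4) and (3) with f(u) = (1 + tanh(beta * u)) / 2.
    U_new = W @ V + I + g(U - U_prev, p1, p2)
    V_new = 0.5 * (1.0 + np.tanh(beta * U_new))
    return U_new, V_new
```

Iterating step() while remembering the previous local field U_prev reproduces the dynamics studied in Section 4.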
3. The travelling salesman problem
Suppose that N cities lie within the unit square. L(i, j) is the link between the ith and the jth city (L(i, j) and L(j, i) are considered the same), and D_ij is the distance between them. We use the following coding scheme [10]: each neuron V_ij (i < j) corresponds to a link L(i, j), and L(i, j) is taken in the solution if V_ij = 1, while it is not if V_ij = 0. This coding scheme allows us to limit the links with some heuristic algorithm, which reduces the number of neurons. In this work, we consider the nearest-neighbor links between cities on a random triangle lattice [2,3].

The triangle lattice is constructed as follows. We start with an arbitrary city and find its nearest neighbor. These two points are linked. When a link is given, the next step is to find a triangle taking this link as one of its sides. We note that each link may belong to two triangles, one on each side, and if a side is chosen, the task is to locate the third point of the triangle on that side. The method is to draw a family of arcs of circles, each passing through the two endpoints of the link. As the arc is enlarged, the first city it sweeps through is the desired point. This procedure goes on till no more triangles can be found, resulting in a graph with convex boundary.

Fig. 2. (a) The links of a N = 10 problem under the nearest restriction; (b) the optimal tour, whose length is 2.685.

Links on the triangle lattice have no crossings with each other, and are among those of shortest length. For randomly distributed cities and large N, a city has an average of 6 neighbors, which means that the number of neurons is about 3N, compared with N(N-1)/2 without the above restriction, or N^2 for the coding scheme proposed by Hopfield and Tank [4]. There is also no crossing in the tours under this restriction. Actually, the restriction constructs a subspace of solutions around the optimal one. Our aim in this work, however, is not to show the advantage of the restriction; a detailed study of the coding method as well as the restriction will appear elsewhere.

For a problem of N = 10 (the same as in [4]), the 20 links are shown in Fig. 2(a). For another problem of N = 30, the links are generated by a computer program, in which the random lattice satisfies periodic boundary conditions in both directions. Some links break down when a period is cut out, so the resulting graph no longer has a convex boundary, as illustrated in Fig. 3(a). The number of links is reduced from 435 to 68. Some links could be added to the graph to make it convex; this is not done here, because we note that such links have long distances. Two classes of assemblies are defined for the convenience of the discussion:

$$A_i^+ = \{\, j \mid L(i,j) \text{ exists, and } j > i \,\}, \tag{6}$$

$$A_i^- = \{\, j \mid L(i,j) \text{ exists, and } j < i \,\}. \tag{7}$$
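The arc-growing rule above selects, on each side of a link, the city whose circumscribed circle through the link's endpoints is smallest, which appears to be exactly the defining criterion of a Delaunay triangulation. Under that assumption, the link set can be sketched in Python with scipy (the nearest-neighbor seeding step is subsumed by the triangulation); this is an illustrative shortcut, not the authors' construction.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_lattice_links(cities):
    # cities: (N, 2) array of coordinates in the unit square.
    # Returns the lattice links as sorted pairs (i, j) with i < j,
    # matching the V_ij coding scheme of Section 3.
    tri = Delaunay(cities)
    links = set()
    for a, b, c in tri.simplices:            # each simplex is one triangle
        for i, j in ((a, b), (b, c), (c, a)):
            links.add((min(i, j), max(i, j)))
    return sorted(links)

# Example: for large N the average degree is about 6, i.e. roughly 3N links.
links = triangle_lattice_links(np.random.rand(30, 2))
print(len(links), "links for 30 cities")
```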
With the above coding scheme and restriction, a possible energy function is

$$E = E_1 + A E_2 + B E_3, \tag{8}$$
Fig. 3. (a) The links of a N = 30 problem under the nearest restriction; (b) a tour of D = 4.612.
where

$$E_1 = \sum_{i=1}^{N-1} \Big( \sum_{j \in A_i^+} V_{ij} + \sum_{j \in A_i^-} V_{ji} - 2 \Big)^2, \tag{9}$$

$$E_2 = \sum_{i=1}^{N-1} \sum_{j \in A_i^+} V_{ij} D_{ij}, \tag{10}$$

$$E_3 = \sum_{m=1}^{M} V_m (1 - V_m), \tag{11}$$
and M is the number of neurons. E_1 requires that each city have two neighbors in the solution; E_2 is the cost (length) of the solution; and E_3 guarantees that V_ij = 0 or 1. However, E_1 = 0 and E_3 = 0 cannot guarantee that the solutions are feasible tours (Hamilton cycles): a subtour solution consisting of several cycles also reaches E_1 = E_3 = 0. This is an essential difficulty of this coding scheme, and there is no simple and practical constraint against it that is easy to express in a neural network. So it is not easy for a neural network with the above energy function to reach a feasible tour, especially when there is no nearest-neighbor restriction; some other algorithms are then needed to merge the several cycles obtained by the neural network into a single-cycle tour [10]. In this paper, chaos acts as a scheme to search for feasible tours in the solution space. We derive the synaptic connections and external inputs of the network by comparing the energy function (8) with the general formula (1). The dynamical equations of the neural network with nonlinear self-feedback are then obtained as follows:
$$\begin{aligned} U_{ij}(t+1) = {}& -2\Big( \sum_{k \in A_i^+} V_{ik}(t) + \sum_{k \in A_i^-} V_{ki}(t) + \sum_{k \in A_j^+} V_{jk}(t) + \sum_{k \in A_j^-} V_{kj}(t) \Big) \\ & + 2 B V_{ij}(t) - A D_{ij} + 8 - B + g\big(U_{ij}(t) - U_{ij}(t-1)\big), \end{aligned} \tag{12}$$

$$V_{ij}(t+1) = \big[ 1 + \tanh\big(\beta U_{ij}(t+1)\big) \big] / 2. \tag{13}$$
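A direct transcription of Eqs. (12) and (13) in Python might look as follows; U, V and D are dictionaries keyed by the links (i, j) with i < j, and g() is the self-feedback function from the sketch in Section 2. The representation is illustrative, not the authors' implementation.

```python
import numpy as np

def tsp_step(U, U_prev, V, links, D, A, B, beta, p1, p2):
    # One iteration of Eqs. (12)-(13) over all neurons V_ij.
    U_new, V_new = {}, {}
    for (i, j) in links:
        # Sums over all links incident to city i and to city j; note that
        # V_ij itself appears once in each of the two groups of sums.
        s = sum(V[l] for l in links if i in l) + sum(V[l] for l in links if j in l)
        u = (-2.0 * s + 2.0 * B * V[(i, j)] - A * D[(i, j)] + 8.0 - B
             + g(U[(i, j)] - U_prev[(i, j)], p1, p2))
        U_new[(i, j)] = u
        V_new[(i, j)] = 0.5 * (1.0 + np.tanh(beta * u))
    return U_new, V_new
```

Scanning all links for each neuron costs O(M^2) per iteration; precomputing incidence lists per city would reduce this, but the sketch keeps the correspondence with Eq. (12) explicit.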
4. Network dynamics

One can study the dynamical regimes of the model in great detail. Since our purpose in this work is mainly to show the potential application of the model to optimization problems of various scales, we describe the dynamical properties qualitatively rather than quantitatively. To study the different behaviors of the network, we run it on the N = 10 problem with β = 25, A = 0.2, B = 0.5. We only investigate the dynamics with respect to the parameters p1 and p2, because the self-feedback is the new ingredient of the model. The dynamical regimes are illustrated by bifurcation diagrams for U_12 with respect to p1 and p2 in Figs. 4(a) and 5(a) respectively. Fixed points, periodic orbits and complex
Fig. 4. (a) Bifurcation diagram for U_12 with respect to p1. The diagram in the small frame is the detail of that part. (b) Bifurcation diagram for the energy E.
oscillations can be detected in these figures. Whether the oscillatory behavior is chaotic or quasiperiodic can be distinguished by a two-dimensional section of the attractor, which is a circle for quasiperiodic motion. In this way, we find that in a vast region of the parameter space (p1, p2) the motion is chaotic rather than quasiperiodic. As an example, a chaotic time series of U_12 and a section of the attractor are plotted in Fig. 6.
Fig. 5. (a) Bifurcation diagram for U_12 with respect to p2. Look into the small frame for the detail of that part. (b) Bifurcation diagram for the energy E.
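Diagrams like Figs. 4(a) and 5(a) can be produced by sweeping one parameter, discarding a transient, and recording the subsequent values of one component of U. Below is a sketch reusing step() from the Section 2 sketch; the transient and sample lengths, and the random initial condition, are illustrative choices rather than the values used in the paper.

```python
import numpy as np

def bifurcation_scan(p1_values, p2, W, I, beta, idx, transient=500, samples=100):
    # For each p1, iterate the network, drop `transient` steps, then record
    # `samples` values of U[idx]; plotting the points gives a bifurcation diagram.
    rng = np.random.default_rng(0)
    points = []
    for p1 in p1_values:
        U = rng.uniform(-0.1, 0.1, size=W.shape[0])
        U_prev = U.copy()
        V = 0.5 * (1.0 + np.tanh(beta * U))
        for t in range(transient + samples):
            U_new, V = step(U, U_prev, V, W, I, p1, p2, beta)
            U_prev, U = U, U_new
            if t >= transient:
                points.append((p1, U[idx]))
    return points
```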
Fig. 6. A chaotic time series of U_12 and a two-dimensional section of the chaotic attractor at p1 = p2 = 5.
It should be noted that the behavior of the state V is greatly different from that of U. V may remain almost unchanged while U is changing, because of the saturation of the transfer function f. Only when U moves within a large enough range can V leave a fixed state to wander in the phase space. An indicator for the motion of the state is the energy E. Corresponding to Figs. 4(a) and 5(a), diagrams for E are plotted in Figs. 4(b) and 5(b), showing that E remains almost constant while U is oscillating periodically (p2 > 5 in Fig. 5(a)) or chaotically (p1 < 5 in Fig. 4(a)). During the wandering, the system may reside at some states for some period of time. These states are actually local minima of the energy, because they have E_1 = E_3 = 0, and the system would stay there indefinitely if the self-feedback were cut off, i.e. if p1 were set to 0 when such a state is reached. The wandering process is illustrated by the time series of E_1 in Fig. 7. So the system can visit a lot of local minima and generate a chaotic orbit among them.
Fig. 7. Time series of E_1. E_1 = 0 indicates that the system visits a local minimum of the energy E. (a) p1 = p2 = 5; (b) p1 = 7, p2 = 5.
In summary, when p2 is fixed, the system can transfer from one minimum to another in a chaotic fashion at large p1. When p1 is reduced to such an extent that U can only move chaotically or periodically in a small region around one of the fixed points, the system cannot escape the corresponding minimum, but presents a fixed state. The system finally reaches a real fixed point if p1 is decreased further, and the behavior of the network is then almost the same as that of the Hopfield network. When p1 is kept constant while p2 is gradually increased, the behavior of the system changes in almost the same way as just described.

It should be pointed out that the above properties are very common for systems of different size and structure. However, the exact values of the parameters p1 and p2 at which the bifurcations take place differ from system to system, and it seems impossible to give an exact rule for them. Generally, chaotic transitions among local minima can be obtained provided that p1 is not too small and p2 not too large; otherwise the system comes to one of the local minima and stays there for a very long time or forever. If, on the other hand, p1 is too large and p2 too small, the evolution of the state becomes very random in fashion, visiting local minima frequently with short residence times (see Fig. 7(b) as an example). At intermediate values of p1 and p2, the system visits a number of minima and resides at them for evident periods; this behavior has some similarity to intermittency (see Fig. 7(a)). For the convenience of the following discussion, we refer to these three roughly sorted kinds of behavior as the random, intermittent and frozen states respectively.
5. Searching for the optimal solution with autonomous control of parameters
A better tour contains links with shorter D_ij. To reduce the probability of taking a link with larger D_ij in the solution, the corresponding neuron gets a stronger drive from the self-feedback, which is changed into the form D_ij g(U_ij(t) - U_ij(t-1)).

Two similar tours T and T' with close lengths are only locally different from each other; correspondingly, the states V and V' of the network differ at only a few neurons. In contrast, the larger the difference between two tours, the more neurons differ. When the network reaches a long feasible tour, its state should undergo a great change, so that it has a better chance of reaching a shorter feasible tour. If the network comes to a short tour, it should search more carefully, with only a few neurons changing their states, to visit a still shorter tour. It is more encouraging yet if the system can be stabilized at the optimal solution. To satisfy these demands, the system is made to operate in the random region at first, and is then moved towards or into the frozen region when it visits a feasible tour near some desired length. We introduce the following autonomous control of the bifurcation parameters to realize this idea. When the system reaches a minimum and the solution is a feasible tour of length D, then

$$p_2 = p + 1/(D - L)^2, \tag{14}$$

$$p_1 = \alpha_2\, p_2 (D - L)^2. \tag{15}$$
Fig. 8. An example of transient chaos of U_12.
Otherwise

$$p_2 = p, \tag{16}$$

$$p_1 = \alpha_1\, p_2. \tag{17}$$
Now there are four parameters: p, α1, L and α2. p and α1 are chosen to let the system operate in the random region, so that it transfers between the minima at a proper frequency. L and α2 are chosen so that the network can be stable when it reaches a feasible tour with length close to L, but remains in the random region if the reached tour is much longer than L. So the system may finally stabilize at the optimal solution if L is very close to it. Before stabilizing, the behavior of the network is chaotic; in fact, the search is a process of transient chaos. Fig. 8 shows an example of transient chaos of U_12.

Eqs. (14) and (15) are a very simple way to control the system to work in the different regions. Exploring and stabilizing the optimal minimum with this scheme may fail for two reasons. The first is the difficulty of choosing L and α2 so as to destabilize, ideally, all the minima but the optimal one. The other is that a local minimum of lower energy may occur far from the optimal one, with a very high energy barrier standing between them. However, this scheme at least enables the system to escape commonplace minima, and retains the tendency to reach deeper ones.
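In code, the control rule of Eqs. (14)-(17) is only a few lines. In the sketch below, is_feasible_tour and tour_length are assumed helpers that test whether the thresholded state forms a single Hamilton cycle and compute its length (one possible implementation of that test is sketched in Section 6).

```python
def control_parameters(V, p, alpha1, alpha2, L, is_feasible_tour, tour_length):
    # Eqs. (14)-(17): near a feasible tour of length D close to L, p2 grows
    # while p1/p2 = alpha2 * (D - L)^2 shrinks (frozen regime); otherwise the
    # network is kept in the random regime with p1 = alpha1 * p2.
    if is_feasible_tour(V):
        D = tour_length(V)
        p2 = p + 1.0 / (D - L) ** 2          # Eq. (14); assumes D != L
        p1 = alpha2 * p2 * (D - L) ** 2      # Eq. (15)
    else:
        p2 = p                               # Eq. (16)
        p1 = alpha1 * p2                     # Eq. (17)
    return p1, p2
```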
6. The results

In this section, some simulations are carried out to examine the performance of the model.

6.1. The dependence of the Hopfield network on parameters

To begin with, the Hopfield model is simulated on the problem of N = 10 to investigate its dependence on the parameters A and B, with constant β = 25. Realizing
that B appears on the diagonal of the connection matrix W, we expect it to affect the stability of the fixed points. Let A = 0.2; it is seen that the system presents oscillatory behavior rather than fixed points if B is less than about 0.1. When B ≥ 0.2, many initial conditions approach fixed points, and the others tend to periodic orbits, which have a fixed value of E with E_1 equal to some nonzero integer and E_3 = 0. With the same 30 random initial conditions, we find that the final states are the same for different B (we examined B = 0.2, 0.5, 0.8, 1.0), but the speed of convergence is slightly higher with larger B when the final states are fixed points. Now let B = 0.5, and consider the effect of A. Also with the same 30 initial conditions, we find that the performance is almost the same for A = 0-0.7. When A ≥ 0.8, the system comes near the same minimum as for A ≤ 0.7, but has a slight periodic oscillation around it. The oscillation disappears if B is increased, e.g. to B = 1. So the performance of the network does not depend on the parameters very much, which is different from the traditional Hopfield model [4,9]. The reason may be that the nearest-neighbor restriction has a much stronger effect on the topology of the energy function than other factors. This problem deserves a separate study.
6.2. Solving the problem of N = 10
For this problem, the optimal solution is shown in Fig. 2(b); its length is known to be D_min = 2.685. The chaotic network is run with the following parameter values: A = 0.2, B = 0.5, β = 25, and p = 5, α1 = 3, L = 2.65, α2 = 350, noting that L is close to D_min and that α2 is large enough that the second-best solution, D = 2.746, is unstable. The behavior of the network is compared with the Hopfield network with the same values of A and B. Both networks are simulated with the same 30 random initial conditions, performing 2000 iterations in each simulation. Among the 30 simulations, the Hopfield network does not converge 7 times and converges to subtours 8 times; in the other 15 simulations it reaches feasible tours, the shortest being D = 2.778. The chaotic network performs much better. It converges to the optimal solution 24 times, and fails to converge only 6 times, while visiting an average of 44 feasible tours during the 2000 iterations. However, the chaotic network, which needs time to wander among the minima and search for the optimal one, has a lower speed of convergence: the average number of iterations to converge is 241, compared with 14 for the Hopfield network.

As stated above, when B is too small the Hopfield network loses the ability to converge to local minima. The performance of the chaotic network under this condition is also examined. Let B = 0.1, with the other parameters unchanged. In 10 simulations of 2000 iterations each, the system visits no feasible tour in 3 simulations, and an average of 3 feasible tours in the other 7. Although it does not converge to the optimal solution, the chaotic network still operates better from the viewpoint of finding feasible tours.
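Counting feasible tours versus subtours in these experiments requires testing whether the (thresholded) state V selects exactly two links at every city, and whether the selected links form a single cycle. A minimal sketch of such a test follows; the 0.5 threshold is an assumption, not taken from the paper.

```python
def decode_links(V, links, N, thresh=0.5):
    # Return the selected links if every city has exactly two of them
    # (i.e. E_1 = 0 for binary states); otherwise return None.
    chosen = [l for l in links if V[l] > thresh]
    degree = [0] * N
    for (i, j) in chosen:
        degree[i] += 1
        degree[j] += 1
    return chosen if all(d == 2 for d in degree) else None

def is_single_cycle(chosen, N):
    # Walk around the degree-2 graph starting from city 0; a feasible tour
    # (Hamilton cycle) must visit all N cities before returning to the start.
    adj = {}
    for (i, j) in chosen:
        adj.setdefault(i, []).append(j)
        adj.setdefault(j, []).append(i)
    seen, prev, cur = {0}, None, 0
    while True:
        cur, prev = [k for k in adj[cur] if k != prev][0], cur
        if cur == 0:
            return len(seen) == N
        seen.add(cur)
```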
6.3. Solving the problem of N = 30

For this problem of 30 random cities, A, B and β are chosen as for the problem of 10 cities. Let p = 10 and α1 = 7, chosen rather arbitrarily to make the network operate in the random region. Since we have no a priori idea about the optimal tour, the choice of L and α2 is not as easy as in Section 6.2. Simulations are carried out to examine their effect. The first step is to fix α2 at some value (here α2 = 12) and let the system develop from the same initial state with different values of L. As shown in Fig. 9(a), if L is large, the system is frozen at the minimum of D = 5.542, which is the first one it visits. The system converges to shorter tours as L is decreased. If L is too small (here L < 4.2), the network does not converge during the 2000 iterations. Similarly, the effect of α2 is studied with L fixed at 4.2. α2 determines approximately how much longer than L an obtained tour is, as shown in Fig. 9(b). Too small an α2 cannot enable the system to escape the first minimum it is trapped in; the frozen tours get closer to L as α2 is increased. When α2 is increased still further, a great change of the state takes place whenever the system visits a feasible tour (see Eq. (15)), and convergence may not occur during the 2000 iterations (α2 = 15, 16, 17, 18, 19 in Fig. 9(b)) unless the tour is short enough.

Again the performance of the network is compared with the Hopfield network. Let L = 4.2 and α2 = 12. The two networks start their iterations from the same 20 random initial states, each run lasting 2000 steps. Among these 20 instances, the Hopfield network obtains 4 feasible tours and 15 subtours, needing an average of 35 iterations to converge; it does not converge in 1 instance. The chaotic network behaves better: it obtains 12 feasible tours, with an average convergence time of 936 iterations. In the other 8 instances it does not converge, but visits an average of 7.5 feasible tours during the 2000 iterations.
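Putting the earlier sketches together, the whole experimental loop (iterate the chaotic network, and whenever the state encodes a feasible tour apply the control rule of Eqs. (14)-(17)) might be organized as follows. This is a schematic sketch reusing tsp_step, decode_links and is_single_cycle from the earlier sketches; the initial conditions and the convergence test are simplified assumptions.

```python
import math, random

def transient_chaos_search(links, D, N, A, B, beta, p, alpha1, alpha2, L,
                           max_iter=2000):
    # Run the chaotic network for max_iter steps, tracking the best feasible
    # tour found; (p1, p2) are adjusted autonomously as in Section 5.
    U = {l: random.uniform(-0.1, 0.1) for l in links}
    U_prev = dict(U)
    V = {l: 0.5 * (1.0 + math.tanh(beta * u)) for l, u in U.items()}
    p1, p2 = alpha1 * p, p                       # start in the random regime
    best = None
    for _ in range(max_iter):
        U_new, V = tsp_step(U, U_prev, V, links, D, A, B, beta, p1, p2)
        U_prev, U = U, U_new
        chosen = decode_links(V, links, N)
        if chosen is not None and is_single_cycle(chosen, N):
            length = sum(D[l] for l in chosen)   # a feasible tour was visited
            best = length if best is None else min(best, length)
            p2 = p + 1.0 / (length - L) ** 2     # Eq. (14); assumes length != L
            p1 = alpha2 * p2 * (length - L) ** 2 # Eq. (15)
        else:
            p2, p1 = p, alpha1 * p               # Eqs. (16)-(17)
    return best
```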
Fig. 9. (a) The length of feasible tours obtained with different L when α2 = 12; (b) the length of feasible tours obtained with different α2 when L = 4.2.
The lengths D of the feasible tours obtained by the two networks are shown in the following table:

Hopfield network   D:  4.895  5.462  5.389  5.814

Chaotic network    D:  4.612  4.708  4.908  4.816  4.847  4.914
                       4.895  4.749  4.865  4.832  4.951  4.914
It is seen that the chaotic network not only converges to feasible tours much more efficiently, but also improves the tours. The average length of the obtained feasible tours is D_H = 5.390 for the Hopfield network and D_c = 4.834 for the chaotic network. The percentage of average improvement S is calculated as

$$S = \frac{D_H - D_c}{D_H} \times 100\% = 10.32\%.$$

Fig. 3(b) shows the tour of D = 4.612. It looks good, although we do not know the optimal tour.
7. Discussion

We have introduced a nonlinear self-feedback mechanism into the Hopfield model to construct a chaotic neural network, overcoming the difficulty that the Hopfield model has no ability to escape local minima once it is trapped in them. This network can visit the minima of the energy, but does not stop there indefinitely, owing to the chaotic motion of the local fields U_ij. Whether or not the system transfers between the minima is easy to control by adjusting the parameters p1 and p2 of the self-feedback. A method of autonomous control of these parameters may enable the system to search for the global minimum solution or its approximations by transient chaos. Applying the network to the N = 10 and N = 30 TSP problems has shown that it converges more efficiently to the optimal tour or to tours close to it. Especially when the energy constraints are not sufficient for getting feasible tours, the chaotic motion of the state enables the system to search successively through the minima, with better chances of visiting good feasible tours.

Many neural network models [1,5,6,7] which can display chaotic dynamics have been proposed, mainly concentrating on a description of the dynamical structure. Some authors have also considered the functional role of chaos in neural network information processing [8,11]. Freeman suggests a functional role of chaos, namely that rabbits cannot memorize a new odor without chaos. Tsuda points out that cortical chaos may serve for dynamically linking true memory as well as for memory search. Our work, making use of chaos for optimal-minimum search, is also a meaningful attempt to explore the potential applications of chaotic dynamics in neurocomputing. The properties of the nonlinear self-feedback allow the model to be applied to a variety of energy-minimization problems. It
seems also encouraging to apply this chaotic neural network model to other neurocomputing processes, such as pattern recognition and associative memory. In another, similar model [12], chaos acts as a mechanism to generate temporal sequences of memories; there the response properties of the network are very similar to the model proposed by Tsuda [8]. The process of searching with transient chaos is similar to simulated annealing; however, the former is deterministic while the latter is stochastic. Comparing these two schemes is another line of future research. The coding algorithm as well as the nearest-neighbor restriction bring about some new properties, which are worth further study.
Acknowledgment We would like to thank Dr Ji Daoyun for his helpful suggestions.
References

[1] K. Aihara, T. Takabe and M. Toyoda, Chaotic neural networks, Physics Letters A 144 (1990) 333-340.
[2] C. Jun, C. Tianlun and H. Wuqun, The Traveling Salesman Problem: optimization by importance sampling simulated annealing method, Chinese J. Computational Physics 11 (3) (1994) 278-282.
[3] N.H. Christ, R. Friedberg and T.D. Lee, Random lattice field theory: general formulation, Nucl. Phys. B 202 (1982) 89-125.
[4] J.J. Hopfield and D.W. Tank, "Neural" computation of decisions in optimization problems, Biological Cybernetics 52 (1985) 141-152.
[5] S. Renals and R. Rohwer, A study of network dynamics, J. Statist. Physics 58 (1990) 825-848.
[6] U. Riedel, R. Kühn and J.L. van Hemmen, Temporal sequences and chaos in neural nets, Physical Review A 38 (1988) 1105-1108.
[7] H. Sompolinsky, A. Crisanti and H.J. Sommers, Chaos in random neural networks, Physical Review Letters 61 (1988) 259-262.
[8] I. Tsuda, Dynamic link of memory - chaotic memory map in nonequilibrium neural networks, Neural Networks 5 (1992) 313-326.
[9] G.V. Wilson and G.S. Pawley, On the stability of the Travelling Salesman Problem algorithm of Hopfield and Tank, Biological Cybernetics 58 (1988) 63-70.
[10] X. Xu and W.T. Tsai, Effective neural algorithms for the Traveling Salesman Problem, Neural Networks 4 (1991) 193-205.
[11] C.A. Skarda and W.J. Freeman, How brains make chaos in order to make sense of the world, Behavioral and Brain Sciences 10 (1987) 161-195.
[12] Z. Chang-song and C. Tian-lun, Chaotically temporal retrieval of memory, Comm. Theoret. Physics (in Chinese), to appear.
Zhou Chang-song is currently a Ph.D. student in the Department of Physics, Nankai University. He received a B.S. in physics from the same university in 1992. His interests include nonlinear dynamical systems, neural networks and chaos control.
Chen Tian-lun graduated in 1962 from Nankai University, Tianjin, P.R. China. After her graduation, she has been with the Department of Physics, Nankai University, where she is a professor. She was a visiting scholar at Brown University from 1980 to 1982. Her current interests are in statistical physics and neural networks.
Huang Wu-qun was born in Tianjin, China. He received a B.S. degree from Nankai University, China. Mr Huang is a Professor of Physics at Nankai University. His research interests include neural networks and statistical physics.