Vector-product Hopfield model

Changhe Zhou, Liren Liu

Shanghai Institute of Optics and Fine Mechanics, Academia Sinica, P.O. Box 800-216, Shanghai 201800, China

Optics Communications 168 (1999) 445-455. Full length article.
Received 8 April 1999; received in revised form 24 June 1999; accepted 29 June 1999

Abstract

For pattern recognition, we are frequently faced with the problem of recognizing a real-world three-dimensional object. The mathematical vector-product algorithm in three-dimensional space is introduced into the neural network domain, and a new type of neural network model, the vector-product Hopfield model, is proposed. Computer simulations show that the vector-product Hopfield model can recall the entire stored three-dimensional vectors at a high recognition rate from partial information of the stored vectors in only one or two dimensions. Such a performance cannot be realized with the Hopfield model by simply presenting only one-third or two-thirds of the stored vectors. Thus, the proposed model is highly interesting for further developments of neural network models and practical applications. A preliminary optical experimental implementation is also given. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Neural network model; Hopfield model; Vector-product method; Optical neural network

1. Introduction

For pattern recognition, we are frequently faced with the problem of recognizing a real-world three-dimensional object. Neural networks, which aim to simulate the recognition capacity of human beings, are promising for solving such complex pattern recognition problems. Several neural network models have been proposed; the Hopfield model [1] is one of the most famous among them [2-4] and has been studied extensively [5-8]. The Hopfield model does have the capacity of associative memory, to some extent like human beings, through its so-called 'interconnection matrix', but the size of this matrix increases

Corresponding author. Tel.: +86-21-595-34-890 (664); fax: +86-21-595-28-885; e-mail: [email protected]

rapidly as the dimensionality of the recognized pattern increases. For a three-dimensional pattern of size $N^3$, the Hopfield model needs an interconnection matrix of size $N^6$; for example, for $N = 100$, $N^6 = 10^{12}$. A pattern may need more than one dimension to describe it, since an object in the real world is usually three-dimensional, so how to represent and recognize a three-dimensional object is always an interesting problem. For example, a color image is composed of three basic color dimensions: red, green, and blue. From a mathematical point of view, each component may need more than one dimension to describe it. The Hopfield model is originally one-dimensional, which is not adequate for applications where multidimensional representations are used. For the case where the number of orthogonal dimensions is two, the complex Hopfield model



[9] has been proposed, where each component of the vectors and of the interconnection matrix is complex-valued, i.e., has two orthogonal dimensions, real and imaginary. If the information in one dimension is completely lost, numerical simulations show that the stored vectors can still be recalled at a very high recognition rate (>95%) with the complex Hopfield model using the information in the other dimension. Such a high performance cannot be realized with the original Hopfield model by simply presenting one-half of the stored vector. It is natural to expect that the Hopfield model can be developed into three-dimensional space. We tried to extend the complex Hopfield model to a three-dimensional one by keeping the rules of the complex domain, such as $1 \cdot i = i$, $i \cdot i = -1$, but failed to do so; namely, it seems impossible, at least as far as we know, to develop the Hopfield model in three-dimensional space with rules that are compatible with those of the complex Hopfield model. In this paper, we introduce what mathematicians do in three dimensions in the literature [10] and develop the vector-product Hopfield model. It is analytically proved and numerically verified that the vector-product Hopfield model has associative capacity: by presenting the information in only one or two dimensions, the entire stored vector in all three dimensions can be recalled at a much higher recognition rate than that of the Hopfield model presented with one-third or two-thirds of the stored vector. A preliminary optical experiment is also presented.

2. Vector-product Hopfield model

Both for completeness and for comparison, let us briefly review the Hopfield model and the complex-valued Hopfield model. The interconnection matrix constructed by the Hopfield model is

$$T_{i,j} = \sum_{m=1}^{M} v_i^m v_j^m, \qquad (1)$$

where $\{v_i^m\}$ are the $M$ vectors to be stored in the interconnection matrix $\{T_{i,j}\}$, $m = 1, 2, \dots, M$, $i = 1, 2, \dots, N$, $j = 1, 2, \dots, N$, and $N$ is the length of the vector.

The recalling process is

$$v_i = Q\left(\sum_{j=1}^{N} T_{i,j}\, v_j\right), \qquad (2)$$

where $i = 1, 2, \dots, N$, $j = 1, 2, \dots, N$, and

$$Q(x) = \begin{cases} 1, & x \ge 0, \\ -1, & x < 0, \end{cases} \qquad (3)$$

and $\{v_i\} \in \{1, -1\}$. Note that $\{v_i\}$ and $\{v_j\}$ on the left and right sides of Eq. (2) are the retrieved output vector and the input vector, respectively. Iterations ($v_{\text{input}}(\text{next time}) = v_{\text{output}}$) may recall the whole stored vector even if only part of the stored vector is presented; a minimal sketch of this store-and-recall loop is given below.
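To make Eqs. (1)-(3) concrete, the following is our own numerical illustration of the original Hopfield store-and-recall loop, not code from the paper; the parameters ($N = 32$, $M = 3$, three iterations, eight flipped elements) are chosen to match the simulations described later.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 3
v = rng.choice([-1.0, 1.0], size=(M, N))     # M stored bipolar vectors

# Eq. (1): T_{i,j} = sum_m v_i^m v_j^m
T = v.T @ v                                  # shape (N, N)

def recall(x, iters=3):
    """Eqs. (2)-(3): x_i <- Q(sum_j T_{i,j} x_j), iterated."""
    for _ in range(iters):
        x = np.where(T @ x >= 0, 1.0, -1.0)  # threshold Q of Eq. (3)
    return x

# Probe with a corrupted copy of the first stored vector
probe = v[0].copy()
probe[:8] *= -1.0                            # flip 8 of the 32 elements
print(np.array_equal(recall(probe), v[0]))   # True when recall succeeds
```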

Suppose now that the stored vectors are two-dimensional. By simply relabelling, the interconnection matrix becomes

$$T_{i,j,i',j'} = \sum_{m=1}^{M} v_{i,j}^m\, v_{i',j'}^m, \qquad (4)$$

whose size will be $N^4$ if the size of the vector is $N^2$. If the stored vectors are three-dimensional, then by relabelling the interconnection matrix becomes

$$T_{i,j,k,i',j',k'} = \sum_{m=1}^{M} v_{i,j,k}^m\, v_{i',j',k'}^m, \qquad (5)$$

whose size will be $N^6$ if the vector has a size of $N^3$. Such an interconnection matrix is too large for practical applications. Note that the components of the vector and of the interconnection matrix in Eqs. (4) and (5) are still one-dimensional, so the basic operation rule is the same as in the original Hopfield model, Eq. (1). In complex-valued space, where both the vector and the interconnection matrix are complex-valued, i.e., the number of orthogonal dimensions of the elements is two, as shown in Fig. 1, the interconnection matrix can be constructed as [9]

$$W_{i,j} = \sum_{m=1}^{M} V_i^m\, V_j^m, \qquad (6)$$

where

$$V_i = v_{iR} + \mathrm{i}\, v_{iI}, \qquad V_j = v_{jR} - \mathrm{i}\, v_{jI}, \qquad (7)$$

and the subscripts R and I mean the real and imaginary parts of the vector, respectively.


The recalling process is

$$V_i = Q\left(\sum_{j=1}^{N} W_{i,j}\, V_j\right), \qquad (8)$$

where

$$Q(X) = Q(x_R + \mathrm{i}\, x_I) = Q(x_R) + \mathrm{i}\, Q(x_I), \qquad (9)$$

and $Q(x_R)$ and $Q(x_I)$ have the same meaning as in Eq. (3). It should be noted that the above rules of the complex-valued domain cannot be directly extended to three-dimensional space; therefore, we use the vector-product algorithm to construct a new type of model in three-dimensional space, as given below.
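Before moving to three dimensions, the following is our own numerical illustration of the complex-valued model of Eqs. (6)-(9), not code from the paper; the probe with one dimension wiped out mirrors the behavior reported in Ref. [9].

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 3
vR = rng.choice([-1.0, 1.0], size=(M, N))    # real dimension of stored vectors
vI = rng.choice([-1.0, 1.0], size=(M, N))    # imaginary dimension
V = vR + 1j * vI                             # Eq. (7)

# Eq. (6): W_{i,j} = sum_m V_i^m conj(V_j^m); the conjugate realizes Eq. (7)
W = V.T @ V.conj()

def Q(x):
    """Eq. (9): threshold the real and imaginary parts separately."""
    return np.where(x.real >= 0, 1.0, -1.0) + 1j * np.where(x.imag >= 0, 1.0, -1.0)

probe = V[0].real + 0j                       # imaginary dimension completely lost
for _ in range(3):
    probe = Q(W @ probe)                     # Eq. (8), iterated
print(np.array_equal(probe, V[0]))           # True when the full vector is recalled
```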

Fig. 2. The vector-product Hopfield model proposed in this paper: (a) the three-dimensional vector; and (b) the constructed interconnection matrix.

Fig. 1. (a) The complex-valued vector; and (b) the interconnection matrix constructed by the complex-valued Hopfield model.

If the components of the vectors are three-dimensional, as shown in Fig. 2(a), so that

$$\mathbf{a} = a_1 \mathbf{i} + a_2 \mathbf{j} + a_3 \mathbf{k}, \qquad \mathbf{b} = b_1 \mathbf{i} + b_2 \mathbf{j} + b_3 \mathbf{k}, \qquad (10)$$

then the product of the above two vectors is [10]

$$\mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} = \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} \mathbf{i} + \begin{vmatrix} a_3 & a_1 \\ b_3 & b_1 \end{vmatrix} \mathbf{j} + \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} \mathbf{k} = (a_2 b_3 - a_3 b_2)\mathbf{i} + (a_3 b_1 - a_1 b_3)\mathbf{j} + (a_1 b_2 - a_2 b_1)\mathbf{k}, \qquad (11)$$


where the subscripts 1, 2, and 3 denote the components of a vector in the i, j, and k dimensions, respectively. Based on the rule of Eq. (11), we propose the three-dimensional vector-product Hopfield model as

$$T_{ln} = (-1) \sum_{m=1}^{M} V_l^m \times V_n^m, \qquad (12)$$

where both the components of the vectors and those of the interconnection matrix are three-dimensional, as shown in Fig. 2:

$$V_l = (v_l)_1 \mathbf{i} + (v_l)_2 \mathbf{j} + (v_l)_3 \mathbf{k}, \qquad T_{ln} = (T_{ln})_1 \mathbf{i} + (T_{ln})_2 \mathbf{j} + (T_{ln})_3 \mathbf{k}. \qquad (13)$$

We can express each component of $T_{ln}$ as

$$\begin{aligned} (T_{ln})_1 &= (-1) \sum_{m=1}^{M} \left[ (v_l^m)_2 (v_n^m)_3 - (v_l^m)_3 (v_n^m)_2 \right], \\ (T_{ln})_2 &= (-1) \sum_{m=1}^{M} \left[ (v_l^m)_3 (v_n^m)_1 - (v_l^m)_1 (v_n^m)_3 \right], \\ (T_{ln})_3 &= (-1) \sum_{m=1}^{M} \left[ (v_l^m)_1 (v_n^m)_2 - (v_l^m)_2 (v_n^m)_1 \right]. \end{aligned} \qquad (14)$$

The recalling process is

$$V_l = Q\left( \sum_{n=1}^{N} T_{ln} \times V_n \right). \qquad (15)$$

If we define

$$\hat{V}_l = \sum_{n=1}^{N} T_{ln} \times V_n = (\hat{v}_l)_1 \mathbf{i} + (\hat{v}_l)_2 \mathbf{j} + (\hat{v}_l)_3 \mathbf{k}, \qquad (16)$$

then we get

$$V_l = Q(\hat{V}_l) = Q\big((\hat{v}_l)_1\big) \mathbf{i} + Q\big((\hat{v}_l)_2\big) \mathbf{j} + Q\big((\hat{v}_l)_3\big) \mathbf{k}, \qquad (17)$$

where $Q(x)$ in Eqs. (15) and (17) has the same meaning as in Eq. (3), and

$$\begin{aligned} (\hat{v}_l)_1 &= \sum_{n=1}^{N} \left[ (T_{ln})_2 (v_n)_3 - (T_{ln})_3 (v_n)_2 \right], \\ (\hat{v}_l)_2 &= \sum_{n=1}^{N} \left[ (T_{ln})_3 (v_n)_1 - (T_{ln})_1 (v_n)_3 \right], \\ (\hat{v}_l)_3 &= \sum_{n=1}^{N} \left[ (T_{ln})_1 (v_n)_2 - (T_{ln})_2 (v_n)_1 \right]. \end{aligned} \qquad (18)$$

Now we analyze why Eq. (15) works as an associative memory. For example, if we expand $(\hat{v}_l)_1$, we obtain

$$(\hat{v}_l)_1 = \sum_{n=1}^{N} \left\{ (-1) \sum_{m=1}^{M} \left[ (v_l^m)_3 (v_n^m)_1 - (v_l^m)_1 (v_n^m)_3 \right] (v_n)_3 - (-1) \sum_{m=1}^{M} \left[ (v_l^m)_1 (v_n^m)_2 - (v_l^m)_2 (v_n^m)_1 \right] (v_n)_2 \right\}. \qquad (19)$$

If the input vector is one of the stored vectors, $V(\text{input}) = V^{m_0}$, then we have

$$\begin{aligned} (\hat{v}_l)_1 = 2N (v_l^{m_0})_1 &+ \sum_{n=1}^{N} \sum_{\substack{m=1 \\ m \neq m_0}}^{M} (v_l^m)_1 (v_n^m)_3 (v_n^{m_0})_3 - \sum_{n=1}^{N} \sum_{m=1}^{M} (v_l^m)_3 (v_n^m)_1 (v_n^{m_0})_3 \\ &+ \sum_{n=1}^{N} \sum_{\substack{m=1 \\ m \neq m_0}}^{M} (v_l^m)_1 (v_n^m)_2 (v_n^{m_0})_2 - \sum_{n=1}^{N} \sum_{m=1}^{M} (v_l^m)_2 (v_n^m)_1 (v_n^{m_0})_2. \end{aligned} \qquad (20)$$

Since the values of the stored vector components $(v)_{1,2,3}$ are randomly chosen from $\{-1, 1\}$, the last four terms have a mathematical expectation of zero; thus,

$$(\hat{v}_l)_1 \approx 2N (v_l^{m_0})_1. \qquad (21)$$

A similar proof can be given for the other two dimensions $(v_l)_{2,3}$. A longer length $N$ results in a better signal-to-noise ratio: each noise term in Eq. (20) is a sum of roughly $NM$ independent $\pm 1$ products, so its standard deviation grows only like $\sqrt{NM}$, while the signal grows like $2N$. With the thresholding function of Eq. (3), the correct stored vector should then be recalled. Thus, our vector-product algorithm has the capacity of associative memory. A minimal numerical sketch of Eqs. (12)-(15) is given below.
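The following sketch is our own illustration of Eqs. (12)-(15), not code from the paper. It wipes out one entire dimension of a stored vector and checks that the other two dimensions recover it, which is the central claim of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 3

# M stored vectors; each element is a 3-component vector with entries in {-1, +1}
V = rng.choice([-1.0, 1.0], size=(M, N, 3))

# Eq. (12): T_{ln} = (-1) * sum_m V_l^m x V_n^m, one 3-vector per (l, n) pair
T = -sum(np.cross(Vm[:, None, :], Vm[None, :, :]) for Vm in V)   # shape (N, N, 3)

def recall(v, iters=3):
    """Eqs. (15)-(18): V_l <- Q(sum_n T_{ln} x V_n), thresholded componentwise."""
    for _ in range(iters):
        s = np.cross(T, v[None, :, :]).sum(axis=1)   # sum_n T_{ln} x V_n
        v = np.where(s >= 0, 1.0, -1.0)              # Q of Eq. (3) per component
    return v

probe = V[0].copy()
probe[:, 0] = 0.0                                    # dimension i completely lost
print(np.array_equal(recall(probe), V[0]))           # True when recall succeeds
```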


Fig. 3. Computer simulations of the vector-product Hopfield model in comparison with the Hopfield model. Curves 1, 2, and 3 denote the performance of the vector-product Hopfield model when randomly selected elements of the stored vector are changed in one, two, and three dimensions, respectively, where: (a) the changed elements are replaced by their opposite values; and (b) the changed elements are replaced by zeros. The curve marked with 'D' and 'Hopfield' is the performance of the Hopfield model.

Computer simulations verifying the above algorithm, Eqs. (12)-(15), have been carried out; the results are shown in Fig. 3. To be comparable with the results in Ref. [9], we take the same parameters as in Ref. [9]: $N = 32$, $M = 3$. The number of feedback iterations is three, and each curve is the average of 20 measurements. All the stored vectors are randomly generated each time. The Hamming distance means the number of changed elements of the stored vectors. Curves 1, 2, and 3 denote the cases where such


random changes occur in one dimension, two dimensions, and three dimensions, respectively. In Fig. 3(a), the changed elements are replaced by the opposite values of the stored vectors. In Fig. 3(b), they are replaced by zeros. Because fewer components are changed in curve 1 than in curve 2, and in curve 2 than in curve 3, curve 1 is better than curve 2, and curve 2 is better than curve 3. If the changed elements are replaced by zeros, the 'negative' contributions due to the opposite-valued elements disappear, which is why curves 1, 2, and 3 in Fig. 3(b) are better than curves 1, 2, and 3 in Fig. 3(a), respectively. For comparison, the performance of the Hopfield model with the same parameters ($N = 32$ and $M = 3$) is also calculated and drawn in Fig. 3(a). Curve 3 in Fig. 3(a) is quite close to the Hopfield curve, which implies that our proposed method does not increase the overall storage capacity. Our method, however, is interesting in that if the information in only one or two dimensions is changed, the whole stored vector can still be recalled at a high recognition rate. Such high performances (curves 1 and 2 in Fig. 3) cannot be obtained with the Hopfield model by simply presenting one-third or two-thirds of a stored vector. A sketch of one such measurement is given below.
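As a rough re-creation of a single Fig. 3(a)-style data point (our own sketch; the exact corruption and scoring rules used in the paper are assumptions here), the following flips $h$ randomly chosen elements in one dimension only and measures the fraction of perfect recalls:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, h, trials = 32, 3, 8, 20   # h = Hamming distance in one dimension

hits = 0
for _ in range(trials):
    V = rng.choice([-1.0, 1.0], size=(M, N, 3))       # fresh random memories
    T = -sum(np.cross(Vm[:, None, :], Vm[None, :, :]) for Vm in V)  # Eq. (12)
    probe = V[0].copy()
    probe[rng.choice(N, size=h, replace=False), 0] *= -1.0  # opposite values
    for _ in range(3):                                # three feedback iterations
        s = np.cross(T, probe[None, :, :]).sum(axis=1)
        probe = np.where(s >= 0, 1.0, -1.0)
    hits += np.array_equal(probe, V[0])

print(hits / trials)   # recognition rate at Hamming distance h (curve 1 regime)
```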

3. Second-order vector-product Hopfield model

It is straightforward to develop our proposed model into a second-order [11,12] three-dimensional vector-product Hopfield model:

$$T_{lnp} = \sum_{m=1}^{M} V_l^m \times V_n^m \times V_p^m, \qquad (22)$$

where $T_{lnp}$ is a three-dimensional vector (Eq. (13)), the cross products are taken from left to right, and each component is given by

$$\begin{aligned} (T_{lnp})_1 &= \sum_{m=1}^{M} \Big\{ \left[ (v_l^m)_3 (v_n^m)_1 - (v_l^m)_1 (v_n^m)_3 \right] (v_p^m)_3 - \left[ (v_l^m)_1 (v_n^m)_2 - (v_l^m)_2 (v_n^m)_1 \right] (v_p^m)_2 \Big\}, \\ (T_{lnp})_2 &= \sum_{m=1}^{M} \Big\{ \left[ (v_l^m)_1 (v_n^m)_2 - (v_l^m)_2 (v_n^m)_1 \right] (v_p^m)_1 - \left[ (v_l^m)_2 (v_n^m)_3 - (v_l^m)_3 (v_n^m)_2 \right] (v_p^m)_3 \Big\}, \\ (T_{lnp})_3 &= \sum_{m=1}^{M} \Big\{ \left[ (v_l^m)_2 (v_n^m)_3 - (v_l^m)_3 (v_n^m)_2 \right] (v_p^m)_2 - \left[ (v_l^m)_3 (v_n^m)_1 - (v_l^m)_1 (v_n^m)_3 \right] (v_p^m)_1 \Big\}. \end{aligned} \qquad (23)$$

The recalling process is

$$V_l = Q\left( \sum_{n=1}^{N} \sum_{p=1}^{N} T_{lnp} \times V_n \times V_p \right). \qquad (24)$$

If we let

$$\hat{V}_l = \sum_{n=1}^{N} \sum_{p=1}^{N} T_{lnp} \times V_n \times V_p, \qquad (25)$$

then each component of $(\hat{V}_l)_{1,2,3}$ is

$$\begin{aligned} (\hat{v}_l)_1 &= \sum_{n=1}^{N} \sum_{p=1}^{N} \Big\{ \left[ (T_{lnp})_3 (v_n)_1 - (T_{lnp})_1 (v_n)_3 \right] (v_p)_3 - \left[ (T_{lnp})_1 (v_n)_2 - (T_{lnp})_2 (v_n)_1 \right] (v_p)_2 \Big\}, \\ (\hat{v}_l)_2 &= \sum_{n=1}^{N} \sum_{p=1}^{N} \Big\{ \left[ (T_{lnp})_1 (v_n)_2 - (T_{lnp})_2 (v_n)_1 \right] (v_p)_1 - \left[ (T_{lnp})_2 (v_n)_3 - (T_{lnp})_3 (v_n)_2 \right] (v_p)_3 \Big\}, \\ (\hat{v}_l)_3 &= \sum_{n=1}^{N} \sum_{p=1}^{N} \Big\{ \left[ (T_{lnp})_2 (v_n)_3 - (T_{lnp})_3 (v_n)_2 \right] (v_p)_2 - \left[ (T_{lnp})_3 (v_n)_1 - (T_{lnp})_1 (v_n)_3 \right] (v_p)_1 \Big\}. \end{aligned} \qquad (26)$$

A minimal numerical sketch of this second-order model follows.

Computer simulations of Eqs. (22)-(24) are shown in Fig. 4. The parameters are the same as in Fig. 3, so there are some similarities between Figs. 3 and 4. One noteworthy difference arises when the random changes replace the stored values by their opposites at two positions $l, n$ at the same time: since

$$(-v_l)(-v_n) = (v_l)(v_n), \qquad (27)$$


Fig. 4. Computer simulations of the second-order vector-product Hopfield model. Curves 1, 2, and 3 in (a) and (b) have the same meaning as in Fig. 3.

the result of such a multiplication appears as if no error had happened, which is why curves 1, 2, and 3 in Fig. 4(a) rise again after the Hamming distance exceeds half of the vector length, 16. Note that if the random changes are replaced by zeros, Eq. (27) does not hold. That is why curves 1,


2, and 3 in Fig. 4(b) do not rise again as they do in Fig. 4(a).

4. Optical implementation

Optics is attractive in that it can realize the addition and interconnection operations in full parallelism. Optics, however, has difficulty realizing the subtraction and thresholding functions required by neural network models (Eq. (15)). Therefore, it is not our objective to pursue a purely optical implementation of neural network models without electronics; instead, we investigate a hybrid opto-electronic implementation that combines the advantages of optics and electronics.

Fig. 5. Optical implementation of the vector-product Hopfield model: (a) schematic illustration of the experimental set-up; (b) experimental result taken in the focal plane of the lens in (a).


One example is shown in Fig. 5, where two spatial light modulators (SLMs) are used to encode the interconnection matrix and the input vector, respectively. The multiplication result is detected by a CCD camera in the focal plane of the lens. The subsequent subtraction and thresholding are performed by electronics. The partially retrieved vector and the matrix are displayed on the SLMs again for iteration. One example of the vector-matrix operation required by Eq. (15) is

$$\sum_{n=1}^{3} T_{ln} V_n = \begin{pmatrix} 1 & -1 & 1 \\ -1 & -1 & 0 \\ -1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \\ -1 \end{pmatrix}, \qquad (28)$$

where the interconnection matrix is clipped to three values $(+1, 0, -1)$. In the experiment, two pinhole masks are used to encode the interconnection matrix and the vector, respectively. Each element of the interconnection matrix and of the input vector is encoded as a positive part ($+1$) and a negative part ($-1$); a zero value means opaque. The pinhole masks have a pinhole size of 1 mm and a distance between pinholes of 3 mm. The distance between the two masks is 650 mm. The lens has a focal length of 135 mm and a relative aperture of 1:1.8. A CCD camera connected to an image board in a computer records the experimental result in the focal plane of the lens, as shown in Fig. 5(b); the result accords well with the simulation in Fig. 5(a) and equals the right side of Eq. (28) after electronic subtraction. Clearly, the experiment in Fig. 5 performs only part of the operations required by the neural network model (Eqs. (15)-(18)). However, it demonstrates that optics can finish the additions and interconnections between neurons in full parallelism as the light beam passes through the optical system once. A larger vector-matrix multiplication, for a better demonstration of this model, could be performed in this optical system. More detailed descriptions of this and other relevant optical neural systems can be found in Refs. [13-15]. We are now trying to build a microoptic neural system using a Dammann grating [16], a Talbot grating [17], and integrated electro-optic modulation [18]. New, enhanced performance of such a microoptic neural system is expected in the future.
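As an electronic cross-check of the positive/negative pinhole encoding and subtraction described above (our own sketch; the variable names are not from the paper), the two partial sums that the optics forms in parallel can be simulated and subtracted:

```python
import numpy as np

# Clipped interconnection matrix and input vector of Eq. (28)
T = np.array([[1, -1, 1],
              [-1, -1, 0],
              [-1, 1, 1]])
v = np.array([1, -1, 1])

prod = T * v                                     # products at the mask pairs
pos = np.where(prod > 0, prod, 0).sum(axis=1)    # light through '+' pinholes
neg = -np.where(prod < 0, prod, 0).sum(axis=1)   # light through '-' pinholes
print(pos, neg, pos - neg)                       # subtraction yields T @ v
```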

5. Discussion

Our proposed model appears to have wide applications. For example, a color image can be encoded by three basic colors, red, green, and blue, and it is known that the color of a real-world object carries rich information for correct recognition. If the Hopfield model were used to deal with such a three-dimensional problem (Eq. (5)), it would need an $N^6$ interconnection matrix, which might be too large for practical applications. Our method needs only a $3N^2$ interconnection matrix (Eq. (12)), approximately $3/N^4$ times the size required by the Hopfield model. In addition, with our method, even if the information in one or two dimensions is completely lost, the whole stored object may still be recalled at a high recognition rate, as demonstrated in Figs. 3 and 4. It should be noted that the complex-valued and vector-product Hopfield models need $2N^2$ and $3N^2$ interconnection matrices, respectively. Interconnection matrices of these sizes ($2N^2$ and $3N^2$) cannot be realized by the Hopfield model (whether Eq. (1), Eq. (4), or Eq. (5)), and this is the obvious difference between them. Let us compare them in the one-dimensional case, where the length of the vector is $3N$. The Hopfield model would need a $(3N)^2 = 9N^2$ interconnection matrix, while our proposed method needs only a $3N^2$ matrix (Fig. 2). This means that even for one-dimensional pattern recognition, our proposed method needs only one-third of the storage size, provided the vector can be regarded as composed of three parts. Similarly, if the length of the vector is even ($2N$), the complex-valued Hopfield model needs one-half of the storage size ($2N^2$, Fig. 1(b)) compared with the matrix size ($4N^2$) of the Hopfield model (Eq. (4)). It should be noted that these comparisons of the sizes of the interconnection matrices and the numbers of neurons used by the models are necessary, but they are not the key point we want to make. The


key point is that we have introduced the vector-product algorithm into the neural network domain, and it provides some new capabilities. As noted in Ref. [9], 'further extension of storing memory vectors in more $(N \times N)$ interconnection matrices is possible'. Generally speaking, the key to such an extension is to keep the dimensionality of the operations: if a q-dimensional vector is used, the interconnection matrix should also be q-dimensional and have a size of $qN^2$. In this paper, we have shown that such interconnection matrices can be formed for $q = 2, 3$. As we have stated, however, we cannot derive the vector-product Hopfield model from the complex-valued Hopfield model by keeping the rules of the complex domain, such as $i \cdot 1 = i$, $i \cdot i = -1$; their rules are not compatible. The complex Hopfield model is based on the rules of complex-valued space, and the vector-product Hopfield model is based on the rules of three-dimensional vector space. Although their rules are not compatible with each other, their storage manners are similar: both models store the information uniformly in interconnection matrices across the different dimensions and retrieve the whole stored vector from the information presented in a subset of the dimensions. Therefore, both models can retrieve the entire stored vector at a high recognition rate even when only the information in some of the dimensions is presented. Such an interesting performance cannot be realized by the Hopfield model by simply presenting one-half, one-third, or two-thirds of the stored vectors. The fundamental cause of the difference between the vector-product Hopfield model and the original Hopfield model lies in the different internal ways of constructing the interconnection matrices and recalling the stored vectors. In the vector-product Hopfield model, the part of the interconnection matrix in the first dimension i is constructed from the parts of the stored vectors in the second and third dimensions j and k (Eq. (14)), and the part of the recalled vector in the i dimension is recalled from the parts of the interconnection matrix and the input vectors in the j, k dimensions (Eq. (18)). The Hopfield model does not work this way. These internal differences result in the different performances of the vector-product Hopfield model and the original Hopfield model.

6. Conclusions

Although the Hopfield model has been studied extensively, to the best of our knowledge no one has proposed the vector-product Hopfield model before. Objects in the real world are usually three-dimensional, and our brains are also three-dimensional; mathematically, this suggests that both the components of the vectors and those of the interconnection matrix should be three-dimensional. The most important contribution of this paper is to introduce the mathematical vector-product algorithm of three-dimensional space into the neural network domain and to propose the vector-product Hopfield model. We analytically proved that the vector-product Hopfield model has associative capacity, which is supported by the numerical simulations presented in this paper. We also pointed out that a direct extension from the complex Hopfield model to the vector-product Hopfield model is impossible, and that other neural network models may also incorporate the vector-product algorithm. Thus, the vector-product Hopfield model presented in this paper is an interesting basis for developing neural network models further.

Acknowledgements

The authors acknowledge the support of the National Science Foundation of China, the Shanghai Science and Technology Committee, and the Chinese Academy of Sciences under the Hundred Talents Program.

References

[1] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79 (1982) 2554.
[2] D. Rumelhart, G. Hinton, R. Williams, Nature 323 (1986) 533.
[3] B. Widrow, M. Lehr, Proc. IEEE 78 (1990) 1415.
[4] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin, 1984.
[5] D. Psaltis, N. Farhat, Opt. Lett. 10 (1985) 98.
[6] K. Wagner, D. Psaltis, Feature on neural networks, Appl. Opt. 32 (1993) 1261.
[7] S. Jutamulia, F.T.S. Yu, T. Asakura, Special section on neural networks in optics, Opt. Eng. 35 (1996) 2119.
[8] S. Lin, L. Liu, Z. Wang, Opt. Commun. 70 (1989) 87.
[9] C. Zhou, L. Liu, Opt. Commun. 103 (1993) 29.
[10] I.N. Bronshtein, K.A. Semendyayev, Handbook of Mathematics, Verlag Harri Deutsch, New York, 1985, p. 520.
[11] D. Psaltis, C.H. Park, AIP Conf. Proc. 151 (1986) 370.
[12] C. Giles, T. Maxwell, Appl. Opt. 26 (1987) 4972.
[13] C. Zhou, L. Liu, Z. Wang, Micro. Opt. Tech. Lett. 4 (1991) 547.
[14] C. Zhou, L. Liu, Chin. J. Lasers B (English Edition) 2 (1995) 137.
[15] C. Zhou, L. Liu, Efficient optical neural network architectures, in: K. Itoh (Ed.), Advanced Optical Correlators for Pattern Recognition and Association, Research Signpost, Trivandrum, India, 1997.
[16] C. Zhou, L. Liu, Appl. Opt. 34 (1995) 5961.
[17] C. Zhou, S. Stankovic, T. Tschudi, Appl. Opt. 38 (1999) 284.
[18] C. Zhou, L. Liu, G. Li, Y. Ying, Appl. Opt. 34 (1995) 7608.