Local interconnection neural network and its optical implementation


Optics Communications 102 (1993) 13-20, North-Holland

Jiajun Zhang, Li Zhang, Dapen Yan and Anzhi He
Department of Applied Physics, East China Institute of Technology, Nanjing 210014, China

and Liren Liu
Shanghai Institute of Optics and Fine Mechanics, Academia Sinica, Shanghai, China

Received 30 December 1992

Based on the Hopfield model, a new concept of a local interconnection neural network (LINN) is proposed. Compared with the global interconnection neural network (GINN), LINN has a relatively smaller interconnection matrix as well as the same associative memory ability. Three optical implementation schemes are proposed, and experimental results are presented.

1. Introduction

Artificial neural networks have been researched extensively during the past decade. It is well known that artificial neural networks have brain-like properties, such as adaptation, self-organization, learning and associative memory. All these properties arise as consequences of the extensive interconnections among neurons and of the neural network's special information storage algorithm: distributed information storage. Of all the neural network models, the Hopfield model [1] is especially attractive, because it has a simple mathematical formulation and a definite physical meaning. During the past several years, a variety of modifications of the Hopfield model [2-6] have been suggested to improve its associative memory capability, and many approaches to optical implementation have been developed [7-13]. However, except for the holographic method [8], the reported optical neural networks use vector-matrix multipliers and simulate networks with only some tens of neurons [7,10,11]. Even the so-called "large-scale" neural network has only 48 × 48 neurons [13]. The main obstacle to implementing a neural network with a large number of neurons is that spatial light modulators (SLMs) with extremely high resolution have not been developed. To alleviate this constraint, the space-time-sharing technique [14,15] has been used, but at the expense of much more operation time and less optical parallelism. In this paper, we propose the idea of a local interconnection neural network (LINN), so as to make the optical implementation of a large-scale neural network possible.

In general, it is believed that there are two fundamental ways of storing information. One is location addressable memory (LAM), such as the memory of a digital electronic computer; the feature of LAM is that there is no connection between storage units. The other is content addressable memory (CAM), or distributed memory, such as the neural network; the feature of CAM is that every storage unit (neuron) is connected to all other units in the system. The information storage algorithm of LINN is a compromise between these two ways, i.e., every neuron in the network is connected only to its neighbour neurons instead of to all the neurons of the network. Therefore, LINN shares the characteristics of both LAM and CAM. In other words, a given piece of information can exist only in a certain area of a LINN, while the information is stored in the manner of CAM within this local area.


Another purpose of this paper is to construct opto-electronic systems that perform LINN for associative memory. With some modifications, almost all the approaches which have been used for the Hopfield model can also be applied to the LINN model. Three architectures are discussed in sect. 4 and experimental results are described in sect. 5.

2. Local interconnection neural network

In a fully interconnected neural network, every neuron in the output plane is connected to all the neurons in the input plane. Thus, during each iteration of the retrieval process, every neuron's output is determined by the states of all the neurons in the input plane. The LINN is defined as follows: a set of $M$ bipolar, binary vectors $V^{(m)}$, each $N$ bits long, is stored in the interconnection weight matrix (IWM) $W$:

$$w_{ij} = \sum_{m=1}^{M} v_i^{(m)} v_{j+j_0}^{(m)} (1 - \delta_{i,j+j_0}), \qquad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, P, \tag{1}$$

where

$$j_0 = \begin{cases} 0, & 1 \leq i \leq d, \\ i - d - 1, & d < i \leq N - d, \\ N - P, & N - d < i \leq N, \end{cases} \tag{2}$$

$d$ is the radius of the local area, and $P = 2d + 1$ is the local area size. If the stored vectors are unipolar, then the IWM $W$ should be constructed as follows:

$$w_{ij} = \sum_{m=1}^{M} (2v_i^{(m)} - 1)(2v_{j+j_0}^{(m)} - 1)(1 - \delta_{i,j+j_0}), \qquad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, P. \tag{3}$$

The matrix $W$ can be used for the retrieval of stored information from imperfect input vectors. Suppose the input is $V'$; then the vector-matrix product

$$\tilde{v}_i = \sum_{j=1}^{P} w_{ij} v'_{j+j_0}, \tag{4}$$

with the thresholding operation

$$v'_i = 1 \;\;\text{if}\;\; \tilde{v}_i \geq 0, \qquad v'_i = 0 \;\;\text{otherwise}, \tag{5}$$

yields an estimate of the stored vectors. As shown in fig. 1, the thresholded estimate vector is fed back to the input and converges to the correct stored vector.

Fig. 1. Diagram of local interconnection neural networks. N = 5, P = 5.

LINN neglects the long-range interconnections of the Hopfield model and maintains the short-range ones. Every neuron in the output plane is connected only to $P$ neurons in the input plane; therefore, every neuron's output is determined by the $P$ neurons of its corresponding local area. Since there are $N$ neurons in the output plane and every neuron has $P$ interconnection weights, the total number of elements of the IWM $W$ is $NP$. If $P = N/2$, the number of elements of $W$ is half that of $T$. If $P = N$, then $W = T$ and the LINN becomes a GINN. In the following, we analyse the relationship between the two IWMs, $W$ and $T$, where $T$ is the IWM of a globally interconnected neural network of the Hopfield model. Let

$$T = [T_1, T_2, \ldots, T_N]^{\mathrm{T}}, \tag{6}$$

where

$$T_i = \{t_{ij}\}, \qquad i, j = 1, 2, \ldots, N, \tag{7}$$

and

$$W = [W_1, W_2, \ldots, W_N]^{\mathrm{T}}, \tag{8}$$

$$W_i = \{w_{ij}\}, \qquad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, P. \tag{9}$$

(i) When $1 \leq i \leq d$, with $j_0 = 0$, eq. (1) becomes

$$w_{ij} = \sum_{m=1}^{M} v_i^{(m)} v_j^{(m)} (1 - \delta_{ij}) = t_{ij}, \qquad j = 1, 2, \ldots, P, \tag{10}$$

therefore

$$W_i = \{t_{ij}\}, \qquad j = 1, 2, \ldots, P. \tag{11}$$

(ii) When $d < i \leq N - d$, with $j_0 = i - d - 1$, eq. (1) becomes

$$w_{ij} = \sum_{m=1}^{M} v_i^{(m)} v_{j+i-d-1}^{(m)} (1 - \delta_{i,j+i-d-1}) = t_{i,j+i-d-1}, \qquad j = 1, 2, \ldots, P, \tag{12}$$

therefore

$$W_i = \{t_{i,j+i-d-1}\}. \tag{13}$$

(iii) When $N - d < i \leq N$, with $j_0 = N - P$, eq. (1) becomes

$$w_{ij} = \sum_{m=1}^{M} v_i^{(m)} v_{j+N-P}^{(m)} (1 - \delta_{i,j+N-P}) = t_{i,j+N-P}, \qquad j = 1, 2, \ldots, P, \tag{14}$$

therefore

$$W_i = \{t_{i,j+N-P}\}, \qquad j = 1, 2, \ldots, P. \tag{15}$$
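To make the construction concrete, the following minimal sketch (in Python with NumPy; the paper prescribes no implementation, so the function names, the zero-based indexing, and the bipolar ±1 threshold convention are our assumptions) builds the LINN weight matrix of eqs. (1)-(2) and runs the retrieval iteration of eqs. (4)-(5):

```python
import numpy as np

def local_offset(i, d, N):
    """j0 of eq. (2), translated to zero-based indexing: the start of
    output neuron i's local window in the input vector."""
    P = 2 * d + 1
    if i < d:                # paper's case 1 <= i <= d
        return 0
    if i >= N - d:           # paper's case N - d < i <= N
        return N - P
    return i - d             # paper's j0 = i - d - 1, shifted to zero-based

def build_linn_iwm(vectors, d):
    """Eq. (1): W is N x P; `vectors` is an M x N array of +/-1 entries."""
    M, N = vectors.shape
    P = 2 * d + 1
    W = np.zeros((N, P))
    for i in range(N):
        j0 = local_offset(i, d, N)
        for j in range(P):
            k = j0 + j                       # global input index j + j0
            if k != i:                       # the (1 - delta) factor
                W[i, j] = vectors[:, i] @ vectors[:, k]
    return W

def retrieve(W, v, d, max_iters=20):
    """Eqs. (4)-(5): local vector-matrix product, threshold, feed back."""
    N, P = W.shape
    v = v.copy()
    for _ in range(max_iters):
        j0s = [local_offset(i, d, N) for i in range(N)]
        est = np.array([W[i] @ v[j0s[i]:j0s[i] + P] for i in range(N)])
        new_v = np.where(est >= 0, 1, -1)    # bipolar form of eq. (5)
        if np.array_equal(new_v, v):         # a stable state was reached
            return new_v
        v = new_v
    return v

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(3, 64))   # M = 3 vectors, N = 64 bits
W = build_linn_iwm(stored, d=12)             # local area size P = 25
probe = stored[0].copy()
probe[:6] *= -1                              # distort six bits
print(np.array_equal(retrieve(W, probe, d=12), stored[0]))
```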

From eqs. (6) and (8) we can see that both $T$ and $W$ are composed of $N$ vectors. Moreover, eqs. (11), (13) and (15) state that each vector $W_i$ is composed of $P$ elements of the corresponding $T_i$. Therefore, the IWM $W$ as a whole is a part of the IWM $T$.

Let us compare the outputs of the two neural networks for a given input $\{v'_i\}$, $i = 1, 2, \ldots, N$. Since every vector $W_i$ is a part of its corresponding vector $T_i$, the vector-matrix product $\sum_{j=1}^{P} w_{ij} v'_{j+j_0}$ will always be a part of $\sum_{j=1}^{N} t_{ij} v'_j$. For example, when $d < i \leq N - d$, the vector-matrix product of the LINN is

$$\tilde{v}'_i = \sum_{j=1}^{P} w_{ij} v'_{j+i-d-1}, \tag{16}$$

whereas the vector-matrix product of the GINN is

$$\tilde{v}_i = \sum_{j=1}^{N} t_{ij} v'_j, \tag{17}$$

which can easily be rearranged as

$$\tilde{v}_i = \tilde{v}'_i + \sum_{j=1}^{i-d-1} t_{ij} v'_j + \sum_{j=i+d+1}^{N} t_{ij} v'_j. \tag{18}$$

Obviously, $\tilde{v}'_i$ collects the contributions of only $P$ neurons, while $\tilde{v}_i$ collects the contributions of all $N$ neurons. It might seem that in a LINN the state of a given neuron is affected only by the states of its $P$ neighbour neurons, and that the remaining $(N - P)$ neurons have no direct relation with it. However, as the iterations of the retrieval process continue, the influence of the $(N - P)$ remote neurons is gradually transferred to it. Therefore, LINN still has the property of global associative memory. As for the storage capacity of LINN, we know that the maximum number of vectors that can be stored in a GINN is about $M_G = 0.15N$ [1]; therefore, the number of vectors that can be stored in a LINN is about $M_L = 0.15P$. That is to say, a LINN has a smaller storage capacity than a GINN. This may seem a disappointing conclusion. However, for a large-scale neural network, the number of vectors that can be stored in a GINN is very large. For instance, for $N = 128 \times 128$, $M_G \approx 2400$. In this case, even when $P = 0.1N$, we still have $M_L \approx 240$, which is large enough for many practical uses.
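The containment relations (11), (13), (15) and the decomposition (18) are easy to verify numerically. A small sketch under the same conventions as above (zero-based indexing; random stand-in vectors rather than those of fig. 2; helper names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 20, 5                        # P = 11, as in the example of sect. 3
P = 2 * d + 1
stored = rng.choice([-1, 1], size=(3, N))

# Hopfield IWM T of eq. (7): t_ij = sum_m v_i v_j with a zero diagonal.
T = stored.T @ stored
np.fill_diagonal(T, 0)

def j0(i):                          # eq. (2), zero-based
    return 0 if i < d else (N - P if i >= N - d else i - d)

# Eqs. (11), (13), (15): each row W_i is a P-element window of T_i.
W = np.stack([T[i, j0(i):j0(i) + P] for i in range(N)])

v = rng.choice([-1, 1], size=N)     # an arbitrary input
for i in range(d, N - d):           # middle neurons, the case of eq. (16)
    linn = W[i] @ v[j0(i):j0(i) + P]                 # eq. (16)
    ginn = T[i] @ v                                  # eq. (17)
    remote = T[i, :i - d] @ v[:i - d] + T[i, i + d + 1:] @ v[i + d + 1:]
    assert ginn == linn + remote                     # eq. (18)
print("eq. (18) holds for every middle neuron")
```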

3. Numerical simulations and performance analysis

We first present an example to explain the relationship between the LINN's IWM W and the GINN's IWM T, and to compare their retrieval processes. Three unipolar, binary vectors, each 20 bits long, as shown in fig. 2a, are stored in the IWM T of a GINN. According to the recipe of the Hopfield model, the IWM T, which is a 20 × 20 matrix, is constructed as shown in fig. 2b. If the same three vectors are stored in a LINN whose local area size is P = 11, then the IWM W, which is a 20 × 11 matrix, is constructed as shown in fig. 2c. Notice that the first row of W is just the first 11 elements of the first row of T, the second row of W is just the first 11 elements of the second row of T, and so on. Each element of W has a corresponding element in T; in fact, the IWM W is a rearrangement of the elements between the two dashed lines in matrix T, as shown in fig. 2b.

Fig. 2. Comparison of LINN with GINN. (a) Stored vectors; (b) IWM T of GINN; (c) IWM W of LINN, P = 11; (d) retrieval process of LINN; (e) retrieval process of GINN.


This example illustrates clearly that a LINN is constructed by using near-region interconnections between neurons while omitting the connections to remote-area neurons.

We have simulated many retrieval processes; a typical example is shown in fig. 2d. The Hamming distance between the partial input and its nearest stored vector is 4; it decreases to 2 and then to 0 in the first two iterations, after which the output state becomes stable. For the same input, however, the GINN converges to the correct stored vector in only one iteration, as shown in fig. 2e. Compared with GINN, LINN usually has a smaller attraction radius and therefore needs more steps to converge to the correct stored vector.

To examine the association ability of LINN, we use a group of 64-bit vectors with equally distributed +1s and −1s as training samples. Figure 3 shows the recognition rate as a function of the local area size P. With SNR = 7, ten randomly distorted partial inputs are selected, and every point of fig. 3 is the average over the ten test results. The three curves correspond to stored vector numbers M = 2, 6, 10, respectively. Notice that when M = 2, the smallest local area size for correct association is only about P = 10; even for M = 6, which is about 0.1N, the smallest P needed is only about 25. This means that to store 6 vectors of 64 bits each, only 64 × 25 interconnections, instead of 64 × 64, are needed for associative memory. Many computer simulations indicate that when the stored vector number M < 0.15P, the LINN has the same associative ability as its corresponding GINN. This is, in our opinion, a very useful conclusion.
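A Monte Carlo sketch of this kind of experiment takes only a few lines. The distortion model below (flipping a fixed number of bits of a stored vector) is our stand-in for the paper's SNR = 7 distorted inputs, so the exact numbers will differ from fig. 3:

```python
import numpy as np

rng = np.random.default_rng(2)

def recognition_rate(N=64, M=6, P=25, flips=8, trials=10):
    """Fraction of distorted probes that converge back to their target."""
    d = (P - 1) // 2                       # P = 2d + 1 must be odd
    stored = rng.choice([-1, 1], size=(M, N))
    T = stored.T @ stored
    np.fill_diagonal(T, 0)
    j0 = lambda i: 0 if i < d else (N - P if i >= N - d else i - d)
    W = np.stack([T[i, j0(i):j0(i) + P] for i in range(N)])
    hits = 0
    for _ in range(trials):
        target = stored[rng.integers(M)]
        v = target.copy()
        v[rng.choice(N, size=flips, replace=False)] *= -1
        for _ in range(30):                # iterate to a fixed point
            est = np.array([W[i] @ v[j0(i):j0(i) + P] for i in range(N)])
            new_v = np.where(est >= 0, 1, -1)
            if np.array_equal(new_v, v):
                break
            v = new_v
        hits += np.array_equal(v, target)
    return hits / trials

for P in (11, 21, 39, 63):                 # odd sizes up to nearly N
    print(P, recognition_rate(P=P))
```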

For in many cases, such as military target recognition, only several vectors (or images) need to be stored in the neural network while the network scale is very large; LINN is certainly attractive for such applications.

Another problem we are concerned with is the storage capacity of LINN. Figure 4 shows the recognition rate as a function of the number of stored vectors. The initial training conditions are the same as above, i.e., N = 64 and SNR = 7. Curve I shows the behavior of the GINN; notice that the maximum number of vectors that can be stored in the GINN is M_max = 9, which is about 0.15N. When P = 39, we have M_max = 8; when P = 21, we have M_max = 4. If the local area size is too small, for instance P = 11, there is no guarantee of correct recovery of the stored vectors. It should be pointed out that although neural networks with a large number of neurons, such as N = 512 × 512, are needed in many applications, it is usually unnecessary to store as many as 0.15N (about 39321 for this N) vectors (or images) in the neural network. LINN can then be applied to such situations, and the minimum local area size can be determined by the number of stored vectors: from the example above, if N = 64 and M = 4, then P_min = 21.

Fig. 3. Recognition ratio versus local area size. I: M = 2, II: M = 6, III: M = 10.

Fig. 4. Recognition ratio versus number of stored vectors. I: P = 64, II: P = 39, III: P = 21, IV: P = 11.

4. Opto-electronic systems for local interconnection neural network

Optics does not offer negative intensity, so optically representing bipolar data is quite difficult and requires special encoding techniques. However, just as has been done for the Hopfield model, unipolar binary vectors can also be stored in the LINN model by using eq. (3). The method which has primarily been used to perform the Hopfield model [7] is also suitable for LINN once some slight modifications are made. The input unipolar binary vector is encoded by incoherent intensity: the ON state of the input device represents 1 and the OFF state represents 0, while the positive and negative values of the IWM are encoded by two masks which form the positive and negative channels. By subtracting the values of the two channels, the real results are obtained. In the following, we discuss three opto-electronic systems that perform a 1D LINN for associative memory.
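Numerically, the two-channel trick amounts to splitting W into its positive and negative parts and subtracting the two detector readings. A sketch of this bookkeeping (the IWM values here are random stand-ins for the two LCTV masks, and the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 20, 5
P = 2 * d + 1
j0 = lambda i: 0 if i < d else (N - P if i >= N - d else i - d)

W = rng.integers(-3, 4, size=(N, P))      # a bipolar-valued IWM stand-in
v = rng.integers(0, 2, size=N)            # unipolar input: ON = 1, OFF = 0

# Each mask carries only non-negative values, as a transmittance must.
W_pos = np.maximum(W, 0)                  # positive-channel mask
W_neg = np.maximum(-W, 0)                 # negative-channel mask

windows = np.stack([v[j0(i):j0(i) + P] for i in range(N)])
y_pos = np.sum(W_pos * windows, axis=1)   # detector readings, + channel
y_neg = np.sum(W_neg * windows, axis=1)   # detector readings, - channel
net = y_pos - y_neg                       # electronic subtraction
out = np.where(net >= 0, 1, 0)            # thresholding as in eq. (5)
print(out)
```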

4.1. Multi-imaging system

As shown in fig. 5, the input unipolar binary vector is represented by a liquid-crystal television (LCTV1); LA is a lenslet array; the IWM is encoded by another LCTV (for convenience, we show only one channel); L is a collecting lens; and PD is the photo-detector array. After the values resulting from the subtraction of the positive and negative channels are thresholded, the outputs are electrically fed back to control the input LCTV. The iteration process continues until the outputs are stable. Although the input vector of the LINN associative memory has N binary bits, only P of them need to be multiplied with the IWM W to obtain each output value. Thus, the arrangement of the lenslet array and of the photo-detector array in this system differs slightly from that used for the Hopfield model, as we now describe. The input vector shown on the first LCTV is N bits long, and the IWM shown on the second LCTV is an N × P matrix. Multi-imaged by the lenslet array, which consists of N small lenslets, the input vector forms N copies of itself on the IWM plane. The P elements of each row of the IWM must be multiplied by their corresponding P elements of the input vector, according to the rule of eq. (4). Therefore, the image of the input vector on the IWM plane should be shifted appropriately according to its vertical position. These demands can be satisfied by arranging the lenslet array along a broken line instead of along a vertical line. Because the output is the image of the lenslet array, the photo-detector array should be arranged in the same way as LA.

Fig. 5. Multi-imaging system for the implementation of 1D LINN.

4.2. Incoherent correlation system

Figure 6 shows the incoherent correlation system. The two LCTVs, the lens L, the PD, and the control system are all the same as those described for fig. 5. The correlation system does not need a lenslet array; instead, it needs a decoding mask in front of the PD to block the unwanted intensity peaks. The decoding mask is likewise in a broken-line form.

Fig. 6. Incoherent correlation system for the implementation of 1D LINN. DM: decoding mask.

4.3. Overlapping system

Figure 7 shows the so-called overlapping system. LCTV1, which encodes the input data, and LCTV2, which encodes the IWM, are simply overlapped, and only a cylindrical lens (CL) is needed to collect the light from the overlapped LCTVs; this system is therefore more compact. Here the input vector is arranged in a 2D form of size N × P; it is not simply N copies of the input vector as used for the Hopfield model. The first (d + 1) rows have the same elements: the P elements of each row are the first P bits of the input vector. In the following (N − P) rows, each row is obtained by shifting the previous row's position in the input vector forward by one bit. The last d rows again have the same elements: the P elements of each row are the last P bits of the input vector. Under this arrangement, a correct output is obtained. At the same time, the photo-detector array is arranged along a vertical line instead of along a broken line.

Fig. 7. Overlapping system for the implementation of 1D LINN.
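Generating the N × P input frame for the overlapping system is mechanical. A sketch (function name ours) that builds the arrangement just described:

```python
import numpy as np

def overlapped_input(v, d):
    """Build the N x P frame shown on the input LCTV of the overlapping
    system: row i holds the P input bits that output neuron i must see,
    so that an element-wise product with the IWM frame followed by a
    row-wise sum (the cylindrical lens) realizes eq. (4)."""
    N = len(v)
    P = 2 * d + 1
    rows = []
    for i in range(N):
        if i < d:                 # top edge: window pinned at the start
            start = 0
        elif i >= N - d:          # bottom edge: window pinned at the end
            start = N - P
        else:                     # middle: shift forward one bit per row
            start = i - d
        rows.append(v[start:start + P])
    return np.stack(rows)

v = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])
frame = overlapped_input(v, d=5)  # a 20 x 11 frame, matching sect. 3
print(frame.shape)
```

As described above, the first d + 1 rows of the frame come out identical (window pinned at the start), as do the last d rows.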

5. Experiment

In our experiment, the correlation system shown in fig. 6 is employed. LCTV1 is replaced by a high-resolution image monitor (HA3905), which is used to generate the IWM; the light source is thus unnecessary. An Aves WL-338 liquid crystal display is used to show the input data. The focal length of L is f = 135 mm. The photo-detector array is a Panasonic CL-500 color CCD camera. The correlation peaks behind the decoding mask DM at the focal plane of lens L are collected by the CCD camera and sent to a thresholding circuit, and the final results are fed back to the WL-338 for the next iteration. The data flow in this opto-electronic system is controlled mainly by an AST 486 microcomputer. The experimental results shown in fig. 8 verify the computer simulation results of fig. 2. Using the three vectors of fig. 2a as training samples and letting P = 11, we get the IWM W shown in fig. 2c. The positive and negative parts of the clipped IWM W are shown in figs. 8a and 8b, respectively.

The vector-matrix products of the initial input, which is 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1, with the positive and negative parts of the clipped IWM should be 2 2 3 1 1 1 1 0 0 0 0 1 0 0 0 1 2 3 3 2 and 1 1 0 2 3 3 2 2 1 0 0 0 2 1 4 3 1 0 0 1, respectively; they can be obtained from the convolutions of the initial input vector with the two parts of the clipped IWM. The intensity distributions of the corresponding correlation peaks are shown in figs. 8c and 8d. During the experiment, we display the positive and negative parts of the IWM sequentially on the monitor and use a computer to subtract the sequential outputs of the correlation peaks, obtaining the net outputs. Thresholding the net outputs gives the input vector for the next iteration. The whole iteration process is shown in fig. 8e. This experiment shows that LINN indeed has global associative memory ability, though it usually needs more iteration steps.

Fig. 8. Experimental results. (a), (b) Positive and negative parts of the clipped IWM of LINN for N = 20 and P = 11. (c), (d) Intensity distributions of the correlations of the initial input with the positive and negative parts of the IWM on the PD array plane after DM. (e) Iteration process.
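The sequential display and electronic subtraction used in the experiment amount to the following loop. Reading "clipped IWM" as reducing each weight to its sign is our interpretation, and the stored vectors below are random stand-ins rather than those of fig. 2:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 20, 5                              # N = 20, P = 11, as in sect. 5
P = 2 * d + 1
j0 = lambda i: 0 if i < d else (N - P if i >= N - d else i - d)

stored = rng.choice([-1, 1], size=(3, N))
T = stored.T @ stored
np.fill_diagonal(T, 0)
W = np.stack([T[i, j0(i):j0(i) + P] for i in range(N)])

W_clip = np.sign(W)                       # clipped IWM: entries in {-1, 0, 1}
mask_pos = (W_clip > 0).astype(int)       # binary mask, positive frame
mask_neg = (W_clip < 0).astype(int)       # binary mask, negative frame

v = (stored[0] > 0).astype(int)           # unipolar form of a stored vector
v[:4] ^= 1                                # distort the first four bits

for step in range(10):
    win = np.stack([v[j0(i):j0(i) + P] for i in range(N)])
    y_pos = np.sum(mask_pos * win, axis=1)      # frame 1: positive mask shown
    y_neg = np.sum(mask_neg * win, axis=1)      # frame 2: negative mask shown
    new_v = np.where(y_pos - y_neg >= 0, 1, 0)  # subtract, then threshold
    if np.array_equal(new_v, v):                # stable output: stop iterating
        break
    v = new_v
print(v)
```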

6. Conclusion

We have presented the idea of a local interconnection neural network and compared LINN with the globally interconnected Hopfield model. It is concluded that, within the storage limit, LINN has the same associative memory ability as GINN while having a much smaller interconnection matrix. It is therefore well suited to optical implementation with currently available SLMs. We have also performed a unipolar binary 1D LINN for associative memory using an optical correlation system, and discussed two other architectures for realizing LINN. We stress that these systems are slight modifications of those which have primarily been employed for performing the Hopfield model. We believe that LINN is very useful for those neural network applications in which the number of neurons is very large while the number of stored vectors is relatively small. It is easy to extend our idea to two-dimensional neural networks; we will report on this in another paper.

References

[1] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79 (1982) 2554.
[2] B. Macukow and H.H. Arsenault, Appl. Optics 26 (1987) 924.
[3] S. Oh, T. Yoon and J. Kim, Optics Lett. 13 (1988) 74.
[4] M.I. Sezan, H. Stark and S. Yeh, Appl. Optics 29 (1990) 2616.
[5] W. Zhang, K. Itoh, J. Tanida and Y. Ichioka, Appl. Optics 30 (1991) 195.
[6] Y. Zhang, X. Wang and G. Mu, Appl. Optics 31 (1992) 3289.
[7] N.H. Farhat, D. Psaltis, A. Prata and E. Paek, Appl. Optics 24 (1985) 1469.
[8] E.G. Paek and D. Psaltis, Opt. Eng. 26 (1987) 428.
[9] J. Jang, S. Jung, S. Lee and S. Shin, Optics Lett. 13 (1988) 248.
[10] T. Lu, S. Wu, X. Xu and F. Yu, Appl. Optics 28 (1989) 4908.
[11] F. Yu, T. Lu, X. Yang and D.A. Gregory, Optics Lett. 15 (1990) 863.
[12] J. Zhang, L. Zhang, A. He and D. Yan, Microwave Opt. Technol. Lett. 5 (1992) 321.
[13] K. Noguchi, Optics Lett. 16 (1991) 110.
[14] M. Oita, J. Ohta, S. Tai and K. Kyuma, Optics Lett. 15 (1990) 227.
[15] F. Yu, X. Yang and T. Lu, Optics Lett. 16 (1991) 247.