15 April 1999

Optics Communications 162 (1999) 291–298

Full length article

All-optical binary phase encoding/decoding for image transmission through apertures smaller than the Rayleigh limit

A.R.D. Somervell a,*, C.Y. Wu a, T.G. Haskell b, T.H. Barnes a,1

a Physics Department, University of Auckland, Private Bag 92019, Auckland, New Zealand
b Industrial Research Ltd., Private Box 31310, Lower Hutt, New Zealand

Received 28 September 1998; received in revised form 28 January 1999; accepted 28 January 1999

Abstract

We describe a novel scheme for all-optical image encoding/decoding, for transmission via a serial link. In our method, the pixels of the coherent input image are encoded using orthogonal binary phase codes and the sum of the resultant optical fields transmitted via the link. At the receiving end, the field is divided spatially into output pixels, and each output pixel has impressed upon it the conjugate of one of the input pixel phase codes. The output pixel fields are then combined interferometrically with a reference beam and the resultant intensity fluctuations averaged over the phase code sequence, to recover the original input image information. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Encoding/decoding; Rayleigh; Aperture

1. Introduction

There has been considerable interest over several years in the direct transmission of images through optical fibres. Probably most work has been done on methods for transmitting images through multimode fibres, where the fibre modes effectively act as a set of parallel transmission channels [1,2], and ways are found to compensate for mode dispersion using either holographic [3,4] or phase-conjugation methods [5–8]. Schemes have also been devised for 'one-way phase conjugation', so that the input and transmitted images can be at opposite ends of the fibre [9]. Work has also been done using the self-focusing properties of SELFOC fibres, and using the fact that, under certain conditions, the modes in a slab waveguide come back into phase periodically at certain distances along the guide [10,11]. Encoding of the image by illumination with known complex light fields, followed by detection with a

* Corresponding author. E-mail: [email protected]
1 E-mail: [email protected]

single detector and solution of the inverse problem to retrieve the original image, has also been discussed [12]. If the fibre is single-mode, the problem essentially reduces to one of image encoding. Studies have been made of the relative advantages of transmitting images in multiplex encoded form – somewhat similar to multiplex spectroscopy – and it was shown that gains in signal-to-noise ratio are only available under certain conditions [13]. Wavelength–time multiplexing schemes have also been derived for coding images to be sent via single-mode fibres [14]. Studies of super-resolution techniques have shown that spatial and temporal resolution can be traded [15,16], leading to the concept of coherence encoding [17,18], where the image information is essentially encoded on random, nearly orthogonal phase sequences arising from limited coherence in the image illumination system. In this paper, we describe a new encoding method for image transmission via a single-mode optical fibre, or similar single-channel optical link. The method works with an input image formed in coherent light, upon each pixel of which is impressed a unique binary phase sequence (in our case, a Hadamard code). The optical fields from the

0030-4018/99/$ - see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0030-4018(99)00056-5


encoded input pixels are added by optically Fourier transforming the input image field and taking the on-axis (DC) component of the Fourier transform for transmission via the link. The optical field leaving the link (which is the sum of the encoded input pixel fields) is spatially divided into output pixels, upon each of which is impressed the conjugate of one of the input pixel phase sequences. Light from the output pixels is then combined interferometrically with a reference beam and the output intensity from each pixel averaged over the sequence of phases to yield the input image pixel information.
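The whole scheme can be sketched numerically. The following is a minimal simulation of our own, under idealised assumptions (lossless link, perfect phase-only modulation, real Hadamard codes so each code equals its own conjugate); the pixel amplitudes are arbitrary illustrative values.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
codes = hadamard(N)                  # codes[i, j] = phase g_{i,j}, here +/-1
E_in = np.array([0.0, 1.0, 0.5, 0.0, 1.0, 0.25, 0.0, 0.75])  # pixel amplitudes

# Encode: pixel i carries phase g_{i,j} at step j; the on-axis aperture
# transmits only the sum over pixels, one complex value per step j.
link = codes.T @ E_in                # link[j] = sum_i E_in[i] * g_{i,j}

# Decode: impress the conjugate code on output pixel k, interfere with a
# reference E_R, and average the intensity over the whole code sequence;
# the second run adds a pi shift (sign flip) to cancel the DC offset.
E_R = 2.0
I_plus  = np.mean(np.abs(E_R + codes * link)**2, axis=1)
I_minus = np.mean(np.abs(E_R - codes * link)**2, axis=1)
E_out = (I_plus - I_minus) / (4 * E_R)

print(np.allclose(E_out, E_in))      # the input amplitudes are recovered
```

Note that, as in the text, it is the field amplitude of each pixel that is recovered, not its intensity.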

2. Theory

Fig. 1 is a diagram illustrating the interferometric system for phase-only image encoding. An expanded, collimated laser beam is incident on a beam splitter, the reflected beam from which forms the reference beam for the system. The transmitted signal beam passes through a mask carrying the input image and then through a pixelised spatial optical phase modulator (in our case, we used an electrically addressed parallel-aligned liquid crystal phase modulator). A lens then takes the Fourier transform of the input image plane, and the on-axis, DC component of the Fourier transform – which corresponds to the sum of the optical fields of the encoded input pixels – is selected for transmission down the single-channel link by a small aperture. After leaving the optical link, the encoded light beam is expanded and passes through a second spatial optical phase modulator. When the phase codes are applied to the system, each pixel in the output modulator will correspond to one pixel in the input modulator. Note that the corresponding input pixel will be determined by the code applied to the output pixel, so that it is possible to spatially scramble the pixels in the system. Light from the output

modulator is then combined with the reference beam at the second beamsplitter, and detected by a multi-pixel detector. Orthogonal phase sequences are now applied to each input pixel, and sequences of the conjugate phases applied to the corresponding output pixels. If the phase sequences for different pixels are orthogonal, the output intensity from each pixel will consist of the sum of four components: (1) a constant intensity corresponding to the average intensity at the input field plus the reference field intensity; (2) a fluctuating intensity arising from interference between the input pixels; (3) a fluctuating intensity component arising from interference between light from non-corresponding input pixels and the reference beam; and (4) a second constant intensity component arising from the interference of light from the corresponding input pixel and the reference beam. If the output intensity is averaged over the complete phase sequence, the fluctuating intensity components add to zero. The constant intensity component (1) is always positive, and corresponds to a DC offset in the system. This DC offset grows as the number of input pixels increases, and unfortunately also depends on the number of input pixels illuminated. It is, therefore, necessary to find some method by which this comparatively large offset can be automatically compensated. This can be achieved by noting that the sign of the constant intensity component (4), arising from interference between corresponding input and output pixels, can be reversed by adding a constant phase of π to all phase codes presented to the output pixel. In order to retrieve the image information, therefore, the phase code sequence is run twice: once with the conjugate of the corresponding input pixel code applied to the output pixel, and once with the conjugate plus π. The average output intensities measured in the two cases are subtracted, eliminating the offset and leaving the pixel information.

Fig. 1. Diagram illustrating the principle of operation of our encoding/decoding method.


The electric field at the $i$th image pixel, $E^S_{i,j}$, after the signal beam has passed through the encoding phase modulator, is given by:

$$E^S_{i,j} = E^S_i\,g_{i,j} \qquad (1)$$

where $|E^S_i|$ is the magnitude of the electric field at the $i$th pixel after passing through the input image, and $g_{i,j}$ is a complex number representing the $j$th phase in the phase sequence applied to the $i$th pixel. We assume that the aperture in the Fourier plane of the first lens is small enough that the optical field immediately after the aperture ($E^S_j$) is simply the sum of the electric fields from the input pixels:

$$E^S_j = \sum_{i=1}^{N} E^S_i\,g_{i,j} \qquad (2)$$

where $N$ is the number of pixels in the image. After the serial link, this optical field is incident on all pixels of the decoding phase modulator. The phase sequences applied to the decoding modulator pixels are the conjugates of those applied to corresponding encoding modulator pixels, so that the optical field emerging from the $k$th output pixel is:

$$E^S_{k,j} = \sum_{i=1}^{N} E^S_i\,g_{i,j}\,g^*_{k,j} \qquad (3)$$

where $g_{k,j}$ is the $j$th phase in the phase sequence introduced at the $k$th input pixel, and $*$ denotes the complex conjugate. The reference beam is passed directly to the output of the interferometer, where its optical field may be represented by $E^R$. This is added to the output field from the $k$th modulator pixel and, provided that the phase difference introduced between the arms of the interferometer is a multiple of $2\pi$, the combined optical field at the $k$th output pixel for the $j$th phase in the sequence is given by:

$$E_{k,j} = E^R + \sum_{i=1}^{N} E^S_i\,g_{i,j}\,g^*_{k,j} \qquad (4)$$
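The orthogonality of the codes is what the averaging below relies on. A small numerical check of our own, using the real Hadamard codes of Table 1 (for which $g^* = g$), confirms that $(1/N)\sum_j g_{i,j}\,g^*_{k,j} = \delta_{ik}$:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
g = hadamard(N)                       # g[i, j] = g_{i,j}, real so g* = g
gram = (g @ g.conj().T) / N           # (1/N) sum_j g_{i,j} g*_{k,j}
print(np.allclose(gram, np.eye(N)))   # True: different pixels' codes cancel
```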

The intensity, $I_{k,j}$, is measured for each phase in the sequence. Provided the modulation is true phase-only:

$$|g_{k,j}|^2 = 1 \qquad (5)$$

and $I_{k,j}$ can be written as:

$$I_{k,j} = |E^R|^2 + \sum_{i=1}^{N} |E^S_i|^2 + 2|E^R|\,\mathrm{Re}\sum_{i=1}^{N} |E^S_i|\,g_{i,j}\,g^*_{k,j} + \sum_{l=1}^{N}\sum_{\substack{m=1 \\ l \neq m}}^{N} E^S_l E^{S*}_m\,g_{l,j}\,g^*_{m,j}. \qquad (6)$$

The first two terms in Eq. (6) arise from incoherent addition of the reference beam and the beams passing through each pixel in the signal beam. These terms are constant with time as the phase sequence is applied, but their contribution does vary depending on how many input pixels are illuminated. This is the constant DC offset referred to above, which, together with the dynamic range of the detector system, ultimately limits the number of pixels that may be transmitted. The last term in Eq. (6) arises from interference between light from different input pixels, in the centre of the Fourier plane of the first lens. This term fluctuates as the phase sequence is applied, but provided that the phase sequences are orthogonal, it averages to zero over the complete sequence. The third term in Eq. (6) is the term of interest. It contains two components. One is a steady DC component which arises from the interference of light originating from the pixel in the input plane whose phase sequence is conjugate with that applied to the $k$th pixel in the output plane. The other arises from interference of light from all the other input pixels with the reference beam at the $k$th output pixel, and fluctuates as the phase sequence is applied, averaging to zero. The DC component of intensity emerging from the $k$th output pixel may therefore be written as:

$$I_k = \frac{1}{N}\sum_{j=1}^{N} I_{k,j} = |E^R|^2 + \sum_{i=1}^{N} |E^S_i|^2 + 2|E^R||E^S_k| \qquad (7)$$

if we assume that the number of phase codes is equal to the number of input pixels ($N$), as must be the case to achieve orthogonality. The first two terms represent the DC offset discussed above, while the third term represents the decoded image. It is worth noting that it is the amplitude of the input optical field that is decoded, not the input intensity. This arises naturally from the interferometric decoding process. In order to remove the DC offset, a second phase sequence is run, with a constant extra phase of π added onto the phase codes applied to the output modulator. The output intensity obtained in this case ($I'_k$) is then:

$$I'_k = |E^R|^2 + \sum_{i=1}^{N} |E^S_i|^2 - 2|E^R||E^S_k| \qquad (8)$$

and the optical field of the input image may be retrieved, with the offset automatically compensated, by subtracting these two output intensities:

$$I^F_k = I_k - I'_k = 4|E^R||E^S_k|. \qquad (9)$$
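Eqs. (7)–(9) are easy to check numerically. The following sketch (our own toy amplitudes and reference level, real Hadamard codes) averages the intensity of Eq. (4) over a full sequence and compares it with the right-hand side of Eq. (7):

```python
import numpy as np

def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
g = hadamard(N)
E = np.array([0.0, 1.0, 0.5, 0.0, 1.0, 0.25, 0.0, 0.75])  # toy |E_i^S|
E_R = 3.0                                                  # toy |E^R|

link = g.T @ E                         # field sums through the aperture
I_kj = np.abs(E_R + g * link)**2       # Eq. (4) squared, all k and j at once
I_k = I_kj.mean(axis=1)                # average over the full sequence

expected = E_R**2 + np.sum(E**2) + 2 * E_R * E   # right-hand side of Eq. (7)
print(np.allclose(I_k, expected))
```

Repeating with the reference sign flipped reproduces Eq. (8), and the difference gives $4|E^R||E^S_k|$ as in Eq. (9).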

We can estimate the dynamic range required from the system by noting that, in the 'worst' case, the maximum intensity will occur when all input pixels are illuminated and the transmission process is at a point where all encoding phases are the same and are in phase with the reference beam. Typically, this will be at the beginning of the phase sequences. The output intensity from one pixel under these conditions will be:

$$I^{Max} = |N E^S + E^R|^2 = N^2|E^S|^2 + |E^R|^2 + 2N|E^S||E^R|. \qquad (10)$$


Now, the change in intensity when the second sequence of phase codes is run will be:

$$\Delta I = 4|E^S||E^R| \qquad (11)$$

and so the dynamic range, $D$, required from the detector is:

$$D = \frac{4|E^S||E^R|}{N^2|E^S|^2 + |E^R|^2 + 2N|E^S||E^R|}. \qquad (12)$$

The beam ratio in the interferometer should be adjusted so that the demand on the detector is minimised, i.e., so that $D$ is as large as possible. Mathematically, maximising $D$ with respect to $|E^R|$, we find the optimum when:

$$|E^R| = N|E^S| \qquad (13)$$

Under these conditions, we find that:

$$D^{Max} = \frac{1}{N}. \qquad (14)$$

For example, if there are 16 input pixels, the dynamic range of the detection system should therefore be at least 16:1. This is clearly a ‘worst case’ calculation, and the situation can be alleviated somewhat by using modified codes so that the input pixels never all have the same phase. Nonetheless, the dynamic range requirements still become more stringent as the number of pixels increases.
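The optimum beam ratio of Eqs. (13) and (14) can be confirmed with a simple sweep (a check of our own, using $N = 16$ and $|E^S| = 1$):

```python
import numpy as np

# Sweep Eq. (12): the fraction D of the peak intensity carried by the signal
# change is largest (dynamic-range demand smallest) at |E_R| = N |E_S|.
N, E_S = 16, 1.0
E_R = np.linspace(0.1, 100.0, 200001)
D = 4 * E_S * E_R / (N**2 * E_S**2 + E_R**2 + 2 * N * E_S * E_R)

best = E_R[np.argmax(D)]
print(best)          # ~16 = N * |E_S|
print(D.max())       # ~0.0625 = 1/N, i.e. the detector needs ~N:1 range
```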

It is also worth noting that an estimate of the input image can be made even if not all codes are transmitted. Through the averaging process, the output intensities fluctuate relatively widely for the first few codes of the phase sequence, but gradually converge onto their final values as the sequence progresses. Thus, the signal-to-noise ratio of the output images improves as transmission progresses; in this sense, our method is somewhat similar to holography, where all parts of the hologram carry information about all parts of the image, and the image quality improves as more and more of the hologram is illuminated. This is illustrated in Figs. 2 and 3, which show the results of a computer simulation of the transmission of an image with eight pixels, using standard Hadamard coding. Table 1 shows the codes used. Fig. 2 shows the input and output images (the images are binarised, with pixel intensities set to 0 or 1 at random). Note that the output image is the difference between $I_k$ and $I'_k$ obtained from the two phase sequences. In this noiseless system, the input image is recovered perfectly. Fig. 3 shows the averaged output intensities, which eventually converge to $I_k$ (left-hand plots) and $I'_k$ (right-hand plots) for pixels 3, 4, and 5, as the phase sequences proceed. With these codes, all pixels have the same phase value at the beginning of the sequence, so the starting intensity is high for $I_k$ and falls to the final value, while that for $I'_k$ starts low (the reference beam is in anti-phase with all pixels at the beginning of the sequence) and rises to its final value. Note how the final intensities are the same for pixels 3 and 4, which are

Fig. 2. Input and retrieved image pixel intensities (simulation results).


Fig. 3. Average interferometer pixel output intensities as the phase sequence proceeds (simulation).

originally zero intensity (their difference is zero), but differ for pixel 5, with $I_k$ larger than $I'_k$ as expected.
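The gradual convergence shown in Fig. 3 can be reproduced in simulation (our own sketch, with toy amplitudes): truncating the averages to the first few codes gives a rough estimate of the image whose worst-pixel error shrinks as more of the sequence is used, vanishing at the full length.

```python
import numpy as np

def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
g = hadamard(N)
E = np.array([0.0, 1.0, 0.5, 0.0, 1.0, 0.25, 0.0, 0.75])   # toy amplitudes
E_R = 2.0
link = g.T @ E                       # field sums transmitted through the link

errs = []
for steps in (1, 2, 4, 8):           # how much of the sequence has arrived
    Ip = np.abs(E_R + g[:, :steps] * link[:steps])**2
    Im = np.abs(E_R - g[:, :steps] * link[:steps])**2
    est = (Ip.mean(axis=1) - Im.mean(axis=1)) / (4 * E_R)
    errs.append(np.max(np.abs(est - E)))

print(errs)                          # worst-pixel error shrinks to ~0
```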

Table 1
Hadamard codes used in the eight-pixel simulation

Pixel no.   Phase sequence
1           1  1  1  1  1  1  1  1
2           1 −1  1 −1  1 −1  1 −1
3           1  1 −1 −1  1  1 −1 −1
4           1 −1 −1  1  1 −1 −1  1
5           1  1  1  1 −1 −1 −1 −1
6           1 −1  1 −1 −1  1 −1  1
7           1  1 −1 −1 −1 −1  1  1
8           1 −1 −1  1 −1  1  1 −1

3. Experiments

It is essential that the path difference be maintained at an exact number of wavelengths so that the interferometer output intensities average to their correct values. Deviation from this condition will lead to loss of output image contrast and errors in retrieval of the input image field. Although a Mach–Zehnder interferometer as shown in Fig. 1 would, in principle, be capable of demonstrating the encoding/decoding scheme, in practice it proved to be much easier to use the stable common-path polarisation interferometer shown in Fig. 4. Here, the phase modulation is provided by an electrically addressed parallel-aligned nematic liquid crystal device with 9 × 9 pixels, operating

in transmission, which changes the phase of only the horizontal polarisation component. The vertical polarisation component acts as the reference beam for the interferometer. The modulator was controlled by a computer via an RS232 link. The beam from a HeNe laser operating at 632.8 nm and polarised at 45° to the vertical first passed through a Soleil–Babinet compensator, which allowed adjustment of the overall path difference in the system to take into account residual birefringence and ensure that the interferometer path difference was an integral number of wavelengths. After expansion and recollimation, the beam passed through a mask carrying the 4 × 4 pixel binary image to be encoded. The mask was imaged onto a 4 × 4 pixel area of the spatial light modulator, which acted as the encoding phase modulator. An afocal lens system was used for the imaging to ensure that the modulator operated with plane wavefronts. The modulator plane was arranged to be in the front focal plane of a Fourier transform lens, and a 10 μm pinhole on the optic axis in the Fourier plane passed the DC component of the Fourier transform (i.e., the sum of all input pixel optical fields) onto the decoding section of the apparatus. The Fourier transform lens had a focal length of 400 mm, and the phase modulator pixels were 4.5 mm square, so the aperture was roughly six times smaller than the Rayleigh limit for one pixel, and also smaller than the limit for the 4 × 4 pixel field. The pixels were completely unresolved. The beam diffracted through the pinhole was recollimated and passed through a second 4 × 4 pixel area of the liquid crystal device, which acted as the decoding phase modulator. The decoding area was imaged onto a CCD


Fig. 4. The experimental set-up.

camera, used in conjunction with a frame grabber to measure and average the intensity at each pixel. Interference between the modulated signal beam (horizontal polarisation component) and the reference beam (vertical polarisation component) was obtained using a polariser set at 45° in front of the camera. Binary Hadamard sequences were used to encode the image. With a nematic liquid crystal device it would have been possible to use continuous sequences following the components of a Fourier series, but binary sequences have

two advantages: (1) the phase resolution of the modulator need only be π, and (2) binary phase sequences could be implemented with a much faster modulator, such as a ferroelectric device. Unfortunately, we did not have a ferroelectric device available when this work was done.

4. Results

The interferometer was tested using the image shown in Fig. 5, in which alternate pixels were blocked in a chequer-

Fig. 5. The input image.

A.R.D. SomerÕell et al.r Optics Communications 162 (1999) 291–298

297

Fig. 6. Input and retrieved optical field amplitudes obtained with the image of Fig. 5 in the set-up of Fig. 4.

board pattern using a binary mask. The variation in bright-pixel intensity from place to place in the beam is due to the Gaussian non-uniformity of the input laser beam. Noting that the interferometer actually recovers the optical field amplitude rather than intensity, Fig. 6 plots the input image field amplitude calculated from the measured intensity (dotted line), together with the retrieved image field amplitude obtained by encoding and decoding with the interferometer (solid line). Pixel number is taken column-wise starting from the top-left pixel in the image field. Good agreement between input and retrieved images is obtained for pixels 5 to 11, which are in the middle of the field. Note, however, that the recovered optical field amplitudes for pixels 4 and 12 (the brightest pixels in the bottom row of the image) are much lower than the input values. Indeed, the recovered amplitude of pixel 12 was negative; we believe this to be due to a combination of the finite size of the pinhole and inaccurate pinhole placement in the focal plane. This is a systematic effect: when the pinhole was repositioned, the field recovered at some other pixel near the edge of the image would become erroneous, and often negative.
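The pinhole sensitivity observed here is consistent with the aperture figures quoted in Section 3. A quick back-of-envelope check (our own arithmetic, not from the paper) of the Rayleigh scale $1.22\,\lambda f/d$ for one 4.5 mm modulator pixel:

```python
# Back-of-envelope check of the aperture figures quoted in Section 3.
lam = 632.8e-9   # HeNe wavelength (m)
f = 0.400        # Fourier-transform lens focal length (m)
d = 4.5e-3       # encoding modulator pixel size (m)
pinhole = 10e-6  # pinhole diameter (m)

rayleigh = 1.22 * lam * f / d      # Rayleigh scale in the Fourier plane
print(rayleigh * 1e6)              # ~68.6 (micrometres)
print(rayleigh / pinhole)          # ~6.9: pinhole roughly six times smaller
```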

5. Discussion

The theory described above is only strictly valid if we assume that the pinhole in the Fourier plane is a delta function centred on the optic axis. If it has finite size, or is off-centre, the effect is to add a constant phase onto the optical field from each pixel, which depends on the pixel position in the input image. The condition that the interferometer path difference should be an integral number of

wavelengths is then not satisfied; indeed, the effective path difference depends on the pixel position. The optical field across the encoding phase modulator is convolved with the Fourier transform of the pinhole aperture function. The effect of this is to change the interference intensities $I_k$ and $I'_k$ obtained at the interferometer output. If the effective path difference approaches an odd number of half-wavelengths, $I'_k$ may become larger than $I_k$ for a bright pixel, and the retrieved optical field amplitude becomes negative. Alternatively, if the effective path difference is λ/4 or 3λ/4, the two intensities become equal and a bright pixel decodes with zero retrieved amplitude. The system is more sensitive to this effect for pixels near the edge of the input image, where phase distortions caused by inaccurate placement of the pinhole are greater. With our system, calculations indicated that the pinhole should be placed on the optic axis to an accuracy of a few micrometres to ensure accurate retrieval of the output image.
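These sign flips can be reproduced in a toy model (entirely our own construction, not a diffraction calculation): adding a hypothetical pixel-position-dependent phase ramp $\phi_i$ to the encoded fields, mimicking an off-centre pinhole, scales each decoded amplitude by $\cos\phi_i$, which goes negative once $\phi_i$ exceeds π/2.

```python
import numpy as np

def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
g = hadamard(N)
E = np.ones(N)                            # a uniformly bright row of pixels
phi = np.linspace(0.0, 0.7 * np.pi, N)    # hypothetical pinhole phase ramp
link = g.T @ (E * np.exp(1j * phi))       # encoded sums with extra phases
E_R = 2.0
Ip = np.abs(E_R + g * link)**2
Im = np.abs(E_R - g * link)**2
est = (Ip.mean(axis=1) - Im.mean(axis=1)) / (4 * E_R)

print(np.allclose(est, E * np.cos(phi)))  # amplitude scaled by cos(phi_i)
print(est[-1] < 0)                        # the edge pixel decodes negative
```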

6. Conclusion

We have described an interferometric method for encoding/decoding two-dimensional images for transmission via a single-channel optical link. Each pixel in the input image is encoded using a binary phase sequence, with input pixel phase sequences being orthogonal to each other. The encoded optical fields are added together and sent via the link. At the decoding end of the link, output pixels are decoded with the conjugates of the input phase sequences, and the input pixel optical field amplitude recovered interferometrically.


We demonstrated this system using a polarisation interferometer operating with a parallel-aligned liquid crystal spatial light modulator, and were successfully able to transmit 4 × 4 pixel images using Hadamard binary phase encoding sequences.

Acknowledgements The authors thank the Royal Society of New Zealand for a grant provided through the Marsden Fund for this work. We are also grateful to the University of Auckland and Industrial Research Ltd. for their support.

References

[1] A. Yariv, On transmission and recovery of three-dimensional image information in optical waveguides, J. Opt. Soc. Am. 66 (4) (1976) 301–306.
[2] A. Gover, C.P. Lee, A. Yariv, Direct transmission of pictorial information in multimode optical fibres, J. Opt. Soc. Am. 66 (4) (1976) 306–311.
[3] U. Levy, A.A. Friesem, Direct picture transmission in a single optical fiber with holographic filters, Opt. Commun. 30 (2) (1979) 163–165.
[4] J.Y. Son, V.I. Bobrinev, H.W. Jeon, Y.H. Cho, Y.S. Eom, Direct image transmission through a multimode optical fibre, Appl. Opt. 35 (2) (1996) 273–277.
[5] A. Yariv, Three-dimensional pictorial transmission in optical fibers, Appl. Phys. Lett. 28 (2) (1976) 88–89.
[6] G.J. Dunning, R.C. Lind, Demonstration of image transmission through fibers by optical phase conjugation, Opt. Lett. 7 (11) (1982) 558–559.
[7] B. Fischer, S. Sternklar, Image transmission and interferometry with multimode optical fibers using self-pumped phase conjugation, Appl. Phys. Lett. 46 (2) (1985) 113–114.
[8] I. McMichael, P. Yeh, P. Beckwith, Correction of polarization and modal scrambling in multimode fibers by phase conjugation, Opt. Lett. 12 (7) (1987) 507–508.
[9] A. Cunha, E. Leith, Generalised one-way phase-conjugation systems, J. Opt. Soc. Am. B 6 (10) (1989) 1803–1812.
[10] R. Ulrich, Image formation by phase coincidences in optical waveguides, Opt. Commun. 13 (3) (1975) 259–264.
[11] H. Hattori, T. Takeo, Y. Sakai, M. Umeno, Transmission characteristics of image formation in a single optical fiber, ICO-13 Conference Digest, Paper A4-5, 1984, pp. 350–351.
[12] M.A. Bolshtyansky, B.Ya. Zel'dovich, Transmission of the image signal with the use of a multimode fibre, Opt. Commun. 123 (1996) 629–636.
[13] C.J. Oliver, Optical image processing by multiplex coding, Appl. Opt. 15 (1) (1976) 93–105.
[14] A.M. Tai, Two-dimensional image transmission through a single optical fiber by wavelength–time multiplexing, Appl. Opt. 22 (23) (1983) 3826–3832.
[15] E.N. Leith, D. Angell, C.-P. Keui, Superresolution by incoherent-to-coherent conversion, J. Opt. Soc. Am. A 4 (6) (1987) 1050–1054.
[16] P.C. Sun, E.N. Leith, Superresolution by spatial–temporal encoding methods, Appl. Opt. 31 (23) (1992) 4857–4862.
[17] P. Naulleau, M. Brown, C. Chen, E. Leith, Direct three-dimensional image transmission through single-mode fibers with monochromatic light, Opt. Lett. 21 (1) (1996) 36–38.
[18] P. Naulleau, Analysis of the confined-reference coherence-encoding method for image transmission through optical fibers, Appl. Opt. 36 (29) (1997) 7386–7396.