Smart pixels in a focal-plane image compression system


Optics & Laser Technology 34 (2002) 429 – 437

www.elsevier.com/locate/optlastec

Hiroyuki Arima a,∗, Masahiko Mori b, Toyohiko Yatagai a

a Institute of Applied Physics, University of Tsukuba, Tsukuba, Ibaraki 305-8573, Japan
b Photonics Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan

Received 31 July 2001; accepted 22 January 2002

Abstract

Image compressors improve the handling of image data in image-processing systems. In our proposed image-compression system, we employ a smart complementary metal oxide semiconductor (CMOS) sensor and an integrated spatial light modulator (SLM), and this optoelectronic architecture performs a large part of the image-compression process. Each pixel of the integrated SLM consists of multiple modulation pads; the integrated SLM then performs decoding and optical D/A conversion. A paired configuration of the smart CMOS sensor and the integrated SLM transforms optical analog signals into electronic digital signals. A theoretical analysis showed that the error ratio of the proposed system was 3%. © 2002 Published by Elsevier Science Ltd.

Keywords: Liquid crystal on silicon (LCoS); Focal plane processing; Optical free-space interconnections

1. Introduction

Image compression technology is the key to faster handling of electronic image data, and in response to this need, numerous compression algorithms and compression processes have been developed [1]. The image-compression process, however, is usually a heavy burden on the image-processing system's central processing unit. A functional interface dedicated solely to image compression is thus required. Many image compressors have been reported [2,3]. These processors are designed to be linked with an image sensor to create an image-compression system. Charge-coupled devices (CCDs) [4] are the most commonly used image sensors for image-processing systems, since versions with high resolution and high quality are available. Adding several processors to a CCD is one approach to developing these systems into image-processing-oriented systems [5,6]. It is still difficult, however, to combine complementary metal oxide semiconductor (CMOS) circuits and a CCD on one VLSI chip because of the differences between CMOS technology and CCD technology. Another type of image sensor is the CMOS sensor [7,8]. The benefit of this sensor is that LSI circuits can be combined with the sensor.

∗ Corresponding author. Tel.: +81-298-53-5217; fax: +81-298-53-5205. E-mail address: [email protected] (H. Arima).


Combination with image transformers for image compression or with analog-to-digital (A/D) converters has been reported [9–12]. The resolution of such smart CMOS sensors and the functionality of their processing elements are in inverse proportion. The resolution of a smart CMOS sensor is therefore limited to about several hundred thousand pixels. On the other hand, since images with high resolution are required, a CCD or a high-resolution CMOS sensor has been assumed as the image sensor in a general electronic image-processing system. It is expected that the conditions of image compression change when the resolution of input images decreases. The impact of the smart CMOS sensor's low sampling frequency on image compression is analyzed in this paper. This paper also discusses image-compression algorithms and parallel architectures for image processing. A parallel architecture for image compression is then proposed. An integrated spatial light modulator (SLM) and a smart CMOS sensor are employed in the proposed system. A paired smart CMOS sensor and lenslet array performs an optical Fourier cosine transform. Optical digital-to-analog (D/A) conversion with the use of an integrated SLM is proposed. Combination of an integrated SLM with the smart CMOS sensor allows the proposed system to perform optical A/D conversion.

2. Optoelectronic parallel architecture and an image-compression algorithm in the proposed system

Methods of forming parallel interconnections can be divided into three classes.


The first class uses electronic parallel interconnections. Several image-processing systems of this type have been reported [13,14], but the bandwidth of electronic interconnections is limited, or the employment of electronic interconnections makes the system very bulky. The second class uses optical-fiber interconnections, which are able to achieve high-volume data transmission; however, this approach is used principally for interconnections between boards [15]. Its application to interconnections between VLSI chips is still under development [16]. The third class uses optical free-space interconnections. Two types of switching devices are applicable to this class. The first type of device features an integrated structure that incorporates semiconductor lasers, optical sensors, and electronic processors [17–19]. The issue with this design is that the number of integrated semiconductor lasers is much smaller than that of integrated optical sensors; the different materials used in the semiconductor lasers and in silicon devices, including optical sensors, result in the smaller scale of integration. However, optoelectronic architectures that take advantage of integrated devices have the potential to achieve higher computing performance because of the wide bandwidth of their optical interconnections [5,20–25]. The second type of device is an SLM. The employment of an SLM results in an increased number of optical parallel interconnections. Such a system performs analog optical computing [26] and optical crossbar switching [27,28], but these designs need to be developed further to overcome the inflexibility inherent in optical systems.

On the other hand, many kinds of image-compression algorithms have been reported [1–3,29]. Predictive coding is a method of decreasing redundancy in the data sequence by analyzing images in serial fashion. The coding process is serial and recursive, making it unsuitable for parallel processing. Huffman coding is a method of constructing a statistical tree on the basis of pixel values, and the code is composed of statistical values that are transformations of the pixel values. The performance of a Huffman code processor can be enhanced by the employment of a parallel architecture because the coding is a parallel and distributed process. Transform coding is a method of transforming images before compression, so the reduction of data volume is effected on the basis of the values of the transformed data. When a Fourier transform is employed, data on the spatial frequency spectrum are used in the compression algorithm. Because most of the information contained in images exists in low-frequency spectral bands, the Fourier transform packs the image information into a smaller data volume. Optical analog computing techniques can be utilized to perform the Fourier transform. When the Fourier transform is performed in an optical system, the electronic image-compression processes can be greatly reduced. From these facts, it is safe to conclude that the optimal form of an optoelectronic image-compression architecture is an optical free-space interconnection system that performs transform coding.
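The energy-compaction argument above can be checked numerically. The sketch below is illustrative only and is not part of the original work; the dct2 helper and the synthetic 8 × 8 block are our own assumptions. It shows that a 2-D DCT of a smooth block concentrates almost all of the energy in the low-frequency quadrant, which is why high-frequency coefficients can be coded with few bits.

```python
# Illustrative sketch (not from the paper): why transform coding compresses.
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling, applied along both axes."""
    return dct(dct(block, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

# Smooth 8x8 test block: a slowly varying ramp plus a small texture term.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 100 + 10 * x + 5 * y + 2 * np.sin(x)

coeffs = dct2(block)
energy = coeffs ** 2
low = energy[:4, :4].sum()          # low-frequency quadrant of the 8x8 coefficient array
print("fraction of energy in low-frequency quadrant:", low / energy.sum())
```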

Several optical architectures that have the potential to compose a transform-coding system have been reported [30–33]. One of these architectures is an optical A/D converter, which may be combined with smart pixels; but if the response of the nonlinear electro-optic effect is not sufficiently rapid, the phase modulation rate slows. Employment of an optical Fourier cosine transform system allows the optoelectronic transform-coding system to perform the inverse transform. When a Fourier transform is performed in an optical analog computing system, only the power spectrum of the Fourier transform is obtained; therefore, the inverse Fourier transform cannot restore the transformed images. The Fourier cosine transform is the real part of the Fourier transform, which makes it possible to restore the transformed images by use of the power spectrum of the transformed images. In the optical Fourier cosine transform system, the input image is multiplied by mirrors, the Fourier transform of these multiple images is obtained optically with the use of a lens, and the Fourier cosine transform is obtained from that Fourier transform. However, it is difficult to make these optical devices in a conventional semiconductor process. Another approach is to design smart pixels for a transform-coding system. Because an SLM will be employed as a switching device in a transform-coding system, an integrated SLM should be developed. Employment of a smart CMOS sensor and an integrated SLM will increase the functionality of an optoelectronic architecture.

3. Resolution tolerance of the image-compression algorithm

If a functional smart CMOS sensor with a large number of processors were employed, however, the resolution of the input image would correspondingly fall. It was predicted that the conditions of image compression would change when the resolution of input images decreased. The impact of low resolution on image compression was therefore analyzed.

Fig. 1. Original image.


Fig. 2. (a) Part of the input image; (b), (c) the same part of the restored images when MSE = 20 and 40, respectively. All the images shown are 40 pixels × 40 pixels and are displayed at 4 in. × 4 in. The originals of these images were 512 pixels × 512 pixels in size.

The process of transform coding was as follows. The original image is divided into blocks of 8 pixels × 8 pixels. Each block is a unit for the discrete cosine transform (DCT). Next, the dispersion is calculated with respect to the ith coefficient of the DCT over the blocks. When a 512 × 512 image is compressed, for example, the number of blocks is 4096; the dispersion is calculated from 4096 coefficients, and 64 dispersion values are obtained. The bit length of each DCT coefficient is calculated from the dispersion:

R_i = R_avr + (1/2) log_2 σ_i^2 − (1/(2N)) Σ_{j=1}^{N} log_2 σ_j^2,   (1)

R_avr = (1/N) Σ_{i=1}^{N} R_i,   (2)

where N = 64, R_i is the bit length for the ith coefficient, σ_i^2 is the dispersion of the ith coefficient over the blocks, and R_avr is the average bit length.
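A minimal numerical sketch of the bit-allocation rule in Eqs. (1) and (2) follows, assuming an 8 × 8 block DCT and a synthetic test image; the function and variable names (allocate_bits, r_avr) are ours, not the authors'.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    return dct(dct(block, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

def allocate_bits(image, r_avr=4.0, block=8):
    """Bit allocation per DCT coefficient following Eqs. (1) and (2)."""
    h, w = image.shape
    blocks = [dct2(image[i:i + block, j:j + block])
              for i in range(0, h, block) for j in range(0, w, block)]
    coeffs = np.stack(blocks).reshape(len(blocks), block * block)   # 4096 x 64 for a 512x512 image
    sigma2 = coeffs.var(axis=0) + 1e-12          # dispersion of each of the N = 64 coefficients
    n = sigma2.size
    # Eq. (1); in practice negative allocations would be clipped and the rest rounded.
    r = r_avr + 0.5 * np.log2(sigma2) - np.log2(sigma2).sum() / (2 * n)
    return r.reshape(block, block)               # the mean of r equals r_avr, Eq. (2)

img = np.random.default_rng(0).normal(128, 30, (512, 512))
R = allocate_bits(img, r_avr=4.0)
print(R.mean())   # 4.0 by construction
```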


Table 1
Results of image compression. The compression ratio is defined as Ravr/8.

Resolution            1024×1024       512×512         256×256         128×128
Mean square error     20      40      20      40      20      40      20      40
Ri                    4.5     2.9     6.0     4.7     7.1     6.1     7.6     6.8
Compression ratio     1.78    2.76    1.33    1.70    1.13    —       1.05    1.18

This process of reducing the bit length is called quantization. Finally, a single matrix of bit-length values is obtained. With transform coding, the restored image and the original image are not completely identical. The transform coding of the image shown in Fig. 1 was computed, and the results are shown in Fig. 2. Because the restored image and the original image are not identical, a quantitative evaluation of image quality is needed to evaluate the conditions of compression. The mean square error (MSE) gives a numerical indicator of the relationship between the original image and the restored image. In Fig. 2, the gradation, contrast, and clarity of the image deteriorate as the MSE increases. The compression ratio is Ravr/8 because the original data are eight bits in length. When MSE = 20, Ravr = 6.0 and the compression ratio is 1.33. When MSE = 40, Ravr = 4.7 and the compression ratio is 1.70. The MSE clearly increases when the compression ratio decreases while the resolution of the image is kept constant. Results of the analysis of the compression of low-resolution images are shown in Table 1. Most of the compression ratios are less than 2.0 and are smaller than the ratios seen in typical compression algorithms. Transform coding is usually performed together with another compression process, Huffman coding; the combination of these compression processes is known as JPEG. When a gray-scale image is compressed, the JPEG data occupy about half the volume of the data seen after the Fourier cosine transform and quantization. The results show that the compression ratio decreases as the resolution of the images decreases while the MSE is kept constant. It is known that the features of an image are mainly kept in the low spatial frequency spectral band; therefore, it is the data in the high spatial frequency spectral band that are principally compressed. This possibly accounts for the decrease in the compression ratio when the resolution of the images decreases. The resolution of a smart CMOS sensor is about several hundred thousand pixels, whereas the resolution of digital images produced by a CCD camera is more than 1 Mpixel. If a smart CMOS sensor were employed in place of a CCD camera, the compression ratio of the image-compression system would decrease and the image-compression algorithm would not function well. In order to increase the resolution of a CMOS sensor, some of the processors should be separated from the image sensor. We therefore propose an image-compression architecture composed of two smart-pixel devices and an SLM connected via optical free-space interconnections.
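To make the quantization and MSE evaluation concrete, here is a hedged sketch of a quantize-and-restore round trip. The paper does not specify the quantizer, so a simple uniform quantizer whose step depends on the allocated bit length is assumed, and the image is synthetic; the names (quantize, roundtrip) are ours.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

def idct2(b):
    return idct(idct(b, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

def quantize(c, bits, step=16.0):
    """Uniform quantizer: fewer allocated bits give a coarser step (an assumption)."""
    q = step / (2.0 ** np.clip(bits, 0, 8))
    return np.round(c / q) * q

def mse(a, b):
    return np.mean((a - b) ** 2)

def roundtrip(image, bits, block=8):
    """Block DCT -> quantize -> inverse DCT, i.e. a compress/restore round trip."""
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dct2(image[i:i + block, j:j + block])
            out[i:i + block, j:j + block] = idct2(quantize(c, bits))
    return out

img = np.random.default_rng(1).normal(128, 30, (512, 512))
bits = np.full((8, 8), 2.0); bits[:2, :2] = 6.0   # more bits for low-frequency coefficients
print("MSE:", mse(img, roundtrip(img, bits)))
```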


Fig. 3. Image-compression system (input image, lenslet array, SLM, integrated SLM, and smart CMOS sensor).

4. Smart pixels and a proposed optoelectronic architecture

The proposed system shown in Fig. 3 performs transform coding. The integrated SLM performs optical D/A conversion. The lenslet array performs a Fourier transform, and the combination of the smart CMOS sensor, the lenslet array, and the SLM performs a Fourier cosine transform. The paired smart CMOS sensor and integrated SLM perform optical A/D conversion. When the optical intensity in each pixel on the integrated SLM is averaged, D/A conversion is completed. The averaging is done using photodetectors: when all the rays of light from one pixel are incident on one photodetector, the light intensity is translated into a current, and an analog signal is obtained in the form of this current.

The transform coding consists of the Fourier cosine transform, A/D conversion, and quantization. Quantization involves the reduction of the bit length of the transformed image data. The Fourier cosine transform is performed in the proposed system as follows. A lenslet array divides an input image into multiple parts, and each lens performs a Fourier transform. The phase of the transformed image is shifted by the SLM. When the phase shift is applied, the original transformed image is FC + iFS and the other is FC − iFS, where FC is the Fourier cosine transform, FS is the Fourier sine transform, and i is the imaginary unit. The smart CMOS sensor shown in Fig. 4 detects the power spectra of the two analog optical images. Differential amplifiers in the CMOS sensor give FC^2, after which the Fourier cosine transform is completed.
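The cosine-transform step can be mimicked numerically. The sketch below is an idealized discrete model, not the optical system itself: it takes FC as the real (cosine) part of the discrete Fourier transform of a block and checks it against the defining cosine sum. The names are ours.

```python
import numpy as np

def fourier_cosine(block):
    """Discrete analogue of the Fourier cosine transform: the real part of the 2-D DFT."""
    F = np.fft.fft2(block)
    return F.real                     # FC; the sine part would be -F.imag

rng = np.random.default_rng(2)
f = rng.random((8, 8))
FC = fourier_cosine(f)

# Cross-check against the defining cosine sum for one frequency (u, v) = (1, 2).
x, y = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
direct = np.sum(f * np.cos(2 * np.pi * (1 * x + 2 * y) / 8))
print(np.isclose(FC[1, 2], direct))   # True

# The smart CMOS sensor only detects intensities; the differential-detection step
# described above is meant to deliver FC squared.
power = FC ** 2
```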

Fig. 4. Block diagram of the smart CMOS sensor: optical signals from the lenslet array and reference signals from the integrated SLM fall on detectors, whose outputs pass through differential amplifiers, comparators, and a quantizer to give the compressed data.

Optical A/D conversion by the smart CMOS sensor and the integrated SLM uses the following process. The SLM outputs optical analog signals that act as reference signals for A/D conversion. The signals from the integrated SLM and FC^2 from the differential amplifiers are fed to the comparators in the smart CMOS sensor. The result of each comparison, bigger or smaller, is a bit value, 1 or 0. At that point, the optical analog signals have been transformed into electronic digital signals. The electronic signals are fed to a quantizer in the smart CMOS sensor, which reduces the bit length of the input data. The functionality of a CMOS sensor lies not only in the combination of an image sensor and processors but also in the flexibility of scanning: a CMOS sensor can scan in both directions with intervals of any length. This is an advantage in an image-compression system, because image-compression processes are not always serial and this type of flexibility in scanning is often required.

The integrated SLM is a random access memory (RAM)-based SLM [34,35]. The RAM-based SLM is a binary SLM made on a VLSI chip and is sometimes called liquid crystal on silicon (LCoS). The function of the proposed integrated SLM is to transform electronic digital signals into optical analog signals. The pixels of the proposed RAM-based SLM are composed of multiple modulation pads, and the SLM thereby achieves multilevel modulation. Another technique for achieving D/A conversion with the use of a ferroelectric liquid crystal (FLC) SLM has also been reported [36]. This is a method of keeping a bi-stable FLC in an intermediate state by modulating the electronic signal that controls the FLC states. That method would allow higher resolution than ours, but to employ it, an FLC cell with stable control must be achieved.

5. Combination of optical D/A conversion and optical Fourier transform

The image-decompression processes of transform coding comprise restoration of the quantized data, D/A conversion, and a Fourier cosine transform. In an electronic digital system, several bits are added to the quantized data before it is restored. In the proposed system, the integrated SLM restores the quantized data and transforms the data into optical analog signals. The function of the integrated SLM is described in Fig. 5. When this method is employed, errors are inevitable.

Fig. 5. Restoring the quantized data and optical D/A conversion with the use of pixels composed of four modulation pads. When the intensity of light in each pixel is averaged, D/A conversion is complete. The ratio of the sizes of the modulation pads is 8:4:2:1. (a) Decoding of compressed two-bit data; 1 denotes the first bit and 2 denotes the second. The second bit is assumed to have twice the analog value of the first bit. (b) Decoding of compressed four-bit data.

Table 2
Correspondence of bits of compressed data and modulation pads. Numbers from 1 to 8 give the order in the bit queue of the compressed data.

Relative values of the size of modulation pads

                128   64   32   16   8    4    2    1
1 bit length    1     1    1    1    1    1    1    1
2 bit length    2     1    2    1    2    1    2    1
3 bit length    3     2    1    3    2    1    3    2
4 bit length    4     3    2    1    4    3    2    1
5 bit length    5     4    3    2    1    5    4    3
6 bit length    6     5    4    3    2    1    6    5
7 bit length    7     6    5    4    3    2    1    7
8 bit length    8     7    6    5    4    3    2    1

When the bit length is two, for example, the first bit activates the four modulation pads whose sizes are 64, 16, 4, and 1; the sum of the sizes of these modulation pads is 85. The second bit activates the four modulation pads whose sizes are 128, 32, 8, and 2; the sum of the sizes of these modulation pads is 170. When both bits are high, all eight modulation pads are activated, and the sum of the sizes of the modulation pads is 255. The ratio of these sums is 255:170:85 = 3:2:1. The electronic binary signals are thereby translated into optical analog signals.

For example, when a pixel is composed of eight modulation pads and data five bits in length are decoded into data eight bits in length, the size of the modulation pad that the first bit activates is 8, as shown in Table 2. The size is 16 for the second bit. The third bit activates two modulation pads, and the sum of their sizes is 33. The sum is 66 for the fourth bit and 132 for the fifth bit. The ratio of these values is not exactly 16:8:4:2:1; the error ratio is 1/33.
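The pad sums quoted above and in the footnote to Table 2 can be reproduced directly from the table. The short script below is an illustrative check, not the authors' code; it encodes two rows of Table 2 and prints the analog weight each bit contributes.

```python
# Pad sizes (relative), and, per row of Table 2, which bit of a k-bit code drives each pad.
PADS = [128, 64, 32, 16, 8, 4, 2, 1]
TABLE2 = {
    2: [2, 1, 2, 1, 2, 1, 2, 1],
    5: [5, 4, 3, 2, 1, 5, 4, 3],
}

def bit_weights(bit_length):
    """Analog weight contributed by each bit: the sum of the pad sizes that bit activates."""
    row = TABLE2[bit_length]
    return [sum(p for p, b in zip(PADS, row) if b == k + 1) for k in range(bit_length)]

print(bit_weights(2))         # [85, 170]: ratio 1:2, and 85 + 170 = 255 when both bits are high
w5 = bit_weights(5)
print(w5)                     # [8, 16, 33, 66, 132]; ideal binary weights would be 8, 16, 32, 64, 128
print(abs(w5[2] - 32) / 33)   # about 1/33, the error ratio quoted in the text (33 vs the ideal 32)
```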


Fig. 6. Pixel patterns: (a) a pixel of a general SLM; (b), (c) pixels composed of multiple pads.

Fig. 7. Original image.

Data of three bits in length have an error in the first bit: the ratio of the error is 1/73. The error ratios are 1/65 when the data are six bits in length and 1/129 when they are seven bits in length. The error is zero when the data are one, two, four, or eight bits in length.

In a decompression system, the integrated SLM, another SLM, a lenslet array, and a smart CMOS sensor are connected in series. The optical analog outputs of the integrated SLM are the coefficients of the Fourier cosine transform of the original image. The SLM, the lenslet array, and the smart CMOS sensor perform a Fourier cosine transform of the outputs of the integrated SLM. The outputs of the integrated SLM are divided into blocks to obtain a Fourier cosine transform of each block; this process is performed in the same way as in the compression system. It is known that the Fourier cosine transform and the inverse Fourier cosine transform are the same, so the blocks of the Fourier cosine transform compose the restored image. When the outputs of the integrated SLM are transformed using a lenslet array, the intensity of light in each pixel of the integrated SLM is not averaged. This combination of optical processes risks deteriorating their resolution, so we evaluated the resolution of the combined optical processes. Two examples of pixels consisting of multiple pads are shown in Fig. 6(b) and (c). With respect to these pixels, the ratio of the sum of the sizes of all the metal pads to the size of the whole pixel was 186:255. The largest pad was 8 pixels × 16 pixels, the smallest pad was 1 pixel × 1 pixel, and the whole pixel was 21 pixels × 21 pixels. The minimum space between modulation pads was 1 pixel. Eight pads compose one pixel, and the ratio of the sizes of the pads is 128:64:32:16:8:4:2:1.

For comparative analysis, a pixel model of a general SLM that displays a multilevel image was designed. Both the modulation pad and the pixel of the general SLM are square, as shown in Fig. 6(a). The modulation pad was 20 pixels × 20 pixels and the whole pixel was 21 pixels × 21 pixels. The signal of the general SLM has 256 levels. A Cooley–Tukey fast Fourier transform algorithm was used to calculate the Fourier transforms. The input image shown in Fig. 7 consists of 128 pixels × 128 pixels. Each pixel in the input image was expanded into 21 pixels × 21 pixels to simulate a pixel consisting of multiple pads. The results are shown in Fig. 8. When an input image has an SLM grid, the Fourier transform becomes a discrete transform; this results in multiple small transforms in the transform images, as shown in Fig. 8(b) and (c). The MSE with respect to a Fourier transform of the original image was calculated, and the resolution of the optical process was thus evaluated. The original image had no SLM grid. The sizes of the Fourier transforms of the SLMs and of the Fourier transform of the original image differ because of the pixel structure of the SLMs. To obtain the MSE, the Fourier transforms of the SLMs were divided into blocks of 21 pixels × 21 pixels, which is the size of the pixel, and the optical intensity in each pixel was averaged. The values of MSE were 0.41, 0.33, and 0.39 when the pixels in Fig. 6(a), (b), and (c) were used, respectively. Because the intensities of the Fourier transforms are shown on a log scale, the Fourier transforms of the SLMs look completely different from the Fourier transform of the original image in spite of the small MSE. The errors in the Fourier transforms of the binary SLMs were consequently smaller than the error seen with the multilevel SLM. It is possible to conclude that sufficient resolution of the Fourier cosine transform and the D/A conversion will be guaranteed in the proposed image-decompression system.

Our design of the integrated SLM is shown in Fig. 9. The integrated SLM is composed of 4 pixels × 4 pixels, and each pixel is composed of three modulation pads for 3-bit D/A conversion. As shown in Fig. 10, the modulation pads and the circuits for driving the pads comprise one pixel. The chip size is 4.8 mm × 4.8 mm. The integrated SLM has address decoders, memories, and the pixels.
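A schematic re-implementation of this evaluation procedure is sketched below: each image pixel is expanded into a 21 × 21 cell carrying an SLM pixel pattern, the pattern-modulated image is Fourier transformed with the FFT, the spectrum is block-averaged back to pixel resolution, and an MSE against the transform of the grid-free image is computed. The pixel patterns and the normalization are placeholders (the exact Fig. 6 layouts and the paper's scaling are not reproduced), and a smaller image is used for brevity, so the numbers will not match 0.41, 0.33, and 0.39.

```python
import numpy as np

CELL = 21   # one SLM pixel simulated as a 21 x 21 sub-pixel cell

def block_average(a, size=CELL):
    h, w = a.shape
    return a.reshape(h // size, size, w // size, size).mean(axis=(1, 3))

def transform_with_grid(image, pattern):
    """Expand every pixel into a patterned cell, FFT, and average back to pixel resolution."""
    expanded = np.kron(image, pattern)          # image modulated by the SLM pixel structure
    spectrum = np.abs(np.fft.fft2(expanded))
    return block_average(spectrum)

def normalized_mse(a, b):
    a = a / a.max()
    b = b / b.max()
    return np.mean((a - b) ** 2)

rng = np.random.default_rng(3)
img = rng.random((32, 32))                       # smaller than 128 x 128, for brevity

# Placeholder patterns: a "general SLM" pixel (20 x 20 open area in a 21 x 21 cell)
# and a crude random binary stand-in for the multi-pad layout of Fig. 6.
general = np.zeros((CELL, CELL)); general[:20, :20] = 1.0
multipad = (rng.random((CELL, CELL)) < 0.5).astype(float)

reference = np.abs(np.fft.fft2(img))             # transform of the grid-free image
for name, pat in [("general", general), ("multi-pad", multipad)]:
    print(name, normalized_mse(reference, transform_with_grid(img, pat)))
```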


Fig. 8. Transform images: (a) the result when a pixel in Fig. 6(a) was used; (b) the result when a pixel in Fig. 6(b) was used; (c) the result when a pixel in Fig. 6(c) was used; (d) a transform image of an original image.

In Fig. 9, the address decoders can be seen at the lower left, the memories at the center, and the pixels of the SLM at the bottom center. The modulation of the proposed SLM is binary, and the memories maintain the modulation states. The VLSI chip was fabricated using a double-poly, double-metal 1.2-μm process. The size of each pixel can be further reduced, since the size of the biggest metal modulation pad is 65 μm × 65 μm. The structure of the static random access memory (SRAM), however, increases the size of the pixels. A dynamic random access memory (DRAM) structure has a higher level of integration than an SRAM structure: in an SRAM structure, for example, seven transistors are needed for one memory, whereas a DRAM structure requires only two transistors and one capacitor. Employment of the multiple-pad structure reduces the pixel density; however, the proposed SLM is expected to achieve greater processing ability than a display equipped with an electronic D/A converter (DAC).

The processing ability of a DAC is several hundred Mbit/s. With respect to the proposed integrated SLM, the product of the resolution of the integrated SLM and the modulating speed of the FLC gives the processing ability of the D/A conversion. The modulating speed of the FLC is about 100 μs when it is packed into a 1 μm-thick cell [37]. On the assumption that the smallest modulation pad is 1 μm × 1 μm and the pixel structure in Fig. 6(b) is used, the processing ability of this integrated SLM will be 2.27 Gbit/s/cm².

The proposed optical A/D conversion system consists of two parts: an optical part and an electronic part. The optical part consists of the optical D/A conversion system and an SLM. The electronic part consists of the optical detectors and comparators in a smart CMOS sensor. The resolution of the electronic part is as much a key factor in the resolution of the optical A/D conversion as that of the optical part.
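The quoted 2.27 Gbit/s/cm² is consistent with one conversion per 21 μm × 21 μm pixel (the Fig. 6(b) structure with a 1 μm × 1 μm smallest pad) every 100 μs. A quick arithmetic check, under those stated assumptions:

```python
# Throughput estimate for the integrated SLM as a D/A converter,
# under the assumptions stated in the text (Fig. 6(b) pixel, 1 um smallest pad).
pixel_side_um = 21.0          # 21 x 21 sub-pixels of 1 um each
flc_switching_s = 100e-6      # FLC switching time, about 100 us
pixels_per_cm2 = (1e4 / pixel_side_um) ** 2            # 1 cm = 1e4 um
conversions_per_s_per_cm2 = pixels_per_cm2 / flc_switching_s
print(conversions_per_s_per_cm2 / 1e9)                 # ~2.27, matching the quoted figure
```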


Fig. 9. VLSI design of our integrated SLM.

Fig. 10. VLSI design of a pixel composed of multiple pads (the labeled dimensions are 70 μm and 261 μm). Modulation pads are shown as oblique lines.

In our experiment, employment of the proposed integrated SLM and a smart CMOS sensor allowed a combination of three processes: restoration of the quantized data, optical D/A conversion, and an optical Fourier transform. The proposed smart-pixel-based system can substitute for a significant proportion of most image-compression and decompression systems. The theoretical analysis in this section demonstrated that the maximum error ratio of the combined optical processes is 1/33.

6. Concluding remarks

An image-compression algorithm was analyzed with respect to the spatial frequency spectral bandwidth of the input images. The analytical results showed that the compression ratio of the compressed data volume to the original data volume is larger when a smart CMOS sensor is used in place of a CCD camera.

In order to decrease the compression ratio, the image-compression processors should be shared by several devices rather than placed in a single smart CMOS sensor. In the proposed system, optical processors perform a large part of the image-compression processes. On the other hand, some image-processing systems that receive restored images from an image-compression system may require restored images of various bandwidths. When an image consisting of 1024 pixels × 1024 pixels is reduced to an image consisting of 128 pixels × 128 pixels, the compression ratio is 64.0; it is obvious that reducing the resolution of images or low-pass filtering is a much more effective image-compression technique than general image-compression algorithms. Therefore, the analysis described in this paper is valid when the bandwidth of a restored image is fixed and is the same as that of the input image.

Analytical results showed that the maximum error ratio was 3.0% when the integrated SLM restored quantized data while simultaneously carrying out optical D/A conversion. The impact of the unique pixel pattern of the integrated SLM on the resolution of the optical D/A conversion and the optical Fourier transform was analyzed. The results of the comparative analysis indicated that the unique pixel pattern did not reduce the resolution of the optical D/A conversion or the optical Fourier transform. It was shown that an optoelectronic system consisting of an integrated SLM and a smart CMOS sensor will perform multiple Fourier cosine transforms. The proposed architecture greatly minimizes redundancy during image compression and decompression.

Acknowledgements

The VLSI chips for the integrated SLM were fabricated at the University of Tokyo's VLSI Design and Education Center (VDEC). On-Semiconductor, Nippon Motorola Ltd., Dai Nippon Printing Corporation, and Kyocera Corporation contributed to the fabrication.

References

[1] Jain AK. Fundamentals of digital image processing. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1989.
[2] Proc IEEE 1995;83(2).
[3] Rao KR, Yip P. Discrete cosine transform: algorithms, advantages, applications. San Diego, CA: Academic Press, Inc., 1990.
[4] Theuwissen AJP. Solid-state imaging with charge-coupled devices. Boston, MA: Kluwer Academic Publishers, 1995.
[5] Fossum ER. Architectures for focal plane image processing. Opt Eng 1989;28(8):865–71.
[6] Chiang AM. A video-rate CCD two-dimensional cosine transform processor. J SPIE 1987;845:2–5.
[7] Fossum ER. Active pixel sensors: are CCDs dinosaurs? Charge-coupled devices and solid-state optical sensors III. Proc SPIE 1993;1900:2–14.

[8] Fossum ER. CMOS image sensors: electronic camera-on-a-chip. IEEE Trans Electron Devices 1997;44(10):1689–98.
[9] Torelli G, Gonzo L, Gottardi M, Maloberti F, Sartori A, Simoni A. Analog-to-digital conversion architectures for intelligent optical sensor arrays. Advanced focal plane arrays and electronic cameras. Proc SPIE 1996;2950:254–64.
[10] Shoji Kawahito, Makoto Yoshida, Masaaki Sasaki, Keijiro Umehara, Daisuke Miyazaki, Yoshiaki Tadokoro, Kenji Murata, Shirou Doushou, Akira Matsuzawa. A CMOS image sensor with analog two-dimensional DCT-based compression circuits for one-chip cameras. IEEE J Solid-State Circuits 1997;32(12):2030–41.
[11] Shoji Kawahito, Makoto Yoshida, Yoshiaki Tadokoro, Akira Matsuzawa. An analog two-dimensional discrete cosine transform processor for focal-plane image compression. IEICE Trans Fundamentals 1997;E80-A(2):283–90.
[12] Barry LS, Robert WS, Glen PD, Eugene KR, Andre HS, Dirk AH, Daniel ML. Smart pixel technology and an application to two-dimensional analog-digital conversion. Opt Eng 1998;37(12):3175–86.
[13] Hiroyuki Arima, Masahiko Mori, Toyohiko Yatagai. Optoelectronic parallel interface for neural computing. Opt Memory Neural Networks 1999;8(3):147–54.
[14] Takashi Komuro, Idaku Ishii, Masatoshi Ishikawa. General-purpose vision chip architecture for real-time machine vision. Adv Robot 1999;12(6):619–27.
[15] Ge Zhou, Yimo Zhang, Wei Liu. Optical fiber interconnection for the scalable parallel computing system. Proc IEEE 2000;88(6):856–63.
[16] Barrett CP, Blair P, Buller GS, Neilson DT, Robertson B, Smith EC, Taghizadeh MR, Walker AC. Components for the implementation of free-space optical crossbars. Appl Opt 1996;35(35):6934–44.
[17] Haney MW, Christensen MP, Milojkovic P, Fokken GJ, Blekberg M, Gilbert BK, Rieve J, Ekman J, Premanand C, Fouad K. Description and evaluation of the FAST-Net smart pixel based optical interconnection prototype. Proc IEEE 2000;88(6):819–28.
[18] Lentine AL, Reiley DJ, Novotony RA, Morrison RL, Sasian JM, Beckman MG, Buchholz DB, Hinterlong SJ, Cloonan TJ, Richards GW, McCormick FB. Asynchronous transfer mode distribution network by use of an optoelectronic VLSI switching chip. Appl Opt 1997;36(8):1804–14.
[19] Plant DV, Kirk AG. Optical interconnects at the chip and board level: challenges and solutions. Proc IEEE 2000;88(6):806–18.
[20] Jen-Ming Wu, Kuzunia CB, Bogdan Hoanca, Chih-Hao Chen, Sawchuk AA. Demonstration and architectural analysis of complementary metal-oxide semiconductor/multiple-quantum-well smart-pixel array cellular logic processors for single-instruction multiple-data parallel-pipeline processing. Appl Opt 1999;38(11):2270–81.


[21] Scott Hinton H, Cloonan TJ, McCormick FB, Lentine AL, Tooley FAP. Free-space digital optical systems. Proc IEEE 1994;82(11):1632–49.
[22] Neil Mcardle, Makoto Naruse, Haruyoshi Toyoda, Yuji Kobayashi, Masatoshi Ishikawa. Reconfigurable optical interconnections for parallel computing. Proc IEEE 2000;88(6):829–37.
[23] Zheng XZ, Marchand PJ, Huang DW, Kibar O, Ozkan NSE, Esener SC. Optomechanical design and characterization of a printed-circuit-board-based free-space optical interconnect package. Appl Opt 1999;38(26):5631–40.
[24] Drabik T. Optoelectronic integrated systems based on free-space interconnections with an arbitrary degree of space variance. Proc IEEE 1994;82(11):1595–622.
[25] Digest of topical meeting on optics in computing 2000. Brussels, Belgium: Optical Society of America, 2000.
[26] Casasent D. General-purpose optical pattern recognition image processors. Proc IEEE 1994;82(11):1724–34.
[27] Wilmsen CW, Chunjie Duan, Collington JR, Dames MP, Crossland WA. Vertical cavity surface emitting laser based optoelectronic asynchronous transfer mode switch. Opt Eng 1999;38(7):1216–22.
[28] Dames MP, Collington JR, Crossland WA, Scarr RWA. Three-stage high-performance optoelectronic asynchronous transfer mode switch: design and performance. Opt Eng 1996;35(12):3608–16.
[29] Davisson LD, Gray RM, editors. Data compression, benchmark papers in electrical engineering and computer science. Stroudsburg, PA: Dowden Hutchinson & Ross, Inc., 1976.
[30] Becker RA, Woodward CE, Leonberger FJ, Williamson RC. Wide-band electrooptic guided-wave analog-to-digital converters. Proc IEEE 1984;72(7):802–19.
[31] Pace PE, Styer D. High-resolution encoding process for an integrated optical analog-to-digital converter. Opt Eng 1994;33(8):2638–45.
[32] Yoshio Hayasaki, Masahiko Mori, Nobuo Nishida. Optical image transformations for fully parallel optical analog-to-digital conversion. Appl Opt 1998;37(17):3607–11.
[33] Akitoshi Yoshida, Reif JH. Optical computing techniques for image/video compression. Proc IEEE 1994;82(6):948–54.
[34] Drabik TJ, Titus AH, Handschy MA, Banaa D, Gaalema SD, Ward DJ. 2D silicon/ferroelectric liquid crystal spatial light modulators. IEEE Micro 1995;15(4):67–76.
[35] Ido Bar-Tana, Sharpe JP, McKnight DJ, Johnson KM. Smart pixel spatial light modulator for incorporation in an optoelectronic neural network. Opt Lett 1995;20(3):303–5.
[36] Perennes F, Coker TM, Crossland WA. Digital-to-analog image conversion with an optically addressed spatial light modulator. Opt Lett 1997;22(7):472–4.
[37] Clark NA, Lagerwall ST. Submicrosecond bistable electro-optic switching in liquid crystals. Appl Phys Lett 1980;36(11):899–901.