A simple method reconstructing colorful holographic imaging with GPU acceleration based on one thin phase plate


Optik 126 (2015) 3457–3462


Weirui Yue a,∗, Qiang Song a,b, Cheng Yu a, Weili Yue c, Jing Zhu a, Guohai Situ a

a Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
b Lenovo Corporate Research & Technology Shanghai Branch, China
c Hebei Eye Hospital of China, China

Article info

Article history: Received 16 May 2014; Accepted 1 July 2015

Keywords: Holographic imaging; Color holography; Diffractive optics; GPU; DOE

Abstract

In this paper, we propose a simple method for realizing color holographic imaging using one thin diffractive optical element (DOE), which reconstructs a two-dimensional color image with one phase plate at a user-defined distance from the plate. The phase plate is optimized by combining the Gerchberg–Saxton algorithm with a compensation algorithm to improve the resolution of the reproduced color images. A Graphics Processing Unit (GPU) is used to accelerate the computation. Finally, simulation experiments verify our method.

© 2015 Published by Elsevier GmbH.

1. Introduction

In recent years, holographic projection has been developing rapidly in response to a fast-growing demand for micro-displays and virtual reality systems. For color holographic projection, three main kinds of approaches have been developed. The first is volume-medium holography, which can produce fine full-parallax views under white-light reconstruction [1]. However, the phase recording medium is relatively expensive and difficult to reproduce in quantity, which limits the possible applications. The second scheme realizes real-time color holographic display with spatial light modulators (SLMs) [2]: the red, green and blue components of a color image are recorded on three SLMs, and the color image is formed by combining the diffracted light from each SLM. However, this kind of system is too bulky for micro-display. The third approach is time-division multiplexing (TDM) [3–6], in which a color image is formed with only one phase plate. The color image is divided into red, green and blue components, and the three corresponding holograms are calculated individually based on

the Gerchberg–Saxton (GS) algorithm [7] or a modified GS algorithm [8]. The three holograms are then loaded onto the SLM in sequence and, owing to the persistence of vision of the human eye, the observer perceives the reconstructed color image. Unfortunately, it is hard to synchronize the laser and the corresponding hologram with high precision. One possible solution to these problems is to use a single diffractive phase modulator based on the space-division method (SDM) [9,10]. In this kind of method, three holograms, R, G, and B, are calculated and then added into a final hologram. However, the available phase-modulation depth is very low, and generating three holograms is time-consuming. To address these issues, this paper presents a simple method that reconstructs a full-color image with only a single phase modulator, using Graphics Processing Unit (GPU) technology [11–14] and a modified Gerchberg–Saxton (GS) algorithm. In Section 2, we describe the design method for the color holographic image in detail. In Section 3, we evaluate the proposed method numerically. In Section 4, we conclude our work.

2. Design principle

According to the principles of diffractive optics, the transmission function of a diffractive optical element (DOE) can be expressed as



∗ Corresponding author. Tel.: +86 13166598956. E-mail address: [email protected] (W. Yue). http://dx.doi.org/10.1016/j.ijleo.2015.07.001 0030-4026/© 2015 Published by Elsevier GmbH.

Tdiff(r) = exp{i2π[n − r²/(2λ0 f0)]}   (2.1)


Fig. 1. The dispersion properties of DOE, the red line represents the red beam, and the blue line stands for the blue beam. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Original image.

Fig. 2. Ideal scheme of the design.

Furthermore, Tdiff(r) can be expanded as a Fourier series:

Tdiff(r) = Σ_{m=−∞}^{+∞} exp[−iπ(2n + 1)] sinc(α − m) exp[−iπmr²/(λ0 f0)]   (2.2)

In formulas (2.1) and (2.2), n is the refractive index of the DOE and m represents the order of diffraction; in this paper, we set m = 1. According to formula (2.2), there are several focal points, corresponding to the diffraction order and the wavelength:

f = λ0 f0/(mλ)   (2.3)
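The dispersion relation of Eq. (2.3) is easy to sketch numerically. In the following minimal sketch, the design wavelength and focal length are illustrative values, not parameters taken from the paper:

```python
# A minimal numerical sketch of the dispersion relation in Eq. (2.3),
# f = lambda0 * f0 / (m * lambda). The design wavelength and focal length
# below are illustrative values, not taken from the paper.
LAMBDA0 = 633e-9  # design wavelength [m]
F0 = 0.2          # focal length at the design wavelength [m]
M = 1             # diffraction order (the paper sets m = 1)

def focal_length(lam, lambda0=LAMBDA0, f0=F0, m=M):
    """Focal length of the DOE at wavelength lam, Eq. (2.3)."""
    return lambda0 * f0 / (m * lam)

# Longer wavelengths focus closer to the DOE, as stated in the text.
for lam in (473e-9, 532e-9, 633e-9):
    print(f"{lam * 1e9:.0f} nm -> f = {focal_length(lam):.4f} m")
```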

Thus, the imaging distance is shorter when a longer-wavelength beam impinges on the DOE; Fig. 1 illustrates this diffraction effect. This property is known as the chromatic dispersion of the DOE, and it can be exploited in the design of a color holographic display. In this paper, the diffraction is calculated based on angular

Fig. 3. The iterative and image reconstruction processes. (a) Iterative method and (b) image reconstruction.


Fig. 5. The reconstructed color image for different distances from the DOE. (a) The distance from the DOE is 0.19 m, (b) the ideal reconstruction distance (0.2 m) from the DOE, (c) the distance from the DOE is 0.21 m.


Fig. 6. The reconstructed color image with different input windows. (a) The window scale is 256 × 256 pixels, (b) 512 × 512, (c) 1024 × 1024, and (d) 2048 × 2048.

propagation theory. The scalar diffraction propagation can be written in the angular spectrum representation as

Uout(ro) = F⁻¹{F{Uincident Tdiff(r)} H(fx, fy)}   (2.4)

In Eq. (2.4), fx and fy are the spatial frequencies along the x and y directions of the DOE plane, respectively. H satisfies the following equation:



H = exp[i(2π/λ)d√(1 − (λfx)² − (λfy)²)]   (2.5)

Here d is the propagation distance.
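Eqs. (2.4)–(2.5) can be implemented directly with FFTs. The following is a minimal NumPy sketch (function and variable names are ours); evanescent components, where the square-root argument goes negative, are simply suppressed:

```python
import numpy as np

def angular_spectrum_propagate(u0, lam, d, dx):
    """Propagate a sampled field u0 over distance d with the angular spectrum
    method of Eqs. (2.4)-(2.5): U_out = F^-1{ F{u0} * H(fx, fy) }."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)   # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)   # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (lam * FX) ** 2 - (lam * FY) ** 2
    # Transfer function of Eq. (2.5):
    # H = exp[i (2*pi/lam) d sqrt(1 - (lam fx)^2 - (lam fy)^2)]
    H = np.where(arg > 0.0,
                 np.exp(1j * 2.0 * np.pi / lam * d * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Since H has unit modulus for propagating components, a plane wave keeps unit amplitude after propagation, which is a quick sanity check for the sampling.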

Considering the dispersion of the DOE, we divide a color image into its R, G, and B components. To compensate for the dispersion, the R, G, and B components are placed at specific virtual positions, as shown in Fig. 2. The positions of the B and G components satisfy, respectively,

Zb = (λb/λr) Zr   (2.6a)

Zg = (λg/λr) Zr   (2.6b)

The design process starts at distance Z = 0 with a random initial phase and a target amplitude taken from the incident beam (see Fig. 2). In this paper, the initial amplitude is set to 1 and the initial phase is random. The wavefront propagates forward to plane B at distance Zb, where the amplitude is modified without changing the current phase (how the amplitude is modified is described below). The wavefront is then propagated back to the DOE plane at Z = 0, where the amplitude is reset to that of the incident beam without altering the phase. Next, the beam propagates to plane G at Zg, where the amplitude is again modified without changing the phase; the wavefront then propagates back to the DOE plane, where the amplitude is set to 1 in order to obtain a phase-only structure. The wavefront then propagates forward to plane R at distance Zr, where the amplitude is modified in the same way, and back to the DOE plane, where the phase is kept and the amplitude is set to 1. The loop is repeated a preset number of times. Fig. 3(a) shows the iterative and image-reconstruction processes. This procedure is similar to the Gerchberg–Saxton (GS) iterative algorithm; the traditional GS algorithm is implemented with the fast Fourier transform, whereas here we use angular spectrum diffraction instead. We found that the image quality is poor if only the scheme above is adopted. After further investigation, we found that combining GS with the algorithm presented in [7,8], which we call the noise compression algorithm in this paper, improves the image quality considerably. The amplitude is modified as

UL1 = (2K|ULd| − |UL|) exp(iϕL1)   (2.7)

where L = R, G, B, and ULd is the desired pattern in the signal window. In Eq. (2.7), UL represents the calculated amplitude distribution in plane L:

UL = F⁻¹{F{Uincident exp(iϕrand)} H(fx, fy)}   (2.8)

ϕL1 is the phase, which can be expressed as

ϕL1 = angle(UL)   (2.9)

The factor K can be written as

K = Σ_M Σ_N |ULd ∈ signal window| / Σ_M Σ_N |UL ∈ signal window|   (2.10)
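One amplitude-modification step following Eqs. (2.7)–(2.10) can be sketched as below. This is our reading of the method, with hypothetical names: the correction is applied only inside the signal window, which is one reasonable interpretation of the paper:

```python
import numpy as np

def modify_amplitude(U_L, target_amp, signal_mask):
    """One amplitude-modification step, Eqs. (2.7)-(2.10) (a sketch).
    U_L         : complex field calculated in plane L (Eq. (2.8))
    target_amp  : desired amplitude |U_L^d| in plane L
    signal_mask : boolean array marking the signal window"""
    amp = np.abs(U_L)
    phase = np.angle(U_L)                      # phi_L1, Eq. (2.9)
    # Scaling factor K of Eq. (2.10): desired vs. calculated amplitude,
    # summed over the signal window.
    K = target_amp[signal_mask].sum() / amp[signal_mask].sum()
    out = U_L.copy()
    # Eq. (2.7): amplitudes below the scaled target are raised, those above
    # are lowered; the current phase is kept unchanged.
    out[signal_mask] = ((2.0 * K * target_amp[signal_mask] - amp[signal_mask])
                        * np.exp(1j * phase[signal_mask]))
    return out
```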

From Eq. (2.7), if the amplitude in the signal window is lower than desired, it is raised, and vice versa. This is the


Fig. 7. The MSE with different input windows, (a) 256 × 256, (b) 512 × 512, (c) 1024 × 1024, and (d) 2048 × 2048.

internal essence of the algorithm. The quality of the final reconstruction can be improved by increasing the number of iterations, but unlimited iteration is unnecessary, since the improvement saturates asymptotically. In this paper, 50 iterations were chosen as a trade-off between quality and computational time. Considering a real holographic display, we also use GPU technology to reduce the computation time; this is described in Section 3. Fig. 3(b) illustrates the reconstruction process. In this paper, the three wavelengths are 633 nm, 532 nm and 473 nm, since these wavelengths are convenient for future experiments in our laboratory. The reconstructing beam is a plane wave. If a hologram designed for a single wavelength, rather than by our proposed method, is reconstructed with different wavelengths, the reconstructed images of the three wavelengths will not lie in the same plane. The proposed method accounts for the dispersion, so the R, G, and B components form a color image in the same plane without longitudinal displacement. In Section 3, we implement the numerical simulation experiment.
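The three-plane iterative loop described above can be sketched as follows. This is a simplified sketch, not the paper's exact implementation: it uses plain amplitude replacement in each image plane rather than the noise-compression modification of Eq. (2.7), and all parameter values are illustrative:

```python
import numpy as np

def propagate(u, lam, d, dx):
    """Angular spectrum propagation, Eqs. (2.4)-(2.5); negative d propagates backwards."""
    fx = np.fft.fftfreq(u.shape[1], d=dx)
    fy = np.fft.fftfreq(u.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 - (lam * FX) ** 2 - (lam * FY) ** 2, 0.0)
    H = np.exp(1j * 2.0 * np.pi / lam * d * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def design_doe(targets, lams, dists, dx, n_iter=50, seed=0):
    """Sketch of the B -> G -> R design loop of Section 2.
    targets : dict of target amplitude images for 'R', 'G', 'B'
    lams    : dict of wavelengths
    dists   : dict of compensated distances (Eqs. (2.6a)-(2.6b))"""
    rng = np.random.default_rng(seed)
    shape = next(iter(targets.values())).shape
    u = np.exp(1j * 2.0 * np.pi * rng.random(shape))   # unit amplitude, random phase
    for _ in range(n_iter):
        for c in ('B', 'G', 'R'):                      # plane order used in the paper
            v = propagate(u, lams[c], dists[c], dx)    # forward to image plane c
            v = targets[c] * np.exp(1j * np.angle(v))  # modify amplitude, keep phase
            u = propagate(v, lams[c], -dists[c], dx)   # back to the DOE plane
            u = np.exp(1j * np.angle(u))               # enforce a phase-only DOE
    return np.angle(u)                                 # optimized phase plate
```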

3. Numerical simulation experiment

In this section, we evaluate the quality of the color image reconstructed with the proposed method. Fig. 4 shows the original image, with 256 × 256 pixels. In our simulation, the sampling interval is 9.8 μm × 9.8 μm. The wavelengths for reconstructing the color image are 633 nm for red, 532 nm for green, and 470 nm for blue; these are easily obtained with an RGB LED light source. The R, G and B components are set at 0.2 m, 0.1802 m and 0.1549 m from the DOE, respectively; the distances for the G and B components are calculated according to (2.6a) and (2.6b). According to the proposed method, the color image is reconstructed at 0.2 m from the DOE. First, we calculate the reconstructed color image at positions with slight deviations. Fig. 5 shows the results for different reconstruction positions: the quality of the reconstructed color image at 0.19 m and 0.21 m from the DOE is clearly worse, while a good reconstruction is obtained at 0.2 m by simultaneously illuminating the DOE with the R, G and B beams. For further study, we investigate the impact of different input windows. In this paper, the input window is divided into a signal window and a noise window: the original image occupies the signal window, and the rest is the noise window. Fig. 6 shows the results for different input windows. Here we define the mean square error (MSE) to quantify the performance of the reconstructed image:

MSE = Σ_M Σ_N (Iideal − Ical)²/(MN)   (3.1)
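Eq. (3.1) is straightforward to implement; a minimal sketch (function name is ours):

```python
import numpy as np

def mse(I_ideal, I_cal):
    """Mean square error between ideal and reconstructed intensity
    distributions, Eq. (3.1)."""
    I_ideal = np.asarray(I_ideal, dtype=float)
    I_cal = np.asarray(I_cal, dtype=float)
    return ((I_ideal - I_cal) ** 2).sum() / I_ideal.size
```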

MSE represents the error between the reconstructed image and the desired image. In (3.1), Iideal and Ical are the ideal and reconstructed intensity distributions, respectively, and M and N are the horizontal and vertical pixel counts of the images. Fig. 7 shows the MSE against the iteration number for different input windows. When the window scale is 256 × 256, the reconstruction is poor: the error between the reconstructed image and the ideal image


Fig. 8. The iterative process in GPU.

does not show any hint of convergence. As the window scale increases, the error drops dramatically; increasing the scale further lets the error of the B component continue to decrease slowly, and at a scale of 2048 × 2048 the error of the G component also stabilizes at a low value. Considering the computational cost, we accelerate the iterative process with a GPU; the flow chart is shown in Fig. 8. The Fourier transform and inverse Fourier transform in the proposed iterative algorithm are computed on the GPU. Meanwhile, quantities not related to the iteration, such as the angular spectrum transfer function, are transferred to video memory once, which saves computation time. We programmed the code in MATLAB 2012 and used an NVIDIA Tesla C2075 GPU to execute the process in Fig. 8. Compared with an Intel Xeon E5-2650 CPU, the computation is accelerated by a factor of 15.4. Here, the fast Fourier transform is a MATLAB built-in function. Furthermore, two other kinds of images are reconstructed with the GPU; the results are shown in Fig. 9. Fig. 9(a) is the original image of a cat, and Fig. 9(c) shows the three characters of the abbreviation of the Optical Society of America (OSA). From Fig. 9, the reconstructed color image of the cat is good and closely resembles the original. The reproduced image of the characters is not as good as its original: color noise, i.e., speckle, is visible around the characters. The reason is that the edges of the characters are jagged, and the color speckle comes from the superposition of the speckle of the R, G, and B components. This can be alleviated by increasing the number of iterations in future work.
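The acceleration scheme of Fig. 8 can be sketched as follows. The sketch runs on NumPy; on a CUDA machine with CuPy installed, replacing `xp = np` with `import cupy as xp` moves the FFTs to the GPU, since CuPy mirrors the NumPy FFT API. The propagation model here (forward with H, backward with its conjugate) is a simplification of the paper's three-wavelength loop, and all names are ours:

```python
import numpy as np

xp = np  # swap in "import cupy as xp" to run the FFTs on a CUDA GPU

def iterate_with_cached_H(u0, H, target_amp, n_iter=50):
    """Inner loop of Fig. 8 (sketch): the angular spectrum transfer function H
    does not depend on the iteration, so it is computed once, moved to (video)
    memory once, and reused; only the FFT/IFFT pairs run per iteration."""
    u = xp.asarray(u0)
    H = xp.asarray(H)                       # uploaded once, reused every iteration
    target_amp = xp.asarray(target_amp)
    for _ in range(n_iter):
        v = xp.fft.ifft2(xp.fft.fft2(u) * H)            # forward propagation
        v = target_amp * xp.exp(1j * xp.angle(v))       # amplitude constraint
        u = xp.fft.ifft2(xp.fft.fft2(v) * xp.conj(H))   # backward propagation
        u = xp.exp(1j * xp.angle(u))                    # phase-only constraint
    return u
```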

Fig. 9. Reconstructed color images. (a, c) Original images; (b, d) the corresponding reconstructed color images, with a window scale of 2048 × 2048.


4. Conclusion

We proposed a DOE design for color image reconstruction using a modified iterative algorithm to optimize the quality of the image reconstructed from the DOE. The color information is recorded on one thin phase plate. Considering applications in real holography, a GPU implementation is used to reduce the time cost. The method was verified numerically by simulation, with good results obtained for three kinds of input color images. The numerical results show that the presented method can display a full-color image, at a distance that can be set by the user, with only a single phase plate. In future work, we will focus on experimental research with the proposed method.

Acknowledgment

The authors express their gratitude for helpful discussions with Dr. Jing Zhu.

References

[1] E.N. Leith, A. Kozma, J. Upatnieks, et al., Holographic data storage in three-dimensional media, Appl. Opt. 5 (8) (1966) 1303–1311.
[2] T. Yamaguchi, G. Okabe, H. Yoshikawa, Real-time image plane full-color and full-parallax holographic video display system, Opt. Eng. 46 (12) (2007) 125801-1–125801-8.

[3] T. Shimobaba, T. Ito, A color holographic reconstruction system by time division multiplexing with reference lights of laser, Opt. Rev. 10 (5) (2003) 339–341.
[4] T. Shimobaba, A. Shiraki, N. Masuda, et al., An electroholographic colour reconstruction by time division switching of reference lights, J. Opt. A: Pure Appl. Opt. 9 (7) (2007) 757.
[5] T. Shimobaba, A. Shiraki, Y. Ichihashi, et al., Interactive color electroholography using the FPGA technology and time division switching method, IEICE Electron. Express 5 (8) (2008) 271–277.
[6] M. Oikawa, T. Yoda, T. Shimobaba, et al., Time-division color electroholography with low-price microprocessor, in: Digital Holography and Three-Dimensional Imaging, Optical Society of America, 2011, paper DTuC11.
[7] R.W. Gerchberg, A practical algorithm for the determination of phase from image and diffraction plane pictures, Optik 35 (1972) 237.
[8] J.S. Liu, A.J. Caley, M.R. Taghizadeh, Symmetrical iterative Fourier-transform algorithm using both phase and amplitude freedoms, Opt. Commun. 267 (2) (2006) 347–355.
[9] T. Ito, K. Okano, Color electroholography by three colored reference lights simultaneously incident upon one hologram panel, Opt. Express 12 (18) (2004) 4320–4325.
[10] T. Shimobaba, T. Takahashi, N. Masuda, et al., Numerical study of color holographic projection using space-division method, Opt. Express 19 (11) (2011) 10287–10292.
[11] Y. Pan, X. Xu, X. Liang, Fast distributed large-pixel-count hologram computation using a GPU cluster, Appl. Opt. 52 (26) (2013) 6562–6571.
[12] N. Takada, T. Shimobaba, H. Nakayama, et al., Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system, Appl. Opt. 51 (30) (2012) 7303–7307.
[13] Wikipedia, Comparison of Nvidia Graphics Processing Units, 2012, http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units.
[14] nVidia, NVIDIA CUDA C Programming Guide Version 4.2, 2012, http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Programming_Guide.pdf.