Holographic three-dimensional display based on optimizing arrangement of holograms


PII: S0030-4018(20)30017-1
DOI: https://doi.org/10.1016/j.optcom.2020.125260
Reference: OPTICS 125260
To appear in: Optics Communications

Received date: 30 September 2019
Revised date: 31 December 2019
Accepted date: 5 January 2020

Please cite this article as: P. Sun, C. Tao, J. Zhang et al., Holographic three-dimensional display based on optimizing arrangement of holograms, Optics Communications (2020), doi: https://doi.org/10.1016/j.optcom.2020.125260.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2020 Published by Elsevier B.V.


Holographic three-dimensional display based on optimizing arrangement of holograms


Peng Sun,1 Chenning Tao,1 Jinlei Zhang,1 Rengmao Wu,1 Siqi Liu,1 Qinzhen Tao,1 Chang Wang,1 Fei Wu,2 Zhenrong Zheng1,*

1State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
2Beijing LLVision Technology Co., Ltd., Room 903, The Spaces International Center, No.8 Dongdaqiao Road, Chaoyang District, Beijing 100000, China
*[email protected]


Abstract: We present a simple yet effective method to realize holographic three-dimensional (3D) display by optimizing the arrangement of holograms. After a 3D object is divided into a set of layers along the axial direction, these layers are calculated into corresponding sub-holograms by Fraunhofer diffraction. The hologram uploaded on the SLM consists of these sub-holograms arranged side-by-side. Both simulations and experiments are carried out to verify the feasibility of the proposed method. The difference between Fresnel diffraction and Fraunhofer diffraction in the proposed method is analyzed. Aiming at the problems exposed in the experiments, the reconstructed results are discussed and further improved by optimizing the arrangement of sub-holograms. The process of 3D modeling is simple and the computational complexity is accordingly reduced. This method may provide a promising solution for future holographic 3D realization.

Keywords: Computer holography; Wavefront encoding; Holographic display


1. Introduction


Holographic display has attracted considerable attention for its high contrast, low power consumption, wide color gamut and potential for 3D display. For the presentation of 3D images, phase-only holographic displays based on a spatial light modulator (SLM) can reconstruct the whole optical wave field of a 3D scene; hence, they have the potential to provide all the depth cues that human eyes can perceive [1-3]. The images reconstructed from a computer-generated hologram (CGH) can be captured by complementary metal-oxide-semiconductor (CMOS) cameras or directly by the human eye. The applications of CGH include near-eye display [4], beam shaping [5], pattern recognition [6], optical encryption [7], etc. Although CGH has the potential for reconstructing all the 3D information of an object, complicated 3D modeling and time-consuming computation are big challenges faced by current 3D holographic display.

3D CGH calculations can be realized by using point clouds, polygons, multilayers and multiview images [8]. In the point cloud method, a 3D scene is represented by a set of aggregated point light sources (PLSs), and the CGH for this 3D scene is then calculated by accumulating the light from the PLSs into the CGH. Although the computational process of this CGH is simple, the amount of calculation is usually very large. The point cloud with look-up tables is an effective method because the wavefronts of the point light sources can be precalculated [9-11]. However, the amount of memory required to store the precalculated wavefront data is also very large. The polygon-based methods convert the diffraction calculation from points to polygons, yielding an accelerated calculation due to the decrease in diffraction time [12-14]. Nevertheless, modeling a 3D object into numerous polygons is still a cumbersome process, because the fast Fourier transform (FFT)-based diffraction can only be used to calculate light propagation between parallel planes. Angular-spectrum-based tilted diffraction calculations have been proposed to overcome this limitation; however, they are still computationally expensive. The multilayer-based method expresses a 3D scene as a combination of a set of predefined parallel depth images [15,16]. The final CGH is a superposition of holograms calculated from these depth images. Although the calculation time can be reduced by this method, the aliasing and crosstalk that degrade the diffracted light field cannot be avoided. Besides, the more layers there are, the worse the image quality will be. The multiview image method calculates CGHs from 2D images captured at different viewing positions on a CGH plane [17,18]. The computational cost is much lower than that of the point cloud and polygon methods. Although an observer can recognize the 3D effect reconstructed from these different viewing holograms, the quality of the depth images is problematic because the reconstructions are essentially 2D images.

In this paper, we propose a simple yet effective method to realize holographic 3D display by optimizing the arrangement of holograms. A 3D object is first divided into many layers along the longitudinal direction, which are parallel to the CGH plane, and then these layers are calculated into corresponding sub-holograms. The final CGH is formed by these sub-holograms, which are arrayed side-by-side without any superposition. Figure 1 schematically shows the 3D reconstruction of the proposed method. As we can see from the SLM (front), if the resolution of the SLM is M×N and the 3D object is divided into m×n layers, the sub-holograms can be arranged in an m×n array and the resolution of every sub-hologram is (M/m)×(N/n). The right part of Fig. 1 shows a reconstructed 3D teapot of many layers, one of which is reconstructed from the green sub-hologram on the SLM. No matter where the sub-hologram is located on the SLM, the corresponding layer will always be reconstructed in the center of the imaging area, which makes the process of 3D modeling much easier. The reconstructed image will be located at a designated depth when the sub-holograms are given different depth information. However, when the sub-holograms are reconstructed at different depths, the reconstructed images are no longer exactly centered. The reason is discussed in detail in Section 3 and methods to solve this problem are also proposed.

Fig. 1. The schematic of 3D reconstruction.
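To make the arrangement concrete, the following is a minimal NumPy sketch (not from the original paper) of how an M×N hologram could be assembled from m×n sub-holograms, one per depth layer; `compute_sub_hologram` is a hypothetical placeholder standing in for the per-layer Fraunhofer/GS calculation described in Section 2.

```python
import numpy as np

def assemble_hologram(layers, M, N, m, n, compute_sub_hologram):
    """Tile m*n sub-holograms (one per depth layer) side-by-side into an
    M x N hologram, as sketched in Fig. 1. `compute_sub_hologram` is a
    placeholder for the per-layer calculation of Section 2 and must return
    an array of shape (M//m, N//n)."""
    sub_m, sub_n = M // m, N // n              # resolution of each sub-hologram
    hologram = np.zeros((M, N))
    for idx, layer in enumerate(layers):       # layers: list of m*n depth images
        i, j = divmod(idx, n)                  # grid position of this sub-hologram
        hologram[i * sub_m:(i + 1) * sub_m,
                 j * sub_n:(j + 1) * sub_n] = compute_sub_hologram(layer, sub_m, sub_n)
    return hologram
```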

2. Method and simulation

Before presenting the design principle of our proposed method, the phase shift theorem of the Fourier transform is first introduced [19], which yields one unique and important feature of the proposed method. The phase shift theorem states that a translation in the frequency domain leads to a linear phase shift in the space domain, which can be written as

\[ \mathcal{F}[U_1(x_1-a,\,y_1-b)] = U_2(x_2,y_2)\exp[-j2\pi(x_2 a + y_2 b)], \tag{1} \]

where U1(x1, y1) and U2(x2, y2) are, respectively, the complex amplitudes in the source plane (SLM plane) and the destination plane (reconstruction plane) in Fig. 1, a and b are two constants, \(\mathcal{F}[\cdot]\) denotes the Fourier transform, and exp[-j2π(x2a + y2b)] is the phase shift factor. It should be noted that neither the image quality nor the image position of the reconstructed image U2 is influenced by this phase shift factor. The phase shift theorem tells us that a translation in the frequency domain does not lead to a translation of U2 in the space domain; that is, the reconstructed image U2 is still located at the center of the imaging area. This also holds when U1 is in the space domain and U2 is in the frequency domain.

The relation between the source and destination planes in a diffraction calculation is given by the Rayleigh-Sommerfeld diffraction formula. In computer holography, Fresnel diffraction and Fraunhofer diffraction, which are two approximate forms of the diffraction calculation, can be derived from Rayleigh-Sommerfeld diffraction. In the proposed method, the Fraunhofer diffraction formula is employed for the diffraction calculation, which can be written as

\[ U_2(x_2,y_2) = \frac{\exp(jkz)}{j\lambda z}\exp\!\left[\frac{jk}{2z}(x_2^2+y_2^2)\right]\iint U_1(x_1,y_1)\exp\!\left[-j\frac{2\pi}{\lambda z}(x_1x_2+y_1y_2)\right]\mathrm{d}x_1\,\mathrm{d}y_1, \tag{2} \]

where the wavenumber k = 2π/λ, λ is the wavelength, and z is the propagation distance. For convenience of reading, the parameter y is omitted. Then, Eq. (2) can be written as

\[ U_2(x_2) = \frac{\exp(jkz)}{j\lambda z}\exp\!\left(\frac{jk}{2z}x_2^2\right)\mathcal{F}[U_1(x_1)]. \tag{3} \]

When the translation a is introduced to U1(x1), from Eq. (1) we know that the Fraunhofer diffraction of U1(x1 - a) can be derived as follows:

\[ \frac{\exp(jkz)}{j\lambda z}\exp\!\left(\frac{jk}{2z}x_2^2\right)\mathcal{F}[U_1(x_1-a)] = \frac{\exp(jkz)}{j\lambda z}\exp\!\left(\frac{jk}{2z}x_2^2\right)\mathcal{F}[U_1(x_1)]\exp(-j2\pi x_2 a). \tag{4} \]


Comparing with Eq. (3), we see that Eq. (4) can be written as U2(x2)exp(-j2πx2a), which indicates that U2 in the space domain is not influenced by the translation of U1 in the frequency domain when Fraunhofer diffraction is performed. That is, no matter where the sub-hologram is translated on the SLM, the reconstructed image will still be at the center of the imaging area without any translation.

Next, in order to explain why we choose Fraunhofer diffraction instead of Fresnel diffraction, an analysis of Fresnel diffraction with the phase shift theorem is conducted. Fresnel diffraction is expressed as

\[ U_2(x_2,y_2) = \frac{\exp(jkz)}{j\lambda z}\iint U_1(x_1,y_1)\exp\!\left\{\frac{jk}{2z}\left[(x_2-x_1)^2+(y_2-y_1)^2\right]\right\}\mathrm{d}x_1\,\mathrm{d}y_1. \tag{5} \]

In computer holography, the commonly used form of Fresnel diffraction is given by

\[ U_2(x_2) = \mathcal{F}^{-1}\{\mathcal{F}[U_1(x_1)]\,\mathcal{F}[h(x_1)]\}, \tag{6} \]

where \( h(x_1) = \frac{\exp(jkz)}{j\lambda z}\exp\!\left(\frac{jk x_1^2}{2z}\right) \) is the Fresnel impulse response. When there is a translation a of U1(x1) in the x direction, U1(x1) becomes U1(x1 - a). Suppose the corresponding result of Eq. (6) is U3. From Eq. (1), U3 can be simplified as follows:


\[ U_3(x_2) = \mathcal{F}^{-1}\{\mathcal{F}[U_1(x_1-a)]\,\mathcal{F}[h(x_1)]\} \tag{7} \]
\[ U_3(x_2) = \mathcal{F}^{-1}\{\mathcal{F}[U_1(x_1)]\,\mathcal{F}[h(x_1)]\exp(-j2\pi x_2 a)\}. \tag{8} \]

According to the properties of the Fourier transform, U3 can be written as

\[ U_3(x_2) = \mathcal{F}^{-1}\{\mathcal{F}[U_1(x_1)]\,\mathcal{F}[h(x_1)]\} \otimes \mathcal{F}^{-1}[\exp(-j2\pi x_2 a)], \tag{9} \]


where ⊗ denotes the convolution operation. Note that the expression before ⊗ is the Fresnel diffraction of U1 in Eq. (6), so it can be replaced by U2. According to the properties of the δ function, Eq. (9) can be further derived as

\[ U_3(x_2) = U_2(x_2) \otimes \delta(x_2 - a) = U_2(x_2 - a). \tag{10} \]


From Eq. (10), we can easily find that the position of the reconstructed image U2 is shifted from x2 = 0 to x2 = a, just like U1 in the space domain. That is to say, no matter where U1 is translated in the frequency plane, there will be the same translation of U2 in the object plane.

Compared with Fresnel diffraction, this feature of Fraunhofer diffraction provides a new idea for 3D modeling. We do not need to consider the position of the sub-holograms, because, according to the phase shift theorem, the reconstructed images of all sub-holograms will be located at the center of the space domain. Thus, we can reconstruct the 3D object as long as each sub-hologram is given the corresponding depth information.

Simple computer simulations are conducted to demonstrate the imaging result of Fraunhofer diffraction and the difference between Fraunhofer diffraction and Fresnel diffraction. The diffraction calculation is accomplished by the Gerchberg-Saxton (GS) algorithm with 5 iterations [20]. The GS algorithm was proposed to solve the problem of retrieving the phase of a field at two different planes when only the field amplitudes at those planes are known, given that the fields are related by a Fourier transform. The algorithm is also often used to calculate the complex amplitude that light in one plane must have in order to form a desired complex amplitude in a second plane, where the light distribution in the second plane is related to that in the first plane by a propagation function such as the Fourier transform. FFTs are used to iteratively propagate the complex amplitude backward and forward between the Fourier (or SLM) plane and the image plane, replacing the amplitude at the SLM plane with the illuminating laser beam intensity profile and the amplitude at the image plane with the target intensity.

Figure 2 shows the comparison between Fresnel diffraction and Fraunhofer diffraction. The original image, with a resolution of 1024×1024, is given in Fig. 2(a); it consists of 4 sub-images, each with a resolution of 512×512. The hologram given in Fig. 2(b) includes 4 sub-holograms, and each sub-hologram is calculated by Fresnel diffraction from the corresponding sub-image in Fig. 2(a). Figure 2(d) gives the reconstructed image generated by the hologram shown in Fig. 2(b). From Fig. 2(d), it is clear that the positions and orientations of the 4 squares remain unchanged. Figure 2(c) shows 4 sub-holograms calculated from the sub-images separately with Fraunhofer diffraction. Figure 2(e) gives the reconstructed image generated by the hologram shown in Fig. 2(c). We can clearly see that the result is quite different from that in Fig. 2(d). All 4 squares are located at the center of the imaging area, which corresponds to our principle. In addition, every reconstructed image is enlarged from 512×512 to 1024×1024 in Fig. 2(e). Actually, no matter what the resolution of a sub-hologram is, the reconstructed image of every sub-hologram will be enlarged to the size of the final hologram.
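For reference, the following is a minimal sketch of such a Fourier-plane GS iteration (an illustrative NumPy reimplementation, not the authors' code; the random initial phase and the uniform illumination amplitude are assumptions):

```python
import numpy as np

def gs_phase_hologram(target_amplitude, iterations=5):
    """Gerchberg-Saxton retrieval of a phase-only hologram whose Fraunhofer
    (Fourier-transform) reconstruction approximates `target_amplitude`."""
    rng = np.random.default_rng(0)
    # start from the target amplitude with a random phase in the image plane
    image_field = target_amplitude * np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
    slm_phase = np.zeros_like(target_amplitude)
    for _ in range(iterations):
        slm_field = np.fft.ifft2(np.fft.ifftshift(image_field))   # propagate back to the SLM plane
        slm_phase = np.angle(slm_field)
        slm_field = np.exp(1j * slm_phase)                        # enforce uniform illumination amplitude
        image_field = np.fft.fftshift(np.fft.fft2(slm_field))     # propagate forward to the image plane
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))  # enforce target intensity
    return slm_phase
```

Because the reconstruction here is the magnitude of a Fourier transform, translating the returned phase pattern across the SLM only multiplies the image-plane field by a linear phase factor, as in Eq. (4), leaving the reconstructed intensity centered.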


Fig. 2. The comparison between Fresnel diffraction and Fraunhofer diffraction. (a) The original image. (b)-(c) CGHs calculated from the original image with Fresnel and Fraunhofer diffraction, respectively. (d)-(e) Images reconstructed from the holograms in Figs. 2(b) and 2(c), respectively.


Due to this unique characteristic of the proposed method, the process of 3D modeling becomes much easier. Another important aspect of 3D modeling is to reconstruct every layer at its corresponding depth. In order to achieve this goal, a different quadratic phase is added to each sub-hologram to control the depth information, which can be expressed as

\[ w(x,y) = \exp\!\left[\frac{j\pi(x^2+y^2)}{\lambda f}\right], \tag{11} \]

where f is the predefined focal length. It should be mentioned that the quadratic phase introduced here can separate the conjugate image from the primary image for better image quality [21,22]. A lens group, consisting of Lens 1 and Lens 2, is placed behind the SLM to converge the light beam modulated by each sub-hologram, as shown in Fig. 3. d is the distance between the SLM and Lens 1, and f1 and f2 are the focal lengths of Lens 1 and Lens 2, respectively. The image plane of a layer is shifted by the quadratic phase, and the distance Δz between Plane 1 and Plane 2 is given by

\[ \Delta z = \frac{f_1^2}{f + f_1 - d}. \tag{12} \]


Eq. (12) tells us that the depth of every layer can be controlled accurately. In the next section, the effectiveness of the proposed method will be experimentally verified.
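As an illustration, a sketch (not the authors' code) of how a sub-hologram might be given its depth by applying the quadratic phase of Eq. (11) and predicting the axial shift of Eq. (12); the wavelength and pixel pitch are the values quoted in the experimental section, the other defaults are the Example 1 values, and the function names are hypothetical:

```python
import numpy as np

def add_depth_phase(sub_phase, f, wavelength=532e-9, pitch=6.4e-6):
    """Add the quadratic phase w(x, y) of Eq. (11) to a sub-hologram phase map
    (all quantities in metres); returns the wrapped phase to load on the SLM."""
    rows, cols = sub_phase.shape
    y = (np.arange(rows) - rows / 2) * pitch
    x = (np.arange(cols) - cols / 2) * pitch
    X, Y = np.meshgrid(x, y)
    w_phase = np.pi * (X**2 + Y**2) / (wavelength * f)
    return np.angle(np.exp(1j * (sub_phase + w_phase)))

def axial_shift(f, f1=0.25, d=0.03):
    """Delta-z of Eq. (12); f1 = 250 mm and d = 30 mm are the values of Example 1."""
    return f1**2 / (f + f1 - d)
```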

Fig. 3. Optical design of depth control.

3. Experimental results and discussions


Figure 4 shows the experimental setup of the proposed system. The phase-only SLM used here is a Holoeye LETO LCoS with a pixel pitch of 6.4 μm and a resolution of 1920×1080. The CGH loaded on the SLM is illuminated by a green laser source (532 nm) after a polarization beam splitter (PBS), which ensures that the polarization state of the beam is aligned with that of the SLM. The beam projected from the SLM is transmitted through a 4-f system and a spatial filter to suppress various noises. Finally, the results are captured by a camera (Nikon D7100), which can also capture the real scene through the beam splitter. The computing platform is an Intel i5-6500k CPU, 3.2 GHz, with 16 GB RAM.

Fig. 4. Schematic of experimental setup.


3.1 Example 1: Reconstruction of 9 squares


In order to validate the proposed system and algorithm, we conducted a series of holographic 3D display experiments. The focal lengths of Lens 1 and Lens 2 are both 250 mm and d is 30 mm. This experiment illustrates that our method can present 3D images with continuous depth cues and a large zooming range. Meanwhile, some problems are exposed in the experiment. As we can see in Fig. 5, the images are reconstructed from 9 squares. The resolution of the final hologram uploaded on the SLM is 1920×1080, so the resolution of each sub-hologram is 640×360. The diffraction calculation is accomplished by the GS algorithm with 5 iterations. In Fig. 5(a), all 9 squares are in the same plane without any depth information. In Fig. 5(b), the central square is in focus, as the arrow indicates, and the focal plane is 58.8 cm from the camera. In Fig. 5(c), the outermost square is in focus, also as the arrow indicates, and the focal plane is 9.5 cm from the camera. The remaining squares are focused and displayed in succession at 51.8 cm, 45.7 cm, 39.9 cm, 33.1 cm, 27.2 cm, 21.2 cm and 15.4 cm, with nearly equal spacing. We have also recorded this reconstruction by adjusting the focus of the camera back and forth between the first focused plane and the ninth focused plane. The whole process is shown in Visualization 1. We can see that the reconstructed images are focused and blurred in the same way as real objects at different depths. The results demonstrate that the proposed method can present continuous depth cues over a zooming range of nearly 50 cm.

Fig. 5. Optical results of single-plane and multi-plane holographic display. (a) 9 squares reconstructed in the same plane. (b)-(c) 9 squares reconstructed at different planes. (b)-(c) are focused at 58.8cm and 9.5cm respectively (Visualization 1).

However, as we can see in Visualization 1, when the focal length of the camera changes, the squares are not strictly located at the center of the imaging area. The virtual image received by the camera has the same characteristics as the real image.


Therefore, for the convenience of analysis, the filter is replaced by a whiteboard to receive the real image. The reason why there is a displacement of the reconstructed layers is discussed with reference to Fig. 6. Here Lens 1 in Fig. 3 is omitted, because d is short compared with the focal length f1. Actually, the image reconstructed from a sub-hologram is located at the center when the image plane coincides with the focal plane of Lens 1. However, for 3D display, there must be many other image planes at different depths besides the focal plane. CEDF represents one sub-hologram, and O2 would be the center point of the reconstructed image if the image plane coincided with the focal plane. However, when the quadratic phase is introduced, the image plane is moved forward and, consequently, the center point moves from O2 to A. Since triangle O2O3A is similar to triangle O2O1B, the offset AO3 can be deduced as ΔdΔz/f1. In our system, the active area of the LCoS is 12.5×7.1 mm and the maximum of Δd is 7 mm, so the offset AO3 will be 0.028Δz.
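A quick numeric check of this estimate (a sketch, assuming the f1 = 250 mm of Example 1):

```python
f1 = 250.0               # focal length of Lens 1 in mm (Example 1)
delta_d_max = 7.0        # maximum decentre of a sub-hologram on the LCoS, mm
print(delta_d_max / f1)  # 0.028 -> offset AO3 = 0.028 * delta_z, as stated above
```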

Fig. 6. Schematic of displacement in sub-holograms.


3.2 Example 2: Reconstruction of teapot and “AB”


Fig. 7. Optical results of complicated 3D object reconstruction. (a)-(b) Intensity maps of the teapot and the letters "AB", respectively. (c)-(d) Optical reconstruction of the teapot focused at 48.2 cm and 51.8 cm from the camera. (e)-(f) Optical reconstruction of the letters "AB" focused at 48.2 cm and 51.8 cm from the camera.

In the experiments, we find that when the offset is less than 1 mm, it is hardly detectable by the human eye. When the offset is equal to 1 mm, Δz can be deduced as 35.7 mm according to Fig. 6. Another optical experiment is carried out to realize complicated 3D object reconstruction with more layers and to prove the points mentioned above.

The optical setup and conditions are identical with those in Example 1. The results of this further verification are shown in Fig. 7. Figures 7(a) and 7(b) are the intensity maps of the teapot and the letters "AB", respectively. The two 3D objects are both divided into 25 layers from front to back in the z direction. The resolution of the final holograms uploaded on the SLM is also 1920×1080, so the resolution of each sub-hologram is 384×216. The distance between adjacent layers is shortened to 1.5 mm, and the thickness of the object is thus 36 mm. Figures 7(c) and 7(d) are the optical reconstructions of the teapot at two selected focus planes, 48.2 cm and 51.8 cm from the camera, respectively. It can be observed that when the focus plane is at 48.2 cm, the spout of the teapot in Fig. 7(c) is clearer than the handle. The sharp content moves to the handle in Fig. 7(d) when the focus distance is changed to 51.8 cm. The comparison between the same parts of Figs. 7(c) and 7(d) is more obvious, and the above observations become even more apparent when the details are magnified four times. Figures 7(e) and 7(f) are the reconstructions of the letters "AB", which are also focused at 48.2 cm and 51.8 cm from the camera, respectively. We can see that the letters "AB" are focused and blurred in the same way as the teapot. Meanwhile, we find that the reconstructed teapot and "AB" remain intact and the offset can hardly be noticed, which means the experimental results are consistent with the theoretical estimation.

In the traditional layer-based method, the projection vector is perpendicular to the hologram plane, and parts of a layer may be blocked by the layer in front of it due to their relative positions, which, if not handled, leads to the absence of the occlusion effect. In our method, every layer used is a complete intensity map, which means the information of every layer has been fully rendered [16]. The hidden but rendered parts can then be used for the occlusion calculation. Therefore, our method does not suffer from the "hidden surface removal" problem, although the occlusion effect can hardly be noticed in our experiments because the 3D models are too simple.

3.3 Comparison with other methods

In order to further illustrate the advantages of the proposed method, a comparison among the point cloud method, the traditional layer-based method and the proposed method is conducted. As we can see in Fig. 8, the three images are simulation results reconstructed from Fig. 7(a) by the three methods, respectively. By controlling the value of the PSNR, we keep the quality of each image almost the same. The image in Fig. 8(a) is reconstructed by the point cloud method, in which the teapot model is represented by a set of aggregated point light sources and the CGH is calculated by superimposing the wavefronts from all point light sources. The number of points involved here is 12882. The image in Fig. 8(b) is reconstructed by the traditional layer-based method of Ref. [16]. The number of layers is 25 and the resolution of every layer is 1920×1080. The image in Fig. 8(c) is reconstructed by the proposed method. The number of layers is also 25 and the resolution of every layer is 384×216 according to our method. The computing platform and the diffraction calculation are the same for the three methods. Detailed data are compared in Table 1.

Fig. 8. Simulation results of the teapot. (a) Image reconstructed by point cloud method. (b) Image reconstructed by traditional layer-based method. (c) Image reconstructed by our proposed method.


Table 1. Measurement of the three methods.

                        Point cloud method    Layer-based method    Our method
Number of points        12882                 \                     \
Number of layers        \                     25                    25
PSNR                    20.54                 22.34                 21.88
Processing time         469.62 s              12.25 s               0.95 s

It can be concluded from Table 1 that the proposed method is over 400 times faster than the point cloud method and over 10 times faster than the traditional layer-based method. This means the proposed method is more effective in reducing the calculation time, which is very helpful for real-time 3D reconstruction. From Fig. 8, it can be seen that there is almost no difference in visual quality. Therefore, our method can reduce the amount of calculation without significantly degrading the image quality.


In our method, when the number of layers is small (less than 40, as we tested in the experiments), the visual quality of every layer is acceptable to the human eye, with the advantage of high calculation speed. However, when the number of layers is more than 40, the quality of every layer can hardly be accepted. In this manuscript, a new way for 3D reconstruction has been provided, and, if necessary, it can be combined with other methods to support more layers. For example, owing to the persistence of vision of the human eye, time multiplexing of the LCoS can increase the number of layers. It can also be combined with the traditional layer-based method [15,16]: with the superposition of holograms, the number of layers will increase greatly.

3.4 Two methods to solve the "displacement" problem


However, the displacement of the reconstruction differs from object to object, the perceptibility of the displacement varies from person to person, and, for better 3D display, there is a strong need to provide a large zooming range.


Fig. 9. The comparison between convolution method and original method. (a) New arrangement of 9 sub-holograms. (b) Composition of the outermost sub-hologram in Fig. 9(a). (c) Original arrangement of 9 sub-holograms. (d)-(e) Simulation results reconstructed from holograms in Figs. 9(a) and 9(c) without depth information. (f)-(g) Experimental results reconstructed from hologram in Fig. 9(a) with depth information. (f)-(g) are focused at the central square and the outermost square respectively (Visualization 2).

Therefore, two methods are proposed to solve the "displacement" problem, based on further optimizing the arrangement of holograms. The first method, called the convolution method, is shown in Fig. 9 [23]. Figure 9(a) shows the new arrangement of 9 sub-holograms, in which each sub-hologram is centrosymmetric about the whole hologram. Figure 9(b) is the outermost sub-hologram in Fig. 9(a), which is made up of a 1920×1080 hologram and an 1810×1017 black mask. That means the reconstructed image will be the convolution of the hologram and the mask.


To ensure that the area of each sub-hologram is the same, the resolutions of the holograms and masks need to be carefully calculated. In Fig. 9(a), for a 9-layer reconstruction, the resolutions of the holograms are calculated as 1810×1017, 1693×952, 1568×882, 1431×805, 1280×720, 1109×622, 905×509 and 640×360 in succession. Figure 9(c) shows the original arrangement for a 9-layer reconstruction. Figures 9(d) and 9(e) show the simulation results reconstructed from Figs. 9(a) and 9(c) without depth information. Obviously, the image quality in Fig. 9(d) is poorer than that in Fig. 9(e). Figures 9(f) and 9(g) are experimental results reconstructed from Fig. 9(a) with depth information; they are focused at the central square and the outermost square, respectively. We have also recorded this reconstruction by adjusting the focus of the camera back and forth between the first focused plane and the ninth focused plane. The whole process is shown in Visualization 2. Comparing Visualization 2 with Visualization 1, it can be found that the displacement problem has been solved, but some new problems have arisen. The image quality is degraded due to the convolution with the black mask. The PSNRs of Figs. 9(d) and 9(e) are 16.69 and 22.34, respectively. Apparently, the more layers there are, the worse the image quality will be, and the PSNR will be lower than 10 if the number of layers is greater than 15, which means the image quality can hardly be accepted by the human eye. The computational cost also increases accordingly, because the resolution of every sub-hologram except the central one is greater than 640×360. Therefore, the convolution method can solve the displacement problem, but it may be inappropriate for complicated 3D reconstruction.
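The nested resolutions listed above are consistent with equal-area concentric rectangles at the 16:9 aspect ratio of the SLM, i.e. the k-th inner rectangle scaled by sqrt((9-k)/9); this is an inferred rule rather than one stated in the paper, but it reproduces the listed values to within a couple of pixels in height:

```python
import numpy as np

M, N, layers = 1920, 1080, 9
for k in range(1, layers):
    scale = np.sqrt((layers - k) / layers)   # equal-area concentric rectangles
    print(round(M * scale), round(N * scale))
# prints: 1810 1018, 1693 952, 1568 882, 1431 805, 1280 720, 1109 624, 905 509, 640 360
```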

Fig. 10. Schematic of the oversampling method. (a) 4 sub-holograms, each of size M×N. (b) New arrangement of the 4 sub-holograms with size 2M×2N. (c) New structural unit composed of 4 pixels from the 4 sub-holograms. (d) The hologram in Fig. 10(b) reduced to size M×N again. (e) Simulation result reconstructed from the hologram in Fig. 10(b). (f) Simulation result reconstructed with the oversampling method from 9 sub-holograms. (g)-(h) Experimental results reconstructed from 9 sub-holograms with depth information. (g)-(h) are focused at the central square and the outermost square, respectively (Visualization 3).


Another method, called the oversampling method, is shown in Fig. 10. Figure 10(a) shows 4 sub-holograms, and it is supposed that the size of each sub-hologram is M×N. Figure 10(b) shows the new arrangement of the 4 sub-holograms. The first structural unit of the new hologram, shown in Fig. 10(c), consists of the first pixels of each sub-hologram, which are marked by black boxes in Fig. 10(a). The remaining structural units consist of the pixels at the corresponding positions of each sub-hologram. The size of the new hologram is thus 2M×2N. For a given sub-hologram in Fig. 10(a), the pixel pitch and size are doubled in this rearrangement, leading to insufficient sampling of the frequency spectrum. Figure 10(e) shows the simulation result reconstructed from the hologram in Fig. 10(b), and it can be found that aliasing appears due to the undersampling.


In order to avoid aliasing, the hologram in Fig. 10(b) is reduced to size M×N again, as Fig. 10(d) shows. In this case, the pixel pitch and size of a given sub-hologram are the same as those of the sub-hologram in Fig. 10(a), which means aliasing no longer exists. The whole hologram in Fig. 10(d) contains 4 times as many pixels as each sub-hologram contributes to it, which is a kind of oversampling. The image in Fig. 10(f) is reconstructed with the oversampling method from 9 squares, and we can find that there is no aliasing problem at all. The PSNR of the image in Fig. 10(f) is calculated as 22.17, which is nearly the same as that of the original method. In this method, the computational cost also increases, because the number of pixels to be calculated becomes n×1920×1080 (where n is the number of layers). Figures 10(g) and 10(h) are experimental results reconstructed from 9 sub-holograms with depth information; they are focused at the central square and the outermost square, respectively. The reconstruction of the 9 squares has also been recorded in Visualization 3. It can be found that the displacement problem has also been solved, since each sub-hologram can be regarded as centrosymmetric about the whole hologram.
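A minimal sketch of this pixel-interleaved arrangement as we read it (illustrative code, not the authors'): each sub-hologram is computed at the full SLM resolution, and the final hologram takes each pixel from the sub-hologram assigned to that position within the repeating structural unit.

```python
import numpy as np

def interleave_subholograms(subs, grid):
    """Oversampling arrangement of Section 3.4: `subs` is a list of r*c
    sub-holograms, each computed at the full SLM resolution. Pixel (i, j) of
    the final hologram is taken from sub-hologram (i % r) * c + (j % c), so
    every sub-hologram is sampled on its own interleaved sub-lattice and is
    effectively centrosymmetric about the whole hologram."""
    r, c = grid
    rows, cols = subs[0].shape
    hologram = np.empty((rows, cols), dtype=subs[0].dtype)
    for di in range(r):
        for dj in range(c):
            k = di * c + dj
            hologram[di::r, dj::c] = subs[k][di::r, dj::c]
    return hologram

# e.g. for the 9-square example: interleave_subholograms(list_of_9_subs, grid=(3, 3))
```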

4. Conclusion

In this paper, a holographic 3D display scheme based on optimizing the arrangement of holograms has been presented. To this end, a comparison between Fresnel diffraction and Fraunhofer diffraction is conducted. The experimental results demonstrate that our method can reconstruct multi-plane 3D objects with continuous depth cues, that the process of 3D modeling is simple, and that the computational complexity is accordingly decreased. We also propose two methods, the convolution method and the oversampling method, to solve the displacement problem encountered in the experiments by further optimizing the arrangement of sub-holograms. Each method has its own merits and demerits. It is expected that the proposed method may provide a better solution for future holographic 3D display research.

Funding

National Natural Science Foundation of China (NSFC) (11804299).

Declaration of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

CRediT authorship contribution statement

Peng Sun: Writing — original draft, Conceptualization, Methodology, Validation. Chenning Tao: Validation. Jinlei Zhang: Validation. Rengmao Wu: Supervision. Siqi Liu: Validation. Qinzhen Tao: Validation. Chang Wang: Validation. Fei Wu: Supervision. Zhenrong Zheng: Conceptualization, Methodology, Validation, Supervision.

References

1. C. Chang, Y. Qi, J. Wu, J. Xia, and S. Nie, "Speckle reduced lensless holographic projection from phase-only computer-generated hologram," Opt. Express 25(6), 6568-6580 (2017).
2. A. Maimone, A. Georgiou, and J. S. Kollin, "Holographic near-eye displays for virtual and augmented reality," ACM Trans. Graph. 36(1), 1-16 (2017).
3. Y. Qi, C. Chang, and J. Xia, "Speckleless holographic display by complex modulation based on double-phase method," Opt. Express 24(26), 30368-30378 (2016).
4. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, "Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter," Opt. Express 25(7), 8412-8424 (2017).
5. S. Tao and W. Yu, "Beam shaping of complex amplitude with separate constraints on the output beam," Opt. Express 23(2), 1052-1062 (2015).
6. J. Campos, A. Márquez, M. J. Yzuel, J. A. Davis, D. M. Cottrell, and I. Moreno, "Fully complex synthetic discriminant functions written onto phase-only modulators," Appl. Opt. 39(32), 5965-5970 (2000).
7. P. C. Mogensen and J. Glückstad, "Phase-only optical encryption," Opt. Lett. 25(8), 566-568 (2000).
8. T. Shimobaba, T. Kakue, and T. Ito, "Review of fast algorithms and hardware implementations on computer holography," IEEE Trans. Industr. Inform. 12(4), 1611-1622 (2016).
9. S. C. Kim and E. S. Kim, "Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods," Appl. Opt. 48(6), 1030-1041 (2009).
10. S. C. Kim, J. M. Kim, and E. S. Kim, "Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms," Opt. Express 20(11), 12021-12034 (2012).
11. T. Shimobaba, N. Masuda, T. Sugie, S. Hosono, S. Tsukui, and T. Ito, "Special-purpose computer for holography HORN-3 with PLD technology," Comp. Phys. Commun. 130, 75-82 (2000).
12. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, "Fast polygon-based method for calculating computer-generated holograms in three-dimensional display," Appl. Opt. 52(1), A290-A299 (2013).
13. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, "Accelerated synthesis algorithm of polygon computer-generated holograms," Opt. Express 23(3), 2863-2871 (2015).
14. K. Matsushima, M. Nakamura, and S. Nakahara, "Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique," Opt. Express 22(20), 24450-24465 (2014).
15. H. Zheng, Y. Yu, T. Wang, and L. Dai, "High-quality three-dimensional holographic display with use of multiple fractional Fourier transform," Chin. Opt. Lett. 7(12), 1151-1154 (2009).
16. H. Zhang, L. Cao, and G. Jin, "Computer-generated hologram with occlusion effect using layer-based processing," Appl. Opt. 56(13), F138-F143 (2017).
17. Y. Takaki and K. Ikeda, "Simplified calculation method for computer-generated holographic stereograms from multi-view images," Opt. Express 21(8), 9652-9663 (2013).
18. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, "Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms," Opt. Express 20(19), 21645-21655 (2012).
19. J. W. Goodman and R. W. Lawrence, "Digital image formation from electronically detected holograms," Appl. Phys. Lett. 11(3), 77-79 (1967).
20. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik 35, 237-246 (1972).
21. P. Sun, S. Chang, S. Liu, X. Tao, C. Wang, and Z. Zheng, "Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm," Opt. Express 26(8), 10140-10151 (2018).
22. Z. Wang, G. Q. Lv, Q. B. Feng, A. T. Wang, and H. Ming, "Resolution priority holographic stereogram based on integral imaging with enhanced depth range," Opt. Express 27(3), 2689-2702 (2019).
23. Y. Deng and D. Chu, "Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays," Sci. Rep. 7(1), 5893 (2017).

