Resolution enhancement of near-eye displays by overlapping images

Optics Communications 458 (2020) 124723

Qijia Cheng a, Weitao Song a,∗, Feng Lin b, Yue Liu c,d, Yongtian Wang c,d, Yuanjin Zheng a

a School of Electrical and Electronic Engineering, Nanyang Technological University, S2.1 B6-02 50 Nanyang Ave, 639798, Singapore
b School of Computer Science and Engineering, Nanyang Technological University, N4-02a-05 50 Nanyang Ave, 639798, Singapore
c Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
d AICFVE of Beijing Film Academy, 4 Xitucheng Rd, Beijing, 100088, China

ARTICLE INFO

Keywords: Near-eye displays; Imaging systems; Image processing

ABSTRACT

Breaking the angular-resolution/field-of-view (FOV) invariant to achieve a near-eye display with both a large FOV and high resolution has long been a topic of interest. We present a super-resolution method for near-eye displays that overlaps the images from two display panels using a half mirror. Key aspects, including the theoretical analysis, the optimization method, the system set-up, and the calibration process, are discussed in detail. A proof-of-concept prototype was developed that provides a super-resolution image with a significantly higher sampling rate.

1. Introduction

Offering a solution to the growing demand for mobile displays and wearable computing, near-eye displays have become popular and found many applications in virtual reality (VR) and augmented reality (AR) [1–4]. Different kinds of high-quality near-eye display systems have been developed [5,6], but it is still difficult to realize a large field-of-view (FOV) while keeping a high resolution, due to the angular-resolution/FOV invariant in near-eye displays. The angular resolution of a near-eye display is determined by the FOV and the resolution of the display panel; that is, the angular resolution decreases as the FOV increases, no matter how complex the optics are made. Previous researchers have proposed various approaches to break the invariant, which can be classified as high-resolution area of interest [7], partial binocular overlap [8], dichoptic area [9], and tiling [10]. With the high-resolution area-of-interest method, high resolution and a wide FOV are obtained simultaneously by superimposing a small-FOV, high-resolution image, known as the area of interest, on a large-FOV background. However, the required pupil-tracking system makes the device bulky, and the laser used to generate the perceived image poses risks to eye safety. Partial binocular overlap, on the other hand, makes the FOVs of the two eyes overlap only in the central area. The dichoptic-area method presents a large-FOV, low-resolution image to one eye and a small-FOV, high-resolution image to the other. The display performance of these two methods has been verified, but systems based on them lead to visual fatigue because each eye observes a different view zone. Optical tiling is the most feasible method for applications across display systems, such as video walls [11], multiple-projector displays [12], and three-dimensional display systems [13,14].

The tiled near-eye display still suffers from drawbacks such as small eye clearance, vignetting, visible tiling seams, and mismatch of the optical axes, which were discussed in detail in previous research [15,16]. Overlapping multiple images can generate a high-resolution image, which has been applied in multi-projector systems [17–19]; it is a potential approach to break the angular-resolution/field-of-view invariant in near-eye displays. Near-eye displays with multiple display panels have been developed for various applications in which different display channels cover different view zones to extend the FOV; these can be categorized under the tiling method mentioned above. Meanwhile, devices with multiple focal depths have been implemented, also known as multi-focal-plane displays and computational multi-layer displays [20–22]. These approaches were proposed to solve or relieve the accommodation–convergence problem that may result in visual discomfort. High-dynamic-range near-eye displays can also be achieved by overlapping multiple images in the same view zone at the same focal depth: a vivid image with a dynamic range of 14 orders of magnitude was generated on a pair of liquid-crystal-on-silicon microdisplays with customized optics. That is to say, overlapping two images in front of the human eyes in a near-eye display has already been achieved with different types of structures such as relay optics, freeform optics, and optical waveguides [23]. The super-resolution method based on multiple projectors has been developed and discussed in previous works; however, the projection screen and the projectors are always assembled individually, making it hard to satisfy the alignment tolerances required for super-resolution. In Ref. [24], Majumder et al. discussed the practical feasibility of the method and concluded that it cannot be applied to an actual multi-projector display system.
Moreover, to enhance the resolution or to break the angular resolution/field-of-view (FOV) invariant, the drawbacks of the previous

∗ Corresponding author. E-mail address: [email protected] (W. Song).

https://doi.org/10.1016/j.optcom.2019.124723 Received 23 July 2019; Received in revised form 6 October 2019; Accepted 9 October 2019 Available online 11 October 2019 0030-4018/© 2019 Elsevier B.V. All rights reserved.


works, including the high-resolution area of interest, partial binocular overlap, dichoptic area, and tiling, have also been discussed. In this paper, we introduce the super-resolution method using overlapping images to near-eye displays. Unlike overlapping projectors, the presented near-eye display with overlapping images can guarantee the relative positions of the sub-frame images during use, which ensures the practical feasibility of the method. Compared with previous near-eye display resolution-enhancement methods, no visible seam is produced and no active optical elements are employed in the system. In the developed implementation of the proposed method, the displays are overlapped by a reflective half-mirror. Calibration techniques are introduced to align the pixels of the near-eye display, which is crucial for resolution enhancement, and image generation and optimization procedures are developed to improve the perceived resolution. Based on this design, a prototype near-eye display was fabricated to verify the proposed method; it overcomes the physical resolution limit of a single display panel without additional artifacts such as seams or mismatch.

2. Principles and methods of a super-resolution near-eye display using overlapping images

2.1. System set-up

As mentioned above, overlapping two images in near-eye displays has already been achieved with different types of structures in previous works. Fig. 1(a) gives one of the simplest ways to realize the proposed approach. The least distance of distinct vision, the distance at which the normal human eye can focus a sharp image of an object, is 250 mm [25]; that is, the display panel cannot be located nearer than this distance. Moreover, when current near-eye displays with one depth plane are used in VR/AR applications, accommodation–convergence conflicts will lead to visual fatigue [26].
Thus, the distance between the generated display plane and the human eye should also be large enough, and an eyepiece is employed to image the display panel far away. In this design, two display panels are placed perpendicular to each other, at the rear and at the top. A half mirror is inserted between the eyepiece and the rear display panel so that the two display panels are symmetric across the mirror. In this way, light from each display panel is transmitted or reflected by the half mirror, respectively, and finally enters the user's eyes. Thus, when the user looks at the display system, the images preloaded on the display panels appear overlapped with each other. In practice, difficulties arise during component alignment, due to the precision limits of the mechanical systems. On the other hand, a sub-pixel-level mismatch of the overlapping images can help achieve super-resolution. Let a two-dimensional (2D) Cartesian coordinate system be defined on the required image at a given distance from the human eye. The positions of the two overlapping images are shown in Fig. 1(b); their coordinate systems are determined by rotation matrices R1, R2 and translation vectors T1, T2, respectively. To display a required image (''pepper'' in Fig. 1(b)), two sub-frame images should be shown on the corresponding devices. The user perceives the overlapping image as a combination of the two low-resolution sub-frame images, yielding the super-resolution image shown in Fig. 1(e) and (f). The principle and rendering method are described in the following sections. As a result, a near-eye display with high resolution can be obtained using the presented method.

Fig. 1. (a) Schematic diagram of the near-eye display with two overlapping images. (b) The source image and the positions of the two overlapping images. (c) and (d) Low-resolution images displayed on the individual display panels. (e) Super-resolution image perceived by the human eye. (f) Zoomed-in views of the original (upper) and super-resolution (lower) images.

2.2. Pixel-based image generation for one single display panel

The principle of super-resolution in a multi-projector system was presented by A. Majumder [24]. A similar theory can be used to characterize the overlapping method in near-eye displays. In this section, the theoretical analysis is presented for a one-dimensional (1D) signal, namely a scan-line of the displayed image, for ease of comprehension. Fig. 2 gives the continuous signal and the sampling signal in both the spatial and frequency domains. Here, a swept-frequency cosine continuous signal is employed to illustrate the process. Fig. 2(a) and (b) show the signal and the corresponding grayscale image, respectively. The frequency spectrum of the continuous signal is shown in Fig. 2(g) and is denoted by P(f).

Fig. 2. In the spatial domain: (a) original signal; (b) grayscale image of the original signal; (c) sampling signal; (d) pixel structure of the displayed 1D image; (e) reconstructed signal; (f) grayscale image of the reconstructed signal. In the frequency domain: (g) frequency spectrum of the original signal; (h) frequency spectrum of the sampling signal; (i) convolution signal; (j) kernel function; (k) reconstructed signal.


Fig. 3. (a), (b) and (c) are schematic diagrams of the original signal, the sampling signal and the convolution signal when l = T/2. (d), (e) and (f) are schematic diagrams of the original signal, the sampling signal and the convolution signal for general overlapping situations.

Fig. 4. Super-resolution results for 2D signals. (a) is one image with low resolution; (b) is the super-resolution image with a shift distance of one fourth of a pixel width; and (c) is the super-resolution result with a shift distance of half a pixel width. (d), (e) and (f) are enlarged close-ups of the corresponding images above.

A single scan-line of the displayed image consists of multiple pixels, and uniform sampling in the spatial domain is assumed. The distance between adjacent pixels is denoted as T, and the sampling function is a periodic comb function in the spatial domain. By the properties of the Fourier transform, the corresponding frequency-domain function is another comb with period $f_s = 1/T$, as shown in Eq. (1). Fig. 2(c) and (d) show the sampling signal and the pixel structure, respectively. The frequency-domain information of the sampling signal is shown in Fig. 2(h). Thus, the spectrum of the signal after sampling becomes Fig. 2(i), as a result of convolution.

$$S(f) = \sum_{n=-\infty}^{+\infty} \delta\left(f - n f_s\right) = \sum_{n=-\infty}^{+\infty} \delta\left(f - \frac{n}{T}\right) \tag{1}$$

Based on the Nyquist sampling theorem, the signal should first be band-limited to half of $f_s$, because frequencies above this limit cannot be resolved at this sampling period. In the spatial domain, the 1D signal is sampled to obtain the grayscale of each pixel, which can be expressed as the convolution of the continuous signal and the comb function. In the frequency domain, the sampled signal C(f) is a replication of the frequency spectrum P(f); Eq. (2) expresses this convolution between the original signal P(f) and the sampling signal S(f), using the floor function $\lfloor \cdot \rfloor$:

$$C(f) = P(f) \otimes S(f) = P\!\left(f - \left\lfloor \frac{f + f_s/2}{f_s} \right\rfloor f_s\right) \tag{2}$$

In terms of the reconstruction process, the displayed image is the convolution of the sampled signal with the kernel function K(f), which is the point-spread function of one pixel. In an actual system, K(f) can be very complex, depending on the illumination distribution of the displayed pixels; for simplicity, K(f) is treated as a function band-limited to $f_s/2$. Since the content of C(f) below $f_s/2$ is already resolved, the original frequency spectrum can be extracted by the sampling and reconstruction process. Fig. 2(e) and (f) show the reconstructed signal in value and grayscale, and its frequency information is shown in Fig. 2(k), after reconstruction with the kernel function in Fig. 2(j).

$$R(f) = K(f)\, C(f) \tag{3}$$

2.3. Image formation from two overlapping images

The super-resolution process for two overlapping images is now investigated, and the swept-frequency cosine continuous signal is still used in this section; its spectrum is shown in Fig. 3(a) and (d). In the spatial domain, the sampling signals of the overlapping images are given by two periodic comb functions with shift distance l (l ≤ T); l = T/2 means that the two images are shifted by exactly half a pixel width when overlapping. The overlapping sampling signal $S_o(f)$ is a combination of two periodic comb functions with shift distance l, whose frequency response can be derived by the Fourier transform and expressed as follows, shown in Fig. 3(e):

$$S_o(f) = 2 S(f) \cos^2(\pi f l) = 2 \sum_{n=-\infty}^{+\infty} \delta\left(f - n f_s\right) \cos^2(\pi f l) \tag{4}$$

When l = T/2, the corresponding frequency function $S_o(f)$ reduces to a comb function with period $2 f_s$:

$$S_o(f) = 2 \sum_{n=-\infty}^{+\infty} \delta\left(f - 2 n f_s\right) \tag{5}$$

Similar to the analysis in Section 2.2, the signal up to frequency $f_s$ can be resolved from the overlapping sampling signal with a shift distance of T/2. That is to say, using a kernel function of width $f_s$, the usable frequency range of the reconstructed signal for overlapping images is twice that for a single image. In the following equations, $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ are the floor and ceiling functions, respectively, and $\otimes$ is the convolution operator.

$$C_o(f) = P(f) \otimes S_o(f) = P\!\left(f - \left\lfloor \frac{f + f_s}{2 f_s} \right\rfloor 2 f_s\right) \tag{6}$$

$$R_o(f) = K_o(f)\, C_o(f) \tag{7}$$

For general situations (l ≠ T/2), the signal $C_o(f)$ is given by Eq. (8). The spectrum of P(f) within $f_s/2$ is then contaminated by the adjacent replica at $f_s$. Therefore, if the pixels of the two images are interleaved at exactly half of one pixel width, twice the resolution can be achieved in one dimension. For general shifts, the resolution is still improved over the original, but an optimization method should be applied to obtain the resolution gain by suppressing high-frequency noise in the frequency domain.

$$C_o(f) = 2 P\!\left(f - \left\lfloor \frac{f}{f_s}\right\rfloor f_s\right) \cos^2\!\left(\pi \left\lfloor \frac{f}{f_s}\right\rfloor f_s\, l\right) + 2 P\!\left(f - \left\lceil \frac{f}{f_s}\right\rceil f_s\right) \cos^2\!\left(\pi \left\lceil \frac{f}{f_s}\right\rceil f_s\, l\right) \tag{8}$$
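The bandwidth doubling for l = T/2 in Eq. (5) can be illustrated numerically: interleaving two samplings of the same scan-line signal, offset by half a pixel pitch, reproduces exactly the sequence obtained by sampling once at twice the rate. The following is a minimal numpy sketch; the chirp sweep rate and sample counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

T = 1.0                      # pixel pitch of one display panel
N = 64                       # pixels per panel on one scan-line

def signal(x):
    # swept-frequency cosine, as used in the 1D analysis (illustrative rate)
    return np.cos(2 * np.pi * 0.002 * x ** 2)

x1 = np.arange(N) * T        # sampling comb of panel 1
x2 = x1 + T / 2              # comb of panel 2, shifted by l = T/2
s1, s2 = signal(x1), signal(x2)

# interleave the two sub-frame samplings -> effective sampling period T/2
interleaved = np.empty(2 * N)
interleaved[0::2], interleaved[1::2] = s1, s2

# direct sampling at twice the rate yields the identical sequence
direct = signal(np.arange(2 * N) * T / 2)
assert np.allclose(interleaved, direct)
```

The assertion holds because the union of the two shifted combs is itself a comb of period T/2, which is the frequency-domain statement of Eq. (5).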


2.4. Generic computational rendering algorithm

In general cases, the relationship between the overlapping images and the required image will not match the designed parameters, due to misalignment in the actual system. Based on the 2D Cartesian coordinate system defined in Fig. 1(b), the rotation matrices R1, R2 and translation vectors T1, T2 of the two overlapping images can be obtained by many calibration methods that have been discussed in other works. To obtain the two low-resolution sub-frame images for the overlapping display devices, a high-resolution source image should be prepared as input (with at least twice the resolution of one display device), which is treated as the target image. The target image is defined in the world coordinate system with a resolution of M × N and pixel size $p^W_x \times p^W_y$. Based on the calibration result, the pixels of the overlapping images corresponding to pixel coordinates (i, j) on the target image are obtained as $(i^1, j^1)$ and $(i^2, j^2)$:

$$\begin{bmatrix} x^k \\ y^k \end{bmatrix} = R^k \begin{bmatrix} (i - M/2)\, p^W_x \\ (j - N/2)\, p^W_y \end{bmatrix} + T^k, \quad k = 1, 2 \tag{9}$$

$$i^k = \operatorname{round}\!\left(x^k / p^k_x + M^k/2\right), \qquad j^k = \operatorname{round}\!\left(y^k / p^k_y + N^k/2\right), \quad k = 1, 2 \tag{10}$$
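The coordinate mapping of Eqs. (9)–(10) can be sketched as a short function; the function name, argument layout, and the identity calibration below are illustrative assumptions for demonstration only.

```python
import numpy as np

def target_to_panel(i, j, R, T, M, N, pW, p_panel, M_panel, N_panel):
    """Map target-image pixel (i, j) to the nearest pixel on one
    overlapping display panel, following Eqs. (9)-(10).
    R: 2x2 rotation matrix, T: translation vector (from calibration).
    pW / p_panel: pixel sizes of the target image / panel."""
    world = np.array([(i - M / 2) * pW[0], (j - N / 2) * pW[1]])
    x, y = R @ world + T                           # Eq. (9)
    ik = int(round(x / p_panel[0] + M_panel / 2))  # Eq. (10)
    jk = int(round(y / p_panel[1] + N_panel / 2))
    return ik, jk

# identity calibration maps the target grid straight onto the panel
R = np.eye(2)
Tv = np.zeros(2)
print(target_to_panel(10, 20, R, Tv, 800, 480, (1, 1), (1, 1), 800, 480))
# -> (10, 20) when both grids coincide
```

In an actual system, R and T come from the structured-light calibration described in Section 3, and the rounding step is what introduces the sub-pixel interleaving exploited for super-resolution.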

Fig. 5. (a) Experimental set-up, including the developed prototype and a digital camera. (b) Selected frames of structured light patterns used for the calibration process. (c) Calibration results of the two overlapped images.

The prototype was fabricated by 3D printing and painted with a gloss black finish. The two display panels were connected to the computer via High-Definition Multimedia Interface (HDMI) cables. The developed near-eye display system is shown in Fig. 5(a) along with a camera that acts as the human visual system. During the experiments, a digital camera without a low-pass filter (LPF) was used for image capturing, which is crucial for pixel-level image calibration and evaluation. According to the rendering procedure, the relationship between the camera and the two display panels should first be calibrated. Sinusoidal phase-shift patterns and the multi-frequency heterodyne phase-unwrapping method were used in the experiment to obtain the mapping; many other methods could be applied, as mentioned above. Specifically, a three-frequency, four-step phase-shift implementation, displayed on both panels, was used to achieve satisfactory results. Fig. 5(b) gives part of the structured light patterns viewed by the camera, from which the rotation matrices and translation vectors can be obtained. In actual near-eye display systems, misalignment among the optical and mechanical components causes noticeable rotations and shifts between the two overlapping images (shown in Fig. 5(c)). This misalignment demonstrates the necessity of the calibration process discussed in Section 2.4. By applying calibration, the rotation and shift of the overlapping display can be characterized, thus providing resolution enhancement. A high-resolution target image was employed to test the experimental results, and the two images to be loaded on the overlapped screens were calculated from the rotation matrices and translation vectors. The captured photos of a single displayed image (only one display panel on) and the overlapping image are shown in Fig. 6(a), (b) and (c), respectively. Fig. 6(a) and (b) show that the calculated images may contain some discontinuous areas. The rendering process obtains the decomposed images from a high-resolution image by optimizing the pixels of the two decomposed images to minimize the error between the reconstructed image and the given high-resolution image (Eqs. (11)–(12)). The continuity of each decomposed image is not considered in the developed method; thus, discontinuous areas can be observed in the individual frames, but they cannot be found in the final overlapped image, as seen in Fig. 6(c). To demonstrate the effectiveness of the proposed method, a control sample was generated by resizing the target image to the native resolution of a single display panel. The display panel used here has a native resolution of 800 × 480; thus, the resized 800 × 480 image can be treated as the traditional image for comparison with the developed method. Fig. 6(d) gives a photograph of the experimental result with one display panel in front of a human eye, as commonly used in commercial products. The presented overlapping method (Fig. 6(g)) brings a significant improvement in overall image quality compared with the previous works (Fig. 6(h)), and no tiling seam can be observed. Comparing the results of the overlapping method with the traditional method (Fig. 6(g) and (h)), the line patterns on the windows are continuous, and the black and white stripes can be clearly figured


To obtain the optimized result, the error between the final overlapping result and the target image should be minimized. If the grayscales of the pixels on the two overlapping images are treated as variables, the optimization seeks the minimum sum of squared differences between every pixel $I^W_{i,j}$ of the target and the corresponding overlapping grayscale, which can be expressed as follows:

$$\arg\min \sum_{i=1,\, j=1}^{i \le M,\; j \le N} \left\| I^W_{i,j} - I^1_{i^1,j^1} - I^2_{i^2,j^2} \right\|^2 \tag{11}$$

$$\text{subject to } 0 \le I^1_{i^1,j^1},\; I^2_{i^2,j^2} \le 1 \tag{12}$$
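The constrained least-squares problem of Eqs. (11)–(12) can be sketched with projected gradient descent on a toy 1D configuration. The half-pixel index maps, step size, and iteration count below are illustrative assumptions; the paper itself only specifies a linear optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1D version of Eqs. (11)-(12): a 2N-sample target reproduced by two
# N-pixel panels whose sampling grids are offset by half a pixel
N = 8
target = rng.random(2 * N)                    # target grayscales I^W in [0, 1]
idx1 = np.arange(2 * N) // 2                  # target sample -> panel-1 pixel
idx2 = np.clip((np.arange(2 * N) - 1) // 2, 0, N - 1)   # panel 2, shifted

I1 = np.full(N, 0.5)                          # panel grayscales (the variables)
I2 = np.full(N, 0.5)

def mse():
    return np.mean((I1[idx1] + I2[idx2] - target) ** 2)

mse_before = mse()
lr = 0.1
for _ in range(2000):
    err = I1[idx1] + I2[idx2] - target        # residual of Eq. (11)
    # gradient step per panel pixel, then project onto [0, 1] (Eq. (12))
    I1 = np.clip(I1 - lr * np.bincount(idx1, err, minlength=N), 0, 1)
    I2 = np.clip(I2 - lr * np.bincount(idx2, err, minlength=N), 0, 1)

assert mse() < mse_before
```

The clipping step enforces the box constraint of Eq. (12); because the objective is a convex quadratic, the projected gradient iteration monotonically reduces the residual for a sufficiently small step size.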

Based on the rendering method above, super-resolution results for two-dimensional (2D) images without rotation are shown in Fig. 4. The shift distances between the two overlapping images are set to one fourth and one half of a pixel width in both dimensions (l = T/4 and T/2). The original image and the super-resolution results are shown, which verify the theoretical analysis above. For general cases, the rendering procedure can be summarized as follows:
(a) Obtain the camera–display mapping for both displayed images; this can be achieved using various methods (e.g., by finding corner points or decoding structured light images).
(b) Calculate the rotation matrices and translation vectors for both displayed images.
(c) Refine all of the parameters (the grayscales of the pixels in both displayed images) in Eqs. (9)–(12), using the linear optimization algorithm.
(d) Display the images on the display devices based on the results of (c).

3. Experimental results

To verify the presented method, a near-eye display prototype was implemented with two 5.0 in. liquid crystal display (LCD) panels and two pieces of glass coated with a semi-transparent film. Fig. 5(a) shows a photo of the developed near-eye display system during the experiment. The distance between the eyepiece and the display panel was set to 95 mm, and the focal length of the eyepiece to 100 mm, which places the virtual image at a distance of about 2 m. The size of each LCD panel is 108 mm × 73.2 mm, and each eye can observe half of the images on both LCD panels through one half-mirror.
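The quoted virtual-image distance can be checked with the standard thin-lens magnifier relation: with the display at $u = 95$ mm, inside the focal length $f = 100$ mm, the eyepiece forms a virtual image at

$$|v| = \frac{u f}{f - u} = \frac{95 \times 100}{100 - 95}\ \text{mm} = 1900\ \text{mm} \approx 2\ \text{m},$$

consistent with the stated design.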


Fig. 7. Experimental results of (a) the resized image and (d) the overlapped image (super-resolution image). Enlarged images for the corresponding images are listed in (b) and (c).

numbers in the experimental results. As can be seen, the stripes in the overlapping image can be distinguished up to a reading of more than 4, while the reading of the control sample is only around 3.

4. Conclusions

An image overlapping method was developed to address the resolution bottleneck in near-eye displays. A dual-display near-eye prototype was designed to demonstrate the effectiveness of the presented super-resolution method. By adding one extra display panel, the resolution perceived by the user can be improved compared with a traditional single-display setup, as validated by experiment. Compared with existing methods, our method reproduces a high-resolution image without tiling seams. This research can act as guidance for achieving high-resolution near-eye displays at low additional cost, and thus has great potential in VR/AR applications with critical resolution requirements. Moreover, this work can serve as a reference for other systems, such as projection displays and naked-eye 3D displays. Future work includes real-time rendering and higher-accuracy calibration methods.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 6. Experimental results of (a), (b) decomposed images, (c) overlapped image (super-resolution image), and (d) resized image. Enlarged images for the corresponding images are listed in (e)–(h).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61727808) and the A*STAR RIE2020 AME Programmatic Funding (A18A7b0058).

out in the photograph of the developed method. The line patterns are discontinuous in the traditional method, and the black and white stripes appear as several dots due to the under-sampling (low resolution) of the traditional method. As discussed in Section 2.3, the resolution is doubled in one dimension for overlapping images with a shift distance of half a pixel, and the resolution is not increased when the two overlapping images match pixel by pixel without any shift. As the calibration data differ between set-ups of near-eye display systems, it is hard to give a uniform value for all systems using the developed method. To show the display performance quantitatively, aside from the real-world image demonstration, a resolution chart was tested, which not only provides fine detail features that show the effectiveness but also gives a quantitative measure of the enhancement. The captured photos of the single display image (resized image) and the overlapping image are shown in Fig. 7(a) and (d), respectively. Enlarged images are given in Fig. 7(b) and (c), along with colored squares to mark the positions. The resolution enhancement can be observed from the high-frequency stripes and the corresponding

Appendix A. Supplementary data

Supplementary material related to this article can be found online at https://doi.org/10.1016/j.optcom.2019.124723.

References

[1] O. Cakmakci, J. Rolland, Head-worn displays: A review, J. Disp. Technol. 2 (3) (2006) 199–216.
[2] H. Hua, L.D. Brown, C. Gao, Scape: supporting stereoscopic collaboration in augmented and projective environments, IEEE Comput. Graph. Appl. 24 (2004) 66–75.
[3] J.P. Rolland, H. Fuchs, Optical versus video see-through head mounted displays in medical visualization, Presence 9 (2000) 287–309.
[4] J.P. Rolland, K. Thompson, See-through head worn displays for mobile augmented reality, Commun. China Comput. Fed. 7 (2011) 28–37.
[5] D. Cheng, Y. Wang, H. Hua, M.M. Talha, Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism, Appl. Opt. 48 (2009) 2655–2668.


[16] Weitao Song, Dewen Cheng, Zhaoyang Deng, Yue Liu, Yongtian Wang, Design and assessment of a wide FOV and high-resolution optical tiled head-mounted display, Appl. Opt. 54 (2015) E15–E22.
[17] Takayuki Okatani, Mikio Wada, Koichiro Deguchi, Study of image quality of superimposed projection using multiple projectors, IEEE Trans. Image Process. 18 (2) (2009) 424–429.
[18] Niranjan Damera-Venkata, N.L. Chang, Display supersampling, ACM Trans. Graph. 28 (1) (2009) 1–19.
[19] Will Allen, R. Ulichney, 47.4: Invited paper: Wobulation: Doubling the addressed resolution of projection displays, SID Symp. Dig. Tech. Pap. 36 (1) (2012) 1514–1517.
[20] F.C. Huang, K. Chen, G. Wetzstein, The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues, ACM Trans. Graph. 34 (4) (2010) 60.
[21] D. Lanman, M. Hirsch, Y. Kim, R. Raskar, Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization, ACM Trans. Graph. 29 (6) (2010) 1–10.
[22] M. Liu, C. Lu, H. Li, X. Liu, Near eye light field display based on human visual features, Opt. Express 25 (9) (2017) 9886.
[23] Miaomiao Xu, Hong Hua, High dynamic range head mounted display based on dual-layer spatial modulation, Opt. Express 25 (2017) 23320–23333.
[24] A. Majumder, Is spatial super-resolution feasible using overlapping projectors?, in: Proceedings (ICASSP '05), IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005.
[25] Y. Takubo, Y. Hisatake, T. Lizuka, et al., 64.1: Invited paper: Ultra-high resolution mobile displays, SID Symp. Dig. Tech. Pap. 43 (1) (2012) 869–872.
[26] D.M. Hoffman, A.R. Girshick, K. Akeley, M.S. Banks, Vergence–accommodation conflicts hinder visual performance and cause visual fatigue, J. Vis. 8 (3) (2008) 1–30.

[6] X. Hu, H. Hua, High-resolution optical see-through multi-focal plane head-mounted display using freeform optics, Opt. Express 22 (2014) 13896–13903.
[7] J.E. Melzer, Overcoming the field-of-view/resolution invariant in head-mounted displays, Proc. SPIE 3362 (1998) 284–293.
[8] D.R. Tyczka, M.J. Chatten, J.B. Chatten, J.O. Merritt, H.L. Task, D.G. Hopper, B.A. Fath, Development of a dichoptic foveal/peripheral head-mounted display with partial binocular overlap, Proc. SPIE 8041 (2011) 80410F.
[9] K.W. Arthur, Effects of Field of View on Performance with Head-Mounted Displays (Ph.D. thesis), University of North Carolina, 2000.
[10] M. Hoppe, J. Melzer, Optical tiling for wide field-of-view head mounted displays, Proc. SPIE 3379 (1999) 146–153.
[11] J. Leigh, A. Johnson, L. Renambot, T. Peterka, B. Jeong, D.J. Sandin, J. Talandis, R. Jagodic, S. Nam, H. Hur, Y. Sun, Scalable resolution display walls, Proc. IEEE 101 (2013) 115–129.
[12] B. Sajadi, A. Majumder, Autocalibration of multi-projector CAVE like immersive environments, IEEE Trans. Vis. Comput. Graphics 18 (2012) 381–393.
[13] R. Kooima, A. Prudhomme, J. Schulze, D. Sandin, T. DeFanti, A multi-viewer tiling autostereoscopic virtual reality display, in: Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology, Hong Kong, 2010, pp. 172–174.
[14] W. Li, H. Wang, M. Zhou, S. Wang, S. Jiao, X. Mei, J. Kim, Principal observation ray calibration for tiled-lens-array integral imaging display, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, Oregon, 2013, pp. 1019–1026.
[15] D. Cheng, Y. Wang, H. Hua, J. Sasian, Design of a wide-angle, lightweight head-mounted display using free-form optics tiling, Opt. Lett. 36 (2011) 2098–2100.
