Fast ray-tracing of human eye optics on Graphics Processing Units


Computer Methods and Programs in Biomedicine 114 (2014) 302–314

journal homepage: www.intl.elsevierhealth.com/journals/cmpb

Qi Wei a,∗ , Saket Patkar b , Dinesh K. Pai c

a Department of Bioengineering, George Mason University, 4400 University Drive, Fairfax, VA, USA
b Department of Computer Science, Stanford University, 353 Serra Mall, Stanford, CA, USA
c Department of Computer Science, University of British Columbia, 2366 Main Mall, Vancouver, BC, Canada

Article history:
Received 26 November 2012
Received in revised form 5 February 2014
Accepted 10 February 2014

Keywords:
Human eye optics
Ray tracing
Computational simulation
Patient specific modeling and simulation
Vision defects
GPU programming

Abstract

We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth of field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images.

© 2014 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

Modeling and simulating human eye optics has been an active field of research with wide applications. In vision science research, the focus of these efforts is on understanding human vision and the functional significance of different ocular structures. In ophthalmology, modeling retinal image formation serves as a tool to capture defects of the eye and to simulate vision corrections for better treatment. However, simulating realistic light refraction through ocular structures in three dimensions (3D) is challenging due to the inherent complexity of human optics. In general, it is computationally intensive to trace the millions of rays that form the retinal image. Fink and Micol report that it takes more than 300 min to simulate one still image [1], underscoring the need for a more efficient simulator.

∗ Corresponding author. Tel.: +1 703 993 5211. E-mail address: [email protected] (Q. Wei).

http://dx.doi.org/10.1016/j.cmpb.2014.02.003
0169-2607/© 2014 Elsevier Ireland Ltd. All rights reserved.


Human eye optics simulation has been performed primarily on analytical models. The finite element ray tracing method proposed by Richerzhagen applied discretization to the spatial refractive index but not to the orbit geometry [2]. The most widely used schematic eye models are based on simplified spherical surfaces, for which surface curvatures are conveniently obtained [3]. Higher-order representations have also been proposed: Zernike polynomials were used to characterize the cornea and lens surfaces [1] and to quantitatively study optical aberrations of human eyes [4], and the lens was represented by biconic surfaces in [5]. These approximating surfaces have inherent limitations, as they globally optimize the shape and lack the flexibility to model local geometric variation. The geometric shapes of the ocular structures involved in visual processing play a crucial role in image formation. In particular, realistic 3D representations are important for studying vision disorders, many of which are associated with abnormalities in the shape of an ocular structure. For instance, one common cause of astigmatism is asymmetry of the cornea [3], and there is evidence of a positive correlation between myopia (hyperopia) and eyeball axial length [6].

In this paper, we present a novel 3D human eye optics simulator that computes the image formed on the retinal surface by computationally propagating millions of light rays. Our approach improves on existing retinal image simulation by modeling the ocular structures with polygon meshes. Compared to an analytical model, a polygonal mesh can more precisely represent the geometry of an anatomical structure of arbitrary shape. Many biomedical applications, such as patient-specific treatment prediction, require modeling the anatomical abnormality of an individual, which makes the polygon representation preferable. Polygon models can be built from patient data such as medical images using standard image reconstruction algorithms.
With these models, we simulated retinal images by applying ray tracing to calculate the refractions of light rays through the multiple surfaces of the lens and cornea. Our simulator is sufficiently general to incorporate spherical, aspheric, and even arbitrarily shaped ocular structures, including the cornea, lens, and retina. The contributions of our new technique are:

• Optical structures were modeled as polygon meshes, which can represent arbitrary shapes more accurately. With this representation, surface intersections and curvatures, as well as refracted rays, can be computed efficiently.
• The powerful parallel computing architecture of commodity Graphics Processing Units (GPUs) was utilized to accelerate optics ray tracing. Half a million light rays propagating through multiple optical surfaces can be simulated in one second.
• Topographic and geometric measurements of the retina, cornea, and lens from keratometer or medical imaging data can be easily integrated to provide subject-specific simulation.
• Various settings of the 3D environment related to depth, color, shape, and motion are programmable in the virtual model, potentially useful for studying visual perception problems.

2. Methods

The human eye is a complex organ. Light enters the eye at the anterior cornea, propagates through the cornea, anterior chamber, pupil, lens, and vitreous humour, and forms the retinal image on the retinal surface. A light ray is refracted at the interface between two different structures as it propagates. Treating the eye as a computationally tractable physical optical system, the most prominent structures to consider are the cornea, lens, and retina. In the optics of the human eye, numerous light rays coming from different sources arrive at the same spot on the retina and contribute to the sensed image. Tracing every such ray forward, however, is computationally infeasible. The practical approach to simulating how an image is formed on the retina is to trace the light rays in the backward direction from the retina. That is, millions of points are first sampled randomly on the retinal surface, representing the destinations of the light rays on the retina. Then the source of each light ray reaching a particular point is traced backwards, which determines the color contributing to the retinal image.

Our problem is analogous to ray tracing, a fundamental computer graphics technique for rendering images of three-dimensional scenes as seen from a camera at a particular viewpoint. Intersections of light rays originating from the camera with objects in the external world are first computed. Illumination from light sources at these positions is then determined, along with other realistic effects such as refractions, reflections, and shadows. In addition to numerous other applications in graphics, ray tracing has been applied to simulate optical effects [7–9]. However, because their main objective is to produce a visually pleasing rendering, the optics models included are not necessarily physically realistic. Our approach to computing the retinal image is to use distributed ray tracing [10] augmented with a realistic optics model.
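The backward, distributed sampling just described can be sketched in a few lines. This is an illustrative Python sketch (the actual simulator runs as OpenCL kernels on the GPU), and `trace_ray` is a hypothetical stand-in for the full refraction-and-intersection pipeline:

```python
import random

def shade_pixel(trace_ray, n_samples=500, seed=0):
    """Estimate a retinal pixel color by averaging the colors returned by
    many stochastically generated rays (a simple box filter).

    `trace_ray` maps a random sample point on the posterior lens surface
    to an (r, g, b) color; here it stands in for the refraction and
    scene-intersection steps described in the text.
    """
    rng = random.Random(seed)
    accum = [0.0, 0.0, 0.0]
    for _ in range(n_samples):
        # Random location on the posterior lens surface (placeholder:
        # a uniform point on the unit disk via rejection sampling).
        while True:
            x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
            if x * x + y * y <= 1.0:
                break
        r, g, b = trace_ray((x, y))
        accum[0] += r; accum[1] += g; accum[2] += b
    return tuple(c / n_samples for c in accum)
```

Averaging many stochastic samples per pixel is what produces the depth-of-field and motion-blur effects discussed below.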
The idea is to apply stochastic sampling to account for the fact that numerous light rays reach the same spot on the retina. In our optics simulation, to calculate the color of each pixel on the retinal image, hundreds of light rays were generated, passing through randomly distributed locations on the posterior lens surface. Although computationally more expensive, combining multiple rays is necessary to realistically simulate vision effects such as depth of field and motion blur. Fig. 1 diagrams how retinal images are calculated in our simulator, which includes an optics model with arbitrarily shaped ocular structures. We represented each surface of the retina, lens, and cornea as a polygon mesh (specifically, a triangle mesh), an efficient and flexible shape primitive common in computer graphics. A triangle mesh consists of a list of connected faces, each defined by three vertices. Previously, ocular structure surfaces have been modeled as analytical (parametric) geometries, such as spheres, quadric surfaces [11], or


polynomial surfaces [1]. The main reason to use these analytical surfaces is that their curvatures are analytically calculable. However, analytical representations such as Zernike polynomials, even of high order, can be inadequate for characterizing shape features significant for pathological corneal aberrations, as illustrated in former studies [12,13]. Compared to the existing representations, polygon meshes are more flexible at representing arbitrary shapes, and thus can model local geometric variations more accurately. Surface curvatures can still be computed efficiently. Errors due to discretization can be controlled by adjusting mesh resolution (i.e., number of vertices per unit area) at the expense of computation time.

One characteristic of simulating retinal image formation is its high parallelism – all rays undergo the same operations independently and simultaneously. Taking advantage of this parallelism, we implemented our algorithm on a programmable Graphics Processing Unit (GPU), a specialized microprocessor that accelerates 3D graphics rendering with a large number of parallel computing cores. A GPU's highly parallel architecture and computational power have made it a preferable platform to a conventional CPU for many scientific computing problems that benefit from parallel processing, such as ray tracing [14–16]. More technical information on GPU computation and programming can be found in [14–22].

Algorithm 1 outlines how the mesh-based ray tracing of human eye optics was performed. The outer loop determined the number of accumulations (i.e., the number of accumulated light rays) contributing to the final color. With each sample, the accuracy of the result increases progressively; hence one can keep running the algorithm until a desired accuracy is reached. We placed a screen with pixels behind the retina. For every point on the screen, we projected it onto the retina to get a corresponding point on the retina. Because of this correspondence, we were able to simulate pixels on the retina itself. All the images shown in the paper are actually the images on this screen, as it was not possible to directly display the images on the retina because of its curved surface. The second loop iterated over every pixel projected onto the retina and applied the same operations to back-trace every light ray from the retina to its source. For each ray back-propagation, the ocular surfaces in the eye model were examined to calculate six refractions consecutively from the posterior lens surface to the anterior cornea. Since our focus is on the optics of the eye and not on environmental effects (such as indirect illumination), we terminated tracing a ray after its first intersection with the closest object in the scene and used the color of the intersection point. The refractive indices of the vitreous humour, outer lens, inner lens, cornea, and air were set to 1.336, 1.386, 1.406, 1.376, and 1 respectively [3].

Algorithm 1 (Ray tracing mesh-based human eye optics).

Random ray generation (Step 3 in Algorithm 1) was performed as follows. For a pixel pj on the retina, a point was first sampled randomly on a spherical cap approximating the posterior lens surface. Then the nearest point plj on the actual posterior lens mesh was calculated. We did not sample the lens surface directly because uniform sampling is easier on an analytical surface than on a triangle mesh. The vector from the retinal pixel pj to plj on the posterior lens defined the direction of a ray that was then traced through the optics model until it reached an object in the scene. The vertex normal at plj was computed by averaging the surface normals of all the triangle faces sharing plj ; each face normal was calculated as the cross product of two edge vectors of the triangle. The outgoing ray refracted at the posterior lens surface was computed by applying Snell's law. The key in mesh-based optics simulation is the calculation of ray intersection with the geometry as well as the calculation


Fig. 1 – Three-dimensional distributed ray tracing of human eye optics. Any schematic optics model can be incorporated. Each optical component is represented by a polygon mesh. The 3D scene consists of objects of any shape, texture, and lighting at desired locations. For algorithmic convenience, each light ray is traced in the reverse direction: it originates from the retina and is propagated through the vitreous humour, the lens, the aqueous humour, and the cornea. At the boundary of two neighboring components, refraction of the ray is computed. If a ray reaches an object in the 3D world, the color at the intersection is retrieved which contributes to the pixel color on the retinal image.

of the surface curvature/normal at the intersection. This information is required to determine the refracted ray and to propagate light rays through each of the lens and corneal surfaces. It is also the most costly operation. To find the intersection of a ray with a surface, all triangles on that surface were checked in turn against the incoming ray, and the intersected triangle closest to the ray origin was determined. Standard methods in computer graphics can accelerate this with a hierarchical spatial partition, which allows the intersecting triangle to be found in logarithmic rather than linear time, but such acceleration is beyond the scope of this paper. The barycentric coordinates of the line–plane intersection point with respect to the vertices of the closest triangle were then computed. The three vertices of the intersecting triangle (v1 , v2 , v3 ), the intersection point v inside the triangle, and the barycentric coordinates (b1 , b2 , b3 ) of v were returned (Step 7 in Algorithm 1). The surface normal at the intersection (refraction) point was computed by barycentric interpolation of the three vertex normals, nv = b1 nv1 + b2 nv2 + b3 nv3 (Step 8 in Algorithm 1). Given the ratio of the refractive indices on the two sides of the surface, the refracted ray direction was calculated (Step 10 in Algorithm 1). See Fig. 1 for a graphical illustration. In Step 12 in Algorithm 1, all the objects in the scene were checked by calling the procedure calculate mesh intersection() to decide whether the current ray outgoing from the anterior cornea intersects any of them. If not, the retinal image color was not updated and the simulation continued to the next light ray. Otherwise, the pixel color was determined through standard graphics ray tracing procedures, which combine object texture colors and reflection effects due to lighting sources. We used a simple box filter in Step 13 to average the colors of the rays traced so far, sequentially updating the present frame of the computed retinal image.

We implemented the optics ray tracing program on an Intel® Core™ Duo 2.33 GHz machine with 2 GB of RAM and an Nvidia GeForce GTX 470 GPU. Our implementation was built on the SmallptGPU OpenCL ray tracer [23]. We implemented the computation kernels outlined in Algorithm 1 and rendering kernels for visualization using the OpenCL (Open Computing Language) API. All the ocular surfaces were stored as textures in the texture buffer on the GPU; each included a vertex coordinate list, a face list, and a vertex normal list. The scene objects were also loaded into the GPU texture buffer, each with a vertex coordinate list, a vertex normal list, and a 2D texture coordinate list. All the geometry data was loaded to the GPU buffer (Steps 1 and 2 in Algorithm 1) before the simulator started subsequent computation.

Fig. 2 – Calculated retinal images from (a) analytical Gullstrand simulation and (b) mesh-based Gullstrand simulation. The two simulated images show little difference visually, which demonstrates the accuracy of our mesh-based approach.
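The core mesh operations described in this section — vertex normals averaged from face normals, ray–triangle intersection returning barycentric coordinates (Steps 7–8 of Algorithm 1), barycentric normal interpolation, and Snell refraction (Step 10) — can be sketched in plain Python. This is a CPU illustration, not the paper's OpenCL kernels, and the Möller–Trumbore routine below is one standard intersection method, standing in for whatever the authors implemented:

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0] / n, v[1] / n, v[2] / n)

def vertex_normals(vertices, faces):
    """Per-vertex normals: sum the cross-product normals of all faces
    sharing each vertex, then normalize the sum."""
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for i, j, k in faces:
        fn = cross(sub(vertices[j], vertices[i]), sub(vertices[k], vertices[i]))
        for idx in (i, j, k):
            for c in range(3):
                acc[idx][c] += fn[c]
    return [normalize(tuple(a)) for a in acc]

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-12):
    """Moller-Trumbore ray/triangle test. Returns (t, u, v) with t the ray
    parameter and (1 - u - v, u, v) the barycentric weights of (v0, v1, v2),
    or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direc, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direc, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    return dot(e2, qvec) * inv, u, v

def interpolate_normal(n0, n1, n2, u, v):
    """Barycentric interpolation of the three vertex normals (Step 8)."""
    w = 1.0 - u - v
    return normalize(tuple(w * n0[c] + u * n1[c] + v * n2[c] for c in range(3)))

def refract(direc, normal, eta):
    """Snell's law (Step 10): unit incident direction, unit surface normal
    facing the incoming ray, eta = n_incident / n_transmitted.
    Returns None on total internal reflection."""
    cosi = -dot(normal, direc)
    sint2 = eta * eta * (1.0 - cosi * cosi)
    if sint2 > 1.0:
        return None
    cost = math.sqrt(1.0 - sint2)
    return tuple(eta * direc[c] + (eta * cosi - cost) * normal[c] for c in range(3))
```

One ray back-propagation chains `ray_triangle`, `interpolate_normal`, and `refract` once per ocular surface, using the refractive-index ratios listed above.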


Fig. 3 – Simulated retinal images at three accommodation levels. (a) Three images with textures were placed at different locations along the optical axis. (b–d) By adjusting the lens thickness and the curvatures of the four lens surfaces, proper accommodation at expected distance can be simulated, demonstrated by each of the corresponding images being in focus.

3. Results

We first present validation of the simulator's accuracy through comparison with an analytical schematic optics model (Section 3.1). We then present simulations of accommodation and depth of field in Section 3.2, and binocular vision in Section 3.3. An important application of our approach, modeling astigmatism and its correction by corrective lenses, is discussed in Section 3.4. Finally, we show the distinctive feature of the technique in computing the retinal images of human subjects by incorporating geometric models of the orbit reconstructed from clinical medical images (Section 3.5).

Fig. 4 – Simulated retinal images at continuously varying accommodation levels by adjusting the four lens surface curvatures. The eyeball rotates to track each of the objects with varying distance in the scene. Each object in the scene was modeled by a triangle mesh with thousands of vertices to represent the geometrical details (see the shadow on the bunnies). Three representative simulation results are shown here, where the cyan, red, and green bunnies in the scene are in focus in (a), (b), and (c) respectively. A movie of the actual simulation is submitted as supplementary material. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)

3.1. Validation using the Gullstrand model

We began with the standard Gullstrand exact schematic eye model [24] to generate retinal images as the ground truth for validating the mesh-based simulation. Evaluating the accuracy is necessary before the implementation can be generalized to more advanced optics models or to subject-specific models. We show that the discrete representation introduces minimal errors and thus does not degrade the quality of the rendered retinal images. We implemented two programs: an analytically exact version as the ground truth and a mesh-based version. For the analytical version, ray intersections and surface normals at the refraction points were calculated analytically. For the mesh version, triangle meshes approximating spheres were generated with center positions and radii identical to the analytical model. Surface normals at the mesh vertices were pre-calculated and used in the GPU program for interpolation during ray tracing. Fig. 2 compares simulated retinal images of an emmetropic eye at full accommodation. Visually, the two images are nearly identical.

Assessing the fidelity of computer-generated images has been actively studied and has generally proved difficult. Our objective for validation is to quantitatively test whether mesh-based optics ray tracing produces images comparable to the analytical version. We first calculated the simplest metric, the mean squared difference (MSD) of pixel intensity. The MSD between the two images in Fig. 2 is 10, which is small given the image intensity range [0, 255]. We also computed the structural similarity (SSIM), a perceptual metric of image similarity [25]. The SSIM between the two images is 0.99, showing strong perceptual agreement between the image pair. Considering the random ray generation involved in both simulations, we conclude that the two images closely match and the mesh implementation is adequately accurate.
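The MSD metric is straightforward to reproduce. Below is a minimal sketch for grayscale images (the paper's comparison is on rendered color images, and SSIM [25] is a separate, more involved computation not shown here):

```python
def mean_squared_difference(img_a, img_b):
    """Mean squared difference (MSD) of pixel intensities between two
    equally sized grayscale images, each given as a list of rows of
    intensities in [0, 255]."""
    total = 0
    count = 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            count += 1
    return total / count
```

An MSD of 10 thus corresponds to a root-mean-square intensity difference of about 3 gray levels out of 256.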
Simulation accuracy depends on the mesh resolution, because mesh discretization results in unavoidable error. The above validation shows that the errors introduced by geometric discretization were small even with ocular structure meshes of moderate resolution. The numbers of vertices on the seven surfaces from the retina to the anterior cornea were 973, 1516, 799, 799, 565, 723, and 613 respectively. The posterior lens mesh contained more triangles because it was randomly sampled to generate distributed rays, and higher resolution improved the sampling distribution. Naturally, computation time increases with finer mesh resolution. The computational efficiency reported here was based on the mesh resolution leading to insignificant discretization error, discussed above; in other words, accuracy was not sacrificed for speed. The simulated retinal image resolution was 500 × 500, and each final pixel color was computed by weighting the traced colors of 500 rays, for 125 million rays in total per retinal image. The computation costs of the analytical and mesh versions were 10 s and 300 s respectively, as the discrete implementation takes significantly more time to check all the triangles when calculating mesh intersections. Fink et al. reported more than 330 min to simulate an image at the same image resolution with the same number of rays per pixel [1]; our simulator was nearly two orders of magnitude faster than this previous work. To quantify the relationship among mesh resolution, simulation error, and computational cost, we experimented with doubling the number of vertices on every surface: the SSIM did not improve, but the simulation time almost doubled.

Fig. 5 – Simulated binocular retinal images of the left eye and right eye. Three pyramids with textures were placed at various distances from the eyes. Binocular disparity is clearly shown by comparing the two images. A movie of the simulation is submitted as supplementary material.
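A triangle mesh approximating an analytical sphere, with resolution controlling the discretization error discussed above, can be generated by a standard latitude–longitude tessellation. This is an illustrative construction; the paper does not specify how its sphere meshes were built:

```python
import math

def sphere_mesh(center, radius, n_lat=16, n_lon=16):
    """Triangle mesh approximating a sphere via latitude/longitude rings.
    (n_lat, n_lon) set the resolution, trading discretization error
    against triangle count. Returns (vertices, faces)."""
    cx, cy, cz = center
    verts = [(cx, cy, cz + radius)]                 # north pole
    for i in range(1, n_lat):
        theta = math.pi * i / n_lat
        for j in range(n_lon):
            phi = 2.0 * math.pi * j / n_lon
            verts.append((cx + radius * math.sin(theta) * math.cos(phi),
                          cy + radius * math.sin(theta) * math.sin(phi),
                          cz + radius * math.cos(theta)))
    verts.append((cx, cy, cz - radius))             # south pole
    faces = []
    for j in range(n_lon):                          # north cap fan
        faces.append((0, 1 + j, 1 + (j + 1) % n_lon))
    for i in range(n_lat - 2):                      # quad strips, split in two
        ring0, ring1 = 1 + i * n_lon, 1 + (i + 1) * n_lon
        for j in range(n_lon):
            a, b = ring0 + j, ring0 + (j + 1) % n_lon
            c, d = ring1 + j, ring1 + (j + 1) % n_lon
            faces.append((a, c, b))
            faces.append((b, c, d))
    last = len(verts) - 1                           # south cap fan
    ring = last - n_lon
    for j in range(n_lon):
        faces.append((last, ring + (j + 1) % n_lon, ring + j))
    return verts, faces
```

Doubling `n_lat` and `n_lon` roughly quadruples the triangle count, which matches the reported trade-off of finer meshes costing proportionally more intersection tests.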

3.2. Depth of field and accommodation

Depth of field of an optical system refers to the finite range of distances at which an object is perceived as focused. Depth of field is determined by the properties of the eye optics, visual processing, and the environment [1]. Simulation of depth of field can help to quantitatively understand these factors. To


Fig. 6 – Simulated retinal images of (a) emmetropia, (b) and (c) horizontal astigmatism, and (d) and (e) vertical astigmatism. Results show the expected positive association between the amount of image distortion and the degree of corneal asymmetry.

simulate variable accommodation, we used a modified Gullstrand model presented in [1]. The centers and radii of the four lens surfaces were defined as functions of the accommodation level L between zero and 10.87 D (full accommodation), which adjusted the lens thickness and the curvatures of the four lens surfaces. Fig. 3 shows the simulated retinal images when the eye was focused at different focal lengths. Three textures were placed at different positions along the optical axis, corresponding to the theoretically predicted focal lengths for the specified accommodation levels (see Fig. 3(a)). At each accommodation level, sharp focus was observed for the corresponding texture while the other two were clearly out of focus. Fig. 4 demonstrates another example of depth of field: to track three Stanford Bunny models and a teapot model from near to far, the accommodation level was changed continuously based on the instantaneous focal distance.
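As a rough guide to the accommodation levels above: for an emmetropic eye, an accommodation of L diopters brings objects at approximately 1/L meters into focus. The sketch below uses this thin-lens simplification only for intuition; it is not the modified Gullstrand parameterization of the four lens surfaces actually used in the simulations:

```python
import math

def focus_distance_m(accommodation_d):
    """Object distance (meters) brought into focus by an emmetropic eye
    accommodating by `accommodation_d` diopters, using the thin-lens
    relation d = 1 / L. Returns math.inf for zero accommodation
    (a relaxed emmetropic eye focuses at infinity)."""
    if accommodation_d == 0.0:
        return math.inf
    return 1.0 / accommodation_d
```

At full accommodation (10.87 D), the near point under this approximation is about 9.2 cm.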

3.3. Binocular vision

Our simulator can also generate the two different retinal images formed in binocular vision. The left eye and the right eye were placed 62 millimeters apart, the average adult pupillary distance, which can easily be adjusted to match individuals. To make binocular disparity clearly visible, the closest object was placed about 20 centimeters from the two eyes. Fig. 5 compares the left and right retinal images while the eyes were focusing on the first and second pyramids respectively. The accommodation level depends on the focal distance, as described in the previous section. The difference between the two retinal images is clearly seen: the left and right eyes viewed different sides of the pyramids. Realistically simulated binocular retinal images are valuable for understanding binocular eye movements (for instance, convergence), depth perception, as well as binocular disorders,


Fig. 7 – Simulated astigmatism correction with a plano-cylindrical spectacle, varying the radius of curvature of the spectacle's back surface. The optimal corrective spectacle, with a radius of 50 mm, is shown in (d); it produced the sharpest retinal image.

such as strabismus. In our simulator, scene objects and their arrangement can be conveniently designed to investigate specific questions.
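The binocular geometry above can be quantified by the vergence angle between the two lines of sight. The function below is an illustrative calculation, not a routine from the simulator itself:

```python
import math

def vergence_angle_deg(ipd_m, fixation_distance_m):
    """Total convergence angle (degrees) between the two lines of sight
    when both eyes fixate a point straight ahead at `fixation_distance_m`,
    given interpupillary distance `ipd_m`."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / fixation_distance_m))
```

With the 62 mm pupillary distance used here and an object at 20 cm, the eyes converge by roughly 17.6°, which is why the disparity between the two images in Fig. 5 is so pronounced.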

3.4. Astigmatism and spectacle correction

Simulating eye optics is also useful for understanding vision disorders and for assisting ophthalmic lens design. We modeled astigmatism, a common type of refractive defect that leads

to blurred vision, and its correction by a corrective lens. A typical cause of astigmatism is rotational asymmetry of the anterior corneal surface, which is associated with different refractive powers along the horizontal and vertical corneal meridians. A sphero-cylindrical (plano-cylindrical) corrective lens, which has a spherical (planar) front surface and a cylindrical back surface, can correct regular astigmatism with perpendicular principal meridians [3]. If an astigmatic eye is also myopic or hyperopic,


Fig. 8 – (a) A representative T2-weighted clinical Magnetic Resonance (MR) image of the ocular structures. Stacks of such images in coronal and sagittal planes were acquired, from which the 3D triangle mesh models shown in (b) were built.

sphero-cylindrical lenses are required, as plano-cylindrical lenses are inadequate. Fig. 6 presents a set of images simulating the effect of regular astigmatism. The anterior corneal surface was modeled as an ellipsoid with differing horizontal and vertical equatorial radii, which introduced the asymmetry. The parameter α is the asymmetry ratio between the longer axis and the shorter axis. The defective images in Fig. 6(b) and (c) are blurry in the horizontal direction but focused in the vertical direction (see the clear bars at 0° and 180°). The distortion became more severe with higher asymmetry of the cornea, shown in Fig. 6(c). Fig. 6(d) and (e) illustrate defocus caused by vertical astigmatism of a similar pattern. To simulate refractive error correction, a plano-cylindrical spectacle with refractive index 1.5, sufficient for correcting pure astigmatism, was placed about 10 mm in front of the cornea. Fig. 7 shows an emmetropic image, a horizontal astigmatic image, and three corrections. In the astigmatic image in Fig. 7(b), the vertical dashes had thickened widths, while the horizontal dashes were elongated horizontally, which blurred the spacings (particularly in the dotted box). The radius of the cylindrical back surface of the spectacle was adjusted gradually. The resulting retinal images from these adjustments are presented in Fig. 7(c)–(e); all show improved visual acuity, and Fig. 7(d), generated with a spectacle radius of 5 cm, produced the sharpest retinal image. The corrected images exhibited slight horizontal distortion due to the insertion of the spectacle. This example demonstrates the utility of the simulator in exploring spectacle parameters to determine optimal values for specific patients.
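Why corneal asymmetry produces meridian-dependent power can be seen with a back-of-the-envelope calculation. The sketch below is an illustration under simplifying assumptions (apex radius of curvature R = b²/c of an ellipse with transverse semi-axis b and axial semi-axis c, and single-surface power P = (n₂ − n₁)/R for light entering from air); it is not the paper's actual ray-traced computation:

```python
def meridian_powers(n_cornea, axial_semiaxis_m, h_semiaxis_m, alpha):
    """Approximate anterior-cornea refractive powers (diopters) along the
    horizontal and vertical meridians, modeling the surface as an ellipsoid
    whose vertical equatorial semi-axis is `alpha` times the horizontal one
    (the asymmetry ratio from the text).

    Uses the apex radius of curvature R = b**2 / c for an ellipse with
    transverse semi-axis b and axial semi-axis c, and the single-surface
    power P = (n2 - n1) / R for light arriving from air (n1 = 1)."""
    r_h = h_semiaxis_m ** 2 / axial_semiaxis_m
    r_v = (alpha * h_semiaxis_m) ** 2 / axial_semiaxis_m
    return (n_cornea - 1.0) / r_h, (n_cornea - 1.0) / r_v
```

For α = 1 the two meridian powers coincide (a spherical surface); for α > 1 the vertical meridian flattens and its power drops by a factor of α², producing the meridian-dependent focus seen in Fig. 6.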

3.5. Simulation of optics model reconstructed from MRI

One distinctive contribution of our approach is the ability to represent arbitrary ocular structure shapes. This is especially beneficial for facilitating vision diagnosis and

corrections, which require modeling the subject-specific anatomical characteristics of an individual. Various medical imaging modalities have been explored to characterize ocular shapes in three dimensions, including Magnetic Resonance Imaging (MRI) [3,26]. The 3D geometric model of a patient's orbit was reconstructed from MRI using a methodology that we previously developed [27]. Fig. 8(a) shows a representative MR image of the orbit1 ; the boundaries of the eyeball and the lens have excellent contrast. The reconstructed cornea surface accurately captured the aspheric characteristics of the cornea shape, gradually flattening from the apex (see Fig. 8(b)). We simulated the retinal image using this subject's cornea and retinal surfaces. The ray-traced image in Fig. 9(b) shows severe oblique blur at full accommodation due to the irregular shape of the cornea. Fig. 9(c) demonstrates the outcome of including the reconstructed retinal surface. Note that blur is more significant in Fig. 9(b) than in (c), which illustrates the dominant role of corneal curvature in retinal image formation, since the anterior cornea causes the greatest refraction. The simulated results indicate that this subject's cornea had greater asphericity in the vertical direction, which was confirmed by examining the actual shape of the cornea. We examined the cross section of the cornea at one representative coronal imaging position (2.3 mm posterior to the apex) perpendicular to the optical axis: the vertical axis length of this cross section was 12.72 mm and the horizontal axis length was 12.20 mm, consistent with the simulation prediction. Fig. 9(d) and (e) show the simulated retinal images obtained by incorporating another subject's cornea and retinal surfaces reconstructed from MRI. In contrast to the first subject's vertical smear, this subject exhibited horizontal smear, suggesting greater asphericity in the horizontal direction.

¹ Anonymized MR images were provided courtesy of Dr. Joseph Demer of the Jules Stein Eye Institute at UCLA.


Fig. 9 – Simulated retinal images from (a) the standard schematic eye model, (b) applying the cornea surface of subject 1, (c) applying the retinal surface of subject 1, (d) applying the cornea surface of subject 2, and (e) applying the retinal surface of subject 2. All surfaces were reconstructed from MRI. Because of the aspheric shapes of the subjects' retinas and corneas, the simulated retinal images (b)–(e) are blurrier than the one from the standard model (a). Subject 1 shows more vertical smear while subject 2 shows horizontal smear. Note that the blur is more significant in (b) and (d) than in (c) and (e), illustrating the dominant role of cornea curvature in retinal image formation.

4. Conclusion

We have presented a new technique for computing retinal images by tracing light rays passing through the eye. The accuracy and flexibility of using polygon meshes to model ocular structures are the unique contributions of our approach. We took advantage of complex-scene ray tracing techniques from computer graphics to compute ray intersections and surface curvatures of arbitrary geometries, overcoming the limitations of previous analytical models. To simulate depth of field, accommodation, and other effects, hundreds of random rays were generated for each pixel of the retinal image and their returned colors were averaged to estimate the final color. Retinal images were rendered within minutes by exploiting parallelism on the GPU, a performance nearly two orders of magnitude faster than previous simulators. Simulations of astigmatic distortion, spectacle correction, and patient-specific vision were demonstrated.

An immediate next step is to speed up the computation by exploiting recent advances in real-time ray tracing [9,14]. Our ultimate goal is to compute retinal images in real time, or at least at interactive rates, so that as the "eye" explores the 3D world, what is "seen" on the retina is displayed realistically and interactively. In the future, we will extend the method to simulate motion, due to either movement of objects in the external world or movement of the eye. Because eye movements are inextricably linked to the images produced on the retina, our simulator provides a useful computational tool for examining scientific hypotheses raised in vision science. We previously developed a biomechanical model of the oculomotor plant that produces realistic eye movements given the neural control of the extraocular muscles [28]. The present optics simulator can be integrated with that biomechanical model to display dynamic retinal images seen during eye movement and to provide visual information to drive the biomechanical model.

We would also like to explore applications of the simulator in simulating patient-specific vision disorders and in assisting refractive correction. For instance, given corneal curvature measurements from a keratometer, the effectiveness of corrective lenses or of refractive surgical treatments that reshape the cornea, such as LASIK, can be predicted.
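The per-pixel averaging described above is the classic distributed ray tracing scheme [10]. The sketch below is a minimal, hypothetical illustration rather than the authors' GPU code: it assumes a vector form of Snell's law for refraction at each ocular interface (`refract`) and a caller-supplied `trace_one_ray` routine that jitters and traces a single sample ray, both invented names.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n,
    where eta = n1 / n2 (vector form of Snell's law).
    Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def pixel_color(trace_one_ray, n_samples=256, rng=None):
    """Distributed ray tracing: average the colors returned by many
    randomly jittered rays traced through one retinal pixel."""
    if rng is None:
        rng = np.random.default_rng(0)
    samples = [trace_one_ray(rng) for _ in range(n_samples)]
    return np.mean(samples, axis=0)
```

On the GPU, each pixel's sample loop is independent, which is what makes this averaging scheme embarrassingly parallel.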

Conflict of interest

The authors have no conflict of interest related to the publication of this manuscript.

Acknowledgments

This research was supported in part by a Canada Research Chair, NSERC, the Peter Wall Institute for Advanced Studies, NIH Grant 5R01AR053608-04, MITACS, and a George Mason University faculty startup grant. We would like to thank Dr. J.L. Demer for providing anonymized MRI images of the orbit. Dr. Demer is supported by NIH Grant EY08313.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.cmpb.2014.02.003.


References

[1] W. Fink, D. Micol, SimEye: computer-based simulation of visual perception under various eye defects using Zernike polynomials, Journal of Biomedical Optics 11 (5) (2006) 054011:1–054011:12.
[2] B. Richerzhagen, Finite element ray tracing: a new method for ray tracing in gradient-index media, Applied Optics 35 (31) (1996) 6186–6189.
[3] D.A. Atchison, G. Smith, Optics of the Human Eye, Butterworth-Heinemann, Oxford, 2000.
[4] L.A. Carvalho, Accuracy of Zernike polynomials in characterizing optical aberrations and the corneal surface of the eye, Investigative Ophthalmology & Visual Science 46 (6) (2005) 1915–1926.
[5] J. Einighammer, T. Oltrup, T. Bende, B. Jean, The individual virtual eye: a computer model for advanced intraocular lens calculation, Journal of Optometry 2 (2) (2009) 70–82.
[6] D.A. Atchison, N. Pritchard, K.L. Schmid, D.H. Scott, C.E. Jones, J.M. Pope, Shape of the retinal surface in emmetropia and myopia, Investigative Ophthalmology & Visual Science 46 (8) (2005) 2698–2707.
[7] J. Loos, P. Slusallek, H.-P. Seidel, Using wavefront tracing for the visualization and optimization of progressive lenses, Computer Graphics Forum 17 (3) (1998) 255–266.
[8] M. Kakimoto, T. Tatsukawa, Y. Mukai, T. Nishita, Interactive simulation of the human eye depth of field and its correction by spectacle lenses, Computer Graphics Forum 26 (3) (2007) 627–636.
[9] S. Lee, E. Eisemann, H.-P. Seidel, Real-time lens blur effects and focus control, ACM Transactions on Graphics 29 (4) (2010), Article No. 65.
[10] R.L. Cook, T. Porter, L. Carpenter, Distributed ray tracing, Computer Graphics 18 (3) (1984) 137–145.
[11] A. Langenbucher, A. Viestenz, A. Viestenz, H. Brunner, B. Seitz, Ray tracing through a schematic eye containing second-order (quadric) surfaces using 4 × 4 matrix notation, Ophthalmic and Physiological Optics 26 (2) (2006) 180–188.
[12] M.K. Smolek, S.D. Klyce, Zernike polynomial fitting fails to represent all visually significant corneal aberrations, Investigative Ophthalmology & Visual Science 44 (11) (2003) 4676–4681.
[13] S.D. Klyce, M.D. Karon, M.K. Smolek, Advantages and disadvantages of the Zernike expansion for representing wave aberration of the normal and aberrated eye, Journal of Refractive Surgery 20 (5) (2004) 537–541.
[14] S.G. Parker, J. Bigler, A. Dietrich, H. Friedrich, J. Hoberock, D. Luebke, D. McAllister, M. McGuire, K. Morley, A. Robison, M. Stich, OptiX: a general purpose ray tracing engine, ACM Transactions on Graphics 29 (4) (2010) 66:1–66:13.
[15] T. Foley, J. Sugerman, KD-tree acceleration structures for a GPU raytracer, in: Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware, 2005, pp. 15–22.
[16] T.J. Purcell, I. Buck, W.R. Mark, P. Hanrahan, Ray tracing on programmable graphics hardware, ACM Transactions on Graphics 21 (3) (2002) 703–712.
[17] O. Fluck, C. Vetter, W. Wein, A. Kamen, B. Preim, R. Westermann, A survey of medical image registration on graphics hardware, Computer Methods and Programs in Biomedicine 104 (3) (2011) 45–57.
[18] A. Eklund, M. Andersson, H. Knutsson, fMRI analysis on the GPU – possibilities and challenges, Computer Methods and Programs in Biomedicine 105 (2) (2012) 145–161.
[19] G. Kalantzis, H. Tachibana, Accelerated event-by-event Monte Carlo microdosimetric calculations of electrons and protons tracks on a multi-core CPU and a CUDA-enabled GPU, Computer Methods and Programs in Biomedicine 113 (1) (2014) 116–125.
[20] F. Zhu, D.R. Gonzalez, T. Carpenter, M. Atkinson, J. Wardlaw, Parallel perfusion imaging processing using GPGPU, Computer Methods and Programs in Biomedicine 108 (3) (2012) 1012–1021.
[21] F. Zheng, W. Lu, Y. Wong, K. Weng, C. Foong, An analytical drilling force model and GPU-accelerated haptics-based simulation framework of the pilot drilling procedure for micro-implants surgery training, Computer Methods and Programs in Biomedicine 108 (3) (2012) 1170–1184.
[22] J. Jiménez, J.R. de Miras, Fast box-counting algorithm on GPU, Computer Methods and Programs in Biomedicine 108 (3) (2012) 1229–1242.
[23] D. Bucciarelli, SmallptGPU, http://davibu.interfree.it/opencl/smallptgpu/smallptgpu.html, 2010.
[24] H. von Helmholtz, Appendices II, in: Helmholtz's Handbuch der Physiologischen Optik, Voss, Hamburg, 1909.

[25] Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing 13 (4) (2004) 600–612.
[26] K.D. Singh, N.S. Logan, B. Gilmartin, Three-dimensional modeling of the human eye based on magnetic resonance imaging, Investigative Ophthalmology & Visual Science 47 (6) (2006) 2272–2279.
[27] Q. Wei, S. Sueda, J.M. Miller, J.L. Demer, D.K. Pai, Subject-specific reconstruction of the human orbit from magnetic resonance images, in: Proceedings of the IEEE International Symposium on Biomedical Imaging, 2009, pp. 105–108.
[28] Q. Wei, S. Sueda, D.K. Pai, Physically-based modeling and simulation of extraocular muscles, Progress in Biophysics and Molecular Biology 103 (2/3) (2010) 273–283.