Optics Communications 312 (2014) 23–30
Myopic aberrations: Simulation based comparison of curvature and Hartmann Shack wavefront sensors

Roopashree M. Basavaraju^a, Vyas Akondi^b,*, Stephen J. Weddell^c, Raghavendra Prasad Budihal^a

^a Indian Institute of Astrophysics, II Block, Koramangala, Bangalore 560034, India
^b Advanced Optical Imaging Group, School of Physics, University College Dublin, Belfield, Dublin 4, Ireland
^c Department of Electrical and Computer Engineering, University of Canterbury, New Zealand
Article info

Article history:
Received 17 July 2013
Received in revised form 20 August 2013
Accepted 2 September 2013
Available online 17 September 2013

Abstract
In comparison with a Hartmann Shack wavefront sensor, the curvature wavefront sensor is known for its higher sensitivity and greater dynamic range. The aim of this study is to numerically investigate the merits of using a curvature wavefront sensor, in comparison with a Hartmann Shack (HS) wavefront sensor, to analyze aberrations of the myopic eye. Aberrations were statistically generated using Zernike coefficient data of 41 myopic subjects obtained from the literature. The curvature sensor is relatively simple to implement, and the processing of extra- and intra-focal images was linearly resolved using the Radon transform to provide Zernike modes corresponding to statistically generated aberrations. Simulations of the HS wavefront sensor involve the evaluation of the focal spot pattern from simulated aberrations. Optical wavefronts were reconstructed using the slope geometry of Southwell. Monte Carlo simulation was used to find critical parameters for accurate wavefront sensing and to investigate the performance of HS and curvature sensors. The performance of the HS sensor is highly dependent on the number of subapertures, and the curvature sensor is largely dependent on the number of Zernike modes used to represent the aberration and the effective propagation distance. It is shown that in order to achieve high wavefront sensing accuracy while measuring aberrations of the myopic eye, a simpler and cost effective curvature wavefront sensor is a reliable alternative to a high resolution HS wavefront sensor with a large number of subapertures. © 2013 Elsevier B.V. All rights reserved.
Keywords: Hartmann Shack wavefront sensor; Curvature wavefront sensor; Ocular aberrations; Myopia
1. Introduction

Sensing the ocular aberrations of the human eye is critical in improving vision. It is even more important to accurately measure the aberrations prevalent in the myopic eye [1]. Among the available wavefront sensors, the Hartmann Shack (HS) wavefront sensor is the one most often used for sensing aberrations of the eye [2,3]. The HS wavefront sensor is made up of an array of microlenses. A detector placed one focal length away from the microlens array records the focal spot pattern. The shift of the recorded focal spots from their ideal positions provides information on the local slopes of the incident wavefront distortion. The performance of the HS sensor depends on various parameters such as the number of subapertures, their size and spacing, the number of detector pixels per subaperture, the focal length of the microlenses and detector noise [4,5].
* Corresponding author. Tel.: +353 1 716 2352. E-mail address: [email protected] (V. Akondi).
0030-4018/$ - see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.optcom.2013.09.004
The curvature wavefront sensor measures the intensity on either side of the focal plane. The intensity difference between the extra-focal and intra-focal planes carries information about the wavefront curvature. If low-noise extra-focal and intra-focal pupil images can be obtained, the high sensitivity of the curvature sensor and its simpler optical setup make it a cost effective alternative for high dynamic range wavefront sensing of low order dominant ocular aberrations. The distance between the defocused image planes defines the spatial sampling resolution and the wavefront sensing accuracy; both are critical parameters. Error propagation in the curvature sensor is a drawback, but its accuracy is often sufficient for sensing low order aberrations [6–8]. The strong coupling between high and low order aberration sensing is another disadvantage, i.e., a large residual error in the low order aberrations will restrict accurate sensing of the high order modes [9]. In spite of these shortcomings, the curvature wavefront sensor has been applied to study dynamic aberrations of the tear film [10,11] and to measure the ocular aberrations of the eye [12,13]. Also, the presence of high order aberrations blurs the extra-focal and intra-focal pupil images, which can be mitigated by taking images at four different planes [14]. The advantage of the
curvature sensor over the HS is its higher sensitivity. It may be used to sense the low order aberrations even at low light levels. The pyramid wavefront sensor is another attractive alternative in ophthalmic applications due to its high sensitivity, large dynamic range and tunability [15,16]. This article is organized as follows. Section 2 describes the methods used to simulate the behavior of the HS and curvature wavefront sensors. Section 3 defines the metrics used for evaluating the performance of the wavefront sensors. Section 4 presents a comparison of the HS and curvature wavefront sensors. Finally, Section 5 presents discussion and conclusions.
2. Methods

2.1. Simulation of ocular aberrations of myopic subjects

The performance of the HS and curvature sensors in sensing the aberrations of the myopic eye was compared through Monte Carlo simulations, which consisted of three main steps. Firstly, a representative ocular aberration of the myopic eye was simulated. Secondly, the corresponding HS focal spots and the intra- and extra-focal plane images of the curvature wavefront sensor were obtained as described in the following sections. Thirdly, wavefronts were reconstructed and compared with the original, simulated aberration. Representative random ocular wavefronts were generated using the statistics of six orders of Zernike coefficients obtained by Schwiegerling [17] from measurements made on 41 myopic subjects over a pupil diameter of 6 mm (see Fig. 1). The Cholesky decomposed lower triangular matrix (C) was obtained from the covariance matrix (M) of the measured Zernike coefficients such that CC^T = M. This matrix, C, was used to generate random Zernike coefficients, b, such that

b = a + Cv    (1)

where 'a' is the mean of the measured Zernike coefficients of the 41 myopic subjects [17] and 'v' is an array of random numbers generated from a Gaussian distribution of zero mean and unit variance. Adopting the ANSI standard (ANSI Z80.28), the single index notation of Zernike polynomials, Z_j, is obtained from the standard double index notation, Z_n^m, where 'n' is the radial index and 'm' is the azimuthal index, by using the following ordering method:

j = [n(n + 2) + m] / 2    (2)

The mean of the Zernike coefficients, 'a', from j = 3 (corresponding to Zernike polynomial Z_2^{-2}) to j = 27 (corresponding to Zernike polynomial Z_6^6), is shown in Fig. 1. This figure summarizes the data obtained from the literature, from which it can be inferred that the defocus term (j = 4) is the most dominant mode in myopic subjects. The error bars shown in Fig. 1(a) correspond to a 95% confidence interval. It can be noted that, among the higher order coefficients shown in Fig. 1(b), the Zernike coefficient corresponding to j = 12 is dominant, which corresponds to primary spherical aberration, Z_4^0. From the simulated random numbers, b, as in Eq. (1), a random aberration of the myopic eye, φ(x, y), can be simulated in the following manner:

φ(x, y) = Σ_{j=3}^{27} b_j Z_j(x, y).    (3)
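The coefficient generation of Eqs. (1) and (2) can be sketched as below. This is a minimal illustration, assuming NumPy; the mean vector `a` and covariance matrix `M` used here are illustrative placeholders, not the measured statistics of [17]:

```python
import numpy as np

def random_zernike_coeffs(a, M, rng):
    """Draw one random coefficient vector b = a + C v (Eq. (1)),
    where C is the lower-triangular Cholesky factor of M."""
    C = np.linalg.cholesky(M)          # C @ C.T == M
    v = rng.standard_normal(len(a))    # zero mean, unit variance
    return a + C @ v

def single_index(n, m):
    """ANSI Z80.28 single index j from double indices (n, m), Eq. (2)."""
    return (n * (n + 2) + m) // 2

# Placeholder statistics for the 25 modes j = 3..27 (units: micrometres).
rng = np.random.default_rng(0)
a = np.zeros(25)
a[single_index(2, 0) - 3] = -4.0       # j = 4: dominant defocus term
M = np.diag(np.full(25, 0.01))         # covariance must be positive definite

b = random_zernike_coeffs(a, M, rng)
print(single_index(2, 0), single_index(4, 0), single_index(6, 6))  # → 4 12 27
```

A simulated wavefront then follows from Eq. (3) by summing `b[j-3] * Z_j(x, y)` over a sampled pupil grid.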
The method described here is the same as that used by Schwiegerling [17], with the difference that the piston and tilt terms (j = 0, 1, 2), which are not relevant in ophthalmological applications, were excluded in the simulation of the aberrations of the myopic eye.

2.2. Hartmann Shack wavefront sensor simulations

As described earlier, a sample ocular aberration (as defined in Eq. (3)) was simulated over a 6 mm pupil diameter (see Fig. 2(a)). Assuming a 31 × 31 microlens array and a 10 mm focal length for the microlenses, a HS focal spot pattern was simulated using the fast Fourier transform (FFT) method [18,19]. The reconstructed wavefront shown in Fig. 2(b) is evaluated from the calculated local shifts in the HS spots, which give information about the local slopes of the wavefront, and is comparable with the originally simulated sample ocular aberration in Fig. 2(a). Here, each HS spot occupies six detector pixels. The reconstruction of the wavefront from the focal spot pattern involves two steps: (a) centroid detection to evaluate the local slopes of the wavefront and (b) evaluation of the wavefront phase from the wavefront slope measurements. In this study, the intensity weighted centroiding (IWC) algorithm was applied for calculating the centroid locations of the focal spots [20]. In the IWC technique, the spot intensity (I) is weighted with itself to calculate the location of the focal spots, as defined below:

x̂_c = Σ_ij I²_ij X_ij / Σ_ij I²_ij.    (4)
ŷ_c can be calculated in a similar fashion to Eq. (4). Here, (x̂_c, ŷ_c) is the centroid estimate and X_ij represents the 'x' position coordinate at different locations of the HS focal spot. IWC is the simplest and most accurate centroid detection method in the absence of noise [21].
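The IWC centroid of Eq. (4) reduces to a few lines of NumPy. A minimal sketch (the function name is ours):

```python
import numpy as np

def iwc_centroid(I):
    """Intensity weighted centroid (Eq. (4)): the spot intensity is
    weighted with itself, i.e. the weights are I**2."""
    w = I.astype(float) ** 2
    Y, X = np.indices(I.shape)
    s = w.sum()
    return (w * X).sum() / s, (w * Y).sum() / s

# A symmetric Gaussian spot centered at pixel (3, 3) in a 7 x 7 window
y, x = np.indices((7, 7))
spot = np.exp(-((x - 3) ** 2 + (y - 3) ** 2) / 2.0)
xc, yc = iwc_centroid(spot)
print(xc, yc)  # → 3.0 3.0
```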
Fig. 1. (a, b) Mean Zernike coefficient value of 41 myopic subjects obtained from the literature [17].
The local slopes of the wavefront (Sx, Sy) are estimated from the locations of the focal spots by dividing the displacement of the spots by the focal length of the microlenses. The classical vector matrix multiply method, in addition to the singular value decomposition technique, was used with the slope geometry of Southwell [22] for the reconstruction of the wavefront phase matrix from the estimated slope values. The effects of readout noise can be included by the addition of pseudorandom numbers obtained from a Gaussian distribution. The root mean square (RMS) wavefront error for the case of a 31 × 31 HS sensor, estimated by repeating the calculation of the RMS wavefront error 20 times for a given wavefront (Fig. 2(a)) in the presence of noise (signal to noise ratio, SNR = 14 dB), is 274.89 ± 7.00 nm. The reduction in wavefront sensing accuracy and the fluctuations in the RMS wavefront error are due to centroiding inaccuracies in the presence of noise in a HS sensor [21,23].
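The slope-to-phase step can be sketched as a zonal least-squares problem in the spirit of Southwell's geometry [22]. This is a simplified illustration, not the exact matrices used by the authors: averaged adjacent slopes are matched to forward phase differences and the system is solved with NumPy's SVD-based least-squares routine, removing the unobservable piston.

```python
import numpy as np

def slopes_from_shifts(dx, dy, pixel_size, focal_length):
    """Local wavefront slopes from HS spot displacements (in pixels)."""
    return dx * pixel_size / focal_length, dy * pixel_size / focal_length

def southwell_reconstruct(Sx, Sy, spacing):
    """Least-squares zonal reconstruction of the phase on an N x N grid."""
    N = Sx.shape[0]
    idx = lambda i, j: i * N + j
    A, b = [], []
    for i in range(N):
        for j in range(N - 1):          # x-direction phase differences
            row = np.zeros(N * N)
            row[idx(i, j + 1)], row[idx(i, j)] = 1.0, -1.0
            A.append(row)
            b.append(0.5 * (Sx[i, j] + Sx[i, j + 1]) * spacing)
    for i in range(N - 1):
        for j in range(N):              # y-direction phase differences
            row = np.zeros(N * N)
            row[idx(i + 1, j)], row[idx(i, j)] = 1.0, -1.0
            A.append(row)
            b.append(0.5 * (Sy[i, j] + Sy[i + 1, j]) * spacing)
    W = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0].reshape(N, N)
    return W - W.mean()                 # remove the unobservable piston

# A pure x-tilt: constant x-slope of 0.2, zero y-slope
Sx = np.full((8, 8), 0.2)
Sy = np.zeros((8, 8))
W = southwell_reconstruct(Sx, Sy, spacing=1.0)
# Adjacent columns of W differ by slope * spacing = 0.2
```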
2.3. Curvature wavefront sensor simulations

As outlined in the Introduction, the curvature wavefront sensor uses two defocused images taken at two different planes about the focal plane to determine the curvature of an incident optical wavefront. Unlike the HS, which subdivides the pupil plane into regions and measures wavefront slope by employing the first derivative, the curvature sensor measures the second derivative, i.e., the curvature of the wavefront. The sensor output is taken to be the intensity difference between two imaging planes, I1 and I2 (Fig. 3), and this difference image is proportional to the wavefront curvature. The working of the curvature wavefront sensor can be summarized using the following expression [6]:

[I1(r) − I2(r)] / [I1(r) + I2(r)] = C [ (∂W/∂n)(f r / l) δ_c − ∇²W(f r / l) ]    (5)

where I1(r) and I2(r) are the defocused extra- and intra-focal intensity images, respectively, and ∇² is the Laplacian operator. Here, W is the phase in the pupil, ∂W/∂n is the outward normal derivative of the wavefront at the pupil edge, δ_c is the linear impulse distribution around the pupil edge, 'l' is the degree of defocus from the focal point, f is the focal length, and the constant C is given by

C = λ f (f − l) / (2π l)    (6)

where λ is the wavelength of light. Curvature sensors have two important properties. Firstly, the differential signal shown in Eq. (5) ensures relative insensitivity to scintillation. Secondly, the amount of defocus, l, can be used to trade off the sensitivity of the wavefront sensor against image resolution. In the extreme case where Fraunhofer diffraction
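The normalized difference signal on the left-hand side of Eq. (5) and the constant C of Eq. (6) can be computed directly. A minimal sketch, with function names of our choosing:

```python
import numpy as np

def curvature_signal(I1, I2):
    """Normalized difference image, the left-hand side of Eq. (5).
    Pixels with zero total intensity are left at zero."""
    s = I1 + I2
    out = np.zeros_like(I1, dtype=float)
    np.divide(I1 - I2, s, out=out, where=s > 0)
    return out

def curvature_constant(wavelength, f, l):
    """C = lambda * f * (f - l) / (2 * pi * l), Eq. (6); SI units."""
    return wavelength * f * (f - l) / (2.0 * np.pi * l)

# Equal extra- and intra-focal images give a zero signal (flat wavefront)
I = np.full((4, 4), 2.0)
print(curvature_signal(I, I).max())   # → 0.0

# e.g. 633 nm light, f = 100 mm, defocus l = 3.77 mm
C = curvature_constant(633e-9, 0.1, 3.77e-3)
```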
Fig. 2. (a) Randomly simulated ocular aberration of the myopic eye. (b) Wavefront reconstructed using the HS with 31 × 31 microlenses. (c) Residual wavefront error with the HS sensor. (d) Simulated normalized difference intensity (see Eq. (5)). (e) Wavefront reconstructed using the curvature sensor (28 Zernike coefficients). (f) Residual wavefront error with the curvature sensor.
Fig. 3. Schematic of the curvature sensor for determination of (a) a spherical optical wavefront, and, (b) a defocused optical wavefront, where intensity distributions for both are shown at intra-focal (top) and extra-focal (bottom) measurement planes. Adapted from [24].
dominates, i.e., l = 0, the curvature sensor is limited to measuring tilt aberrations. Thus, the degree of Fresnel blurring essentially determines the resolution of the wavefront estimate. The operation of the curvature sensor is shown in Fig. 3, where a converging lens is used to concentrate light evenly as the wavefront propagates towards image plane P. In Fig. 3(a) an unaberrated (planar) wavefront passes through a focal point on plane P. Fig. 3(b) shows the effect of an aberrated (defocused)
Fig. 4. Estimation of a wavefront from curvature sensing data [24].
wavefront on focal plane P. The curvature of each wavefront is determined by using two measurement planes at an equal distance, l, from the focal plane, where the intensity distributions I1 and I2 are plotted with respect to the 1D spatial coordinates, x1 and x2, representing the intra- and extra-focal measurement planes, respectively. To illustrate the relationship between curvature and phase, we reduce the problem to one dimension. An estimate of an aberrated wavefront, such as the defocus shown in Fig. 3(b), is obtained by first determining the curvature of the signal. This is shown in the 1D case as s(x) at the top of Fig. 4, where the intensity distribution I2 is subtracted from I1 to reduce the effect of scintillations. The wavefront slope, W′(x), is then found by integrating the resulting curvature estimate with respect to the spatial coordinate. The 1D result is shown in the middle of Fig. 4. Lastly, the wavefront, W(x), is estimated by integrating the resulting slope, again with respect to the spatial coordinate, and the result is shown at the bottom of Fig. 4. Clearly, as the measurement planes are moved closer together, i.e., l → 0, the sensitivity of the curvature sensor is increased due to the higher dynamic range between I1 and I2; this is characterized by the concentration of light. However, the spatial resolution is severely compromised. Conversely, as 'l' is increased, the sensitivity of the curvature sensor deteriorates but the spatial resolution is improved. The measurement data can also be interpreted in terms of slopes [25]. To estimate the wavefront from Eq. (5) using wavefront slope measurements, we employed a method based on the linearity of the Radon transform to generalize the problem to 2D. Zernike polynomials were used to represent the reconstructed aberrations [24]. For an extended analysis of this method, one may refer to [8].
In this paper, the dependence of wavefront sensing accuracy on the selection of the distance between the measurement planes was analyzed using the effective propagation distance, z. This distance can be approximated as the ratio of the square of the focal length to the degree of defocus for l < f [8,26]:

z = f² / l.    (7)
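Eq. (7) can be inverted to choose the defocus l that realizes a desired effective propagation distance. A small sketch (function name is ours), reproducing the numerical example given later in the text:

```python
def defocus_distance(f, z):
    """Defocus l yielding an effective propagation distance z = f**2 / l
    (Eq. (7)); the approximation requires l < f, i.e. z > f."""
    l = f ** 2 / z
    assert l < f, "approximation requires l < f"
    return l

# f = 100 mm and the optimal z = 2.65 m give l = 3.77 mm
print(round(defocus_distance(0.1, 2.65) * 1e3, 2))  # → 3.77
```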
The difference signal, corresponding to Fig. 2(a), measured by the curvature sensor is shown in Fig. 2(d). The wavefront reconstructed using 28 Zernike coefficients is shown in Fig. 2(e). Here, the curvature sensor signal is sampled using 350 × 350 detector pixels. The results of the wavefront reconstruction match closely with those of a 31 × 31 subaperture HS sensor.
3. Evaluation metrics

Fig. 5. The dependence of the wavefront sensing accuracy (in terms of the RMS wavefront error) on the square root of the number of HS subapertures, N, used for wavefront sensing. The error bars show the standard deviation of the RMS wavefront error obtained while sensing 10 randomly simulated myopic aberrations.
The root mean square (RMS) wavefront error (e), computed between the simulated random wavefront (φ^R) and the estimated wavefront (φ̂), was used to measure the wavefront sensing
Fig. 6. The simulated wavefront in (a) and the wavefronts reconstructed using the HS wavefront sensor with (b) 11 × 11; (c) 21 × 21; (d) 31 × 31 subapertures are shown here. The values of the RMS wavefront error and correlation coefficient are shown at the top of each sub-figure.
accuracy and is calculated as

e = √[ Σ_{i=1}^{L} Σ_{j=1}^{M} (φ^R_{ij} − φ̂_{ij})² / (LM) ]    (8)
where L × M represents the dimension of the wavefront matrices in pixels. The dimensionless correlation coefficient (CC) was also used as an evaluation metric. It is defined as

CC = Σ_{i=1}^{L} Σ_{j=1}^{M} (φ^R_{ij} − φ̄^R)(φ̂_{ij} − φ̄̂) / √[ Σ_{i=1}^{L} Σ_{j=1}^{M} (φ^R_{ij} − φ̄^R)² · Σ_{i=1}^{L} Σ_{j=1}^{M} (φ̂_{ij} − φ̄̂)² ]    (9)

where φ̄^R and φ̄̂ represent the means of φ^R and φ̂, respectively.
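Both metrics are straightforward to compute over wavefront arrays. A minimal sketch of Eqs. (8) and (9), with function names of our choosing:

```python
import numpy as np

def rms_error(phi_r, phi_hat):
    """RMS wavefront error e of Eq. (8) over an L x M grid."""
    return np.sqrt(np.mean((phi_r - phi_hat) ** 2))

def correlation_coefficient(phi_r, phi_hat):
    """Correlation coefficient CC of Eq. (9): mean-subtracted inner
    product normalized by the product of the two norms."""
    a = phi_r - phi_r.mean()
    b = phi_hat - phi_hat.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64))
print(rms_error(w, w))                           # → 0.0
print(round(correlation_coefficient(w, w), 6))   # → 1.0
```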
4. Results

As expected, the performance of the HS sensor improves as the number of subapertures increases. Fig. 5 shows that the RMS wavefront error corresponding to wavefront reconstructions for ten randomly simulated aberrations of the myopic eye reduces with the square root of the number of subapertures, N. The error bars show that the standard deviation in the measured correlation coefficient decreases with an increasing number of subapertures. Furthermore, since the size of the pupil is fixed, as the number of subapertures is increased, the number of photons per subaperture decreases. HS spot size has a significant effect on the accuracy of centroiding and hence on wavefront sensing [23,27]. To eliminate the effects of HS spot size on the sensing accuracy, the size of the spot was kept constant as the number of subapertures was varied. In Fig. 6, three reconstructed wavefronts, in addition to the
Fig. 7. The performance of the curvature sensor as the number of Zernike modes is increased. Error bars show the standard deviation in reconstructing 100 simulated wavefronts.
randomly simulated wavefront, are shown for three different numbers of HS subapertures. The RMS wavefront error drops from 0.59 μm to 0.10 μm when an 11 × 11 microlens array is replaced with a 31 × 31 microlens array. This suggests that by increasing the number of subapertures, the magnitude and shape of the aberration can be detected more accurately. Also, it can be noted that the wavefront sensing errors near the boundary decrease significantly with an increasing number of subapertures. The accuracy of wavefront sensing in the case of the curvature wavefront sensor largely depends on the number of Zernike modes used for wavefront sensing and on the effective propagation distance. Fig. 7 shows that the mean RMS wavefront error (of 100 sample wavefronts with the corresponding reconstructed wavefronts) decreases as Zernike modes are accumulated. Corresponding to the sample simulated wavefront, Fig. 8 shows the wavefronts reconstructed using the curvature wavefront sensor with an increasing number of Zernike polynomials. The dominance of the primary defocus aberration among the ocular aberrations of myopic subjects can be seen in Fig. 1(a). A histogram of the magnitude of the defocus coefficient in 1000 randomly simulated wavefronts is shown in Fig. 9. The seemingly Gaussian shape of the histogram is due to the fact that the array of random numbers, 'v', was generated from a Gaussian distribution with zero mean and unit variance, and to the dominance of the primary defocus aberration. It is highly probable that the Zernike coefficient corresponding to primary defocus aberration, Z_2^0, takes a value near 4 μm in magnitude, making the randomly simulated aberrations defocus dominant. Henceforth, the performance of the wavefront sensors is analyzed in relation to the defocus coefficient. The scatter plots shown in Fig. 10 present a comparison of the HS and curvature sensors when defocus aberration is dominant.
Clearly, by comparing the RMS wavefront error scatter plots, the curvature sensor with 21 Zernike polynomials (corresponding to the first 6 radial
Fig. 9. This histogram shows the occurrence of different magnitudes of defocus aberration in 1000 simulated sample aberrations of the myopic eye.
Fig. 8. The simulated wavefront in (a) and the wavefronts reconstructed using the curvature sensor with (b) 6, (c) 15 and (d) 21 Zernike polynomials. The RMS wavefront error and correlation coefficient values for the wavefront reconstructions are shown at the top of each sub-figure.
Fig. 10. Comparison of the performance of wavefront sensors in terms of the RMS wavefront error for 100 randomly simulated wavefronts. (a) Curvature sensor using 21 Zernike coefficients. (b) Hartmann Shack wavefront sensor with 21 × 21 subapertures. (c) Hartmann Shack wavefront sensor with 31 × 31 subapertures.
Fig. 11. The choice of effective propagation distance and its impact on the accuracy of wavefront sensing in terms of the RMS wavefront error. The error bars shown here represent standard deviation for 100 randomly simulated aberrations.
orders of Zernike polynomials) outperforms the HS wavefront sensor with 21 × 21 subapertures. The consistency of sensing aberrations can be represented in terms of the standard deviation of the estimated RMS wavefront error. In the case of the curvature sensor (see Fig. 10(a)), the mean RMS wavefront error for 100 randomly simulated wavefronts is 0.20 μm and the standard deviation is 0.09 μm. In the case of a 21 × 21 HS wavefront sensor (see Fig. 10(b)), the mean RMS wavefront error is 0.25 μm and the standard deviation is 0.23 μm. The straight line fit (dotted line in Fig. 10(b)) shows that the RMS error increases with an increasing defocus coefficient in the myopic aberrations. The slope of this fit reduces as the number of subapertures is increased. For a 31 × 31 HS wavefront sensor (see Fig. 10(c)), the mean RMS wavefront error is 0.19 μm and the standard deviation is 0.13 μm. The standard deviation in the RMS wavefront error reduced significantly when using a greater number of HS subapertures. The mean RMS error for a 31 × 31 HS is nearly the same as that of the curvature sensor. In order to improve the consistency with a HS, the number of subapertures must be increased further. Increasing the number of subapertures reduces the number of photons per subaperture, forcing an increase in exposure time. In addition, a large number of subapertures increases the computational time needed to calculate local centroids and to reconstruct the wavefront from slope measurements. In contrast, the curvature sensor is more reliable, with the calculation of just 21 Zernike coefficients giving a low standard deviation in the RMS wavefront error. These results show a clear superiority of the curvature sensor in the presence of dominant primary defocus aberrations, not only in terms of the RMS wavefront error, but also in terms of the consistency of wavefront reconstruction.
The choice of the defocus distance plays a vital role in the performance of the curvature sensor. This is illustrated in Fig. 11. The plot of RMS wavefront error as a function of effective propagation distance shows that a minimal error is obtained for an optimal propagation distance, which is near z = 2.65 m. The error bars depict the standard deviation corresponding to 100 randomly simulated wavefronts. It can be noted that although the optimal effective propagation distance is large, the approximation l < f remains valid for a small enough focal length. For instance, with a lens of focal length 100 mm, the optimal defocus distance, l, is 3.77 mm (see Eq. (7)). Spatial sampling plays a vital role in determining the performance of the wavefront sensors. Fig. 12(a) shows that the RMS wavefront error of the curvature sensor increases nearly exponentially as the spatial sampling (n × n detector pixels) decreases. Below n = 40, the mean RMS wavefront error, corresponding to 100 randomly simulated myopic aberrations, exceeds 0.20 μm. The mean RMS wavefront error saturates at 0.18 μm for large 'n', as given by the exponential fitting function in Fig. 12(a). However, a closed loop operation of the curvature sensor was tested through simulations: with n = 350, the RMS wavefront error while estimating 28 Zernike coefficients for 100 sample aberrations is 55.80 ± 0.02 nm after two loops. The RMS wavefront error does not decrease further with increasing sampling or an increasing number of closed loop operations due to the limit on the number of estimated Zernike coefficients. In the case of the HS wavefront sensor, in addition to the HS specifications, the choice of centroiding algorithm and the SNR determine the dependence of wavefront sensing accuracy on spatial sampling.
Beyond SNR = 0 dB, we observed that the RMS wavefront error reduced with an increase in spatial sampling for the center of gravity (CoG) algorithm [28], and remained constant in the case of the IWC and matched filter (MF) algorithms [21,28]. However, it should be noted that both the IWC and MF methods are better suited than the CoG algorithm to low signal to noise conditions. In closed loop operation, the fundamental limit to the accuracy of the HS wavefront sensor is set by the number of microlenses. Detector readout noise is a noise source common to the curvature and HS wavefront sensors, and it increases the RMS wavefront error. In the case of the curvature wavefront sensor, detector noise was generated using a Gaussian distribution and added to the difference signal, I1(r) − I2(r). Similarly, readout noise was added to the HS focal spot pattern [21]. The RMS wavefront error as a function of SNR for both wavefront sensors is shown in Fig. 12(b). The error bars in the figure correspond to the standard deviation in the estimated RMS wavefront error over 100 trials. The standard deviation reduces for both wavefront sensors with increasing SNR. In these simulations, 28 Zernike coefficients were estimated in the curvature sensing calculations and the HS wavefront sensor consisted of 31 × 31 subapertures. In the presence
Fig. 12. RMS wavefront error as a function of (a) spatial sampling of the curvature sensor signal (28 Zernike coefficients), (b) SNR for curvature (28 Zernike coefficients) and HS (31 × 31 subapertures) wavefront sensors.
of noise, since the IWC algorithm does not accurately determine the spot centroid location in the case of the HS wavefront sensor, the MF algorithm was applied [21]. Employing a HS sensor with a greater number of subapertures increases the accuracy of wavefront sensing, but at the cost of reducing the signal across individual subapertures, which is critical for accurately determining the spot centroid in the presence of noise. This is one of the limitations of the HS: the number of subapertures cannot be increased indefinitely.
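The Gaussian readout noise described above can be injected in simulation as below. This is a simplified stand-in, with a function name of our choosing, in which the noise standard deviation is scaled from the RMS signal to match a target SNR in dB:

```python
import numpy as np

def add_readout_noise(signal, snr_db, rng):
    """Add zero-mean Gaussian readout noise scaled so that the ratio of
    the RMS signal to the noise standard deviation matches snr_db."""
    snr = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(np.mean(signal ** 2)) / snr
    return signal + rng.normal(0.0, sigma, signal.shape)

# Add 14 dB readout noise (the SNR used in Section 2.2) to a toy spot image
rng = np.random.default_rng(2)
y, x = np.indices((32, 32))
clean = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
noisy = add_readout_noise(clean, 14.0, rng)
```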
5. Discussion

The curvature wavefront sensor needs high resolution intra- and extra-focal images with high SNR. A practically achievable SNR, which places limits on the measurement of high spatial frequency features in the difference signal (Eq. (5)), can be disadvantageous for applications that require sensing higher order aberrations. However, for sensing the lower order dominant ocular aberrations of the myopic eye, the curvature sensor can be highly advantageous. Also, the choice of the propagation distance, z, is critical to the performance of the curvature wavefront sensor. Here, we have chosen a value of the effective propagation distance for which the accuracy of wavefront sensing is optimal. However, it needs to be noted that making the right choice of 'z' is not straightforward in practical situations, where a priori knowledge of the aberrations present in the optical system is not available. For sensing higher order aberrations, the optimal propagation distance may differ from the value preferred for lower order aberrations [29]. In order to sense higher order aberrations more accurately, a Badal optical system could be used to remove the large magnitude primary defocus aberration. Any residual defocus error and the higher order aberrations can then be detected using the wavefront sensors. A limitation of the current study is the lack of a large database of measured ocular aberrations taken from myopic subjects. A larger data set would allow better replication of the aberration statistics. The simulations were repeated with aberrated myopic wavefronts simulated using the data of 126 eyes obtained by Karimian et al. [30], and it was found that the conclusions drawn here are not affected. As can be seen from Fig. 6, although the boundary errors reduce with an increasing number of microlenses, the HS sensing algorithm for a low number of subapertures could be improved further by including appropriate boundary conditions.
Due to the limitations of the HS data obtained from the literature [17], only Zernike modes j ≤ 27 were included in the analysis. Pupil tracking errors were neglected in the present
comparative analysis. The results of the simulations presented here are relevant to clinical wavefront sensing applications with the limitations stated above and can form the basis for further numerical as well as experimental analysis of the HS and curvature wavefront sensors. In conclusion, through simulations, and by limiting the number of high order aberrations, we have shown that the curvature wavefront sensor performs better than a HS wavefront sensor with fewer subapertures when sensing ocular aberrations of the myopic eye with a dominant primary defocus aberration. A HS wavefront sensor with a large number of subapertures performs as well as the curvature sensor. It was noted that in the case of the HS, the number of subapertures plays a vital role in determining the accuracy of sensing, while in the case of the curvature sensor, the number of Zernike modes used for sensing and the effective propagation distance play the critical role.
Acknowledgments

We thank the anonymous reviewer and the editor for a critical review. Vyas Akondi would like to thank Dr. Brian Vohnsen for his support during the course of this work; financial support from Science Foundation Ireland (Grants: 07/SK/B1239a and 08/IN.1/B2053) is also gratefully acknowledged.

References

[1] S. Vitale, R. Sperduto, F. Ferris III, Archives of Ophthalmology 127 (2009) 1632.
[2] J. Liang, B. Grimm, S. Goelz, J.F. Bille, Journal of the Optical Society of America A 11 (1994) 1949.
[3] P.M. Prieto, F. Vargas-Martín, S. Goelz, P. Artal, Journal of the Optical Society of America A 17 (2000) 1388.
[4] J.W. Hardy, Adaptive Optics for Astronomical Telescopes, Oxford University Press, 1998.
[5] V. Akondi, M.B. Roopashree, B.R. Prasad, in: B. Tyson (Ed.), Topics in Adaptive Optics, InTech, 2012, p. 167.
[6] N. Roddier, Curvature Sensing for Adaptive Optics: A Computer Simulation, Ph.D. Thesis, The University of Arizona, 1989.
[7] F. Roddier, Optics Communications 113 (1995) 357.
[8] M.A. van Dam, R.G. Lane, Journal of the Optical Society of America A 19 (2002) 1390.
[9] O. Guyon, C. Blain, H. Takami, Y. Hayano, M. Hattori, M. Watanabe, Publications of the Astronomical Society of the Pacific 120 (2008) 655.
[10] S. Gruppetta, L. Koechlin, F. Lacombe, P. Puget, Optics Letters 30 (2005) 2757.
[11] S. Gruppetta, F. Lacombe, P. Puget, Optics Express 13 (2005) 7631.
[12] F. Díaz-Doutón, J. Pujol, M. Arjona, S.O. Luque, Optics Letters 31 (2006) 2245.
[13] C. Torti, S. Gruppetta, L. Diaz-Santana, Journal of Modern Optics 55 (2008) 691.
[14] O. Guyon, Publications of the Astronomical Society of the Pacific 122 (2010) 49.
[15] R. Ragazzoni, Journal of Modern Optics 43 (1996) 289.
[16] V. Akondi, S. Castillo, B. Vohnsen, Optics Express 21 (2013) 18261.
[17] J. Schwiegerling, Clinical and Experimental Optometry 92 (2009) 223.
[18] J. Goodman, Introduction to Fourier Optics, McGraw-Hill, 2008.
[19] O. Manneberg, Design and simulation of a high spatial resolution Hartmann-Shack wavefront sensor, M.Sc. Thesis, Department of Physics, Royal Institute of Technology, 2005.
[20] V. Akondi, M.B. Roopashree, B.R. Prasad, Advances in Recent Technologies in Communication and Computing, IEEE (2009) 366.
[21] V. Akondi, B. Vohnsen, Ophthalmic & Physiological Optics 33 (2013) 434.
[22] W.H. Southwell, Journal of the Optical Society of America 70 (1980) 998.
[23] V. Akondi, M.B. Roopashree, B.R. Prasad, International Journal of Computer Applications 1 (2010) 32.
[24] T.Y. Chew, Wavefront Sensors in Adaptive Optics, Ph.D. Thesis, Department of Electrical & Computer Engineering, University of Canterbury, 2008.
[25] M.A. van Dam, R.G. Lane, in: Proceedings of SPIE, vol. 4825, 2002, pp. 237–248.
[26] F. Roddier, Applied Optics 27 (1988) 1223.
[27] V. Akondi, M.B. Roopashree, B.R. Prasad, in: Proceedings of SPIE, vol. 7588, 2010, p. 758806.
[28] V. Akondi, M.B. Roopashree, B.R. Prasad, in: Proceedings of the National Conference on Innovative Computational Intelligence & Security Systems, 2009, pp. 400–405.
[29] S. Huang, F. Xi, C. Liu, Z. Jiang, Journal of Modern Optics 59 (2012) 35.
[30] F. Karimian, S. Feizi, A. Doozande, Journal of Ophthalmic and Vision Research 5 (2010) 3.