Comparison of Fourier phase spectrum estimation using deconvolution from wavefront sensing and bispectrum reconstruction


Optics Communications 133 (1997) 381-392 (1 January 1997)

Full length article

Michael C. Roggemann a, Cynthia A. Hyde b, Byron M. Welsh c

a Department of Engineering Physics, Air Force Institute of Technology, Graduate School of Engineering, Wright-Patterson AFB, OH 45433, USA
b Center for Electro-Optics, University of Dayton, 300 College Park, Dayton, OH 45469, USA
c Department of Electrical and Computer Engineering, Air Force Institute of Technology, Graduate School of Engineering, Wright-Patterson AFB, OH 45433, USA

Received 6 December 1995; accepted 22 March 1996

Abstract

Atmospheric turbulence limits the resolution of astronomical optical imaging systems. Speckle imaging and deconvolution from wavefront sensing (DWFS) are two post-detection image processing methods for overcoming some turbulence effects which do not require adaptive optics. Both speckle imaging and DWFS create estimates of the object irradiance distribution by first estimating the Fourier transform, or spectrum, of the object. Previous work has established the signal-to-noise ratio performance for some of the estimators used in both speckle imaging and DWFS. However, the comparative quality of the object spectrum phase estimation provided by these two techniques has not been studied in detail. In this paper we present the results of a simulation study of the comparative Fourier phase estimation errors for speckle imaging using the bispectrum technique, and for DWFS. It is shown that even for good wavefront sampling and bright objects DWFS provides Fourier phase spectrum estimation of lower quality than the phase spectrum estimate provided by the bispectrum technique. When the beacon is not bright enough to provide a good signal-to-noise ratio in the wavefront sensor the bispectrum technique provides much better Fourier phase spectrum estimation than DWFS.

1. Introduction

Turbulent motion in the atmosphere causes the index of refraction of air to be random in both space and time. Light from a distant object propagating through this random index distribution is randomly retarded. The effect of this random retardation on imaging systems is equivalent to the presence of an aberration in the pupil of the telescope which is random in both space and time [1]. Using the methods of Fourier optics it is easy to show that aberrations never improve

the performance of imaging systems compared to the diffraction limited case, and in general, aberrations degrade optical performance [2]. In fact, for the case of imaging space objects with large ground-based telescopes at visible wavelengths this random aberration is large, and the resulting reduction in optical performance is severe. For large uncompensated telescopes using long exposure times the resolution obtained in practice is approximately equal to that of a telescope $r_0$ in diameter, where $r_0$ is the so-called Fried

parameter characterizing the strength of the turbulence [3]. Typical values for $r_0$ at visible wavelengths range from less than 10 cm at poor seeing sites to 20-30 cm at the very best sites in the world. An uncompensated telescope has the light gathering capability of its full aperture, an important consideration for imaging very dim objects. However, the realizable resolution is imposed by atmospheric turbulence rather than the optical design of the telescope. Many techniques for overcoming the optical effects of turbulence have been studied. These correction techniques include adaptive optics [4,5], post detection image processing, such as speckle imaging (SI) [6-8] and deconvolution from wavefront sensing (DWFS) [9-11], and hybrid techniques which use elements of both adaptive optics and image processing [12-16]. The post detection image processing techniques of speckle imaging and DWFS are of interest because they avoid the investment in expensive and complicated hardware required for adaptive optics. Much is known about the performance of both SI and DWFS. Both of these imaging techniques create an estimate of the object irradiance distribution from an estimate of its Fourier transform, or spectrum. We shall refer to the phase of the Fourier transform as the phase spectrum of the function. In SI the modulus of the object spectrum and the object phase spectrum are computed by applying different algorithms to short exposure measurements of the object of interest and a reference star. The modulus of the object spectrum is typically reconstructed using the Labeyrie technique [6], also referred to as speckle interferometry. The phase spectrum is generally reconstructed using either the Knox-Thompson method [7,17] or the bispectrum method [8] in conjunction with a suitable phase reconstruction technique [18]. The signal-to-noise ratio (SNR) of the unbiased image power spectrum estimator Q(f) used to obtain the modulus of the object spectrum has been derived [19], and photon noise effects on both the bispectrum [20] and cross spectrum [21] have been analyzed. The SNR of the object spectrum estimate obtained using DWFS has also been studied, and compared to the SNR of the estimator used in speckle interferometry [22]. It was shown in Ref. [22] that the SNR of the unbiased speckle interferometry estimator Q(f) is superior to the SNR of the DWFS estimator for point

source objects. However, for extended objects the SNR of the DWFS estimator was shown to be either comparable to or greater than the SNR of Q(f). It should be noted that in Ref. [22] the DWFS and SI SNRs were compared directly without accounting for the fact that the DWFS estimator is complex-valued while the SI estimator Q(f) is real-valued. In Ref. [23] the SNR performance of SI and DWFS was theoretically investigated for the case in which the two SNRs being compared are for the same estimated quantity: the object power spectrum. This previous analysis revealed that DWFS has a slight SNR advantage over speckle interferometry regardless of the object characteristics (point or extended source) for the case that performance is limited by image shot noise. This result was derived assuming that the DWFS wavefront sensor enjoyed a high signal-to-noise ratio. This previous work also revealed that SI had a slight SNR advantage over DWFS for the case where the atmospherically induced variation of the OTF limited performance. In either case these reported performance differences were slight, and we can generally conclude that DWFS and speckle interferometry give the same performance in terms of estimating the object power spectrum. A key factor distinguishing DWFS from SI is that no separate phase spectrum reconstruction step analogous to computing and processing the bispectrum is required in DWFS, since the complex-valued object spectrum is estimated directly in DWFS. An initial comparison of the quality of the phase spectrum estimates obtained by DWFS and SI was presented in Ref. [11]. However, in Ref. [11] only point sources were considered, and an idealized wavefront sensor model capable of perfectly estimating the first few Zernike polynomials was assumed. In this paper we present the results of a study of the comparative errors in phase spectrum estimation between DWFS and SI. Due to the complexity of the statistical estimators involved, an accurate simulation was used to perform this study. Both the DWFS and the SI estimators and their associated SNRs are object dependent. Hence, the signal-to-noise ratios and imaging performance of these two techniques depend upon the object being imaged. To explore the object dependence of the quality of the phase spectrum reconstruction two objects were simulated for the results presented here: (i) an infinitely distant point source representing an unresolved star; and (ii) a computer rendering of a


satellite model to simulate imaging an extended exoatmospheric object. The object brightness and $r_0$ were varied for both objects, and the range of the extended object was varied. DWFS results were computed using a Hartmann sensor model which included slope errors arising from using an extended object as a beacon [24], and errors which arise from the use of a realistic reconstruction algorithm. The main result of this paper is that DWFS implemented with least squares phase reconstruction is shown to provide object phase spectrum estimation of lower quality than the phase estimate obtained with the bispectrum technique, even for bright objects and good wavefront sampling. The qualifier bright is used here to describe the case where the object or beacon is bright enough to provide ample signal for low noise wavefront estimation using a Hartmann sensor. For the hardware system studied here the requirement is that the objects be brighter than eighth visual magnitude. For dimmer objects the object phase spectrum estimate obtained using the bispectrum technique is better than the phase spectrum estimate obtained with DWFS. The remainder of this paper is organized as follows. In the next section we present the mathematical models used for this analysis. In Section 3 simulations for SI and DWFS are described. Results are presented in Section 4, and conclusions are drawn in Section 5.

2. Models

In this section we present the various models for the systems and components used in the simulation.

2.1. Imaging system model

The incoherent imaging system is modeled as a linear, shift invariant system described by the point spread function h(x), where x is an image plane coordinate [2]. In this model the noise free image irradiance distribution i(x) is given by the convolution of the object irradiance distribution o(x) and h(x):

$$i(x) = \int d\alpha\, o(\alpha)\, h(x - \alpha), \tag{1}$$

where the integration is taken over all space. The Fourier transform F(u) of a function f(x) is defined as [2]

$$F(u) = \int dx\, f(x) \exp(-j 2\pi u \cdot x), \tag{2}$$

where u is the 2-D spatial frequency coordinate. Fourier transforming Eq. (1) yields the familiar relationship between the image spectrum I(u), the object spectrum O(u), and the optical transfer function (OTF) H(u):

$$I(u) = O(u)\, H(u). \tag{3}$$
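To make the frequency-domain relationship concrete, the following minimal numpy sketch forms a noise-free image by multiplying an object spectrum with an OTF and inverse transforming, per Eqs. (1)-(3). The Gaussian stand-in for the PSF and the array sizes are illustrative assumptions only; the paper obtains H(u) from the turbulent pupil via Eq. (4).

```python
# Minimal sketch of Eqs. (1)-(3): image formation in the Fourier domain.
import numpy as np

n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)

obj = np.zeros((n, n))
obj[n // 2, n // 2] = 1.0                 # point source object o(x)

psf = np.exp(-(X**2 + Y**2) / (2 * 4.0**2))
psf /= psf.sum()                          # normalized stand-in PSF h(x)

O = np.fft.fft2(np.fft.ifftshift(obj))    # object spectrum O(u)
H = np.fft.fft2(np.fft.ifftshift(psf))    # OTF H(u), Eq. (3)
img = np.fft.fftshift(np.fft.ifft2(O * H).real)  # noise-free image i(x), Eq. (1)
```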

2.2. OTF Estimation

In an imaging system viewing astronomical objects through atmospheric turbulence the effect of the turbulence is to cause a random aberration in the telescope pupil. A single sample function from the random process constituted by the atmospheric turbulence-induced aberration is denoted $\theta_i(x_p)$, where $x_p$ is a spatial location in the pupil of the telescope. Assuming the telescope is diffraction limited in the absence of the atmosphere, the OTF resulting from the random aberration $\theta(x_p)$ is given by [2]

$$H(u) = N^{-1} \int dx_p\, P(x_p)\, P(x_p - u\lambda f)\, \exp\{j[\theta(x_p) - \theta(x_p - u\lambda f)]\}, \tag{4}$$

where f is the distance between the exit pupil and the image plane, $\lambda$ is the wavelength of operation, and the normalizing factor, N, is given by

$$N = \int dx_p\, |P(x_p)|^2. \tag{5}$$

The DWFS method requires an estimate of H(u), referred to here as $\hat{H}(u)$. In DWFS the estimate $\hat{H}(u)$ is obtained in the following manner: (i) the WFS is exposed to a pupil image which contains the aberration $\theta(x_p)$; (ii) an estimate of $\theta(x_p)$, called $\hat{\theta}(x_p)$, is computed from the WFS measurement; and (iii) the estimate $\hat{\theta}(x_p)$ is used to compute $\hat{H}(u)$ using

$$\hat{H}(u) = N^{-1} \int dx_p\, P(x_p)\, P(x_p - u\lambda f)\, \exp\{j[\hat{\theta}(x_p) - \hat{\theta}(x_p - u\lambda f)]\}. \tag{6}$$
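Numerically, Eqs. (4) and (6) are the normalized autocorrelation of the generalized pupil function, which is how the simulations of Section 3 compute the OTF. A minimal sketch under illustrative assumptions (grid size, circular pupil, and the zero-phase test case are not from the paper):

```python
import numpy as np

def otf_from_phase(pupil, theta):
    # Eqs. (4)/(6): H(u) is the autocorrelation of the generalized pupil
    # function P(x_p) exp[j theta(x_p)], normalized so that H(0) = 1.
    gpf = pupil * np.exp(1j * theta)
    G = np.fft.fft2(gpf, s=(2 * gpf.shape[0], 2 * gpf.shape[1]))  # pad to avoid wrap-around
    acf = np.fft.ifft2(np.abs(G) ** 2)   # correlation theorem
    return acf / acf[0, 0]               # acf[0, 0] equals N of Eq. (5)

# usage: diffraction limited OTF of a circular pupil (theta = 0)
n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
pupil = ((X**2 + Y**2) <= (n // 4) ** 2).astype(float)
H = otf_from_phase(pupil, np.zeros((n, n)))
```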

In the next subsection we present the Hartmann sensor model, and in Section 2.4 we present the estimator for $\theta_i(x_p)$.


2.3. Hartmann sensor model

A Hartmann-type wavefront sensor was modeled in this effort. The slope measurement provided by the nth subaperture in the qth direction is given by [25,26]

$$s_{nq} = \int dx_s\, \phi(x_s)\, \nabla W_n(x_s) \cdot \hat{x}_q + \alpha_{nq}, \tag{7}$$

where $x_s$ is a spatial coordinate in the subaperture pupil, $\hat{x}_q$ is a unit vector in the qth orthogonal direction of sensitivity of the Hartmann sensor (i.e., q indicates either the x or the y direction), and $\phi(x_s)$ is the incident phase aberration. In Eq. (7) $W_n(x_s)$ is the phase weighting applied by the nth subaperture and $\nabla W_n(x_s)$ is the spatial gradient of $W_n(x_s)$. $W_n(x_s)$ is a constant inside the subaperture pupil defined such that

$$\int dx_s\, W_n(x_s) = 1, \tag{8}$$

and is zero outside the subaperture pupil. The term $\alpha_{nq}$ in Eq. (7) is the slope measurement error in one direction. For all of the results presented here square Hartmann subapertures with side length d were modeled. An idealized Hartmann sensor detector was assumed here to provide an upper bound on performance. Specifically, the detector was assumed to be a shot noise limited quad cell with detectors that extend infinitely away in the appropriate quadrant from the central point at which the quad cells meet. For this case the standard deviation of the slope measurement error $\alpha_{nq}$, referred to as $\sigma_\alpha$, is given by [24]

$$\sigma_\alpha = \left[ (K_W)^{1/2} \int du_x\, \tilde{I}_b(u_x, 0)\, \bar{H}_n(u_x, 0) \right]^{-1}, \tag{9}$$

where $K_W$ is the average number of photo-events per integration time in the subaperture, and $\bar{H}_n(u_s)$ is the average tilt-removed OTF for the subaperture. Following Ref. [24], the form of $\bar{H}_n(u_s)$ derived by Heidbreder in Ref. [27] was used. In Eq. (9) $u_x$ is the x component of the spatial frequency vector $u_s$, defined in the Fourier transform domain of the subaperture image plane and normalized by the diffraction limited cutoff frequency for the subaperture, $d/\lambda f_l$, where $f_l$ is the focal length of the subaperture lenslet, d is the subaperture side length, and $\lambda$ is the wavelength of operation of the wavefront sensor. $\tilde{I}_b(u_s)$ is the Fourier transform of the beacon irradiance distribution, normalized to have the value of unity at $u_s = 0$; $\tilde{I}_b(u_s)$ is related to the image of the beacon in the image plane of the subaperture, $I_b(x_s)$, by the Fourier transform relationship of Eq. (2).

Computing the standard deviation of the slope measurement error requires the use of a model for the beacon irradiance distribution. Two beacon models were used for the results presented here: a point source, and a computer rendered extended object. For the point source $\tilde{I}_b(u_s) = 1$. For the extended object a Gaussian irradiance distribution was used to model the beacon, with width set by the distance from the telescope to the beacon and by $\sigma_b$, the $e^{-1}$ width of the beacon at the beacon altitude; $\sigma_b$ was set by the longest dimension of the object, which is 12 m in the case studied here.
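A sketch of the slope model of Eqs. (7)-(9) under simplifying assumptions not taken from the paper: square subapertures on a regular grid that tiles the phase array, uniform weighting $W_n$, and a caller-supplied noise level standing in for the $\sigma_\alpha$ of Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(0)

def hartmann_slopes(phase, n_sub, sigma_alpha, dx=1.0):
    # Each subaperture reports the average x and y phase gradient over its
    # pupil (the uniform-weighting case of Eq. (7)), plus zero-mean Gaussian
    # noise of standard deviation sigma_alpha (Eq. (9)).
    gy, gx = np.gradient(phase, dx)
    m = phase.shape[0] // n_sub              # samples per subaperture side
    slopes = []
    for i in range(n_sub):
        for j in range(n_sub):
            sub = np.s_[i * m:(i + 1) * m, j * m:(j + 1) * m]
            slopes.append(gx[sub].mean() + sigma_alpha * rng.standard_normal())
            slopes.append(gy[sub].mean() + sigma_alpha * rng.standard_normal())
    return np.array(slopes)
```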

2.4. Wavefront phase estimator

The wavefront phase estimator, $\hat{\theta}(x_p)$, for the incident aberration is written as a weighted sum of elementary functions, with the weight $c_j$ of the jth elementary function obtained from the WFS slope measurements using

$$c_j = \sum_n M_{jn}\, s_n, \tag{14}$$

where $s_n$ is the nth entry in the vector of WFS slope measurements, and $M_{jn}$ is an element of the reconstruction matrix.


One method for computing $M_{jn}$, referred to as the least squares method, is to minimize the sum of the mean square values of the error, $\Delta$, between the actual slope measurements and the slope measurement vector which would be given by a phase front composed of a linear combination of the elementary functions [13]:

$$\Delta = (s - Hc), \tag{15}$$

where c and s represent column vectors of weights and WFS slope measurements, respectively, and H is a Jacobian matrix containing the sensitivities of the slope measurements to the elementary functions. The elements of H, $h_{nj}$, are given by

$$h_{nj} = \frac{\partial s_n}{\partial c_j}. \tag{16}$$

The quantity explicitly minimized is $\Delta^T \Delta$, where the superscript T indicates the transpose operation. The minimization is accomplished by differentiating $\Delta^T \Delta$ with respect to $c_j$, setting the result equal to zero, and solving for the optimal choice of $c_j$, $c^{opt}$. Performing this series of operations yields the result [13]

$$c^{opt} = (H^T H)^{-1} H^T s. \tag{17}$$

Thus, the least squares reconstruction matrix is given by

$$M = (H^T H)^{-1} H^T. \tag{18}$$

In previous work both Zernike polynomials [9,16,22,28] and two-dimensional Gaussian functions with user selected centers and $e^{-1}$ widths [29] have been used as elementary functions to compute the estimate $\hat{\theta}(x_p)$. Following Ref. [9] we use Zernike polynomials with the ordering and normalization convention first presented by Noll in Ref. [30], with the exception that Noll's first Zernike polynomial, corresponding to piston, is ignored since the Hartmann-type WFS is insensitive to piston. An upper bound on the number of Zernike polynomials is imposed by the number of WFS subapertures and the WFS geometry. This restriction arises from the need to compute the inverse of the matrix $H^T H$. As the number of Zernike polynomials to be used in wavefront reconstruction is increased, the matrix $H^T H$ eventually becomes ill-conditioned because the WFS cannot uniquely sense modes beyond some highest order imposed by the WFS geometry.
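In code, Eqs. (14)-(18) reduce to a pseudoinverse. A short sketch follows; the sensitivity matrix H_sens is a hypothetical name for the Jacobian of Eq. (16), and numpy's pinv is used instead of an explicit $(H^T H)^{-1} H^T$ so the ill-conditioning just discussed degrades gracefully rather than failing outright.

```python
import numpy as np

def least_squares_reconstructor(H):
    # Eq. (18): M = (H^T H)^{-1} H^T when H has full column rank.
    # pinv discards near-zero singular values, which appear when too many
    # Zernike modes are requested for the WFS geometry.
    return np.linalg.pinv(H)

# usage, Eq. (14): modal weights from a slope vector s
# H_sens = ...  # (num_slopes x num_modes) Jacobian, elements of Eq. (16)
# M = least_squares_reconstructor(H_sens)
# c_hat = M @ s
```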


No attempt has been made here to determine the optimal number of Zernike polynomials to use for wavefront reconstruction in DWFS. For the work presented here the first 40 Zernike polynomials, excluding piston, were used for wavefront phase reconstruction. The choice of 40 Zernike polynomials derives from Ref. [16], where the structure functions of $\theta(x_p)$ and $\hat{\theta}(x_p)$ were computed for a similar WFS. In Ref. [16] 40 Zernike polynomials was found to give a good match between the structure function of $\theta(x_p)$ and the structure function of $\hat{\theta}(x_p)$. Additionally, in Ref. [31] it was shown for a similar WFS and $D/r_0$ = 13.3, which is analogous to the $r_0$ = 7 cm case presented later, that minimum wavefront estimation errors were obtained using a similar number of Zernike modes.

2.5. DWFS object spectrum estimator

In DWFS simultaneous measurements of an image and the output of a wavefront sensor (WFS) from both the object of interest and a reference star are processed to create an estimate of the object irradiance distribution [9,16,22]. The object spectrum estimate $\hat{O}_D(f)$ is obtained from DWFS using

$$\hat{O}_D(f) = \frac{\langle D(f)\, \hat{H}^*(f) \rangle}{\langle H_{ref}(f)\, \hat{H}^*_{ref}(f) \rangle} = |\hat{O}_D(f)|\, \exp[j \hat{\phi}_D(f)], \tag{19}$$

where D(f) is the spectrum of the detected image, the * indicates the complex conjugate, $\hat{H}(f)$ is the estimate of the OTF obtained from the wavefront sensor measurements of the object, $H_{ref}(f)$ is the OTF computed from reference star measurements, $\hat{H}_{ref}(f)$ is the associated estimate of the OTF obtained from the wavefront sensor measurements of the reference star, and $\langle \cdot \rangle$ is the time or ensemble average operator. In Eq. (19) the quantity $\hat{\phi}_D(f)$ denotes the estimate of the object phase spectrum obtained using DWFS. If the detected images from the reference star are denoted by r(x), with associated spectrum R(u), then $H_{ref}(u)$ is given by

$$H_{ref}(u) = \frac{R(u)}{R(0)}. \tag{20}$$

The estimator in Eq. (19) differs from the form of the DWFS estimator given in Ref. [9]; the estimator given in Ref. [9] was shown to be biased in Ref. [16]. The form of the DWFS estimator given in Eq. (19) is immune to the bias problem, but requires image and WFS measurements of both the object of interest and a reference star.
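A direct numpy transcription of the estimator of Eq. (19), under the assumption that stacks of per-frame spectra are available; the function and argument names are illustrative, not from the paper.

```python
import numpy as np

def dwfs_object_spectrum(D, H_hat, H_ref, H_hat_ref):
    # Eq. (19): bias-immune DWFS estimator. Each argument is an array of
    # shape (n_frames, N, N) holding one 2-D spectrum per short exposure.
    num = (D * np.conj(H_hat)).mean(axis=0)           # <D(f) H_hat*(f)>
    den = (H_ref * np.conj(H_hat_ref)).mean(axis=0)   # <H_ref(f) H_hat_ref*(f)>
    return num / den

# The DWFS phase spectrum estimate of Eq. (19) is then np.angle(O_hat).
```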

2.6. Speckle imaging

Speckle imaging was implemented using Labeyrie's speckle interferometry technique [6] to estimate the modulus of the object spectrum, and the bispectrum technique [8] to estimate the phase of the object spectrum. In this section we review the mathematical models used to implement speckle imaging. Speckle interferometry is a technique for estimating the squared modulus of the object spectrum from a data set of short exposure images by computing the second moment of the detected image spectrum. Speckle interferometry requires a data set of images of the object d(x), and a set of images of a nearby reference star r(x). Let the Fourier transform of the images of the object be denoted by D(u) and the Fourier transform of the reference star images be denoted by R(u). The squared modulus of the object spectrum cannot be estimated directly from the second moment of the object images $E\{|D(u)|^2\}$ since for photon-limited detection

$$E\{|D(u)|^2\} = |O(u)|^2\, E\{|H(u)|^2\} + \bar{K}, \tag{21}$$

where $\bar{K}$ is the average number of photo-events per image. Hence, the unbiased speckle interferometry estimator is defined by [19]

$$Q(u) = |D(u)|^2 - \bar{K}. \tag{22}$$

The reference star images are processed in the same manner to estimate $E\{|H(u)|^2\}$. Since the reference star is effectively a point source,

$$E\{|R(u)|^2\} = C\, E\{|H_{ref}(u)|^2\} + \bar{K}_r, \tag{23}$$

where C is a constant and $\bar{K}_r$ is the average number of photo-events per reference star image, so that

$$E\{|R(u)|^2\} - \bar{K}_r = C\, E\{|H_{ref}(u)|^2\}. \tag{24}$$

Using Eq. (24) we can obtain an estimate of $E\{|H_{ref}(u)|^2\}$, and the expected value of Q(u) can be written as

$$E\{Q(u)\} = |O(u)|^2\, E\{|H_{ref}(u)|^2\}. \tag{25}$$

Finally, the estimate of the modulus of the object spectrum obtained using speckle imaging is given by

$$|\hat{O}_S(u)| = \left[ \frac{E\{Q(u)\}}{E\{|\hat{H}_{ref}(u)|^2\}} \right]^{1/2}. \tag{26}$$

The bispectrum $B(u_1, u_2)$ of a detected image with spectrum D(u) is defined as [8]

$$B(u_1, u_2) = D(u_1)\, D(u_2)\, D^*(u_1 + u_2). \tag{27}$$

The phase of the object spectrum is encoded in the phase of the bispectrum, but is not directly available from the bispectrum. Hence, the bispectrum must be processed to recover the phase spectrum. For all of the results presented the recursive algorithm was applied to the bispectrum to reconstruct the estimated object phase spectrum obtained from speckle imaging, $\hat{\phi}_S(u)$ [18,32]. The estimated object spectrum for speckle imaging is then obtained from

$$\hat{O}_S(u) = |\hat{O}_S(u)|\, \exp[j \hat{\phi}_S(u)]. \tag{28}$$

In computing the bispectrum the maximum separation between $u_1$ and $u_2$ was restricted to be less than or equal to $r_0/\lambda$ for the maximum value of $r_0$ = 0.15 m used in this study.
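The recursive reconstructor of Refs. [18,32] exploits the identity arg B(u1, u2) = phi(u1) + phi(u2) - phi(u1 + u2), which follows from Eq. (27). The following is only a 1-D illustration using the u1 = 1 recursion path; it is not the paper's 2-D implementation, and real reconstructors average over many (u1, u2) pairs.

```python
import numpy as np

def recursive_phase_1d(bispec_phase, n):
    # bispec_phase[u1, u2] holds arg E{B(u1, u2)}. With phi(0) = 0 and
    # phi(1) set to zero (absorbing overall image tilt), Eq. (27) gives
    # phi(u1 + u2) = phi(u1) + phi(u2) - arg B(u1, u2).
    phi = np.zeros(n)
    for u in range(2, n):
        phi[u] = phi[1] + phi[u - 1] - bispec_phase[1, u - 1]
    return phi
```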

2.7. Phase spectrum error metric

The key metric for comparing DWFS and speckle imaging examined here is the squared error between the estimate of the object phase spectrum and the true object phase spectrum. To mathematically define this squared error, let the true object spectrum O(u) be represented by a modulus |O(u)| and phase $\phi(u)$ as

$$O(u) = |O(u)|\, \exp[j \phi(u)]. \tag{29}$$

The phase error between the true object spectrum and the estimated object spectrum is then obtained using Eq. (29) in conjunction with either Eq. (19) or Eq. (28) to obtain the two-dimensional square phase error $\epsilon^2_{D,S}(u)$ defined by

$$\epsilon^2_{D,S}(u) = \left[ \arg\{O(u)\, \hat{O}^*_{D,S}(u)\} \right]^2, \tag{30}$$


where the subscripts D and S are used to indicate that this operation was applied to object spectrum estimates obtained with both DWFS and speckle imaging. Two-dimensional arrays of $\epsilon^2_{D,S}(u)$ were then averaged around circles of constant radius to obtain one-dimensional plots of the squared phase error, denoted by $\epsilon^2_{D,S}(u)$, where the scalar frequency variable u represents the radius of the appropriate circle over which the data was averaged. It should be noted that for point sources and other objects with rotational symmetry the squared phase error is expected to be rotationally symmetric in frequency space. Hence, in the symmetric object case the radial averaging approach described above provides an additional averaging step in estimating the true value of the squared phase error at some radius in frequency space. However, for objects lacking rotational symmetry, such as the satellite object used in this study, the squared phase error will not in general be rotationally symmetric. In the case of objects without rotational symmetry the radial averaging technique described above simply provides a convenient means of presenting a function with two independent variables as a function of a single independent variable.
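Both steps — the squared phase error of Eq. (30) and the radial averaging just described — are compact in numpy. In this sketch the frequency origin is assumed to sit at the array center, and integer-radius binning stands in for whatever ring discretization the paper actually used.

```python
import numpy as np

def squared_phase_error(O_true, O_est):
    # Eq. (30): eps^2(u) = [arg{O(u) O_est*(u)}]^2 on the 2-D frequency grid.
    return np.angle(O_true * np.conj(O_est)) ** 2

def radial_average(arr):
    # Average around circles of constant radius about the array center,
    # yielding the 1-D curves plotted in Section 4.
    n = arr.shape[0]
    y, x = np.indices(arr.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=arr.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts
```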

3. Simulations for DWFS and SI

Simulations were used to compute object estimates using DWFS and SI. Simulations provide a convenient means of estimating performance for cases where the statistical estimators cannot be written in a form which permits easy analytic evaluation, such as is the case for phase spectrum estimation using DWFS and SI. These simulations also provide a tool for evaluating performance across a wide range of input parameters. Separate computer programs were used for the DWFS and SI results. However, these programs were derived from codes which have been carefully tested against theoretical performance [13], and have been extensively used in other studies [14,22,33]. The phase screens are created using a method known to provide phase screens with the Kolmogorov structure function [34]. We describe the simulations here.

Both simulations have a setup phase where sample spacings are determined, the telescope pupil template is created, and the WFS template is created. The inputs, including $r_0$, pupil diameter, number of subapertures across the pupil, number of iterations, average number of photo-events per image, and WFS SNR, were also read during the setup phase. The reconstruction matrix is computed in the DWFS simulation. For the results presented here a 1 m diameter pupil was sampled with a sample spacing of 2.08 cm, with 49 samples across the diameter of the pupil. WFS subapertures contained 5 samples per side, and the side length of the subapertures was 8.33 cm, yielding a total of 88 fully illuminated subapertures within the pupil. The pupil was embedded in an 81 x 81 pixel array for processes involving only the pupil phases. The pupil was embedded in a 256 x 256 array for processing using Fourier methods to guarantee that wrap-around error could not occur. The size of the extended object was scaled for the appropriate range so that the spectral components of the Fourier transform of the object and the spectral components of the OTF were on the same grid in the sampled frequency space. The simulation models near field turbulence conditions by creating a random phase screen and placing this phase screen in the simulated pupil of a telescope. The Hartmann sensor modeled here had 12 subapertures spanning the 1 m diameter of the telescope pupil. Hence, the subapertures had side length of d = 8.33 cm. Only subapertures completely contained by the telescope pupil were used, providing a total of 88 subapertures in the Hartmann sensor.
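The recursive (per-screen) portion of the DWFS code, described in detail in the next two paragraphs, reduces to the following runnable skeleton. Everything in it is a stand-in: the smoothed-noise screen replaces the Kolmogorov generator of Ref. [34], the point source replaces the objects of Section 4, and a perfect WFS ($\hat{H}$ = H) replaces the Hartmann/reconstruction chain of Sections 2.3 and 2.4.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_trials, K_img = 64, 200, 1.0e4

x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
pupil = ((X**2 + Y**2) <= (n // 4) ** 2).astype(float)

obj = np.zeros((n, n)); obj[n // 2, n // 2] = 1.0      # point source object
O_obj = np.fft.fft2(np.fft.ifftshift(obj))             # object spectrum

def otf(theta):                                        # Eq. (4), via FFTs
    g = np.fft.fft2(pupil * np.exp(1j * theta))
    a = np.fft.ifft2(np.abs(g) ** 2)
    return a / a[0, 0]

lowpass = np.fft.ifftshift(np.exp(-(X**2 + Y**2) / (2 * 6.0**2)))
acc = np.zeros((n, n), dtype=complex)
for _ in range(n_trials):
    w = np.fft.fft2(rng.standard_normal((n, n)))
    theta = 3.0 * np.fft.ifft2(w * lowpass).real       # stand-in phase screen
    H = otf(theta)                                     # instantaneous OTF
    img = np.clip(np.fft.ifft2(O_obj * H).real, 0, None)
    D = np.fft.fft2(rng.poisson(K_img * img / img.sum()))  # photon-limited image
    acc += D * np.conj(H)                              # perfect-WFS stand-in
acc /= n_trials                                        # numerator of Eq. (19)
```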


The recursive portion of the DWFS simulation runs for a user-specified number of independent random phase screens. For the results presented here 200 independent phase screens were used to obtain the sample-based estimates of the statistical quantities of interest. For each iteration a random phase screen was computed, placed in the pupil of the telescope, and noise free WFS measurements were calculated. A zero mean Gaussian random variable of appropriate variance, as described in Section 2.3, was added to each WFS measurement to simulate the effects of measurement noise. The instantaneous OTF, H(u), was computed by using Fourier methods to compute the autocorrelation of the generalized pupil function [2]. The noise-free image spectrum was computed by taking the pointwise product of the object spectrum and H(u). The noise-free image spectrum was used to compute a noise-free image, and a photon-limited image with a user-specified average number of photo-events was created. The spectrum of the photon-limited image D(f) was then computed. The WFS measurements were used as described in Sections 2.2 and 2.4 to compute an estimate of the pupil phase $\hat{\theta}(x)$. The phase estimate $\hat{\theta}(x)$ was used as the phase of the generalized pupil function for computing the estimated OTF $\hat{H}(u)$. The numerator of Eq. (19) was then computed for the current realizations of D(f) and $\hat{H}(u)$. Statistical quantities were accumulated to estimate the mean of the numerator of Eq. (19). After the recursive portion of the simulation is complete the statistical quantities of interest were computed from the accumulated statistical data, and stored.

The recursive portion of the SI code also runs for a user-specified number of iterations. For each iteration of the SI code the steps of computing the instantaneous OTF through computing a realization of the photon-limited image were repeated. However, in the SI code the unbiased power spectrum estimator Q(f) and the bispectrum $B(f_1, f_2)$ were computed for each realization of the photon-limited image. Statistical quantities required to compute the mean and the variance of Q(f) and $B(f_1, f_2)$ were accumulated and the appropriate data was stored at the end of each run. Data sets consisting of 200 independent phase screens were used to obtain all of the results presented here. This process was repeated for both the object of interest and a point source object corresponding to a reference star. An estimate of the object was then reconstructed using the techniques described in Section 2.6.

Input parameter values are listed in Table 1. Photo-event rates were calculated from visual magnitude assuming that the objects simulated had the same spectrum as the sun. Visual magnitudes $m_v$ of the objects were varied, with $m_v$ = 0, 4, 8 used in this study to represent a reasonable range of wavefront sensor signal levels. To calculate the average number of photo-events per integration time in the wavefront sensor and the image plane some assumptions must be made about the mean wavelengths of operation, the spectral bandwidths to be used, and the integration times. For the results presented here the mean wavelength of operation for the wavefront sensor is 600 nm, and the spectral bandwidth was 160 nm. The mean wavelength of operation for the image plane is 700 nm, with a spectral bandwidth of ±35 nm. The relatively broad bandwidth of operation for the Hartmann sensor is justified by the fact that Hartmann sensors are essentially white light devices. In the case of the image plane the relatively narrow bandwidth of operation is justified by the desire to minimize chromatic effects in the image plane by approximately satisfying the quasimonochromatic conditions [2]. The integration time for both the wavefront sensor and the image plane is 10 ms. All losses from the top of the atmosphere to the output of the detector were assumed to provide an overall efficiency of 0.5. These assumptions yield the average number of photo-events per integration time in the wavefront sensor, $K_W$, and the image plane, $K_I$, shown in Table 2.

Table 1
Input parameters

Parameter                        Values
Fried seeing parameter, $r_0$    7 cm, 10 cm, 15 cm
Visual magnitude, $m_v$          0, 4, 8
Extended object range            500 km, 700 km, 900 km

Table 2
$K_W$ and $K_I$ as a function of visual magnitude

$m_v$    $K_W$      $K_I$
0        416994     26805639
4        10472      673243
8        263        16910
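The entries of Table 2 can be roughly reproduced from the stated bandwidths, integration time, and efficiency. The zero-point flux F0 below is an assumed value (about 1e7 photons s^-1 m^-2 A^-1 at $m_v$ = 0 near these wavelengths); the paper does not give its radiometric constants, so treat this as an order-of-magnitude check only.

```python
def photo_events(m_v, bandwidth_angstrom, area_m2, t_int_s,
                 efficiency=0.5, F0=1.0e7):
    # F0: assumed m_v = 0 photon flux [photons / (s m^2 Angstrom)]; the
    # paper's exact solar-spectrum radiometry is not given.
    return F0 * 10.0 ** (-0.4 * m_v) * bandwidth_angstrom * area_m2 \
        * t_int_s * efficiency

# WFS subaperture: 160 nm = 1600 A bandwidth, (8.33 cm)^2 area, 10 ms
K_W0 = photo_events(0, 1600.0, 0.0833 ** 2, 0.01)   # ~5.5e5 vs 416994 in Table 2
```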

4. Results

In this section we present the results of this study. The phase spectrum error metrics $\epsilon^2_D(u)$ and $\epsilon^2_S(u)$, defined in Eq. (30), were evaluated as functions of $r_0$, the visual magnitude $m_v$ of the object, and in the case of the extended object, the distance between the telescope and the object.

4.1. Point source results

Phase error metric results for the simulated point source object are presented in Fig. 1. In Fig. 1 the subfigure labeled (a) is for $r_0$ = 7 cm, the subfigure labeled (b) is for $r_0$ = 10 cm, and the subfigure labeled (c) is for $r_0$ = 15 cm. In Fig. 1, $\epsilon^2_D(u)$ and $\epsilon^2_S(u)$ are plotted as a function of normalized spatial frequency, where the normalization factor for the horizontal axis is the diffraction limited cutoff angular frequency $D/\lambda$ at the wavelength used for image formation. Inspection of Fig. 1 shows that for objects of visual magnitude $m_v \leq 4$ speckle imaging and DWFS provide comparable phase spectrum estimation for normalized spatial frequencies less than approximately 0.9. However, for normalized spatial frequencies greater than 0.9 speckle imaging provides a clear performance advantage for $m_v \leq 4$. This high spatial frequency limitation of DWFS most likely derives from the inability to accurately reconstruct the high spatial frequencies in the incident turbulence-corrupted wavefront due to finite spatial sampling of the wavefront, and the use of a finite number of Zernike polynomials to reconstruct the wavefront. The $m_v$ = 8 results show that speckle imaging provides much better performance for normalized spatial frequencies greater than approximately 0.2 for all seeing cases examined. The relatively poor performance of DWFS in the $m_v$ = 8 cases is attributable to the fact that the signal to noise ratio of the wavefront sensor measurement is much lower than in the $m_v$ = 0 and $m_v$ = 4 cases.

Fig. 1. Phase error metrics $\epsilon^2_D(u)$ and $\epsilon^2_S(u)$ for a simulated point source object with visual magnitude $m_v$ = 0, 4, 8: (a) $r_0$ = 7 cm; (b) $r_0$ = 10 cm; and (c) $r_0$ = 15 cm.

4.2. Extended object results

As noted in Ref. [22], the comparative performance of DWFS and speckle imaging is object dependent. To explore the comparative phase spectrum reconstruction errors between DWFS and the bispectrum phase reconstruction technique an extended object was input to the simulations. The simulations were then run using the extended object at three different distances from the telescope, specifically, 500 km, 700 km, and 900 km, for the same set of seeing and visual magnitude conditions studied for the point source. A computer rendering of a satellite, shown in Fig. 2, was used as the extended object. The satellite object was placed in a 256 x 256 pixel array with physical side length of 12 m. To simulate the effects of varying distance, the object was scaled so that the spectral components of the Fourier transform of the sampled object and the spectral components of the sampled optical transfer function fell on the same grid in frequency space. Phase error metric results for the extended object are presented in Figs. 3, 4, and 5 for object ranges of 500 km, 700 km, and 900 km, respectively. In Figs. 3, 4, and 5 the subfigures labeled (a) are for $r_0$ = 7 cm, the subfigures labeled (b) are for $r_0$ = 10 cm, and



Fig. 2. Computer rendering of a satellite used in the extended object simulations.

the subfigures labeled (c) are for $r_0$ = 15 cm. The horizontal axis in Figs. 3, 4, and 5 is normalized to the diffraction limited cutoff frequency $D/\lambda$. The results presented in Figs. 3, 4, and 5 can be discussed in terms of the noise effective cutoff frequency $f_{eff}$, defined as the normalized spatial frequency at which $\epsilon^2_D(u)$ and $\epsilon^2_S(u)$ exceed one radian squared. For the 500 km object case shown in Fig. 3, $f_{eff} > 0.9$ for speckle imaging for all of the $m_v$ = 0, 4 cases, and $f_{eff} > 0.4$ for all of the $m_v$ = 8 cases. However, for the 500 km object case, DWFS provides $f_{eff} > 0.8$ for the $m_v$ = 0 and 4 cases, and $f_{eff} > 0.4$ for the $m_v$ = 8 cases. For the 700 km object case shown in Fig. 4, $f_{eff} > 0.9$ for speckle imaging for all of the $m_v$ = 0, 4 cases, and $f_{eff} > 0.6$ for all of the $m_v$ = 8 cases. At 700 km DWFS provides $f_{eff} > 0.8$ for the $m_v$ = 0, 4 cases, and $f_{eff} > 0.5$ for the $m_v$ = 8 cases. Finally, for the 900 km object case shown in Fig. 5 we see again that $f_{eff} \geq 0.9$ for speckle imaging for the $m_v$ = 0 and 4 cases, and $f_{eff} > 0.7$ for all of the $m_v$ = 8 cases. At 900 km DWFS provides $f_{eff} \geq 0.8$ for the $m_v$ = 0, 4 cases, and DWFS provides $f_{eff} > 0.6$ for the $m_v$ = 8 cases. Note that the general trend of increasing $f_{eff}$ as the object distance increases derives primarily from the fact that as the object distance increases the object appears more point-like. We conclude from these results that even when sufficient light levels exist in the wavefront sensor to provide good wavefront phase reconstruction, DWFS provides object phase spectrum estimates of
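The noise effective cutoff frequency used above is straightforward to extract from the radially averaged error curves. A small helper, assuming the curve is sampled on an increasing normalized-frequency grid:

```python
import numpy as np

def noise_effective_cutoff(freqs, err2):
    # First normalized spatial frequency at which the radially averaged
    # squared phase error exceeds one radian squared (Section 4.2);
    # returns the last frequency if the curve never crosses 1.
    above = np.nonzero(np.asarray(err2) > 1.0)[0]
    return freqs[above[0]] if above.size else freqs[-1]
```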

Fig. 3. Phase error metrics $\epsilon^2_D(u)$ and $\epsilon^2_S(u)$ for the satellite object at a range of 500 km with visual magnitude $m_v$ = 0, 4, 8: (a) $r_0$ = 7 cm; (b) $r_0$ = 10 cm; and (c) $r_0$ = 15 cm.


Fig. 4. Phase error metrics $\epsilon^2_D(u)$ and $\epsilon^2_S(u)$ for the satellite object at a range of 700 km with visual magnitude $m_v$ = 0, 4, 8: (a) $r_0$ = 7 cm; (b) $r_0$ = 10 cm; and (c) $r_0$ = 15 cm.

Fig. 5. Phase error metrics $\epsilon^2_D(u)$ and $\epsilon^2_S(u)$ for the satellite object at a range of 900 km with visual magnitude $m_v$ = 0, 4, 8: (a) $r_0$ = 7 cm; (b) $r_0$ = 10 cm; and (c) $r_0$ = 15 cm.


lower quality than the object phase spectrum estimates obtained from the bispectrum technique. This conclusion is consistent with the results presented in Refs. [22,23], where the signal-to-noise ratios of the unbiased speckle interferometry estimator and the DWFS estimator were compared, though neither of these references directly studied phase spectrum estimation.

5. Conclusion

We have examined the comparative object phase spectrum estimation errors provided by the bispectrum technique and DWFS. Results presented here show that for imaging bright objects under good seeing conditions the bispectrum technique provides superior phase spectrum estimates compared to the DWFS technique using least squares wavefront reconstruction. The performance advantage of the bispectrum technique increases as the object becomes dimmer, to the point where insufficient signal-to-noise ratio is obtained in the wavefront sensor for good wavefront reconstruction. One of the central problems in DWFS is photon starvation of the wavefront sensor, resulting in the high object phase spectrum estimation errors when there is not sufficient signal available in the wavefront sensor. This problem may be overcome in the future by using artificial guide stars to provide signal for the wavefront sensor [35].

Acknowledgements

The authors wish to thank the US Air Force Phillips Laboratory, Air Force Maui Optical Station, and the Maui High Performance Computer Center for supporting this work.

References

[1] F. Roddier, The effects of atmospheric turbulence in optical astronomy, in: Progress in Optics, Vol. XIX, ed. E. Wolf (North-Holland, New York, 1981).
[2] J.W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1968).
[3] J.W. Goodman, Statistical Optics (Wiley, New York, 1985).
[4] J.H. Hardy, Proc. IEEE 66 (1978) 651.
[5] J.H. Hardy, in: SPIE Proc. on Advanced Technology Optical Telescopes, Vol. 332 (1982) p. 252.
[6] A. Labeyrie, Astron. Astrophys. 6 (1970) 85.
[7] K.T. Knox and B.J. Thompson, Astrophys. J. 193 (1974) L45.
[8] A.W. Lohmann, G. Weigelt and B. Wirnitzer, Appl. Optics 22 (1983) 4028.
[9] J. Primot, G. Rousset and J.C. Fontanella, J. Opt. Soc. Am. A 7 (1990) 1589.
[10] D.L. Fried, in: SPIE Proc. on Digital Image Recovery and Synthesis, Vol. 828 (1987) p. 127.
[11] R. Lane and A. Glindemann, in: SPIE Proc. on Digital Image Recovery and Synthesis, Vol. 2029 (1993) p. 275.
[12] P. Nisenson and R. Barakat, J. Opt. Soc. Am. A 4 (1987) 2249.
[13] M.C. Roggemann, Comp. Elec. Eng. 18 (1992) 451.
[14] M.C. Roggemann and C.L. Matson, J. Opt. Soc. Am. A 9 (1992) 1525.
[15] M.C. Roggemann, Appl. Optics 30 (1991) 4227.
[16] M.C. Roggemann, B.M. Welsh and J. Devey, Appl. Optics 33 (1994) 5754.
[17] K.T. Knox, J. Opt. Soc. Am. 66 (1976) 1236.
[18] M.J. Northcott, G.R. Ayers and J.C. Dainty, J. Opt. Soc. Am. A 5 (1988) 986.
[19] J.C. Dainty and A.H. Greenaway, J. Opt. Soc. Am. 69 (1979) 786.
[20] T. Nakajima, J. Opt. Soc. Am. A 5 (1988) 1477.
[21] G.R. Ayers, M.J. Northcott and J.C. Dainty, J. Opt. Soc. Am. A 5 (1988) 963.
[22] M.C. Roggemann and B.M. Welsh, Appl. Optics 33 (1994) 5400.
[23] B.M. Welsh and M.C. Roggemann, in: SPIE Proc. on Image Reconstruction and Restoration, Vol. 2302 (1994) p. 281.
[24] B.M. Welsh, B.L. Ellerbroek, M.C. Roggemann and T.L. Pennington, Appl. Optics 34 (1995) 4186.
[25] E.P. Wallner, J. Opt. Soc. Am. 73 (1983) 1771.
[26] B.M. Welsh and C.S. Gardner, J. Opt. Soc. Am. A 6 (1989) 1913.
[27] G.R. Heidbreder, IEEE Trans. Ant. Prop. AP-15 (1967) 90.
[28] J.D. Gonglewski, D.G. Voelz, J.S. Fender, D.C. Dayton, B.K. Spielbusch and R.E. Pierson, Appl. Optics 29 (1990) 4527.
[29] B.M. Welsh and R.N. VonNiederhausern, in: SPIE Proc. on Atmospheric Propagation and Remote Sensing, Vol. 1688 (1992) p. 572.
[30] R.J. Noll, J. Opt. Soc. Am. 66 (1976) 207.
[31] Guang-ming Dai, Theoretical studies and computer simulations of post-detection atmospheric turbulence compensation, Lund Observatory, Lund University, Lund, Sweden, 1995.
[32] C.L. Matson, I.A. DeLarue, T.M. Gray and I.E. Drunzer, Comp. Elec. Eng. 18 (1992) 485.
[33] M.C. Roggemann and J.A. Meinhardt, J. Opt. Soc. Am. A 10 (1993) 1996.
[34] G. Cochran, Phase screen generation, Technical report, The Optical Sciences Company, Placentia, CA, 1985.
[35] R.Q. Fugate, D.L. Fried, G.A. Ameer, B.R. Boeke, S.L. Browne, P.H. Roberts, R.E. Ruane, G.A. Tyler and L.M. Wopat, Nature 353 (1991) 144.