COMPUTER "GRAPHICS AND IMAGE PROCESSING
12, 173-186 (1980)
Nonstationary Statistical Image Models (and Their Application to Image Data Compression)*

B. R. HUNT

Systems Engineering Department and Optical Sciences Center, University of Arizona, Tucson, Arizona 85721

* This work was performed under the sponsorship of the U.S. Air Force Office of Scientific Research under Grant AFOSR-76-3024.
1. INTRODUCTION

It may not be immediately obvious, but statistical image models are frequently employed in some current procedures of digital image processing. The models are not usually made explicit, but are made implicit by the adoption of assumptions that incorporate certain model assumptions within them. To illustrate, consider the number of algorithms which employ the assumption that the image can be treated as a random process with wide-sense stationary properties. For example, image deblurring with a minimum-mean-square-error (MMSE) filter uses the wide-sense stationary assumption to derive the structure of the deblurring filter [1]. The parameters of a wide-sense stationary image model are readily specified, being the mean and autocorrelation function.

Some may object to the use of the term "model" to refer to the wide-sense stationary assumption and the associated mean and autocorrelation parameters. Nonetheless, it does satisfy most of the properties found in a textbook definition of the word "model," and we will consider the wide-sense stationary description of an image as a "model." Perhaps part of the objection to calling the wide-sense stationary description a "model" is that it is so extremely weak. This is a legitimate complaint and is at the root of our motivation for the work discussed herein. What is of interest is making this model less weak.

Little has been done in analyzing or processing images on the basis of assumptions other than a stationary process. What has been done has shown that favorable results come from abandoning the stationary process assumption. For example, Trussell developed an image deblurring algorithm that negated the assumed constancy of signal-to-noise ratio (SNR) throughout the image, and showed that superior deblurred images are produced [2]. Adaptive DPCM for image data compression allows the coefficients of the optimum predictor to vary within the image, and superior performance in data compression results [3]. In almost any situation, the virtue of abandoning the statistically stationary image
model can be understood if one recalls what an image that is truly stationary looks like. The most convenient example of a statistically stationary image is to tune a television set to a channel where no station is broadcasting! What is remarkable is that algorithms incorporating the stationary assumption produce usable results. Given that they work in the presence of clearly nonstationary data, we find the motivation for the ideas discussed in the remainder of this paper: to find transformations of a nonstationary image that will yield an image which satisfies the stationary model assumptions. We would then process the image by a stationary-model algorithm and perform a transformation which is inverse to the original transformation to recreate the image.

2. CONVENTIONAL STATISTICAL IMAGE MODELS

The conventional statistical image model is simply stated. Let f(x, y) be an image. Then the image is assumed to be described by the mean and autocorrelation statistics

μ = E[f(x, y)],   (1)
R(ξ, η) = E[f(x, y) f(x + ξ, y + η)],

where E denotes ensemble expectation. Occasionally the autocorrelation statistic will be replaced by the autocovariance

Γ(ξ, η) = E[f(x, y) f(x + ξ, y + η)] - μ²,   (2)
which is readily related to the autocorrelation.

Exactly how greatly an image can violate the conventional stationary assumptions is easily demonstrated. Given any image, a simple exercise is to calculate the mean of the image in blocks of n × n pixels. Ignoring the correlation between pixels, we would approximate the value of the mean in each block as a Gaussian random variable, whose variance is a function of the variance of the original pixels and the number of pixels, n². If the image were stationary, then paired-comparison statistical tests of the means would show that no mean in any one block was statistically different from any of its neighbors (except for type I errors at the level of significance of the test, α). Performing this little experiment on any image invariably leads to failure. Almost all of the means in the blocks are statistically different from the others, even for very large values of the block size n. The reason why this occurs is obvious if one creates an image out of the means. Figure 1 is an image of size 128 × 128 (interpolated bilinearly to 512 × 512) and Fig. 2 is an image consisting of the means of Fig. 1 computed in 5 × 5 blocks (with the means of the 25 × 25 resulting data array bilinearly interpolated to 512 × 512). It is obvious that Fig. 2 is only a low-pass version of Fig. 1, with a recognizable relationship to Fig. 1. In local regions of the image the mean varies greatly, because the original image possessed great variations in the local value of optical intensity. Unfortunately this behavior is found in most images of any interest.
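The block-means experiment just described is easy to reproduce numerically. The following is a minimal sketch, assuming a 5 × 5 block size as in the text and the Gaussian approximation for the block means; the random test array is only a placeholder for a real image, for which the rejection rate is far above the nominal level of significance.

```python
import numpy as np

def block_means(img, n):
    """Mean of each n x n block (edge pixels that do not fill a block are dropped)."""
    h, w = (img.shape[0] // n) * n, (img.shape[1] // n) * n
    return img[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

def neighbor_z_scores(img, n):
    """z statistics comparing each block mean with its right-hand neighbor,
    treating block means as Gaussian with variance sigma^2 / n^2 (the
    correlation between pixels is ignored, as in the text)."""
    m = block_means(img.astype(np.float64), n)
    var_of_mean = img.var() / n ** 2
    return (m[:, :-1] - m[:, 1:]) / np.sqrt(2.0 * var_of_mean)

# Under stationarity roughly 5% of |z| values should exceed 1.96 (alpha = 0.05);
# for real imagery the fraction is far larger.
img = np.random.rand(128, 128)   # placeholder only; substitute a real image
z = neighbor_z_scores(img, 5)
print("fraction of neighboring block means that differ:", np.mean(np.abs(z) > 1.96))
```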
FIG. 1. Original image.

The local variation of the image mean is also seen in higher-order statistics of an image, e.g., the autocorrelation. Breaking the image up into blocks of size n × n and calculating the autocorrelation function within each block yields autocorrelation functions which change from block to block. The fact of locally changing autocorrelations is a prime reason for the development of block-adaptive methods of image data compression [3].

The failure of statistical stationary behavior for an image is also associated with another invalid assumption: the ergodic hypothesis. The derivation of the MMSE filter, for example, rests on the assumption of ensemble expectations to construct the correlation functions. However, an "ensemble of images," in the precise definition of the term, rarely exists. Common practice is to take the limited image sample (typically one image!) and carry out space-domain correlation function calculations. The correlation functions so calculated are then assumed
FIG. 2. Locally varying mean image.
to be valid in the MMSE filter. This is the essence of the ergodic hypothesis, interchanging space and ensemble averages. It is clearly invalidated by variability of the image statistics in space.

3. NONSTATIONARY STATISTICAL IMAGE MODELS
The development of nonstationary image models can be characterized through cases of increasing difficulty. The ranking we propose for the nonstationary models would be as follows:

Case 1: Nonstationary mean, stationary autocorrelation.
Case 2: Stationary mean, nonstationary autocorrelation.
Case 3: Both mean and autocorrelation nonstationary.

In symbols, the three cases would be characterized as

Case 1: mean = μ_N(x, y),   autocorrelation = R_N(ξ, η);   (3)

Case 2: mean = μ_N,   autocorrelation = R_N(x, y, ξ, η);   (4)

Case 3: mean = μ_N(x, y),   autocorrelation = R_N(x, y, ξ, η);   (5)
where, as in Eqs. (1) and (2), (ξ, η) are lag variables in the correlation and (x, y) are coordinates in the image f. The symbolism in each case above includes the coordinates (x, y) to indicate that the specific statistic changes according to the image coordinates (x, y) of the neighborhood N in which the statistic is being calculated. Note the emphasis on the concept of calculating a statistic in a neighborhood. This is to emphasize that nonstationary statistics are defined in local neighborhoods of the points of an image. Since the breakdown of stationarity includes the loss of the ergodic assumption, it is also necessary to specify the statistics in terms of spatial averages rather than ensemble averages. Thus, the neighborhood N defines the region over which spatial averaging takes place, and it is for this reason that we include the neighborhood N as part of the symbolism. Thus, in spatial averages we have
μ_N(x, y) = ∫∫_N f(x, y) dx dy,   (6)

R_N(x, y, ξ, η) = ∫∫_N f(x, y) f(x + ξ, y + η) dx dy.   (7)
For cases of stationary behavior, of course,

μ_N = μ_N(x, y),   R_N(ξ, η) = R_N(x, y, ξ, η).   (8)
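As an illustration of the neighborhood statistics of Eqs. (6)-(7), the sketch below computes a local mean and a few autocorrelation samples in a square window; the window size and the set of lags are arbitrary choices, and the spatial averages are normalized by the number of pixels in N.

```python
import numpy as np

def neighborhood_stats(f, x, y, half, lags=((1, 0), (0, 1), (1, 1))):
    """Spatial-average estimates of mu_N and R_N (Eqs. (6)-(7)) over a square
    neighborhood N of side 2*half+1 centered at (x, y); (x, y) is assumed to
    lie far enough from the image border that N fits inside the image."""
    N = f[x - half:x + half + 1, y - half:y + half + 1].astype(np.float64)
    mu_N = N.mean()
    R_N = {(0, 0): np.mean(N * N)}
    for dx, dy in lags:
        R_N[(dx, dy)] = np.mean(N[:N.shape[0] - dx, :N.shape[1] - dy] * N[dx:, dy:])
    return mu_N, R_N

f = np.random.rand(256, 256)     # placeholder image
mu, R = neighborhood_stats(f, 128, 128, half=8)
print(mu, R[(0, 0)], R[(1, 0)])
```

Tabulating these quantities over many neighborhood centers gives exactly the space-varying statistics of Cases 1-3.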
FIG. 3. Change in width of correlation function.

The mean being a single number, the breakdown of stationary behavior is readily described by tabulating the value of μ_N for each (x, y) coordinate pair of interest. However, since the autocorrelation is a function, the breakdown of stationarity is associated with the ways in which the correlation function itself may change for each (x, y) coordinate pair of interest. There are three principal attributes of the correlation function which can vary with location. The three attributes which we identify as capable of being space-variant are the following:

(1) Energy. The correlation function at ξ = η = 0 is (by definition) the mean-square energy of the image within the neighborhood N, i.e.,
R_N(x, y, 0, 0) = σ_N²(x, y) + μ_N²(x, y),   (9)
where σ_N² is the variance of the neighborhood. The mean-square energy establishes the vertical scale of the correlation function.

(2) Width. The correlation function may change in width as a function of (x, y), which we illustrate in Fig. 3 with a one-dimensional plot along a single variable (ξ) for two neighborhoods.

(3) Shape. The correlation function may change in shape as a function of (x, y), which we illustrate in Fig. 4. Note that the widths at half-maximum are equal, ω₁ = ω₂, even though there are distinct differences in the shapes of the correlation functions.

Variation in image space of any of these three attributes would be sufficient to invalidate assumptions of stationary behavior. In the next section we consider methods of transforming an image so that the attributes become stationary. We note that the choice of attributes for space variability is limited by our acceptance of stationary behavior as exemplified by "wide-sense" stationarity,
FIG. 4. Change in shape of correlation function.
i.e., the first two statistical moments of the random process. True stationarity involves all moments of a process, and would necessitate an infinite set of attributes changing in image space. Note that even with the three attributes chosen above it is possible to construct subattributes. For example, "shape" could be decomposed into subattributes such as monotonicity, convexity, and compact versus extended shapes. We have not done so because of the prospects for normalizing shape, as we discuss in the next section.

4. TRANSFORMATION TO STATIONARY BEHAVIOR

For the obvious reasons of mathematical tractability, it is desirable to employ stationary image models. Thus, we are led to inquire about the prospects of transforming a nonstationary image model into a stationary one. Suppose that Case 1 of the previous section prevails, i.e., stationary autocorrelation but nonstationary mean. The transformation
f_N(x, y) = f(x, y) - μ_N(x, y)   (10)
will create an image which has a stationary (zero) mean with respect to the neighborhood N. Choice of a different neighborhood would require that a different mean be subtracted. A key question in the transformation of Eq. (10) is the relative invariance of the operation to the specific choice of N. Obviously, the invariance is dependent upon the size and shape of the neighborhood N and the specific image f as well; this is a topic which merits research.

Consider Case 2 discussed above, stationary mean and nonstationary autocorrelation. The breakdown in stationarity may be any one (or all) of the three phenomena discussed immediately above. A transformation to produce stationary behavior of the energy is straightforward:
f_N^(1)(x, y) = f(x, y) / R_N(x, y, 0, 0),   (11)
which yields an image normalized to unit energy with respect to the neighborhood N. Treatment of the other two possibilities for variation in the autocorrelation is not so simple.
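A minimal digital sketch of these two normalizations is given below. The window size and the small guard constant are arbitrary; the divisor is taken here as the square root of the local mean-square energy so that the output has approximately unit energy in each neighborhood, which is one way of realizing the normalization intended by Eq. (11).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(f, size=9, eps=1e-8):
    """Remove the local mean as in Eq. (10), then divide by the square root of
    the local mean-square energy R_N(x, y, 0, 0), in the spirit of Eq. (11)."""
    f = f.astype(np.float64)
    mu_N = uniform_filter(f, size=size)           # mu_N(x, y)
    energy_N = uniform_filter(f * f, size=size)   # R_N(x, y, 0, 0)
    return (f - mu_N) / np.sqrt(energy_N + eps), mu_N

f = np.random.rand(256, 256)                      # placeholder image
f_stationary, mu_N = local_normalize(f)
```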
The most difficult case is the third above, i.e., where the autocorrelation varies in shape. In principle, any autocorrelation shape can be synthesized, using the power spectrum relations:

Φ_g(ω_x, ω_y) = |H(ω_x, ω_y)|² Φ_f(ω_x, ω_y),   (12)
where g, h, and f are related as

g(x, y) = h(x, y) ** f(x, y),   (13)
with h being the point-spread function of a filter. Φ_g and Φ_f are the power spectra of the associated processes, i.e., the Fourier transforms of the associated autocorrelation functions. The utilization of Eq. (12) is straightforward. If Φ_g represents the Fourier transform of the "standardized" autocorrelation function, which must be uniform throughout the image, then a digital filter which produces the
standardized correlation can be constructed from the relation
|H(ω_x, ω_y)|² = Φ_g(ω_x, ω_y) / Φ_f(ω_x, ω_y).   (14)
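Purely for illustration, a sketch of applying Eq. (14) digitally to a local patch follows; the target spectrum, the periodogram estimate of Φ_f, and the small guard term are all assumptions of convenience, and the practical caveats discussed in the next paragraph apply to this sketch as well.

```python
import numpy as np

def standardize_spectrum(patch, phi_g, eps=1e-6):
    """Zero-phase filter a local patch so that its power spectrum is pushed
    toward a chosen 'standard' spectrum phi_g, using |H|^2 = phi_g / phi_f
    (Eq. (14)); phi_f is estimated crudely by the patch periodogram."""
    F = np.fft.fft2(patch - patch.mean())
    phi_f = np.abs(F) ** 2
    H = np.sqrt(phi_g / (phi_f + eps))    # eps guards the quotient in Eq. (14)
    return np.real(np.fft.ifft2(F * H))

n = 64
u = np.fft.fftfreq(n)[:, None]
v = np.fft.fftfreq(n)[None, :]
phi_g = 1.0 / (1.0 + 400.0 * (u ** 2 + v ** 2))   # hypothetical standard spectrum
patch = np.random.rand(n, n)                       # placeholder patch
g = standardize_spectrum(patch, phi_g)
```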
Equation (14) provides the theoretical grounds for "standardizing" the autocorrelation shapes.

Note that in the previous paragraph we stated that "in principle" the above relations provide the "theoretical" grounds for transforming an image to a common autocorrelation throughout the frame. In practice we believe the prospects for using Eq. (14) are limited, for the following reasons. First, the quotient in (14) need not be well behaved; if Φ_g is more wide-band than Φ_f, then small denominator values can lead to gross amplification of any noise. A solution to this problem is to make the standard Φ_g more narrow-band than any spatial segment of the image. Since this implies low-pass filtering and loss of image detail, such a solution is considered to be unacceptable. Second, a transformation such as the above must be invertible (as we shall see below), and inversion of filters on images falls in the class of problems known as image restoration. This, in general, is a very difficult class of problems, and to willingly induce a transformation of this sort should be considered unwise [1].

Fortunately, there is reason to believe that the transformation to a standard autocorrelation/power-spectrum shape is unnecessary. Considerable study of imagery for the purpose of adaptive data compression has shown that an image can be successfully treated as a single random process/autocorrelation model, with the parameters of the model varying in the image space. For example, adaptive DPCM is a successful data compression scheme because of the tendency of the image to fit a simple Markov process model (first- to third-order Markov), where the process parameters change within the image [3]. Therefore, we will assume that the necessity to normalize the autocorrelation function is not present. Instead, we will assume a simple first-order Markov model for the image process. The autocorrelation function of such a model is given as

R(ξ, η) = σ² exp(-ρ(ξ² + η²)^½),   (15)
which is a form chosen for rotational symmetry, i.e., no spatially preferential directions, and is not spatially separable. The assumed model for the image autocorrelation gives the space-variant form

R_N(x, y, ξ, η) = σ_N²(x, y) exp(-ρ_N(x, y)(ξ² + η²)^½),   (16)
in which the parameters varying with respect to the neighborhood N are the variance σ_N²(x, y) and the correlation parameter ρ_N(x, y). Since we have discussed above a transformation to normalize energy (and hence variance), we can assume the simpler form
R_N(x, y, ξ, η) = exp(-ρ_N(x, y)(ξ² + η²)^½),   (17)
which leaves only the correlation parameter to vary with the neighborhood N and location (x, y) of N. Clearly, we are assuming a case in which the correlation
shape remains constant, with correlation width varying in space. (See the previous section.) Under these assumptions, the problem of a transformation to render the image stationary becomes simple. One possibility is, of course, the filtering posed in Eq. (14). This we consider unacceptable, for the reason discussed above. Instead, we note the following. Since the correlation shape is assumed constant, differences such as those in Fig. 3 can be equated through a scale factor. That is, the widths ω₁ and ω₂ in Fig. 3 are related as
ω₁ = aω₂.   (18)
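A minimal sketch of estimating the correlation parameter in a neighborhood, assuming the exponential model of Eq. (17), is shown below. The lag-1 estimator and the clamp on the logarithm are choices of convenience; under the model the correlation width scales as 1/ρ_N, which gives the scale factor of Eq. (18).

```python
import numpy as np

def local_rho(f, x, y, half):
    """Estimate rho_N of Eq. (17) in a neighborhood from the normalized
    lag-1 correlation, using R(1, 0) ~ exp(-rho_N)."""
    N = f[x - half:x + half + 1, y - half:y + half + 1].astype(np.float64)
    N = N - N.mean()
    r1 = np.mean(N[:-1, :] * N[1:, :]) / np.mean(N * N)
    return -np.log(max(r1, 1e-6))    # clamp so the log stays finite

f = np.random.rand(256, 256)         # placeholder image
rho_1 = local_rho(f, 64, 64, half=8)
rho_2 = local_rho(f, 192, 192, half=8)
a = rho_2 / rho_1                    # widths scale as 1/rho, so omega_1 = a * omega_2
print(rho_1, rho_2, a)
```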
A resampling (interpolation) of the image data to incorporate the scale factor a makes the correlations of equal width and produces stationary statistics.

We can now describe a transformation, given a first-order Markov model, to produce stationary image statistics. For neighborhoods of size N, calculate autocorrelation functions and estimate the correlation parameters ρ_N for the neighborhoods. The changes in ρ_N are then used as control points in a least-squares fit for a spatial-warp polynomial [3]. The transformed image is obtained by interpolating the original image via the warp to produce an image with constant ρ parameter throughout.

5. A SYSTEM FOR PROCESSING NONSTATIONARY IMAGES

Figure 5 is a schematic diagram of a system that will carry out the processing of nonstationary images.

FIG. 5. A system for processing nonstationary images.

The system incorporates the model of a nonstationary mean and nonstationary autocorrelation, with the nonstationary autocorrelation model being that of Eq. (17). The first stage of the model measures the spatial variation in model parameters. Using the spatial variation measurements, the successive stages correct for the nonstationary mean and autocorrelation. The nonstationary autocorrelation is corrected by a spatial warp. Following these corrections, the image is processed, by a given algorithm such as deblurring or data compression, using stationary statistical assumptions. Then the transformations which created the stationary image are inverted, i.e., inverse spatial warp and reinsertion of the nonstationary mean.
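The skeleton below sketches the wrap-around structure of Fig. 5 in the simplest possible form: forward transformation, stationary-model processing, inverse transformation. The spatial warp and inverse warp stages are omitted for brevity, the window size is arbitrary, and the stationary-model core is left as a stand-in.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def process_nonstationary(f, core, size=9):
    """Skeleton of the system in Fig. 5: remove the space-varying mean, apply
    a stationary-model algorithm 'core', then reinsert the mean. The spatial
    warp / inverse-warp stages are omitted here for brevity."""
    f = f.astype(np.float64)
    mu_N = uniform_filter(f, size=size)     # measured space-varying mean
    g = core(f - mu_N)                      # stationary-model processing
    return g + mu_N                         # invert the transformation

f = np.random.rand(256, 256)
out = process_nonstationary(f, core=lambda x: x)   # identity core as a stand-in
```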
6. EXAMPLES OF APPLICATIONS OF NONSTATIONARY MODELS

In the following we will present two different examples of the application of a nonstationary image model to particular image processing problems.

The first problem we consider is that of image restoration. Image restoration under optimum conditions usually incorporates assumptions that require stationary image statistics. For example, the most common image restoration method is the minimum-mean-square-error (MMSE) or Wiener filter. The filter is derived under an image formation model

g(x, y) = ∫∫_{-∞}^{∞} h(x - x₁, y - y₁) f(x₁, y₁) dx₁ dy₁ + n(x, y)   (19)
and is based upon the MMSE criterion

minimize_{l(x, y)}  E{[f(x, y) - f̂(x, y)]²},

where

f̂(x, y) = ∫∫_{-∞}^{∞} l(x - x₁, y - y₁) g(x₁, y₁) dx₁ dy₁.   (20)
The solution of this problem yields the Fourier-domain description of the MMSE filter as

L(ω_x, ω_y) = H*(ω_x, ω_y) / (|H(ω_x, ω_y)|² + Φ_n(ω_x, ω_y)/Φ_f(ω_x, ω_y)),   (21)

where Φ_n and Φ_f are the power spectra of noise and signal, respectively.
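A short Fourier-domain sketch of Eq. (21) follows; it assumes circular convolution, a known PSF, and a scalar noise-to-signal ratio standing in for the spectral ratio Φ_n/Φ_f, all of which are simplifications of the filter as stated.

```python
import numpy as np

def wiener_restore(g, h, nsr):
    """MMSE (Wiener) restoration in the Fourier domain, per Eq. (21), assuming
    circular convolution, a known PSF h, and a scalar noise-to-signal power
    ratio nsr standing in for Phi_n / Phi_f."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    L = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(L * G))

g = np.random.rand(128, 128)          # placeholder degraded image
h = np.ones((5, 5)) / 25.0            # hypothetical uniform blur PSF
f_hat = wiener_restore(g, h, nsr=0.01)
```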
The power spectra are defined, of course, only under spatially stationary statistical assumptions, and these are the assumptions required to solve the problem in the form of a convolution as in Eq. (20).

The solution of the problem in the form above requires only a simple model: the image formation model of Eq. (19) plus spatially stationary statistics. In particular, no assumption concerning the distribution of amplitude values of the image has been made. By making amplitude distribution assumptions we can significantly expand the sophistication of the image restoration model. First, we represent the image formation model in terms of discrete operations. If we assume the image is sampled (at the appropriate Nyquist rate), then it is direct to show that the image formation model of Eq. (19) can be expressed as a vector-matrix operation

g = [H]f + n,   (22)
where the matrix [H] has special structure, e.g., block Toeplitz [1]. Equation (22) is a sampled form of a linear image formation model. It is possible to relax the assumption of linearity in the case of image sensors which are nonlinear (e.g., photographic film). A model for image formation and recording by a nonlinear sensor thus has the form
g = s{[H]f} + n,   (23)
where s{ } is a point nonlinearity, such as the D-log E curve of a photographic film [1]. We have greatly complicated our image formation model.
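To make the forward model of Eq. (23) concrete, the sketch below simulates it with a convolution for [H] and a logarithmic point nonlinearity standing in for a film characteristic; the PSF, noise level, and nonlinearity are all hypothetical choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def form_image(f, h, s=np.log1p, noise_sigma=0.01, rng=None):
    """Simulate the nonlinear formation model of Eq. (23), g = s{[H]f} + n,
    with [H] realized as a convolution and s a point nonlinearity (log1p is
    only a stand-in for a film D-log E characteristic)."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve(f, h, mode="reflect")          # [H]f
    return s(blurred) + noise_sigma * rng.standard_normal(f.shape)

f = np.random.rand(128, 128)
h = np.ones((5, 5)) / 25.0                            # hypothetical blur PSF
g = form_image(f, h)
```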
We now introduce an amplitude statistics model and allow for a major nonstationary model component. We will assume that the image statistics of the vector f can be modeled by a multivariate Gaussian probability density function

p(f) = K exp(-(f - f̄)^T [R_f]^(-1) (f - f̄)),   (24)
where K is the standard normalizing constant, [R_f] is the covariance matrix of the multivariate image process, and f̄ is the nonstationary mean of the image. The covariance matrix [R_f] can be readily related to the correlation function of the process [1], under the assumption that f is stationary in correlation although not in the mean. By adopting a solution criterion of finding the vector which maximizes the posterior probability density constructed from the prior density of Eq. (24), it is possible to construct an iterative solution to the restoration problem of Eq. (23). See [4] for details and examples of solutions. We note that the model of Eq. (23), incorporating nonlinear image formation/recording processes, cannot be directly solved in an optimum fashion (e.g., maximum a posteriori probability). It is the introduction of our Gaussian statistical model, with the nonstationary mean, which makes possible an optimum solution of the nonlinear problem.

It is relatively simple to construct the nonstationary mean f̄ in Eq. (24). Since the basic behavior of the nonstationary mean is to change with spatial location, an estimate of the nonstationary mean can be constructed by forming a local average, e.g., convolving the sampled image f with the sampled point-spread function of a low-pass filter:

f̄ = [L]f,   (25)

where [L] is the matrix description for the convolution with the low-pass filter. It is legitimate to inquire whether the simple process of Eq. (25) constructs a mean that has any relation to Gaussian assumptions. The following experiment is relevant. Construct an image which is the difference between the image and the nonstationary mean f̄ constructed as in Eq. (25), i.e.,

d = f - f̄.   (26)
Then tabulate the amplitude histogram of the difference image d. The amplitude histogram is surprisingly close to a Gaussian density function, as demonstrated in [5], for virtually any type of image. Although the model of a Gaussian density for image amplitude statistics is convenient to simplify the mathematics and obtain a solution for Eq. (23), it is not a gross distortion for amplitude statistics of difference images constructed as in Eq. (26). Thus, the assumed Gaussian model, with nonstationary mean, is useful and reflects something actually observed in experiments with real data.
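The experiment is easy to repeat numerically; the sketch below uses a box low-pass filter of arbitrary size for [L] and compares the histogram of d with a zero-mean Gaussian of the same variance. The random array is only a placeholder: the near-Gaussian behavior cited above is reported for real imagery.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def difference_histogram(f, size=15, bins=64):
    """Form f_bar = [L]f with a simple box low-pass filter (Eq. (25)), take
    d = f - f_bar (Eq. (26)), and return the histogram of d together with a
    Gaussian of the same variance for comparison."""
    f = f.astype(np.float64)
    f_bar = uniform_filter(f, size=size)
    d = f - f_bar
    hist, edges = np.histogram(d, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    sigma = d.std()
    gauss = np.exp(-0.5 * (centers / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return centers, hist, gauss

f = np.random.rand(256, 256)   # placeholder; substitute a real image
centers, hist, gauss = difference_histogram(f)
```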
The above example reflects a simple image model for the problem of image restoration. We now consider another common problem in image processing: image data compression. The importance of nonstationary image statistics has long been recognized in image data compression. The most powerful image data compression schemes are those known as adaptive, e.g., adaptive transform coding or adaptive DPCM coding. Adaptive compression schemes have the property of being able to vary the important parameters of the compression algorithm as a function of localized behavior within the image. For example, an adaptive transform coder will calculate the image transform in a given block, say 16 × 16 pixels, and then vary the assignment of code bits to the transform values as a function of the image structure within the block [3].

The general structure of the block diagram in Fig. 5 is suited for an image data compression system. The middle block in the diagram would be replaced by a transform or DPCM data compression system which had constant operating parameters, e.g., was spatially nonadaptive. Since the general structure of Fig. 5 is suited for an adaptive data compression system, we wish to examine a particular implementation. It is straightforward to envision hybrid digital/optical hardware that can implement the various stages in the block diagram at extremely high data rates. This is most advantageous since the spatial warp operation can be a very costly process when implemented by digital computation.

We consider first the box in Fig. 5 which derives all the critical information for the remainder of the system, i.e., the box with the function "measure space-varying parameters." Two parameters are the outputs of this box: the space-varying mean and the space-varying autocorrelation width. Both of these quantities must be measured in local image regions. Figure 6 shows the schematic outline of an optical system which can do this.

FIG. 6. Schematic for hybrid digital/optical system to measure space-varying parameters.

The incoming image is intercepted by a mask. This is an electrooptic device capable of being spatially programmed for either full or zero transmission, for example, a Hughes liquid crystal or PLZT crystal. An aperture of full transmission is written onto the mask at the position in the image plane where it is desired to take a space-varying measurement; the rest of the mask is written at zero transmission. The beam splitter diverts a portion of the masked image to a photodiode, and the output of the photodiode is, by definition, the integrated or mean image intensity over the region selected by the mask.

To measure the other parameter, which is autocorrelation width, we use the Fourier transform properties of coherently illuminated lenses [6]. Since the
input scene will almost always be observed in incoherent illumination, it is necessary to convert the masked portion of the input image from an incoherent to a coherent field. This can be done with a device such as the Hughes liquid crystal. A transform lens calculates the Fourier transform, which is sensed by a discrete array detector. The detector responds to intensity, i.e., the detector output is

F_D(ω_x, ω_y) = F(ω_x, ω_y) F*(ω_x, ω_y),   (27)

where F is the complex field amplitude and F_D is the detected transform. The detector thus observes the Fourier transform of the autocorrelation function. Since we have assumed a first-order Markov model for the autocorrelation (see Eq. (17)), the width of the Fourier spectrum observed on the detector is directly related to the parameter ρ_N in Eq. (17) [7]. Thus, the controller can use the measured data from the detector to calculate the space-varying autocorrelation width.

With the space-variant measurements completed, the next step is to carry out the operations which adjust the space-variant image properties. First, we cannot directly subtract the space-variant mean, because an optical system such as described here could not deal with any negative light values which could arise. Instead we add to the image a quantity which is the difference between the space-variant mean and a bias sufficient to create everywhere-positive light. Thus, the output of this adjustment would be

f_N(x, y) = f(x, y) - μ_N(x, y) + b,   (28)
which differs from the original in Eq. (10) only by the bias. Obviously, we would implement the processing by adding the space-variant bias:

f_N(x, y) = f(x, y) + b_N(x, y),   (29)
where

b_N(x, y) = b - μ_N(x, y).   (30)
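A digital analogue of the spectral-width measurement of Eq. (27) can be sketched as follows; the masked region, the RMS-radius summary of the spectrum, and the neighborhood size are assumptions of this illustration rather than features of the optical hardware described above.

```python
import numpy as np

def spectral_width(f, x, y, half):
    """Digital analogue of the measurement of Eq. (27): squared magnitude of
    the Fourier transform of a masked region, summarized by an RMS radius.
    A wider spectrum corresponds to a narrower autocorrelation (larger rho_N)."""
    N = f[x - half:x + half + 1, y - half:y + half + 1].astype(np.float64)
    N = N - N.mean()
    FD = np.abs(np.fft.fftshift(np.fft.fft2(N))) ** 2   # F F*, as in Eq. (27)
    n = N.shape[0]
    u = np.arange(n) - n // 2
    U, V = np.meshgrid(u, u, indexing="ij")
    return np.sqrt(np.sum((U ** 2 + V ** 2) * FD) / np.sum(FD))

f = np.random.rand(256, 256)   # placeholder image
print(spectral_width(f, 128, 128, half=16))
```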
Finally, the spatial warp must be implemented. Warping is literally a "rubber-sheet" transformation. We conceive of the image as being on a sheet of rubber and find a mapping of coordinates to distort the sheet in some desired way. Let the coordinates of the original image be x, y and the coordinates of the warped image be x', y'. Then we assume a polynomial coordinate warp [3]:

x' = a₀ + a₁x + a₂y + a₃x² + a₄y² + a₅xy + ⋯,   (31)
y' = b₀ + b₁x + b₂y + b₃x² + b₄y² + b₅xy + ⋯.
The problem is to specify the polynomial order and the coefficients. Current practice has shown that a third-order polynomial is usually sufficient, and the coefficients can be calculated by least-squares techniques [3]. The calculation of least-squares coefficients requires control-point pairs, i.e., pairs of points in the original and warped images which correspond to the same pixel. This is simple for our problem, since we only wish to change image scales to make the autocorrelation length constant in the warped image. Thus, we would make a two-dimensional map of the autocorrelation width, and from the map determine control-point pairs.
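The least-squares fit is a small linear problem; the sketch below fits only the second-order terms written out in Eqs. (31) (the text notes that third order is usually sufficient), and the control points and the uniform rescale are purely hypothetical stand-ins for values derived from an autocorrelation-width map.

```python
import numpy as np

def fit_warp(src, dst):
    """Least-squares fit of the coefficients of Eqs. (31) from control-point
    pairs (x, y) -> (x', y'); only second-order terms are used here."""
    x, y = src[:, 0], src[:, 1]
    A = np.stack([np.ones_like(x), x, y, x * x, y * y, x * y], axis=1)
    a, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return a, b                      # a_0..a_5 and b_0..b_5

def apply_warp(x, y, a, b):
    A = np.stack([np.ones_like(x), x, y, x * x, y * y, x * y], axis=1)
    return A @ a, A @ b

# hypothetical control points; a uniform 10% rescale stands in for a real width map
src = np.array([[0, 0], [0, 100], [100, 0], [100, 100], [50, 50], [25, 75]], dtype=float)
dst = 1.1 * src
a, b = fit_warp(src, dst)
xp, yp = apply_warp(src[:, 0], src[:, 1], a, b)
```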
For example, the variation in correlation length could be mapped to a constant value equal to the largest correlation length; such a choice would require interpolating extra pixels into image regions which possessed a smaller correlation length.

Digital computer implementation of the warp is a very costly process due to the interpolation calculations and the extensive I/O requirements resulting from the distortion of the image space. However, it can be implemented very easily by electrooptics. Interpolation is only a weighted sum of pixels in a region, and a flying-spot scan of an image plane can interpolate if the spot profile (apodization profile) is equal to the interpolator weights. Likewise, the mapping of coordinates in Eqs. (31) can be readily performed by either analog or digital hardware. As the scanning spot samples one image plane it extracts the interpolated pixel from coordinates x, y; the coordinates x, y are transformed into coordinates x', y' via Eqs. (31); and then x', y' are used to position a writing spot in a new plane. Figure 7 shows a block-diagram schematic to carry out this process.

FIG. 7. System for electrooptic warping of imagery.

The input scene is imaged onto an electrooptic device which is capable of storing the image frame over the time required for processing. In the frame store a flying spot scans, under control of the coordinate generator, extracting from the spot read-out the interpolated image pixel values. The coordinate generator also passes values to a simple computer which generates new coordinate pairs in the warped plane, based on coefficients calculated from the autocorrelation-width values. The new coordinate pairs drive a flying-spot scan which writes the warped image in the output of an electrooptic image display. The value written to the output display is first modified by Eq. (28), to correct for the space-varying mean. The image in the output display plane can now be the input to a bandwidth compression process, e.g., transform or DPCM, which is nonadaptive. In fact, the transform process can have fixed parameters, and the warping system in Fig. 7
can be operated to adapt the image to the fixed parameters of the compression process.

7. CLOSING REMARKS

The introduction of nonstationary statistics for image models is a logical step, when we consider how many image processing analyses use stationary models in the face of certain nonstationarity in the image data. We have seen herein how such assumptions can be introduced into simple models and how applications of such models can be quickly derived. The exploitation of the models in special-purpose hardware is a prospect that could make such models of great utility for applications in real problems.

REFERENCES

1. H. C. Andrews and B. R. Hunt, Digital Image Restoration, Prentice-Hall, Englewood Cliffs, N.J., 1977.
2. H. J. Trussell and B. R. Hunt, Sectioned methods for image restoration, IEEE Trans. Acoustics Speech Signal Processing ASSP-26, 1978, 157-164.
3. W. K. Pratt, Digital Image Processing, Wiley, New York, 1978.
4. B. R. Hunt, Bayesian methods in nonlinear digital image restoration, IEEE Trans. Computers C-26, 1977, 219-229.
5. B. R. Hunt and T. M. Cannon, Nonstationary assumptions for Gaussian models of images, IEEE Trans. Systems Man Cybernet. SMC-6, 1976, 876-882.
6. J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York, 1968.
7. A. Papoulis, The Fourier Integral and Its Applications, McGraw-Hill, New York, 1964.