Image Restoration Using Generalized Deterministic Annealing




DIGITAL SIGNAL PROCESSING 7, 94–104 (1997), Article No. SP970283

Scott T. Acton¹
School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, Oklahoma 74078
¹ E-mail: [email protected]. Fax: (405) 744-9198.

1051-2004/97 $25.00 Copyright © 1997 by Academic Press. All rights of reproduction in any form reserved.

Acton, S. T., Image Restoration Using Generalized Deterministic Annealing, Digital Signal Processing 7 (1997), 94–104. Regularization provides an effective method by which to generate high-quality solutions to the image restoration problem. Typically, an energy functional that balances deconvolution (sharpening) and smoothing is used to evaluate the quality of an iteratively generated solution. The success of the regularization approach is strongly tied to the method used to generate solutions of lower energy and, thus, higher quality. The generalized deterministic annealing (GDA) algorithm is developed here specifically for image restoration. Compared to other regularization techniques, GDA yields improvements in the quality of the restored image and in the computational expense of the restoration process. This paper specifies the numerical method, the parameters, and the energy functional used to generate image restoration solutions via GDA. Results are given that contrast the superior performance of GDA in image restoration against the traditional constrained least squares method, the steepest descent method, and the simulated annealing method. © 1997 Academic Press

1. INTRODUCTION AND BACKGROUND

Given an image that is both blurred and corrupted with additive noise, the problem of restoring the image to its original state is difficult. Applying a low-pass filter to eliminate the additive noise only results in further blurring, and using a high-pass filter to invert the blurring process only amplifies the noise. Filters that attain a compromise between the two extremes, such as the Wiener filter, often produce a visually unappealing result, especially in the presence of heavy-tailed noise. To acquire higher-quality solutions to this ill-posed image restoration problem, a regularization technique may be employed. Regularization is the process of forming a well-posed problem that has solutions which are acceptable approximations of the solutions to the actual ill-posed problem [1]. In image restoration, regularization is achieved by simultaneously satisfying two constraints. The first constraint, called the data constraint, essentially yields deconvolution (inverse filtering), while the second constraint, usually called the smoothness constraint, enforces smoothing on the solution. The input image to the image restoration process may be modeled as

g(i, j) = b(i, j) ∗ f(i, j) + n(i, j),   (1)

or in matrix form as

g = b ∗ f + n.   (2)

Here g is an M × N image with elements g(i, j) at row i and column j, f is the original image, b is a spatially invariant blurring kernel, and n represents zero-mean independent and identically distributed (i.i.d.) additive noise. The goal of regularization is to produce f̂, an acceptable approximation of f. One approach is to form a cost (energy) functional that enforces both the data and smoothness constraints, and then to compute solutions which minimize the cost or energy. For regularized image restoration, a possible energy functional that measures the solution quality of the candidate solution f̂ is

E(f̂) = d(g, b ∗ f̂) + λ{d[R(f̂), 0]},   (3)

where g is the sensed input image, R(f̂) is the regularization operator, 0 is an M × N element matrix of zeros, d(·, ·) is a p-metric, and λ is the regularization parameter. The term d(g, b ∗ f̂) enforces the data constraint by penalizing solutions that deviate from the deblurred version of the input image. d[R(f̂), 0] assesses a penalty where the solution is not smooth. The matrix R(f̂) is nonzero where the solution f̂ violates the smoothness constraint. The most common regularization operator is the digital Laplacian, which encourages a smooth signal by penalizing the image regions with high-frequency content [2–4]. To balance the conflicting goals of sharpening and smoothing, a regularization parameter λ is used.

Regularization in image restoration is a well-researched subject. Much scrutiny has been applied to estimating the blurring function and estimating the statistics of the noise process [3], to selecting the regularization parameter [2,4–9], and to developing adaptive regularization operators [1,10–13]. Considerably less attention has been paid to the method of minimizing the energy functional, once it is constructed. The actual minimization process has grave effects on both the solution quality and the computational expense. Typical implementations of the regularization technique utilize gradient descent or a conjugate gradient method to find solutions f̂ that correspond to local minima of E(f̂). Because of the conflicting constraints, this minimization problem is nonconvex. With a nonconvex energy function, the descent methods can be easily trapped in suboptimal local minima, which correspond to poor restoration solutions. Therefore, more computationally intensive methods, which probabilistically sample the vast solution space, have been explored. One such method, stochastic simulated annealing, can produce a solution which is arbitrarily close to the optimal solution. However, the processing time for reasonably sized images is prohibitive. Even on the most advanced workstations, simulated annealing algorithms can take hours or days to produce a single restoration result. This paper introduces a method by which to produce image restoration results with solution quality equivalent to results produced by simulated annealing, with a dramatic improvement in computational cost. The technique, generalized deterministic annealing (GDA), is applied to a regularized version of the image restoration problem. The GDA method is outlined and a specific implementation is given in Section 2. The energy functional used for image restoration is developed in Section 3. Then, the GDA-based method is compared to a standard constrained least squares method, a gradient descent-based method, and a stochastic simulated annealing-based restoration technique.
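As a concrete illustration of the energy functional (3), the following Python sketch evaluates a regularized restoration energy under stated assumptions: a p-metric with p = 2, a 1 × 3 average blur standing in for b, and a digital Laplacian standing in for R(f̂). All function names are illustrative, not from the paper.

```python
import numpy as np

def p_metric(a, b, p=2.0):
    """p-metric distance between two arrays: (sum |a - b|^p)^(1/p)."""
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

def restoration_energy(f_hat, g, blur, regularizer, lam, p=2.0):
    """E(f_hat) = d(g, b * f_hat) + lam * d[R(f_hat), 0], as in Eq. (3)."""
    data_term = p_metric(blur(f_hat), g, p)             # data constraint
    smooth_term = p_metric(regularizer(f_hat), 0.0, p)  # smoothness constraint
    return data_term + lam * smooth_term

def blur_1x3(f):
    """Horizontal 1x3 average filter (a stand-in for the blurring kernel b)."""
    padded = np.pad(f, ((0, 0), (1, 1)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

def laplacian(f):
    """Digital Laplacian (a stand-in for the regularization operator R)."""
    padded = np.pad(f, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * f)
```

For a perfectly flat candidate that reproduces the observed image, both terms vanish; any deviation raises the energy, which is exactly the quantity the minimization methods of Section 2 drive downward.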

2. GENERALIZED DETERMINISTIC ANNEALING FOR IMAGE RESTORATION

GDA was introduced in [14] as a tool for combinatorial optimization. This paper details the application of GDA to the image restoration problem. The principle behind GDA is to compute a deterministic approximation of the simulated annealing algorithm. Simulated annealing essentially performs a probabilistically guided search of a vast solution space. First, new solutions are generated randomly according to the distribution of a generating function G(·). For image restoration, this typically involves a random change of intensity at one pixel site. After generation, the new solution is accepted or rejected with a probability dictated by an acceptance function A(·) [15]. The annealing process is initialized at a high temperature T at which changes to solutions that involve an increase in energy (poorer quality) are accepted with high probability. This probability is reduced as the temperature is reduced and the solution becomes ‘‘frozen’’ in a local minimum of the energy.
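The generate-then-accept cycle described above can be sketched as follows; the sigmoid acceptance probability anticipates the acceptance function of Eq. (11), and the helper names (`anneal_step`, `neighbors`) are hypothetical, not from the paper.

```python
import math
import random

def sigmoid_accept_prob(delta_e, T):
    """Probability of accepting a move with energy change delta_e at temperature T.
    Uphill moves (delta_e > 0) are likely only when T is large."""
    return 1.0 / (1.0 + math.exp(delta_e / T))

def anneal_step(state, energy, neighbors, T, rng):
    """One simulated-annealing move: propose a random neighbor (generation
    function G), then accept or reject it stochastically (acceptance function A)."""
    candidate = rng.choice(neighbors(state))
    delta_e = energy(candidate) - energy(state)
    if rng.random() <= sigmoid_accept_prob(delta_e, T):
        return candidate
    return state
```

At high T the acceptance probability approaches 1/2 regardless of the energy change, while at low T only downhill moves survive, which is the freezing behavior described in the text.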

2.1. GDA Theory

For an M × N image with K possible pixel intensities, the set of possible image restoration solutions has a cardinality of K^{MN}. Therefore, computing the probability of each solution directly is as expensive as conducting an exhaustive search of the K^{MN} solutions. With GDA, the solution space is first broken into several localized (one-pixel) problems. Then, the generation and acceptance functions of simulated annealing can be used to determine the stationary distribution for the pixel intensity of each image pixel at a given annealing temperature. As the annealing temperature is reduced, the GDA solutions, like the simulated annealing solutions, converge to a solution image that corresponds to a local minimum in the energy functional. The pixel intensity values can be estimated from the probability distributions by taking expected values. The estimate for the intensity value at location


(i, j) can be expressed as the expected value

f̂(i, j) = Σ_{n=l(i, j)}^{l(i, j)+W−1} n p_{i, j}(n),   (4)

where p_{i, j}(n) represents the discrete probability that the pixel at location (i, j) has an intensity of n. p_{i, j}(n) is computed using the generation and acceptance functions of simulated annealing, as discussed below. Because of the computational burden of evaluating probability densities for all possible pixel intensities, only a subset of pixel intensities is considered at one time. This subset, or window, is centered at the estimate f̂(i, j). The minimum pixel intensity considered at location (i, j) is denoted by l(i, j), which is initialized as

l(i, j) := g(i, j) − (W − 1)/2.   (5)

Since the mean field estimate f̂(i, j) will change as the density function p_{i, j}(n) changes, l(i, j) is recomputed after each update according to the rule

l(i, j) := l(i, j) − 1   (6)

if f̂(i, j) ≤ l(i, j) + (W − 1)/2 − 1, and

l(i, j) := l(i, j) + 1   (7)

if f̂(i, j) ≥ l(i, j) + (W − 1)/2 + 1. The changes are limited to single intensity value changes to avoid possible oscillatory effects in the update of the probability densities.

To compute the pixel intensity estimates in the image restoration process, the probability density function p_{i, j}(n) is required for (4). The stationary distribution of p_{i, j}(n) at annealing temperature T can be formed as a function of the transition probabilities dictated by simulated annealing. The transition probability of the pixel at (i, j) with intensity m changing intensity to a value of n is given by (m ≠ n)

P^T_{i, j}(m, n) = G(m, n) A^T_{i, j}(m, n),   (8)

where G(m, n) is the probability of generating the intensity n given an initial intensity of m. A^T_{i, j}(m, n) is the probability of accepting a transition from intensity m to intensity n at temperature T for the pixel at location (i, j). The probability of a self-transition is

P^T_{i, j}(m, m) = 1 − Σ_{n=l(i, j), n≠m}^{l(i, j)+W−1} P^T_{i, j}(m, n).   (9)

For the image restoration solution, the generation function is given by

G(m, n) = 1/W,   (10)

so that each pixel intensity is equally probable as a candidate solution. The acceptance function is a sigmoid dependent on the change in the energy functional produced by a potential transition

A^T_{i, j}(m, n) = 1 / (1 + exp[∇E_{i, j}(m, n)/T]),   (11)

where ∇E_{i, j}(m, n) is the change in energy produced by changing the pixel intensity at location (i, j) from m to n. ∇E_{i, j}(m, n) is defined by

∇E_{i, j}(m, n) ≡ E[f̂: f̂(i, j) = n] − E[f̂: f̂(i, j) = m].   (12)

Note that ∇E_{i, j}(m, n) only depends on a small local neighborhood of pixels restricted to the spatial support of the blurring operator and the regularization operator. An update function for p_{i, j}(n) can be constructed by summing the probabilities of transitions to pixel intensity n using (8), including possible self-transitions using (9). This update function is

p_{i, j}(n) := Σ_{m=l(i, j), m≠n}^{l(i, j)+W−1} P^T_{i, j}(m, n) p_{i, j}(m) + p_{i, j}(n) [1 − Σ_{m=l(i, j), m≠n}^{l(i, j)+W−1} P^T_{i, j}(n, m)],   (13)

which simplifies to

p_{i, j}(n) := (1/W) Σ_{m=l(i, j)}^{l(i, j)+W−1} A^T_{i, j}(m, n) [p_{i, j}(m) + p_{i, j}(n)].   (14)

The GDA algorithm for image restoration is enacted by forming the appropriate energy functional (3) and then iterating on (14) for each pixel at each annealing temperature. To actually implement this image restoration process, the algorithm parameters and the annealing schedule (the exact progression of temperatures) must be given. Also, the number of times each probability density for each pixel is updated to obtain a stationary distribution must be given.
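A minimal sketch of the window update (14) for a single pixel, assuming the caller supplies the W energies E[f̂: f̂(i, j) = n] for the current window; the function name and list-based interface are illustrative. The update preserves total probability and, as the temperature is lowered, concentrates mass on the lowest-energy intensity.

```python
import math

def gda_update(p, energies, T):
    """One GDA update of a window distribution p via Eq. (14):
    p(n) := (1/W) * sum_m A(m, n) * [p(m) + p(n)],
    with acceptance A(m, n) = 1 / (1 + exp((E(n) - E(m)) / T)) as in Eq. (11)."""
    W = len(p)
    new_p = []
    for n in range(W):
        total = 0.0
        for m in range(W):
            # A(m, n) + A(n, m) = 1, and A(n, n) = 1/2, so probability is conserved.
            a_mn = 1.0 / (1.0 + math.exp((energies[n] - energies[m]) / T))
            total += a_mn * (p[m] + p[n])
        new_p.append(total / W)
    return new_p
```

Iterating this map at a fixed low temperature drives the distribution toward the window's minimum-energy intensity, which is the "frozen" behavior the annealing schedule exploits.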

2.2. Guidelines for the Implementation of Image Restoration with GDA

First, W, the number of pixel intensities considered at each pixel site, must be selected. At one time, GDA evaluates the probability density associated with W different solutions for the intensity at each image location. Since the window of possible intensities is centered at the expected value of the intensity for the particular pixel, W is essentially a function of the uncertainty in the pixel intensity due to noise. Given an additive noise process with a zero mean and standard deviation equal to σ, a 95% confidence interval is provided by

W = 4σ.   (15)

The exact annealing schedule and the number of iterations needed at each temperature depend on the energy functional (3) used in image restoration. Define the minimum possible change in the energy produced by a change in a pixel intensity as

a ≡ inf {∇E_{i, j}(m, n): 0 ≤ i < M; 0 ≤ j < N; l(i, j) ≤ m < l(i, j) + W; l(i, j) ≤ n < l(i, j) + W; i, j, m, n ∈ Z},   (16)

where Z is the set of integers, and define the maximum possible energy change as

b ≡ sup {∇E_{i, j}(m, n): 0 ≤ i < M; 0 ≤ j < N; l(i, j) ≤ m < l(i, j) + W; l(i, j) ≤ n < l(i, j) + W; i, j, m, n ∈ Z}.   (17)

At the initial annealing temperature T_I, it is desired that all solutions have equal probability. The reasoning behind this condition is that the algorithm should initially be able to make changes to solutions that involve both increases and reductions in the energy. Otherwise, the image restoration algorithm would not be able to escape local minima in the energy functional which correspond to suboptimal image solutions. If all solutions for a given pixel were given equal probability, then (∀i, j, n): p_{i, j}(n) = 1/W. However, this condition is true only if T = ∞. So, this condition is relaxed by an ε-bound

|p_{i, j}(n) − 1/W| < ε,   (18)

which, according to [14], gives an initial temperature of

T_I = b / ln[(W − 1 + W²ε) / (W − 1 − W²ε)].   (19)

A sufficient ε-bound for this image restoration application is provided by ε = 1/W². Then (18) becomes |p_{i, j}(n) − 1/W| < 1/W², and

T_I = b / ln[W / (W − 2)].   (20)

The final temperature in the annealing process should be low enough that the image restoration solution has converged and is ‘‘frozen.’’ At this temperature, GDA functions as a steepest descent algorithm, where only changes to lower-energy solutions are accepted. For uniform convergence to within an ε-bound of the final probability density for each p_{i, j}(n), a final temperature of

T_F = a / ln[(W − 1 − Wε) / (Wε)]   (21)

is needed, given a sufficiently small ε in the range 0 < ε < 1/2 − 1/(2W). Again, ε = 1/W² for the implementation of image restoration, giving

T_F = a / ln(W² − W − 1).   (22)

Since a and b depend on the actual energy functional (3), T_I and T_F also depend on (3). The exact expressions of (20) and (22) will be defined after the energy functional is defined. The final temperature T_F can also be determined at run time, by examining the change in probability density between iterations, ∇p_{i, j}(n), for all image locations (i, j) and possible intensities n. A stopping condition can then be given by

|∇p_{i, j}(n)| < ε,   (23)

or

|∇p_{i, j}(n)| < 1/W².   (24)
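Under the ε = 1/W² choice, (20) and (22) reduce to simple closed forms; the following is a sketch (function names are illustrative):

```python
import math

def initial_temperature(b, W):
    """T_I from Eq. (20): b / ln(W / (W - 2)), for the choice eps = 1/W^2."""
    return b / math.log(W / (W - 2.0))

def final_temperature(a, W):
    """T_F from Eq. (22): a / ln(W^2 - W - 1), for the choice eps = 1/W^2."""
    return a / math.log(W * W - W - 1.0)
```

For example, with W = 9 and energy-change bounds a = 1 and b = 10, these give T_I ≈ 39.8 and T_F ≈ 0.23, the endpoints of the geometric schedule discussed next.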

In applying GDA to the image restoration problem, one should be aware of implementation details that result in computational savings. First, the most expensive part of the update computation is the evaluation of (11), the acceptance function. The sigmoidal acceptance function values can be stored in a lookup table to enhance speed. Then, only simple addition and multiplication operations are needed for each pixel update. For the updates, it may be noted that A^T_{i, j}(m, n) = 1 − A^T_{i, j}(n, m). Therefore, only half of the required acceptance function values need to be computed.
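The lookup-table idea can be sketched as follows, tabulating the sigmoid over nonnegative quantized energy changes and recovering negative ones through the antisymmetry A(m, n) = 1 − A(n, m). The quantization step is an assumption for illustration; the paper does not specify one.

```python
import math

def build_acceptance_table(max_delta, T, step=1.0):
    """Tabulate A(dE) = 1/(1 + exp(dE/T)) for dE = 0, step, 2*step, ..., max_delta.
    Negative energy changes need no entries: A(-dE) = 1 - A(dE)."""
    n = int(max_delta / step) + 1
    return [1.0 / (1.0 + math.exp(i * step / T)) for i in range(n)]

def accept_prob(table, delta_e, step=1.0):
    """Look up the acceptance probability, using the antisymmetry for dE < 0."""
    idx = int(round(abs(delta_e) / step))
    a = table[idx]
    return a if delta_e >= 0 else 1.0 - a
```

Once the table is built for a given temperature, each pixel update needs only an index computation and, at most, one subtraction, which is the saving the text describes.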

At this point, the initial and final temperatures for using GDA to generate image restoration solutions are defined. To implement the annealing process, the method of generating the intermediate temperatures must also be defined. A fast geometric annealing schedule is used with GDA, according to the rule

T := µT.   (25)

The parameter µ is generally set between 0.9 and 0.95 in the annealing literature [15]. If analytical selection of µ is desired, it can be determined by the computational resources available for the image restoration process. Since the number of updates per temperature is known, the computational expense can be limited by the number of temperatures used in the annealing process. If the number of temperatures is restricted by t, then

µ^t = T_F / T_I,   (26)

and this leads to

µ = (T_F / T_I)^{1/t}.   (27)

The number of updates to each probability density p_{i, j}(n) using (14) is different for each temperature T. Let h(T) represent an analytical bound on the number of iterative updates of (14) at temperature T needed to achieve uniform convergence within an ε-bound of the stationary distribution. From [14], this bound is

h(T) = ln(ε) / ln[(W − 2)/W + (2/W) e^{−T/b}],   (28)

and for the image restoration implementation with ε = 1/W²,

h(T) = ln(1/W²) / ln[(W − 2)/W + (2/W) e^{−T/b}].   (29)

In general, this produces a number of iterations h(T) ≈ W. For a run-time stopping rule, the change in density ∇p_{i, j}(n) can be observed at a given temperature T. The stopping condition is

(1/(MN)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} [Σ_{n=l(i, j)}^{l(i, j)+W−1} |∇p_{i, j}(n)|] < 1/W.   (30)

3. SPECIFICATION OF THE ENERGY FUNCTIONAL FOR IMAGE RESTORATION

3.1. Construction of the Data Constraint

To accomplish the image restoration process using GDA, the energy functional must be explicitly defined. Again, the energy functional of (3) essentially balances deconvolution and smoothing. For deconvolution via a data constraint, a penalty is assessed by measuring (with a prescribed p-metric) the distance between the blurred version of the current solution and the input image. The p-metrics for the data constraint can be expressed as

d(b ∗ f̂, g) = d(h, g) = [Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} |h(i, j) − g(i, j)|^p]^{1/p},   (31)

for p ∈ (0, ∞), where h = b ∗ f̂. The proper p-metric can be chosen according to a priori information about the additive noise distribution. Given noise with a probability density function of

f(i) = γ exp(−ζ|i|^p),   (32)

where p ∈ (0, ∞), and γ and ζ are constants that depend on p only, the metric with parameter p should be used in the restoration process [16]. This means that p = 1 should be used for Laplacian-distributed noise, p = 2 should be used for Gaussian-distributed noise, and so on.

3.2. Construction of the Smoothing Constraint

Several possibilities exist for the regularization operator R(f̂) that enforces a smoothing constraint on the solution. To demonstrate that the GDA method described here can be used in conjunction with

difficult nonlinear regularization operators, a nonlinear operator will be employed. A possible measure of signal smoothness that penalizes outliers and preserves edges is local monotonicity. A 1-D signal is locally monotonic if it is increasing, decreasing, or constant within every subsequence of length m. For images, local monotonicity must be evaluated along 1-D paths (such as along rows and columns). The element of R(f̂) at location (i, j) is denoted by R_{i, j}(f̂). One possible method for computing R_{i, j}(f̂) is

R_{i, j}(f̂) = Σ_{k=i−m+1}^{i} Σ_{l=k}^{k+m−3} neg{[f̂(l, j) − f̂(l+1, j)] [f̂(l+1, j) − f̂(l+2, j)]} · ||f̂(l, j) − f̂(l+1, j)| − |f̂(l+1, j) − f̂(l+2, j)||
    + Σ_{k=j−m+1}^{j} Σ_{l=k}^{k+m−3} neg{[f̂(i, l) − f̂(i, l+1)] [f̂(i, l+1) − f̂(i, l+2)]} · ||f̂(i, l) − f̂(i, l+1)| − |f̂(i, l+1) − f̂(i, l+2)||,   (33)

where the function neg(x) = 1 for x < 0 and neg(x) = 0 otherwise. The first term of (33) gives the penalty for the deviation from monotonicity along the columns, and the second term corresponds to the row subsequences. The penalty (33) is nonzero when a sign change exists between successive discrete differences in the signal. This sign change of the difference signal indicates that the signal has gone from increasing to decreasing or vice versa within a length-m subsequence. The complete regularization operator is given by computing (33) for each image location (i, j). Then, R(f̂) is measured by

d[R(f̂), 0] = [Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} |R_{i, j}(f̂)|^p]^{1/p}.   (34)

Now, with the energy functional in hand, the regularization parameter λ must be determined.

3.3. Selection of the Regularization Parameter

In the literature, the regularization parameter has been selected using the statistics of the corruptive process [4,5,8,9]. Iterative and adaptive methods have also been applied to improve image quality and to limit oversmoothing [1,2,6,7,10–12]. One drawback of these methods is the required knowledge of the noise process. A second limitation is the restriction of the regularization operator to linear operators. Here, a method that does not require knowledge of the corruptive process and allows nonlinear regularization operators is utilized. Cross validation, introduced by Reeves and Mersereau [4,17], divides the image pixels into an estimation set and a validation set. To evaluate the effectiveness of a certain parameter value for λ, the restoration process is carried out as normal on the pixels in the estimation set. At the same time, image restoration is computed for the pixels in the validation set, but with an energy functional that does not have a regularization operator (the smoothness constraint is relaxed). Then, the pixels in the validation set are used to predict the error (the distance between the solution f̂ and the original image f). Let Q represent the set of all image coordinates, and let Q_v represent the set of pixels in the validation set. Reeves and Mersereau have reported that a validation set with approximately 1/10 of the total pixels is effective in implementing an approximate cross validation method [17]. Evaluation of a particular regularization parameter value is achieved by first finding the solution that (approximately) minimizes

E_v(f̂) = d_e(g, h) + λd[R(f̂), 0],   (35)

where d_e(g, h) only pertains to the pixels in the estimation set as follows:

d_e(g, h) = [Σ_{(i, j)∈Q−Q_v} |g(i, j) − h(i, j)|^p]^{1/p}.   (36)

Again, h = b ∗ f̂. Then, the validation error is expressed as

V(λ) = d_v(g, h),   (37)

where d_v(g, h) is evaluated over the validation set pixels by

d_v(g, h) = [Σ_{(i, j)∈Q_v} |g(i, j) − h(i, j)|^p]^{1/p}.   (38)

For different values of λ, the validation error will have a minimum value approximately collocated with the minimum error between the solution f̂ and the original image f. So, the regularization parameter λ that gives the minimum cross validation error is used in the image restoration process.
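To make the neg{·} penalty of (33) concrete, here is a 1-D sketch that accumulates the cost over every triple of adjacent samples; the windowed double sum of (33) visits these same triples, with multiplicity depending on m. Function names are illustrative.

```python
def neg(x):
    """neg(x) = 1 for x < 0 and 0 otherwise, as defined after Eq. (33)."""
    return 1.0 if x < 0 else 0.0

def monotonicity_penalty_1d(x):
    """Accumulate the Eq. (33)-style penalty over a 1-D sequence:
    a cost accrues only where successive differences change sign,
    weighted by how unequal the two differences are in magnitude."""
    total = 0.0
    for l in range(len(x) - 2):
        d1 = x[l] - x[l + 1]
        d2 = x[l + 1] - x[l + 2]
        total += neg(d1 * d2) * abs(abs(d1) - abs(d2))
    return total
```

Monotone or constant runs incur no penalty, so edges and ramps survive, while an isolated spike (an up-down sign change) is charged, which is exactly the outlier-rejecting, edge-preserving behavior claimed for local monotonicity.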

3.4. Specification of the Minimum and Maximum Energy Changes

As stated in Section 2, the specific annealing schedule and number of updates required at each temperature are dependent on the minimum and maximum possible changes in energy associated with one solution change. Now that the energy functional is completely defined for image restoration, the minimum and maximum energy changes, a and b, respectively, will be provided. The minimum change using (16) is

a = 1,   (39)

which represents the minimum change in the data constraint term of (31). The maximum change is given by the sum of the maximum change in (31), W − 1, and the maximum change in (33), 2λ(K − 1):

b = W − 1 + 2λ(K − 1).   (40)

Now, a and b can be applied to (20), (22), and (29) to obtain the initial annealing temperature, the final temperature, and the number of iterations needed per temperature.

4. COMPARISON TO OTHER METHODS; RESULTS

The purpose of this paper is to establish a method by which to generate image restoration solutions of high quality and relatively low computational expense. In the comparative analysis presented here, three classes of techniques are used to produce solutions to the regularized version of the image restoration problem. Those three methods include a greedy algorithm (steepest descent), a stochastic algorithm (simulated annealing), and a deterministic annealing algorithm (GDA). To first establish the superiority in solution quality of the regularized image restoration solutions, a result from a standard constrained least squares method is also presented. The example used here is the ‘‘cameraman’’ image (Fig. 1), which has been blurred by a 1 × 9 uniform motion blur (i.e., filtered with a 1 × 9 average filter). The blurred image has also been corrupted by 15-dB Gaussian-distributed additive noise, where the signal-to-noise ratio (SNR) is computed using

SNR = 10 log_10 (σ²_{b∗f} / σ²),   (41)

where σ²_{b∗f} is the variance of the blurred but uncorrupted image and σ² is the variance of the noise. The blurred, corrupted version of the cameraman image is shown in Fig. 2. The comparative results are evaluated subjectively by visual quality and quantitatively by the improvement in SNR (ISNR):

ISNR = 10 log_10 (‖g − f‖² / ‖f̂ − f‖²).   (42)

FIG. 1. The original cameraman image.

FIG. 2. The blurred, noisy image (SNR = 15 dB).

The specification of the method used and the results obtained for the constrained least squares,

steepest descent, and simulated annealing approaches are given here. The results of each method are compared to those given by GDA. Table 1 provides the mean-squared error (MSE), the ISNR, and the execution time for each technique.

TABLE 1
Results

Method                             MSE     ISNR (dB)  Execution time^a (HH:MM:SS)
CLS                                1243.1  −3.9       00:00:17
Steepest descent (10 iterations)   474.1   10.6       02:16:21
SA (100 iterations/temp.)          397.6   11.3       06:37:31
SA (256 iterations/temp.)          393.2   11.4       09:02:21
GDA (W = 9)                        401.2   11.3       00:13:54
GDA (W = 21)                       364.4   11.7       01:48:28

^a All times for execution on a Sun Sparc Ultra I.

4.1. Constrained Least Squares (CLS) Restoration

The CLS method for image restoration is motivated by the same ideas as the regularized method described in the Introduction: attempting to achieve a balance between deconvolution and smoothing. Using a fast Fourier transform (FFT)-based approach, solutions can be rapidly produced. Given (1), the blurred, noisy image can be expressed in the frequency domain by

G(u, v) = B(u, v)F(u, v) + N(u, v),   (43)

where G(u, v) is the DFT (discrete Fourier transform) of g(i, j), B(u, v) is the DFT of b(i, j), F(u, v) is the DFT of the original image f(i, j), and N(u, v) is the DFT of n(i, j). Image restoration can be attempted using the classical Wiener approach

F̂(u, v) = [(1 + η)B*(u, v) / (|B(u, v)|² + η)] G(u, v),   (44)

where η is the noise power, F̂(u, v) is the DFT of the restoration result, and B*(u, v) is the complex conjugate of B(u, v). Unfortunately, the Wiener filter is ineffective for heavy-tailed noise [3]. The CLS approach, on the other hand, attempts to minimize the response of the Laplacian to the solution, as with the traditional regularization approach [6,19]. Here, F̂(u, v) is computed using

F̂(u, v) = [B*(u, v) / (|B(u, v)|² + γL(u, v))] G(u, v),   (45)

where L(u, v) is the DFT of the spatial domain Laplacian kernel, and γ is an unknown regularization parameter. γ is determined by iteratively producing solutions using (45) until the minimal error from (31) is discovered [19]. The CLS method is extremely inexpensive computationally, requiring only 17 seconds on a Sun Sparc Ultra I. However, the change in SNR of −3.9 dB is unacceptable. Furthermore, the result shown in Fig. 3 depicts a wavy, noisy background and the absence of sharp edges.

FIG. 3. Image restoration result using CLS.

4.2. Steepest Descent

The steepest descent algorithm belongs to the family of ‘‘greedy’’ optimization algorithms. A greedy method only makes solution changes that result in a reduction in energy. Therefore, greedy algorithms cannot escape local minima in the energy function and result in suboptimal solutions. Furthermore, the steepest descent algorithm and greedy algorithms in general are susceptible to oscillations in the iterative solution that significantly increase the computational expense. (It is possible to revisit the same solution several times.) The steepest descent algorithm used here simply updates each pixel in the image in a random order, changing the pixel intensity to the value which results in the greatest reduction of energy. The updates may be expressed as

f̂(i, j) = arg min_n {E[f̂: f̂(i, j) = n]},   (46)

where 0 ≤ n ≤ K − 1. The high computational expense of the descent algorithm (over 2 h) is also due to the K energy computations that must be performed with each update. The quantitative solution quality of 10.8 dB improves on the CLS method, but falls significantly short of the GDA result (11.7 dB) given in less time (see Table 1). Qualitatively speaking, the noise still evident in the steepest descent result shown in Fig. 4 is the visual result of a poor minimization of the image restoration energy functional.
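The greedy update (46) can be sketched directly: try every intensity in [0, K − 1] at one site and keep the minimizer. Here `energy` stands for whatever evaluates the functional (3); the names are illustrative.

```python
def greedy_pixel_update(image, i, j, K, energy):
    """Greedy update of Eq. (46): set pixel (i, j) to the intensity in [0, K-1]
    that minimizes the energy of the modified image. Mutates image in place."""
    best_n, best_e = image[i][j], None
    for n in range(K):
        image[i][j] = n
        e = energy(image)          # K energy evaluations per pixel update
        if best_e is None or e < best_e:
            best_n, best_e = n, e
    image[i][j] = best_n
    return best_n
```

The K energy evaluations per visited pixel are the cost the text attributes to the descent algorithm, and because the rule never accepts an energy increase, it can stall in exactly the suboptimal local minima that annealing is designed to escape.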

4.3. Simulated Annealing

The principles of the simulated annealing algorithm have already been discussed in Section 2. Each simulated annealing update to the solution evaluates the acceptance or rejection of a random move for a randomly selected pixel. At a particular pixel site (i, j), a random change is generated from f̂(i, j) = m to f̂(i, j) = n according to the distribution of the generating function G(m, n). Given a uniformly distributed random number r ∈ [0, 1], if

r ≤ A^T_{i, j}(m, n),   (47)

then the change is accepted. Otherwise, the change in intensity at location (i, j) is rejected. At each annealing temperature, a number of stochastic updates are performed until convergence to a stationary distribution is achieved. Annealing continues until the final temperature is reached, where only downhill moves (changes which result in decreases in the energy) are accepted [15].

A comparison to a practical simulated annealing implementation with a geometric annealing schedule is given here. The annealing schedule used with the GDA examples is also used for simulated annealing. The major expense of the simulated annealing algorithm is the prohibitive number of random updates that are required to achieve convergence to a stationary solution at each temperature. The practical algorithm limits the number of updates. With just 100 iterations (full sweeps of updates) per temperature, the simulated annealing method requires over 6 h to generate a restored image with an ISNR of 11.3 dB (Fig. 5). With 256 iterations per temperature, a modest improvement in solution quality (to 11.4 dB) is obtained for an additional 3 h of processing time (Fig. 6). The improvement in SNR is less than that provided by GDA with W = 21 (an ISNR of 11.7 dB), and simulated annealing requires approximately 40 times the amount of computational time to produce the same solution quality as the W = 9 GDA implementation (Fig. 7). (At W = 51, GDA yields an ISNR of 12.0 dB.) In terms of subjective quality, the GDA result in Fig. 8 appears less noisy than either of the simulated annealing results in Figs. 5 and 6. Features in the face of the cameraman and in the camera itself are more clear than in the simulated annealing-generated images.

FIG. 5. Image restoration result using simulated annealing (100 iterations/temperature).

FIG. 4. Image restoration result using steepest descent.

FIG. 6. Image restoration result using simulated annealing (256 iterations/temperature).

FIG. 8. Image restoration result using GDA (W = 21).

5. CONCLUSIONS

A specific method by which to obtain a solution to the regularized image restoration problem is prescribed. The numerical method for using GDA to generate restored images is given in Section 2. Methods for selecting the GDA parameters and the annealing schedule are also given in Section 2. The GDA-based restoration approach is based on the minimization of an energy functional that encapsulates the notions of deconvolution and smoothing. This energy functional, along with the regularization parameter, is explicitly defined in Section 3. To demonstrate the efficacy of this method, the GDA-generated results are compared to those of a CLS method, a steepest descent method, and a simulated annealing method. The GDA results uniquely give high quality for reasonable computational times. As shown in Table 1, GDA provides the highest ISNR and requires less computational time than the steepest descent or simulated annealing methods. Future studies will focus on the use of nonlinear regularization operators in conjunction with GDA.

ACKNOWLEDGMENTS This work is supported in part by NASA and in part by the U.S. Army Research Office.

REFERENCES

1. Katsaggelos, A. K. Iterative image restoration algorithms. Opt. Eng. 28 (1989), 735–748.
2. Kang, M. G., and Katsaggelos, A. K. General choice of the regularization functional in regularized image restoration. IEEE Trans. Image Process. 4 (1995), 594–602.
3. Katsaggelos, A. K. Digital Image Restoration. Springer-Verlag, Berlin, 1991.
4. Reeves, S. J., and Higdon, A. C. Perceptual evaluation of the mean-square error choice of regularization parameter. IEEE Trans. Image Process. 4 (1995), 107–111.
5. Galatsanos, N. P., and Katsaggelos, A. K. Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation. IEEE Trans. Image Process. 1 (1992), 322–336.
6. Kang, M. G., and Katsaggelos, A. K. Simultaneous iterative image restoration and evaluation of the regularization parameter. IEEE Trans. Signal Process. 40 (1992), 2329–2334.

FIG. 7. Image restoration result using GDA (W = 9).


7. Katsaggelos, A. K., and Kang, M. G. Iterative evaluation of the regularization parameter in regularized image restoration. J. Visual Comm. Image Rep. 3 (1992), 446–455.
8. Reeves, S. J., and Mersereau, R. M. Optimal estimation of the regularization parameter and stabilizing functional for regularized image restoration. Opt. Eng. 29 (1990), 446–454.
9. Thompson, A. M., Brown, J. C., Kay, J. W., and Titterington, D. M. A study of methods of choosing the smoothing parameter in image restoration by regularization. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-13 (1991), 326–339.
10. Ichioka, Y., and Nakajima, N. Iterative image restoration considering visibility. J. Opt. Soc. Amer. 71 (1983), 983–988.
11. Kang, M. G., and Katsaggelos, A. K. A generalized formulation of the weighted smoothing functional for regularized image restoration. In Proc. IEEE International Conference on Image Processing, Austin, TX, 1994, pp. 695–699.
12. Lagendijk, R. L., Biemond, J., and Boekee, D. E. Regularized iterative image restoration with ringing reduction. IEEE Trans. Acoust. Speech Signal Process. ASSP-36 (1988), 1804–1887.
13. Rajala, S. S., and deFigueiredo, R. J. P. Adaptive nonlinear image restoration by a modified Kalman filtering approach. IEEE Trans. Acoust. Speech Signal Process. ASSP-29 (1981), 1033–1042.
14. Acton, S. T., and Bovik, A. C. Generalized deterministic annealing. IEEE Trans. Neural Networks 7 (1996), 686–699.
15. Aarts, E. H. L., and Korst, J. Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing. Wiley, New York, 1987.

16. Restrepo (Palacios), A., and Bovik, A. C. Locally monotonic regression. IEEE Trans. Signal Process. ASSP-41 (1993), 2796–2810.
17. Reeves, S. J., and Mersereau, R. M. Automatic assessment of constraint sets in image restoration. IEEE Trans. Image Process. 1 (1992), 119–122.
18. Hunt, B. R. The application of constrained least squares estimation to image restoration. IEEE Trans. Comput. 22 (1973), 805–812.
19. Gonzalez, R., and Woods, R. Digital Image Processing. Addison–Wesley, Reading, MA, 1992.

SCOTT T. ACTON received the B.S. degree in electrical engineering from the Virginia Polytechnic Institute and State University in 1988. He received the M.S. degree and the Ph.D. degree in Electrical Engineering from the University of Texas at Austin in 1990 and 1993, respectively. He has worked in industry for AT&T in Oakton, Virginia; for the Mitre Corporation in McLean, Virginia; and for Motorola, Inc., in Phoenix, Arizona. Dr. Acton is the 1996 Outstanding Young Electrical Engineer in the U.S., as selected by Eta Kappa Nu. Currently, Dr. Acton is an assistant professor in the School of Electrical and Computer Engineering at Oklahoma State University, where he directs the Oklahoma Imaging Laboratory. The laboratory is sponsored by several organizations including the Army Research Office (ARO), NASA, NSF, and Lucent Technologies. He is an active participant in the IEEE, SME, and Eta Kappa Nu. His research interests include multiresolution image representations, diffusion algorithms, image morphology, and image restoration.
