Pattern Recognition 35 (2002) 455–463
www.elsevier.com/locate/patcog
New maximum likelihood motion estimation schemes for noisy ultrasound images

Boaz Cohen∗, Its'hak Dinstein

Electrical and Computer Engineering Department, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel

Received 25 January 2000; received in revised form 20 February 2001; accepted 20 February 2001
Abstract

When performing block-matching based motion estimation with the ML estimator, one tries to match blocks from the two images within a predefined search area. The estimated motion vector is the one that maximizes a likelihood function formulated according to the image formation model. Two new maximum likelihood motion estimation schemes for ultrasound images are presented. The new likelihood functions are based on the assumption that both images are contaminated by Rayleigh distributed multiplicative noise. The new approach enables motion estimation in cases where a noiseless reference image is not available. Experimental results show a motion estimation improvement with respect to other known ML estimation methods. © 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Ultrasound images; Motion estimation; Block matching; Maximum likelihood; Rayleigh distributed noise
1. Introduction

Medical ultrasound images are used as a very important diagnostic tool by physicians. The acquisition of an ultrasound image is fast, cheap, and safe. Each ultrasound image is generated by reflections of acoustic waves from the scanned tissue and is degraded by a granular pattern known as speckle noise. Motion estimation between successive frames can be used for speckle suppression. Motion estimation can also be used for image sequence compression (usually block-based motion estimation) [3] and as a powerful clinical diagnostic tool for the identification of pathological abnormalities [4].

Substantial effort has been made to reduce speckle noise in single ultrasound images (for example, see Refs. [2,5–10]). As in many other noise reduction problems, good speckle noise reduction results can be achieved by processing the image sequence as a whole [3,11–14]. One possible approach for such processing is to estimate the local motion between the ultrasound frames and then apply temporal filtering to the motion-compensated data. Kontogeorgakis et al. [4] estimated the motion between blocks of ultrasound images using the minimum absolute difference of their luminance values in order to estimate a motion-magnitude map. Yeung et al. [15] estimated motion in ultrasound sequences by block matching, with the sum of absolute differences used as the matching criterion. They worked with variable matching block and search window sizes in a coarse-to-fine scheme, to preserve the immunity to noise associated with the use of large
matching blocks while preserving the motion field associated with the use of small matching blocks. Strintzis and Kokkinidis [3] based their work on the assumption that speckle patterns in ultrasound images can be represented by multiplicative Rayleigh distributed noise or by signal-dependent additive noise [6]. They used these statistical models for the evaluation of the maximum likelihood (ML) estimate of the motion field. They claimed that the SNR of a reconstructed image obtained by the ML motion estimator based on these models is higher than the one obtained by the classical block-matching ML algorithm using the L1 norm. According to their models, only one of the two images used to compute the motion field contains noise. In this paper we extend the model proposed in Ref. [3], and present an ML motion estimator for the case of two ultrasound images contaminated by multiplicative Rayleigh distributed noise.

The outline of this paper is as follows. Motion estimation by ML is described in Section 2. In Section 3, various conditional probability density functions for ML motion estimation are presented. Section 4 contains simulated and in vivo test results, and the summary and conclusions appear in Section 5.
2. Maximum likelihood motion estimation

Following the description and notation given in Ref. [3], let Xt be the set of pixel coordinates of an image at time t. Let It(xt) represent the intensity of the pixel at coordinates xt, xt ∈ Xt, and let It = {It(xt)}, xt ∈ Xt, represent the intensity image at time t. The transition of a pixel between times t − 1 and t is described by the motion vector Ct(xt), xt ∈ Xt. If xt−1 ∈ Xt−1 is the pixel corresponding to xt ∈ Xt, the following holds:

Ct(xt) = xt − xt−1.   (1)

Let Ct = {Ct(xt)}, xt ∈ Xt, represent the vector field describing the motion of the pixels Xt between t − 1 and t. Since this vector field is composed of single-pixel motions, it can represent any desired motion model. According to the maximum likelihood method for parameter estimation [16], the ML estimate of Ct, ĈML, is obtained by maximization of the conditional probability density

max_{Ct} p(It−1 | It; Ct)   at Ct = ĈML.   (2)

The vector field Ct for non-rigid motion is usually not constant, so it must be estimated locally. To do so, we assume that Xt is composed of non-overlapping sets Bi, i = 1, ..., N, of pixels with uniform motion:

Xt = ∪_{i=1}^{N} Bi,   Bi ∩ Bj = ∅,   i ≠ j,   (3)

Ct(xt) = Ci   for all xt ∈ Bi.   (4)

Let Ai be the corresponding sets of pixels in Xt−1, as follows:

Ai = {xt−1 : xt−1 = xt − Ci, xt ∈ Bi}.   (5)

If the motion is not constant over the whole image, the sets Ai may overlap, and Xt−1 may not be fully covered. The pixel sets Bi and Ai considered here represent square "blocks" of pixels, or more irregularly shaped "objects" whose selection criterion is the uniformity of their motion, besides the practical considerations concerning the efficient implementation of the motion estimation algorithm. Let ai = [ai1 · · · aik]ᵀ represent the vector of all intensities It−1(xt−1), xt−1 ∈ Ai, and bi = [bi1 · · · bik]ᵀ the corresponding vector of intensities It(xt), where xt−1 = xt − Ci. The maximization of Eq. (2) is obviously equivalent to the maximization, for each i, of the conditional probability density

max_{Ci} p(ai | bi; Ci)   at Ci = ĈiML.   (6)
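To make the block-wise search of Eq. (6) concrete, the following sketch performs an exhaustive search over candidate displacements for a single block and keeps the one with the highest likelihood. This is a minimal illustration, not the authors' implementation: the function name, the boundary handling, and the generic log_likelihood callback (which stands in for any of the densities derived in Section 3) are our own assumptions.

```python
import numpy as np

def estimate_block_motion(I_prev, I_curr, block, search, log_likelihood):
    """Exhaustive-search ML motion estimation for one block (a sketch of Eq. (6))."""
    top, left, size = block
    b = I_curr[top:top + size, left:left + size].ravel()     # intensities b_i of block B_i
    best, best_ll = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top - dy, left - dx                        # x_{t-1} = x_t - C_i
            if r < 0 or c < 0 or r + size > I_prev.shape[0] or c + size > I_prev.shape[1]:
                continue                                      # candidate falls outside the previous frame
            a = I_prev[r:r + size, c:c + size].ravel()        # intensities a_i of the set A_i
            ll = log_likelihood(a, b)
            if ll > best_ll:
                best_ll, best = ll, (dy, dx)
    return best                                               # estimated C_i = (dy, dx)
```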
3. Various conditional probability density functions for ML motion estimation

3.1. Independent additive noise
In the general case of imaging systems, the following assumption is made [3]:

ai = bi + ni,   (7)

where ni is a noise process independent of bi, with independent elements. If pn(n) is the probability density function of each element of the vector ni, the maximization of Eq. (6) is equivalent to the maximization

max_{Ci} ∏_{j=1}^{k} pn(aij − bij)   at Ci = ĈiML.   (8)
If n is a generalized Gaussian process, which is assumed to characterize a general case of imaging systems, that is pn(n) ∝ exp(−|n|^c), the maximization of Eq. (8) can be carried out by an estimator from the M-estimators family [17]. A common approach is the minimization of the following objective function [3]:

min_{Ci} Σ_{j=1}^{k} |aij − bij|^c   at Ci = ĈiML.   (9)
The objective functions usually used in standard block-matching procedures are those corresponding to c = 1 (Laplacian density) or c = 2 (Gaussian density), i.e., the L1 and L2 norms, respectively. These two ML methods will be referred to henceforth as BM1 (c = 1) and BM2 (c = 2).
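As a hedged illustration, the BM1 and BM2 criteria can be expressed as log-likelihoods that plug into the search sketch of Section 2; maximizing the negated L_c objective of Eq. (9) is the same as minimizing it. The helper name and the cast to float are illustrative choices, not part of the original method.

```python
import numpy as np

def bm_log_likelihood(c):
    """Generalized Gaussian criterion of Eq. (9): c = 1 gives BM1 (L1 norm), c = 2 gives BM2 (L2 norm)."""
    def ll(a, b):
        # Maximizing the negated objective is equivalent to minimizing sum |a_ij - b_ij|^c.
        return -np.sum(np.abs(a.astype(float) - b.astype(float)) ** c)
    return ll

bm1 = bm_log_likelihood(1)   # Laplacian model
bm2 = bm_log_likelihood(2)   # Gaussian model
```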
3.2. The functions calculated by Strintzis and Kokkinidis [3]

A common model for characterizing ultrasound images is [6]

aij = nij bij,   (10)

where nij is multiplicative noise with the Rayleigh density

pn(n) = (n/2) exp(−n²/4),   n ≥ 0,   (11)

and the elements nij are independent. Strintzis and Kokkinidis used this model, and calculated the following conditional probability density function:

p(ai | bi; Ci) = ∏_{j=1}^{k} (1/bij) pn(aij/bij) = ∏_{j=1}^{k} (aij/(2 bij²)) exp(−aij²/(4 bij²)).   (12)

The conditional probability density of Eq. (12) has this form since the probability density function of a vector with independent elements is the product of the individual probability density functions. A single element's probability density function is calculated using the following theorem [16]:

py(y) = px(x1)/|g′(x1)| + · · · + px(xn)/|g′(xn)|,   (13)

where x1, ..., xn are the real solutions of y = g(x) for the random variable x, and g′(x) is the derivative of g(x). In the case of Eq. (12), since aij = nij bij, g′(x) is equal to bij. The maximization of Eq. (6) is equal to the maximization of Eq. (12). The ML motion estimation based on this equation will be referred to as SK1.

A different model, also used by Strintzis and Kokkinidis [6], is

aij = bij + √(bij) nij,   (14)

in which nij is zero-mean Gaussian noise with variance σ² and independent elements. In this case, using the same tools described above, the conditional probability density function takes the following form [3]:

p(ai | bi; Ci) = ∏_{j=1}^{k} (1/√(2π σ² bij)) exp(−(aij − bij)²/(2 σ² bij)).   (15)

Again, maximization of Eq. (6) is equal to maximization of Eq. (15). This model will be referred to as SK2.

According to the two models that appear in Eqs. (10) and (14), only the image at time t − 1 is noisy. In order to efficiently use this ML motion estimation algorithm, one must have a noiseless image for time t. Since this noiseless image is unobtainable, we propose an alternative method, one that allows computation of the motion field between two noisy ultrasound images.

3.3. New conditional probability density functions for ultrasound ML motion estimation

Under the same assumption of multiplicative Rayleigh noise with the density function given in Eq. (11), and when the noiseless value of the jth pixel in "block" i at time t is sij, the model for the observed pixel at time t, bij, is

bij = n1ij sij.   (16)

The model for the observation of the transformed corresponding pixel at time t − 1, aij, is

aij = n2ij sij,   (17)

where n1ij and n2ij are two independent noise elements with the Rayleigh density function given in Eq. (11). Given these two models, by isolating the noiseless pixel value sij, the following equation is obtained:

aij = εij bij,   where   εij ≡ n2ij/n1ij.   (18)

Taking into account that the noise term εij is a ratio of two independent Rayleigh distributed random variables, it has the following density function [16]:

pε(ε) = 2ε/(ε² + 1)²,   ε ≥ 0.   (19)

Plots of this density function, as well as of the Rayleigh density, are shown in Fig. 1.

Fig. 1. Rayleigh distribution and the distribution of a random variable that is the ratio of two independent Rayleigh distributed random variables.
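The density of Eq. (19) is parameter-free, which can be checked numerically. The following short simulation is a sketch under the assumption of unit-scale Rayleigh noise: it draws two independent Rayleigh samples, forms their ratio, and compares the histogram with 2ε/(ε² + 1)².

```python
import numpy as np

rng = np.random.default_rng(0)

# Ratio of two independent Rayleigh variables; its density does not depend on the common scale.
n1 = rng.rayleigh(scale=1.0, size=1_000_000)
n2 = rng.rayleigh(scale=1.0, size=1_000_000)
eps = n1 / n2

# Compare the empirical histogram with the analytic density of Eq. (19): 2*eps / (eps**2 + 1)**2.
hist, edges = np.histogram(eps, bins=200, range=(0.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = 2 * centers / (centers ** 2 + 1) ** 2
print(np.max(np.abs(hist - analytic)))   # small; truncating the histogram to [0, 5] adds a slight bias
```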
The conditional probability density function for this noise distribution takes the following form:

p(ai | bi; Ci) = ∏_{j=1}^{k} (1/bij) pε(aij/bij) = ∏_{j=1}^{k} (2 aij/bij²) / ((aij/bij)² + 1)².   (20)
This equation was obtained using the same properties used in deriving Eq. (12). The maximization of this equation is equivalent to the maximization of Eq. (6). The motion estimation based on this equation is denoted CD1.

Taking the natural logarithm of both sides of Eq. (18), and denoting ln(aij) by αij and ln(bij) by βij, the following model is derived:

αij = βij + ε̂ij,   where   ε̂ij ≡ ln(εij).   (21)
In this case, using the same basic tools used to derive Eq. (20), the conditional probability density function is given by [16]

p(ai | bi; Ci) = ∏_{j=1}^{k} (aij/bij) pε(aij/bij) = ∏_{j=1}^{k} 2 (aij/bij)² / ((aij/bij)² + 1)²   (22)
and the maximization of this equation is equivalent to the maximization of Eq. (6). Motion estimation based on this equation is denoted CD2.

Six different ML motion estimation schemes are described in this section, all of which differ in their conditional probability density functions. These functions are the basis of the ML estimators.
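For reference, the four model-based criteria can be written as log-likelihood functions compatible with the block-matching sketch of Section 2. These are our own hedged transcriptions of Eqs. (12), (15), (20) and (22) as reconstructed above; the small positive constant guarding against zero intensities and the default σ in the SK2 criterion (which only scales the objective and does not change the maximizing displacement) are implementation assumptions.

```python
import numpy as np

EPS = 1e-12  # guard against zero intensities when taking logarithms and ratios

def sk1_ll(a, b):
    """Log of Eq. (12): multiplicative Rayleigh noise on the t-1 image only (SK1)."""
    a = a.astype(float) + EPS; b = b.astype(float) + EPS
    return np.sum(np.log(a) - np.log(2.0) - 2 * np.log(b) - a ** 2 / (4 * b ** 2))

def sk2_ll(a, b, sigma=1.0):
    """Log of Eq. (15): signal-dependent additive Gaussian noise (SK2)."""
    a = a.astype(float); b = b.astype(float) + EPS
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2 * b)
                  - (a - b) ** 2 / (2 * sigma ** 2 * b))

def cd1_ll(a, b):
    """Log of Eq. (20): both images carry multiplicative Rayleigh noise (CD1)."""
    a = a.astype(float) + EPS; b = b.astype(float) + EPS
    r2 = (a / b) ** 2
    return np.sum(np.log(2 * a) - 2 * np.log(b) - 2 * np.log(r2 + 1))

def cd2_ll(a, b):
    """Log of Eq. (22): the CD1 model formulated for log-intensities (CD2)."""
    a = a.astype(float) + EPS; b = b.astype(float) + EPS
    r2 = (a / b) ** 2
    return np.sum(np.log(2 * r2) - 2 * np.log(r2 + 1))
```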
4. Experimental results

There are two different situations for the evaluation of motion estimators. The first is the synthetic case, where the correct motion parameters are known. Under this condition the estimated motion parameters can be compared to the known ones, and the estimation error can be quantified. In the second case, the motion is not known, and the evaluation must rely solely on the given noisy images. In this case, given that the noise is independent of the images, the correlation between the filtered image and the noise can serve as an indication of the motion estimation quality. The filtered image is an estimate of the noiseless one. Therefore, since the filtering is based on the motion estimation, the better the motion estimation, the lower the correlation between the filtered image and the noise.
Fig. 2. Images used for the motion estimation test: (a) the synthetic image; (b) the simulated image at time t.

Table 1
Correct motion estimations using the different conditional density functions^a

Block size    BM1 (%)   BM2 (%)   SK1 (%)   SK2 (%)   CD1 (%)   CD2 (%)
9 × 9            24        24        2        18        14        46
15 × 15          38        37        4        17        34        77
21 × 21          34        29        3        25        21        83

^a BM1: block matching based on Eq. (9) with c = 1; BM2: block matching based on Eq. (9) with c = 2; SK1: ML according to Strintzis and Kokkinidis, based on Eq. (12); SK2: ML according to Strintzis and Kokkinidis, based on Eq. (15); CD1: our ML estimation based on Eq. (20); CD2: our ML estimation based on Eq. (22).
4.1. Two synthetic images

In order to evaluate the performance of ML motion estimation based on the six different conditional probability density functions (Eqs. (9) with c = 1 and c = 2, (12), (15), (20) and (22)), we have constructed a synthetic image that contains a high-valued background and a low-valued disk-shaped central part. This synthetic image was multiplied by a Rayleigh distributed noise term in order to obtain the simulated image at time t − 1, and a shifted version of the synthetic image was multiplied by a second, independent noise term to create the simulated image at time t. The synthetic image and the simulated image at time t can be seen in Fig. 2. The ML motion estimation technique with the different conditional probability density functions was applied to estimate the motion between the two images at points spread along the border of the low-valued disk-shaped part. The percentage of exact motion estimation results for each of the conditional probability density functions is presented in Table 1.
Table 2
Mean square tracking error of motion estimations using the different conditional density functions

Block size    BM1     BM2     SK1     SK2     CD1     CD2
9 × 9         3.85    4.03    8.45    4.62    5.41    2.37
15 × 15       2.14    2.45    8.69    4.20    3.71    0.44
21 × 21       2.13    2.68    8.86    3.29    4.04    0.27
Fig. 3. Images used for motion estimation within a synthetic image sequence: (a) the original image, taken from Brodatz's texture album [18]; (b) a simulated noisy image.
As seen in Table 1, our CD2 approach outperforms the other approaches for all the different block sizes in terms of the percentage of correct motion estimations. Our approach CD1 also outperformed one of the approaches presented by Strintzis and Kokkinidis (SK1). In order to give the reader a sense of the size of the motion estimation error, we have used the following mean squared tracking error [15]:

εv = (1/N) Σ_{i=1}^{N} ‖Ci − ĈiML‖².   (23)
As can be seen in Table 2, the mean square tracking error of the CD2 motion estimations is much lower than that of all the others. Not only is the percentage of correct estimations higher, but when inaccurate estimations occur, they are closer to the correct values than the errors introduced by the other methods.

4.2. Synthetic image sequence

We have synthesized an image sequence by repeatedly shifting and corrupting with noise the image shown in Fig. 3(a). We have made 200 simulated images, with uniformly distributed random translations with a row displacement varying between −3 and 3 and a column displacement also varying between −3 and 3. The different motion estimation and compensation algorithms were implemented, all with a block size of 9 × 9 and a search area of 15 × 15.
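A minimal sketch of how such a test sequence can be simulated is given below; the wrap-around shift produced by np.roll, the Rayleigh scale of √2 (which matches the density of Eq. (11)), and the helper names are our assumptions rather than the paper's exact procedure. The second helper evaluates the mean squared tracking error of Eq. (23).

```python
import numpy as np

rng = np.random.default_rng(1)

def make_noisy_sequence(clean, n_frames=200, max_shift=3):
    """Simulate the Section 4.2 data: shifted copies of a clean image, each multiplied
    pixel-wise by independent Rayleigh noise as in Eq. (10)."""
    frames, shifts = [], []
    for _ in range(n_frames):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(np.roll(clean.astype(float), dy, axis=0), dx, axis=1)  # wrap-around shift
        noise = rng.rayleigh(scale=np.sqrt(2.0), size=clean.shape)               # density of Eq. (11)
        frames.append(shifted * noise)
        shifts.append((dy, dx))
    return frames, shifts

def tracking_error(true_shifts, estimated_shifts):
    """Mean squared tracking error of Eq. (23)."""
    d = np.asarray(true_shifts, dtype=float) - np.asarray(estimated_shifts, dtype=float)
    return float(np.mean(np.sum(d ** 2, axis=1)))
```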
Table 3
Maximum likelihood estimation of the original image from the noisy motion-compensated sequences

Method            Image reconstruction equation
BM1, BM2          ÎML(x) = (1/N) Σ_{i=1}^{N} Ii(x)
SK1, CD1, CD2     ÎML(x) = (1/2) √[(1/N) Σ_{i=1}^{N} Ii(x)²]
SK2               ÎML(x) = −σ²/2 + √[σ⁴/4 + (1/N) Σ_{i=1}^{N} Ii(x)²]
After reconstructing the motion-compensated image sequences, we have obtained a set of observations I1(x), I2(x), ..., IN(x) corresponding to each location x in the desired reconstructed image. For each of the motion-compensated sequences, the value of the pixel intensities in the reconstructed image is estimated according to the ML method. These estimations, based on the motion-compensated data, are made as described in Table 3. We have used the ML estimation equations described in Ref. [6] for the estimation of the correct pixel value in the case of multiplicative Rayleigh noise (SK1, CD1, and CD2 models) and in the case of additive signal-dependent Gaussian noise (SK2 model).

The reconstructed images are depicted in Fig. 4. As expected, the image reconstructed without motion compensation (Fig. 4(a)) is very blurred. The three images reconstructed using the SK1, SK2, and CD1 motion estimation methods (Fig. 4(d), (e) and (f), respectively) fail to reconstruct the relatively large dark part of the texture. The results obtained with the two BM techniques (Fig. 4(b) and (c)) seem to outperform the SK1, SK2, and CD1 results, but contain some artificial small white speckles that do not appear in the reconstructed image based on our CD2 motion estimation scheme (Fig. 4(g)). The effects described here are also reflected in the mean square error (MSE) between each reconstructed image and the original noiseless image. For each scheme, we extracted the noise term from the reconstructed image and the noisy reference image and calculated the cross-correlation coefficient between it and the reconstructed image. Since the noise is independent, if the reconstruction is good (due to correct motion estimation) this cross-correlation coefficient will be small. Both the MSE values and the correlation coefficients for the reconstructed images appear in Table 4. As can be seen in Table 4, in terms of MSE, the reconstruction made according to the proposed CD2 method is better than those of the other methods. Both the BM methods and our CD1 method performed better than the two methods proposed by Strintzis and Kokkinidis.
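The following sketch summarizes the reconstruction and evaluation steps under our reading of Table 3: the per-pixel ML estimates for the aligned frames, the extraction of a multiplicative noise term, and the MSE and correlation-coefficient quality measures. The constants in the Rayleigh and SK2 formulas follow the reconstructed table and an assumed known noise variance; all function names are illustrative.

```python
import numpy as np

def reconstruct(stack, method, sigma2=1.0):
    """ML estimate of the noiseless image from an (N, H, W) stack of motion-compensated frames,
    following Table 3 as reconstructed here."""
    stack = stack.astype(float)
    if method in ("BM1", "BM2"):
        return stack.mean(axis=0)                              # arithmetic mean
    if method in ("SK1", "CD1", "CD2"):
        return 0.5 * np.sqrt(np.mean(stack ** 2, axis=0))      # Rayleigh ML under Eq. (11)
    if method == "SK2":
        return -sigma2 / 2.0 + np.sqrt(sigma2 ** 2 / 4.0 + np.mean(stack ** 2, axis=0))
    raise ValueError("unknown method: " + method)

def quality(recon, reference_noisy, clean=None):
    """MSE against the clean image (when it is known) and the correlation coefficient between the
    reconstruction and the extracted multiplicative noise term (the Table 4 / Table 5 measure)."""
    noise = reference_noisy.astype(float) / np.maximum(recon, 1e-12)
    corr = float(np.corrcoef(recon.ravel(), noise.ravel())[0, 1])
    mse = None if clean is None else float(np.mean((recon - clean.astype(float)) ** 2))
    return mse, corr
```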
Fig. 4. Reconstructed images by different motion estimation schemes: (a) without motion compensation; (b) BM1 motion estimation; (c) BM2 motion estimation; (d) SK1 motion estimation; (e) SK2 motion estimation; (f) CD1 motion estimation; (g) CD2 motion estimation; (h) the original noiseless image.

Table 4
MSE between the original and reconstructed image and the correlation coefficient between the noise and the reconstructed image

Method    MSE    Correlation coefficient
BM1       228    0.087
BM2       290    0.157
SK1       505    0.174
SK2       457    0.118
CD1       377    0.190
CD2        68    0.063
For each of the reconstructed images, we have also calculated the noise component as a function of the reference noisy image and the reconstructed one. We assume that if the motion estimation is correct and the reconstruction scheme is efficient, the correlation coefficient between the noise term and the reconstructed image will be close to zero. Our assumption regarding the correlation coefficient as a reconstruction quality measure is supported by the fact that the quality ranking based on the correlation coefficient values is consistent with the MSE criterion, as can be seen in Table 4.

In the results presented in this section and in the previous one (as well as in the in vivo case presented in the next section), the CD2 approach outperformed the CD1 approach. It also outperformed all the other approaches. The CD2 approach performs better because the non-linear natural logarithm element introduced into the model (Eq. (20)) emphasizes the most likely probability value, corresponding to the correct motion estimation. This phenomenon helps to achieve better results when the
noise is not exactly Rayleigh distributed, as may happen for small sets of random-variable realizations. The improvement achieved by integrating the natural logarithm element into Eq. (20) is illustrated as follows. The motion estimates for all the blocks of one of the images in our synthetic sequence were examined with respect to the reference image. For each block, the difference in likelihood between the most likely and the second most likely motion vector estimate was calculated and normalized by dividing it by the value of the most likely probability. The histograms of the normalized differences, for both CD1 and CD2, are shown in Fig. 5. As can be seen, in the case of CD1 the normalized differences are much smaller than those of CD2, so the motion estimation based on CD2 is more robust.

4.3. In vivo test

We have also tested the six different ML functions for block-based motion estimation on a real ultrasound image sequence. The ultrasound image sequence contains a fetus's head and fist, moving inside a woman's uterus, surrounded by amniotic fluid.
Fig. 5. Histograms of the normalized differences between the most likely and the second most likely motion estimates: (a) normalized differences histogram for CD1 motion estimation; (b) normalized differences histogram for CD2 motion estimation.
Fig. 6. Reconstructed images by different motion estimation schemes (black areas inside the pictures represent locations that did not have any motion-compensated values): (a) without motion compensation; (b) BM1 motion estimation; (c) BM2 motion estimation; (d) SK1 motion estimation; (e) SK2 motion estimation; (f) CD1 motion estimation; (g) CD2 motion estimation; (h) the noisy reference image.
Observing the correlation coefficients in Table 5, one can see that, again, as in the synthetic image sequence, the best result in terms of noise independence was obtained by the CD2 method. Since a very small correlation coefficient indicates a good reconstruction, this confirms that our improved ML formulation for block-matching motion estimation has yielded better reconstruction results.
Table 5
The correlation coefficient between the noise and the reconstructed image

Method    Correlation coefficient
BM1       0.092
BM2       0.104
SK1       0.221
SK2       0.215
CD1       0.341
CD2       0.042
5. Summary and conclusions

The problem of maximum likelihood block-matching motion estimation for ultrasound image sequences has been examined. Classical maximum likelihood block-matching algorithms based on the L1 or L2 norms have been used, alongside two methods proposed by Strintzis and Kokkinidis [3]. Their methods assume that a noiseless reference image is available, and thus that only one of the input frames to the motion estimation algorithm is noisy. Since in practice only noisy observations are given, we have found it necessary to derive a model that is adequate for motion estimation between two noisy ultrasound images. We have proposed here two maximum likelihood block-matching motion estimation methods that are suitable for noisy ultrasound frames. In experiments based on synthetic data, both methods outperformed those of Strintzis and Kokkinidis. Our second method, which integrates the non-linear natural logarithm element into the ultrasound image formation model, was found to be very robust, and performed better than all the other models on the real ultrasound sequence as well as in the synthetic experiments. Encouraged by these results, our current efforts are aimed at adapting our method so that it can estimate motion in ultrasound sequences whose noise characteristics differ from purely multiplicative Rayleigh noise, due to video recording and/or other deformations.

References

[1] C.B. Burckhardt, Speckle in ultrasound B-mode scans, IEEE Trans. Sonics Ultrasonics 25 (1) (1978) 1–6.
[2] A.N. Evans, M.S. Nixon, Mode filtering to reduce ultrasound speckle for feature extraction, IEE Proc. Vision Image Signal Process. 142 (2) (1995) 87–94.
[3] M.G. Strintzis, I. Kokkinidis, Maximum likelihood motion estimation in ultrasound image sequences, IEEE Signal Process. Lett. 4 (6) (1997) 156–157.
[4] C. Kontogeorgakis, M.G. Strintzis, N. Maglaveras, I. Kokkinidis, Tumor detection in ultrasound B-mode image through estimation using a texture detection algorithm, Proceedings of the 1994 Computers in Cardiology Conference, 1994, pp. 117–120.
[5] C. Kotropoulos, I. Pitas, Optimum nonlinear signal detection and estimation in the presence of ultrasonic speckle, Ultrasonic Imaging 14 (1992) 249–275.
[6] C. Kotropoulos, X. Magnisalis, I. Pitas, M.G. Strintzis, Nonlinear ultrasonic image processing based on signal-adaptive filters and self-organizing neural nets, IEEE Trans. Image Process. 3 (1) (1994) 65–77.
[7] T. Loupas, W.M. McDicken, P.L. Allan, An adaptive weighted median filter for speckle suppression in medical ultrasonic images, IEEE Trans. Circuits Systems 36 (1) (1989) 129–135.
[8] T. Taxt, Restoration of medical ultrasound images using two-dimensional homomorphic deconvolution, IEEE Trans. Ultrasonics Ferroelectrics Frequency Control 42 (4) (1995) 543–554.
[9] R.N. Czerwinski, D.L. Jones, W.D. O'Brien Jr., Ultrasound speckle reduction by directional median filtering, Proceedings of the International Conference on Image Processing 1 (1995) 358–361.
[10] E. Kofidis, S. Theodoridis, C. Kotropoulos, I. Pitas, Nonlinear adaptive filters for speckle suppression in ultrasonic images, Signal Processing 52 (3) (1996) 357–372.
[11] A.N. Evans, M.S. Nixon, Biased motion-adaptive temporal filtering for speckle reduction in echocardiography, IEEE Trans. Med. Imaging 15 (1) (1996) 39–50.
[12] B. Olstad, Noise reduction in ultrasound images using multiple linear regression in a temporal context, Proc. SPIE 1451 (1991) 269–281.
[13] T. Taxt, A. Lundervold, B. Angelsen, Noise reduction and segmentation in time-varying ultrasound images, Proceedings of the International Conference on Pattern Recognition 1 (1990) 561–566.
[14] L. Markosian, A motion-compensated filter for ultrasound image sequences, Technical Report CS-96-14, Department of Computer Science, Brown University, 1996.
[15] F. Yeung, S.F. Levinson, K.J. Parker, Multilevel and motion model-based ultrasonic speckle tracking algorithms, Ultrasound Med. Biol. 24 (3) (1998) 427–441.
[16] A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, 1991.
[17] Z. Zhang, Parameter estimation techniques: a tutorial with application to conic fitting, Image Vision Comput. 15 (1) (1997) 59–76.
[18] P. Brodatz, Textures, Dover Publications, New York, 1966.
About the Author—BOAZ COHEN received his B.S. and M.S. degrees in Electrical and Computer Engineering from the Ben-Gurion University of the Negev, Beer-Sheva, Israel, in 1993 and 1996, respectively. He is now a Ph.D. student at the Ben-Gurion University of the Negev. His interests are in the areas of image sequence restoration, enhancement, and analysis.

About the Author—ITS'HAK DINSTEIN received his Ph.D. in Electrical Engineering from the University of Kansas, Lawrence, Kansas, in 1974. He was with COMSAT Laboratories, Gaithersburg, Maryland, from 1974 until 1977, when he returned to Israel and joined Ben-Gurion University in Beer-Sheva, where he is now a professor in the Electrical and Computer Engineering Department. He heads the image processing laboratory of the Electrical and Computer Engineering Department. During 1982–1984 he was a visiting scientist at the IBM Research Laboratory, San Jose, California. During 1988–1990 he was a visiting associate professor at Polytechnic University, Brooklyn, New York. His research interests include image processing and computer vision.