Signal Processing 89 (2009) 1973–1989. doi:10.1016/j.sigpro.2009.03.036
Enhanced detectability of point target using adaptive morphological clutter elimination by importing the properties of the target region

Xiangzhi Bai a,*, Fugen Zhou a, Yongchun Xie b, Ting Jin a
a Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, China
b Beijing Institute of Control Engineering, Beijing 100080, China

* Corresponding author. Tel.: +86 10 82338048; fax: +86 10 82316502. E-mail addresses: [email protected], [email protected] (X. Bai).
Article info

Article history: Received 20 September 2007; received in revised form 25 March 2009; accepted 26 March 2009; available online 9 April 2009.

Keywords: Detectability enhancement; Clutter elimination; False alarm reduction; Mathematical morphology; Point target detection; Contour structuring element.

Abstract

An efficient clutter elimination algorithm greatly enhances the detectability of dim point targets embedded in heavily cluttered images. In this paper, a novel point target detectability enhancement algorithm, named adaptive morphological clutter elimination, is proposed. It is constructed by importing the properties of the target region. Considering the properties of the target region not only increases the adaptive ability of the algorithm but also improves its performance. Experiments indicate that the proposed algorithm is more powerful than several other widely used methods in clutter elimination, false alarm reduction and target enhancement for the purpose of point target detection; thus the detectability of the point target image is greatly enhanced.

© 2009 Elsevier B.V. All rights reserved.
1. Introduction

Detecting and tracking point targets in infrared or visual image sequences with heavy clutter is crucial for many image processing applications in military, medicine, aeronautics and astronautics. When the target is far away from the imaging equipment, it appears only as a point and is characterized by a low signal-to-noise ratio (SNR), a cluttered background, motion at unknown velocity and unavailable shape information, which makes the point target difficult to detect. Hence, the most crucial part of point target detection is to enhance the detectability of the point target. The proposed solution is to eliminate the clutter background of the original image, following the signal-plus-noise (SPN) model of the small target image.
Some filter-based approaches have been proposed to reduce the clutter background of the original image, such as the median filter, multi-level filter [1], max-mean filter [2], max-median filter [2], wavelet-based methods [3], rectification filters [4], the triple temporal filter (TTF) [5], the 2D adaptive lattice algorithm [6], kernel smoothing methods [7] and 3D directional filters [8]. However, the median filter, multi-level filter, max-mean filter, max-median filter, wavelet-based methods and rectification filters do not perform efficiently if the clutter is heavy [1,2,4] or if the features of the target are unavailable [3]. Although the TTF is easy to implement in hardware and insensitive to evolving cloud data, the selection of its optimal parameters is extremely difficult [5]. The 2D adaptive lattice algorithm can reduce the effect of clutter, but it is complex and time-consuming [6]. The kernel smoothing methods are simple, but their performance varies with the mathematical assumptions made about the clutter, which makes them difficult to use for complex data or when no mathematical model of the clutter is available [7]. 3D directional filters need several consecutive images with motion
properties to achieve good performance, but in some cases these consecutive images cannot be easily obtained. Recently, methods utilizing neural networks [9], support vector machines [10] and probabilistic visual learning [11] have been proposed to increase the adaptability to the variety of clutter. Although they perform well in some cases, the training sets they require are difficult to construct. So, most of these methods are ineffective or too complex when the clutter is heavy or the target is dim [12]. Because of their parallel nature and ease of implementation in real-time hardware systems [13], some morphology-based methods [14–16] have been proposed to decrease the effect of the clutter. However, most of the presented morphology-based methods focus on the effective combination or selection of existing morphological operations, and they are not powerful tools for clutter elimination because they smooth image details. Fortunately, some operations of mathematical morphology based on contour structuring elements (CB morphology) [17] can maintain the details of the image while filtering the clutter, which makes them more useful than other filters for the purpose of point target detection. However, CB morphological operations are also sensitive to heavy clutter and noise [17]. In an image with a point target, the target region is different from the clutter background. So, if the properties of the point target region are appropriately considered, the detectability achieved by a morphology-based method can be considerably improved [18,19].

In this paper, a simple and effective method for point target detectability enhancement using adaptive morphological clutter elimination (AMCE) is proposed. The method is derived from the CB morphological operations by importing the properties of the point target regions. After demonstrating the shortcomings of the classical morphology and CB morphology based methods, the adaptive morphological clutter elimination is constructed according to the properties of the target region. Considering the properties of the target region not only increases the adaptive ability of the algorithm but also improves its performance. The experiments show that the detectability enhancement of adaptive morphological clutter elimination is more powerful than that of other algorithms. Thus, the dim small target can be apparently enhanced by the proposed algorithm, and the proposed algorithm can be used in the field of dim small target detection and tracking, such as in forward-looking infrared (FLIR) systems and infrared guidance systems.

This paper is organized as follows. Section 2 presents the definitions of mathematical morphology, including both classical mathematical morphology and CB morphology. Sections 3 and 4 analyze the shortcomings of the classical and CB morphological operations for the purpose of clutter elimination. Section 5 proposes the adaptive morphological clutter elimination algorithm. Section 6 gives the results of the algorithm, and Section 7 concludes the discussion.
2. Mathematical morphology

2.1. Classical mathematical morphology

Mathematical morphology is developed from geometry and based on set theory. It was first proposed by Matheron [13] to analyze mineral samples and was then extended to image analysis by Serra [13]. In recent years, other new morphological concepts have been proposed, including CB morphology. Compared with the new mathematical morphology, the original mathematical morphology can be called classical mathematical morphology.

All mathematical morphological operations are built from two basic operations, dilation and erosion, which work with two sets: one set is the original image to be analyzed, and the other is called the structuring element. For a gray-level image, let f and B represent the gray-level image and a structuring element, respectively. The dilation and erosion of f(r, c) by B(i, j), denoted by f ⊕ B and f ⊖ B, are defined by

(f \oplus B)(r, c) = \max_{i,j} \{ f(r - i, c - j) + B(i, j) \},   (1)

(f \ominus B)(r, c) = \min_{i,j} \{ f(r + i, c + j) - B(i, j) \}.   (2)

Here, the domains of f ⊕ B and f ⊖ B are the dilation and erosion of the domain of f with the domain of B. The dilation (erosion) makes the image values larger (smaller) than in the original image because of the maximum (minimum) operation. Based on dilation and erosion, the opening and closing of f(r, c) by B(i, j), denoted by f ∘ B and f • B, are defined by

f \circ B = (f \ominus B) \oplus B,   (3)

f \bullet B = (f \oplus B) \ominus B.   (4)

Opening generally smooths the small bright regions of the image, and closing fills the small dark holes.
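For readers who want to experiment with these definitions, the following minimal Python sketch implements Eqs. (1)–(4) for a flat structuring element (B(i, j) = 0 on its support) using scipy.ndimage; the function names and the rhombus helper are ours, not part of the original paper.

```python
import numpy as np
from scipy import ndimage

def rhombus(radius):
    # Flat rhombus (diamond) structuring element, as used later in the paper.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (np.abs(x) + np.abs(y)) <= radius

def dilation(f, B):
    # Eq. (1) with a flat structuring element: local maximum over the support of B.
    return ndimage.grey_dilation(f, footprint=B)

def erosion(f, B):
    # Eq. (2) with a flat structuring element: local minimum over the support of B.
    return ndimage.grey_erosion(f, footprint=B)

def opening(f, B):
    # Eq. (3): erosion followed by dilation; smooths small bright regions.
    return dilation(erosion(f, B), B)

def closing(f, B):
    # Eq. (4): dilation followed by erosion; fills small dark holes.
    return erosion(dilation(f, B), B)
```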
2.2. Mathematical morphology based on contour structuring elements (CB morphology)

CB morphology, which reorganizes the morphological operations through the contour of the structuring elements, was proposed by Gong [17] and is founded on planar structuring elements. Let B represent a planar structuring element, and let ∂B be the contour of B following the connectivity of B. The CB dilation and CB erosion of f by ∂B are defined by

CBD_B(f) = f \oplus \partial B,   (5)

CBE_B(f) = f \ominus \partial B.   (6)

From these definitions, CBD_B(f) (CBE_B(f)) replaces the gray-value at each position of f by the maximum (minimum) gray-value in the region of ∂B, regardless of the properties of the region. The CB opening and CB closing of f by ∂B are defined by

CBO_B(f) = (f \ominus \partial B) \oplus B,   (7)
CBC_B(f) = (f \oplus \partial B) \ominus B.   (8)

Moreover, the operations O_B(f) and C_B(f) are defined by

O_B(f) = \max\{ f, CBO_B(f) \},   (9)

C_B(f) = \min\{ f, CBC_B(f) \}.   (10)

Because of the maximum operation, O_B(f) keeps, at every pixel, the larger of the gray-values of the original image and of the image processed by CBO_B(f), leaving the other regions unchanged. Similarly, because of the minimum operation, C_B(f) keeps the smaller of the gray-values of the original image and of the image processed by CBC_B(f), leaving the other regions unchanged. The details of the image are thus protected. All of this indicates that O_B(f) can distinguish the dark regions of the image, so O_B(f) can be used to estimate the background of an image with a dark target or to filter negative noise. Conversely, C_B(f) can distinguish the bright regions of the image, so C_B(f) can be used to estimate the background of an image with a bright target or to filter positive noise.
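A corresponding sketch of the CB operations in Eqs. (5)–(10) is given below. It takes the contour ∂B to be the boundary pixels of B (B minus its binary erosion), which is our reading of "the contour of B following the connectivity of B"; all names are ours.

```python
import numpy as np
from scipy import ndimage

def contour(B):
    # ∂B: boundary pixels of the planar (binary) structuring element B.
    B = B.astype(bool)
    return B & ~ndimage.binary_erosion(B)

def cb_opening(f, B):
    # Eq. (7): erosion by the contour ∂B followed by classical dilation by B.
    return ndimage.grey_dilation(ndimage.grey_erosion(f, footprint=contour(B)), footprint=B)

def cb_closing(f, B):
    # Eq. (8): dilation by the contour ∂B followed by classical erosion by B.
    return ndimage.grey_erosion(ndimage.grey_dilation(f, footprint=contour(B)), footprint=B)

def OB(f, B):
    # Eq. (9): keeps the brighter of f and CBO_B(f) at every pixel.
    return np.maximum(f, cb_opening(f, B))

def CB(f, B):
    # Eq. (10): keeps the darker of f and CBC_B(f) at every pixel.
    return np.minimum(f, cb_closing(f, B))
```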
3. Classical morphological clutter elimination

3.1. Mathematical model of the point target image

The image embedded with point targets can be modeled as follows [6]:

f(r, k) = S(r, k) + f_b(r, k) + n(r, k), \quad k = 0, 1, 2, \ldots,   (11)

where r = (x, y) is the spatial coordinate of the image f, k is the sampled time, S(r, k) is the signal of the target, which is an optical blur dominated by the point spread function and can be well modeled by a 2D Gaussian shape [16], f_b(r, k) is the clutter background, and n(r, k) is the noise at sampled time k. The gray-level target image containing only the targets and noise can be obtained by removing the clutter from the original image:

f_T(r, k) = f(r, k) - f_b(r, k) = S(r, k) + n(r, k),   (12)

where f_T(r, k) is the gray-level target image containing only targets and noise. A good clutter elimination algorithm reduces the residual clutter background and the noise in f_T to the largest extent, and thus enhances the detectability of the point target as much as possible. The difficulty of post-processing for point target detection and tracking is then greatly reduced. The shape of the point target is always approximately a circle or ellipse, which is convenient for structuring element selection in mathematical morphology applications. Hence, mathematical morphology is appropriate for clutter elimination.
Fig. 1. Original image.

3.2. Classical morphological clutter elimination

Most of the morphological clutter elimination methods in the literature, such as the top-hat based methods, ASFs and so on, are combinations of the classical morphological operations opening and closing [14–16]. So, the properties of these methods for point target detection are mainly decided by the properties of opening and closing. In these methods, opening is used to estimate the clutter background of a bright point target image and closing is used to estimate the clutter background of a dark point target image. The classical morphological clutter elimination is then defined as

f_T = f - f_b = f - f \circ B,   (13)

f_T = f_b - f = f \bullet B - f.   (14)

Fig. 2. Opening of Fig. 1.

Fig. 3. fT of Fig. 1 through opening.

Here f_b represents the clutter background estimated through f ∘ B or f • B. Opening and closing usually change the gray-values of many regions that contain no target, so f_b differs from the clutter of the original image. According to expression (13), all the regions whose gray-values are changed have outputs in f_T. Consequently, there are a large number of pixels with low gray-values in f_T, which act as noise and increase the probability of false alarm during post-processing. Furthermore, opening and closing cannot distinguish a dim target, which leads to target loss and increases the difficulty of post-processing. Opening is applied to the image shown in Fig. 1; f_b of Fig. 1 by opening is shown in Fig. 2, and f_T is shown in Fig. 3. The real target regions are labeled by rectangles. The structuring element is a rhombus with radius 3. As shown in Fig. 3, although the target is enhanced in f_T, there are a large number of low-gray-value pixels in other regions. In particular, the gray-values of the pixels labeled by the circles in Fig. 3 are close to those of the target pixels, and they are potential false alarm points. Obviously, the heavier the clutter and the dimmer the target, the worse the performance of opening for background estimation, which indicates that opening is sensitive to heavy clutter and dim target intensity. Because of this limitation of the classical morphological opening and closing, most of the presented morphology-based algorithms, which are combinations of opening and closing, do not perform well if the image has heavy clutter or the target is dim.
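In code, the classical scheme of Eqs. (13) and (14) is simply a white (or black) top-hat. A minimal sketch using scipy's built-in grayscale opening and closing might look as follows; the function name and the bright_target flag are ours.

```python
import numpy as np
from scipy import ndimage

def classical_clutter_elimination(f, B, bright_target=True):
    # Eq. (13): f_T = f - f∘B for a bright target; Eq. (14): f_T = f•B - f for a dark one.
    if bright_target:
        background = ndimage.grey_opening(f, footprint=B)   # estimated clutter f_b
        return f.astype(np.int32) - background
    background = ndimage.grey_closing(f, footprint=B)
    return background.astype(np.int32) - f
```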
4. Morphological clutter elimination based on CB operations

CB morphology is mainly used as a filter because of its property of protecting image details [17]. The point target is usually small and acts like noise in the image, so C_B can be used to estimate the clutter background of a bright point target image, while O_B can be used to estimate the clutter background of a dark point target image. CB morphological clutter elimination can be defined as follows:
f_T = f - f_b = f - C_B(f),   (15)

f_T = f_b - f = O_B(f) - f.   (16)

Fig. 4. CB of Fig. 1.

Fig. 5. fT of Fig. 1 through CB.

Here f_b represents the clutter background estimated through C_B or O_B. C_B (O_B) eliminates from the original image the bright (dark) regions whose size is smaller than that of the structuring element, leaving the other regions unchanged. This results in a better estimation of the clutter background by C_B (O_B) than by opening (closing), so the point target in f_T is clearer than for the classical morphological clutter elimination. Because C_B (O_B) applies the minimum (maximum) operation after CBC_B (CBO_B) to filter the bright (dark) regions, all the regions whose gray-values become smaller (larger) than those of the original image after CBC_B (CBO_B) are removed from f_b, regardless of the properties of the target region. This means that all the regions whose gray-values are changed have outputs in f_T, which leaves some non-zero regions in f_T and increases the probability of false alarm in the post-processing. All of this is because the properties of the target region are not used. f_b of Fig. 1 by C_B is shown in Fig. 4, and f_T is shown in Fig. 5. As shown in Fig. 5, although the target regions are more prominent than in the result of the classical opening operation, there are also some noise pixels in f_T, labeled by the circles in Fig. 5. They are potential false alarms.
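Building on the CB helpers sketched earlier (an assumption on our part), Eqs. (15) and (16) can be written compactly as shown below.

```python
import numpy as np

def cb_clutter_elimination(f, B, bright_target=True):
    # Eq. (15): f_T = f - C_B(f) for bright targets; Eq. (16): f_T = O_B(f) - f for dark ones.
    # cb_opening() and cb_closing() are the CB operations from the earlier sketch.
    f = f.astype(np.int32)
    if bright_target:
        return f - np.minimum(f, cb_closing(f, B))   # C_B(f)
    return np.maximum(f, cb_opening(f, B)) - f       # O_B(f)
```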
5. Adaptive morphological clutter elimination

5.1. Assumptions of the properties of the target region

According to expression (11) and the mathematical model of the point target S(r, k), the assumed properties of an image with point targets are, in most cases, as follows:

(1) The point target usually occupies a region rather than a single pixel (except for single-pixel targets), and the gray-values of the pixels in the target region change continuously. The noise signal is usually an impulse with a large value mixed with the background, which makes the size of a noise region smaller than that of the target region, and its gray-values do not change continuously in most cases. Consequently, some noise regions can be kept in the clutter by considering this difference between the target region and the noise region while estimating the clutter background, which decreases the noise in f_T and enhances the detectability of the point target.

(2) There is usually a gap between the gray-values of the pixels in the target region and those of the surrounding clutter background. The gray-values of the pixels in a bright target region are usually larger than those of the surrounding clutter background, so the gray-values of the target region in the original image and in the corresponding estimated clutter background are much different. Let Gdiff represent the gray intensity difference between corresponding pixels in the original image and the estimated clutter background. Then, after clutter background estimation, only the regions whose Gdiff is larger than a given value are potential target regions; other regions, whose Gdiff is smaller, may be clutter.

Most existing morphology-based algorithms applied for clutter elimination are based only on the principles of shape and size, which are also the basis of mathematical morphology. The properties of the point target region are therefore not well used by these algorithms, and they do not perform well.
5.2. Proposed method

According to the second assumed property, a threshold can be selected to differentiate the target regions from the clutter or noise regions. The regions whose Gdiff after clutter background estimation is larger than the threshold are the target regions or the noise regions. Obviously, too large a threshold may reject some target regions, and too small a threshold may accept some clutter as target regions, so the selection of the threshold is very important. In order to keep the target region, the selection range of the threshold must be considered carefully based on the properties of the target regions, and the threshold should be selected automatically. In this section, the strategy for threshold selection is demonstrated first; then, the proposed method is constructed and analyzed.
5.2.1. Adaptive threshold selection

The Gdiff of each pixel after CBC_B or CBO_B is different. Also, the gray-value distributions of the target region and the background region are different, so the Gdiff of the target region and that of the background region differ. Therefore, if the Gdiff of the target region after CBC_B or CBO_B can be determined, the selection range of the threshold is easily determined. The following two propositions display the properties of CBC_B and CBO_B, which indicate the Gdiff of the target region and the principle of threshold selection.

Proposition 1. Let I be a bright region. For any point of I, let max z be the maximum gray-value over the region covered by ∂B at that point; all the max z corresponding to all the points of I form a set Bz, whose minimum is min Bz. Let Î denote the result of processing I by CBC_B. Then the difference of the gray-values at corresponding positions of I and Î is not larger than the difference between the local maximum gray-value of I and min Bz. That is, with max I the local maximum gray-value of I,

I(x, y) - \hat{I}(x, y) \le \max I - \min Bz,   (17)

where (x, y) is a pixel coordinate of I. If (x, y) is the local maximum position, then I(x, y) − Î(x, y) = max I − min Bz.

Proof. Let D_I and D_Î represent the domains of I and Î, respectively. Since I is a bright region and min Bz is the minimum value of Bz, we have min Bz ≤ I(x, y) for all (x, y) ∈ D_I. Let I₁ represent the region I processed by CBD_B(I), with domain D_{I₁}. According to the definition of CBD_B(I), CBD_B(I) replaces the gray-value at each position by the maximum value of the region overlaid by ∂B, so the gray-value of I₁ at each position is the maximum gray-value over the domain of ∂B for that position. Since I is a bright region, the gray-values in I₁ must satisfy, for all (x, y) ∈ D_{I₁},

\min Bz \le I_1(x, y) \le \max I, \quad \text{and} \quad I_1(x, y) \in Bz.

According to the definition of CBC_B(I), CBD_B(I) is followed by the classical erosion, which replaces the gray-value of the region of interest by the minimum value of the region in I₁. Then, for all (x, y) ∈ D_Î,

\hat{I}(x, y) = \min\{ I_1(x, y), (x, y) \in D_{I_1} \} = \min Bz.

That means all the gray-values of the pixels in I are replaced by min Bz. Since I(x, y) ≤ max I, we obtain I(x, y) − Î(x, y) ≤ max I − min Bz. If (x, y) is the local maximum position, that is, I(x, y) = max I, the inequality becomes the equality I(x, y) − Î(x, y) = max I − min Bz. □

Fig. 6. Example demonstration of Proposition 1: (a) CBD_B of Fig. 1, (b) CBC_B of Fig. 1 and (c) I(x, y) − Î(x, y) result of Fig. 1.
Fig. 1 is used here as an example to demonstrate Proposition 1. The result of Fig. 1 after CBD_B is shown in Fig. 6(a). As shown in the rectangle regions, the value of each position is replaced by the maximum value of the region overlaid by ∂B (B is a rhombus with radius 3). Take 129 (the center of the first rectangle) as an example: it is the maximum value of the region overlaid by ∂B centered at that position. The minimum value of the first rectangle region of Fig. 6(a) is 129, which is min Bz, and the maximum value of the first rectangle region of Fig. 1 is 140, which is max I. Obviously, min Bz (129) is not larger than max I (140) or any of the values in the first rectangle region of Fig. 6(a). Then, after CBC_B, all the values are replaced by min Bz (129) (Fig. 6(b)). Because max I (140) is the maximum value of the original target region, the Gdiff of all the positions in the first rectangle of Fig. 6(a) is not larger than max I − min Bz = 140 − 129 = 11 (Fig. 6(c)). The second target region, labeled by the second rectangle in Fig. 6(a), has a similar result. The result of I(x, y) − Î(x, y) for Fig. 1 is shown in Fig. 6(c). Fig. 6(c) shows that different target regions have different outputs, but they are all not larger than max I − min Bz of each region. Also, because the target region is brighter than the surrounding region, the values of the target region are larger than 0 in Fig. 6(c). All of this means that the Gdiff of the target region varies with the properties of the corresponding target region. So, if the threshold can be selected appropriately following Proposition 1 and the properties of the target region, some pixels of the target region will remain in f_T and the other clutter background will be eliminated. This will greatly enhance the detectability of the target. Similarly, the dark region and the CBO_B operation obey the following proposition.

Proposition 2. Let I be a dark region. For any point of I, let min z be the minimum gray-value over the region covered by ∂B at that point; all the min z corresponding to all the points of I form a set Bz, whose maximum is max Bz. Let Î denote the result of processing I by CBO_B. Then the difference of the gray-values at corresponding positions of Î and I is not larger than the difference between max Bz and the local minimum gray-value of I. That is, with min I the local minimum gray-value of I,

\hat{I}(x, y) - I(x, y) \le \max Bz - \min I,   (18)

where (x, y) is a pixel coordinate of I. If (x, y) is the local minimum position, then Î(x, y) − I(x, y) = max Bz − min I.

Proof. Let D_I and D_Î represent the domains of I and Î, respectively. Since I is a dark region and max Bz is the maximum value of Bz, we have max Bz ≥ I(x, y) for all (x, y) ∈ D_I. Let I₁ represent the region I processed by CBE_B(I), with domain D_{I₁}. According to the definition of CBE_B(I), CBE_B(I) replaces the gray-value at each position by the minimum value of the region overlaid by ∂B, so the gray-value of I₁ at each position is the minimum gray-value over the domain of ∂B for that position. Since I is a dark region, the gray-values in I₁ must satisfy, for all (x, y) ∈ D_{I₁},

\min I \le I_1(x, y) \le \max Bz, \quad \text{and} \quad I_1(x, y) \in Bz.

According to the definition of CBO_B(I), CBE_B(I) is followed by the classical dilation, which replaces the gray-value of the region of interest by the maximum value of the region in I₁. Then, for all (x, y) ∈ D_Î,

\hat{I}(x, y) = \max\{ I_1(x, y), (x, y) \in D_{I_1} \} = \max Bz.

That means all the gray-values of the pixels in I are replaced by max Bz. Since I(x, y) ≥ min I, we obtain Î(x, y) − I(x, y) ≤ max Bz − min I. If (x, y) is the local minimum position, that is, I(x, y) = min I, the inequality becomes the equality Î(x, y) − I(x, y) = max Bz − min I. □
Following the propositions and the analysis above, in order to detect the target in a heavily cluttered image, the threshold should vary with different images and with different regions of the same image, which is the essence of adaptability. A large constant threshold may result in target loss, while a small constant threshold may admit more noise and increase the probability of false alarm. So, the threshold must vary adaptively over the regions of the image.

According to Proposition 1, the maximum range of the Gdiff of a bright point target region after CBC_B is bounded by the difference between the local maximum gray-value (max I) and the minimum (min Bz) of the maximum gray-value set Bz. Therefore, in order to detect the point targets, the appropriate threshold for each pixel should not be larger than this difference; that is, the selection range of the threshold is [0, max I − min Bz]. Although min Bz is not easy to calculate, there is no need to calculate it; we only need an approximate value that lies in [0, max I − min Bz] and yields a better background estimate. In this paper, a simple strategy to calculate the threshold is proposed as follows.

An L × L window w_i is selected to calculate the threshold. The size of w_i should be smaller than the structuring element B and be contained in B. The threshold of the pixel corresponding to the center position of w_i is then defined as a function of all the gray-values of the pixels in w_i, denoted f(w_i). The proposed definition of f(w_i) is

f(w_i) = \max w - \min w,   (19)

where max w and min w are the local maximum and local minimum gray-values of the pixels in w_i, respectively. A larger w_i may result in a larger threshold and less residual clutter in f_T, but the target region may be lost if w_i is too large. So, the size of w_i must follow the prior knowledge of the point target. Experimental results showed that it is better to select L as

L = (0.6–0.8) S_{max},   (20)

where S_max is the size of the largest possible target according to prior knowledge.

Following the assumed properties of the target region, if the region is a real target region, the gray-values in the region are usually continuous. Because w_i is smaller than the target region, min w is the gray-value of a pixel belonging to the target region and is not smaller than min Bz, that is, min Bz ≤ min w. Also, because w_i is smaller than the target region, max w is not larger than the maximum value max I of the target region, that is, max w ≤ max I. Then f(w_i) = max w − min w ≤ max I − min Bz. Moreover, min w ≤ max w, so f(w_i) = max w − min w ≥ 0. Hence 0 ≤ f(w_i) ≤ max I − min Bz, which means that the threshold calculated through f(w_i) falls within the interval [0, max I − min Bz]. Therefore, f(w_i) can be used as an adaptive threshold to differentiate the possible target regions from the clutter background.
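A minimal sketch of the per-pixel threshold of Eq. (19), computed with running maximum and minimum filters over an L × L window (L chosen according to Eq. (20)); the function name is ours.

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold(f, L):
    # Eq. (19): f(w_i) = max w - min w over an L x L window centered on each pixel.
    # Eq. (20) suggests L ≈ (0.6–0.8) * S_max, with S_max the largest expected target size.
    local_max = ndimage.maximum_filter(f, size=L)
    local_min = ndimage.minimum_filter(f, size=L)
    return local_max.astype(np.int32) - local_min.astype(np.int32)
```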
5.2.2. Clutter elimination

The adaptive morphological clutter elimination can be defined as follows:

f_T = f - f_b = f - f_{Tw}(CBC_B(f), f),   (21)

where f_Tw is the selection operation that thresholds the difference image of CBC_B(f) and f by f(w_i) at every pixel:

f_{Tw}(f_1, f_2)(i) = \begin{cases} f_2(i), & f_2(i) - f_1(i) < f(w_i) \\ f_1(i), & \text{otherwise}, \end{cases}   (22)

where i is the index of the pixels in the image and f(w_i) varies with i. The size of the structuring element B should be larger than the maximum size of the target region. Let L_B represent the size of B. A proper choice of L_B is

L_B = (1.0–1.5) S_{max}.   (23)

Fig. 7. Flow chart of adaptive morphological clutter elimination.
Let N represent the number of pixels in the image. The algorithm of adaptive morphological clutter elimination is illustrated in Fig. 7, which details expression (21). Obviously, if f(w_i) = 0, then f_Tw(CBC_B(f), f) = min{f, CBC_B(f)} = C_B(f), and the adaptive morphological clutter elimination reduces to the CB morphological clutter elimination.
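Putting the pieces together, a sketch of the bright-target AMCE of Eqs. (21) and (22) could look as follows. It assumes the cb_closing and adaptive_threshold helpers from the earlier sketches and is an illustration of the flow in Fig. 7, not the authors' implementation.

```python
import numpy as np

def amce(f, B, L):
    # Adaptive morphological clutter elimination for a bright point target, Eqs. (21)-(22).
    # cb_closing() and adaptive_threshold() are the helpers sketched above.
    f = f.astype(np.int32)
    cbc = cb_closing(f, B)           # CBC_B(f)
    thr = adaptive_threshold(f, L)   # f(w_i), one threshold per pixel
    diff = f - cbc                   # Gdiff between f and CBC_B(f)
    # Eq. (22): f_b = f where the difference is below the local threshold (pure clutter),
    # otherwise f_b = CBC_B(f), so the potential target survives in f_T = f - f_b.
    f_b = np.where(diff < thr, f, cbc)
    return f - f_b                   # f_T: target plus very little residual noise
```

As an illustrative call, with the radius-3 rhombus used in the figures and L = 3, f_T could be obtained as amce(image, rhombus(3), 3).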
5.2.3. Properties analysis

Based on the analysis above, if the region is a real target region, the gray-values in the region are continuous. Then the threshold calculated through f(w_i) cannot exceed the selection range [0, max I − min Bz] specified by the propositions, so the region remains in f_T because it satisfies the first assumed property of the point target region. Conversely, if the region is a noise region mixed with the clutter background, its gray-values are discontinuous, because the gray-values of the clutter background are usually smaller than those of the noise. Then min w may be the gray-value of the mixed clutter background and max w may be the gray-value of the noise, which leads to a larger threshold. In this situation, the threshold may exceed the selection range specified by the propositions, so the region is rejected from f_T. That is, some noise regions are treated as clutter because they do not satisfy the first assumed property of the point target region. With f(w_i), only a region whose gray-values have a gap relative to the gray-values of the surrounding clutter can be recognized as a point target region and retained in f_T, which is consistent with the second assumption of the
properties of the point target region. At the same time, if the threshold is selected within the range specified by the propositions, the point target is retained in f_T. Therefore, the proposed AMCE satisfies the two assumed properties, which means that AMCE implicitly utilizes the properties of the point target region in the operation f_Tw. As a result, the point target regions are correctly identified and the noise regions are suppressed to the largest extent. Also, according to Section 5.2.2, AMCE becomes the CB morphological clutter elimination when f(w_i) = 0. All of this indicates that AMCE is not a simple modified morphological transformation or a combination of existing morphological operations, but an effective algorithm that uses the advantages of both the morphological transformation and the properties of the point target for the purpose of target detection. As a result, the point target in f_T is easily recognized and the post-processing is largely simplified because of the detectability enhancement by AMCE. The f_b of Fig. 1 through AMCE is shown in Fig. 8, and f_T is shown in Fig. 9. As shown in Fig. 9, the bright noise pixels in the background of the original image have been removed from f_T because they do not satisfy the two assumed properties. Although some pixels of the target region are also removed from f_T by the thresholding operation, other pixels remain in f_T according to the algorithm. These remaining pixels are usually the crucial part of the target and play an important role in target detection. Also, following the definition of AMCE, a low-contrast target can still be detected because of the gap between the gray-values of the target region and the surrounding clutter.
Fig. 8. fb of Fig. 1 through AMCE (L = 3).

Fig. 9. fT of Fig. 1 through AMCE (L = 3).
6. Experimental results and discussions

To be easily detected and to simplify post-processing, point target detectability enhancement should apparently enhance the point target while largely suppressing the clutter background and reducing the number of false alarms. The effect of the clutter background and of dim target intensity can then be suppressed to the largest extent. So, we choose some widely used target enhancement algorithms and compare them with AMCE to demonstrate the superiority of AMCE in all situations. In this section, firstly, a clutter background elimination experiment is designed to demonstrate the performance of AMCE for clutter background elimination. Secondly, the false alarm reduction performances of the different algorithms are measured and compared. Thirdly, the detectability enhancement effects of the different algorithms are shown in a target enhancement experiment. Fourthly, to demonstrate the superiority of AMCE for target detection simplification, point targets embedded in heavy clutter images are detected easily by AMCE, as shown in the point target detection experiment. Finally, the computation times of the different algorithms are compared. All the experiments show the superior performance of AMCE for point target detectability enhancement.

6.1. Clutter background elimination

Accurate elimination of the clutter background largely decreases the residual clutter background remaining in f_T, which largely reduces false alarms and enhances the target. To demonstrate the clutter background elimination performance of an algorithm, a measure named the mean absolute value of residual background (MARB) is defined to compute the residual background remaining in f_T after the different algorithms:

MARB = \frac{\sum_{x,y} |F_b(x, y) - f_b(x, y)|}{L_w L_h},   (24)

where F_b is the background of the original image and f_b the background estimated by each algorithm according to f_T; L_w and L_h are the width and height of the image, respectively. Accurate elimination of the clutter background leads to a small difference between F_b and f_b, so a smaller MARB indicates a better clutter background elimination performance. In order to compute MARB, 200 dim small target images are used; some of them are shown in Fig. 10. Fig. 10 shows that the targets are dim and that various heavy clutters exist in the images, such as cloud clutter, building clutter and clutter caused by the imaging sensor. The MARBs of some images after the different algorithms are listed in Table 1.
Fig. 10. Some example images in all the experiments.
Table 1. Comparison of clutter background elimination (MARB).

Method                  Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Image 9  Image 10
AMCE                    0.0032   0.0037   0.0105   0.0112   0.0073   0.0155   0.0038   0.0082   0.0092   0.0103
Top-hat transformation  5.6029   6.1714   3.0895   3.1063   5.5830   3.4722   6.1714   4.4363   4.3258   4.2708
Max-median 3×3          0.0908   0.1187   0.0579   0.0612   0.6425   0.0698   0.1187   0.1141   0.1112   0.1096
Max-median 5×5          0.4391   0.5329   0.2028   0.2069   1.1470   0.2465   0.5328   0.3102   0.3074   0.3015
Max-mean 3×3            0.3616   0.4274   0.1878   0.1930   0.7736   0.2307   0.4274   0.3150   0.3049   0.3117
Max-mean 5×5            0.5559   0.6505   0.3230   0.3263   1.0492   0.3484   0.6504   0.5402   0.4883   0.4969
U-Kernel                2.2212   2.3742   1.9861   2.0373   4.3320   1.6068   2.3737   0.8098   2.7495   2.7681
E-Kernel                1.9511   2.0846   1.8124   1.8489   3.7193   1.4275   2.0843   2.4271   2.3645   2.3520
Table 1 shows that, because of the heavy clutter, the dim targets and the smoothing of image details by the classical morphological operations, the MARBs of the top-hat transformation are much larger than those of the other algorithms. Therefore, most of the existing morphology-based algorithms, which are combinations of the morphological operations, do not perform well if the clutter is heavy and the target is dim. The clutter background elimination performance of max-median and max-mean is better than that of U-Kernel and E-Kernel, because the smoothing of image details by U-Kernel and E-Kernel is heavier than that of max-median and max-mean. The MARBs of AMCE are the smallest for all types of images, which means that the clutter background elimination performance of AMCE is the best. The reason is that considering the properties of the target region, together with the protection of image details by CB morphology, gives a more accurate estimate of the clutter background. So, AMCE can largely suppress the clutter background, which makes target detection easy. This experiment demonstrates the robust and efficient performance of AMCE for target enhancement through clutter background elimination.
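A sketch of how the MARB measure of Eq. (24) could be computed, assuming a reference background F_b is available for each test image:

```python
import numpy as np

def marb(F_b, f_b):
    # Eq. (24): mean absolute value of the residual background.
    # F_b is the true background, f_b the background estimated by a given algorithm.
    F_b = np.asarray(F_b, dtype=np.float64)
    f_b = np.asarray(f_b, dtype=np.float64)
    return np.abs(F_b - f_b).sum() / F_b.size   # divide by L_w * L_h
```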
6.2. Reduction in false alarm

Inaccurate elimination of the clutter background brings a large number of false alarms, which greatly increase the difficulty of target detection. A good algorithm for point target detectability enhancement should largely decrease the number of false alarms even if the clutter background is not eliminated completely.
This experiment shows and compares the false alarm reduction abilities of the different algorithms. The point target region is small, and the gray-value of the point target is usually larger than that of the surrounding pixels, which makes the point target act like noise in the image. Hence, the SNR is not suitable for describing the ability of point target detection. As the point target image indicates, the ability of point target detection depends on both the signal intensity and the surrounding background, so the local signal-to-background ratio (LSBR) [19] is more appropriate for describing the detectability enhancement of the various algorithms. The LSBR can be defined as

LSBR = 10 \log \left\{ \frac{1}{\sigma_b^2} \sum_{k=-W/2}^{W/2} \sum_{j=-W/2}^{W/2} [I(s - k, r - j) - m_b]^2 \right\},   (25)

where σ_b² is the variance and m_b the mean of the background in the window of width and height W around the pixel of interest (s, r). A small LSBR indicates a dim target and heavy clutter. A measure of the ability of false alarm reduction based on the LSBR, provided by Soni [19], is the probability of false alarm (PFA) per pixel versus the input LSBR. Several widely used methods are applied to real infrared images to calculate this measure and compared with AMCE. 50 images are used to calculate the curves in each of Figs. 11 and 12.
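A sketch of the LSBR of Eq. (25) for a single candidate pixel; for simplicity the background mean and variance are taken over the whole W × W window (the paper computes them over the background), so this is an approximation, and the function name is ours.

```python
import numpy as np

def lsbr(I, s, r, W=33):
    # Eq. (25): local signal-to-background ratio around pixel (s, r) in a W x W window.
    # Assumes the window lies fully inside the image; returns +inf if the window is flat.
    half = W // 2
    window = np.asarray(I, dtype=np.float64)[s - half:s + half + 1, r - half:r + half + 1]
    m_b = window.mean()     # approximate background mean
    var_b = window.var()    # approximate background variance
    return 10.0 * np.log10(((window - m_b) ** 2).sum() / var_b)
```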
Fig. 11. PFA versus varying LSBR in cloud clutter image.
Fig. 12. PFA versus varying LSBR in heavy clutter image.
Figs. 11 and 12 show the variation of the PFA per pixel of the result images with different clutter backgrounds, after the various clutter elimination algorithms, as a function of the LSBR. In Figs. 11 and 12, all the clutter elimination algorithms decrease the false alarms as the LSBR increases, but AMCE shows the greatest improvement.
The false alarm reduction performance of max-median and max-mean is worse than that of E-Kernel and U-Kernel, because max-median and max-mean may smooth the target region while estimating the clutter background. Therefore, the contrast between the target and the remaining clutter in f_T after max-median and max-mean is smaller than that of E-Kernel and U-Kernel. The performance of AMCE is much better than that of the other algorithms, which means AMCE accurately eliminates the clutter background and increases the contrast between the target and the remaining clutter; the false alarms are then largely reduced. The clutter of the images in this experiment is heavy, but AMCE shows better performance than the other algorithms in all situations. This experiment indicates the robust and effective properties of AMCE in false alarm reduction.
6.3. Target enhancement

Clutter background elimination and false alarm reduction apparently increase the contrast between the point target and the clutter, which enhances the point target and makes detection easy. The LSBR is used to measure the detectability of a point target: a larger LSBR indicates better detectability. So, a good algorithm for detectability enhancement should increase the LSBR of the point target as much as possible. In order to demonstrate the target enhancement performance of AMCE, several widely used methods are applied to different point target images. 200 images are used to calculate the LSBR values, and some images are shown in Fig. 10. Some of the LSBRs of the point targets after the different methods are listed in Table 2. In Table 2, +∞ means that σ_b of the surrounding clutter background is zero, which indicates that the clutter background in the local area is eliminated completely and the detectability of the target is enhanced to the largest extent. Table 2 shows that the LSBRs of the top-hat transformation are much smaller than those of the other methods in most cases, which verifies that the classical morphology-based methods do not perform well if the properties of the target region are not properly considered. Some of the LSBRs of U-Kernel and E-Kernel are larger than those of max-median and max-mean, but the other LSBRs of U-Kernel and E-Kernel are smaller. For Target 7 and Target 8, the LSBRs of U-Kernel and E-Kernel are the smallest, and for Target 8 they are even smaller than that of the original image.
Fig. 13. Original IR image (240 × 200): (a) IR image with sky clutter and (b) IR image with sky and tree clutter.
Table 2. Comparison of target enhancement (W = 33).

Method                  Target 1  Target 2  Target 3  Target 4  Target 5  Target 6  Target 7  Target 8  Target 9  Target 10
Original                0.1707    1.0354    4.7909    0.5111    0.0186    3.7211    0.1816    0.0447    0.2811    2.8647
AMCE                    16.3735   21.3328   +∞        +∞        12.9689   +∞        12.0092   +∞        +∞        +∞
Top-hat transformation  2.5067    5.9352    5.6554    0.0227    0.1021    4.4658    0.1517    0.0002    0.5484    2.8810
Max-median 3×3          5.1352    8.2954    9.0726    2.7709    4.0742    8.3323    9.5565    2.8488    4.2893    0.0393
Max-median 5×5          0.8190    2.0184    1.9469    0.3875    0.5313    1.6336    8.0149    0.4493    0.8956    0.0879
Max-mean 3×3            1.5988    3.3604    3.3996    0.4986    0.9630    2.7280    2.2692    0.4740    0.8911    0.1429
Max-mean 5×5            2.8422    6.1611    5.5720    1.0338    1.5062    4.5407    2.6623    0.4971    1.6850    0.2170
U-Kernel                2.1827    5.9902    9.1800    1.3742    0.8709    7.7439    0.1384    0.2008    7.9899    1.7900
E-Kernel                2.0442    5.6057    8.4651    1.3115    0.8315    7.0760    0.1271    0.2020    9.7128    1.4700
Also, Table 2 shows that the performance of all the methods except AMCE becomes worse as the LSBR becomes small, because of the effect of heavy clutter and dim target intensity. This indicates that these methods are neither robust nor efficient when the clutter is heavy and the target is dim. In contrast, AMCE performs very well no matter how small the LSBR of the target is, and some targets (Targets 3, 4, 6, 8, 9 and 10) are enhanced to the largest extent by AMCE while the other methods do not perform well. This means that AMCE is a robust and efficient method under the conditions of heavy clutter and dim target intensity.

In order to demonstrate the target enhancement performance of AMCE directly, AMCE is applied to many infrared images, and 3D plots of the target intensity of two types of infrared images (Fig. 13) before and after AMCE are shown in Figs. 14 and 15. Fig. 13 shows the two types of original images, with sky clutter and with sky and tree clutter. As the images show, the clutter is heavy and the targets are difficult to identify, especially when the target appears in the tree clutter. Figs. 14 and 15 show the intensity change of the point targets inserted in the images. Fig. 14(a) shows the original target intensity with the sky clutter. Most of the pixel intensities illustrated in Fig. 14(b), after processing by AMCE, decrease to zero, and the intensity difference between the point target and the clutter is enlarged, which indicates that the detectability of the point target is enhanced. Fig. 14(c) shows another original target intensity of the sky clutter image with a very low LSBR. Most of the pixel intensities illustrated in Fig. 14(d), after processing by AMCE, also decrease to zero. The point target cannot be identified in the original image; conversely, it protrudes clearly in Fig. 14(d). Although there are some high-intensity pixels around the point target, the false alarms are much reduced compared with the original image, which means the detectability of the point target is still well enhanced under the condition of very low LSBR.
Fig. 14. Signal intensity plot of a 33 × 33 window around the center of the target region with sky clutter: (a) 3D intensity plot of the target region and the surrounding region (LSBR = 0.3353), (b) enhancement of (a) after AMCE, (c) 3D intensity plot of the target region and the surrounding region (LSBR = 0.0462) and (d) enhancement of (c) after AMCE.
Fig. 15. Signal intensity plot of a 33 × 33 window around the center of the target region with sky and tree clutter: (a) 3D intensity plot of the target region and the surrounding region (LSBR = 0.2493), (b) enhancement of (a) after AMCE, (c) 3D intensity plot of the target region and the surrounding region (LSBR = 0.0245) and (d) enhancement of (c) after AMCE.
As illustrated in Fig. 15, the detectability of the point targets in the images with sky and tree clutter is also enhanced. Fig. 15(a) and (c) show that the signal intensity of the noise surrounding the target is very close to that of the target region, but after processing by AMCE the target is greatly enhanced and easy to detect. Especially when LSBR = 0.0245, as shown in Fig. 15(c), the target is submerged in the clutter background. Although some noise regions remain in f_T, as shown in Fig. 15(d), the target region is enhanced greatly and the false alarms are suppressed considerably compared with the original image. Also, 3D plots of the target intensity change for the images in Fig. 10 are shown in Fig. 16. The first row of Fig. 16 is the original target intensity distribution; the second row is the target intensity distribution after target enhancement by AMCE. Fig. 16 shows that the clutter background is apparently suppressed and the targets are greatly enhanced, which effectively improves the detectability of the targets.

To show the detectability enhancement performance for the purpose of target detection on more image data, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are adopted here. An efficient target enhancement method largely enhances the dim target and simplifies target detection. The ROC curve demonstrates the relationship between the probability of correct detection (Pd) and the PFA [20]. A larger value of Pd at the same PFA means a better target enhancement and detection method, so a more efficient target enhancement method results in a better ROC curve and thus a larger AUC value. 36 image sequences, whose lengths range from 10 to 600 frames, are used to calculate the ROC curves; some examples of the images are shown in Figs. 10 and 13. Two of the ROC curves are shown in Fig. 17. The ROC curves of AMCE sharply reach the value of 1, and AMCE has the largest Pd at the same PFA among all the methods, which means AMCE enhances the dim target better than the other methods. The AUC value is the area of the region under the ROC curve, so a better ROC curve results in a larger AUC value.
Fig. 16. Signal intensity plot of a 33 × 33 window around the center of the target region in the images of Fig. 10.
The mean of the AUC values over all the ROC curves of each method, calculated from the different image sequences, is given in Table 3. Table 3 shows that the AUC value of AMCE is larger than those of the other methods, which verifies that the detectability enhancement performance of AMCE for target detection is better. Moreover, the AUC value of AMCE is very close to 1, which means that almost all the point targets are correctly detected after the detectability enhancement by AMCE. So, the performance of AMCE is highly efficient and robust. This experiment demonstrates the robust and efficient performance of AMCE for detectability enhancement of the point target.
6.4. Point target detection

The large enhancement of the detectability of the point target greatly simplifies its detection. The point target embedded in the image can then be detected simply, after AMCE, through a thresholding operation with an appropriate threshold. The point target detection results for Fig. 13 are shown in Fig. 18. Due to the accurate elimination of the clutter background and the false alarm reduction, the detectability of the point target in the result image obtained by AMCE is largely enhanced, which makes the thresholding operation easy. Consequently, the threshold for image binarization is easily selected. The result images in Fig. 18 are obtained under very low LSBR through an iterative thresholding algorithm [21], and the targets are detected correctly. Also, the correct detection results for the images in Fig. 10 are shown in Fig. 19. All the detection results show that AMCE is effective for the purpose of target detection. This experiment indicates the great improvement brought by AMCE to the simplification of point target detection through detectability enhancement.
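As an illustration of this last step, the sketch below selects a threshold for f_T iteratively in the Ridler–Calvard (mean-of-means) style; the exact procedure of [21] may differ, and the function names are ours.

```python
import numpy as np

def iterative_threshold(f_T, eps=0.5):
    # Iteratively split pixels into two classes and move the threshold to the
    # midpoint of the two class means until it stabilizes.
    f_T = np.asarray(f_T, dtype=np.float64)
    t = f_T.mean()
    while True:
        low, high = f_T[f_T <= t], f_T[f_T > t]
        if low.size == 0 or high.size == 0:
            return t
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def detect_targets(f_T):
    # Binary detection map: pixels of f_T above the automatically selected threshold.
    return f_T > iterative_threshold(f_T)
```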
6.5. Comparison of computation time

The computation time of AMCE is composed of the time of the mathematical morphological operations and the time of the f(w_i) calculation for all pixels. Because both are computationally cheap, the computation time of AMCE is small. To compare the computation time of AMCE with other widely used methods, some infrared images of size 128 × 128 are used. The average computation times of the different methods are listed in Table 4 (CPU: Intel Pentium 4, 2.6 GHz; memory: 512 MB). Table 4 shows that the computation time of AMCE is smaller than those of max-median and max-mean. Although the computation time of AMCE is a little longer than those of the top-hat transformation, U-Kernel and E-Kernel, because of the f(w_i) calculation in AMCE, the differences among them are quite small.
Fig. 17. ROC curves for comparison of target enhancement: (a) ROC curve of the images with sky clutter and (b) ROC curve of the images with cloud clutter.
Table 3. Comparison of AUC values.

Method                  AUC
AMCE                    0.9964
Top-hat transformation  0.5897
Max-median 3×3          0.8847
Max-median 5×5          0.8378
Max-mean 3×3            0.8085
Max-mean 5×5            0.8558
U-Kernel                0.9802
E-Kernel                0.9856
On the other hand, the performance of AMCE in clutter elimination, false alarm reduction and target enhancement is much better than that of the other methods listed in Table 4. All of this means that AMCE is a fast, robust and efficient method for detectability enhancement, and indicates that AMCE can be used in quasi real-time systems.
Fig. 18. Point target detection results of Fig. 13: (a) detection result of Fig. 13(a) and (b) detection result of Fig. 13(b).

Fig. 19. Point target detection results of Fig. 10.

Table 4. Comparison of computation time (s).

Method                  Time (s)
AMCE                    0.077
Top-hat transformation  0.018
Max-median 3×3          2.326
Max-mean 3×3            2.290
Max-median 5×5          1.890
Max-mean 5×5            1.948
U-Kernel                0.020
E-Kernel                0.022

7. Conclusions

In order to decrease the impact of heavy clutter and dim target intensity, a novel method for point target detectability enhancement named AMCE is proposed. AMCE is constructed from CB morphological operations by importing the properties of the target region. Firstly, an image is obtained by applying a CB morphological operation to the original image and is subtracted from the original image to generate a residual image. Secondly, the clutter background is estimated by comparing the residual image with an adaptive threshold at each pixel; the adaptive thresholds are calculated through a function of the gray-values in a selected window, according to the properties of the target region. Finally, the result image, containing only the target and very little noise, is obtained by subtracting the estimated clutter background from the original image.

Because of the highly parallel nature of mathematical morphology, the algorithm can be implemented in real-time hardware systems. Although the shape selection of the structuring element is one of the biggest problems in applications of mathematical morphology, the shape of the structuring element is easy to select for the purpose of point target detection and can usually be a rhombus or rectangle. Furthermore, because the properties of the target regions are considered in the algorithm, the point target can be detected and the noise can be suppressed greatly regardless of the signal intensity of the target. So, the proposed algorithm is not a simple modified morphological transformation or a combination of existing morphological operations, but an effective algorithm that imports the properties of the point target region into the morphological transformation for the purpose of target detection. Consequently, the algorithm can be used to detect dim point targets in heavy clutter images and to simplify the post-processing of dim target detection and tracking, and can be applied in infrared or visual dim small target detection and tracking systems. Comparative analysis with other algorithms reveals its superiority in clutter elimination, false alarm reduction and target enhancement, which greatly enhances the detectability of point targets.
Acknowledgments

We are grateful to the anonymous reviewers for their constructive comments. This work was partly supported by the Aeronautical Science Foundation of China (20070151003) and the Innovation Foundation of Beijing University of Aeronautics and Astronautics (BUAA) for PhD Graduates from BUAA. The authors also would like to thank Dr. Changming Sun at CSIRO Mathematical and Information Sciences, Sydney, Australia and Dr. Li Yan at the School of Geology and Space Science, Peking University, Beijing, China for many helpful suggestions and discussions.

References

[1] Y.S. Moon, T.X. Zhang, Z.R. Zuo, Z. Zuo, Detection of sea surface small targets in infrared images based on multilevel filter and minimum risk Bayes test, International Journal of Pattern Recognition and Artificial Intelligence 14 (2000) 907–918.
[2] S.D. Deshpande, M.H. Er, V. Ronda, Ph. Chan, Max-mean and max-median filters for detection of small targets, Proceedings of SPIE 3809 (1999) 74–83.
[3] T. Arodz, M. Kurdziel, T.J. Popiela, E.O.D. Sevre, D.A. Yuen, Detection of clustered microcalcifications in small field digital mammography, Computer Methods and Programs in Biomedicine 81 (2006) 56–65.
[4] B. Zhang, T. Zhang, K. Zhang, Z. Cheng, Z. Cao, Adaptive rectification filter for detecting small IR targets, IEEE A&E Systems Magazine 22 (8) (2007) 20–26.
[5] C.E. Cafer, J. Silverman, J.M. Mooney, Optimization of point target tracking filters, IEEE Transactions on Aerospace and Electronic Systems 36 (1) (2000) 15–25.
[6] P.A. Ffrench, J.R. Zeidler, W.H. Ku, Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm, IEEE Transactions on Image Processing 6 (3) (1997) 383–397.
[7] S. Leonov, Nonparametric method for clutter removal, IEEE Transactions on Aerospace and Electronic Systems 37 (3) (2001) 832–848.
[8] T. Zhang, Z. Zuo, W. Yang, X. Sun, Moving dim point target detection with three-dimensional wide-to-exact search directional filtering, Pattern Recognition Letters 28 (2) (2007) 246–253.
[9] X. Jin, C.H. Davis, Vehicle detection from high-resolution satellite imagery using morphological shared-weight neural networks, Image and Vision Computing 25 (2007) 1422–1431.
[10] P. Wang, J.W. Tian, C.Q. Gao, Infrared small target detection using directional highpass filters based on LS-SVM, Electronics Letters 45 (3) (2009) 156–158.
[11] B. Moghaddam, A. Pentland, Probabilistic visual learning for object representation, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (9) (1997) 696–710.
[12] Z. Liu, X. Shen, C. Chen, Small objects detection in image data based on probabilistic visual learning, in: Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, China, 2005, pp. 5517–5521.
[13] P. Soille, Morphological Image Analysis: Principles and Applications, Springer, Berlin, Germany, 2003.
[14] S. Halkiotis, T. Botsis, M. Rangoussi, Automatic detection of clustered microcalcifications in digital mammograms using mathematical morphology and neural networks, Signal Processing 87 (2007) 1559–1568.
[15] M. Zeng, J. Li, Z. Peng, The design of top-hat morphological filter and application to infrared target detection, Infrared Physics and Technology 48 (2006) 67–76.
[16] F. Zhang, C. Li, L. Shi, Detecting and tracking dim moving point target in IR image sequences, Infrared Physics and Technology 46 (2005) 323–328.
[17] W. Gong, Q.Y. Shi, M.D. Cheng, CB morphology and its applications, in: Proceedings of the International Conference for Young Computer Scientists, Beijing, China, 1991, pp. 260–264.
[18] X. Bai, F. Zhou, T. Jin, Y. Xie, Infrared small target detection and tracking under the conditions of dim target intensity and clutter background, Proceedings of SPIE 6786 (2007) pp. 67862M1–67862M9.
[19] X. Bai, F. Zhou, Y. Xie, New class of top-hat transformation to enhance infrared small targets, Journal of Electronic Imaging 17 (3) (2008) 0305011–0305013.
[20] U. Braga-Neto, M. Choudhary, J. Goutsias, Automatic target detection and tracking in forward-looking infrared image sequences using morphological connected operators, Journal of Electronic Imaging 13 (4) (2004) 802–813.
[21] X. Bai, F. Zhou, Edge detection based on mathematical morphology and iterative thresholding, in: Proceedings of the International Conference on Computational Intelligence and Security, Guangzhou, China, 2006, pp. 1849–1852.