General model for linear information extraction based on the shear transformation

Pengfei Xu a, Jun Guo a,∗, Feng Chen a, Yun Xiao a, Qishou Xia b, Baoying Liu a

a School of Information Science and Technology, Northwest University, Xi'an, 710127, China
b College of Mathematics and Computer Science, Chizhou University, Chizhou, Anhui, 247000, China

∗ Corresponding author. E-mail address: [email protected] (J. Guo).
Article info

Article history: Available online xxx
MSC: 41A05, 41A10, 65D05, 65D17
Keywords: Linear information extraction, The shear transformation, General model, Edge detection, Line extraction
Abstract

Most images contain a great deal of linear information, which plays an important role in image processing and pattern recognition tasks. However, the interference of complex backgrounds, the multiple directional characteristics of linear information, and the directional limitations of traditional methods all lead to incompleteness and discontinuities in the extracted linear information. To solve these problems, this paper puts forward a general model for improving the performance of linear information extraction methods (LIEM) by utilizing the shear transformation. In this model, the shear transformation transforms an object (a filter or an image) in multiple directions, which directly or indirectly increases the directional characteristics of traditional LIEM and improves their ability to extract directional linear information and weak linear information. This paper elaborates the basic principles of the model and its two implementations for specific tasks. Furthermore, a variety of experiments are conducted to verify the versatility and effectiveness of the proposed model.
1. Introduction

Digital images are the most direct information sources from the outside world for the human visual system. In these images, different types of linear information (such as edges, line segments, or objects composed of lines) are of concern in related tasks of image processing and pattern recognition [2,12,15,34]. When extracting these types of linear information, we always want the result to be accurate, continuous and complete. In practice, however, due to the interference of multiple factors and the performance limitations of the relevant algorithms, the extracted linear information suffers from incompleteness and discontinuities.

There has been much related work on the extraction of linear information, and edge detection is one typical task. The traditional methods are simple and easy to implement, but they are sensitive to noise. The improved edge detection methods based on multi-scale image analysis have better performance, but higher computational complexity [1,6,18,24]. Furthermore, Xu et al. proposed an edge detection method based on the Laplacian of B-spline [25]; this method overcomes the singularity problem
of traditional detection operators and improves robustness to noise. Zhang et al. proposed a real-time edge detection system that supports multiple resolutions dynamically [32]. Xu et al. introduced the shear transformation to improve the performance of traditional edge detection operators, and their methods can detect more edges, especially some weak edges [26,27]. Verma et al. presented a fuzzy system for edge detection using the smallest univalue segment assimilating nucleus (USAN) principle and the bacterial foraging algorithm (BFA) [21]. The performance of these improved methods is considerably better, but their computational complexity also increases.

In addition, there are many lines (both straight and curved) in images, and the extraction of these lines plays an important role in practical applications [3,10,20,31,36]. For example, Song et al. proposed a local-to-global line detection method [20], which can detect straight and curved lines simultaneously. The wire detection method proposed by Zhang has higher detection accuracy and fewer detection errors because it uses the spatial relationship between the cable towers and the wires [31]. There are also other types of lines in some special images, such as the lines in scanned topographic maps [4]. In 2013, Miao et al. proposed a method for separating linear features from topographic maps using energy density and the shear transform [17]. Compared with the extraction of these specific types of lines, general linear information is more challenging to extract. For example,
the linear information contained in calligraphic writing images embodies the form and spirit of the calligraphy characters. In order to extract the spirit information of calligraphy better, Zheng et al. proposed an algorithm based on multi-channel guided filtering [33], Xu et al. proposed a shear-guided filter in 2016 [28], and in 2017, Wang et al. proposed a method based on local guided filtering and a reference image [23].

Linear extraction methods are thus proposed for special tasks. The traditional methods are easily affected by many factors, and the improved methods have better performance but also higher computational complexity [35,38]. Besides, the linear information may lie in multiple directions, whereas most LIEM suffer from directional limitation. In addition, the improved methods are designed for special tasks and cannot be applied to other methods, so they are not universal. To address these problems, this paper proposes a general model for improving the performance of LIEM based on the shear transformation. This model has two implementations for specific tasks; that is, the shear transformation directly or indirectly enhances the directional characteristics of traditional LIEM and improves their ability to extract directional linear information and weak linear information. More importantly, this is a general model: it has already been utilized to improve the performance of several LIEM, it is applied to two further LIEM in this paper, and it has strong universality.

This paper is organized as follows. In Section 2, the basic principle of the shear transformation is stated. Section 3 elaborates the proposed general model based on the shear transformation. The validation experiments and analysis of the model are described in Section 4, while the final conclusion is given in Section 5.

2. The shear transformation

As a widely used linear transformation approach, the shear transformation transforms an object (a filter or an image) from one 2D coordinate system to another while maintaining the straightness and parallelism of linear features and enhancing the directional characteristics of the transformed objects. In some sense, the directionality of the object can be naturally discretized by using a shear matrix [9,11,37]. Usually, the shear transformation is performed on an object in the horizontal and vertical directions, and the transformation matrices S_shear accordingly consist of horizontal transformation matrices S_shear_h and vertical transformation matrices S_shear_v. The final transformation matrix set is S_shear = {S_shear_h, S_shear_v}. Specifically,
S_{shear\_h} = \begin{pmatrix} 1 & 0 \\ k/2^{n_{dir}} & 1 \end{pmatrix}, \quad S_{shear\_v} = \begin{pmatrix} 1 & k/2^{n_{dir}} \\ 0 & 1 \end{pmatrix}   (1)
where n_dir is the direction parameter, k ∈ Z, and k ∈ [−2^{n_dir}, 2^{n_dir}]. When an object M is operated on by S_shear, the sheared result set M′ is obtained.
M′ = M ∗ S_shear   (2)
When the shear transformation is performed in the horizontal direction, S_shear_h is applied as follows.
(x_h, y_h) = (x, y) \times S_{shear\_h} = (x, y) \times \begin{pmatrix} 1 & 0 \\ k/2^{n_{dir}} & 1 \end{pmatrix} = (x + y \cdot k/2^{n_{dir}},\; y)   (3)

where (x_h, y_h) is the coordinate of an element in the sheared object and (x, y) is the coordinate of the corresponding element in the original object.
However, the element values remain unchanged during this process.

M_h(x_h, y_h) = M(x, y)   (4)
Similarly, when the shear transformation is performed in the vertical direction, it is necessary to use S_shear_v.

(x_v, y_v) = (x, y) \times S_{shear\_v} = (x, y) \times \begin{pmatrix} 1 & k/2^{n_{dir}} \\ 0 & 1 \end{pmatrix} = (x,\; x \cdot k/2^{n_{dir}} + y)   (5)
where (x_v, y_v) is the coordinate of an element in the sheared object; similarly, the element values remain unchanged during this process.

M_v(x_v, y_v) = M(x, y)   (6)
The inverse shear transform converts the sheared objects back to the original objects, and the corresponding transformation matrices are S_shear_inv = {S_shear_inv_h, S_shear_inv_v}.
S_{shear\_inv\_h} = \begin{pmatrix} 1 & 0 \\ -k/2^{n_{dir}} & 1 \end{pmatrix}, \quad S_{shear\_inv\_v} = \begin{pmatrix} 1 & -k/2^{n_{dir}} \\ 0 & 1 \end{pmatrix}   (7)
Then

M′_inv = M′ ∗ S_shear_inv   (8)
where M′_inv is a result set consisting of the inverse-sheared results; the transformation process is similar to that of the forward shear transformation.

The shear transformation changes the neighborhood environment around each element of the original object (a filter or an image). Therefore, the linear information extraction results obtained from an image by utilizing the sheared filters differ from one another, as do the results obtained from the sheared images by utilizing a single filter. In this way, the shear transformation enables the traditional methods to achieve multi-directional extraction of linear information.
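To make Eqs. (1)-(8) concrete, the following is a minimal Python sketch using NumPy and SciPy (which the paper does not specify); the function names, the interpolation order, and the boundary handling are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def shear_matrices(n_dir):
    """Build the matrix set S_shear = {S_shear_h, S_shear_v} of Eq. (1)."""
    mats = []
    for k in range(-2**n_dir, 2**n_dir + 1):
        s = k / 2**n_dir
        mats.append(np.array([[1.0, 0.0], [s, 1.0]]))  # S_shear_h: (x, y) -> (x + s*y, y)
        mats.append(np.array([[1.0, s], [0.0, 1.0]]))  # S_shear_v: (x, y) -> (x, s*x + y)
    return mats

def shear_image(obj, S):
    """Produce M'(x', y') = M(x, y) with (x', y') = (x, y) @ S, per Eqs. (2)-(6).

    scipy's affine_transform maps *output* coordinates back to *input*
    coordinates, so the inverse matrix of Eq. (7) is what is effectively
    applied here. Boundary handling is simplified: the sheared object can
    grow beyond the original frame, and out-of-range samples become zeros.
    """
    S_inv_T = np.linalg.inv(S).T           # (x', y') column vectors -> (x, y)
    P = np.array([[0.0, 1.0], [1.0, 0.0]])
    A = P @ S_inv_T @ P                    # the same map in scipy's (row, col) order
    return ndimage.affine_transform(obj, A, order=1, mode='constant', cval=0.0)

# Example: shear a test image in all directions for n_dir = 2.
if __name__ == "__main__":
    image = np.zeros((64, 64)); image[32, :] = 1.0   # one horizontal line
    sheared_set = [shear_image(image, S) for S in shear_matrices(2)]
```

Because `affine_transform` pulls values from input coordinates, passing the map P (S^{-1})^T P realizes the forward shear of Eq. (2) while keeping the element values unchanged, as required by Eqs. (4) and (6).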
3. General model for linear information extraction (LIE) based on the shear transformation

To address the inaccuracy and incompleteness of LIE performed by traditional methods in different tasks, this paper presents a general model to improve the performance of LIEM by utilizing the shear transformation. The proposed model makes full use of the multi-directionality of the shear transformation to overcome, directly or indirectly, the directional limitation of traditional LIEM, and it can be applied to most traditional LIE methods. The general model is expressed as:
I_image_set = I_image ∗ S_shear ∗ O_operation   (9)
where I_image_set is an image set consisting of all the extracted results, I_image is the original image, S_shear is the shear transformation, and O_operation is a LIE operator. Our general model has two different forms, I′_image = F(I_image ∗ (S_shear ∗ O_operation)) and I′_image = F(((I_image ∗ S_shear) ∗ O_operation) ∗ S_shear_inv), which can be selected according to the specific task. Here I′_image is the final result image, and F(·) is a fusion operation that produces the final single image.
3.1. Extracting linear information by performing the shear transformation on an operator
In Eq. (9), if O_operation is a filter operator rather than a sequence of steps to extract the linear information, then we can use the first form of the model, I′_image = F(I_image ∗ (S_shear ∗ O_operation)). Here, the shear
transformation is performed on the filter operator O_operation, and a set of filter operators O_operation_set in different directions is obtained, which makes up for the deficiency of directional limitation.

O_operation_set = S_shear ∗ O_operation   (10)

Then,

O_operation_set = {O_1, O_2, O_3, ..., O_i, ..., O_n}   (11)

where O_i represents a transformed filter operator and n is the number of such operators. The operators in the set O_operation_set have their own directions. The operators in every specific direction are then used to extract the linear information from the original image, and an image set I_image_set consisting of all the extracted results can be obtained.

I_image_set = I_image ∗ O_operation_set
            = I_image ∗ {O_1, O_2, O_3, ..., O_i, ..., O_n}
            = {I_image ∗ O_1, I_image ∗ O_2, I_image ∗ O_3, ..., I_image ∗ O_i, ..., I_image ∗ O_n}
            = {I_1, I_2, I_3, ..., I_i, ..., I_n}   (12)

Then,

I_image_set = {I_1, I_2, I_3, ..., I_i, ..., I_n}   (13)

Furthermore, the final result image I′_image is obtained by utilizing probability fusion to fuse all the extracted results in I_image_set.

I′_image = \sum_{i=1}^{n} p_i \cdot I_i   (14)

where p_i is the weight of the image I_i. Unlike the original operator, which is sensitive to one or only a few directions, the transformed filter operators are sensitive to many more directions. Therefore, we can extract the linear information from an image in multiple directions with the filter operators in O_operation_set, rather than in a few directions. This form of the model thus makes it possible to extract linear information that was difficult to capture before.

3.2. Extracting linear information by performing the shear transformation on an image

In Eq. (9), if O_operation is not a filter operator but a series of steps to extract linear information, we can use the second form of the model, I′_image = F(((I_image ∗ S_shear) ∗ O_operation) ∗ S_shear_inv). Here, the shear transformation is performed on the image I_image, and a transformed image set I_image_set in different directions is obtained.

I_image_set = I_image ∗ S_shear   (15)

Then,

I_image_set = {I_1, I_2, I_3, ..., I_i, ..., I_n}   (16)

where I_i represents the ith transformed image and n is the number of these images. The images in the set I_image_set have their own directions. We then extract the linear information from all the transformed images in I_image_set by the operator O_operation, and get an image set I′_image_set of the corresponding result of each transformed image.

I′_image_set = I_image_set ∗ O_operation
             = {I_1, I_2, I_3, ..., I_i, ..., I_n} ∗ O_operation
             = {I_1 ∗ O_operation, I_2 ∗ O_operation, I_3 ∗ O_operation, ..., I_i ∗ O_operation, ..., I_n ∗ O_operation}
             = {I′_1, I′_2, I′_3, ..., I′_i, ..., I′_n}   (17)

Then,

I′_image_set = {I′_1, I′_2, I′_3, ..., I′_i, ..., I′_n}   (18)

All the images in I′_image_set need to be transformed back by the inverse shear transformation.

I′_image_set_inv = I′_image_set ∗ S_shear_inv
                 = {I′_1, I′_2, I′_3, ..., I′_i, ..., I′_n} ∗ S_shear_inv
                 = {I′_1 ∗ S_shear_inv, I′_2 ∗ S_shear_inv, I′_3 ∗ S_shear_inv, ..., I′_i ∗ S_shear_inv, ..., I′_n ∗ S_shear_inv}
                 = {I′_1_inv, I′_2_inv, I′_3_inv, ..., I′_i_inv, ..., I′_n_inv}   (19)

Then,

I′_image_set_inv = {I′_1_inv, I′_2_inv, I′_3_inv, ..., I′_i_inv, ..., I′_n_inv}   (20)

Furthermore, the final result image I′_image is obtained by utilizing probability fusion to fuse all the extracted results in I′_image_set_inv.

I′_image = \sum_{i=1}^{n} p_i \cdot I′_{i\_inv}   (21)
where p_i is the weight of the image I′_i_inv. In this form of the model, a transformed image set with different directional characteristics is obtained by performing the shear transformation on the original image, and the relative positions of the pixels in these transformed images change (although the pixel values in each image are preserved), so the surrounding pixels around the linear information also change. Therefore, there are differences among the linear information extracted from these transformed images by the same operator, and the extracted linear information can be fused to obtain the optimal result, since more linear information is extracted in multiple directions.

4. Experiments and analysis

In order to verify the generality and validity of the proposed model, we conduct several experiments, including edge detection, line extraction, and the extraction of detail information with linear features. In these experiments, our model has been successfully applied to most LIEM. In addition, this paper extends the model to two other LIE methods.

4.1. Improving the performance of edge detection by the proposed model

The goal of edge detection is to detect edges from images continuously and completely, with little noise in the result images. At present, there are many edge detection methods. The traditional edge detection algorithms utilize detection operators such as Sobel, Roberts, Prewitt and Laplacian. In this case, we can use the first form of the model, I′_image = F(I_image ∗ (S_shear ∗ O_operation)); a minimal sketch of this combination is given below. There are also methods that detect edges through a series of steps, such as the wavelet-based edge detection algorithms, and then the second form of the model, I′_image = F(((I_image ∗ S_shear) ∗ O_operation) ∗ S_shear_inv), can be used.
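The following Python sketch shows the first form with a Sobel kernel standing in for O_operation; it reuses the shear_matrices/shear_image helpers sketched in Section 2 and assumes uniform fusion weights p_i = 1/n, since the paper does not specify how the weights are chosen.

```python
import numpy as np
from scipy import ndimage

def extract_form1(image, n_dir=2):
    """First form of Eq. (9): shear the operator, filter, then fuse (Eqs. (10)-(14))."""
    image = np.asarray(image, dtype=float)
    sobel_x = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])              # O_operation
    results = []
    for S in shear_matrices(n_dir):                     # Eq. (10): O_operation_set
        kernel = shear_image(sobel_x, S)                # one directional operator O_i
        results.append(np.abs(ndimage.convolve(image, kernel)))  # Eq. (12): I_i
    p = 1.0 / len(results)                              # assumed uniform weights p_i
    return p * np.sum(results, axis=0)                  # Eq. (14): probability fusion
```

Shearing such a small kernel relies on interpolation, so in practice the sheared operators may need renormalization; the sketch omits this.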
Fig. 1. The results obtained by Sobel and our model; (a) The original image; (b) The result obtained by Sobel; (c) The result obtained by our model.
Fig. 2. The results obtained by a wavelet-based edge detection algorithm and our model; (a) The original image; (b) The result obtained by a wavelet-based edge detection algorithm; (c) The result obtained by our model.
Figs. 1 and 2 show the results obtained using the two forms of our model. Fig. 1(a) is an original image, Fig. 1(b) is the result obtained by Sobel, and Fig. 1(c) is the result obtained by our model. From Fig. 1(b), Sobel cannot detect some weak edges and is sensitive to noise. In contrast, the result obtained by our model has more edges and less noise, as shown in Fig. 1(c). Fig. 2(a) is the original image, Fig. 2(b) is the result obtained by a wavelet-based edge detection algorithm, and Fig. 2(c) is the result obtained by our model. From Fig. 2(a), it is obvious that there are many edges in multiple directions in the original image, but the wavelet transform operates in only three directions, which makes it difficult to detect multi-directional edges completely. Therefore, there are many discontinuous edges in Fig. 2(b). However, the multi-directionality of the shear transformation largely compensates for the directional limitations of the wavelet transform in our model, which can detect edges in more directions, and the detected edges are more continuous and complete, as shown in Fig. 2(c).

Furthermore, three images of a dataset [5] for edge detection are used to verify the effectiveness of our model. The results obtained by the traditional Sobel, the improved Sobel [7], the traditional wavelet, and the method based on Wavelet Multi-Scale Registration and Modulus Maximum (WMSR-MM) [29] are used for comparison, and our model is applied on top of the improved Sobel and WMSR-MM. The resulting images are shown in Figs. 3 and 4. From these results, we can see that the improved Sobel detects more edges than the traditional Sobel (as shown in Fig. 3(a3-c3)), and so does WMSR-MM, as shown in Fig. 4(a1-c1). However, the methods based on our model have the best performance and detect more accurate edges, as shown in Figs. 3(a4-c4) and 4(a2-c2). Moreover, the image dataset provides a ground-truth image for each original image, so we can give two quantitative evaluations: the number of edge pixels accurately detected (NPAD) and the number of edge pixels missed in detection (NPMD); the results are shown in Tables 1 and 2 (a sketch of these counts follows). Consistent with the resulting images, the results obtained by our model have higher NPAD and lower NPMD.
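As an illustration of the two counts, here is a sketch assuming binary edge maps and a strict pixel-wise match against the ground truth; the paper does not state whether a localization tolerance is applied, so none is used here.

```python
import numpy as np

def npad_npmd(result, ground_truth):
    """NPAD: edge pixels accurately detected; NPMD: ground-truth edge pixels missed."""
    r = np.asarray(result, dtype=bool)
    g = np.asarray(ground_truth, dtype=bool)
    npad = int(np.count_nonzero(r & g))   # detected pixels that match the ground truth
    npmd = int(np.count_nonzero(~r & g))  # ground-truth edge pixels left undetected
    return npad, npmd
```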
Fig. 3. The original images, the ground truth and the resulting images obtained by different methods based on Sobel. (a-c) The original images, (a1-c1) the ground truth, (a2-c2) the results obtained by Sobel, (a3-c3) the results obtained by the improved Sobel, (a4-c4) the results obtained by our model.
Fig. 4. The resulting images obtained by different methods based on wavelet. (a-c) The results obtained by wavelet, (a1-c1) the results obtained by WMSR-MM, (a2-c2) the results obtained by our model.
Table 1. The quantitative evaluations of the results obtained by different methods based on Sobel.

Image               Method            NPAD    NPMD
43.pgm              Sobel             4223    4610
43.pgm              Improved Sobel    4563    4270
43.pgm              Our model         5901    2932
airfield.pgm        Sobel             3177    10,932
airfield.pgm        Improved Sobel    3654    10,455
airfield.pgm        Our model         4914    9195
mainbuilding.pgm    Sobel             3837    6647
mainbuilding.pgm    Improved Sobel    4765    5719
mainbuilding.pgm    Our model         6722    3892
Table 2. The quantitative evaluations of the results obtained by different methods based on wavelet.

Image               Method      NPAD    NPMD
43.pgm              Wavelet     4107    3726
43.pgm              WMSR-MM     4328    3605
43.pgm              Our model   5393    3440
airfield.pgm        Wavelet     3539    10,570
airfield.pgm        WMSR-MM     3938    10,124
airfield.pgm        Our model   4822    9287
mainbuilding.pgm    Wavelet     5824    4660
mainbuilding.pgm    WMSR-MM     6194    4290
mainbuilding.pgm    Our model   6596    3888
4.2. Improving the extraction of detail information with linear features by the proposed model

The Chinese calligraphy characters in calligraphy writing images contain a great deal of detail information with linear features, which is an important expression of the form and spirit elements of calligraphy, as shown in Fig. 5(a) and (b). FFCM [16] can detect most of the information of the Chinese calligraphy characters (as shown in Fig. 5(a1-b1)), but its ability to extract the spirit information needs to be improved. In order to extract this kind of information more accurately, a calligraphy character information extraction method combining multiple channels and guided filters (MCGF) was proposed by Zheng et al. to extract the form and spirit information of calligraphy works [33], as shown in Fig. 5(a2-b2). However, MCGF is limited in its transformation directions, which makes it difficult to extract some detailed information and weak line segments in the half-dry strokes. To solve this problem, our model I′_image = F(I_image ∗ (S_shear ∗ O_operation)) can be used to extract the Chinese calligraphy characters in more directions, which makes the extracted calligraphy characters more complete; the results contain more ink-change information and thus better reflect the beauty of the calligraphy, as shown in Fig. 5(a3-b3).

4.3. Improving the performance of line extraction by the proposed model

In most images, there are a large number of lines (including straight lines and curves), which are often the key information for object detection and recognition. For example, in a scanned topographic map with a complex background, the colors of the background pixels are very similar to those of the pixels on the lines, and these lines run in many directions, as shown in Fig. 6(a-c). The lines in such topographic maps are difficult to detect with the traditional image segmentation methods [16], as shown in Fig. 6(a1-c1). Fig. 6(a2-c2) shows the line extraction results using energy density. It can be seen that there are many broken lines due to the directional limitation of the energy density templates. Fig. 6(a3-c3) shows the lines extracted by our model; the lines in the resulting images are much more continuous.
Fig. 5. The results obtained by FFCM, MCGF and our model. (a-b) The original images, (a1 -b1 ) The results obtained by FFCM, (a2 -b2 ) The results obtained by MCGF, (a3 -b3 ) The results obtained by our model.
That is because the shear transformation increases the directional features of the lines, which indirectly compensates for the directional limitation of the line extraction templates.
4.4. Two other methods improved by our model

In our model, the shear transformation is utilized to improve the ability of LIEM. Some special cases of this model have already been applied to practical tasks, such as edge detection [26], line extraction [17], and calligraphy character extraction [33]. These improved methods were obtained by introducing the shear transformation into the traditional methods, which significantly improves their sensitivity to multi-directional linear information and weak linear information; therefore, the extracted linear information has higher accuracy. In order to further demonstrate the versatility of the model, we apply it to two other line detection methods, and the related experiments and discussions are as follows.

(1) Improving the performance of the Hough transform by the proposed model. The Hough transform [8,30] is a common method for detecting lines, and the lines detected using the Hough transform on three images are shown in Fig. 7(b1-b3). From the results, it can be seen that most of the linear information is detected well. However, some weak linear information is still not extracted, which leads to broken lines and missed detections. In contrast, the method improved by our model extracts more linear information, the continuity of the lines is greatly improved, and the results have less noise, as shown in Fig. 7(c1-c3). A sketch of this combination follows.
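The following Python sketch shows the second form with OpenCV's probabilistic Hough transform standing in for O_operation; it reuses the Section 2 helpers, and the Canny/Hough parameter values and the uniform fusion weights are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def extract_form2_hough(image, n_dir=2):
    """Second form of Eq. (9): shear the image, detect, inverse-shear, fuse."""
    acc = np.zeros(image.shape[:2], dtype=float)        # image: 2D grayscale array
    mats = shear_matrices(n_dir)
    for S in mats:
        sheared = shear_image(image.astype(float), S)                # Eq. (15)
        edges = cv2.Canny(np.clip(sheared, 0, 255).astype(np.uint8), 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                minLineLength=20, maxLineGap=5)      # Eq. (17)
        canvas = np.zeros_like(acc)
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                cv2.line(canvas, (x1, y1), (x2, y2), 1.0, 1)
        acc += shear_image(canvas, np.linalg.inv(S))                 # Eq. (19)
    return acc / len(mats)                                           # Eq. (21), p_i = 1/n
```

Inverse-shearing the rasterized line map (rather than the endpoint coordinates) keeps the sketch short; transforming the endpoints with S_shear_inv directly would avoid resampling blur.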
Fig. 6. The results obtained by FFCM, the methods based on the energy density and our model. (a-c) The original images, (a1 -c1 ) The results obtained by FFCM, (a2 -c2 ) The results obtained by the methods based on the energy density, (a3 -c3 ) The results obtained by our model.
(2) Improving the performance of the line segment detector (LSD) by the proposed model. LSD utilizes the gradient information of the image to detect straight lines, and it is more efficient than the Hough transform [13,22]. In this experiment, the traditional LSD is used to detect the lines in three test images, and the results are shown in Fig. 8(b1-b3). The lines extracted by the method improved by our model are shown in Fig. 8(c1-c3). Comparing these experimental results, we can see that the multi-directionality of the shear transformation largely improves the ability of LSD to detect linear information. Therefore, the improved method detects more linear information, especially weak line information, and the detected lines are more continuous and smoother.

In addition, we also use three images from the edge detection dataset to test our model, and quantitative evaluations can be obtained by comparing the result images with the ground-truth images. The traditional Hough transform (HT), the method combining Canny and the Hough transform (CHT) [14], the traditional LSD, and the multiscale line segment detector (MLSD) [19] are used for comparison, and our methods are applied on top of CHT and MLSD. The detected lines are shown in Figs. 9 and 10. From the results, we can see that CHT and MLSD extract more lines than HT and LSD, respectively, as shown in Figs. 9(a1-c1) and 10(a1-c1). In contrast, the methods based on our model have the best performance; they can extract more accurate lines, as shown in Figs. 9(a2-c2) and 10(a2-c2).
Table 3. The quantitative evaluations of the results obtained by different methods based on HT.

Image               Method      NPAD    NPMD
43.pgm              HT          2471    6362
43.pgm              CHT         4768    4065
43.pgm              Our model   6426    2407
airfield.pgm        HT          2869    11,240
airfield.pgm        CHT         4545    9564
airfield.pgm        Our model   6149    7960
mainbuilding.pgm    HT          4380    6104
mainbuilding.pgm    CHT         5044    5140
mainbuilding.pgm    Our model   7345    4139
Furthermore, we also give the quantitative evaluations of NPAD and NPMD by comparing the result images with the ground truth. The experimental data are shown in Tables 3 and 4. From these quantitative evaluations, we can conclude that the results obtained by our model have higher NPAD and lower NPMD. For example, the results obtained by our model have on average about 2000 more NPAD than those obtained by CHT, and about 1000 more NPAD than those obtained by MLSD.
Fig. 7. The results obtained by Hough transform and our model; (a) The original images; (b) The results obtained by Hough transform; (c) The results obtained by our model.
Fig. 8. The results obtained by LSD and our model; (a) The original images; (b) The results obtained by LSD; (c) The results obtained by our model.
For NPMD, the results obtained by our model have on average about 1100 lower NPMD than those obtained by CHT, and about 300 lower NPMD than those obtained by MLSD.

Fig. 9. The resulting images obtained by different methods based on HT. (a-c) The results obtained by HT, (a1-c1) the results obtained by CHT, (a2-c2) the results obtained by our model.

5. Conclusion

This paper presents a general model for improving the performance of LIEM by introducing the shear transformation with its multi-directionality. The shear transformation can directly increase the directional characteristics of the LIE operators, or indirectly compensate for the directional limitation of LIEM by transforming the image in multiple directions. The improved methods solve the problems of incomplete and discontinuous extraction of linear information by the traditional methods, and the fusion of the image set consisting of all the extracted results also enhances the robustness of LIEM and their insensitivity to noise. However, the introduction of the shear transformation also increases the computational complexity and the running time of LIEM.

Conflict of interest
There are no conflicts of interest for this paper.

Acknowledgments

The work was jointly supported by the National Natural Science Foundation of China under grants No. 61502387, 61702415, 61802335 and 61876145; the Talent Support Project of the Science Association in Shaanxi Province: 20180108; the Natural Science Foundation of Shaanxi Province under grant No. 2016JQ6029; and the 59th China Postdoctoral Science Fund, No. 2016M592832.
Fig. 10. The resulting images obtained by different methods based on LSD. (a-c) The results obtained by LSD, (a1-c1) the results obtained by MLSD, (a2-c2) the results obtained by our model.
Table 4. The quantitative evaluations of the results obtained by different methods based on LSD.

Image               Method      NPAD    NPMD
43.pgm              LSD         2388    6445
43.pgm              MLSD        4677    4156
43.pgm              Our model   5269    3564
airfield.pgm        LSD         2289    11,820
airfield.pgm        MLSD        3877    10,232
airfield.pgm        Our model   4134    9975
mainbuilding.pgm    LSD         1687    7733
mainbuilding.pgm    MLSD        4476    4944
mainbuilding.pgm    Our model   4508    4912
References

[1] J. Canny, A computational approach to edge detection, in: Readings in Computer Vision, Elsevier, 1987, pp. 184–203.
[2] X. Chang, Z. Ma, M. Lin, Y. Yang, A.G. Hauptmann, Feature interaction augmented sparse learning for fast kinect motion detection, IEEE Trans. Image Process. 26 (8) (2017) 3911–3920.
[3] X. Chang, F. Nie, S. Wang, Y. Yang, X. Zhou, C. Zhang, Compound rank-k projections for bilinear analysis, IEEE Trans. Neural Network Learn. Syst. 27 (7) (2016) 1502–1513.
[4] Y. Chen, R. Wang, J. Qian, Extracting contour lines from common-conditioned topographic maps, IEEE Trans. Geosci. Remote Sens. 44 (4) (2006) 1048–1057.
[5] Edge detector evaluation dataset, accessed Dec. 1999 [Online]. Available: http://figment.csee.usf.edu/edge/roc/.
[6] M.A. Duval-Poo, F. Odone, E. De Vito, Edges and corners with shearlets, IEEE Trans. Image Process. 24 (11) (2015) 3768–3780.
[7] K. Goel, M. Sehrawat, A. Agarwal, Finding the optimal threshold values for edge detection of digital images and comparing among bacterial foraging algorithm, canny and sobel edge detector, in: 2017 Int. Conf. Comput., Commun. Autom. (ICCCA), 2017, pp. 1076–1080.
[8] J. Illingworth, J. Kittler, A survey of the Hough transform, Computer Vision, Graphics, Image Process. 44 (1) (1988) 87–116.
[9] W.-Q. Lim, The discrete shearlet transform: a new directional transform and compactly supported shearlet frames, IEEE Trans. Image Process. 19 (5) (2010) 1166–1180.
[10] M. Luo, X. Chang, Z. Li, Simple to complex crossmodal learning to rank, Comput. Vision Image Understanding 163 (2017) 67–77.
[11] M. Luo, X. Chang, L. Nie, Y. Yang, A.G. Hauptmann, Q. Zheng, An adaptive semisupervised feature analysis for video semantic recognition, IEEE Trans. Cybern. 48 (2) (2018) 648–660.
[12] Z. Ma, X. Chang, Z. Xu, A.G. Hauptmann, Joint attributes and event analysis for multimedia event detection, IEEE Trans. Neural Network Learn. Syst. 29 (2018) 2921–2930.
[13] S. Mansouri, M. Charhad, M. Zrigui, Arabic text detection in news video based on line segment detector, Res. Comput. Sci. (2017).
[14] Y. Meng, Z. Zhang, H. Yin, Automatic detection of particle size distribution by image analysis based on local adaptive canny edge detection and modified circular hough transform, Micron 106 (2018) 34–41.
[15] Q. Miao, P. Xu, X. Li, J. Song, W. Li, Y. Yang, The recognition of the point symbols in the scanned topographic maps, IEEE Trans. Image Process. 26 (6) (2017) 2751–2766.
[16] Q. Miao, P. Xu, T. Liu, A novel fast image segmentation algorithm for large topographic maps, Neurocomputing 168 (2015) 808–822.
[17] Q. Miao, P. Xu, T. Liu, Y. Yang, J. Zhang, W. Li, Linear feature separation from topographic maps using energy density and the shear transform, IEEE Trans. Image Process. 22 (4) (2013) 1548–1558.
[18] L. Nie, L. Zhang, Y. Yan, X. Chang, M. Liu, L. Shaoling, Multiview physician-specific attributes fusion for health seeking, IEEE Trans. Cybern. 47 (11) (2017) 3680–3691.
[19] Y. Salan, R. Marlet, P. Monasse, Multiscale line segment detector for robust and accurate SfM, in: 2016 23rd International Conference on Pattern Recognition (ICPR), 2016, pp. 2000–2005.
[20] B. Song, X. Li, Power line detection from optical images, Neurocomputing 129 (2014) 350–361.
[21] O.P. Verma, A.S. Parihar, An optimal fuzzy system for edge detection in color images using bacterial foraging algorithm, IEEE Trans. Fuzzy Syst. 25 (1) (2017) 114–127.
[22] R.G. Von Gioi, J. Jakubowicz, J.-M. Morel, G. Randall, LSD: a fast line segment detector with a false detection control, IEEE Trans. Pattern Anal. Mach. Intell. 32 (4) (2010) 722–732.
[23] L. Wang, X. Gong, Y. Zhang, P. Xu, X. Chen, D. Fang, X. Zheng, J. Guo, Artistic features extraction from chinese calligraphy works via regional guided filter with reference image, Multimed. Tools Appl. 77 (3) (2018) 2973–2990.
[24] S. Wang, X. Li, L. Yao, Q. Sheng, G. Long, Learning multiple diagnosis codes for icu patients with local disease correlation mining, ACM Trans. Knowl. Disc. Data (TKDD) 11 (3) (2017) 1–31.
[25] D. Xu, X. Wang, G. Sun, H. Li, Towards a novel image denoising method with edge-preserving sparse representation based on laplacian of b-spline edge-detection, Multimed. Tools Appl. 76 (17) (2017) 17839–17854.
[26] P. Xu, Q. Miao, C. Shi, J. Zhang, W. Li, An edge detection algorithm based on the multi-direction shear transform, J. Vis. Commun. Image Represent. 23 (5) (2012) 827–833.
[27] P. Xu, Q. Miao, C. Shi, J. Zhang, M. Yang, General method for edge detection based on the shear transform, IET Image Process. 6 (7) (2012) 839–853.
[28] P. Xu, X. Zheng, X. Chang, Q. Miao, Z. Tang, X. Chen, D. Fang, Artistic information extraction from chinese calligraphy works via shear-guided filter, J. Vis. Commun. Image Represent. 40 (2016) 791–807. [29] W. Xu, X. Liu, Z. Dai, A new edge detection method of magnetic flux leakage image based on wavelet multi-scale registration and modulus maximum, in: 2018 Chin. Control Decis. Conf., 2018, pp. 400–404. [30] Z. Xu, B.-S. Shin, R. Klette, Accurate and robust line segment extraction using minimum entropy with hough transform, IEEE Trans. Image Process. 24 (3) (2015) 813–822. [31] J. Zhang, H. Shan, X. Cao, P. Yan, X. Li, Pylon line spatial correlation assisted transmission line detection, IEEE Trans. Aerosp. Electron. Syst. 50 (4) (2014) 2890–2905. [32] K. Zhang, L. Ding, Y. Cai, W. Yin, F. Yang, J. Tao, L. Wang, A high performance real-time edge detection system with neon, IEEE, 2017, pp. 847–850. [33] X. Zheng, Q. Miao, Z. Shi, Y. Fan, W. Shui, A new artistic information extraction method with multi channels and guided filters for calligraphy works, Multimedia Tools Appl. 75 (14) (2016) 8719–8744. [34] L. Zhu, Z. Huang, X. Liu, Discrete multimodal hashing with canonical views for robust mobile landmark search, IEEE Trans. Multimedia 19 (9) (2017) 2066–2079. [35] L. Zhu, J. Shen, H. Jin, Content-based visual landmark search via multimodal hypergraph learning, IEEE Trans. Cybern. 45 (12) (2015) 2756–2769. [36] L. Zhu, J. Shen, H. Jin, Landmark classification with hierarchical multi-modal exemplar feature, IEEE Trans. Multimedia 17 (7) (2015) 981–993. [37] L. Zhu, J. Shen, L. Xie, Unsupervised topic hypergraph hashing for efficient mobile image retrieval, IEEE Trans. Cybern. 47 (11) (2017) 3941–3954. [38] L. Zhu, J. Shen, L. Xie, Unsupervised visual hashing with semantic assistant for content-based image retrieval, IEEE Trans. Knowl. Data Eng. 29 (2) (2017) 472–486.