CHINESE ASTRONOMY AND ASTROPHYSICS Chinese Astronomy and Astrophysics 43 (2019) 128–142

Automatic Identification and Extraction of Clouds from Astronomical Images Based on Support Vector Machine†

WANG Li-wen1  JIA Peng1  CAI Dong-mei1  LIU Hui-gen2

1 School of Physics and Optoelectronics, Taiyuan University of Technology, Taiyuan 030024
2 School of Astronomy and Space Science, Nanjing University, Nanjing 210034

Abstract  In time-domain astronomical research, optical telescopes with small and medium apertures acquire huge amounts of data through automatic sky surveys. A certain proportion of the automatically acquired data is interfered with by clouds, which makes it very difficult to automatically extract dim objects and perform photometry. It is therefore necessary to identify and extract the clouds from these images as index figures for reference in the subsequent information extraction. In this paper, an astronomical image selection system based on the support vector machine is proposed, which uses the grey-value inconsistency and texture differences as references to select the images interfered with by clouds. Based on the classification results, the index figures of clouds can be further extracted through histogram transformation and feature selection. The experimental results show that our method achieves real-time selection of astronomical images with a classification accuracy better than 98%. Through histogram transformation and feature selection, the index figure of clouds can be preliminarily extracted as a reference for photometry and dim-object extraction.

Key words  techniques: image processing—techniques: photometry—astronomical data bases: surveys

1. INTRODUCTION

Time-domain astronomy is a very active research field at present; its objects of study include supernovae, variable stars, and extrasolar planets with continuous luminosity variations, as well as rapidly moving near-Earth objects. Although the timescales of their position and luminosity variations differ, as do their observational requirements, in general they all require the continuous acquisition of observational data within their respective time windows. Therefore, many small and medium-sized telescopes distributed around the world are used for continuous imaging observations under automatic control systems[1−7]. However, this kind of observation brings many new challenges to time-domain astronomical observation and data processing[8]: apart from rapidly moving near-Earth objects, which appear as line-shaped images, most objects of interest are point sources, and in the huge amount of astronomical data acquired at the instrument, a certain proportion may be interfered with by cloud layers in the sky. When we detect and perform photometry on the astronomical objects in these data, the cloud layer degrades the observational accuracy, and may even completely spoil the observation[9]. Fig.1 shows image data acquired at the instrument: Panels (a)–(c) are normal images without clouds, and Panels (d)–(f) are images interfered with by clouds.

† Supported by National Natural Science Foundation (11503018, U1631133), Shanxi High School Science and Technology Innovation Project (2016033). Received 2018–01–29; revised version 2018–03–08. A translation of Acta Astron. Sin. Vol. 59, No. 4, pp. 34.1–34.12, 2018. [email protected]

0275-1062/01/$-see front matter © 2019 Elsevier B. V. All rights reserved. doi:10.1016/j.chinastron.2019.02.007

Fig. 1 Image examples. (a)–(c) normal images; (d)–(f) images with clouds

Because of the influence of the cloud layer, light penetrating the cloud is attenuated, causing errors in photometry; at the same time, light from the ground is reflected into the imaging system by the cloud, making the cloudy area brighter than the cloudless area and seriously affecting the detection of weak and transient sources. In consideration of this, in order to provide a reference for the photometry and extraction of dark and transient objects, it is necessary to first extract the clouds from the image and, according to the grey-value variation of the clouds, build an index figure. The index figure is a frame of the same size as the original data, whose pixel grey values represent the profile and grey-value information of the clouds in the obtained data. If there is no cloud in the image, the derived index figure is meaningless; hence, before cloud extraction we must select the images with clouds. Because of the huge quantity of time-domain observational data, manual selection would be very hard and time-consuming, so it is necessary to establish a rapid classification system for cloud images. According to the classification result, we can then further extract the clouds in the image and build an index figure useful for scientific research[10,11]. But evaluating the data characteristics of an image is always a difficult problem, especially for astronomical images. As there are many external interference factors, whose effects differ with the observation task, it is very difficult to judge directly whether clouds exist in an image from several empirical parameters and distribution ranges derived from an analysis of the physical imaging process.
Instead of directly analyzing the physical process, an alternative is to start from the data characteristics: label the astronomical images with clouds, combine the labels with the morphological features of cloud images, and then use machine learning to establish a classifier consistent with the data characteristics. In galaxy classification and spectrum recognition[12,13], the Support Vector Machine (SVM) is a commonly used classifier. Its basic strategy is to ensure that the different classes of data have a maximal class interval. As the problem of interval maximization can generally be converted into a convex quadratic programming problem, the SVM, compared with neural networks, random trees, and decision trees, can produce the required classifier from a smaller amount of data[14,15]. This relaxes the requirement on data accumulation and reduces the workload of manual labeling. Besides, when the data are not separable in the low-dimensional space, the classification problem can be converted, by means of a kernel-function mapping, into a hyperplane-separation problem in a high-dimensional space. In Sec.2 of this paper, starting from the data characteristics, by analyzing the characteristics of cloud images, we determine descriptive quantities of the image characteristics, establish the classification system, and test it experimentally; in Sec.3, we extract the index figure from images with clouds; finally, Sec.4 gives a summary.

2. RAPID CLASSIFICATION SYSTEM OF ASTRONOMICAL CLOUD IMAGES

The cloud layer has a two-dimensional structure, and we must describe it by parameters derived from its characteristics. Generally, a cloud layer has a certain extension and texture, and its thickness causes differences in grey value; hence we take the grey-value inconsistency and texture features as the evaluation indexes of cloud layers in astronomical images, and adopt five indexes in total, comprising the grey-value inconsistency and four texture features, for cloud recognition.

2.1 Extraction of Cloud Image Characteristics

2.1.1 The grey-value inconsistency

The grey-value inconsistency is an important index representing the image's background grey value. As an evaluation index of an astronomical image, it is defined as:

G = 10 lg(Ps/Pn) ,  (1)

in which Ps and Pn are respectively the maximum and minimum standard deviations of local sub-images. According to the working condition of the instrument during the observation, the typical size of stellar images can be obtained and the template size adjusted accordingly; this paper slides a 9×9 template over the whole image. From the above definition of the grey-value inconsistency, this characteristic quantity reflects the overall (background) fluctuation of the grey value and the inconsistency of local features (stars or other astronomical objects) at the characteristic scale of the image. The grey-value inconsistency is an important reference for image quality: when clouds exist in the image, it varies violently.

2.1.2 Texture features

On the basis of the grey-value inconsistency, texture features (statistical information) give a better description of the cloud layer. The grey-value co-occurrence matrix represents synthetic information about the direction, interval, and variation amplitude of the image's grey values; hence it suits the texture recognition of cloud images and the verification of whether clouds exist in an astronomical image. The elements of the grey-value co-occurrence matrix that satisfy a certain spatial relationship are:

g(i, j) = #{f(x1, z1) = i, f(x2, z2) = j | (x1, z1), (x2, z2) ∈ M × N} ,  (2)

in which i and j are the row and column numbers of the matrix g, M and N are the numbers of rows and columns of the image, g is the grey-value co-occurrence matrix of the image f, and #(x) denotes the number of elements x in the set. If (x1, z1) and (x2, z2) are two points in the image, the distance between them is d, and their angle with the abscissa axis is q, then we can obtain the grey-value co-occurrence matrices g(i, j, d, q) for various distances and angles. Here we study astronomical images, in which most observed objects are rotationally symmetric; this means that co-occurrence matrices of different directions have only very small differences in their effect on the image's texture features, hence we select d = 1, q = 0. Considering the characteristics of clouds in astronomical images obtained during automatic observations, we select the following 4 texture features to evaluate the images:

(1) Energy: Energy is the sum of squares of all elements; it is a measure of the image's flatness. If all the values in g are homogeneous, the value of ASM is relatively small; conversely, when clouds exist, the values are large in some areas and small in others, so the value of ASM is rather large. Therefore, ASM can be taken as a recognition index of clouds:

ASM = Σ_{i=1}^{Mg} Σ_{j=1}^{Ng} [g(i, j)]² ,  (3)

in which Mg and Ng are respectively the numbers of rows and columns of the matrix g.

(2) Contrast: Contrast reflects the local grey-value variation. The existence of clouds leads to a very large grey-value variation in part of the region; hence a cloud image has a large value of CON, which can be taken as another recognition index:

CON = Σ_{i=1}^{Mg} Σ_{j=1}^{Ng} (i − j)² g(i, j) .  (4)

(3) Inverse difference moment: The inverse difference moment represents the homogeneity of the image texture; it measures the amount of local variation in the texture. As a cloud generally has certain extended structures, the variation of its texture feature is rather large, so IDM can be taken as an additional important index:

IDM = Σ_{i=1}^{Mg} Σ_{j=1}^{Ng} g(i, j) / [1 + (i − j)²] .  (5)

(4) Entropy: Entropy is a measure of the information content of the image, and of its randomness. As the objects in time-domain astronomical images are mainly point sources, the entropy of a cloudless image is smaller than that of an image with clouds obtained under the same observational conditions: when clouds exist, the entropy increases. Hence, ENT can be taken as the final important index for cloud detection:

ENT = − Σ_{i=1}^{Mg} Σ_{j=1}^{Ng} g(i, j) log2 g(i, j) .  (6)
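As a sketch of how these four texture quantities can be computed, the following builds the co-occurrence matrix for d = 1, q = 0 (horizontal neighbours) and evaluates Eqs. (3)–(6). The quantisation into a fixed number of grey levels and the normalisation of g to joint probabilities are our assumptions; the paper does not state how the grey values are binned.

```python
import numpy as np

def glcm_features(image, levels=16):
    """Co-occurrence matrix for d=1, q=0, then the four texture features.
    `levels` (grey-value quantisation) is an assumption."""
    # quantise grey values into `levels` bins
    ptp = image.max() - image.min()
    q = np.clip((image.astype(float) - image.min()) / (ptp + 1e-12) * levels,
                0, levels - 1).astype(int)
    g = np.zeros((levels, levels), dtype=float)
    # count horizontally adjacent grey-level pairs, Eq. (2) with d=1, q=0
    np.add.at(g, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    g /= g.sum()                                  # normalise to probabilities
    i, j = np.indices(g.shape)
    asm = np.sum(g ** 2)                          # energy, Eq. (3)
    con = np.sum((i - j) ** 2 * g)                # contrast, Eq. (4)
    idm = np.sum(g / (1.0 + (i - j) ** 2))        # inverse diff. moment, Eq. (5)
    ent = -np.sum(g[g > 0] * np.log2(g[g > 0]))   # entropy, Eq. (6)
    return asm, con, idm, ent
```

On a perfectly flat image the co-occurrence mass sits in a single cell, so ASM and IDM reach their maximum of 1 while CON and ENT vanish, matching the qualitative behaviour described above.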

2.2 Classifier of Cloud Images

2.2.1 SVM

The SVM is a commonly used classification algorithm proposed by Vapnik et al.[16]. It was first used for solving the following binary classification problem:

w^T · x + b = 0 ,  (7)

in which w = (w1, w2, ..., wn) is the normal vector that determines the direction of the hyperplane, n is the dimension of x, x represents a data point, and b determines the distance between the hyperplane and the origin. The SVM maximizes the geometric interval between the two classes by searching for such a hyperplane. The geometric interval s is defined as:

s = yl ,  (8)

in which y indicates the class of the data point, expressed as 1 or −1, and l is the function interval, given by:

l = y(w^T · x + b) .  (9)

Then the problem of maximizing the geometric interval can be transformed into the following convex optimization problem:

min (1/2)||w||² ,  s.t. yi(w^T xi + b) ≥ 1, i = 1, ..., n ,  (10)

which can be solved directly with a quadratic programming optimization package[17]. However, from the discussion of the descriptive model of image characteristics in the above subsection, we find that the various evaluation indexes are not independent of each other; because of the complexity of the image, these indexes are generally correlated. When the cloud layer is relatively thin, as the cloud area increases the overall fluctuation of the grey value becomes larger and the value of ASM increases; at the same time, because grey values in local areas increase, the value of CON changes. When the thickness of the cloud layer increases simultaneously, the grey-value inhomogeneity in local areas decreases, and the corresponding value of CON may remain constant or even become smaller. Because of these complicated correlations, adding further indexes beyond what is required can damage the structure of the feature space, leaving the SVM unable to find the separating hyperplane and degrading the classification performance. For this problem, the SVM can classify the image characteristics by mapping the linear features to a higher-dimensional feature space[15]. A relatively simple method is to take advantage of a kernel function and transform the classification function into:

f(x) = Σ_{i=1}^{n} ai yi K(xi, x) + b ,  (11)

in which K(xi, xj) is the kernel function and ai is a Lagrange multiplier. After the transformation, the convex optimization problem becomes:

max_a Σ_{i=1}^{n} ai − (1/2) Σ_{i,j=1}^{n} ai aj yi yj K(xi, xj) ,
s.t. ai ≥ 0, i = 1, ..., n ,
Σ_{i=1}^{n} ai yi = 0 .

2.2.2 Process of establishing the SVM cloud image classifier

At first, we take a linear kernel as the SVM kernel function, and evaluate its classification performance on the test set. If the system's classification performance is poor (accuracy lower than 90%), we consider converting the SVM kernel into a nonlinear kernel. The flowchart of the classification system is shown in Fig.2. The whole real-time selection system of cloud images comprises the following procedures:

(1) Read the data of every frame of the original images, and label each frame manually according to whether there is cloud in the original data;

(2) Divide the data into two parts, one used as the training set and the other as the test set. From experience, the size of the training set must be much larger than the feature dimension; generally, about 100 frames are sufficient for training;

(3) Extract the 5 features from the training set and test set, and normalize the extracted features. The dimensions of the feature space are given in Table 1, in which "Original feature" indicates the original eigenvalue and "Normalized feature" the normalized eigenvalue. We select X1 as a reference and map all eigenvalues to the same order of magnitude as X1: X2 is normalized as 10000X2, X3 as Ln(X3), X4 as 100X4, and X5 keeps its original value;

(4) Classify the test set using the trained SVM classifier.

The normalization of X1–X5 eliminates the influence of the differences in the orders of magnitude of the different eigenvalues, and ensures the data are of the same order of magnitude during the SVM training. It is noteworthy that the normalization depends on the data: for other observations, the manner of normalization has to be designed according to the data characteristics so that the features show no difference in order of magnitude[18]. For the image examples in Fig.1, from left to right, the calculated eigenvalues are displayed in Table 2.
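As an illustration of steps (2)–(4), the sketch below trains a linear SVM on synthetic feature vectors loosely shaped like Table 2. The synthetic numbers, the z-score standardisation, and the sub-gradient training loop are our stand-ins: the paper trains on real labelled frames and solves the quadratic program of Eq. (10) exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins for the feature vectors (G, ASM, CON, IDM, ENT)
normal = rng.normal([69.4, 10.9, 24.9, 44.7, 7.3], 0.3, (60, 5))
cloud  = rng.normal([55.0,  2.1, 22.6, 37.2, 9.8], 0.8, (60, 5))
X = np.vstack([normal, cloud])
y = np.array([1] * 60 + [-1] * 60)            # Class-1 = +1, Class-2 = -1
X = (X - X.mean(axis=0)) / X.std(axis=0)      # bring features to one scale

def train_linear_svm(X, y, lam=1e-3, lr=0.1, epochs=500):
    """Minimal linear SVM via sub-gradient descent on the hinge loss,
    a stand-in for the QP solver referenced at Eq. (10)."""
    w, b, n = np.zeros(X.shape[1]), 0.0, len(y)
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1            # current margin violators
        w -= lr * (lam * w - (y[mask, None] * X[mask]).sum(0) / n)
        b -= lr * (-y[mask].sum() / n)
    return w, b

w, b = train_linear_svm(X, y)
train_acc = np.mean(np.sign(X @ w + b) == y)
```

Because the two synthetic clusters are well separated (as the real classes are in Table 2), a linear kernel already suffices here, consistent with the flowchart's rule of only moving to a nonlinear kernel when accuracy drops below 90%.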


Fig. 2 System flowchart

Table 1 Dimensions of feature space

| Feature type | G  | ASM     | CON    | IDM   | ENT |
| Original     | X1 | X2      | X3     | X4    | X5  |
| Normalized   | X1 | 10000X2 | Ln(X3) | 100X4 | X5  |

2.3 GPU Acceleration

In practical work, this image classifier is loaded on the control computer to classify the data collected by the CCD. However, the extraction of G traverses the whole image, which makes it slow: for an image of size 4096×4096, the extraction of the feature G alone takes 300 s·frame−1, so the algorithm must be further accelerated. The GPU (graphics processing unit) has strong floating-point and parallel computation capability; compared with the CPU, it is better suited to processing large amounts of parallel data. When the method proposed in this paper computes the grey-value inconsistency, any two window computations are independent of each other, so the calculation is well suited to GPU acceleration. If the image size taken by the telescope is M × N and the template size for the traversal is m × n, then one frame of image requires (M − m + 1) × (N − n + 1) window computations.
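The per-window computation can also be vectorised; as a hedged illustration (this is not the paper's GPU code), the local standard deviation over every 9×9 template can be obtained from running means via Var = E[x²] − E[x]², after which G follows from Eq. (1):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def grey_value_inconsistency(image, template=9):
    """Local standard deviation in every template x template window via
    Var = E[x^2] - E[x]^2, then G = 10*lg(Ps/Pn) as in Eq. (1). Every
    window is independent, which is what makes this GPU-friendly."""
    img = image.astype(float)
    m = uniform_filter(img, template)           # running mean E[x]
    m2 = uniform_filter(img * img, template)    # running mean E[x^2]
    local_std = np.sqrt(np.clip(m2 - m * m, 0.0, None))
    h = template // 2
    local_std = local_std[h:-h, h:-h]           # keep fully covered windows
    return 10.0 * np.log10(local_std.max() / local_std.min())

rng = np.random.default_rng(0)
g = grey_value_inconsistency(rng.normal(100.0, 1.0, (64, 64)))
```

The same window-independence argument carries over directly to a GPU kernel, where each thread evaluates one window.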

By test, we find that with GPU acceleration the extraction speed of the G feature attains 0.43 s·frame−1, an acceleration factor of about 700.

Table 2 The contrast between the eigenvalues of normal images and those of the images with clouds

| Image type | G    | ASM  | CON  | IDM  | ENT  |
| Normal     | 69.2 | 10.8 | 24.9 | 44.6 | 7.3  |
| Normal     | 69.6 | 10.9 | 24.9 | 44.7 | 7.3  |
| Normal     | 69.4 | 11.0 | 24.9 | 44.7 | 7.3  |
| Cloud      | 55.0 | 0.95 | 23.2 | 37.1 | 10.5 |
| Cloud      | 54.5 | 2.73 | 22.8 | 35.8 | 9.4  |
| Cloud      | 55.7 | 2.71 | 21.9 | 38.6 | 9.4  |

2.4 Result of Classification Experiment and Analysis

2.4.1 Experimental data

The data obtained by the 18 cm telescope of the Time Domain Observatory of Nanjing University during test observations at the Xuyi station of Purple Mountain Observatory are selected as the experimental data. The telescope has a field of view of 12°, and adopts Andor's Ikon XL-series CCD with 4096×4096 pixels. From these data we select 665 frames as the experimental sample and label them manually: 150 frames with clouds and 515 frames without clouds, labeled as Class-1 and Class-2, respectively.

2.4.2 Experimental process and result

We randomly select 50% of the Class-1 and Class-2 data as the training set, and take the remainder as the test set. We then run the experiment following the flowchart in Fig.2; the final result is expressed by the confusion matrix in Fig.3. Of the 332 test frames, Class-1 has 75 frames, of which only one is wrongly classified as Class-2, an error rate of 1.3%; Class-2 has 257 frames, of which 3 are wrongly classified as Class-1, an error rate of 1.2%. The overall error rate of Class-1 and Class-2 is 1.2%. This classification accuracy basically attains our requirement, and provides a reference for the further extraction of clouds.
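The error rates quoted above follow directly from the confusion-matrix counts:

```python
# counts read off the confusion matrix in Fig. 3
class1_total, class1_wrong = 75, 1      # cloud frames, misclassified as Class-2
class2_total, class2_wrong = 257, 3     # cloudless frames, misclassified as Class-1

err1 = class1_wrong / class1_total                                    # ~1.3%
err2 = class2_wrong / class2_total                                    # ~1.2%
overall = (class1_wrong + class2_wrong) / (class1_total + class2_total)  # ~1.2%
```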


Fig. 3 The confusion matrix

3. EXTRACTION OF CLOUDS

In order to ensure the spatial and temporal continuity of observations, even though the quality of an image with clouds is degraded, by referring to the index figure the part of the data interfered with by clouds can be excluded, and the remaining data can still be used. Fig.4 shows a cloud image recognized by the classification algorithm designed above. In the following, taking Fig.4 as an example, we describe the process of extracting the index figure of clouds. As the cloud as a whole has a certain profile and texture, a frequency-domain analysis of the image shows that its information is mainly distributed in the low and intermediate frequencies; from an analysis of the grey-value distribution of the image pixels, when the grey value of the cloud is rather high (relatively thick cloud), the cloud pixels produce a peak in the histogram, and when the grey value of the cloud is rather small (relatively thin cloud), the cloud pixels widen the peak produced by the background pixels. Hereby, we have designed the following process to extract the cloud-covered part of the image:

(1) Extract the grey-value histogram of the image, make a multi-peak Gaussian fit, and derive the inflection points of the fitting function. As the grey values of the image are mainly distributed in the background region, the grey-value histogram is approximately Gaussian; therefore, we fit it with the following sum of Gaussian functions:

h = a1 e^{−[(x−b1)/c1]²} + a2 e^{−[(x−b2)/c2]²} + a3 e^{−[(x−b3)/c3]²} ,  (12)

in which a1, b1, c1, a2, b2, c2, a3, b3, and c3 are the coefficients of the fitting function. After deriving the fitting coefficients, we take the 2nd-order derivative of h; its zero-crossings with the x axis are the inflection points of the function, and the values of two inflection points are taken as the upper and lower thresholds of the background grey value. Fig.5 shows the fitting curve h of the grey-value histogram.
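Step (1) can be sketched with a standard least-squares fitter. The synthetic histogram, the starting guess p0, and the use of the dominant component's analytic inflection points b ± c/√2 are our illustrative assumptions; the paper derives the inflection points numerically from the full fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_gauss(x, a1, b1, c1, a2, b2, c2, a3, b3, c3):
    """Eq. (12): a sum of three Gaussian components."""
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2)
            + a3 * np.exp(-((x - b3) / c3) ** 2))

# synthetic grey-value histogram standing in for the real one
x = np.linspace(0.0, 100.0, 400)
hist = triple_gauss(x, 100, 40, 5, 30, 55, 8, 10, 70, 12)
p0 = [90, 38, 6, 25, 50, 10, 8, 72, 10]        # hypothetical starting guess
coeff, _ = curve_fit(triple_gauss, x, hist, p0=p0)

# for one component a*exp(-((x-b)/c)^2), h'' = 0 at x = b +/- c/sqrt(2);
# the two inflection points bracket the background grey values
b, c = coeff[1], abs(coeff[2])
lower, upper = b - c / np.sqrt(2.0), b + c / np.sqrt(2.0)
```

In practice the starting guess would be seeded from the histogram's dominant peak rather than hard-coded.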

Fig. 4 Image with clouds

(2) Take the values of the two inflection points as the upper and lower thresholds of the background grey value, divide the interval between them into 6 equal divisions, draw the contour lines for the different threshold values, and delete regions with an area of less than 500 pixels, taking each threshold value as the grey value of the corresponding gradient level; the obtained grey-value gradient diagram is shown in Fig.6.

(3) Filter the original image. As the background and clouds in the image are low-frequency information while the stellar images are high-frequency information, a low-pass filter can reduce the interference of individual bright stars. We therefore select Gaussian filtering: with SExtractor we extract the position information of the stars and estimate the maximum diameter of the stellar images, and combine this with experience to select a kernel of suitable size for filtering the image; here we select a kernel size of 50.
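Step (3) amounts to a Gaussian low-pass filter. The sketch below uses a synthetic frame with one bright "star"; the translation of the kernel size 50 into a Gaussian sigma is our assumption, and the SExtractor star-extraction step is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.normal(100.0, 1.0, (200, 200))   # flat sky background stand-in
image[100, 100] += 5000.0                    # a single bright point source

# a kernel "size" of ~50 pixels rendered as sigma ~ size/6 (assumption)
smoothed = gaussian_filter(image, sigma=50.0 / 6.0)
```

The point source is spread out and strongly suppressed, while the low-frequency background (and any extended cloud) passes through nearly unchanged, which is exactly the property the text relies on.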

Fig. 5 The multi-peak Gaussian fitting of the gray value histogram

(4) On the filtered image, according to the gradient derived in (2), perform the following operation:

x = { x̄′ ,  x ≥ x̄ + S ;  x ,  x < x̄ + S } ,  (13)

in which x̄ is the mean value of the image, S is the standard deviation of the image, and x̄′ is the mean value over the region where x < x̄ + S. The derived index figure of clouds is shown in Fig.7. From this figure we find that the profile of most clouds has been extracted, providing a reference for photometry and subsequent dark-object detection. However, the influence of bright stars is rather difficult to eliminate; this is the major factor limiting the performance of this method. Fig.8 shows the obtained results for more images with clouds, in which Panels (a)–(c) are the images with clouds and Panels (d)–(f) are the index figures of clouds.
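The operation in Eq. (13) can be sketched as follows; `img` stands for the filtered image from step (3):

```python
import numpy as np

def suppress_bright(img):
    """Eq. (13): pixels at or above mean + 1 sigma are replaced by the mean
    of the remaining (below-threshold) region; all other pixels are kept."""
    x_bar, s = img.mean(), img.std()
    below = img < x_bar + s
    out = img.copy()
    out[~below] = img[below].mean()
    return out
```

Replacing the over-threshold pixels by the below-threshold mean flattens residual bright features while leaving the cloud's grey-value gradient intact.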


Fig. 6 The grayscale gradient of the image

Fig. 7 The index figure of clouds


Fig. 8 Images with clouds and their processed results. (a)–(c) images with clouds; (d)–(f) the index figures of clouds

4. SUMMARY

According to the requirements of time-domain astronomical data processing, and aiming at the characteristics of clouds in astronomical images, this paper combines the grey-value inconsistency and grey-value texture features to propose a real-time cloud-image classification system based on the SVM algorithm, and extracts the profile of clouds in the image based on the classification result. The classification accuracy of this system attains 98% or better, with high accuracy and strong robustness, remedying the low degree of automation of manual classification. In practical application, the system adopts GPU techniques to satisfy the requirement of high-speed processing. The index figure of clouds obtained after processing represents the profile of the clouds in the image; however, some results may be influenced by bright stars, and this problem remains for further study.

ACKNOWLEDGEMENTS  Thanks to the referees for their good suggestions, and to Sun Rong-yu of Purple Mountain Observatory for supporting this work.

References

1 Chambers K. C., Magnier E. A., Metcalfe N., et al., arXiv:1612.05560
2 Diehl H. T., Neilsen E., Gruendl R., et al., SPIE, 2016, 9910: 99101D
3 Tyson J. A., SPIE, 2002, 4836: 10
4 Yuan X. Y., Cui X. Q., Wang L. F., et al., IAUGA, 2015: 2256923
5 Pollacco D. L., Skillen I., Collier Cameron A., et al., PASP, 2006, 118: 1407
6 Bakos G. Á., Csubry Z., Penev K., et al., PASP, 2013, 125: 154
7 Wheatley P. J., Pollacco D. L., Queloz D., et al., EPJ Web of Conferences, 2013, 47: 13002
8 Graham M. J., Djorgovski S. G., Mahabal A., et al., Distributed and Parallel Databases, 2012, 30: 371
9 Bertin E., Mining Pixels: The Extraction and Classification of Astronomical Sources // Banday A. J., Zaroubi S., Bartelmann M., Mining the Sky. Berlin, Heidelberg: Springer, 2001: 353-371
10 Magnier E. A., Chambers K. C., Flewelling H. A., et al., arXiv:1612.05240
11 Laher R. R., Masci F. J., Groom S., et al., arXiv:1708.01584
12 Gao D., Zhang Y. X., Zhao Y. H., MNRAS, 2008, 386: 1417
13 Zhang Y. X., Zhao Y. H., Applications of Support Vector Machines in Astronomy. Astronomical Data Analysis Software and Systems XXIII, Hawaii, September 29-October 3, 2014
14 Zhang P., Wang L., You X., Journal of Chengdu Science and Technology University (Natural Science Edit.), 2017, 44: 247
15 Li Z. L., Zhang Q. F., Pen Q. Y., et al., AcASn, 2017, 58: 39
16 Cortes C., Vapnik V., Machine Learning, 1995, 20: 273
17 Zhou Z. H., Machine Learning, Beijing: Tsinghua University Press, 2016: 60-229
18 Chang C. C., Lin C. J., ACM Transactions on Intelligent Systems and Technology, 2012, 2: 27