Optik 126 (2015) 5650–5655
Fabric seam detection based on wavelet transform and CIELAB color space: A comparison
Bo Zhu, Jihong Liu, Ruru Pan, Shanshan Wang, Weidong Gao ∗
Key Laboratory of Eco-Textile, Jiangnan University, No 1800 Lihu Avenue, Wuxi, Jiangsu, China
Article info
Article history: Received 19 August 2014; Accepted 24 August 2015.
Keywords: Seam detection; Calendering process; Wavelet transform; CIELAB color space; Feature extraction.
Abstract — In the calendering process, fabric undergoes luster treatment at a high operating speed to make its surface uniformly glossy. Under these conditions seams are easily missed by human visual inspection, especially on light fabrics. Effective and fast feature extraction algorithms are therefore needed to improve product quality and revenue. This paper presents a new wavelet energy measure method and compares it with a scheme for automated textural feature extraction in the CIELAB color space. In the wavelet-based approach, energy measures are extracted at the best decomposition level. In the CIELAB color space, we calculate characteristic parameters including the mean, standard deviation and variation coefficient (CV). The feature data obtained from the two approaches are then analyzed to determine optimized threshold values for recognizing seam information. Comparing the detection results of the two algorithms, characteristic-parameter extraction in the CIELAB color space is both more accurate and more computationally efficient than the wavelet-based approach for detecting fabric seams in the calendering process. © 2015 Elsevier GmbH. All rights reserved.
1. Introduction

In the dyeing industry, a heated roller presses and irons fabric during the calendering process to give the surface a uniform luster and improve its appearance [1]. To keep the calendering process running continuously, successive fabrics are connected by seams. However, a fabric seam [2] may leave a mark on the heating roller if the two come into contact, and defects will then appear on the following fabrics pressed and ironed by the damaged roller. In addition, the heating roller is expensive and roller damage is difficult to spot. A missed seam therefore causes both equipment loss and fabric disqualification, so seam detection is a necessary part of the calendering process. Fig. 1 shows the calendering process for velvet fabrics.

Traditionally, workers raise and lower the heating rollers to avoid fabric seams. Because the calendering process runs at high speed (30–50 m/min), prolonged visual inspection leads to fatigue and slow reactions, and hence to missed seams. An automatic method using a contacting roller has been developed: a contacting roller is set in front of the heating roller, and the heating roller is raised through
∗ Corresponding author. Tel.: +86 18262299633. E-mail address:
[email protected] (W. Gao). http://dx.doi.org/10.1016/j.ijleo.2015.08.163 0030-4026/© 2015 Elsevier GmbH. All rights reserved.
a control system when a seam reaches the contacting roller. However, with the development of high-grade thin fabrics, contact-based systems have run into limits in sensing the seams. There is still an industrial need for a rapid fabric seam detection system that can reduce or eliminate manual inspection and improve calendering quality.

Image analysis techniques [3–7] have been widely studied in recent decades and applied to fabric inspection in automated manufacturing to ensure product quality and yield. Various algorithms [8–13] have been developed for fabric defect detection, such as the Fourier transform [14,15], Gabor filters [16,17] and the wavelet transform [18,19]. These methods achieve high detection success rates, and some of them recognize over 95% of defects on plain and twill fabrics [20]. Seam detection using image processing techniques is far less developed; the main exception is Wong et al. [21], who combined the wavelet transform with a BP neural network to detect and classify stitching defects, but gave no consideration to computational efficiency.

In this paper, feature extraction methods based on the wavelet transform and on the CIELAB color space are proposed for fabric seam detection. Optimized threshold values are determined through data analysis in order to recognize seam information. The remaining sections of the paper are organized as follows. Section 2 introduces the experimental setup, including the fabric samples and image acquisition. Section 3 gives detailed descriptions of
Fig. 3. (a) Original image; (b) wavelet subimages at level one; (c) wavelet subimages at level two.
Fig. 1. Calendering process.
the detection algorithms based on the wavelet transform and on the CIELAB color space. Section 4 presents the comparison of experimental results, and Section 5 discusses the influence of seam position and seam condition on the detection results. Finally, the conclusions from this research are summarized in Section 6.

2. Experiments

Four light fabric samples made of polyester filament from Toray Fibers (Nantong) Co., Ltd were prepared. Each sample consists of two pieces of fabric joined by stitching. In samples 1, 2 and 3 the fabrics on either side of the seam have the same color, while in sample 4 they differ. The fabrics are joined by abutted stitching with white thread, as commonly used in the factory. The fabric images were captured with an industrial camera. To guarantee precision, the seam threads, whose diameter is at least about 0.2 mm, should occupy at least one pixel; this requires at least 5 pixels per millimeter, corresponding to a resolution of 127 dpi. The image size is set to 640 × 640 pixels (an actual fabric area of 12.8 cm × 12.8 cm). Along the transmission direction, three frames were selected for each sample, showing the fabric before the seam, the seam itself, and the fabric after the seam, as shown in Fig. 2. Hereafter, an image without a seam is called a non-seam image, and the other kind a seam-containing image.

3. Methods

3.1. Textural features extracted using the wavelet energy method (M1)

The plain fabrics used here are homogeneously textured samples, so a fabric seam can be regarded as a thin surface anomaly embedded in a homogeneous, uniformly structured texture. Wavelet texture analysis is used to extract wavelet energies for the seam detection of fabrics, taking sample 1 as an example.
Fig. 2. Images of fabric samples 1, 2, 3 and 4.
3.1.1. Wavelet transform

In recent years, multi-resolution decomposition schemes based on the wavelet transform have become a popular tool for texture feature extraction [22,23]. The multi-resolution wavelet representation decomposes an image into a hierarchy of local subimages at different spatial frequencies [24]; textural features are then extracted from the decomposed subimages in the different subbands and decomposition levels. The two-dimensional wavelet transform can be regarded as a one-dimensional wavelet transform applied along the horizontal and vertical axes. At one level of decomposition, the original image passes through the one-dimensional low-pass and high-pass decomposition filters to generate four subimages, reconstructed from the wavelet coefficients: a low-pass (smooth) subimage f_S^(1)(x, y), which is the approximation of the original image, and a set of high-pass subimages divided into the horizontal, vertical and diagonal detail subimages, labeled f_H^(1)(x, y), f_V^(1)(x, y) and f_D^(1)(x, y), respectively. The smooth subimage f_S^(1)(x, y) can then be decomposed further into four subimages f_S^(2)(x, y), f_H^(2)(x, y), f_V^(2)(x, y) and f_D^(2)(x, y) at level 2, as shown in Fig. 3. The decomposition continues until the final level is reached.

The choice of wavelet base has an important impact on the detection results. Wavelet bases are mainly classified into semi-orthogonal, orthogonal and biorthogonal wavelets; the Daubechies, Coiflet, Meyer, Haar and Battle–Lemarié wavelets are commonly used. In this research the Haar wavelet is employed in the wavelet decomposition because it is simple to compute and convenient to construct. Fig. 4 shows the four level-one wavelet subimages of a seam-containing image after graying.

3.1.2. Wavelet texture analysis

The basic idea of wavelet texture analysis is to generate textural features from the wavelet detail subbands or subimages at each scale [25]. The square of a wavelet coefficient in a decomposed subimage is its wavelet energy, which is typically used to describe the fabric texture. In our research, the energy signature of each frequency subband is calculated from the wavelet coefficients of the corresponding decomposed subimage.
Fig. 4. Four wavelet subimages of a seam-containing image after graying: (a1) seam-containing image; (b1) seam-containing gray image; (c1) four wavelet subimages of the seam-containing image.
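The one-level decomposition described in Section 3.1.1 can be sketched in code. This is a minimal illustration rather than the authors' implementation: it uses an unnormalized averaging/differencing form of the Haar filters, and the subband naming follows one common convention.

```python
import numpy as np

def haar_level1(img):
    """One level of 2-D Haar decomposition of a grayscale image
    (height and width must be even). Returns the smooth subimage f_S
    and the horizontal, vertical and diagonal detail subimages
    f_H, f_V, f_D, each half the size of the input."""
    # Average/difference adjacent row pairs (filtering along the vertical axis).
    lo = (img[0::2, :] + img[1::2, :]) / 2.0
    hi = (img[0::2, :] - img[1::2, :]) / 2.0
    # Then average/difference adjacent column pairs (horizontal axis).
    f_S = (lo[:, 0::2] + lo[:, 1::2]) / 2.0  # smooth approximation
    f_V = (lo[:, 0::2] - lo[:, 1::2]) / 2.0  # vertical detail
    f_H = (hi[:, 0::2] + hi[:, 1::2]) / 2.0  # horizontal detail
    f_D = (hi[:, 0::2] - hi[:, 1::2]) / 2.0  # diagonal detail
    return f_S, f_H, f_V, f_D
```

Applying `haar_level1` again to `f_S` yields the level-2 subimages of Fig. 3; an orthonormal Haar transform would scale by 1/√2 per axis instead of plain averaging, which changes the coefficient magnitudes but not the structure of the subbands.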
Table 1
Energy features of the fabric images (frames a and c of each sample are non-seam images; frame b contains the seam).

Sample:   1                 2                 3                 4
Frame:    a1    b1    c1    a2    b2    c2    a3    b3    c3    a4    b4    c4
E_H^j:    262  1845   261   780  1206   845   295  1049   284   229   564   180
E_V^j:    332  1235   335   493  1544   543   334  1274   301   218   752   172
E_D^j:    299  1144   288   286  1459   315   166   904   168   155   648   139
The energies of the decomposed smooth, horizontal, vertical and diagonal subimages over pixel coordinates (x, y) at decomposition level j are given by

F_S^j = \sum_x \sum_y [f_S^j(x, y)]^2    (1)

F_H^j = \sum_x \sum_y [f_H^j(x, y)]^2    (2)

F_V^j = \sum_x \sum_y [f_V^j(x, y)]^2    (3)

F_D^j = \sum_x \sum_y [f_D^j(x, y)]^2    (4)

Let J denote the maximum decomposition level of the wavelet transform. The total energy of the image over J levels is

F = F_S^J + \sum_{j=1}^{J} F_H^j + \sum_{j=1}^{J} F_V^j + \sum_{j=1}^{J} F_D^j    (5)

A higher decomposition level fuses the anomalies into the background and may cause localization errors in the detected seam. A proper number of decomposition levels is chosen from the energy ratio of the detail coefficients at two consecutive levels. Let N_j denote the normalized energy sum of the three detail subimages at decomposition level j:

N_j = (F_H^j + F_V^j + F_D^j) / F    (6)

The energy ratio of detail coefficients P_j between decomposition levels j and j − 1 is

P_j = N_j / N_{j−1},  j = 2, ..., J    (7)

The best decomposition level is the one at which P_j is minimal. For the images of the tested samples the minimal energy ratio occurred at level 3, so level 3 was chosen as the final decomposition level.

The smooth coefficients are generally not included as a textural feature, as they mainly represent lighting or illumination variation; in wavelet texture analysis the features are almost always calculated in the high-frequency subbands. Under the assumption that the energy distribution in the frequency domain identifies texture, we compute wavelet subband energies as texture features [26]. In this paper we propose the measure E_k^j as the feature at the best decomposition level:

E_k^j = [F_k^j / (h × w)]^{1/2},  k = H, V, D    (8)

where H, V and D denote the horizontal, vertical and diagonal high-frequency subbands and h × w is the size of the coefficient matrix. Table 1 presents the energy features of the fabric images.

As can be seen from Table 1, the measure E_k^j of the seam-containing detail coefficients is consistently greater than that of the non-seam detail coefficients. Because the seam line runs vertically through the image, the vertical measure is chosen as the seam detection criterion, and a threshold value is used to detect the fabric seam:

T = y_min + (y_max − y_min) / 2    (9)

The greatest vertical measure among the non-seam subimages is 543, whereas the smallest among the seam-containing subimages is 752. To keep the algorithm robust, their midpoint, approximately 647, was selected as the threshold value. If the vertical measure of a fabric image exceeds the threshold, a seam-containing result is output and the heating roller is raised by the control system.

3.2. Textural features extracted in CIELAB color space (M2)

The framework of the proposed seam detection scheme is illustrated in Fig. 5. Taking sample 1 as an example, three luminosity characteristic parameters are calculated in the CIELAB color space after transforming the images from RGB to CIELAB. Then, through analysis of the data, an optimized threshold on the luminosity variation coefficient is used to detect the fabric seam.

3.2.1. CIELAB color space

Images are generally described in the RGB color space [27], whose coordinates are the red (R), green (G) and blue (B) components. A particular RGB color space is defined by the chromaticities of its red, green and blue additive primaries and can produce any chromaticity inside the triangle defined by those primaries [28]. The correlation among the three components is high, so the color information cannot be determined from any single component. Extracting the characteristic value in gray space is attractive because the algorithm is simple and fast; for sample 3, however, the gray value of the seam is extremely close to that of the fabric, which is unfavorable for seam recognition. The most widely used alternative is the CIELAB color space [29], because its colors are uniformly distributed and because it is very close
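Eqs. (1)–(9) can be sketched as follows (a minimal sketch with hypothetical helper names; numpy is assumed, and the 543/752 values are the Table 1 vertical measures):

```python
import numpy as np

def subband_energy(sub):
    """Eqs. (1)-(4): F = sum of squared wavelet coefficients of a subimage."""
    return float(np.sum(np.square(sub)))

def normalized_detail_energy(f_h, f_v, f_d, f_total):
    """Eq. (6): N_j = (F_H + F_V + F_D) / F at level j."""
    return (f_h + f_v + f_d) / f_total

def best_level(n):
    """Eq. (7): among levels j = 2..J, pick the one minimising
    P_j = N_j / N_{j-1}; n[0] is N_1, n[1] is N_2, and so on."""
    ratios = [n[j] / n[j - 1] for j in range(1, len(n))]
    return 2 + int(np.argmin(ratios))

def energy_measure(sub):
    """Eq. (8): E_k = sqrt(F_k / (h * w)), the RMS coefficient magnitude."""
    h, w = sub.shape
    return (subband_energy(sub) / (h * w)) ** 0.5

def seam_threshold(y_min, y_max):
    """Eq. (9): midpoint between the largest non-seam vertical measure
    and the smallest seam-containing one."""
    return y_min + (y_max - y_min) / 2.0

# Table 1: largest non-seam E_V is 543, smallest seam-containing E_V is 752.
print(seam_threshold(543, 752))  # 647.5, reported as ~647 in the text
```

A frame whose vertical measure exceeds this threshold would be classified as seam-containing.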
Fig. 5. Framework of proposed seam detection scheme.
Table 2
Luminosity characteristic parameters of the fabric images (frames a and c of each sample are non-seam images; frame b contains the seam).

Sample:               1                   2                   3                    4
Frame:                a1    b1    c1      a2    b2    c2      a3     b3     c3     a4     b4     c4
Mean value:           63.1  73.8  62.6    70.5  82.0  71.6    154.7  156.2  153.5  161.3  108.6  50.2
Standard deviation:    2.9  16.4   2.9     3.8  17.7   3.6      3.6   17.3    3.5    1.4   54.9   1.7
CV (%):                4.6  22.2   4.7     5.4  22.0   5.1      2.3   11.1    2.3    0.9   50.6   3.3
to human perception of color [30]. The intent is that color differences a human perceives as equal correspond to equal Euclidean distances in CIELAB space [31]. The coordinates of the CIELAB color model are the three components L, a and b [32]. The a and b components carry only color information, while L represents luminosity. The luminosity of the fabric background and that of the seam clearly differ, so using the luminosity component alone, without color information, to compute the fabric characteristic parameters is advantageous: it excludes the influence of fabric color and requires little computation, so it runs fast. Converting an image from RGB to CIELAB takes two steps via the CIEXYZ color space. The first step carries out the RGB to CIEXYZ transformation [33] (the RGB components are first linearized with gamma correction):
X = 0.4124 R + 0.3576 G + 0.1805 B
Y = 0.2126 R + 0.7152 G + 0.0722 B    (10)
Z = 0.0193 R + 0.1192 G + 0.9505 B

The second step carries out the CIEXYZ to CIELAB transformation:

L = 116 (Y/Y_n)^{1/3} − 16,  if Y/Y_n > 0.01
L = 903.3 (Y/Y_n),  if Y/Y_n ≤ 0.01
a = 500 [(X/X_n)^{1/3} − (Y/Y_n)^{1/3}]    (11)
b = 200 [(Y/Y_n)^{1/3} − (Z/Z_n)^{1/3}]

where X_n = 95.07, Y_n = 100.00 and Z_n = 108.81 are the values for the reference white of the D65 standard illuminant. From Eqs. (10) and (11), L can be written directly in terms of R, G and B:

L = 116 (0.2126R + 0.7152G + 0.0722B)^{1/3} − 16,  if 0.2126R + 0.7152G + 0.0722B > 0.01
L = 903.3 (0.2126R + 0.7152G + 0.0722B),  if 0.2126R + 0.7152G + 0.0722B ≤ 0.01    (12)

As can be seen from Eq. (12), the second branch applies only in the absence of light or at an extremely low light level. The fabric images here are collected under standard lighting, where weak light is impossible, so only the first branch needs to be considered in this paper. The luminosity for pixel coordinates (x, y) is therefore

L = 116 (0.2126R + 0.7152G + 0.0722B)^{1/3} − 16    (13)

3.2.2. Characteristic parameter analysis

Because of the high speed of the calendering process, the characteristic parameters must support both a high detection rate and concise calculation. The luminosity mean value, standard deviation and variation coefficient are therefore extracted after converting the image from RGB into the CIELAB color space. The mean value is the average luminosity of all pixels; the standard deviation σ is the arithmetic square root of the luminosity variance; and the luminosity variation coefficient (CV) is the ratio of the standard deviation to the corresponding mean value. H(i) represents the pixel count at luminosity level i, with the luminosity range linearly stretched to 0–255 to ease comparison in this article. The computational formulas are

x̄ = (1 / (h_1 × w_1)) \sum_{i=0}^{255} i × H(i)    (14)

σ = [(1 / (h_1 × w_1)) \sum_{i=0}^{255} (i − x̄)^2 × H(i)]^{1/2}    (15)

CV = σ / x̄ × 100%    (16)

where x̄ and σ denote the mean value and standard deviation, and h_1 and w_1 represent the image height and width.

As can be seen from Table 2, the luminosity mean values of the seam-containing images are greater than those of the non-seam images in the first three samples, while the situation changes in sample 4 because of the different colors beside the seam. The standard deviations and variation coefficients of the seam-containing images are larger than those of the non-seam images. When the mean values of two or more groups of data differ, the variation coefficient is the only effective parameter for evaluating their dispersion, so the variation coefficient is the best option for expressing the difference in luminosity dispersion between seam-containing and non-seam images. In Table 2, the luminosity variation coefficients of the seam-containing images are all far larger than those of the non-seam images.
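Under the standard-lighting branch, Eqs. (13)–(16) can be sketched as follows (hypothetical function names; numpy is assumed; R, G and B are taken as linearized values in [0, 1], and a direct mean/std over pixels replaces the histogram sums, which is equivalent):

```python
import numpy as np

def luminosity(rgb):
    """Eq. (13): L from linearized R, G, B in [0, 1]
    (the bright branch of Eq. (12))."""
    y = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return 116.0 * np.cbrt(y) - 16.0

def luminosity_cv(L):
    """Eqs. (14)-(16): stretch L linearly to 0..255,
    then CV = 100 * sigma / mean."""
    lo, hi = float(L.min()), float(L.max())
    L255 = (L - lo) / max(hi - lo, 1e-12) * 255.0  # linear stretch to 0..255
    return 100.0 * L255.std() / L255.mean()        # population std, as in Eq. (15)

white = np.array([[[1.0, 1.0, 1.0]]])  # a single pure-white pixel
print(luminosity(white)[0, 0])         # 100.0: white maps to L = 100
```

For seam detection, the `luminosity_cv` of each frame would then be compared against the threshold derived from Table 2.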
From the analysis of Table 2, the greatest CV among the non-seam images is 5.4% and the smallest among the seam-containing images is 11.1%. Following Eq. (9), their average (8.25%) is selected as the threshold value to ensure the robustness of the algorithm. If the luminosity variation coefficient of an image is greater than the threshold, a seam-containing result is output and the heating roller is raised automatically.

4. Result comparison

The computer used in the experiment has an Intel Core i5 2.27 GHz CPU and runs Windows 7; the algorithms were validated in MATLAB 2012b. Each sample was processed 200 times, and the detection results on the 100 test samples are presented in Table 3. In
Table 3
Detection results on the 100 test samples.

Method   Number of samples   Accuracy (%)   Time per image (s)
M1       50                  92             1.4
M2       50                  94             0.15
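The dynamic throughput implied by these per-frame times can be checked with a back-of-envelope sketch, assuming each 640 × 640 frame spans 12.8 cm of fabric:

```python
def dynamic_speed_m_per_min(frame_height_m, frame_time_s):
    """Fabric length that can be inspected per minute when each frame
    of height frame_height_m takes frame_time_s seconds to process."""
    return frame_height_m / frame_time_s * 60.0

# 640 px at 5 px/mm -> 128 mm = 0.128 m of fabric per frame
print(round(dynamic_speed_m_per_min(0.128, 1.40), 1))  # 5.5 m/min  (M1)
print(round(dynamic_speed_m_per_min(0.128, 0.15), 1))  # 51.2 m/min (M2)
```

Only M2 exceeds the 30–50 m/min operating speed of the calendering line.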
Fig. 8. Damaged seams of fabric images.
Fig. 6. Different seam positions of fabric images.
Fig. 7. Slant seams of fabric images.
the experimental results, we note that the difference in identification accuracy between the two methods is insignificant. The average computation times were 1.4 s (M1) and 0.15 s (M2) for an image of 640 × 640 pixels (an actual fabric area of 12.8 cm × 12.8 cm), so the dynamic detection speeds are 5.5 m/min and 51.2 m/min, respectively. Since the operating speed in the calendering process is 30–50 m/min, the proposed algorithm M2 has the better real-time performance for realizing seam detection in the calendering process. If the luminosity variation coefficient of a frame exceeds the threshold, the image is judged to be seam-containing and the heating roller is lifted by the control system to avoid the seam.

5. Discussion

This section discusses the influence of seam positions and seam conditions on the detection results of M2.

5.1. Seam positions

In dynamic detection, the seam position falls into four categories: left, middle, right and edge position, where edge positions are further divided into left-edge and right-edge. The different seam positions are shown in Fig. 6. In each of images (b5)–(b8), the overall image content changes only slightly apart from the seam position. The luminosity variation coefficients of fabric images (b1) and (b5)–(b8) are 22.2%, 21.8%, 22.6%, 22.0% and 22.3%, respectively, with a standard deviation of only 0.003 (in fractional terms). Since the CV is essentially constant, the detection accuracy stays within the allowable range, which shows that the seam can be detected stably in dynamic detection.

5.2. Seam conditions

A seam may be slanted because of a faulty sewing machine or improper sewing by workers; a slant seam is one that deviates from the vertical direction.
Slant seams can be further divided into two categories, clockwise and counterclockwise; the seam area is largest in the diagonal position.
Slant seams are shown in Fig. 7. The luminosity variation coefficients of fabric images (b1) and (b9)–(b12) are 22.2%, 25.4%, 23.8%, 23.9% and 25.6%, respectively, with a standard deviation of 0.014 (in fractional terms), which is small enough to be negligible for detection. The data also indicate that a larger seam area increases the luminosity variation coefficient, so the algorithm proposed in this paper remains applicable to slanted seams.

In addition, damaged seams can occur in the sewing process through human or machine error. Damage like that in Fig. 8 (b13) is called slight damage; severe damage like that in Fig. 8 (b14) will pull the fabric apart under external force. As the seam information decreases with damage, the dispersion of the luminosity becomes lower: the luminosity variation coefficients of fabric images (b1), (b13) and (b14) are 22.2%, 21.5% and 20.6%, respectively. Damaged seams are therefore unfavorable for seam detection and will affect the detection results; in actual production, workers should carefully check the stitching quality before the calendering process.

6. Conclusion

Owing to the high operating speed of the calendering process, seams are easily missed by human visual inspection, and effective, fast methods are needed to solve the problem. In this paper, two feature extraction algorithms are presented for fabric seam detection. In the wavelet-based approach, energy measures are extracted at the best decomposition level. In the CIELAB color space, characteristic parameters including the mean, standard deviation and variation coefficient are calculated. The feature data obtained from the two approaches are then analyzed to determine optimized threshold values for identifying seam information.
Comparing the detection results of the two algorithms, characteristic-parameter extraction in the CIELAB color space is both more accurate and more computationally efficient than the wavelet-based approach for detecting fabric seams in the calendering process. Compared with the traditional method of workers raising and lowering the heating rollers, the method proposed in this paper is better suited to realizing automatic fabric seam detection, and it is expected to be adopted by the contemporary textile industry in the near future.

Acknowledgements

This work was supported by "the Scientific Research Innovation Projects of Jiangsu Province" [CXZZ13 0749] and the Fundamental Research Funds for the Central Universities [JUSRP51417B].

References

[1] Chung-Feng Jeffery Kuo, Chien-chou Fang, An entire strategy for control of a calender roller system, Text. Res. J. 77 (2007) 343–352.
[2] Hsing-Chia Kuo, Li-Jen Wu, An image tracking system for welded seams using fuzzy logic, J. Mater. Process. Technol. 120 (2002) 169–185.
[3] Wolfgang Kolbl, Meta torch: a universal seam tracking system for arc welding and similar applications, Int. J. Cloth. Sci. Technol. 21 (1995) 33–35.
[4] C.S. Cho, B.M. Chung, M.J. Park, Development of real-time vision based fabric inspection system, IEEE Trans. Ind. Electron. 52 (2005) 1073–1079.
[5] S. Kim, H. Bae, S.P. Cheon, K.B. Kim, On-line fabric-defects detection based on wavelet analysis, Lect. Notes Comput. Sci. 3483 (2005) 1075–1084.
[6] P. Mitropulos, C. Koulamas, Y. Karayiannis, S. Koubias, G. Papadopoulos, An approach for automated defect detection and neural classification of web textile fabric, Mach. Graph. Vis. 9 (3) (2000).
[7] Henry Y.T. Ngan, Grantham K.H. Pang, Nelson H.C. Yung, Automated fabric defect detection—a review, Image Vision Comput. 29 (2011) 442–458.
[8] A. Mukhopadhyay, R.C.D. Kaushik, Effect of air-jet texturing process variables on physical bulk obtained by image analysis method, Indian J. Fibre Text. Res. 25 (2000) 264–270.
[9] Jagdish Lal Raheja, Sunil Kumar, Ankit Chaudhary, Fabric defect detection based on GLCM and Gabor filter: a comparison, Optik 124 (2013) 6469–6474.
[10] Jagdish Lal Raheja, Bandla Ajay, Ankit Chaudhary, Real time fabric defect detection system on an embedded DSP platform, Optik 124 (2013) 5280–5284.
[11] S. Dariush, H. Mehdi, S. Mohammad, R. Zary, Study of structural parameters of weft knitted fabrics on luster and gloss via image processing, Ind. Text. 63 (2012) 42–47.
[12] J. Liu, B. Zhu, H. Jiang, W. Gao, Image analysis measurement of cottonseed coat fragments in 100% cotton woven fabric, Fibers Polym. 14 (2013) 1208–1214.
[13] H. Jiang, J. Liu, R. Pan, W. Gao, H. Wang, Auto-generation color image for fabric based on FFT, Ind. Text. 64 (2013) 195–203.
[14] C. Chi-ho, K. Grantham, Fabric defect detection by Fourier analysis, IEEE Trans. Ind. Appl. 36 (2000) 1267–1276.
[15] S.M. Abdel, Y.D. Jean, B. Laurent, et al., Optimization of automated online fabric inspection by fast Fourier transform (FFT) and cross-correlation, Text. Res. J. 83 (2013) 256–268.
[16] J. Raheja, S. Kumar, A. Chaudhary, Fabric defect detection based on GLCM and Gabor filter: a comparison, Optik 124 (2013) 6469–6474.
[17] A.S. Tolba, Neighborhood-preserving cross correlation for automated visual inspection of fine-structured textile fabrics, Text. Res. J. 81 (2011) 2033–2042.
[18] J. Jing, Z. Zhang, X. Kang, Objective evaluation of fabric pilling based on wavelet transform and the local binary pattern, Text. Res. J. 82 (2012) 1880–1887.
[19] Deyun Wei, Yuan-Min Li, Generalized wavelet transform based on the convolution operator in the linear canonical transform domain, Optik 125 (2014) 3845–3856.
[20] Henry Y.T. Ngan, Grantham K.H. Pang, S.P. Yung, Michael K. Ng, Wavelet based methods on patterned fabric defect detection, Pattern Recognit. 38 (2005) 559–576.
[21] W.K. Wong, C.W.M. Yuen, D.D. Fan, Stitching defect detection and classification using wavelet transform and BP neural network, Expert Syst. Appl. 36 (2009) 3845–3856.
[22] S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. Pattern Anal. Mach. Intell. 11 (1989) 674–693.
[23] S.G. Mallat, Multifrequency channel decompositions of images and wavelet models, IEEE Trans. Acoust. Speech Signal Process. 37 (1989) 2091–2110.
[24] I. Zavorin, J.L. Moigne, Use of multiresolution wavelet feature pyramids for automatic registration of multisensory imagery, IEEE Trans. Image Process. 14 (2005) 770–782.
[25] J. Liu, B. Zuo, X. Zeng, Wavelet energy signatures and robust Bayesian neural network for visual quality recognition of nonwovens, Expert Syst. Appl. 38 (2011) 8497–8508.
[26] J. Liu, B. Zuo, X. Zeng, Nonwoven uniformity identification using wavelet texture analysis and LVQ neural network, Expert Syst. Appl. 37 (2010) 2241–2246.
[27] Chung-Feng Jeffrey Kuo, Chien-Tung Max Hsu, Wen-Hua Chen, Chin-Hsun Chiu, Automatic detection system for printed fabric defects, Text. Res. J. 82 (2012) 591–601.
[28] Yuzheng Lu, Weidong Gao, Jihong Liu, Color separation for colored fiber blends based on the fuzzy C-means cluster, Color Res. Appl. 37 (2011) 212–218.
[29] Yu Han, Dejun Zheng, George Baciu, Xiangchu Feng, Min Li, Fuzzy region competition-based auto-color-theme design for textile images, Text. Res. J. 83 (2013) 638–650.
[30] Fleiss A. Connolly, Study of efficiency and accuracy in the transformation from RGB to CIELAB color space, IEEE Trans. Image Process. 7 (1997) 1046–1048.
[31] Yan Wan, Li Yao, Bugao Xu, Xiongying Wu, Separation of clustered fibers in cross-sectional images using image set theory, Text. Res. J. 79 (2000) 1658–1663.
[32] B. Xu, Y. Huang, M.D. Watson, Cotton color distributions in the CIELAB system, Text. Res. J. 77 (2001) 1010–1015.
[33] C.A. Wilson, P.H. Gies, B.E. Niven, A. McLennan, N.K. Bevin, The relationship between UV transmittance and color-visual description and instrumental measurement, Text. Res. J. 78 (2008) 128–137.