Infrared Physics and Technology 102 (2019) 103048
Automatic defects detection in CFRP thermograms, using convolutional neural networks and transfer learning

Numan Saeed a, Nelson King a, Zafar Said b, Mohammed A. Omar a,⁎

a Industrial and Systems Engineering Department, Khalifa University, Abu Dhabi 54224, United Arab Emirates
b Department of Sustainable and Renewable Energy Engineering, University of Sharjah, Sharjah 27272, United Arab Emirates

Abstract
Recent advancements in the field of Artificial Intelligence can support the post-processing of thermographic data efficiently, especially for nonlinear or complex thermography scanning routines. This study proposes the implementation of an autonomous/intelligent post-processor that automatically detects defects in given thermograms via a Convolutional Neural Network (CNN), in tandem with a Deep Feed-Forward Neural Network (DFF-NN) algorithm that estimates the defect depth. The proposed combination of networks detects and quantifies defects from acquired thermograms in real time, without any human (inspector) intervention. The study shows that a pre-trained network, fine-tuned with a relatively small dataset of thermograms, can detect and quantify defects in thermographic sequences. In this paper, networks pre-trained on the CIFAR-10 and ImageNet databases are used, followed by a fine-tuning step on the latter layers of the network using a relatively small thermogram dataset. The paper also provides several in-depth studies comparing how transfer learning, state-of-the-art object detection architectures, and the choice of convolutional neural network influence the performance of the trained post-processing system. The proposed post-processor, applied to thermograms obtained from a pulsed-thermography setup testing a Carbon Fiber Reinforced Polymer (CFRP) sample with artificially created sub-surface defects, validates the CNN approach.
1. Introduction

Composite structures in general, and Carbon Fiber Reinforced Polymers (CFRP) in particular, have emerged in many different applications thanks to their outstanding mechanical properties, namely light weight, high specific strength, and readiness for customized designs. CFRP can be found in a broad range of engineering structures and products: the aerospace industry [1,2], automotive structures, wind turbines [3], and even sports products [4]. Quality checks on large, heavy structures (sometimes with limited access), such as aerospace fuselages and wind turbine blades, are of high importance to avoid unexpected failures due to cracks, impact damage, or production-borne delaminations [5,6]. Several Non-Destructive Testing (NDT) techniques offer the opportunity for a human inspector to detect and quantify defects in composites, both during the different stages of manufacturing and while in operation. These techniques include ultrasonic testing [7], Infrared Thermography [8–10], X-ray [11], and others. Although ultrasonic testing is one of the oldest and best-investigated techniques, it operates at a slow scanning speed, requires human input for interpretation and analysis, and needs intimate coupling/contact with the surface, which makes it impractical for large structures. Thermography testing, on the other hand, can be done at faster inspection rates [12] and can be used to assess remote areas. Yet thermography has its own limitations as well, specifically the limited depth of penetration in low-conductivity materials such as CFRP composites, and the need for robust and autonomous processing of nonlinear heat-conduction scenarios, such as heat diffusion in anisotropic materials, line scanning modes, etc. [13]. To overcome this problem, several image enhancement techniques are employed to improve the thermograms' contrast and hence improve defect visibility [14]. These include, but are not limited to, Differential Absolute Contrast (DAC) [15,16], Thermographic Signal Reconstruction (TSR) [17,18], Principal Component Thermography (PCT) [13], the Correlation Extraction Algorithm (CEA) [19], the 3D Normalization Algorithm (3DNA) [20], Point Spread Functions (PSF) [21], and the Gapped Smoothing Algorithm (GSA) [22]. Utilizing one or a combination of these image enhancement techniques improves the contrast of the images, which an operator then uses to detect and localize defective or suspicious locations; this manual assessment is still a tedious process and may introduce human error. Several mathematical techniques have been demonstrated in the literature to automatically detect defects in thermograms, such as the IR self-referencing method, which does not require prior knowledge of a defect-free area [23]. A recent study by R. Marani used several machine learning techniques, such as K-nearest neighbors and decision trees, to classify defective areas in CFRP structures and compared their performance [24]. In another study, the authors used exponential modeling of the thermal contrast curves and fed the model parameters to a decision forest for training and classification [25]. Thus the door is open for more robust processing of acquired thermograms using recent advances in artificial intelligence techniques.
⁎ Corresponding author. E-mail address: [email protected] (M.A. Omar).
https://doi.org/10.1016/j.infrared.2019.103048
Received 29 April 2019; Received in revised form 18 August 2019; Accepted 21 September 2019; Available online 23 September 2019.
1350-4495/ © 2019 Elsevier B.V. All rights reserved.
Many CNN models have been presented in the literature that vary in design and performance, such as AlexNet [35], Cuda-Convnet [36], ZF Net [37], VGG Net [38], and ResNet [39]. For an object detection problem, a simple approach may consist of taking different regions of interest from the image and then using a CNN to classify the presence of any object in each region. The object of interest can have different spatial locations or aspect ratios within an image; hence, one has to select many different regions for classification, which increases the computational cost. Therefore, object detection frameworks such as, but not limited to, R-CNN [40], Faster R-CNN [41], and Yolo [42] were developed and are more commonly used. This paper begins by comparing the performance of different object detection frameworks (R-CNN, Faster R-CNN, and Yolo) in detecting defects from the thermographic scans of a composite structure. Such a comparative analysis is warranted due to the specific nature of thermographic images (a sequence of transient heat-flow frames) and the nature of the defects to be localized (for example, lateral diffusion affects the defects' sharpness and morphology). These frameworks use a convolutional neural network to classify regions within the image, but process only the regions that are likely to contain an object rather than using a sliding window; hence they greatly reduce the computational cost of object detection. The Cuda-Convnet and Alexnet convolutional neural networks were used within the R-CNN and Faster R-CNN object detector frameworks, while Yolonet is used in the Yolo framework.
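As a concrete illustration of how region proposals are matched to labeled defects, the overlap between a candidate bounding box and a ground-truth box is commonly measured with Intersection-over-Union (IoU); a proposal is accepted when its IoU exceeds a threshold (the paper later reports an optimized value of 0.55). The sketch below is illustrative only; the box format `(x, y, w, h)` is an assumption, not taken from the paper's implementation.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x, y, w, h)."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # Overlap extents along each axis (zero if the boxes are disjoint)
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

For example, two identical boxes give an IoU of 1.0, disjoint boxes give 0.0, and a candidate shifted halfway across a ground-truth box gives 1/3, which would be rejected under a 0.55 threshold.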
In the last few years, rapid development in Convolutional Neural Networks (CNNs) has allowed automatic analysis of images and videos, performing tasks that were considered challenging without human intervention, such as image classification [26], object detection or recognition [27], and video analysis [28,29]. Thus, the CNN has become the tool of choice for image processing, among other applications. In more directly relevant NDT implementations, CNNs have been reported to detect concealed cracks in asphalt pavement using Ground Penetrating Radar (GPR) images [30,31]. In this work, we embed a Convolutional Neural Network algorithm to detect defects in composite structures automatically. Once defects are localized, each defect's depth is estimated using a Deep Feed-Forward Neural Network (DFF-NN) routine. Thus, the novelty and aim of this work is the implementation of CNN algorithms for processing thermograms (of CFRP composites) for autonomous defect localization, followed by a DFF-NN approach to estimate the depth of these defects in real time. This paper also explains how to use transfer learning and pre-trained networks to accomplish the specific task of defect detection, as deemed relevant to the pulsed-thermography domain. This work has the potential to demonstrate fully automatic post-processing of thermograms in real time. Such an approach can be useful when human intervention is not available (inspection done by robots or drones, or of remote/inaccessible areas), or where manual assessment would delay the inspection cycle in a production line. Moreover, the proposed approach can speed up the inspection routine and hence can be coupled directly to system-level quality-control tracking in production facilities.
This comparative study also presents a review of state-of-the-art applications of convolutional networks to IR thermograms to ensure full coverage of the topic. The manuscript is organized as follows: Section 2 provides a brief background on the different CNN types, their design and applications, followed by a comparison between the different architectures used for object detection. Section 3 discusses transfer learning and the methods used to optimize the training process within the network. Section 4 details the proposed method and the combined automated system (defect detection and depth estimation). Section 5 describes the experimental validation; the results are reported and discussed in Section 6, and Section 7 concludes the paper.
3. Transfer learning

To achieve high accuracy in image classification or object detection tasks using a CNN, a very large and diverse training dataset is required. This dataset must cover the different possible variations of images, especially variations in size, rotation, and the number of objects present per image. Training a CNN with such a large dataset is a time-consuming and computationally intensive process. In other cases, the dataset available for training is intrinsically limited; thus the transfer learning technique is commonly used in deep learning to help tackle both of the problems above [43,44]. In transfer learning, a network is pre-trained on a large dataset that contains images of many classes of objects, not necessarily relevant to the specific object detection problem. The pre-trained CNN then becomes the starting point for solving the current, specific object detection problem. The idea behind transfer learning is that pre-training the CNN on a large dataset allows it to learn a rich set of image features common to all images, such as straight lines with different orientations, curves, shapes, etc. Once the CNN has learned these general features from the large dataset, we retrain it using a specific dataset relevant to the defect detection task at hand. In other words, after pre-training the network, we freeze the weights (features) of the initial layers and fine-tune the latter layers using our image dataset, such that the feature representations are adjusted to support our object detection task. In this work, Cuda-Convnet was trained on the CIFAR-10 dataset, which has 50,000 images and ten object classes [45], while Alexnet was trained on ImageNet, which has more than 1 million images and 1000 classes [46]. Fig. 1a shows the weights (features) of the first convolutional layer of the pre-trained Alexnet; it can be seen that the layer recognizes simple features such as lines with variable orientations, and it is therefore frozen while training the CNN on the thermography dataset. The final classification layer is adjusted in our case to consider only one class (defect) of objects. Our training dataset consists of thermographic images (scans) of composite test coupons tested under a pulsed thermography protocol. The experimental setup consists of a thermal detector and a heating lamp with the following specifications: a 3–5 µm Indium Antimonide (InSb) cooled detector (GF309 from FLIR®) with a
2. Convolutional neural networks

A Convolutional Neural Network is a class of deep feed-forward neural network, built from multilayer perceptrons and commonly employed for analyzing visual imagery [32]. CNNs were inspired by biological processes, such that the connectivity pattern of the perceptrons in the CNN resembles that of the visual cortex. A CNN consists of an input layer, hidden layers, and an output layer; the hidden layers comprise convolutional layers, fully connected layers, pooling layers, and normalization layers [33–35]. The convolutional layer, which distinguishes CNNs from other types of networks, performs a cross-correlation operation using several filters, which are adapted during the training process. In other words, during training the filters learn how to extract specific features from the input image. Pooling layers, on the other hand, perform non-linear downsampling, combining the outputs of a group of neurons in one layer into a single neuron in the next layer. Fully connected layers are no different from conventional multi-layer perceptron networks, where every neuron in one layer is connected to every neuron in the next layer. CNNs can be employed for different types of image analysis problems, such as image classification and object detection. In image classification, an input image is assigned to one of several predefined categories, while in object detection, the network is trained to detect one or several objects within an image.
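The cross-correlation and max-pooling operations described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the networks used in the paper; the filter below is a hand-picked edge detector, whereas a real convolutional layer learns its filters during training.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D cross-correlation with 'valid' padding, as in a convolutional layer."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # Slide the filter over the image and sum the elementwise products
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: each size x size patch collapses to one value."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

Applied to a step image with a `[[-1, 1]]` filter, the cross-correlation responds only at the vertical edge, illustrating how a trained filter extracts a specific feature.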
Fig. 1. (a) Visualization of the Alexnet's first convolutional layer. (b) Thermographic scans from the prepared dataset.
Fig. 2. Flow diagram of the proposed method.
Fig. 3. Defect detection using Convolutional Neural Networks.
from thermographic scans, we propose using a hybrid system, which first utilizes a convolutional neural network to localize defects in CFRP structures, and then utilizes a deep feed-forward network to estimate the depth of each detected defect. Fig. 2 shows the flow diagram of the proposed method. Before evaluating this approach, the CNN and the DFF are trained using thermographic scans and thermal contrast curves, respectively. The thermal contrast is calculated using Eq. (1), where Tc(t), Td(t), and Ts(t) represent the absolute thermal contrast, the temperature over the defect, and the temperature over the sound (defect-free) surface, respectively.
focal plane array (FPA) of 320 × 240 pixels and a 2.5 kW halogen lamp synchronized to the processing unit using an automation box (commercial name NI-MAX) with an acquisition rate of 25 Hz. The detector has a Noise Equivalent Temperature Difference (NETD) of 0.025 K and a 0.4 × 0.4 mm/pixel resolution. More details on the test sample are given in Section 5. To increase the size of our training dataset, an augmentation technique is used to populate the set with variations of the original images in terms of size, rotation, and cropping, as shown in Fig. 1b. A task-specific dataset (containing 200 images) is then used to fine-tune the CNN. The proposed DFF approach, together with the accompanying literature review specific to thermography, can be found in [47].
Tc (t ) = Td (t ) − Ts (t )
(1)
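Applied to two hypothetical cooling curves (synthetic exponentials, not measured data), Eq. (1) yields a contrast curve whose peak can then be located; the decay constants below are illustrative only.

```python
import numpy as np

# Hypothetical cooling curves sampled at 25 Hz after the heat pulse
# (synthetic exponentials for illustration -- not measured data).
t = np.arange(0.0, 4.0, 0.04)           # 100 frames over 4 s
T_s = 25.0 + 10.0 * np.exp(-1.5 * t)    # sound (defect-free) surface
T_d = 25.0 + 10.0 * np.exp(-1.0 * t)    # above a defect: slower cooling

T_c = T_d - T_s                         # Eq. (1): absolute thermal contrast
peak_frame = int(np.argmax(T_c))        # frame of maximum contrast
```

The contrast starts at zero (both areas begin at the same temperature), rises as the defect retards cooling, and decays again, so the peak frame is a natural feature for later depth estimation.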
Fig. 3 depicts an illustration of how a typical CNN works under an object detection framework. Thermogram images (input) are fed into the CNN and processed through a sequence of layers such as the convolutional, pooling, and fully connected layers; the function of each of
4. Proposed method

To automate the process of defect detection and depth estimation
Fig. 4. (a) Alexnet architecture. (b) Yolonet architecture.
Fig. 5. Defect depth estimation using Deep Feed Forward network.
The architectures of Cuda-ConvNet, Alexnet, and Yolonet are shown in Figs. 3, 4a, and 4b, respectively. As aforementioned, we leveraged the transfer learning technique by pre-training Cuda-ConvNet on the CIFAR-10 dataset and Alexnet on the ImageNet dataset. The second part of the system estimates the depth of the defects localized by the CNN. The thermal contrast curves of the detected defects are acquired by choosing surface locations on the defects, either manually or automatically, and extracting the thermal contrast profile from these locations. The thermal contrast curves are then fed into the DFF, where regression is performed to predict the depth of the
the layers is explained in Section 2. The output of the object detection framework is a set of bounding boxes that identify the locations of the defects classified by the CNN. The object detection framework generates Regions of Interest (ROIs), or region proposals, which are a list of bounding boxes of likely object positions. As mentioned earlier, we trained, tested, and compared the performance of three different CNN types under three frameworks: Cuda-ConvNet and Alexnet under R-CNN and Faster R-CNN, and Yolonet (You Only Look Once) under the Yolo object detection framework. The three frameworks differ in their architecture, ROI list, complexity, accuracy, training time, and evaluation time.
Table 1
Defect characteristics.

Row  Depth     Shape     Defect size
1    0.25 mm   Square    Side = 3, 5, 7, 10 mm
2    0.50 mm   Circular  Diameter = 3, 5, 7, 10 mm
3    1.00 mm   Square    Side = 3, 5, 7, 10 mm
4    2.00 mm   Circular  Diameter = 3, 5, 7, 10 mm
Fig. 6a. 3D Design of CFRP structure with 16 embedded defects.
corresponding defects. Fig. 5 is an illustration of the depth estimation process using a DFF. The DFF architecture and the process of training and testing the network are thoroughly explained in one of our recent publications [47].

5. Experimental validation and analysis

a. Experimental setup

Fig. 7. Pulsed Thermography experimental setup.
To test the performance of the CNNs and their object detection frameworks, a 3D-printed CFRP sample, shown in Figs. 6a and 6b, was fabricated using a Markforged 3D printer, with 16 embedded defects (air pockets) of different sizes and shapes, located at different depths as listed in Table 1. The details of the printing process and the design of the sample can be found and justified in [10], where the 3D-printed defects are found to provide better defect representation than Teflon implants or back-drilled holes. The printer has a 100 μm Z-layer resolution and is capable of printing CFRP structures. The carbon fiber feed material has a flexural strength of 420 MPa and a flexural stiffness of 21 GPa. The printing is done layer by layer in four printing orientations (0°, ±45°, 90°), leading to a 22-layer bi-directional carbon fiber sample. The sample was scanned using a pulsed thermography setup, and the acquired thermograms were processed using multiple combinations of CNNs and object detection frameworks. The Pulsed Thermography (PT) setup, shown in Fig. 7, consists of three main components: a thermal radiation source (2.5 kW halogen lamp), a thermal camera (mid-wave InSb), and mounting arms for holding the sample. The test settings are as described in Section 3. The National Instruments automation box was used to control the sequence of the thermal pulses deposited on the sample and to set the acquisition rate at 25 Hz. The heating period is set to 5 s.
Table 2
Comparison between the different object detection frameworks and CNN types with respect to their training and evaluation time.

Object detection framework  Convolutional neural network  Training time (min)  Evaluation time (ms)
R-CNN                       Cuda-ConvNet                  9                    35
R-CNN                       Alexnet                       50                   60
Faster R-CNN                Cuda-ConvNet                  42                   43
Faster R-CNN                Alexnet                       60                   70
Yolo                        Yolonet                       150                  29
6. Results and discussion

An Intel® Xeon® 3 GHz CPU and an NVIDIA Quadro M2000M GPU were used to accommodate the intensive computational training requirements of the models above and to accelerate the process. Table 2 shows the comparative results for the different CNNs under the different object detection frameworks in terms of their training and evaluation times. The reported training time is the time
Fig. 6b. The 16 embedded defects' depth and shape configurations.
Fig. 8. Detection results using RCNN object detector framework.
Fig. 9. Detection results using Faster RCNN object detector framework.

Table 3
The estimated depth for the defects located diagonally in the sample.

Test  Expected output  Estimated output (Sample 1)
1     0.25 mm          0.22 mm
2     0.50 mm          0.52 mm
3     1.00 mm          0.93 mm
4     2.00 mm          2.03 mm
the more time it needs to train its weights. The training and evaluation times listed in Table 2 can be used to compare the different object detection frameworks and CNN types when applied to the acquired thermograms. As can be observed from Table 2, Cuda-ConvNet takes less time to train than Alexnet for both the R-CNN and Faster R-CNN object detection frameworks. This is because Alexnet has eight convolutional layers to be trained while Cuda-ConvNet has six, and the number of trainable parameters in Alexnet is nearly 62.3 million. Furthermore, Yolonet has 24 convolutional layers and thus requires substantially more time to train, as shown in Table 2. The R-CNN object detector framework was trained for 40 epochs with the learning rate set to 0.001 and the batch size to 128. Additionally, the threshold value for choosing a bounding box was optimized to be 0.55. Using R-CNN as a framework with Cuda-ConvNet yielded lower accuracy, as can be seen from Fig. 8, where only two defects out of 16 were detected. The Alexnet bounding boxes, however, are better localized, such that only the defective area is bounded. The results of Faster R-CNN, shown in Fig. 9, are better, as more defects were detected compared to the R-CNN framework. The Faster R-CNN framework was trained for 40 epochs with a learning rate of 0.001 and a batch size of 10; the batch size was reduced due to the limitation of the processing resources. As in the case of R-CNN, utilizing Alexnet yields more accurate localization of the defects, while using Cuda-ConvNet resulted in some bounding boxes that contained portions of non-defective areas. The defects that are closer to the surface, or larger in size, have more contrast compared to the
Fig. 10. Detection results using Yolo object detector framework.
Fig. 11. Thermal contrast of 1.0 mm deep defect.
taken by the different pre-trained networks to fine-tune the classification layers' weights using our 200-image dataset. The time differs for different networks depending on their size: the larger the network,
Declaration of Competing Interest
deeper or smaller ones. Considering the Faster R-CNN result for the Alexnet network, we can observe that defects closer to the top-right corner are well detected and centered in their bounding boxes compared to those near the bottom-left corner: the defects on the right side are larger along each row than those on the left, and those on the top side are shallower than those on the bottom. Additionally, image-enhancement convolutional layers can be added as the first layers and trained to provide a tailored enhancement, obtaining the best defect detection for a specific input dataset. Lastly, Yolonet under the Yolo framework performs the best among the frameworks in terms of accurate defect detection and the number of defects detected, as shown in Fig. 10. However, Yolo takes the longest time in training due to its large network size, although its evaluation time is the shortest. Thus, in cases where the user can manage or sustain the relatively longer training time, Yolo may be the best framework to use. After detecting the defects using the Yolo framework, the center point of each bounding box was selected to extract the defect temperature Td. Similarly, a point outside the bounding box was selected to extract the sound (non-defective) surface temperature Ts. Using Eq. (1), one can then plot the thermal contrast curve, as shown in Fig. 11. The thermal contrast curves were also de-noised using a Savitzky-Golay filter; other filter types can be used, including a Gaussian filter or a linear-regression-based filter. The process is then repeated for all detected defects, obtaining a thermal contrast curve for each defect. These thermal contrast curves are then fed into a Deep Feed-forward network to estimate the defect depth.
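The Savitzky-Golay de-noising step can be reproduced with SciPy's `scipy.signal.savgol_filter`; the synthetic curve, noise level, window length, and polynomial order below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 100)
clean = np.exp(-t) - np.exp(-1.5 * t)               # idealized contrast shape
noisy = clean + rng.normal(scale=0.01, size=t.size)  # add detector noise

# Savitzky-Golay: least-squares fit of a low-order polynomial inside a
# sliding window; it preserves the peak better than a plain moving average.
smooth = savgol_filter(noisy, window_length=11, polyorder=3)
```

The filter keeps the curve length unchanged and pulls the samples back toward the underlying smooth contrast curve, which matters here because the peak's position and amplitude feed the depth estimator.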
The architecture and training parameters of this Deep Feed-forward network, together with specific implementation issues, are all explained in our recent work [47], which also validates this NN implementation both deterministically (using a noise-free Comsol® simulation) and experimentally. Table 3 shows the defect depths estimated by this network, and the results are promising: the accuracy of the network is nearly 88% in the worst case.
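A minimal sketch of such a feed-forward depth regressor is shown below, trained on synthetic contrast curves. The data model (deeper defects give weaker contrast), the network shapes, and the hyperparameters are hypothetical stand-ins; the actual architecture is the one described in [47].

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: each row is a coarsely sampled thermal contrast
# curve; the target is the defect depth that generated it (a crude model
# where deeper defects produce weaker contrast -- for illustration only).
t = np.linspace(0.1, 4.0, 20)
depths = rng.uniform(0.25, 2.0, size=(200, 1))          # depths in mm
X = np.exp(-t[None, :] * depths) / depths               # fake contrast curves
y = depths

# One-hidden-layer feed-forward regressor trained with plain gradient descent
W1 = rng.normal(scale=0.1, size=(20, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)

def mse():
    h = np.maximum(0.0, X @ W1 + b1)        # ReLU hidden layer
    return float(np.mean((h @ W2 + b2 - y) ** 2))

loss_before = mse()
lr = 0.01
for _ in range(1000):
    h = np.maximum(0.0, X @ W1 + b1)
    pred = h @ W2 + b2
    g = 2.0 * (pred - y) / len(X)           # d(MSE)/d(pred)
    gh = (g @ W2.T) * (h > 0.0)             # backprop through the ReLU
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
loss_after = mse()
```

Regression (rather than classification) is the right output head here because depth is a continuous quantity; the training loop simply drives the mean-squared depth error down.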
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Supplementary material

Supplementary data to this article can be found online at https://doi.org/10.1016/j.infrared.2019.103048.

References

[1] C. Meeks, E. Greenhalgh, B.G. Falzon, Stiffener debonding mechanisms in post-buckled CFRP aerospace panels, Compos. A Appl. Sci. Manuf. 36 (2005) 934–946.
[2] G. Marsh, Composites conquer with carbon supercars, Reinf. Plast. 50 (2006) 20–24.
[3] C.C. Ciang, J.-R. Lee, H.-J. Bang, Structural health monitoring for a wind turbine system: a review of damage detection methods, Meas. Sci. Technol. 19 (2008) 122001.
[4] H. Ullah, A.R. Harland, V.V. Silberschmidt, Dynamic bending behaviour of woven composites for sports products: experiments and damage analysis, Mater. Des. 88 (2015) 149–156.
[5] N.P. Avdelidis, B.C. Hawtin, D.P. Almond, Transient thermography in the assessment of defects of aircraft composites, NDT E Int. 36 (2003) 433–439.
[6] S. Sfarra, C. Ibarra-Castanedo, N.P. Avdelidis, M. Genest, L. Bouchagier, D. Kourousis, et al., A comparative investigation for the nondestructive testing of honeycomb structures by holographic interferometry and infrared thermography, Journal of Physics: Conference Series, vol. 214, IOP Publishing, 2010, p. 012071.
[7] A. Dillenz, T. Zweschper, G. Busse, Progress in ultrasound phase thermography, Thermosense XXIII, vol. 4360, International Society for Optics and Photonics, 2001, pp. 574–580.
[8] M. Grosso, J.E. Lopez, V.M. Silva, S.D. Soares, J.M. Rebello, G.R. Pereira, Pulsed thermography inspection of adhesive composite joints: computational simulation model and experimental validation, Compos. B Eng. 106 (2016) 1–9.
[9] J.-C. Krapez, X. Maldague, P. Cielo, Thermographic nondestructive evaluation: data inversion procedures, Res. Nondestruct. Eval. 3 (1991) 101–124.
[10] N. Saeed, M.A. Omar, Y. Abdulrahman, S. Salem, A. Mayyas, IR thermographic analysis of 3D printed CFRP reference samples with back-drilled and embedded defects, J. Nondestruct. Eval. 37 (2018) 59.
[11] D.M. McCann, M.C. Forde, Review of NDT methods in the assessment of concrete and masonry structures, NDT E Int. 34 (2001) 71–84.
[12] A. Manohar, F.L. di Scalea, Modeling 3D heat flow interaction with defects in composite materials for infrared thermography, NDT E Int. 66 (2014) 1–7.
[13] N. Rajic, Principal component thermography for flaw contrast enhancement and flaw depth characterisation in composite structures, Compos. Struct. 58 (2002) 521–528.
[14] F. Ciampa, P. Mahmoodi, F. Pinto, M. Meo, Recent advances in active infrared thermography for non-destructive testing of aerospace components, Sensors 18 (2018) 609.
[15] M. Pilla, M. Klein, X. Maldague, A. Salerno, New absolute contrast for pulsed thermography, Proc. QIRT, vol. 5, 2002.
[16] C. Meola, G.M. Carlomagno, Impact damage in GFRP: new insights with infrared thermography, Compos. A Appl. Sci. Manuf. 41 (2010) 1839–1847.
[17] S.M. Shepard, J.R. Lhota, B.A. Rubadeux, D. Wang, T. Ahmed, Reconstruction and enhancement of active thermographic image sequences, Opt. Eng. 42 (2003) 1337–1343.
[18] S.M. Shepard, Advances in pulsed thermography, Thermosense XXIII, vol. 4360, International Society for Optics and Photonics, 2001, pp. 511–516.
[19] Q. Tang, J. Dai, C. Bu, L. Qi, D. Li, Experimental study on debonding defects detection in thermal barrier coating structure using infrared lock-in thermographic technique, Appl. Therm. Eng. 107 (2016) 463–468.
[20] S.S. Pawar, V.P. Vavilov, Applying the heat conduction-based 3D normalization and thermal tomography to pulsed infrared thermography for defect characterization in composite materials, Int. J. Heat Mass Transf. 94 (2016) 56–65.
[21] M. Omar, M. Hassan, K. Saito, Optimizing thermography depth probing with a dynamic thermal point spread function, Infrared Phys. Technol. 46 (2005) 506–514, https://doi.org/10.1016/j.infrared.2005.02.002.
[22] B. Li, L. Ye, E. Li, D. Shou, Z. Li, L. Chang, Gapped smoothing algorithm applied to defect identification using pulsed thermography, Nondestruct. Testing Eval. 30 (2015) 171–195.
[23] M. Omar, M.I. Hassan, K. Saito, R. Alloo, IR self-referencing thermography for detection of in-depth defects, Infrared Phys. Technol. 46 (2005) 283–289.
[24] R. Marani, D. Palumbo, V. Renò, U. Galietti, E. Stella, T. D'Orazio, Modeling and classification of defects in CFRP laminates by thermal non-destructive testing, Compos. B Eng. 135 (2018) 129–141.
[25] R. Marani, D. Palumbo, U. Galietti, E. Stella, T. D'Orazio, Enhancing defects characterization in pulsed thermography by noise reduction, NDT E Int. 102 (2019) 226–233.
[26] D. Cireşan, U. Meier, J. Schmidhuber, Multi-column deep neural networks for image classification, arXiv preprint arXiv:1202.2745, 2012.
[27] D. Maturana, S. Scherer, VoxNet: a 3D convolutional neural network for real-time
7. Conclusion

This research paper proposed a two-step, fully autonomous post-processor for thermography defect detection and depth estimation. The presented study demonstrated the proposed processor's capabilities using an artificial 3D-printed CFRP sample with embedded air pockets of different sizes, shapes, and depths. The analysis comprised two neural networks: a CNN to detect the defects based on an object detection scheme and an intensive training process, and a deep feed-forward network to estimate the defect depth. This combined approach highlights the capabilities of Artificial Intelligence tools in thermography processing. The reported accuracies in detection and depth estimation are very promising and match the requirements for industrial quality control. The proposed processing scheme leverages several data science techniques in deep learning, including dataset augmentation and transfer learning, and several object detection techniques were compared quantitatively for different CNN models. The defect detection system described in this work can detect embedded defects with high accuracy without any human intervention; thus it may be helpful for autonomous inspection routines, such as inspecting ships' ballast tanks or remote, inaccessible objects or targets. Future work could extend the scope of the trained networks to detect other types of defects in composite structures or other host materials. The proposed scheme focused on single-class detection; however, the system may be extended to detect and distinguish a range of different defect types (impact damage, delaminations, etc.). The proposed system is relatively accurate, but it does require a complex and computationally intensive training process; thus, future work could also focus on developing a standardized method of representation, making it easier to distribute the trained models.