Structural damage localization using probabilistic neural networks

Mathematical and Computer Modelling 54 (2011) 965–969
Peng Li
School of Mechanical and Electronical Engineering, East China Jiaotong University, Nanchang, 330013, PR China

Article history: Received 13 August 2010; accepted 4 November 2010
Keywords: Damage localization; probabilistic neural networks; smoothing parameter

Abstract

In this paper, structural damage on a simple composite plate specimen is localized using probabilistic neural networks. First, the categories to be identified are defined according to the structural location, and their number is reduced by grouping neighboring elements into one category. Second, the state data of the damaged structure are collected by a data collection system and used as feature vectors for the probabilistic neural network. Finally, the smoothing parameter of the probabilistic neural network is studied. When the trained network is presented with the measured response, it should be able to locate existing damage. The effectiveness of the proposed method is demonstrated.

1. Introduction

Structural damage may occur as a result of normal operations, deterioration or severe natural events. Any damage or crack can seriously affect structural stability and integrity [1], and a belated discovery of structural failure requires expensive remedial measures [2]. Thus, an appropriate damage localization method that assesses the likelihood of structural failure is needed to provide an alert as early as possible.

Previous studies have applied artificial neural networks (ANNs) with the back-propagation (BP) learning algorithm to structural damage localization [3,4]. Such models provide a good knowledge acquisition tool for damage localization, but they do not address the real-time problem. Recent developments in artificial neural networks have produced a type of network capable of very fast training on real-time problems: the probabilistic neural network (PNN) [5,6]. PNNs are a class of neural networks that implement a Bayesian decision strategy for pattern classification problems. A PNN requires less training time than a BP network and generalizes easily to new patterns; for a given level of performance, the speedup over BP is substantial [7].

This paper applies a probabilistic neural network to locating structural damage, based upon a set of data collected from an optical fiber sensor network and digital signal processing (DSP). The probabilistic neural network model and its architecture are described, and the smoothing parameter of the PNN is studied using the set of collected data.

2. Materials and methods

2.1. Probabilistic neural network

The probabilistic neural network model, described by Specht [8], is a neural implementation of the Parzen [9] windows probability density approximation method and is mainly suited to classification problems. Parzen's method is an attractive estimation procedure, as it is a fast and straightforward way of learning probabilities from a training set, and its nonparametric structure leads naturally to a neural implementation.





Fig. 1. Schematic diagram showing the data collection system.

Fig. 2. Photographs of the data collection system.

Consider a pattern vector X with m dimensions that belongs to one of two categories K1 and K2. Let F1(X) and F2(X) be the probability density functions (pdfs) of category K1 and category K2, respectively. From the Bayes decision rule, X belongs to K1 if (1) is true, and belongs to K2 if (1) is false:

    F_1(X) L_2 P_1 > F_2(X) L_1 P_2,    (1)

where L1 is the loss (cost) associated with misclassifying the vector as belonging to category K1 while it belongs to category K2; L2 is the loss associated with misclassifying the vector as belonging to category K2 while it belongs to category K1; P1 is the prior probability of occurrence of category K1; and P2 is the prior probability of occurrence of category K2. In many situations, the loss functions and the prior probabilities can be considered equal. Hence the key to using the decision rule given by (1) is to estimate the pdfs [10].

In the PNN, a nonparametric estimation technique known as Parzen windows is used to construct the class-dependent pdf for each classification category required by Bayes' theory. Combining the pdfs of the categories, the PNN selects the most likely category for the given pattern vector. Both Bayes' theory and Parzen windows are theoretically well established, have been in use for decades in many engineering applications, and are treated at length in several statistical textbooks. If the jth training sample of category K1 is Xj, then the Parzen estimate of the pdf of category K1 is

    F_1(X) = \frac{1}{(2\pi)^{m/2} \sigma^{m} n} \sum_{j=1}^{n} \exp\left[ -\frac{(X - X_j)^{T} (X - X_j)}{2\sigma^{2}} \right],    (2)

where n is the total number of training samples in category K1, m is the input vector dimension, j is the index of the training sample in category K1, and σ is an adjustable smoothing parameter.
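For illustration, Eqs. (1) and (2) translate almost directly into code. The following is a minimal NumPy sketch, assuming equal losses and equal priors as discussed above; the function names and array shapes are illustrative and not taken from the paper.

import numpy as np

def parzen_pdf(x, samples, sigma):
    """Parzen-window estimate of a class pdf at x, as in Eq. (2).

    x       : pattern vector of dimension m
    samples : (n, m) array of training samples of one category
    sigma   : adjustable smoothing parameter
    """
    n, m = samples.shape
    sq_dist = np.sum((samples - x) ** 2, axis=1)           # (X - Xj)^T (X - Xj) for each j
    norm = (2.0 * np.pi) ** (m / 2.0) * sigma ** m * n
    return np.sum(np.exp(-sq_dist / (2.0 * sigma ** 2))) / norm

def bayes_decide(x, samples_k1, samples_k2, sigma):
    """Two-category Bayes decision of Eq. (1) with equal losses and equal priors."""
    return 1 if parzen_pdf(x, samples_k1, sigma) > parzen_pdf(x, samples_k2, sigma) else 2

With unequal losses or priors, the two pdf values would simply be weighted by L2 P1 and L1 P2 before the comparison, as in Eq. (1).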

2.2. Sample data collection

Figs. 1 and 2 depict the major hardware modules of the data collection system, which consists of an optical fiber network, photoelectric conversion, a central data processing unit, and a host computer. In this study, a total of eight fiber-optic microbend sensors are bonded orthogonally to the structural surface and incorporated into the optical fiber network to provide structural state information (Fig. 3). The optical fibers transmit the state information back to the central data processing unit through photoelectric and A/D conversion. The central data processing unit is a DSP device (TMS320LF2407) in which the signal is filtered and analyzed, and it communicates with the host computer through the DSP device's serial communications interface (SCI) using a standard RS232 protocol. The system monitors the structural state and provides the data used as the input vectors for the probabilistic neural network.
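The paper does not specify the SCI message format, baud rate, or host-side software, so the fragment below is only a hypothetical sketch of how the host computer might read one 8-element state vector over RS232, assuming the pyserial package and one comma-separated line per measurement.

import serial  # pyserial

# Port name, baud rate and line format are assumptions for illustration only;
# they are not specified in the paper.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
    line = port.readline().decode("ascii").strip()        # e.g. "0.12,0.08,...,0.31"
    state_vector = [float(v) for v in line.split(",")]    # 8-dimensional input vector X
    assert len(state_vector) == 8, "one value per microbend sensor expected"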


Fig. 3. Optical fiber network and categories for damage localization.

Fig. 4. Probabilistic neural network structure for damage localization.

2.3. PNN model construction

2.3.1. Structure of a PNN for damage localization

In this study, the application of a PNN to damage localization is demonstrated on a simple composite plate specimen. The categories are defined according to the location of the damaged structural members (Fig. 3), each category representing one structural damage location. To reduce the number of categories to be identified, a total of 16 categories (θ1, θ2, θ3, ..., θ16) are considered in the PNN. The data collected from the data collection system described above are represented by 8-dimensional vectors (X = {x1, x2, ..., x8}) and presented to the PNN as input samples.

Fig. 4 shows the basic design of the PNN used for damage localization. The PNN is a three-layer network whose structure directly reflects the Bayesian criterion applied to the Parzen estimation method. The input, hidden and output layers each consist of a number of sensory units, and information passes from the input layer through the hidden layer to the output layer, with the signal processed by the neurons of one layer and passed to the neurons of the succeeding layer. The main role of the input layer is to map all signals into the hidden layer; in this study, the input layer has 8 neurons. The neurons in the hidden layer map the nonlinear relations between the input and output values, which gives PNN models better performance than other models. There is one hidden neuron per training sample: the hidden neuron Xaj corresponds to the jth training sample (j = 1, ..., na) in category θa. The output of the hidden neuron Xaj with respect to X is expressed as

    d_{aj}(X) = \frac{1}{(2\pi)^{4} \sigma^{8}} \exp\left[ -\frac{(X - X_{aj})^{T} (X - X_{aj})}{2\sigma^{2}} \right],    (3)

where σ denotes the smoothing parameter. The output layer has 16 neurons for the 16 categories, respectively. The ath output is formed as

    f_a(X) = \frac{1}{n_a} \sum_{j=1}^{n_a} d_{aj}(X),    (4)

where fa(X) is the value of the pdf of category θa; na is the total number of training samples in category θa.
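As a concrete illustration of Eqs. (3) and (4), a minimal PNN for this 16-category, 8-dimensional problem can be written as follows. The class and method names are illustrative, not from the paper; note that "training" simply stores the labelled samples, since the PNN needs no iterative learning.

import numpy as np

class PNN:
    """Minimal probabilistic neural network sketch for 16 categories and 8-dimensional inputs."""

    def __init__(self, sigma):
        self.sigma = sigma          # smoothing parameter
        self.samples = {}           # category label -> (n_a, m) array of training samples

    def train(self, X, y):
        """Store each training sample under its category label (no iterative learning)."""
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        for a in np.unique(y):
            self.samples[a] = X[y == a]

    def category_pdf(self, x, a):
        """f_a(x): mean of the hidden-neuron outputs d_aj(x), as in Eqs. (3)-(4)."""
        Xa = self.samples[a]                                   # training samples of category a
        m = Xa.shape[1]
        sq_dist = np.sum((Xa - np.asarray(x, dtype=float)) ** 2, axis=1)
        d = np.exp(-sq_dist / (2.0 * self.sigma ** 2)) / \
            ((2.0 * np.pi) ** (m / 2.0) * self.sigma ** m)     # Eq. (3)
        return d.mean()                                        # Eq. (4)

Because m = 8 here, the normalizing factor (2π)^{m/2} σ^m in the sketch equals the (2π)^4 σ^8 constant of Eq. (3).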


Table 1
Damage localization accuracy versus the smoothing parameter σ. Columns θ1–θ16 give the damage localization accuracy (%) for the 16 damage categories.

σ      θ1   θ2   θ3   θ4   θ5   θ6   θ7   θ8   θ9   θ10  θ11  θ12  θ13  θ14  θ15  θ16   Average
0.1    96   96   96   91   96   96   94   90   87   98   91   93   92   93   95   93    91.9
0.2    97   92   97   94   93   94   91   90   87   92   92   92   92   92   92   92    93.1
0.25   97   88   96   95   94   95   90   88   85   92   92   92   92   92   92   92    93.2
0.3    93   94   95   90   91   90   95   89   86   92   92   92   92   92   92   92    92.9
0.4    97   87   95   95   94   96   90   87   83   92   92   92   92   92   92   92    92.4
0.8    97   86   95   95   94   96   90   86   83   92   92   92   92   92   92   92    92.3
1.6    97   85   95   95   94   96   90   86   83   92   92   92   92   92   92   92    92.2

The final output classifications are determined by the values of the output layer generated by the PNN model. The network produces activation in the output layer corresponding to the pdf estimated for each category, and the highest output represents the most probable category. The output layer classifies the sample X to the category k which satisfies

    k = \arg\max \{ f_a(X) \mid a = 1, 2, \ldots, 16 \}.    (5)
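Continuing the sketch given after Eq. (4), the decision rule of Eq. (5) reduces to a single arg max over the 16 category pdfs; the method below could be added to that hypothetical PNN class.

    # Added to the hypothetical PNN class sketched after Eq. (4):
    def classify(self, x):
        """Eq. (5): assign x to the category k whose estimated pdf f_a(x) is largest."""
        return max(self.samples, key=lambda a: self.category_pdf(x, a))

    # Example use (hypothetical data):
    #   model = PNN(sigma=0.25); model.train(X_train, y_train); k = model.classify(x_new)

With equal prior probabilities and equal losses, as assumed in this study, this arg max is the Bayes decision of Eq. (1) extended to 16 categories.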

It is clear that the PNN classifies a sample by comparing a set of pdfs of the sample conditioned on the different categories, where each pdf is constructed using Parzen windows. The PNN determines which category a pattern belongs to by measuring how far the given sample is from the patterns of the training set in the 8-dimensional space. In this study, the a priori probability of each category is the same, and the loss associated with making an incorrect decision is the same for every category.

2.3.2. Determination of the smoothing parameter

There is no general method available for determining the smoothing parameter; in practice its value is usually found by trial and error. A numerical example analysis was therefore performed on a simple composite plate specimen with 16 damage categories (θ1, θ2, θ3, ..., θ16) to evaluate the performance of the PNN for damage localization with different values of the smoothing parameter σ. Training and test samples belonging to a certain category were generated by perturbing the element in that category. Approximately 70% of the data sets were used for training; the remaining portion was used for testing. Seven different values of σ (0.1, 0.2, 0.25, 0.3, 0.4, 0.8, and 1.6) were adopted in order to evaluate the test patterns. Table 1 shows the estimation results of the PNN for all test patterns using these values of σ. It can be seen that the value of σ affects the estimation accuracy of the PNN, and that the smoothing parameter σ = 0.25 provides the smallest estimation error.

3. Results and discussion

The performance of the PNN depends on a proper choice of the smoothing parameter σ. If σ is small, individual training patterns are considered only in isolation and the model degenerates into a nearest-neighbor classifier; if σ is large, details of the density are blurred together. The investigation in Section 2.3.2 shows that the PNN with the smoothing parameter σ = 0.25 gives a good performance, with high damage localization accuracy. Table 1 also shows that the model with the other smoothing parameters still provides an acceptable error. Two reasons explain this result: (1) the orthogonal arrangement of the optical fiber network provides a valuable feature vector that reflects the structural state, and (2) only single-damage situations are considered in this study.

In order to evaluate the performance of the PNN, a learning vector quantization (LVQ) neural network was applied to the same data sets as the PNN model, and the results of the two models were compared. The damage localization accuracy obtained by the LVQ neural network is 81.2%, which is lower than that obtained by the PNN. Compared with the LVQ neural network, the speedup of the PNN is about 73:1, since the PNN does not require an error-correction learning algorithm. The PNN approach therefore gives better localization performance than the LVQ neural network.
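The trial-and-error selection of σ described in Section 2.3.2 amounts to a small grid search over the candidate values. A minimal sketch using the hypothetical PNN class from the earlier sketches is given below; the 70/30 split and the candidate σ values follow the paper, while the function names and data variables are illustrative.

import numpy as np

def localization_accuracy(model, X_test, y_test):
    """Fraction of test patterns assigned to their correct damage category."""
    predicted = [model.classify(x) for x in np.asarray(X_test, dtype=float)]
    return float(np.mean(np.asarray(predicted) == np.asarray(y_test)))

def select_sigma(X_train, y_train, X_test, y_test,
                 candidates=(0.1, 0.2, 0.25, 0.3, 0.4, 0.8, 1.6)):
    """Return the candidate sigma with the highest test accuracy."""
    best_sigma, best_acc = None, -1.0
    for sigma in candidates:
        model = PNN(sigma)                     # hypothetical PNN class sketched in Section 2.3.1
        model.train(X_train, y_train)
        acc = localization_accuracy(model, X_test, y_test)
        if acc > best_acc:
            best_sigma, best_acc = sigma, acc
    return best_sigma, best_acc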
4. Conclusion

A promising technique for applying a PNN to the structural damage localization of a simple composite plate specimen has been presented in this paper. The structural state information is collected by a data collection system consisting of an optical fiber network, photoelectric conversion, a DSP, and a host computer. The collected data are utilized as the input feature vectors for damage localization. The category is defined according to the location of the damaged structural members. The effectiveness of the proposed method is demonstrated by comparison with the LVQ neural network method. The results can be summarized as follows.


(1) The effect of the smoothing parameter σ on the performance of the PNN is studied. The results show that a good selection of the smoothing parameter (σ = 0.25) boosts the accuracy of the network.
(2) The probabilistic neural network shows good estimation results in localizing single damage.
(3) Probabilistic neural networks are easy to implement, since the training process simply allocates the training patterns to their categories. The probabilistic neural network technique is therefore more effective for damage localization than conventional neural networks such as the LVQ neural network.
(4) The main limitation of this study is that the number of categories may not include all types of structural damage. Further investigation is needed to explore this problem.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 61065002), the Research Foundation of ECJTU (No. 09102005), and the Youth Science Foundation of the Education Department of Jiangxi.

References

[1] F. Capezzuto, F. Ciampa, G. Carotenuto, A smart multifunctional polymer nanocomposites layer for the estimation of low velocity impact damage in composite structures, Composite Structures 92 (8) (2010) 1913–1919.
[2] S.G. Pierce, F. Dong, K. Atherton, B. Culshaw, et al., Damage assessment in smart composite structures: the DAMASCOS programme, Air & Space Europe 3 (3–4) (2001) 132–138.
[3] T. Roy, D. Chakraborty, Optimal vibration control of smart fiber reinforced composite shell structures using improved genetic algorithm, Journal of Sound and Vibration 319 (1–2) (2009) 15–40.
[4] K.V. Yuen, H.F. Lam, On the complexity of artificial neural networks for smart structures monitoring, Engineering Structures 28 (7) (2006) 977–984.
[5] J.J. Lee, D. Kim, S.K. Chang, An improved application technique of the adaptive probabilistic neural network for predicting concrete strength, Computational Materials Science 44 (3) (2009) 988–998.
[6] C.Y. Tsai, An iterative feature reduction algorithm for probabilistic neural networks, Omega 28 (5) (2000) 513–524.
[7] F. Ancona, A.M. Colla, S. Rovetta, Implementing probabilistic neural networks, Neural Computing & Applications 5 (3) (1997) 152–159.
[8] D.F. Specht, Probabilistic neural networks, Neural Networks 3 (1) (1990) 109–118.
[9] E. Parzen, On estimation of a probability density function and mode, Annals of Mathematical Statistics 33 (3) (1962) 1065–1076.
[10] A.T.C. Goh, Probabilistic neural network for evaluating seismic liquefaction potential, Canadian Geotechnical Journal 39 (1) (2002) 219–232.