Fault diagnosis of electronic analog circuits using a radial basis function network classifier


Measurement 28 (2000) 147–158 www.elsevier.com/locate/measurement

Marcantonio Catelani a,*, Ada Fort b

a Dipartimento di Ingegneria Elettronica, Università di Firenze, via Santa Marta 3, Firenze, Italy
b Dipartimento di Ingegneria dell'Informazione, Università di Siena, via Roma 56, Siena, Italy
Received 27 July 1999; received in revised form 16 January 2000; accepted 28 January 2000

Abstract

In this paper a fault diagnosis technique which employs neural networks to analyze signatures of analog circuits is proposed. Radial basis function networks (RBFNs) are used to process circuit input–output measurements and to perform soft-fault location. Both noise and the effect of parameter variations within the tolerance ranges of non-faulty components are taken into account. The network is trained with circuit signatures, obtained by measuring and coding both circuit input and output signals, which are contained in a 'fault dictionary'. In this context the RBFN architecture is selected because it is able to cope with 'new fault' conditions not well represented in the fault dictionary used for network training. The RBFN classifier was applied to linear and non-linear sample circuits, considering faults both at sub-system level and at component level. Simulations and experimental results show that the developed nets succeeded in classifying faults. In many cases, nets trained with single faults also detected multiple faults. © 2000 Published by Elsevier Science Ltd. All rights reserved.

Keywords: Radial basis function networks; Fault diagnosis; Neural classifiers

1. Introduction

Fault diagnosis represents a fundamental task in the preventive maintenance of electronic circuits and systems. In the standard CEI IEC 50 (191) [1], maintenance is defined as 'all actions necessary for retaining an item (system, sub-system or component) in or restoring it to a specified condition'. Maintenance actions therefore involve, besides servicing and inspections, fault diagnosis in both of its aspects: the detection of a fault condition that makes the system behave outside the desired specifications,

*Corresponding author. Tel.: +39-55-479-6377; fax: +39-55-494-569. E-mail address: [email protected] (M. Catelani)

and the subsequent location and identification of the failure. On the basis of such considerations, our attention is focused on fault detection. While for digital circuits many diagnosis methodologies based on the implementation of automatic test systems have been developed in the past, for analog and mixed-signal circuits efficiency decreases owing to the lack of fault models, and the proposed methods are, in general, quite complex [2,3]. In fact, faults in analog circuits are not only related to on–off conditions but also to abnormal deviations of the behavior of the circuit (or of some circuit components) from the nominal operating conditions (i.e. from the tolerance range). These kinds of faults are classified as 'soft faults' or

0263-2241/00/$ – see front matter © 2000 Published by Elsevier Science Ltd. All rights reserved. PII: S0263-2241(00)00008-7


M. Catelani, A. Fort / Measurement 28 (2000) 147 – 158

parametric faults; they do not change the circuit topology and are due to variations of the value of some circuit components. Thus the ensemble of possible fault conditions is a continuous set containing infinitely many elements. Moreover, noise and variations of the values of non-faulty components within their tolerance ranges must be taken into account. In this context a 'fault dictionary' made up of fault examples cannot be complete, and the diagnostic system must be able to generalize from examples, i.e. to interpolate the information contained in the fault dictionary. As a consequence, the application of neural-network-based techniques for fault detection and classification is particularly promising. In fact, neural networks have proved to be very efficient when dealing with problems involving poorly defined system models, noisy signals and non-linear behaviors. In this paper a fault diagnosis technique based on neural-network analysis of signatures is presented; this technique is a development of the diagnosis system proposed in Ref. [4]. The concept of accessibility is assumed, that is, the property of the nodes of the circuit under test (CUT) of being both controllable and observable [5]. A node is said to be controllable if an excitation signal (a test stimulus) can be directly injected into it, and observable if it can be accessed in order to perform voltage and/or current measurements. In order to implement the automatic diagnosis system we consider a radial basis function network (RBFN), a particular neural net that allows us to process input–output measurements and to perform fault location. The network is trained by means of a previously constructed 'fault dictionary' containing examples of fault signatures. The RBFN architecture is selected because of its advantages over other commonly employed networks, such as traditional classifiers based on multi-layer perceptrons trained with the back-propagation algorithm [6].
The main advantage is the capability of RBFNs to cope with 'new fault' conditions, i.e. faults which are not well represented in the fault dictionary used for network training: the use of RBFNs reduces the risk of obtaining false classifications. In fact, an index of the novelty of the fault can be obtained as an output of the net, and this can be used as a measure of the efficiency of the classification. Advantages in the reduction of training time,

generality and simplicity of the network architecture can also be found. The paper is organized as follows: Section 2 introduces the RBFN architecture and its capabilities when applied to the diagnosis of analog electronic systems; the framework of analog-circuit diagnosis is also outlined there. Section 3 describes the proposed technique in more detail, with particular reference to the implementation of the fault-signature dictionary and of the automatic diagnosis system. In order to prove the capabilities of RBFNs, fault diagnosis of linear and non-linear analog circuits is performed, and the results so obtained are presented in the paper.

2. Theoretical background

2.1. Radial basis function and network structure

Radial basis function networks have a three-layer architecture with no feedback, as shown in Fig. 1. The input layer is made up of N nodes (N being the dimension of the input vector x = (x_1, x_2, ..., x_N) ∈ R^N); their connections to the hidden nodes are not weighted and implement a fan-out of the input components to the hidden layer. This last consists of H hidden neurons (radial basis units) with radial activation functions. A typical choice for this function is the Gaussian function, which has a peak at the center c and decreases monotonically as the distance from the center increases. So, the output of the h-th hidden neuron, a_h(x), is a radial basis function that defines a

Fig. 1. Structure of the radial basis function network.

M. Catelani, A. Fort / Measurement 28 (2000) 147 – 158

spherical receptive field in R^N, given by the following equation:

a_h(x) = exp(−‖x − c_h‖² / σ_h²),   h = 1, ..., H.   (1)

In other words, each neuron in the hidden layer has a substantially finite spherical activation region, determined by the Euclidean distance between the input vector x and the center c_h of the function a_h(x), normalized with respect to the scaling factor σ_h. Obviously, the radius of the Gaussian spherical receptive field shrinks as the scaling factor σ_h decreases. From Eq. (1) we can deduce that each hidden neuron is associated with N + 1 internal parameters: the N components of the vector c_h, which represents the N-dimensional position of the radial function, and σ_h, a 'distance scaling parameter' which determines the receptive field of the neuron, that is, the region of the input space over which the neuron has an appreciable response. The set of hidden neurons is designed so that they cover all the significant regions of the input vector space. The output layer is made up of M linear summation units, linked to the hidden layer by weighted connections w_mh. Hence the network output is a vector y(x) = (y_1(x), y_2(x), ..., y_M(x)) ∈ R^M, where the m-th component y_m(x) is given by the following equation:

y_m(x) = Σ_{h=1}^{H} w_mh a_h(x),   m = 1, ..., M.   (2)

In this paper the network is used as a classifier; consequently, the dimension M of the output layer is equal to the number of fault classes to be detected. Similarly to the back-propagation classifier, the network training set is composed of input vectors, which represent circuit signatures, and target vectors, which represent the corresponding class labels. Target output vectors show a '1' in the position corresponding to the correct class and '0's elsewhere. Consequently, we assume the following classification criterion: input vector x belongs to class C_i if the output y_i(x) is the largest. The network has two operating modes: the training mode and the reference mode. During training, the adjustable internal parameters of the net (c_h, σ_h, and the output-layer weight matrix W) are set so as to minimize a function of the error between the actual network output and the desired output over the vectors in the training set [7]. In the reference phase, where the net is used to diagnose the CUT, input vectors are applied and output vectors are produced by the network.

The network can be trained in three sequential steps. At first, the positions of the centers c_h of the hidden-unit activation regions are assigned. Because training vectors tend to occur in clusters, a common approach is to find the center of each cluster and to locate a hidden neuron at that point, as shown in Fig. 2. To this aim we used the Fuzzy C-Means clustering algorithm [8], which finds the set of cluster centers and partitions the data into subsets by minimizing the following cost functional E(H, f):

E(H, f) = Σ_{h=1}^{H} Σ_{k=1}^{K} (B_hk)^f D_kh²   (3)

where D_kh = ‖x_k − c_h‖ is the Euclidean distance between cluster center c_h and input training vector x_k, the x_k being the K examples in the training set. In Eq. (3) we assume f > 1 (f is the fuzziness index). B_hk is an H × K matrix (H is the number of clusters), called the partition or membership matrix. In contrast with deterministic methods, such as K-means clustering, the membership function can assume any value in [0, 1] to represent the confidence that a training vector belongs to a given cluster.
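The forward pass described by Eqs. (1) and (2), together with the largest-output classification criterion, amounts to a few lines of linear algebra. A minimal numpy sketch with toy centers, widths and weights (all values and names are illustrative, not from the paper):

```python
import numpy as np

def rbfn_forward(x, centers, sigmas, W):
    """Forward pass of an RBF network.

    x:       input vector, shape (N,)
    centers: hidden-unit centers c_h, shape (H, N)
    sigmas:  scaling factors sigma_h, shape (H,)
    W:       output weights w_mh, shape (M, H)
    Returns the output vector y(x), shape (M,).
    """
    # Eq. (1): Gaussian activation of each hidden unit
    d2 = np.sum((centers - x) ** 2, axis=1)   # ||x - c_h||^2
    a = np.exp(-d2 / sigmas ** 2)             # a_h(x)
    # Eq. (2): linear summation units
    return W @ a                              # y_m(x)

# Toy example: 2 inputs, 3 hidden units, 2 classes
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
sigmas = np.array([0.5, 0.5, 0.5])
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
y = rbfn_forward(np.array([1.0, 1.0]), centers, sigmas, W)
# classification criterion: x belongs to the class with the largest output
predicted_class = int(np.argmax(y))
```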


Fig. 2. Training set clustering.


A recursion is used to minimize the functional E(·) in Eq. (3); it can be described by the following steps:
• an initial set of H cluster centers is selected (random positions, or the H training examples characterized by the largest mutual distances);
• the matrix B_hk is evaluated as follows:

B_kh = 1 / Σ_{j=1}^{H} (D_kh / D_kj)^{2/( f−1)};   (4)

• given the matrix B_hk, the positions c_h of the cluster centers are updated according to the following equation:

c_h = Σ_{k=1}^{K} (B_hk)^f x_k / Σ_{k=1}^{K} (B_hk)^f,   h = 1, ..., H   (5)

and the new value of E(H, f) is evaluated;
• the previous steps are repeated until E(H, f) (or the variation ΔE of E(H, f)) goes under a predefined threshold.
The sum of the membership values corresponding to a given point in the data set must be equal to one, that is:

Σ_{h=1}^{H} B_hk = 1,   k = 1, 2, ..., K.   (6)
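The recursion of Eqs. (4)–(6) can be sketched in plain numpy. The initialization below follows the "examples with large mutual distances" option via a greedy farthest-point choice; the fixed iteration count in place of a ΔE threshold is a simplification of ours:

```python
import numpy as np

def _init_centers(X, H):
    """Pick H training examples with large mutual distances (greedy choice)."""
    idx = [0]
    for _ in range(H - 1):
        d = np.min(np.sum((X[:, None, :] - X[idx][None, :, :]) ** 2, axis=2), axis=1)
        idx.append(int(np.argmax(d)))
    return X[idx].astype(float).copy()

def fuzzy_c_means(X, H, f=2.0, n_iter=50, eps=1e-9):
    """Fuzzy C-Means (Eqs. (3)-(6)): returns centers c_h and membership matrix (H x K).

    X: training vectors, shape (K, N); H: number of clusters; f > 1: fuzziness index.
    """
    X = np.asarray(X, float)
    centers = _init_centers(X, H)
    for _ in range(n_iter):
        # squared distances D_kh^2 between examples and centers, shape (K, H)
        D2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2) + eps
        # Eq. (4): note (D_kh / D_kj)^(2/(f-1)) equals the ratio of squared
        # distances raised to 1/(f-1)
        B = 1.0 / np.sum((D2[:, :, None] / D2[:, None, :]) ** (1.0 / (f - 1.0)), axis=2)
        # Eq. (5): centers as means weighted by the memberships raised to f
        Bf = B ** f
        centers = (Bf.T @ X) / Bf.sum(axis=0)[:, None]
    return centers, B.T  # returned as H x K, as in the text
```

Each column of the returned membership matrix sums to one, i.e. the constraint of Eq. (6) holds by construction.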

The fuzzy approach is particularly convenient when the data set is not made up of separated classes; this is always the case in diagnostic problems applied to analog circuits, where the presence of noise can produce an overlap between classes.

The second step of the training process sets the values of the scaling factors σ_h, that is, the widths of the activation regions. This step is fundamental for the accuracy of the diagnosis system. The goal in setting the widths of the hidden units is to cover the input space so as to allow a smooth fit of the desired output. In this paper the σ_h value is determined by a p-nearest-neighbor heuristic, where, for the h-th hidden neuron, σ_h is the RMS distance between its center c_h and the centers c_j of its p nearest neighbors:

σ_h = ( (1/p) Σ_{j=1}^{p} ‖c_h − c_j‖² )^{1/2}.   (7)
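Eq. (7) is straightforward to implement once the centers are known; a numpy sketch (the grid of example centers is ours):

```python
import numpy as np

def rbf_widths(centers, p=3):
    """Eq. (7): sigma_h is the RMS distance from c_h to its p nearest neighbor centers."""
    D2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(D2, np.inf)          # exclude each center itself
    nearest = np.sort(D2, axis=1)[:, :p]  # p smallest squared distances per center
    return np.sqrt(np.mean(nearest, axis=1))

# Example: four centers on the corners of a unit square; each corner sees
# squared neighbor distances 1, 1, 2, hence sigma = sqrt(4/3) for all
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sigmas = rbf_widths(centers, p=3)
```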

In this work p is assumed equal to 3. The number of clusters (hidden units) H in Eq. (3) must be set a priori. We select H by evaluating the performance of the RBFN, i.e. by observing the classification error rate (the ratio between the number of classification errors and the total number of input vectors) during training and during normal net operation. In fact, too low H values give nets which cannot be trained adequately. On the other hand, very high H values (near to K, i.e. a hidden node for each training example) can give very low error rates during training but can produce a network unable to generalize, with a high error rate during normal operation.

Some problems can be better solved by using a partition of the net. In this case the clustering algorithm is applied separately to the input patterns belonging to each fault class used for the training process. This corresponds to substituting a supervised learning technique for the unsupervised training of the hidden layer, since the information contained in the target is embedded in the class separation. With this procedure, the positioning of some cluster centers in a mean position between data belonging to different classes can be avoided, and a greater class selectivity of the net can be obtained. In this case the number of hidden neurons H_i related to the i-th class is selected by repeating the clustering algorithm for a set of possible H_i (usually H_i < K_i / 3, K_i being the number of examples in the i-th class), and the choice of the 'optimum' H_i is performed on the basis of indexes that evaluate the performance of the clustering algorithm as H_i varies. Some commonly used indexes measure the compactness of the partition (Eq. (8.1)) or its entropy (Eq. (8.2)):

ind1 = Σ_{k=1}^{K} Σ_{h=1}^{H} B_hk²   (8.1)

ind2 = −Σ_{k=1}^{K} Σ_{h=1}^{H} B_hk log(B_hk).   (8.2)

In this paper an index which measures a combination of compactness and separation among clusters is used. This index is a modified version of the measure presented in Ref. [8] and is defined as follows:

ind3 = [ Σ_{k=1}^{K} Σ_{h=1}^{H} B_hk² ‖x_k − c_h‖² / (K min_{i,j; i≠j} ‖c_i − c_j‖²) ] (1 − 1/H³).   (8.3)

Fig. 3. RBFN for classifying faults.
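All three validity indexes can be computed directly from the membership matrix. A numpy sketch; note that the exact form of the correction factor in the modified Xie–Beni measure is our reading of the original, so treat the last index as an assumption:

```python
import numpy as np

def partition_indexes(B, X, centers):
    """Cluster-validity indexes of Eqs. (8.1)-(8.3).

    B: membership matrix, shape (H, K); X: data, shape (K, N);
    centers: cluster centers, shape (H, N).
    """
    H, K = B.shape
    ind1 = np.sum(B ** 2)                      # Eq. (8.1): compactness of the partition
    ind2 = -np.sum(B * np.log(B + 1e-12))      # Eq. (8.2): entropy of the partition
    # Eq. (8.3): modified Xie-Beni measure (the (1 - 1/H^3) factor is assumed)
    d2 = np.sum((X[None, :, :] - centers[:, None, :]) ** 2, axis=2)  # (H, K)
    num = np.sum(B ** 2 * d2)
    cd2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(cd2, np.inf)              # min over i != j only
    ind3 = num / (K * np.min(cd2)) * (1.0 - 1.0 / H ** 3)
    return ind1, ind2, ind3
```

For a perfectly crisp, perfectly compact partition, ind1 equals K while ind2 and ind3 vanish; fuzzier or more scattered partitions move all three away from these extremes.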

The index ind3 should be minimized by the 'optimum' number of clusters H; nevertheless, this index decreases as H grows, hence the optimum H is selected as the number of clusters at which ind3 changes suddenly.

After the centers and widths of the hidden units have been determined, in the last step of the training process the weight matrix W is evaluated. To this aim a supervised process is used: a least-squares regression that relates the target outputs to the hidden-node activations. Each training example is passed through the first layer and the hidden nodes to produce a corresponding hidden-node activation vector a_k = (a_1(x_k), a_2(x_k), ..., a_H(x_k))^T. The objective is to find the weights of Eq. (2) which minimize the squared norm of the residuals:

min ‖T − WA‖²   (9)

where T is the M × K matrix of training targets, A = (a_1, a_2, ..., a_K) is an H × K matrix, and W is the M × H weight matrix. In general the training of the RBFN is an order of magnitude faster than the

training of a comparably sized back-propagation network. On the other hand, RBFNs are locally based networks, which usually solve the same problem using a larger number of hidden-layer nodes than back-propagation networks.

When using a neural classifier, the network can classify accurately only those inputs that belong to classes that are well represented in the training set. Nevertheless, the finite extent of the activation regions of the radial basis units gives this network the ability to detect novel cases. In fact, a novel case will produce a very low output, since it does not belong to the activation region of any hidden unit. Novel cases will often occur in practice, because it is not possible, and not convenient, to produce a complete training set (that is, a fault dictionary) representing all the possible faults under all the possible conditions. While multi-layer perceptron networks construct a global approximation of the input–output mapping, RBFNs construct local approximations, because they use exponentially decaying localized non-linearities (Gaussians). In particular, RBFNs can be extended to estimate an index of the classification efficiency ρ(x) [9,10], which corresponds essentially to the probability density of the training data, and is found as follows:

ρ(x) = Σ_{h=1}^{H} a_h(x) ρ_h / ( Σ_{h=1}^{H} a_h(x) + 1 − max_h(a_h) )   (10)

where ρ_h represents the local data density associated with the hidden unit h, evaluated according to:

ρ_h = Σ_{k=1}^{K} a_h(x_k) / ( K (π^{1/2} σ_h)^N ).   (11)

Owing to the presence of the term max_h(a_h), ρ(x) is near to zero if the point x is far from every cluster center c_h. A large value of the index ρ(x) indicates that x belongs to a cluster that is densely populated by examples, hence to a class well represented in the training set. Novelty is highlighted by a low value of ρ(x), when x has a low probability of coming from the same distribution as the training data.
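Both the output-layer regression of Eq. (9) and the novelty index of Eqs. (10) and (11) reduce to a few lines of linear algebra; a numpy sketch (variable names and the toy data are ours):

```python
import numpy as np

def hidden_activations(X, centers, sigmas):
    """Eq. (1) applied to each training vector: returns A, shape (H, K)."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)  # (K, H)
    return np.exp(-d2 / sigmas ** 2).T

def train_output_weights(A, T):
    """Eq. (9): least-squares weights W minimizing ||T - W A||^2.

    A: hidden activations, (H, K); T: training targets, (M, K)."""
    # equivalent to solving A^T W^T = T^T in the least-squares sense
    return np.linalg.lstsq(A.T, T.T, rcond=None)[0].T

def local_density(X, centers, sigmas):
    """Eq. (11): local data density rho_h of each hidden unit."""
    K, N = X.shape
    A = hidden_activations(X, centers, sigmas)
    return A.sum(axis=1) / (K * (np.sqrt(np.pi) * sigmas) ** N)

def novelty_index(x, centers, sigmas, rho_h):
    """Eq. (10): classification-confidence / novelty index rho(x)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    a = np.exp(-d2 / sigmas ** 2)
    return np.sum(a * rho_h) / (np.sum(a) + 1.0 - np.max(a))

# Toy fault dictionary: two clusters, two classes
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
sigmas = np.array([1.0, 1.0])
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
T = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
A = hidden_activations(X, centers, sigmas)
W = train_output_weights(A, T)
rho_h = local_density(X, centers, sigmas)
```

An input lying on a cluster center yields a much larger ρ(x) than an input far from every center, which is exactly the novelty-rejection behavior described above.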


Fig. 4. (a) Circuit under test; (b) RBFN architecture.

2.2. Radial basis function networks for classifying faults

As said above, we use the RBFN for classifying faults, so the dimension M of the output layer is equal to the number of fault classes to be detected, and each output neuron corresponds to a fault class. The architecture for the fault diagnosis of the CUT is therefore deduced as shown in Fig. 3. Fault diagnosis is based on the analysis of the circuit input–output signals; an arbitrary level of diagnosis can be reached by adding further test points. To this aim, the detection of the faulty element causing the failure of the CUT is handled by the following steps.
• Define the fault conditions and level (component or sub-system), select the input nodes (controllable nodes) where a particular test stimulus is injected, and choose the output test points (observable nodes) where measurements of voltage and/or current can be made.
• Define a circuit signature capable of highlighting faults (sensitivity evaluation).
• Analyze the uniqueness of the signatures for the faults of interest; this step allows us to construct the fault dictionary used in the training phase of the RBFN.
• Define the classifier architecture according to the structure shown in Fig. 3.
As far as the last points are concerned, it must be added that not all faults are related to a fixed tolerance error. Equal variations of components'


behavior obviously give different effects on the observed output quantities. Sensitivity gives an indication of the difficulty of detecting a particular fault, and can be measured with the parameter Δx (net input variation):

Δx = Σ_{n=1}^{N} Δx_n   (12)

Δx_n being the absolute value of the variation of the n-th net input component between faulty and nominal conditions. A sensitivity analysis may give important hints for the selection of a particular classifier architecture, or for the building of the training set. In fact, when using RBFNs, high sensitivity to a circuit parameter corresponds to a fault that can be easily detected, if it is well represented in the training set. To obtain fault classification the signatures have to be sufficiently unique: the analysis of signal similarity can be performed by evaluating a measure S defined in Ref. [11]:

S = R_xy / (R_xx + R_yy − R_xy)   (13)

where x(t) and y(t) are the considered signals (here the circuit signatures) and R_xy = (1/T) ∫_0^T x(t) y(t) dt.
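Both quantities are easy to evaluate on sampled signatures. A small sketch, where the integral of the correlation R_xy is approximated by a discrete sum over uniformly spaced samples (an assumption of ours):

```python
import numpy as np

def net_input_variation(x_fault, x_nominal):
    """Eq. (12): Delta-x, the sum of the absolute variations of the net input
    components between faulty and nominal conditions."""
    return float(np.sum(np.abs(np.asarray(x_fault, float) - np.asarray(x_nominal, float))))

def similarity(x, y, dt=1.0):
    """Eq. (13): similarity S between two sampled signatures x(t) and y(t),
    with R_xy approximated as (1/T) * sum(x * y) * dt."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    T = len(x) * dt
    Rxy = np.sum(x * y) * dt / T
    Rxx = np.sum(x * x) * dt / T
    Ryy = np.sum(y * y) * dt / T
    return Rxy / (Rxx + Ryy - Rxy)
```

Identical signatures give S = 1, while orthogonal (maximally dissimilar) signatures give S = 0, so a low S between two fault signatures flags a pair of faults that the classifier can separate easily.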


3. Experimental results

The RBFN classifier was applied to non-linear and linear sample circuits, considering faults at component and sub-system level. For non-linear circuits, the signatures which allow the construction of the 'fault dictionary' are obtained by injecting into the controllable node (input) a set of stimuli consisting of sine waves with the same amplitude and four different test frequencies. Each stimulus is injected into the input of the circuit under test, and the corresponding output (response) is measured at the output test point. In this paper each response is considered to be made up of four Fourier components; as a result, the net input layer consists of 16 nodes. For linear circuits, N samples (taken with non-uniform sampling) of the frequency transfer function were measured.

In order to prove the proposed technique, the results obtained by testing three different circuits are shown in what follows. For the circuit shown in Fig. 4a we consider Ra, Rb, Rc and C as potentially faulty elements in the CUT. The RBFN has five output nodes, since five signature classes are considered: four fault classes at component level (#1: Ra faulty; #2: Rb faulty; #3: Rc faulty; #4: C faulty) and one class for the fault-free circuit (#5: operating condition). The faulty elements vary outside the tolerance range (10% for each component). The results presented hereafter are obtained by letting each faulty component vary in the intervals [0.1Xn, 0.9Xn] and [1.1Xn, 2Xn], Xn being the nominal value of the n-th component: 120 examples are considered for the construction of the fault dictionary used in the training phase. Circuit signatures are obtained by applying four sine waves with constant amplitude and test frequencies of 300 Hz, 500 Hz, 1 kHz and 2 kHz. Each response is made up of four Fourier components. The corresponding architecture is shown in Fig. 4b. Referring to Eq. (12), the net input variations Δx as functions of the parameter variations are plotted for each circuit component in Fig. 5. It can be seen that variations of Ra give large variations of the network input, while the network input

Fig. 5. Net input variation as a function of the potentially faulty component with value X and nominal value Xn, for the circuit in Fig. 4a.


shows the lowest sensitivity to variations of the element C. The number of neurons representing the fault class Ra is larger than the number of neurons relative to the other classes; hence the training set was formed by including a different number of examples for each class, the largest number for class Ra and the smallest for class C. The classification error rate as a function of the number of hidden nodes H, during training and during normal net operation, is shown for this circuit in Fig. 6. We can observe that too low values of H give a net that cannot be adequately trained. On the other hand, very high H values (near to the number of training examples) give a very low error rate in the training phase but produce a higher error in normal operation, because the net is unable to generalize. In this case the net succeeded in classifying 99% of the test examples with a reasonable degree of complexity (40 hidden-layer neurons).

Fig. 6. Classification error rate for the circuit shown in Fig. 4a as a function of the number of hidden units. The net is trained with 120 noise-free examples. Non-faulty components are characterized by their nominal values.

It must be stressed that the error rate reported in the figure is the number of mis-classifications divided by the total number of test patterns; if a Go/NoGo test (i.e. a faulty circuit classified as operating) is taken into account, the number of classification errors becomes significantly lower. In fact, in this example, mis-classifications are mostly due to fault similarity. Moreover, many errors occur in the neighborhood of the tolerance region, where faulty and operating circuits behave in a very similar manner; in fact, in analog circuits, there is a continuous transition between 'operating' and 'non-operating' circuits. The error rate grows to 10% with an SNR equal to 20 dB. In Fig. 7 the classification error rate versus SNR is plotted; here the hidden-node number H is 40 and the net is trained with noise-free examples. The network performance becomes poor when the SNR is lower than 20 dB. In low-SNR conditions, the net must be trained with a larger example set, and a larger number of nodes is required. The net trained with single faults has, in many cases (90% of the examples), also detected multiple faults, and in 80% of the cases it recognized one of the two faults.

Fig. 7. Classification error rate for the circuit shown in Fig. 4a versus SNR. The net is trained with 120 noise-free examples. Non-faulty components are characterized by their nominal values.


These results were obtained when all components of the circuit under test, except the faulty one, assume their nominal values Xn. By letting the non-faulty component values vary in the tolerance range (10%), the error rate grows to 20%. Better results can be obtained by partitioning the net, i.e. by applying the clustering algorithm separately to the input patterns of each fault class used for the training process (see Fig. 8). The training set used was larger: each fault example was applied three times, letting the non-faulty components vary randomly in the corresponding tolerance ranges. As shown in Fig. 8, the performance of the net remains satisfactory, the error rate being near 5% when the total number of hidden units is 45. It must be pointed out that, with this approach, the training time decreases: net partitioning reduces the training time by a factor M, where M is the number of signature classes. To find out the capability of the net to handle new cases and to generalize, the net was trained with faults corresponding to variations of component parameters in the intervals [0.4Xn, 0.9Xn] and [1.1Xn,

Fig. 8. RBFN performance: effect of the tolerance of the circuit components. Classification error rate for the circuit shown in Fig. 4a as a function of the total number of hidden nodes of a partitioned network. The net is trained with 360 noisy examples. Non-faulty component values vary in their tolerance ranges.

155

1.5Xn], and tested with single faults due to component variations in the ranges [0.1Xn, 0.9Xn] and [1.1Xn, 2Xn]. In Fig. 9 the novelty index ρ(x) (Eq. (10)) gives a measure of the confidence of the classification and a tool for the rejection of false classifications. As the lower plots show, the net correctly classifies the capacitor C as the faulty element of the CUT, the corresponding output neuron (symbol '+' denotes C faulty) being the highest in value. Ambiguity in the classification of the faulty element occurs when the index is very low. Similar results are obtained for the resistor Rb. The results so obtained were compared to those obtained by a three-layer auto-associative neural classifier trained with the back-propagation algorithm, with a structure similar to that reported in Ref. [4]; the RBFN gives a similar result with a training time

Fig. 9. RBFN performance: capacity of the net to handle new cases and to generalize. Upper plots: ○, novelty index ρ(x); continuous line, threshold equal to the smallest ρ(x_i) (x_i ∈ training set). Lower plots: network outputs: ○, neuron associated to class Ra faulty; *, Rb faulty; ×, Rc faulty; +, C faulty; ·, operating condition of the CUT. The plots on the left are obtained by processing with the net 20 faults of the capacitor C, while the plots on the right are related to faults of the resistor Rb.


which is approximately one order of magnitude smaller.

The proposed method was also applied to the analog bandpass filter shown in Fig. 10, with center frequency 24 kHz. The response is obtained by evaluating the CUT transfer function in the frequency domain (20 samples in the 6 kHz to 44 kHz range); the number of nodes in the input layer of the RBFN is determined accordingly. The output layer of the neural network consists of seven nodes, that is: five classes for single faults of the passive elements considered (#1: R1 faulty; #2: R2 faulty; #3: R3 faulty; #4: C1 faulty; #5: C2 faulty), one class, #6, for faults of the gain of the negative-feedback amplifier (k), and one class, #7, denoting the normal operating (fault-free) condition. For this circuit the diagnosis results and the classification error rate on a test set of 500 patterns are reported in Table 1. These results are obtained with an SNR equal to 30 dB and all parameters, except the faulty one, varying in their tolerance ranges. It can be seen that, with a net of 83 hidden nodes, a classification error rate lower than 10% and a Go/NoGo error rate near 3% are obtained.

The developed technique was also applied to the diagnosis of sub-system-level faults in the circuit shown in Fig. 11a. The circuit has the frequency response shown in Fig. 11b, and is made up of four second-order filters and an adder. For this circuit five fault classes are considered (#1: filter 1 faulty; #2:

Table 2
Component values for the circuit in Fig. 11

Sub-system                            Component   Nominal value   Tolerance
Filter 1: Highpass, cut-off 10 Hz     R1          320 kΩ          10%
                                      R2          320 kΩ          10%
                                      C1          50 nF           5%
                                      C2          50 nF           5%
                                      A_v1        1.75            1%
Filter 2: Lowpass, cut-off 100 kHz    R3          32 Ω            10%
                                      R4          32 Ω            10%
                                      C3          50 nF           5%
                                      C4          50 nF           5%
                                      A_v2        1.75            1%
Filter 3: Highpass, cut-off 10 kHz    R5          320 Ω           10%
                                      R6          320 Ω           10%
                                      C5          50 nF           5%
                                      C6          50 nF           5%
                                      A_v3        1.75            1%
Filter 4: Lowpass, cut-off 100 Hz     R7          32 kΩ           10%
                                      R8          32 kΩ           10%
                                      C7          50 nF           5%
                                      C8          50 nF           5%
                                      A_v4        1.75            1%
Adder                                 R9          1 kΩ            1%
                                      R10         1 kΩ            1%
                                      R11         1 kΩ            1%

Fig. 10. Analog bandpass filter: center frequency 24.5 kHz, bandwidth 11 kHz.

filter 2 faulty; #3: filter 3 faulty; #4: filter 4 faulty; #5: adder faulty), plus one for the operating circuit (#6). Faults are defined as deviations of the circuit frequency response from the nominal one. In particular, there is a fault when one of the sub-systems of the circuit does not work properly, i.e. when the amplitude of the frequency response of the filter, evaluated at the nominal cut-off frequency, differs by more than 20% from the nominal value. The fault dictionary is built up by considering single faults of the circuit components as responsible for the general sub-system fault. In Table 2 the component values used for the circuit are listed, together with the tolerance range of each component.
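The 20% criterion above amounts to a simple relative-deviation check on the measured response; a hypothetical helper (the function name and the example magnitudes are ours, not from the paper):

```python
def subsystem_faulty(h_measured, h_nominal, threshold=0.20):
    """Declare a sub-system (filter) faulty when the amplitude of its frequency
    response, evaluated at the nominal cut-off frequency, deviates from the
    nominal value by more than the given relative threshold (20% in the paper)."""
    return abs(h_measured - h_nominal) / abs(h_nominal) > threshold

# Example: nominal cut-off amplitude 0.707; a measured 0.5 deviates by ~29%
is_faulty = subsystem_faulty(0.5, 0.707)
```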

Table 1
Errors for the circuit in Fig. 10

#1      #2      #3      #4      #5      #6      #7      Go/NoGo   Total error (%)
1.3%    1.5%    0.7%    0.8%    2%      1.5%    2.8%    3.1%      8%

Table 3
Errors for the circuit in Fig. 11 (% of 2000 performed tests)

#1      #2      #3      #4      #5      #6
0.5%    0.5%    1%      1.5%    1.5%    3%


Fig. 11. (a) Active four stages filter; (b) amplitude of the frequency response of the filter (a).


The fault dictionary is built by considering that one of the circuit components has a value outside the tolerance range, giving a variation larger than 20% of the filter frequency response at the nominal cut-off frequency, while the other components have values within their tolerance ranges. The circuit signature is built in the same manner as for the circuit in Fig. 8, where eight frequency samples were taken into account. The net, with 114 hidden nodes and trained with 3000 training vectors with an SNR equal to 30 dB, gives the results reported in Table 3. It can be seen that for this circuit the error rate is near 8%.

4. Conclusions

In this paper a fault detection technique based on an RBFN is presented. The application of the proposed technique to both linear and non-linear analog circuits has given classification errors lower than 10%. The net is capable of providing a 'novelty index', which gives a measure of the efficiency of the classification. Moreover, the net is flexible and allows the addition of new fault classes with a low computational burden: since a partitioned net was used, the addition of new fault classes requires retraining only the output layer and performing clustering on the newly added example data set.

References

[1] International Standard CEI IEC 50 (191), International Electrotechnical Vocabulary, Geneva, Switzerland, 1990.
[2] J.W. Bandler, A.E. Salama, Fault diagnosis of analog circuits, Proc. IEEE 73 (8) (1985) 1279–1325.
[3] R.W. Liu, Testing and Diagnosis of Analog Circuits and Systems, Van Nostrand Reinhold, New York, 1991.
[4] M. Catelani, M. Gori, On the application of neural network to fault diagnosis of electronic analog circuits, Measurement 17 (1996) 73–80.
[5] J.L. Huertas, Test and design for testability of analog and mixed-signal integrated circuits: theoretical basis and pragmatical approaches, in: ECCTD '93 Circuit Theory and Design '93: Selected Topics in Circuits and Systems, Davos, 1993, pp. 75–151, Chapter 2.
[6] J.A. Leonard, M.A. Kramer, Radial basis function networks for classifying process faults, IEEE Contr. Syst. 11 (1991) 31–37.
[7] P.D. Wassermann, Advanced Methods in Neural Computing, Van Nostrand Reinhold, New York, 1993.
[8] X.L. Xie, G. Beni, A validity measure for fuzzy clustering, IEEE Trans. Pattern Anal. Machine Intell. 13 (8) (1991) 841–847.
[9] J.A. Leonard, M.A. Kramer, Diagnosing dynamic faults using modular neural nets, IEEE Expert (1993) 44–53.
[10] J.A. Leonard, M.A. Kramer, L.H. Ugar, Using radial basis functions to approximate a function and its error bounds, IEEE Trans. Neural Net. 3 (4) (1991) 624–627.
[11] R. Spina, S. Upadhyaya, Linear circuit diagnosis using neuromorphic analyzers, IEEE Trans. Circ. Syst.-II 43 (3) (1997) 188–196.