SEPARATION OF BROADBAND SOURCES PROCESSING CONCEPT OF THE LABRADOR SOFTWARE

Mechanical Systems and Signal Processing (1997) 11(1), 91–106

J. M. D. and R. A.
Direction Générale pour l'Armement, Centre Technique des Systèmes Navals, Toulon, France

(Received January 1996, accepted July 1996)

When several mechanisms cannot work independently, it is often difficult to ascertain the degree to which each contributes to the radiated noise. It is therefore necessary to identify, by appropriate signal processing, the effect of each source on each input transducer, the input transducers being considered good representatives of these mechanisms. As existing second-order methods are not very satisfactory, we have developed a new approach which is better adapted to our problem, and implemented it in a software tool, Labrador. This method is largely inspired by the singular value decomposition (SVD) method, with two main improvements. The first concerns the SVD base itself, which is modified by rotations in order to find a better base of fictitious transducers, as close as possible to the input transducers. The second allows us to rearrange the vectors of the base (regarded as fictitious transducers associated with the mechanisms) in order to obtain continuous spectra along the frequency axis and a better physical representation of the energy contribution of each source.

© 1997 Academic Press Limited

1. INTRODUCTION

In order to solve a broadband source separation problem, it is first necessary to make a list of the mechanisms which can create an undesirable radiated pressure or vibrations. The second step is to equip each of these mechanisms with a transducer (input), and to place other transducers (outputs) wherever we want to know their contribution. Each of the inputs is a linear combination of all the mechanisms (assuming that a transducer near a particular mechanism is more influenced by this one than by the others). The main difficulty is due to interactions between inputs. Although the mechanisms are linearly independent, or incoherent with one another, the inputs are partially correlated because of the transfers between them. To solve this problem, it is necessary to build an orthogonal base in the inputs' space.

Different second-order methods have been described in the literature. One of these, multiple input single output (MISO), is equivalent to a Gram–Schmidt orthogonalisation of the inputs' cross-spectral matrix. Other methods are based on the singular value decomposition (SVD) of this matrix. None of these methods is fully satisfactory. In this paper, therefore, after some notations and definitions, we state our problem, describe each of the existing methods, and develop the concepts of our own method (largely inspired by the SVD method). The basic assumption is that the engineer's skill allows him to place the input transducers at the best locations, i.e. where the transducers are mainly influenced by the mechanism to which they are connected. The greatest advantage of our method is that it can automatically and simultaneously take into account a large number of inputs. Finally, we show an illustrative example.


Figure 1. Decomposition of the signal into independent observations.

2. NOTATIONS AND DEFINITIONS

All of the treatments required must operate on signals recorded with a synchronising control. Each of these signals is decomposed into independent observations of the same length (Fig. 1). The observations are transposed in the frequency domain using Fourier transform and Hanning windowing (Fig. 2). We define X(f) as the vector formed, at the frequency f, by the complex values of the Fourier transform for each observation. This vector belongs to the observation space of dimension k. Then we can define, omitting for all that follows the f dependency:

- the auto spectrum of transducer i:

$$S_{ii} = \frac{X_i^h X_i}{k} \qquad (1)$$

where the exponent h denotes the hermitian transposition;

- the cross spectrum between transducers i and j:

$$S_{ij} = \frac{X_i^h X_j}{k} \qquad (2)$$

- the transfer function between i and j:

$$H_{ij} = \frac{S_{ij}}{S_{ii}} = \frac{X_i^h X_j}{X_i^h X_i} \qquad (3)$$
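As a concrete illustration of these definitions, the sketch below (in Python, not part of the original Labrador implementation; all function names are illustrative assumptions) cuts two recorded signals into observations, applies a Hanning window and a Fourier transform, and averages the products over the k observations to estimate the auto spectra, the cross spectrum, the transfer function of equation (3) and the coherence defined below in equation (4).

```python
import numpy as np

def observation_matrix(x, n_fft=1024):
    """Cut a signal into independent observations of length n_fft, apply a
    Hanning window and return the FFT of each observation
    (rows = observations, columns = frequency lines)."""
    k = len(x) // n_fft
    segments = x[: k * n_fft].reshape(k, n_fft)
    return np.fft.rfft(segments * np.hanning(n_fft), axis=1)

def cross_spectrum(Xi, Xj):
    """S_ij(f) = X_i^h X_j / k, averaged over the k observations (equations (1)-(2))."""
    return np.sum(np.conj(Xi) * Xj, axis=0) / Xi.shape[0]

# Two partially correlated test signals (purely synthetic)
rng = np.random.default_rng(0)
s = rng.standard_normal(100 * 1024)
x1 = s + 0.3 * rng.standard_normal(s.size)
x2 = 0.5 * s + rng.standard_normal(s.size)

X1, X2 = observation_matrix(x1), observation_matrix(x2)
S11, S22 = cross_spectrum(X1, X1).real, cross_spectrum(X2, X2).real
S12 = cross_spectrum(X1, X2)
H12 = S12 / S11                           # transfer function, equation (3)
gamma2 = np.abs(S12) ** 2 / (S11 * S22)   # coherence, equation (4) below
```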

Figure 2. Transposition of the signal in the frequency domain.

Figure 3. Two inputs with additive noise.

- the coherence function between i and j:

$$\gamma_{ij}^2 = \frac{|S_{ij}|^2}{S_{ii}\,S_{jj}} \qquad (4)$$

The functions described above are statistical data obtained from a set of independent observations.

3. STATEMENT OF THE PROBLEM

Let us reduce the problem to a model consisting of two inputs and one output (Fig. 3). The linear combination coefficients are L_{1y} and L_{2y}, and B is an additive noise uncorrelated with the inputs. Each of the inputs is a linear combination of both sources s_1 and s_2. Our aim is to determine the contributions of sources s_1 and s_2 to the output Y. The quantities required are |U_{1y}|^2 (s_1^h s_1) and |U_{2y}|^2 (s_2^h s_2) (Fig. 4), but we only know s_1 and s_2 through X_1 and X_2. It is important to note here that we do not want to quantify the sources, but only their contributions to the output.

Figure 4. Decomposition of Y.

. .   . 

94

It is possible to solve the classic linear problem, expressed in matrix formulation:

$$Y = [X_1 \;\; X_2] \begin{bmatrix} L_{1y} \\ L_{2y} \end{bmatrix} + B, \qquad Y = [X]L + B \qquad (5)$$

By multiplying left and right members of (5) by [X]^h, we obtain:

$$[X]^h Y = [X]^h [X] L + B \qquad (6)$$

$$\begin{bmatrix} S_{1y} \\ S_{2y} \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} L + B \qquad (7)$$

To extract the solutions L_{1y} and L_{2y}, it is necessary to invert the cross-spectral matrix [X]^h[X]. This is only possible if the cross-spectral matrix is not singular. In the simple case of two inputs, this means that the inputs are not fully correlated (in the general case of n inputs with n > 2, the inversion is possible only if none of the input transducers is a linear combination of the others). When transducers 1 and 2 are not fully correlated, the solution of (7) is given by:

$$L = \frac{1}{S_{11}S_{22} - S_{12}S_{21}} \begin{bmatrix} S_{22} & -S_{12} \\ -S_{21} & S_{11} \end{bmatrix} \begin{bmatrix} S_{1y} \\ S_{2y} \end{bmatrix}$$

$$\begin{bmatrix} L_{1y} \\ L_{2y} \end{bmatrix} = \frac{1}{S_{11}S_{22} - S_{12}S_{21}} \begin{bmatrix} S_{22}S_{1y} - S_{12}S_{2y} \\ -S_{21}S_{1y} + S_{11}S_{2y} \end{bmatrix} \qquad (8)$$
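As a numerical sketch of equation (8), the function below inverts the 2 x 2 cross-spectral matrix at every frequency line to recover the linear combination coefficients; it assumes the observation-by-frequency FFT matrices of the earlier sketch and is only an illustration of the textbook calculation, not of the Labrador code.

```python
import numpy as np

def linear_coefficients(X1, X2, Y):
    """Solve [S1y; S2y] = [[S11, S12], [S21, S22]] L at each frequency (equation (8)).
    X1, X2, Y are (k observations x n frequency lines) FFT matrices."""
    k, n_lines = X1.shape
    L = np.empty((2, n_lines), dtype=complex)
    for f in range(n_lines):
        Sxx = np.array([[np.vdot(X1[:, f], X1[:, f]), np.vdot(X1[:, f], X2[:, f])],
                        [np.vdot(X2[:, f], X1[:, f]), np.vdot(X2[:, f], X2[:, f])]]) / k
        Sxy = np.array([np.vdot(X1[:, f], Y[:, f]), np.vdot(X2[:, f], Y[:, f])]) / k
        # Fails (singular matrix) when the two inputs are fully correlated
        L[:, f] = np.linalg.solve(Sxx, Sxy)
    return L
```

When S_{12} = 0 the solution reduces to the transfer functions of equation (10) below.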

Now assuming that the inputs are independent:

$$S_{12} = S_{21} = 0 \qquad (9)$$

and using (9) in (8) we obtain:

$$\begin{bmatrix} L_{1y} \\ L_{2y} \end{bmatrix} = \begin{bmatrix} S_{1y}/S_{11} \\ S_{2y}/S_{22} \end{bmatrix} = \begin{bmatrix} H_{1y} \\ H_{2y} \end{bmatrix} \qquad (10)$$

Thus, in the case of uncorrelated inputs the linear combination coefficients are equal to the transfer functions. The power spectral density can be written in this case as:

$$S_{yy} = |H_{1y}|^2 S_{11} + |H_{2y}|^2 S_{22} + S_{bb} \qquad (11)$$

where S_{bb} is the power spectral density of the noise, and the contribution of each input to the output is clearly identified as |H_{1y}|^2 S_{11} for input 1 and |H_{2y}|^2 S_{22} for input 2.

Remark: all combinations of the following form:

$$[X_1 \;\; X_2] = \left[\frac{s_1}{\sqrt{s_1^h s_1}} \;\; \frac{s_2}{\sqrt{s_2^h s_2}}\right] \begin{bmatrix} \cos\theta & -\sin\theta\, e^{i\varphi} \\ \sin\theta\, e^{-i\varphi} & \cos\theta \end{bmatrix}$$

correspond to the particular case of independent inputs. However, even if the output PSD has the form of (11), with the share of each input clearly identified, the share of each physical source remains unknown because of the angular indetermination.

Figure 5. Generation of X_{2.1}.

None of the methods described in this paper allows us to remove the doubt about this particular situation. This is why the engineer's expertise is very important in the choice of transducer locations. Unfortunately, in most cases the inputs are partially correlated, so S_{12} \neq 0 and S_{21} \neq 0, and the solution for the linear combination coefficients takes the general form of (8). The coefficients are not transfer functions, and in that case the power spectral density must be written as:

$$S_{yy} = |L_{1y}|^2 S_{11} + |L_{2y}|^2 S_{22} + S_{bb} + L_{1y}^* L_{2y} S_{12} + L_{1y} L_{2y}^* S_{21} \qquad (12)$$

where the contribution of each input to the output is not clearly identified due to their mutual interactions. Figure 4 shows the difference between the linear combination coefficient, the transfer function and the unitary contribution of a source. When the inputs are uncorrelated the three projections are identical. Thus, when the inputs are partially coherent, we need to build virtual sources from inputs. The general idea in the MISO and SVD methods is to generate an orthogonal (uncorrelated) set of fictitious input transducers from the original set. The set of fictitious input transducers is not unique. There is no exact solution to this problem. Sources built from inputs are not likely to be identical with the real ones. However, we hypothesise here that the engineer’s ability allows him to place all of the input transducers at the best locations, i.e. at a location where transducers are primarily influenced by the source they represent, and not overly perturbed by other mechanisms. We confined our investigations to second-order methods.

4. THE MISO METHOD

The MISO method generates an orthogonal base by successive iterations (Gram–Schmidt orthogonalisation); for detailed information see [1]. Let us consider the problem depicted in Fig. 3. We have seen that it is difficult to interpret the contribution of the inputs when they are partially correlated. So the problem of (5) is transposed into a new one described by the following equation:

$$Y = X_1 G_{1y}^{(1)} + X_{2.1} G_{2y}^{(1)} + B \qquad (13)$$

where X_1 is unchanged and X_{2.1} is a fictitious input which is the original input X_2 minus the part fully correlated to X_1:

$$X_{2.1} = X_2 - H_{12} X_1 = X_2 - \frac{S_{12}}{S_{11}} X_1 \qquad (14)$$

Figure 5 illustrates the generation of X_{2.1}.

Figure 6. Generation of X_{1.2}.

It is easy now to calculate the linear combination coefficients G_{1y}^{(1)} and G_{2y}^{(1)}, and finally the contribution of each of the inputs X_1 and X_{2.1} to the power spectral density S_{yy}:

$$S_{yy} = |G_{1y}^{(1)}|^2 S_{11} + |G_{2y}^{(1)}|^2 S_{22.1} + S_{bb} \qquad (15)$$

However, this method suggests that transducer 1 is not contaminated by mechanism 2 and that transducer 2 is contaminated by mechanism 1. That is a possibility, but there is another, which is illustrated by Fig. 6 and (16):

$$Y = X_{1.2} G_{1y}^{(2)} + X_2 G_{2y}^{(2)} + B \qquad (16)$$

This new possibility gives us a second solution which, although different, is as acceptable as the first. In fact, the greater the number of inputs, the greater the number of solutions: exactly n!, where n is the number of inputs. This number of solutions is a theoretical one, and engineering skill allows us to reduce it. However, the solution depends on the operator's ability, and the greater the number of inputs, the worse the solution, because of the cumulative errors at each step of the iterative orthogonalisation. So one critical problem of this approach is to establish a priority among the inputs. Park and Kim [2] proposed a method to determine priority among multiple inputs correlated to each other, using causality between correlated inputs. With regard to the two-source example of Fig. 3, this method assumes that one of the two cross paths (U_{12} or U_{21}) causally precedes the other, in order to find a relevant causal relationship between the inputs. This is the foundation of the MISO method, which means that just one transducer is contaminated (in the two-source example), and causality is a good way to find this contaminated transducer. Nevertheless, the relationship between U_{12} and U_{21} must be the same over the whole bandwidth in order to use causality. Unfortunately, this relationship may change with frequency; this is why we propose an independent treatment for each frequency line.
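The conditioning of equation (14), which is the elementary step of the MISO (Gram–Schmidt) orthogonalisation, can be sketched as follows for each frequency line; the helper name is a hypothetical one introduced here for illustration.

```python
import numpy as np

def condition_out(X2, X1):
    """X_{2.1}(f) = X_2(f) - (S_12(f) / S_11(f)) X_1(f)   (equation (14)),
    computed independently for every frequency line.
    X1, X2 are (k observations x n frequency lines) FFT matrices."""
    k = X1.shape[0]
    S11 = np.sum(np.conj(X1) * X1, axis=0) / k       # auto spectrum of X1
    S12 = np.sum(np.conj(X1) * X2, axis=0) / k       # cross spectrum between X1 and X2
    return X2 - (S12 / S11) * X1                     # broadcast over the observations

# Reversing the priority (conditioning X1 on X2 instead) gives the alternative,
# equally acceptable solution of equation (16).
```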

Figure 7. Singular vectors of two transducers with coherence 0.1 and 10 log(‖X_1‖^2) = 10 log(‖X_2‖^2) + 12 dB.

5. THE SVD METHOD

The SVD method consists of finding the principal components of the cross-spectral matrix [X]^h[X] of (7). Useful information can be found in [3]. For example, let us reconsider a set of two independent mechanisms. Each of them is equipped with a transducer but each of the transducers is a linear combination of the two mechanisms, so the cross-spectral matrix is not diagonal:

$$[X]^h[X] = \begin{bmatrix} X_1^h \\ X_2^h \end{bmatrix} [X_1 \;\; X_2] = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} \qquad (17)$$

Now, let us make a singular value decomposition of the [X]^h[X] matrix:

$$[X]^h[X] = [U][S]^2[U]^h = [U] \begin{bmatrix} s_1 & 0 \\ 0 & s_2 \end{bmatrix}^2 [U]^h \qquad (18)$$

We can interpret the [S]^2 matrix as the power spectral density of a set of two independent sources, and the matrix [U] as the linear combination coefficients which build the inputs from these sources. This kind of decomposition is interesting when we want to distinguish two different spaces: one for the signal, the other for the noise. It allows us to create a minimum-size space of representation [4]. A current application in underwater acoustics is the determination of the number of sources, and the localisation of these sources with the introduction of a priori assumptions (plane waves for example) [5]. Another use is the propagation from a near to a far field where there are several sources [6]. For our application, we first try to build a set of fictitious transducers as close (correlated) as possible to the initial transducers (inputs) but uncorrelated with one another. Then, we calculate the contribution of all of these fictitious transducers to the output. As the common space for the inputs and output is the observation space, it is necessary to express the fictitious transducers (supposed proportional to the real sources) on the observation base. This is easy to do by applying SVD to the spectral matrix of inputs [X], whose dimensions are the number of inputs times the number of observations:

$$[X]^h = [U][S][V]^h \qquad (19)$$

where the matrices [U] and [S] are the same as in (18). The matrix [V] has the same dimension as the matrix [X] and is constituted by orthonormal vectors:

$$[V] = [V_1 \;\; V_2] \qquad (20)$$

Each of these vectors has a length equal to the number of independent observations, represents a fictitious transducer of unit modulus and is uncorrelated to the others. In our two-source example we have:

$$V_1^h V_2 = 0, \qquad V_2^h V_1 = 0, \qquad V_1^h V_1 = V_2^h V_2 = 1 \qquad (21)$$

Let us now consider an output transducer represented by the vector Y at frequency f. We can calculate the contribution of the fictitious transducers s_1 V_1 and s_2 V_2 to the power spectral density of the output as the sum of two parts, one due to the inputs, the other not, by means of the virtual coherences C_i:

$$S_{yy} = \sum_i C_i\, S_{yy} + S_{bb}, \qquad \text{with} \quad C_i = \frac{|V_i^h Y|^2}{Y^h Y} \qquad (22)$$
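A minimal sketch of this step, at a single frequency line, assuming the observation-domain vectors of the inputs are stacked as columns of a matrix; the mapping between numpy's SVD factors and the [U], [S], [V] of equation (19) is spelled out in the comments, and the function name is an assumption of this sketch.

```python
import numpy as np

def virtual_coherences(X, Y):
    """X: (k observations x n inputs) complex matrix at one frequency line,
    Y: (k,) output vector at the same frequency line.
    Returns the 'singular' values and the virtual coherences C_i of equation (22)."""
    # numpy factors X = V S U^h, i.e. [X]^h = [U][S][V]^h as in equation (19):
    # the columns of V are the unit-modulus fictitious transducers.
    V, s, Uh = np.linalg.svd(X, full_matrices=False)
    C = np.abs(V.conj().T @ Y) ** 2 / np.vdot(Y, Y).real   # |V_i^h Y|^2 / (Y^h Y)
    return s, C
```

The contribution of fictitious transducer i to the output PSD is then C_i S_yy, and whatever is left over (S_bb) is attributed to noise or to sources not seen by any input.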


Figure 8. Singular vectors of two transducers with identical modulus and coherence 0.1 (10 log ‖X_1‖^2 = 10 log ‖X_2‖^2).

However, there is no reason why these sources issued from the SVD should be equivalent to the physical mechanisms. How can we assert that the fictitious transducers represent the sources? There is an angular ambiguity in the actual position of the base. Our objective is to create a base as close as possible to the set of inputs. So let us analyse several situations where two inputs are partially correlated. First we consider that one of these two inputs has a modulus much greater than the other (+12 dB) and that the coherence value is 0.1 (this means that the angle between the two transducers is approximately 70°); then we apply an SVD to these inputs. Figure 7 shows the result of the SVD in the X_1, X_2 plane. It can be seen that when one transducer has a modulus greater than the other, the SVD solution is close to the MISO solution that takes the greatest-modulus transducer in the first step. Let us now consider the situation where the two transducers have an identical modulus. The SVD result is shown in Fig. 8. When the two moduli are identical, the direction of the first SVD vector is exactly the bisector of X_1 and X_2. This particular solution has little chance of being the right one, and does not satisfy our objective, which is to build a base as close as possible to the inputs. The solution given by the SVD method is dependent on the relative moduli of the inputs. Only when all of the inputs are similar transducers (all accelerometers or all pressure gauges) does the relative modulus make sense. Therefore, with the SVD method, it is important not to mix inputs of different physical quantities. However, even in the case of similar input moduli, the orthonormal base created with SVD is not the nearest to the inputs.
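The bisector effect is easy to reproduce numerically: with two equal-modulus, weakly coherent synthetic inputs, the first singular vector correlates equally with both of them instead of pointing towards either one. The small sketch below uses purely synthetic data and makes no claim about the original test cases.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 512                                                   # independent observations
common = rng.standard_normal(k) + 1j * rng.standard_normal(k)
n1 = rng.standard_normal(k) + 1j * rng.standard_normal(k)
n2 = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# Two inputs with identical expected modulus and coherence close to 0.1
X1 = 0.68 * common + n1
X2 = 0.68 * common + n2

V, s, Uh = np.linalg.svd(np.column_stack([X1, X2]), full_matrices=False)

# Cosine of the angle between the first singular vector and each input:
# the two values are nearly equal, i.e. the vector sits on the bisector.
c1 = np.abs(np.vdot(V[:, 0], X1)) / np.linalg.norm(X1)
c2 = np.abs(np.vdot(V[:, 0], X2)) / np.linalg.norm(X2)
print(s, c1, c2)
```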

Figure 9. Singular vectors of two transducers with identical modulus and coherence cos^2 α.

Figure 10. Z_p modulus much greater than Z_q modulus.

The SVD method has the advantage of providing a single solution (independent of the user), with optimised and fast calculations. Our goal is to preserve the performance of the SVD method and its independence of the user, so the method we describe below starts with SVD, but we then add some criteria to reject the critical situations described above and, in these cases, to replace the SVD solution by a new one closer to the inputs.

6. THE LABRADOR METHOD

Labrador (Large bande recherche analyse et détermination des origines) comes from a French Navy study on broadband source separation. It is based upon the SVD method described above. It has two particularities. The first one concerns the situation described in Fig. 8, which Labrador detects and corrects. The second one is the separation itself. The SVD algorithm puts the singular values in decreasing order. This has two consequences: (a) there is no direct link between the input transducers and the fictitious transducers of the SVD (the first fictitious transducer is not necessarily linked to the first input transducer, and so on); and (b) there is no physical continuity from one frequency line to the next (the order of the fictitious transducers has no reason to be the same from one frequency to another). As an example, physical phenomena have power spectral densities which may cross one another, and these crossings are lost by the SVD ordering. Labrador correctly rearranges the fictitious transducers for each frequency line and restores spectrum continuity along frequency. Finally, Labrador calculates the contribution of all fictitious transducers to the power spectral density of one or more outputs using (22), but on a modified base, and plots it on an original graph to give an immediate visual interpretation.

6.1. CRITERION FOR REJECTING THE SVD BASE

We must define a criterion which allows us to keep the SVD results or reject them and search for better ones. This criterion is directly linked to the proximity of the singular values. Let us consider two inputs with the same modulus A at frequency f and with a coherence of cos^2 α between them, as shown in Fig. 9. The first singular vector is supported by the bisector of X_1 and X_2, so the first and second singular values are described by:

$$s_1^2 = 2A^2\cos^2\frac{\alpha}{2}, \qquad s_2^2 = 2A^2\sin^2\frac{\alpha}{2} \qquad (23)$$

So we can write the gap between s_1^2 and s_2^2 as:

$$\Delta_{dB} = 10\log_{10}\left(\frac{s_1^2}{s_2^2}\right) = 10\log_{10}\left(\frac{\cos^2(\alpha/2)}{\sin^2(\alpha/2)}\right) \qquad (24)$$
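The warning criterion can be sketched as a pair of small helpers: one evaluates the gap of equation (24) from two 'singular' values and compares it with the threshold, the other gives the gap predicted for two equal-modulus inputs separated by an angle α. The function names are assumptions of this sketch; the 30° / 12 dB default is the one quoted in the text.

```python
import numpy as np

def svd_gap_warning(s1, s2, delta_db_limit=12.0):
    """True when the gap between the first two singular values (equation (24))
    is small enough that the SVD base may have to be rotated."""
    return 10.0 * np.log10(s1 ** 2 / s2 ** 2) < delta_db_limit

def gap_from_angle(alpha_deg):
    """Gap in dB predicted by equation (24) for two equal-modulus inputs
    separated by an angle alpha (coherence cos^2 alpha)."""
    a = np.radians(alpha_deg)
    return 10.0 * np.log10(np.cos(a / 2) ** 2 / np.sin(a / 2) ** 2)

print(gap_from_angle(30.0))   # close to the 12 dB default threshold
```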

. .   . 

100

Then we must define an angular limit α_lim (or a coherence value cos^2 α_lim) below which we can assume the existence of a unique source, and above which it is necessary to consider a second one. In the first case (0 < α < α_lim), the SVD results are conserved. This means that the two input transducers do not permit us to distinguish two sources. So the first singular vector is assimilated to a signal and the second one to a noise. In the second case (α > α_lim), we have to calculate another base close to the inputs by achieving rotations on the singular vectors. From experience, we have fixed the angular limit α_lim at 30° (coherence value limit 0.75). Putting this value in (24), we obtain Δ_dB = 12 dB. The relationship between α and Δ_dB in equation (24) is only valid in the case of inputs with strictly equal modulus. This relation permits us to determine a limit below which we must investigate to verify whether it is a critical SVD situation or not. This is just a warning criterion. Thus, as soon as the difference between singular values is lower than 12 dB, and after verifying that α is greater than 30°, we achieve a rotation on the singular vectors to obtain a base close to the inputs. Here we exploit the hypothesis that the input transducers are good representatives of the sources, and so the vectors of the base must be close to them. Δ_dB is a parameter of the Labrador software, of which the default value is 12 dB.

6.2. ROTATION OF THE SINGULAR VECTORS

When the gap between singular values is lower than Δ_dB, the rotation operation is not always as simple as before, because in general we have not only two input transducers but many more. So the criterion of Δ_dB with α_lim = 30° is still valid, but must be applied in steps. There is one step less than the number of inputs. The nth step consists of: (i) applying a window beginning at s_n and ending Δ_dB lower; (ii) achieving the rotation described below in all the planes included in the subspace generated by the vectors associated with the values inside the window of (i). The vectors obtained after the rotations and the associated values are not likely to be singular, but for simplicity we still call them 'singular'. From experience, we assume that there is no significant difference in the resulting base according to the order of the planes within the subspace.

In order to operate a rotation in a plane, we must first select two input transducers to be associated with the two vectors of the plane. The choice is made with the [U] matrix of (19). We make a new matrix which is formed by the squared modulus of each element of [U]. The element of row i, column j of this new matrix represents the contribution, in per cent, of the squared modulus of transducer i projected on vector j, to the 'singular' value j. So for each of the two near 'singular' values s_m and s_n (with s_m > s_n) we select the transducer which has the greatest contribution and project it onto the plane formed by the two 'singular' vectors V_m and V_n associated with the two near 'singular' values. Let us call Z_p and Z_q these two projections:

$$[Z_p \;\; Z_q] = [V_m \;\; V_n] \begin{bmatrix} s_m & 0 \\ 0 & s_n \end{bmatrix} \begin{bmatrix} U_{pm}^* & U_{qm}^* \\ U_{pn}^* & U_{qn}^* \end{bmatrix} \qquad (25)$$

In order to respect continuity with the results obtained from SVD, when the two projections do not have exactly the same modulus, we must favour the biggest one. To illustrate our goal and define a general law for the rotation, let us now examine three simple cases. (1) The Z_p modulus is much greater than the Z_q modulus: Z_p must attract the first vector V_m (Fig. 10).

Figure 11. Z_q modulus much greater than Z_p modulus.

(2) The Z_q modulus is much greater than the Z_p modulus: Z_q must attract the first vector V_m (Fig. 11). (3) The Z_p modulus is exactly equal to the Z_q modulus: the angle between Z_p and V_m must be the same as the one between Z_q and V_n for α > 30°; otherwise the angle between Z_p and V_m must be the same as the one between Z_q and V_m (Fig. 12). Thus we can define a general law for the rotation:

$$\theta = -\gamma_{pm} - \frac{\|Z_q\|^2}{\|Z_p\|^2 + \|Z_q\|^2}\left(\frac{\pi}{2} - \alpha\right) \qquad \text{for } \alpha > 30° \qquad (26a)$$

$$\theta = -\gamma_{pm} + \frac{\|Z_q\|^2}{\|Z_p\|^2 + \|Z_q\|^2}\,\alpha \qquad \text{for } \alpha < 30° \qquad (26b)$$

where γ_{pm} is the angle between Z_p and V_m, the first of the two vectors issued from SVD. The new vectors V'_m and V'_n are calculated by the following equation:

$$[V'_m \;\; V'_n] = [V_m \;\; V_n] \begin{pmatrix} \cos\theta & -\sin\theta\, e^{-i\varphi} \\ \sin\theta\, e^{+i\varphi} & \cos\theta \end{pmatrix} \qquad (27)$$

where φ is a phase term between Z_p and Z_q. The new modulus must be calculated for the new fictitious transducers m and n according to the following equation:

$$s'_m = \sqrt{\sum_i |V'^h_m X_i|^2} \qquad (28)$$

and the new linear combination coefficients for all the inputs:

$$U'_{im} = \frac{X_i^h V'_m}{s'_m} \qquad \text{and} \qquad U'_{in} = \frac{X_i^h V'_n}{s'_n} \qquad (29)$$

Figure 12. Z_p modulus equal to Z_q modulus.
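The rotation of equations (25) to (29) for one pair of close 'singular' values could be sketched as below, assuming the inputs are stacked as an observation-by-input matrix and that the angle and sign conventions follow the reconstruction of equation (26) given here; it should be read as an interpretation of the published formulas, not as the actual Labrador code.

```python
import numpy as np

def rotate_pair(X, V, s, U, m, n, p, q, alpha_lim=np.radians(30.0)):
    """Rotate singular vectors V_m, V_n in their plane so that the new base is
    closer to input transducers p and q (equations (25)-(29)).
    X: (k x n_inputs), V: (k x r), s: (r,), U: (n_inputs x r)."""
    # Projections of the two selected inputs onto the (V_m, V_n) plane, equation (25)
    Z = V[:, [m, n]] @ np.diag(s[[m, n]]) @ np.conj(U[[p, q]][:, [m, n]]).T
    Zp, Zq = Z[:, 0], Z[:, 1]

    wq = np.linalg.norm(Zq) ** 2 / (np.linalg.norm(Zp) ** 2 + np.linalg.norm(Zq) ** 2)
    gamma_pm = np.arccos(min(1.0, abs(np.vdot(V[:, m], Zp)) / np.linalg.norm(Zp)))
    alpha = np.arccos(min(1.0, abs(np.vdot(Zp, Zq)) /
                          (np.linalg.norm(Zp) * np.linalg.norm(Zq))))
    phi = np.angle(np.vdot(Zp, Zq))          # phase term between Z_p and Z_q

    # General rotation law, equation (26a)/(26b)
    if alpha > alpha_lim:
        theta = -gamma_pm - wq * (np.pi / 2 - alpha)
    else:
        theta = -gamma_pm + wq * alpha

    # Rotation of the pair of singular vectors, equation (27)
    R = np.array([[np.cos(theta), -np.sin(theta) * np.exp(-1j * phi)],
                  [np.sin(theta) * np.exp(+1j * phi), np.cos(theta)]])
    Vmn_new = V[:, [m, n]] @ R

    # New moduli and linear combination coefficients, equations (28)-(29)
    s_new = np.sqrt(np.sum(np.abs(np.conj(Vmn_new).T @ X) ** 2, axis=1))
    U_new = (np.conj(X).T @ Vmn_new) / s_new
    return Vmn_new, s_new, U_new
```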


Figure 13. Power spectral density vs frequency for (a) simulated sources; (b) simulated transducers (mixed sources); (c) fictitious transducers without rotation; (d) SVD of simulated transducers; (e) fictitious transducers of Labrador.

6.3. ATTRIBUTION OF THE FICTITIOUS TRANSDUCERS TO THE INPUTS

Finally, after rotating, we expect the orthogonal base to be close to the input transducers for each discrete frequency. Thus, it becomes easy to attribute each fictitious transducer to only one physical transducer by means of the linear combination coefficients U_{ij}. First, we make a new matrix with the squared modulus of each element of [U]. Next we look for the greatest element of this new matrix. Its row and column position give us the first attribution, and so on, until all of the fictitious transducers have been attributed. This operation is of the greatest importance, because it permits us to give a physical sense to the mathematical results, with a frequency continuity of the spectra which allows the power spectral densities of the fictitious transducers to cross one another, and finally to represent the power spectral densities of independent sources. The physical sense of this attribution is confirmed by the continuous aspect of the resulting source spectra, as can be seen in the example below.
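The attribution described above amounts to a greedy assignment on the squared moduli of the linear combination coefficients, repeated independently at each frequency line; the helper below is a sketch of that idea (the function name and the tie handling are assumptions).

```python
import numpy as np

def attribute_fictitious_to_inputs(U):
    """Pair each fictitious transducer (column of U) with one physical input
    (row of U) by repeatedly taking the largest squared modulus |U_ij|^2."""
    P = np.abs(U) ** 2              # share of input i in fictitious transducer j
    attribution = {}
    while len(attribution) < U.shape[1]:
        i, j = np.unravel_index(np.argmax(P), P.shape)
        attribution[j] = i          # fictitious transducer j is attributed to input i
        P[i, :] = -1.0              # this input and this transducer are no longer available
        P[:, j] = -1.0
    return attribution
```

Because the assignment is redone at every frequency line, the reordered fictitious spectra are free to cross one another, which the decreasing-order convention of plain SVD forbids.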

This part presents no particular difficulty and is completely dependent on the quality of the base. If the frequency continuity of the spectra is not reached, there are at least two possible explanations. (i) There are at least two input transducers which are sensitive to a single source (redundant information). The spectrum of the source is then attributed to one of these inputs or to the other, and appears chopped along the frequency axis. It is still possible to suppress one of these inputs for better results. (ii) There exist some frequencies for which the mixing of at least two sources is equivalent on at least two inputs. This case is exceptional and leads to a great discontinuity in the source spectra.

Finally, the fictitious transducers appear as physical transducers cleansed of all undesirable effects due to their mutual interactions. We can now reach our goal, which is to determine the share of each source in the power spectral density of the output by means of equation (22). The difference S_bb between the total measured PSD S_yy and the power spectral density of the output due to the identified sources is explained either by noise or by non-instrumented sources (not seen by any of the inputs).

6.4. EXAMPLE

To illustrate this method, we use simulated signals similar to those of [7], which describes a technique to determine the number of incoherent sources and the effect of all the signal processing parameters on the results. Simulated sources s_1 to s_4 were made from four white noises passed through four single degree-of-freedom (dof) systems with resonance frequencies of 100, 200, 300 and 400 Hz and damping ratios all equal to 0.05. In the following simulations a sample rate of 1000 samples/s was used. Figure 13(a) shows the power spectral density of the four simulated sources, estimated with segments of 1024 points and an average calculated over 100 segments. Then four simulated physical input transducers were built from the mixed sources as:

$$x_1(t) = s_1(t) + \tfrac{1}{3}\,s_2(t), \qquad x_2(t) = s_2(t) + \tfrac{1}{3}\,s_3(t),$$
$$x_3(t) = s_3(t) + \tfrac{1}{3}\,s_4(t), \qquad x_4(t) = s_4(t) + \tfrac{1}{3}\,s_1(t) \qquad (30)$$
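Under the stated simulation parameters (four white noises through single-dof resonators at 100, 200, 300 and 400 Hz, damping ratio 0.05, 1000 samples/s), the construction of the mixed inputs of equation (30) could look like the sketch below; the digital resonator used to discretise the single-dof systems is an assumption of this sketch, not a detail given in the paper.

```python
import numpy as np
from scipy.signal import lfilter

fs = 1000.0                      # sample rate (samples/s)
n_seg, n_avg = 1024, 100         # segment length and number of averages
rng = np.random.default_rng(2)

def single_dof(f0, zeta, n):
    """White noise through a lightly damped second-order (single-dof) digital resonator."""
    w0 = 2 * np.pi * f0 / fs                               # resonance in rad/sample
    r = np.exp(-zeta * w0)                                 # pole radius from the damping ratio
    a = [1.0, -2 * r * np.cos(w0 * np.sqrt(1 - zeta ** 2)), r ** 2]
    return lfilter([1.0], a, rng.standard_normal(n))

n = n_seg * n_avg
s1, s2, s3, s4 = (single_dof(f0, 0.05, n) for f0 in (100, 200, 300, 400))

# Mixed inputs of equation (30)
x1 = s1 + s2 / 3
x2 = s2 + s3 / 3
x3 = s3 + s4 / 3
x4 = s4 + s1 / 3
```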

Figure 13(b) shows the power spectral density of the four simulated transducers, estimated with segments of 1024 points and an average calculated over 100 segments. The four simulated physical inputs can be interpreted as transducers near one source, with a transfer function equal to the scalar value one towards that source and a transfer function equal to the scalar value one third towards another source. The results of the singular value decomposition of the matrix composed, for all frequencies, of the four simulated physical inputs (the spectra being estimated with the same parameters as before) are shown in Fig. 13(d). The shape of the singular values comes from the characteristics of the four single-dof systems. The SVD operation puts the singular values in decreasing order for each discrete frequency, so that all of the resonant effects are visible on the first singular value. No crossing appears on the singular value curves and there is no direct link between singular values and physical transducers. When the PSDs of the inputs are similar, we can see that the singular values are nevertheless different, because the singular vectors lie on the bisector of the inputs, as we have seen above. Figure 13(c) shows the link realised by Labrador without any rotation of the singular vectors. Because of their location on the bisector, it is difficult to choose the best representative input for each vector, and gaps exist at every crossing. Finally, the sources identified by the Labrador software with corrective rotations on the singular vectors are shown in Fig. 13(e). The power spectral densities of the fictitious transducers cross one another and can be interpreted


as physical transducers cleansed of all undesirable effects due to the interactions between the physical sources. The comparison between the simulated sources in Fig. 13(a) and the identified sources in Fig. 13(e) shows good concordance. Each of the fictitious transducers has only one resonant effect, in concordance with the simulated source for which it has a transfer function of unity. The crossing gaps have vanished with the rotations on the singular vectors. Finally, we want to know the contribution of all of the identified sources to the power spectral density of one or more outputs. Let us consider the simulated output calculated as the sum of the four simulated sources:

$$y(t) = s_1(t) + s_2(t) + s_3(t) + s_4(t) \qquad (31)$$
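Continuing the sketch, the simulated output of equation (31) and its SVD-level decomposition through the virtual coherences of equation (22) could be computed as follows, reusing the hypothetical observation_matrix and virtual_coherences helpers and the simulated signals introduced earlier; the Labrador rotation and attribution steps would then be applied on top of this.

```python
import numpy as np

y = s1 + s2 + s3 + s4                                  # simulated output, equation (31)

Xs = [observation_matrix(x) for x in (x1, x2, x3, x4)] # windowed FFTs of the four inputs
Yf = observation_matrix(y)                             # windowed FFT of the output

n_lines = Yf.shape[1]
contributions = np.zeros((4, n_lines))
for f in range(n_lines):
    Xf = np.column_stack([X[:, f] for X in Xs])        # k observations x 4 inputs
    s_vals, C = virtual_coherences(Xf, Yf[:, f])       # equation (22)
    Syy = np.vdot(Yf[:, f], Yf[:, f]).real / Yf.shape[0]
    contributions[:, f] = C * Syy                      # share of each fictitious transducer
```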

Because of the orthogonal properties, it is now easy to calculate the contributions of the fictitious transducers through the virtual coherences of (22). The lower graph of Fig. 14 shows the number of vectors needed to make up at least 80% of the output power spectral density at each frequency. It is a way of determining the signal space dimension and consequently the noise space dimension. The signal space dimension is representative of the number of sources which contribute to the PSD of the concerned output. The centre graph shows the proximity of the orthogonal basis to the input transducers, for the useful vectors only. This is an a posteriori check of the initial hypothesis, which gives the mean angle between the oblique base of the transducers and the orthogonal base of the source vectors. The upper graph shows the contribution of all of the identified sources to the output power spectral density. The black shading shows the measured background noise (when it exists) and the upper dark line shows the power spectral density of the output. In Fig. 14, no background noise is measured, so it is represented by a constant which is the minimum value of the output power spectral density minus 10 dB.

Figure 14. Contributions of identified sources to the output.

7. CONCLUSION

All the calculations in the Labrador software need only the inputs for the source separation step. The output is then needed for the determination of the sources' contributions by means of the virtual coherences. There is no need to cut the singular values in order to separate a signal space from a noise space. This operation is always delicate because of the unknown frequency response functions between the real sources and the inputs. We simply ascertain the independence of a source towards the output by means of the associated virtual coherence value. When this value is very low, the source either is an input additive noise or has no influence on the output. In other words, if a source contributing to the output does not excite any of the inputs, then a non-coloured area would appear in Fig. 14. Our goal of quantifying the contribution of each source to the output does not allow us to determine the number of contributing incoherent sources. We are only able to determine this number when all the sources contributing to the output are represented by at least one input. Otherwise, the decomposition of a set of outputs is needed, as shown in [7]. Nevertheless, the results in Fig. 14 would show a non-coloured area in this case, unmasking at least one non-equipped source.

Even if no exact solution exists to our problem, Labrador sources are more accurate than those of MISO or SVD. MISO requires the inputs to be ordered for all discrete frequencies (which needs a high level of skill) before the orthogonalisation is made, and the distances from the real sources to the MISO sources increase rapidly with the number of inputs. So the validity of the MISO results rests on the ability of the operator to order the inputs and on the hope that the number of incoherent sources contributing to the output is low. The SVD method does not require any special skill on the part of the operator, but, like Labrador, needs a link between the singular vectors and the inputs, and its validity depends on the sources having very different energy levels; if this is not the case, the singular vectors represent an orthogonal set of mixed sources. Finally, Labrador, like SVD, does not require any special expertise from the operator, and shares out the errors on the source locations (in the observation space) by building an orthogonal base as close as possible to the inputs.

In this paper, we have described a method to separate sources and estimate the contribution of each one to the radiated noise. We have shown that the fictitious source transducers calculated with the Labrador method are equivalent to physical transducers cleansed of all undesirable effects due to interactions between the physical sources. As there is no unique solution to this problem at the second-order level, the process includes a heuristic part, but this practice has proven reliable in real applications whenever the skill of the engineers has allowed them to respect the basic assumption that the input transducers are well placed. The continuity of the fictitious transducers' power spectral densities along frequency is a good indicator of the legitimacy of the treatment. Another advantage is that the process proceeds systematically for all frequencies and, more importantly, does not need to separate the noise space from the signal space: it conserves all of the information right to the end of the treatment and shows the operator a synthetic global result with some confidence indicators.


. .   .  REFERENCES

1. J. S. B. and A. G. P. 1980 Engineering Applications of Correlation and Spectral Analysis. New York: John Wiley.
2. J. S. Park and K. J. Kim 1992 Mechanical Systems and Signal Processing 6, 491–502. Determination of priority among correlated inputs in source identification problems.
3. J. L., D. R. and D. O. 1987 Proceedings of ISATA 87, Florence, Italy 2, 487–504. Use of principal component analysis for correlation analysis between vibration and acoustical signals.
4. B. D. M. 1993 IEEE Transactions on Signal Processing 41, 2826–2838. The singular value decomposition and long and short spaces of noisy matrices.
5. H. M. 1992 Journal de Physique, Deuxième Congrès Français d'Acoustique 1, C1-19–C1-26. Analyse du champ acoustique.
6. J. H. 1989 Technical Review 1, B & K publication. STSF: a unique technique for scan-based near-field acoustic holography without restrictions on coherence.
7. M. S. K., P. D., R. J. B. and D. A. U. 1994 Mechanical Systems and Signal Processing 8, 363–380. A technique to determine the number of incoherent sources contributing to the response of a system.