
Comparative investigation of unfolding methods

V.B. Anykeyev, A.A. Spiridonov and V.P. Zhigunov
Institute for High Energy Physics, Protvino, USSR

Received 20 July 1990

Nuclear Instruments and Methods in Physics Research A303 (1991) 350-369, North-Holland

The problem of reconstructing the function $S(x)$ from a finite number of measured functionals $Y_n = \int dx\, A_n(x) S(x)$ is studied in the class of linear estimates. The functions $A_n(x)$ characterise the resolution of the experiment or the response function of the detector. The bias and the error operator of the estimate $\hat{S}(x)$ are studied as well. The least squares method, the spectral window method, the iterative method, Tikhonov's regularization method and others are investigated as unfolding (deconvolution, unsmearing) methods. These methods are compared for two numerical examples. The problem of choosing the regularization parameter and some related problems are also discussed.

1. Introduction

A lot of experiments in nuclear physics measuring spectra, differential cross sections and signals are described by one of the following mathematical models:

1) Function measurement at separate points $x_n$:
$$Y_n = \int dx\, K(x_n, x) S(x) + \epsilon_n, \quad n = 1, \ldots, N, \eqno(1)$$
where $S(x)$ is the function being studied, $K(x', x)$ is the detector resolution (response function, apparatus function), $Y_n$ is the measurement itself, and $\epsilon_n$ is a measurement error.

2) Function measurement as a histogram:
$$Y_n = \int_{x_n}^{x_{n+1}} dx' \int dx\, K(x', x) E(x', x) S(x) + \epsilon_n, \quad n = 1, \ldots, N, \eqno(2)$$
where $S(x)$ and $K(x', x)$ are defined as for model 1), $E(x', x)$ is the event detection efficiency (acceptance), $Y_n$ is the number of events in the histogram bin $[x_n, x_{n+1})$, and $\epsilon_n$ is the fluctuation of this number.

3) Function measurement by independent detectors:
$$Y_n = \int dx\, A_n(x) S(x) + \epsilon_n, \quad n = 1, \ldots, N, \eqno(3)$$
where $A_n(x)$ is the response function of the $n$th detector, and $S(x)$, $Y_n$ and $\epsilon_n$ are defined as for models 1) and 2).

Since the integration of $S(x)$ with the functions $K(x', x)$ and $A_n(x)$ can cause noticeable distortions of $S(x)$ in comparison with $\epsilon_n$, the problem arises of obtaining estimates $\hat{S}(x)$ taking into account eqs. (1)-(3). In experimental physics it is called an unfolding problem.

First of all it must be noted that when for model 1) we introduce
$$A_n(x) \equiv K(x_n, x),$$
and for model 2)
$$A_n(x) \equiv \int_{x_n}^{x_{n+1}} dx'\, K(x', x) E(x', x), \quad n = 1, \ldots, N,$$
the measurement procedures according to 1) and 2) become equivalent in a mathematical sense to 3). Therefore in what follows we shall consider the problem of obtaining the estimate of $S(x)$ only in the framework of model 3) *1. For the measurement errors $\epsilon_n$ we assume $\langle\epsilon_n\rangle = 0$ and the nonsingular covariance matrix $K_{nm} = \langle\epsilon_n \epsilon_m\rangle$ to be known a priori. We also assume the apparatus functions $A_n(x)$ to be known a priori and linearly independent, i.e. they span a subspace of dimension $N$.

The estimates $\hat{S}(x)$ and their properties studied below will be illustrated by two numerical examples.

Example 1:

$$S(x) = C\left[\frac{A_1}{(x - x_1)^2 + G_1^2} + \frac{A_2}{(x - x_2)^2 + G_2^2}\right],$$
$$E(x', x) \equiv E(x) = 1 - \left[\frac{x - x_1}{x_2 - x_1 + 2G_2}\right]^2,$$
$$K(x', x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(x' - x)^2}{2\sigma^2}\right] \equiv N(x, \sigma^2),$$
where $x \in [4, 16]$, $A_1 = 2$, $A_2 = 1$, $G_1 = G_2 = 1$, $x_1 = 10$,

*1 Mathematicians call problem 3) with $\epsilon_n = 0$ a generalized moment problem.


$x_2 = 14$, $\sigma = 1.5$, and $C$ is chosen from the condition that $\int dx\, S(x) = 10^4$ (see fig. 1a). We used this example in refs. [1,2].

Example 2: measurement of the differential cross section $d\sigma/dx$ of neutrino interactions as a function of the Bjorken scaling variable $x$, which under some simplifying assumptions can be represented as follows:
$$S(x) = \frac{10^4\, C}{B(\alpha, 1 + \beta)}\, x^{\alpha} (1 - x)^{\beta}, \quad E(x) = 1, \quad K(x', x) = N(x, \sigma^2(x)),$$
where $x \in [0, 1]$, $\alpha = 0.7$, $\beta = 4$, $B(\alpha, 1 + \beta)$ is the Euler B-function, $\sigma(x) = 0.01 + 0.24x$ is the resolution of the CHARM experiment [3], and $C$ is chosen from the condition that $\int dx\, S(x) = 10^4$, i.e. the number of detected events is $10^4$ (fig. 2a).

In both cases we consider the measured distribution to be a histogram $Y_n$. For the first example the interval of measured $x$ was divided into 10 bins of equal size (see fig. 1a), while for the second example an irregular division into 10 bins was chosen (0.00, 0.025, 0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50, 0.75, 1.00), dictated by the variable resolution of this example. The problem of matching the bin width to the resolution of the experiment is considered in the appendix.
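As a numerical aside (ours, not from the paper), the measurement model (3) for such histogram examples can be tabulated directly. The following Python sketch builds the apparatus functions $A_n(x)$ of Example 1 on a grid and produces the exact measurements $Y_n$; the grid, the midpoint-rule integration, the use of numpy/scipy and the simplification $E(x) = 1$ are our assumptions:

```python
import numpy as np
from scipy.special import erf

# Example 1 (a sketch; E(x) = 1 and the discretization are our simplifications).
A1, A2, G1, G2, x1, x2, sigma = 2.0, 1.0, 1.0, 1.0, 10.0, 14.0, 1.5
x = np.linspace(4.0, 16.0, 1201)       # fine grid for the true variable x
dx = x[1] - x[0]
edges = np.linspace(4.0, 16.0, 11)     # 10 equal histogram bins in x'

def S_true(x):
    # Two Breit-Wigner peaks; the constant C normalizes the integral to 1e4.
    return A1 / ((x - x1)**2 + G1**2) + A2 / ((x - x2)**2 + G2**2)

def A_bin(n, x):
    # A_n(x) = int_{x_n}^{x_{n+1}} dx' N(x, sigma^2): a Gaussian CDF difference.
    lo, hi = edges[n], edges[n + 1]
    z = np.sqrt(2.0) * sigma
    return 0.5 * (erf((hi - x) / z) - erf((lo - x) / z))

A = np.array([A_bin(n, x) for n in range(10)])   # apparatus functions on the grid
C = 1e4 / (np.sum(S_true(x)) * dx)               # normalization: int dx S(x) = 1e4
Y = A @ (C * S_true(x)) * dx                     # exact measurements (eps_n = 0)
```

The later snippets in this text reuse these `x`, `dx`, `A` and `Y` arrays.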


Fig. 1. Example 1. (a) Histogram - mean of the distribution $\langle dY/dx \rangle$ ($\epsilon_n = 0$); solid line - $S_A(x)$; dashed line - function $S(x)$; dotted line - invisible component $S_A^{\perp}(x)$. (b) Solid line - $S_A(x)$; dashed line - $S_A(x) + 5 S_A^{\perp}(x)$. (c) Histogram - simplest estimate $\hat{S}_0$; dashed line - function $S(x)$; dotted line - mean of the distribution $\langle dY/dx \rangle$ ($\epsilon_n = 0$); markers - Monte Carlo generated distribution $dY/dx$ ($\epsilon_n \neq 0$); $\mathrm{Sp}A_R^0 = 3.03$, $\mathrm{Sp}\Sigma^0 = 1.86 \times 10^4$. (d) Solid line - LSM estimate $\hat{S}^+(x)$ ($\epsilon_n \neq 0$); dashed line - function $S(x)$; $\mathrm{Sp}A_R^+ = 10$, $\mathrm{Sp}\Sigma^+ = 1.11 \times 10^9$.


Fig. 2. Example 2. Notations as in fig. 1; $\mathrm{Sp}A_R^+ = 10$, $\mathrm{Sp}\Sigma^+ = 5.30 \times 10^6$; $\mathrm{Sp}A_R^0 = 4.56$, $\mathrm{Sp}\Sigma^0 = 1.54 \times 10^3$.

For the covariance matrix of the measurements we used the approximation $K_{nm} = Y_n \delta_{nm}$, with the $\epsilon_n$ distributed as $N(0, Y_n)$. Any function $S(x)$ can be uniquely represented as follows (see figs. 1a and 2a):

$$S(x) = S_A(x) + S_A^{\perp}(x), \eqno(4)$$
where $S_A(x)$ is the projection of the function $S(x)$ onto the subspace spanned by the set of functions $A_n(x)$:
$$S_A(x) = \sum_{n=1}^{N} C_n A_n(x),$$
and $S_A^{\perp}(x)$ is orthogonal to this subspace:
$$\int dx\, A_n(x) S_A^{\perp}(x) = 0, \quad n = 1, \ldots, N. \eqno(5)$$
It is evident that the component $S_A^{\perp}(x)$ cannot be estimated on the basis of the $Y_n$ alone: it follows from eq. (5) that $Y_n$ does not depend on it (as noticed in refs. [1,4-6]). In fact, the functions $S_A$ and $S_A + 5 S_A^{\perp}$ presented in figs. 1b and 2b produce exactly the same measurements $Y_n$ for the given $A_n(x)$. So if $S_A^{\perp} \neq 0$, the finiteness of the number of measurements makes the estimate $\hat{S}(x)$ biased, independently of the estimation method. $S_A(x)$ is sometimes called the "visible" ("measurable") component of $S(x)$, while $S_A^{\perp}$ is the "invisible" ("unmeasurable") one. The linear estimate $\hat{S}(x)$ in general has the form
$$\hat{S}(x) = \sum_{n=1}^{N} a_n(x) Y_n. \eqno(6)$$

From eqs. (4) and (5) it follows that, for reasonable estimates which do not use a priori information, the $a_n(x)$ have to belong to the subspace spanned by the $A_n(x)$. If we use relation (3), $\hat{S}(x)$ can be represented as follows:
$$\hat{S}(x) = \sum_n a_n(x) \int dx'\, A_n(x') S(x') + \sum_n a_n(x) \epsilon_n. \eqno(7)$$
Averaging eq. (7) over the measurement errors we obtain
$$\langle \hat{S}(x) \rangle = \sum_n a_n(x) \int dx'\, A_n(x') S(x') = \int dx'\, A_R(x, x') S(x'), \eqno(8)$$
where the function
$$A_R(x, x') = \sum_n a_n(x) A_n(x') \eqno(9)$$
can be called a residual apparatus function [1]. It follows from eq. (8) that, when $A_R(x, x')$ does not possess the property
$$\int dx'\, A_R(x, x') S_A(x') = S_A(x), \eqno(10)$$
the estimate $\hat{S}(x)$ will have extra bias, which is related to the particular choice of the unfolding method (i.e. to the choice of the functions $a_n(x)$ in eq. (6)). The noise component of the estimate $\hat{S}(x)$, as it follows from eqs. (7) and (8), is characterized by an error operator [1]:
$$\Sigma(x, x') = \langle \hat{\epsilon}(x) \hat{\epsilon}(x') \rangle = \sum_{n,m} a_n(x) K_{nm} a_m(x'). \eqno(11)$$
Hence a linear estimate of general form (6) may be interpreted as the result of a new experiment with a new apparatus function $A_R(x, x')$ and noise component $\hat{\epsilon}(x) = \sum_n a_n(x) \epsilon_n$ with error operator (11). Let us consider the mean square of the distance between $S(x)$ and $\hat{S}(x)$:
$$\bar{D} = \left\langle \int dx \left[ (S(x) - \langle\hat{S}(x)\rangle) + (\langle\hat{S}(x)\rangle - \hat{S}(x)) \right]^2 \right\rangle = \int dx \left[ S(x) - A_R S \right]^2 + \mathrm{Sp}\Sigma,$$
where the spur of $\Sigma(x, x')$ is
$$\mathrm{Sp}\Sigma = \int dx\, \Sigma(x, x).$$
From this one can see that it is convenient to characterize the noise component of the estimate $\hat{S}(x)$ with the value $\mathrm{Sp}\Sigma$ [1]. We will see later that the resolution of the "new experiment" (respectively the bias of the estimate $\hat{S}(x)$) can be characterized with the value $\mathrm{Sp}A_R$ [7].

In section 2 we consider the estimate by the least squares method (LSM). We prove that it gives an unbiased estimate of the "visible" component of $S(x)$. However, this estimate has an inappropriate noise component, which leads to the necessity of its regularization. In sections 3-6 we consider regularization methods providing linear estimates: the spectral window method, the iterative method, Tikhonov's method and the dual method. The use of these methods leads to regularization of the LSM estimate - the noise in the estimate is reduced, but it turns out to be a biased estimate of the "visible" component $S_A(x)$. In section 7 we consider the following problem: which unfolding method provides the minimal distance between the estimate $\hat{S}(x)$ and the "visible" component $S_A(x)$? The problem of choosing the regularization parameter is considered in section 8. The use of the estimate $\hat{S}(x)$ for the estimation of the vector of parameters $a$ in a function $S(x) = f(x, a)$ given in parametric form is studied in section 9. Finally, some results which are aside from the main contents of the article are given in the appendix. The main part of the analytic results was presented in our other paper [8].

2. Least squares method (LSM)

First of all we consider the properties of the LSM estimate. Being interested in the visible component of $S(x)$, we look for the estimate $\hat{S}(x)$ in the form
$$\hat{S}(x) = \sum_n \hat{C}_n A_n(x)$$
from the condition of the minimum of the functional, which in that case looks like
$$\Phi = \sum_{n,m} \left[ Y_n - (A_n|\hat{S}) \right] W_{nm} \left[ Y_m - (A_m|\hat{S}) \right], \eqno(12)$$
where $W = K^{-1}$ and $G_{nk} = (A_n|A_k) = \int dx\, A_n(x) A_k(x)$ is the Gram matrix of the functions $A_n(x)$. Differentiation of functional (12) over $\hat{C}_i$ leads to the following system of equations for the vector $\hat{C}$:
$$G^T W G \hat{C} = G^T W Y. \eqno(13)$$
Because the matrices $G$ and $W$ are positive definite, the solution of this system of equations can be written as follows:
$$\hat{C} = G^{-1} Y.$$


So the estimate $\hat{S}^+(x)$ is, in accordance with LSM,
$$\hat{S}^+(x) = \sum_{n,m} A_n(x) G^{-1}_{nm} Y_m. \eqno(14)$$
It is clear that $\int dx\, A_n(x) \hat{S}^+(x) = Y_n$ and functional (12) becomes zero. The residual apparatus function $A_R(x, x')$ in LSM,
$$A_R^+(x, x') = \sum_{n,m} A_n(x) G^{-1}_{nm} A_m(x'), \eqno(15)$$
has the maximum possible resolution because it does not distort the visible component of $S(x)$ (relation (10) is fulfilled):
$$\int dx'\, A_R^+(x, x') S_A(x') = \int dx' \sum_{n,m} A_n(x) G^{-1}_{nm} A_m(x') \sum_l C_l A_l(x') = \sum_{n,m,l} C_l A_n(x) G^{-1}_{nm} G_{ml} = \sum_l C_l A_l(x) = S_A(x). \eqno(16)$$
This means that the estimate $\hat{S}^+(x)$ by LSM is an unbiased estimate of the projection of $S(x)$ on the subspace spanned by the functions $A_n(x)$ (the subspace which is visible by the experiment), i.e. $\langle \hat{S}^+(x) \rangle = S_A(x)$ (see figs. 1a and 2a).

Estimate (14) can be obtained as the solution of the following variational problem:
$$\min_{S(x)} \Phi = \int dx\, S^2(x)$$
under the conditions $(A_n|S) = Y_n$, $n = 1, \ldots, N$. Expression (14) is called a pseudosolution, and the operator $\sum_n A_n(x) G^{-1}_{nm}$ is called a pseudoinverse operator for the system of equations $(A_n|S) = Y_n$, if the scalar product in the space of $S(x)$ is defined as $(S, S) = \int dx\, S(x) S(x)$ and in the space of measurements as
$$(Y, Y) = \sum_{n,m} Y_n W_{nm} Y_m.$$
As was shown in our earlier paper [8], the requirement that the estimate of the visible component of $S(x)$ be unbiased also leads to eq. (14). The fulfilment of eq. (10) for $A_R^+(x, x')$ actually means that in LSM $A_R^+(x, x')$ is a projection operator onto the visible subspace. Indeed, if we denote it by $P_A$ one can see that $P_A^2 = P_A$. The projection operator onto the invisible subspace is
$$P_A^{\perp}(x, x') = \delta(x - x') - P_A(x, x').$$
The characteristic of the resolution introduced earlier can be calculated:
$$\mathrm{Sp}A_R^+ = \int dx \sum_{n,m} A_n(x) G^{-1}_{nm} A_m(x) = N.$$
This is in agreement with a logical statement: the more measurements we have, or the larger the dimension of the visible space is, the better the estimate $\hat{S}(x)$ can, in principle, be obtained on the basis of the measurements $Y_n$.

The error operator and its spur for the LSM estimate are given by the expressions
$$\Sigma^+(x, x') = \sum_{n,m,l,k} A_n(x) G^{-1}_{nm} K_{ml} G^{-1}_{lk} A_k(x') \eqno(17)$$
and
$$\mathrm{Sp}\Sigma^+ = \mathrm{Sp}\, K G^{-1}.$$
Let us consider, from a general point of view, the noise component of the estimate $\hat{S}^+(x)$. When the nondiagonal elements of the matrix $G$ are large enough (the functions $A_n(x)$, $A_m(x)$, $n \neq m$, are strongly overlapping, which is a common situation in practice), the matrix $G$ is ill-conditioned. The consequence of this is small eigenvalues $\lambda$ of the matrix $G$, which can lead to inappropriately large noise in $\hat{S}^+(x)$, as becomes clear from the following considerations. Let us introduce the eigenvectors $|g^{(l)}\rangle$ and eigenvalues $\lambda_l$ of the Gram matrix $G$:
$$G |g^{(l)}\rangle = \lambda_l |g^{(l)}\rangle, \qquad \langle g^{(l)}|g^{(m)}\rangle = \delta_{lm}. \eqno(18)$$
Using the theorem on the spectral representation of $G^{-1}$ we can write
$$G^{-1} = \sum_{l=1}^{N} \frac{|g^{(l)}\rangle\langle g^{(l)}|}{\lambda_l}. \eqno(19)$$
Substituting eq. (19) into expression (14) for $\hat{S}^+(x)$ we obtain
$$\hat{S}^+(x) = \sum_{n,m,l} A_n(x) \frac{g_n^{(l)} g_m^{(l)}}{\lambda_l} (\bar{Y}_m + \epsilon_m).$$
It is evident that the noise component of $\hat{S}^+(x)$ is proportional to $\epsilon_m/\lambda_l$, which can be large at small $\lambda_l$. It is also evident from expression (17) for $\mathrm{Sp}\Sigma$ if one considers the particular case $K_{nm} = \sigma^2 \delta_{nm}$:
$$\mathrm{Sp}\Sigma = \sum_l \sigma^2/\lambda_l.$$
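To make the construction concrete, here is a minimal numerical sketch (our own illustration, not code from the paper) of the LSM estimate (14), reusing the `x`, `dx`, `A` and `Y` arrays of the earlier Example-1 snippet:

```python
# LSM unfolding, eq. (14): S^+(x) = sum_{n,m} A_n(x) G^{-1}_{nm} Y_m.
G = A @ A.T * dx                 # Gram matrix G_nk = int dx A_n(x) A_k(x)
Ginv = np.linalg.inv(G)

S_plus = (Ginv @ Y) @ A          # unbiased estimate of the visible component

# Ill-conditioning diagnostics: small eigenvalues of G amplify the noise
# in S^+ as eps/lambda_l (cf. eqs. (18), (19)).
lam = np.linalg.eigvalsh(G)
print("lambda_min =", lam.min(), "lambda_max =", lam.max())

# Spur of the error operator for K = diag(Y): Sp Sigma^+ = Sp K G^{-1}.
Kcov = np.diag(Y)
print("Sp Sigma^+ =", np.trace(Kcov @ Ginv))
```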

For the first example $\lambda_{min} = 0.10 \times 10^{-5}$, $\lambda_{max} = 0.82$, and for the second $\lambda_{min} = 0.35 \times 10^{-3}$, $\lambda_{max} = 0.14$. The addition of relatively small errors $\epsilon_n$ (see figs. 1c and 2c) to the exact values $\bar{Y}_n$ leads to a drastic increase of the noise component in the estimate $\hat{S}^+(x)$ (see figs. 1d and 2d). Though the estimate $\hat{S}^+(x)$ has a finite noise component ($\lambda_{min} \neq 0$), its significant value leads to the necessity of applying regularization methods developed for ill-posed problems.

The general method of suppressing the noise component of $\hat{S}^+(x)$ is filtration of $\hat{S}^+(x)$ through a noise suppression filter $U(x, x')$: $\hat{S}(x) = \int dx'\, U(x, x') \hat{S}^+(x')$. If $U P_A^{\perp} = 0$ one can see that $\langle \hat{S} \rangle = U S_A$ [8]. Moreover, every linear estimate can be written as the result of filtration of $\hat{S}^+(x)$ through $U(x, x') = A_R(x, x')$ for this estimate. Actually, with the use of eqs. (6), (9) and (14), we obtain
$$A_R \hat{S}^+ = \sum_n a_n(x) \int dx'\, A_n(x') \sum_{m,k} A_m(x') G^{-1}_{mk} Y_k = \sum_{n,m,k} a_n(x) G_{nm} G^{-1}_{mk} Y_k = \sum_n a_n(x) Y_n = \hat{S}(x).$$
We shall consider the most popular methods for the choice of $A_R(x, x')$ (or $a_n(x)$) in the following sections.

In the case of model 2) for the measurements, sometimes the simplest estimate $\hat{S}_0(x)$ is used in the form of a step function, which takes into account the detection efficiency $E(x)$ but neglects the difference of the resolution $K(x, x')$ from the $\delta$-function. For comparison let us obtain expressions for $\hat{S}_0(x)$ and its characteristics $A_R^0(x, x')$, $\Sigma^0(x, x')$, $\mathrm{Sp}A_R^0$, $\mathrm{Sp}\Sigma^0$. We look for $\hat{S}_0(x)$ in the form
$$\hat{S}_0(x) = \sum_n a_n \chi_n(x)$$
by LSM from the condition of the minimum of the functional
$$\sum_{n,m} \left[ Y_n - \int dx\, \chi_n(x) E(x) \sum_k a_k \chi_k(x) \right] W_{nm} \left[ Y_m - \int dx\, \chi_m(x) E(x) \sum_l a_l \chi_l(x) \right],$$
where $\chi_n$ is the characteristic function of the $n$th bin of the experimental histogram:
$$\chi_n(x) = \begin{cases} 1, & x \in [x_n, x_{n+1}), \\ 0, & x \notin [x_n, x_{n+1}). \end{cases}$$
Denoting $\int dx\, \chi_n(x) E(x) \chi_n(x) = C_n$, we obtain
$$\hat{S}_0(x) = \sum_n \chi_n(x) C_n^{-1} Y_n,$$
$$A_R^0(x, x') = \sum_n \chi_n(x) C_n^{-1} A_n(x'),$$
$$\Sigma^0(x, x') = \sum_{n,m} \chi_n(x) C_n^{-1} K_{nm} C_m^{-1} \chi_m(x'),$$
$$\mathrm{Sp}\Sigma^0 = \sum_n C_n^{-2} \Delta x_n K_{nn},$$
where $\Delta x_n = x_{n+1} - x_n$. In figs. 1c and 2c $\hat{S}_0(x)$, $\mathrm{Sp}A_R^0$ and $\mathrm{Sp}\Sigma^0$ are given. The comparison of the latter with $\mathrm{Sp}A_R^+$ and $\mathrm{Sp}\Sigma^+$ demonstrates how the improvement of the resolution in the estimate $\hat{S}^+(x)$ can lead to a drastic increase of the noise component. Fig. 1c illustrates how the efficiency $E(x)$ in the step function $\hat{S}_0(x)$ leads to a result which makes no sense when the resolution $K(x, x')$ is poor: large values of $Y_n$ in the last two bins due to smearing, being corrected for low efficiency, produce a spurious peak in $\hat{S}_0(x)$.

3. Spectral window method

The spectral window method as a regularization method [6] can be explained as a suppression of the contribution from small $\lambda_l$ in eq. (19) by means of introducing a multiplier $f(\lambda_l)$ under the sum sign:
$$\tilde{G}^{-1} = \sum_{l=1}^{N} f(\lambda_l) \frac{|g^{(l)}\rangle\langle g^{(l)}|}{\lambda_l}, \eqno(20)$$
where $f(\lambda) \leq 1$ and $f(\lambda)$ is small for small eigenvalues. For further analysis of the resolution and noise of the estimate obtained from $\hat{S}^+(x)$ by substituting $\tilde{G}^{-1}$ instead of $G^{-1}$, it is useful to introduce the functions
$$\psi_l(x) = \frac{1}{\sqrt{\lambda_l}} \sum_n g_n^{(l)} A_n(x). \eqno(21)$$
Using eq. (18) it becomes clear that the $\psi_l(x)$ form an orthonormal basis in the visible space:
$$(\psi_l|\psi_k) = \sum_{n,m} \frac{1}{\sqrt{\lambda_l \lambda_k}} g_n^{(l)} G_{nm} g_m^{(k)} = \delta_{lk}.$$
In figs. 3 and 4 the $\psi_n(x)$ are shown for $n = 1, 3, 5, 7, 9$. Making the substitution $G^{-1} \to \tilde{G}^{-1}$ in expression (14) for $\hat{S}^+(x)$ we have
$$\hat{S}(x) = \sum_l \psi_l(x) f(\lambda_l) \frac{\langle g^{(l)}|Y \rangle}{\sqrt{\lambda_l}}. \eqno(22)$$
From this expression one can obtain
$$A_R(x, x') = \sum_l \psi_l(x) f(\lambda_l) \psi_l(x'),$$
$$\mathrm{Sp}A_R = \sum_l f(\lambda_l) \leq \mathrm{Sp}A_R^+ = N,$$
$$\Sigma(x, x') = \sum_{l,k,n,m} \psi_l(x) \frac{f(\lambda_l)}{\sqrt{\lambda_l}} g_n^{(l)} K_{nm} g_m^{(k)} \frac{f(\lambda_k)}{\sqrt{\lambda_k}} \psi_k(x'),$$
$$\mathrm{Sp}\Sigma = \sum_{l,m,n} \frac{1}{\lambda_l} f^2(\lambda_l) g_n^{(l)} K_{nm} g_m^{(l)}. \eqno(23)$$

When $K_{nm} = \sigma^2 \delta_{nm}$:
$$\mathrm{Sp}\Sigma = \sum_l f^2(\lambda_l) \frac{\sigma^2}{\lambda_l}.$$
This inequality ($\mathrm{Sp}\Sigma < \mathrm{Sp}\Sigma^+$) is true for a general form of the matrix $K$. Thus on the one hand the spectral window method leads to a deterioration of the resolution $A_R$, i.e. the estimate $\hat{S}(x)$ will possess additional bias in comparison with $\hat{S}^+(x)$, caused by the regularization. On the other hand the noise component of $\hat{S}(x)$ is smaller than the noise component of $\hat{S}^+(x)$. It is evident that for $f(\lambda) = 1$ expressions (22) and (23) are those for LSM in the spectral representation, which means that
$$A_R^+(x, x') = \sum_{l=1}^{N} \psi_l(x) \psi_l(x'). \eqno(24)$$
Let us consider the simplest form of $f(\lambda)$: $f(\lambda) = 1$ when $\lambda \geq \lambda_{N_R}$, $N_R < N$, and $f(\lambda) = 0$ for $\lambda < \lambda_{N_R}$. Considering the $\lambda_l$ to be in decreasing order we obtain:
$$A_R(x, x') = \sum_{l=1}^{N_R} \psi_l(x) \psi_l(x'). \eqno(25)$$

Fig. 3. Example 1. Functions of the orthonormal basis in the visible space: $\psi_1$, $\psi_3$, $\psi_5$, $\psi_7$, $\psi_9$.

Fig. 4. Example 2. Notations as in fig. 3.

It means that though $A_R^+$ is a projector on the whole visible space, $A_R$ is the projector on its subspace spanned by the functions $\psi_1, \ldots, \psi_{N_R}$. The choice of $N_R$ is a compromise between the deterioration of the resolution in estimate (22) and the value of its noise component. A more detailed study of the choice of $N_R$ will be done in section 8. Figs. 5a and 6a illustrate the comparison of $S(x)$ with the estimate $\hat{S}(x)$ by the spectral window method ($f(\lambda)$ is chosen to be a step function), and in figs. 7a and 8a one can see the comparison of $A_R(x_0, x)$ by the spectral window method with $A_R^+(x_0, x)$ and $A_R^0(x_0, x)$.
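As an illustration (ours, not the paper's), the step-function window (25) amounts to truncating the eigenvector expansion of $G$; a sketch, again assuming the `G`, `Y` and `A` arrays from the previous snippets and an illustrative cut $N_R$:

```python
# Spectral window estimate, eqs. (20)-(22), with the step window f:
# keep the N_R largest eigenvalues of G, zero out the rest.
lam, gvec = np.linalg.eigh(G)         # ascending eigenvalues, columns = |g^(l)>
order = np.argsort(lam)[::-1]         # decreasing order, as in eq. (25)
lam, gvec = lam[order], gvec[:, order]

N_R = 5                               # regularization parameter (a compromise)
f = (np.arange(len(lam)) < N_R).astype(float)

Gtil_inv = (gvec * (f / lam)) @ gvec.T   # G~^{-1} of eq. (20)
S_window = (Gtil_inv @ Y) @ A            # regularized estimate, eq. (22)

print("Sp A_R =", f.sum())               # eq. (23): Sp A_R = sum_l f(lambda_l)
```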


Fig. 5. Example 1. The comparison of estimates $\hat{S}(x)$, minimizing $\overline{D_A}$, for different methods: solid line - estimates $\hat{S}(x)$ ($\epsilon_n \neq 0$), dashed line - function $S(x)$. For illustration, at some points the values $\Sigma(x, x)^{1/2}$ are plotted as errors. (a) Spectral window method, $\mathrm{Sp}A_R = 5.0$, $\mathrm{Sp}\Sigma = 6.78 \times 10^4$. (b) Iterative method, $\mathrm{Sp}A_R = 5.49$, $\mathrm{Sp}\Sigma = 1.11 \times 10^5$. (c) Tikhonov's method, $\mathrm{Sp}A_R = 5.82$, $\mathrm{Sp}\Sigma = 1.54 \times 10^5$. (d) Dual method: $\sigma = 0.3$ solid line, $\mathrm{Sp}A_R = 5.70$, $\mathrm{Sp}\Sigma = 1.58 \times 10^5$; $\sigma = 1.0$ dashed-dotted line, $\mathrm{Sp}A_R = 4.11$, $\mathrm{Sp}\Sigma = 4.97 \times 10^4$.

4. Iterative method

Here we consider another method of regularization, the iterative method [6,9]. In line with our former approach we will use it to construct an approximate expression for $G^{-1}$. In the construction of $\hat{S}^+(x)$ it was necessary to solve the system of equations (13), equivalent to the system
$$G C = Y. \eqno(26)$$
The simplest iterative scheme for the solution of eq. (26), leading to a linear estimate of $S(x)$, is
$$C^{(i+1)} = C^{(i)} - \tau (G C^{(i)} - Y), \eqno(27)$$
where $i$ is the iteration number and $\tau$ is a relaxation parameter whose choice will be discussed later. If we choose $C^{(0)} = 0$ as the zeroth approximation one arrives at
$$C^{(i)} = \tau \sum_{j=0}^{i-1} (I - \tau G)^j Y. \eqno(28)$$
So in iterative scheme (27) the linear estimate for the $i$th iteration looks like


Fig. 6. Example 2. Notations as in fig. 5, except for (d), where the estimate $\hat{S}(x)$ (dotted line) for the dual method with parameter $\sigma = 0.0025$ is presented. (a) Spectral window method, $\mathrm{Sp}A_R = 6.0$, $\mathrm{Sp}\Sigma = 2.22 \times 10^5$. (b) Iterative method, $\mathrm{Sp}A_R = 6.28$, $\mathrm{Sp}\Sigma = 2.36 \times 10^5$. (c) Tikhonov's method, $\mathrm{Sp}A_R = 7.63$, $\mathrm{Sp}\Sigma = 6.37 \times 10^5$. (d) Dual method, $\mathrm{Sp}A_R = 6.48$, $\mathrm{Sp}\Sigma = 4.59 \times 10^5$.

$$\hat{S}^{(i)}(x) = \sum_{n,m} A_n(x) \left[ \tau \sum_{j=0}^{i-1} (I - \tau G)^j \right]_{nm} Y_m = \sum_{n,m} A_n(x) \tilde{G}^{-1}_{nm} Y_m. \eqno(29)$$
The approximate value of $G^{-1}$, equal to
$$\tau \sum_{j=0}^{i-1} (I - \tau G)^j,$$
is for this scheme denoted as $\tilde{G}^{-1}$. The expressions for $A_R(x, x')$, $\Sigma(x, x')$, $\mathrm{Sp}A_R$ and $\mathrm{Sp}\Sigma$ are:
$$A_R(x, x') = \sum_{n,m} A_n(x) \tilde{G}^{-1}_{nm} A_m(x'), \quad \mathrm{Sp}A_R = \mathrm{Sp}\, G \tilde{G}^{-1},$$
$$\Sigma(x, x') = \sum_{n,m,l,k} A_n(x) \tilde{G}^{-1}_{nm} K_{ml} \tilde{G}^{-1}_{lk} A_k(x'), \quad \mathrm{Sp}\Sigma = \mathrm{Sp}\, \tilde{G}^{-1} K \tilde{G}^{-1} G. \eqno(30)$$
The scheme to obtain the iterative estimate stated above becomes more transparent when we return to the spectral representation of $\tilde{G}^{-1}$ and the basis $\psi_l(x)$ in the visible space. With the use of the known representation of a function $F$ of the matrix $G$,
$$F(G) = \sum_l F(\lambda_l) |g^{(l)}\rangle\langle g^{(l)}|,$$
we can write for the $i$th iteration
$$\tilde{G}^{-1} = \tau \sum_{l=1}^{N} \sum_{j=0}^{i-1} (1 - \tau\lambda_l)^j |g^{(l)}\rangle\langle g^{(l)}|. \eqno(31)$$
The sum over $j$ is a geometric progression, which helps to express (31) as follows:
$$\tilde{G}^{-1} = \sum_{l=1}^{N} \frac{1 - (1 - \tau\lambda_l)^i}{\lambda_l} |g^{(l)}\rangle\langle g^{(l)}|. \eqno(32)$$
As a consequence, $\tilde{G}^{-1}$ will converge to $G^{-1}$ if $0 < \tau\lambda_{max} < 2$. Hence to reduce the bias of the estimate $\hat{S}^{(i)}(x)$ the parameter $\tau$ should be chosen from the condition $\tau = 1/\lambda_{max}$. In this case the function $f_i(\lambda) = 1 - (1 - \tau\lambda)^i$ is close to unity for large $\lambda$ and close to zero for small $\lambda$. With the use of eq. (32) for $\tilde{G}^{-1}$ it is possible to write $\hat{S}^{(i)}(x)$, $A_R(x, x')$, $\mathrm{Sp}A_R$, $\Sigma(x, x')$ and $\mathrm{Sp}\Sigma$ for the iterative method in form (22), (23) with the function $f(\lambda)$ defined above. So the iterative method leads again to stabilization of the estimate $\hat{S}(x)$ with respect to the input noise $\epsilon_n$, thus deteriorating the resolution. The choice of the number of iterations $N_I$ (playing the role of the regularization parameter in this case) is a compromise between the noise in the estimate and its resolution.
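A compact numerical transcription of scheme (27) (our sketch; the relaxation parameter follows the rule above, while the iteration count is illustrative, and the arrays are those of the earlier snippets):

```python
# Landweber-type iteration, eqs. (27), (28): C <- C - tau (G C - Y), C^(0) = 0.
tau = 1.0 / np.linalg.eigvalsh(G).max()   # tau = 1/lambda_max ensures convergence
N_I = 200                                  # iteration count acts as the regularizer

Cit = np.zeros_like(Y)
for _ in range(N_I):
    Cit -= tau * (G @ Cit - Y)

S_iter = Cit @ A                           # estimate (29) after N_I iterations
```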


Fig. 7. Example 1. The comparison of $A_R(x_0, x)$ (solid line) with $A_R^+(x_0, x)$ (dashed line) and with $A_R^0(x_0, x)$ (dashed-dotted line) for different methods at the point $x_0 = 9.4$. (a) Spectral window method. (b) Iterative method. (c) Tikhonov's method. (d) Dual method ($\sigma = 0.3$ solid line, $\sigma = 1.0$ dashed-dotted line, $A_R^0(x_0, x)$ dotted line).


Fig. 8. Example 2. The value $x_0 = 0.16$. Notations as in fig. 7.

In figs. 5b and 6b the comparison of $S(x)$ with estimate (29) by the iterative method is given; figs. 7b and 8b illustrate the comparison of $A_R^+(x_0, x)$ and $A_R^0(x_0, x)$ with $A_R(x_0, x)$ by the iterative method.

5. Tikhonov's regularization method

In this method [10] the estimate $\hat{S}(x)$ is determined from the functional which is a generalization of the LSM functional:
$$\Phi_\alpha = \sum_{n,m} (Y_n - \hat{Y}_n) W_{nm} (Y_m - \hat{Y}_m) + \alpha \int dx \left[ \frac{d^k \hat{S}(x)}{dx^k} \right]^2, \eqno(33)$$
where the second term is the so-called stabilizer and $\alpha > 0$ is a regularization parameter.

Let us study first the situation when $k = 0$. The variation of (33) over $\hat{S}(x)$ leads to the following equation for $\hat{S}(x)$:
$$\alpha \hat{S}(x) + \sum_{n,m} A_n(x) W_{nm} \int dx'\, A_m(x') \hat{S}(x') = \sum_{n,m} A_n(x) W_{nm} Y_m. \eqno(34)$$
Consequently, $\hat{S}(x)$ can be written as
$$\hat{S}(x) = \frac{1}{\alpha} \sum_{n,m} A_n(x) W_{nm} \left[ Y_m - \int dx'\, A_m(x') \hat{S}(x') \right] \equiv \sum_n A_n(x) \Lambda_n. \eqno(35)$$
Substituting eq. (35) into the definition of $\Lambda$ we obtain the system of equations for $\Lambda$:
$$\Lambda = \frac{1}{\alpha} W [Y - G\Lambda],$$
which can be represented in two equivalent forms:
$$[\alpha I + WG]\Lambda = WY, \qquad [\alpha K + G]\Lambda = Y.$$
The solution can be written down in two ways:
$$\Lambda = [\alpha I + WG]^{-1} W Y, \eqno(36)$$
$$\Lambda = [\alpha K + G]^{-1} Y. \eqno(37)$$
As it follows from eq. (35) and, for example, eq. (37), the estimate $\hat{S}(x)$ obtained by Tikhonov's regularization method is calculated as before by substituting the approximate matrix
$$\tilde{G}^{-1} = [\alpha K + G]^{-1} \eqno(38)$$
instead of $G^{-1}$ into $\hat{S}^+(x)$. Using notation (38) we can write
$$\hat{S}(x) = \sum_{n,m} A_n(x) \tilde{G}^{-1}_{nm} Y_m. \eqno(39)$$
The expressions for $A_R(x, x')$, $\Sigma(x, x')$, $\mathrm{Sp}A_R$ and $\mathrm{Sp}\Sigma$ are similar to eqs. (30). If we suppose $K_{nm} = \sigma^2 \delta_{nm}$ and return to the spectral representation of (38),
$$\tilde{G}^{-1} = \sum_l \left[ \alpha\sigma^2 + \lambda_l \right]^{-1} |g^{(l)}\rangle\langle g^{(l)}| = \sum_l \frac{f(\lambda_l)}{\lambda_l} |g^{(l)}\rangle\langle g^{(l)}|,$$
where $f(\lambda) = \lambda/(\alpha\sigma^2 + \lambda)$; the expressions for $\hat{S}(x)$, $A_R(x, x')$, $\Sigma(x, x')$, $\mathrm{Sp}A_R$ and $\mathrm{Sp}\Sigma$ can be written in form (22), (23) with the previous $f(\lambda)$ replaced by the new one. The conclusions made in connection with the spectral window method hold for Tikhonov's method as well. In figs. 5c and 6c a comparison of the estimate by Tikhonov's method with $S(x)$ is given; in figs. 7c and 8c a comparison of $A_R^+(x_0, x)$ and $A_R^0(x_0, x)$ with $A_R(x_0, x)$ by Tikhonov's method is given as well.

The case $k = 2$ is considered as in ref. [1]. In this case the variation of eq. (33) over $\hat{S}(x)$ leads to the equation for $\hat{S}(x)$:
$$\alpha \frac{d^4 \hat{S}}{dx^4} + \sum_{n,m} A_n(x) W_{nm} \int dx'\, A_m(x') \hat{S}(x') = \sum_{n,m} A_n(x) W_{nm} Y_m \eqno(40)$$
under suitable boundary conditions. Let us assume
$$\hat{S}\Big|_{x=a} = \frac{d\hat{S}}{dx}\Big|_{x=a} = \frac{d^2\hat{S}}{dx^2}\Big|_{x=b} = \frac{d^3\hat{S}}{dx^3}\Big|_{x=b} = 0.$$
Under these boundary conditions Green's function exists for the equation
$$\frac{d^4\Gamma(x, x')}{dx'^4} = \delta(x - x'). \eqno(41)$$
It can be expressed as
$$\Gamma(x, x') = \begin{cases} (x - a)^2 (3x' - x - 2a)/6, & x \leq x', \\ (x' - a)^2 (3x - x' - 2a)/6, & x > x'. \end{cases}$$
By multiplying eq. (40) by Green's function (41) and integrating the result over $x$ we obtain
$$\hat{S}(x) = \frac{1}{\alpha} \sum_{n,m} U_n(x) W_{nm} \left[ Y_m - \int dx'\, A_m(x') \hat{S}(x') \right] \equiv \sum_n U_n(x) \Lambda_n, \eqno(42)$$
where
$$U_n(x) = \int dx'\, \Gamma(x, x') A_n(x').$$
Doing the same as for the case $k = 0$ we obtain
$$\Lambda = [\alpha K + M]^{-1} Y, \eqno(43)$$
where
$$M_{nm} = \int\!\!\int dx\, dx'\, A_n(x) \Gamma(x, x') A_m(x').$$
Thus in this case the estimate $\hat{S}(x)$ and its characteristics have the form
$$\hat{S}(x) = \sum_{n,m} U_n(x) [\alpha K + M]^{-1}_{nm} Y_m,$$
$$A_R(x, x') = \sum_{n,m} U_n(x) [\alpha K + M]^{-1}_{nm} A_m(x'),$$
$$\mathrm{Sp}A_R = \mathrm{Sp}[\alpha K + M]^{-1} (A|U),$$
$$\Sigma(x, x') = \sum_{n,m,i,j} U_n(x) [\alpha K + M]^{-1}_{nm} K_{mi} [\alpha K + M]^{-1}_{ij} U_j(x'),$$
$$\mathrm{Sp}\Sigma = \mathrm{Sp}[\alpha K + M]^{-1} K [\alpha K + M]^{-1} (U|U). \eqno(44)$$
Expressions similar to eqs. (44) under the boundary conditions
$$\frac{d^2\hat{S}}{dx^2}\Big|_{x=a} = \frac{d^3\hat{S}}{dx^3}\Big|_{x=a} = \frac{d^2\hat{S}}{dx^2}\Big|_{x=b} = \frac{d^3\hat{S}}{dx^3}\Big|_{x=b} = 0$$
can be found in ref. [1]. As can be seen from numerical example 1 considered in ref. [1], the estimate $\hat{S}(x)$ turns out to be very close to the solution for $k = 0$.
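For the $k = 0$ case, eqs. (37)-(39) reduce to a single linear solve; a sketch under the same assumptions as the previous snippets (the value of $\alpha$ is purely illustrative):

```python
# Tikhonov estimate for k = 0, eqs. (37)-(39):
# Lambda = [alpha K + G]^{-1} Y,  S(x) = sum_n A_n(x) Lambda_n.
alpha = 1e-6                     # regularization parameter (to be tuned, section 8)
Kcov = np.diag(Y)                # covariance approximation K_nm = Y_n delta_nm
Lam = np.linalg.solve(alpha * Kcov + G, Y)
S_tik = Lam @ A                  # same as substituting [alpha K + G]^{-1} for G^{-1}
```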


In ref. [11] another approach is studied for $k = 2$. It is founded on the representation of $\hat{S}(x)$ in the following form:
$$\hat{S}(x) = \sum_{i=1}^{J} a_i b_i(x),$$
where the $b_i(x)$ are B-splines. In ref. [2] we used a numerical example from this paper for illustrating unfolding by the maximum entropy method. For both methods the estimates $\hat{S}(x)$ and their statistical errors turned out to be close enough.

6. Dual approach

The idea of this approach can be found in refs. [5,12,13] and consists of the following. Let $A_0(x, x')$ be a resolution function with the desired properties. We introduce a "distance" between $A_0(x, x')$ and $A_R(x, x')$, which, using eq. (9), looks like:
$$\|A_R - A_0\|^2 = \mathrm{Sp}(A_R - A_0)(A_R - A_0)^T = \int dx\, dx' \left[ A_R(x, x') - A_0(x, x') \right]^2 = \int dx \sum_{n,m} a_n(x) G_{nm} a_m(x) - 2 \int dx \sum_n a_n(x) b_n(x) + B, \eqno(45)$$
where
$$b_n(x) = \int dx'\, A_n(x') A_0(x, x'), \qquad B = \int dx\, dx'\, A_0^2(x, x').$$
Now we look for the $a_n(x)$ from the condition of a minimal "distance" between $A_R$ and $A_0$ under the condition that $\mathrm{Sp}\Sigma$ (see eq. (11)) is equal to $\gamma_1$. According to the Lagrange method the problem is reduced to finding a stationary point of the functional
$$\Phi = \|A_R - A_0\|^2 + \lambda \left[ \mathrm{Sp}\Sigma - \gamma_1 \right]. \eqno(46)$$
Varying eq. (46) over $a_n(x)$ and differentiating eq. (46) with respect to $\lambda$ we obtain
$$\sum_m [\lambda K + G]_{nm} a_m(x) = b_n(x), \eqno(47)$$
$$\mathrm{Sp}\Sigma = \gamma_1. \eqno(48)$$
If not all $b_n(x)$ are equal to zero, i.e. the visible spaces of $A_0(x, x')$ and of problem (3) overlap, then eq. (47) has a solution:
$$a_n(x) = \sum_m [\lambda K + G]^{-1}_{nm} b_m(x). \eqno(49)$$
Substituting $a_n(x)$ from eq. (49) into eq. (6) we obtain the estimate $\hat{S}(x)$, from which the expression for $\mathrm{Sp}\Sigma$, containing $\lambda$, can be found. Substitution of this expression into eq. (48) gives the equation for $\lambda$:
$$\mathrm{Sp} [\lambda K + G]^{-1} K [\lambda K + G]^{-1} (b|b) = \gamma_1. \eqno(50)$$
As a result the following expressions are obtained:
$$\hat{S}(x) = \sum_{n,m} b_n(x) [\lambda K + G]^{-1}_{nm} Y_m, \eqno(51)$$
$$A_R(x, x') = \sum_{n,m} b_n(x) [\lambda K + G]^{-1}_{nm} A_m(x'),$$
$$\mathrm{Sp}A_R = \mathrm{Sp} [\lambda K + G]^{-1} (A|b),$$
$$\Sigma(x, x') = \sum_{n,m,i,j} b_n(x) [\lambda K + G]^{-1}_{nm} K_{mi} [\lambda K + G]^{-1}_{ij} b_j(x').$$
Let us take the operator $A_0(x, x')$ to be $A_R^+(x, x')$. Making use of the definition of $b_n(x)$ one can see that in this case $b_n(x) = A_n(x)$, and estimate (51) coincides with the estimate from Tikhonov's regularization method if we identify the Lagrange multiplier $\lambda$ with the regularization parameter $\alpha$. In this way Tikhonov's regularization method, when used for the solution of problem (3), acquires a new interpretation. On the other hand, with the use of the definition of $b_n(x)$ we can write
$$\hat{S}(x) = \int dx'\, A_0(x, x') \sum_{n,m} A_n(x') [\lambda K + G]^{-1}_{nm} Y_m,$$
i.e. $\hat{S}(x)$ is the response of the device $A_0(x, x')$ to the estimate obtained from Tikhonov's method.

The approach stated above has a symmetric wording: we may look for the $a_n(x)$ from the minimum of $\mathrm{Sp}\Sigma$ under the condition $\|A_R - A_0\|^2 = \gamma_2$. A problem of this type is reduced to finding a stationary point of the functional
$$\Phi = \mathrm{Sp}\Sigma + \lambda \left[ \|A_R - A_0\|^2 - \gamma_2 \right]. \eqno(52)$$
A variation of eq. (52) by $a_n(x)$ and differentiation of eq. (52) by $\lambda$ give the equations
$$\sum_m [\lambda^{-1} K + G]_{nm} a_m(x) = b_n(x), \eqno(53)$$
$$\|A_R - A_0\|^2 = \gamma_2. \eqno(54)$$
From the comparison of eqs. (47), (48) and eqs. (53), (54) it becomes evident that the functions $a_m(x)$ in these expressions differ only in the equation for the Lagrange multiplier. So the conclusions made in relation to $\hat{S}(x)$ from eq. (46) are true for $\hat{S}(x)$ from eq. (52) as well.

For numerical illustration we chose $A_0(x, x')$ to be Gauss functions: $A_0(x, x') = N(x', \sigma^2)$. In fig. 5d two estimates (51) are given, with $\sigma = 1.0$ and $\sigma = 0.3$. It is clear from the figure that the systematic error of the estimate with $\sigma = 1.0$ is larger. This is because in this case $b_n(x)$ turns out to be too wide for describing the behavior of $S(x)$. So in the other figures we give the estimate and its characteristics with $\sigma$ noticeably smaller than $\sigma$ in the initial resolution function $K(x', x)$. It should be noted that in this case the estimate differs only slightly from that by Tikhonov's method for $k = 0$. In figs. 7d and 8d for this case a comparison of $A_R(x_0, x)$ with $A_R^+(x_0, x)$ and $A_R^0(x_0, x)$ is given.
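A sketch of estimate (51) with a Gaussian target resolution (our illustration; `sigma0` and `lam` are assumptions, and in practice `lam` would be fixed by eq. (50); the `x`, `dx`, `A`, `G`, `Y`, `Kcov` arrays are those of the earlier snippets):

```python
# Dual-method estimate, eq. (51): S(x) = sum_{n,m} b_n(x) [lam K + G]^{-1}_{nm} Y_m,
# with the target resolution A_0(x, x') taken as a Gaussian of width sigma0.
sigma0 = 0.3
lam = 1e-6

A0 = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma0**2)) \
     / (np.sqrt(2 * np.pi) * sigma0)              # A_0(x_i, x_j) on the grid

b = (A @ A0) * dx                                 # b_n(x) = int dx' A_n(x') A_0(x, x')
S_dual = np.linalg.solve(lam * Kcov + G, Y) @ b   # estimate (51) on the grid
```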


7. On the optimality of Tikhonov's method

When there are several unfolding methods the question arises: which of them is the best according to some criterion? Let us consider as the criterion the mean distance between the visible component $S_A(x)$ and the estimate $\hat{S}(x)$ being searched for:
$$\overline{D_A} = \left\langle \int dx \left[ S_A(x) - \hat{S}(x) \right]^2 \right\rangle = \int dx \left[ S_A(x) - A_R S \right]^2 + \mathrm{Sp}\Sigma. \eqno(55)$$
The value $\overline{D_A}$ is determined by a systematic error (the first term in eq. (55)) and a statistical error (the second term in eq. (55)). Minimization of eq. (55) over $A_R$ would lead to an optimal estimate $\hat{S}(x)$. However, the systematic error contains the unknown function $S_A(x)$, so this problem cannot be solved. It is possible, though, to find the optimal $A_R$ under weaker demands: we will find expressions majorizing $\overline{D_A}$. Making use of the expressions
$$A_R^+ = \sum_{n,m} A_n(x) G^{-1}_{nm} A_m(x'), \quad S(x) = S_A(x) + S_A^{\perp}(x), \quad A_R^+ S = A_R^+ S_A = S_A,$$
$$A_R(x, x') = \sum_n a_n(x) A_n(x'), \quad \mathrm{Sp}\Sigma = \sum_{n,m} \int dx\, a_n(x) K_{nm} a_m(x),$$
obtained earlier, we can write
$$\overline{D_A} = \int dx \left[ S_A - A_R S \right]^2 + \mathrm{Sp}\Sigma = \int dx \left[ (A_R^+ - A_R) S_A \right]^2 + \mathrm{Sp}\Sigma \leq \|A_R^+ - A_R\|^2 \|S_A\|^2 + \mathrm{Sp}\Sigma \equiv M(A_R),$$
where
$$\|A_R^+ - A_R\|^2 = \int dx\, dx' \left[ A_R^+(x, x') - A_R(x, x') \right]^2$$
is the operator norm which is in agreement with the norm
$$\|S_A\|^2 = \int dx \left[ S_A(x) \right]^2.$$
Now we find the expression for $A_R$ from the condition of the minimum of $M(A_R)$. Substituting the explicit expressions for $A_R^+$, $A_R$ and $\mathrm{Sp}\Sigma$ into $M(A_R)$ we obtain
$$M(A_R) = \left[ N - 2 \sum_n \int dx\, a_n(x) A_n(x) + \sum_{n,m} \int dx\, a_n(x) G_{nm} a_m(x) \right] \|S_A\|^2 + \sum_{n,m} \int dx\, a_n(x) K_{nm} a_m(x). \eqno(56)$$
Making a variation of eq. (56) by $a_n(x)$ we have
$$a_n(x) = \sum_m \left[ \frac{K}{\|S_A\|^2} + G \right]^{-1}_{nm} A_m(x). \eqno(57)$$
Substituting the expression for $a_n(x)$ from eq. (57) into the expression for $A_R$ we obtain
$$A_R(x, x') = \sum_{n,m} A_n(x) \left[ \frac{K}{\|S_A\|^2} + G \right]^{-1}_{nm} A_m(x'). \eqno(58)$$
From the comparison of eq. (58) with $A_R$ from eq. (30) one can see that the $A_R$ minimizing $M(A_R)$ is the $A_R$ from Tikhonov's method for $k = 0$ with $\alpha = 1/\|S_A\|^2$.

V..B. Anykeyev et a L / Unfolding methods

364

I I A ~ - - A R I I 2 is the characteristic of the resolution (systematical error); and

~.

=

O(S) = E (Yn-

Yn(S))Wnm(Ym- "Yrn(S))

(59)

n,rn

,\

).x\

1.

03

SpZ" 106 Fig. 10. Example 2. Notations as in fig. 9.

The question remains whether this property of minimum majorant for Tikhonov's method stays the same for other values of a. It is interesting to compare the values of M ( A R ) for different methods when the statistical error S p Z remains the same. Let us suppose S p Z = ' y . The minimum of M ( A R ) stands for the minimum of l] A~ - A R II 2 because the value [[ SAI[ 2 is constant. So the Art providing this minimum is a stationary point of the Lagrange fm.ctional

* =lla -ARI12 + X[SpZBut in the previous section, when we analysed functional (46), it was shown that it was the A R for Tikhonov's method when k = 0. Numerical calculations presented in figs. 9 and 10 illustrate this statement. So Tikhonov's method is optimal for the criterion: minimal majorant of systematical error when the statistical error value is fixed.

is the characteristic of the degree of agreement of S ( x ) with the experimental data. Plots of these values are given in figs. 11 and 12 ~2 F r o m these figures it is clear that for the considered examples only the dependence of S p Z on a has a singularity in the region of the minimal value of D'~. In the majority of the considered methods S p Z strongly increased when a is close to the optimal value of the regularization parameter. This fact can be used for the choice of a. The choice of the regularization parameter from the discrepancy principle is well known [14]. In our consideration this means that the value of a was chosen from the condition O ( S ) = N. It is clear from figs. 11 and 12 that, except for the variant in fig. l l a (the spectral window method), the discrepancy principle provides an a that is larger than the optimal value of a. The estimates S(x), corresponding to this principle of choice of a, are given in figs. 13 and 14. It is clear from these plots, that the estimates S ( x ) , with exception of the estimate by the spectral window method (fig. 13a), are close to the estimates S ( x ) for the optimal value of a. The following interpretation can be given for the discrepancy principle. For all methods except for the dual method, Y , , ( , ~ ) = ( G G - 1 Y ) , , ~ Y , , when a---,0. Thus the fixation of a from the condition O ( S ) = N actually implies the demand of an average difference of Y n - Yn(~) from zero to be equal to the measurement error of Y~ because of the regularization. In ref. [11] where the solution being searched for as a composition of B-splines has been proposed the method of choosing the regularisation parameter. In our approach this method can be interpreted as the choice of the parameter from the condition Y~J()%, a ) -%
where N o is the number of statistically significant coefficients in the spectral representation of the estimate.
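As a numerical aside (ours, not the paper's), the discrepancy-principle condition $\Phi(\hat{S}) = N$ can be located by scanning the regularization parameter; a sketch for the Tikhonov case, reusing the arrays of the earlier snippets:

```python
# Discrepancy principle: pick alpha so that the residual functional (59),
# Phi(S) = (Y - Y_hat)^T W (Y - Y_hat), equals N, with Y_hat_n = sum_m G_nm Lam_m.
W = np.linalg.inv(Kcov)

def phi(alpha):
    Lam = np.linalg.solve(alpha * Kcov + G, Y)
    r = Y - G @ Lam
    return r @ W @ r

alphas = np.logspace(-12, 0, 200)
vals = np.array([phi(al) for al in alphas])
alpha_star = alphas[np.argmin(np.abs(vals - len(Y)))]
print("discrepancy-principle alpha ~", alpha_star)
```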

9. On the use of linear estimates $\hat{S}(x)$ for parametric analysis

When a parametric model exists for $S(x)$, $S(x) = f(x|a)$, where $a$ is the vector of parameters, the problem arises of how to estimate $a$ and how to check the hypothesis that $f(x|a)$ does not contradict the experimental data $Y_n$.


Fig. 11. Example 1. Dependences of $\mathrm{Sp}A_R$ (solid line), $\mathrm{Sp}\Sigma$ (dashed line), $\Phi(\hat{S})$ (dotted line) and $\overline{D_A}$ (dashed-dotted line) on the regularization parameter. A cross on the plots marks the point $\Phi(\hat{S}) = N$, while an arrow indicates the corresponding value of the regularization parameter. (a) Spectral window method. (b) Iterative method. (c) Tikhonov's method. (d) Dual method.

This can be solved as follows. First the inverse problem is solved and the estimate $\hat{S}(x)$ is found. Then $M$ independent statistics $\hat{Z}_m$ are calculated:
$$\hat{Z}_m = \int dx\, \phi_m(x) \hat{S}(x), \eqno(60)$$
where $\phi_m(x)$ is a set of linearly independent functions and $M \leq N$. For example, the $\phi_m(x)$ can be the characteristic functions of the $m$th bin of the histogram [15]. The estimate for the parameters $a$ is looked for from the minimum of the functional
$$\Phi = \sum_{n,m} \left[ \hat{Z}_n - (\phi_n | f(x|a)) \right] D(\hat{Z})^{-1}_{nm} \left[ \hat{Z}_m - (\phi_m | f(x|a)) \right], \eqno(61)$$
where $D(\hat{Z})$ is the error matrix of the statistics $\hat{Z}_m$. We consider the estimate $\hat{S}(x)$ to have the form
$$\hat{S}(x) = \sum_{n,m} A_n(x) \tilde{G}^{-1}_{nm} Y_m. \eqno(62)$$
Substituting eq. (62) into eq. (60) we find the expression for $\langle \hat{Z}_m \rangle$:
$$\langle \hat{Z}_m \rangle = \sum_{n,l} (\phi_m | A_n) \tilde{G}^{-1}_{nl} (A_l | S). \eqno(63)$$
From eq. (63) it is clear that $\langle \hat{Z}_m \rangle$ differs from $Z_m = (\phi_m | S)$ even if $\phi_m = A_m$ but $\tilde{G}^{-1} \neq G^{-1}$ (i.e. when one of the regularization methods was used). This, generally speaking, can lead to a bias of the estimate for $a$ even when $N \to \infty$. From our point of view the consistent approach when making a parametric analysis is to give up using $\hat{S}(x)$ and to return to the direct measurements $Y_n$. This means that the estimate of the vector of parameters $a$ should be searched for following the LSM ($\chi^2$ method) from the minimum of the functional
$$\Phi = \sum_{n,m} \left( Y_n - (A_n | f(x|a)) \right) K^{-1}_{nm} \left( Y_m - (A_m | f(x|a)) \right).$$


10. Conclusion

The problem of reconstructing distributions measured by detectors with finite resolution and an efficiency not equal to unity (i.e. the unfolding problem) was studied in this article from the point of view of the solution of the generalized moment problem in the class of linear estimates.


i

. . . .

0.27 oC -10-5 0.50 Fig. 12. Example 2. Notations as in fig. 11.

5.50.00

0.20 A.IO.5

0.40

V.B. Anykeyev et aL / Unfolding methods

2.0-

2.0.

1.5 ¸

1.5-

1.0 ¸

1.0-

367

0.5

0'51 0.0

0.0 4.

10.

X

~

1 .

2.0"

2.0.-7>-

'



¢1.5

1.5-

1.0

1.0-

0.5-

0.5-

0.0.

0.0-

10.

/+\

¼.

1;.

,

16.

X d)

16.

X x Fig. 13. Example 1. Comparison of the estimates obtained from the condition ~ ( S ) = N (solid fine) with those minimizing D'~ (dashed line). For illustration on some points values of ,~(x, x) 1/2 are plotted as errors. (a) Spectral window method, SpA R = 6.00, Sp~ = 2.56 X 105. (b) Iterative method, SpA R = 5.01, SpZ = 6.07 X 10 4. (c) Tikhonov's method, SpA R = 5.46, Sp~ = 9.40 X 104. (d) Dual method, SpA R = 5.42, Sp~, = 1.02 x 105.

General properties of estimates were considered: the bias, which is characterised b y the residual a p p a r a t u s function a n d the noise c o m p o n e n t which is characterized by the error operator. First of all, the bias arises from the fact t h a t the finite n u m b e r of measurem e n t s contains i n f o r m a t i o n only a b o u t the visible comp o n e n t of the function being measured - its projection on the subspace of the a p p a r a t u s functions. T h e last squares m e t h o d (LSM) provides the estimate of the visible c o m p o n e n t to be u n b i a s e d b u t the noise is unacceptable. In order to suppress this noise c o m p o n e n t the following regularization m e t h o d s were studied in detail: the spectral window method, the iterative method,

T i k h o n o v ' s m e t h o d a n d the dual method. The difference of the residual a p p a r a t u s functions for these m e t h o d s from the L S M one provides a n additional bias of the estimates. T h e residual a p p a r a t u s functions for T i k h o n o v ' s m e t h o d p r o v e d to have minimal difference from those of LSM. F o r the m e t h o d s considered the expressions for the residual a p p a r a t u s functions AR(X, x ' ) , the error operator 2 ( x , x ' ) , SpA R, a n d S p 2 were o b t a i n e d a n d their i n t e r p r e t a t i o n is given in terms of a spectral representation. The detailed analysis of two numerical examples proves that o p t i m a l estimates by these regularization m e t h o d s are similar enough.

V.B. Anykeyev et al. / Unfolding methods

368

t

-'

principle of e n t r o p y m a x i m u m is used [2] (which automatically leads to a positive estimate), etc. F o r the p r o b l e m s w h e n the resolution is b a d a n d the experimental i n f o r m a t i o n is p o o r [7] the use of the fact that S ( x ) is positive permits one to o b t a i n estimates which are better t h a n those from linear methods. However, nonlinearity m a k e s the i n t e r p r e t a t i o n of estimate S ( x ) more complicated - it is difficult to calculate correct expression for Z ( x , x ' ) a n d the o p e r a t o r AR(X, x ' ) does not exist.

a)

% 2.

Acknowledgement o

olo

o:s

1:0

x

The authors express their gratitude to V.V. A m m o soy a n d V.F. K u r j a k i n for fruitful discussions.

Appendix >:_

On the choice of the histogram bin size 2 The bin size affects the n u m b e r of the a p p a r a t u s functions A , ( x ) , entering into expression for S(x), a n d their " w i d t h " because by definition

Ao(x) = f .... dx' K(x', x)e(x',

1

x).

Xn

o 0.o

o.'5

×

1'.0

Fig. 14. Example 2. Notations as in fig. 13. Estimates obtained from the condition ~(£) = N for Tikhonov's and dual methods are indistinguishable on the plot from those minimizing D-'/. (a) Spectral window method, SPAR= 4.00, SpZ= 9.05)<104. (b) Iterative method, SpA R = 5.56, SpN = 1.58 × 105.

So the greater h , = x , + a - x n is, the " w i d e r " is A,(x). It is evident that the reconstruction of the fine structure of S ( x ) is impossible with " w i d e " A,(x). Also the increase of their n u m b e r provides their strong overlap a n d leads to Xmin ---' 0, which consequently leads to an increase of the noise c o m p o n e n t of S(x). T h e question arises how the " w i d t h " of A , ( x ) depends o n the value

h n. To o b t a i n the results in a n analytical form we choose

E ( x ' , x ) = 1, K(x, x ' ) = N ( x ' , o2). T h e n for the bin in the central region of the histogram

f d x A.(x)= f~i[°+'dx'fdx N(x', 0 2) = h.. W h e n solving practical p r o b l e m s the optimal value of the regularization p a r a m e t e r must be chosen from the compromise between the bias of the estimate a n d its noise component. M e t h o d s of choosing the regularization p a r a m e t e r we know can be used only to define some region near the o p t i m u m . F r o m our p o i n t of view the final choice of the solution can be d o n e from the analysis of estimates o b t a i n e d by several methods. Numerical illustrations in this article were p r o d u c e d with the help of the software package SLIP (solution of linear inverse problem) which is u n d e r development. N o n l i n e a r estimates S ( x ) arise w h e n the n o n l i n e a r iterative methods [9] are used, w h e n a priori information is applied (for example, S ( x ) > 0 [7]), w h e n the

F o r the n o r m a l i z e d function .,in(x) = A . ( x ) / h . we have

, = fax xL(x)

= (xo+, + x , ) / 2 ,

D = f d x (x - j,)~.4.(x) - h. ) Jx. f ....

J

)+(x-I't)]2N(x',°2)

= 0 2 + h2./12. So w h e n we choose h2n = a 2 the " w i d t h " of the functions A . ( x ) actually is defined by the initial resolution K ( x ' , x).

Gauss-Markov theorem for the problem of unfolding

Sometimes there are reasons to consider that $S_A$ belongs to the subspace spanned by a set of functions $C_i(x)$:
$$S_A(x) = \sum_{i=1}^{I} a_i C_i(x), \quad I < N.$$
In this case we look for the estimate $\hat{S}(x)$ in the form
$$\hat{S}(x) = \sum_{i,n} C_i(x) M_{in} Y_n, \eqno(A.1)$$
where $M$ is an unknown matrix. The condition for the estimate $\hat{S}(x)$ to be unbiased is
$$\langle \hat{S} \rangle = \sum_{i,n} C_i(x) M_{in} \int dx'\, A_n(x') \sum_j a_j C_j(x') = \sum_{j,i,n} a_j C_i(x) M_{in} (A|C)_{nj} = \sum_j a_j C_j(x)$$
and leads to the equation
$$M (A|C) = I. \eqno(A.2)$$
We try to find the matrix $M$ from the minimum of $\mathrm{Sp}\Sigma = \mathrm{Sp}\, M K M^T (C|C)$ under the condition of unbiasedness (A.2). In accordance with the Lagrange method the problem is reduced to finding the stationary point of the functional
$$\Phi = \mathrm{Sp}\, M K M^T (C|C) + \mathrm{Sp}\, \Lambda \left[ M(A|C) - I \right],$$
where $\Lambda$ is a square matrix of Lagrange multipliers of dimension $I$. Equating $d\Phi/dM_{nk}$ to zero we obtain the expression for $M$ via $\Lambda$:
$$M = -\tfrac{1}{2} (C|C)^{-1} \Lambda^T (C|A) K^{-1}. \eqno(A.3)$$
The substitution of this expression into eq. (A.2) permits us to define $\Lambda^T$:
$$\Lambda^T = -2 (C|C) \left[ (C|A) K^{-1} (A|C) \right]^{-1}.$$
Substituting the expression for $\Lambda^T$ into (A.3) we obtain
$$M = \left[ (C|A) K^{-1} (A|C) \right]^{-1} (C|A) K^{-1}.$$
It is clear that estimate (A.1) with the expression obtained for $M$ is the LSM estimate for $S(x)$ when it is searched for in the form
$$\hat{S}(x) = \sum_{i=1}^{I} \hat{a}_i C_i(x)$$
[8]. It must be noted that for a normal distribution of the $\epsilon_n$ the value $\Phi(\hat{S})$ (eq. (59)) is distributed according to a $\chi^2_{N-I}$ distribution.

References

[1] V.P. Zhigunov, Nucl. Instr. and Meth. 216 (1983) 183.
[2] V.P. Zhigunov et al., Nucl. Instr. and Meth. A273 (1988) 362.
[3] A.N. Diddens et al., Nucl. Instr. and Meth. 178 (1980) 27.
[4] B.W. Rust and W.R. Burrus, Mathematical Programming and the Numerical Solution of Linear Equations (Elsevier, New York, 1972).
[5] Ju.P. Pytjev, Matematicheskij sbornik 120 (2) (1983) 240.
[6] M. Bertero, INFN/TC-88/2 (1988).
[7] E.A. Belogorlov and V.P. Zhigunov, Nucl. Instr. and Meth. A235 (1984) 146.
[8] V.B. Anykeyev et al., Preprint IHEP 89-15 (Serpukhov, 1989), in English.
[9] G.I. Marchuk, Methods of Numerical Mathematics (Springer, Berlin, 1975); S.F. Giljazov, Methods of Solution of Linear Ill-posed Problems (MSU, Moscow, 1987).
[10] A.N. Tikhonov and V.Ja. Arsenin, Methods of Solution of Ill-posed Problems (Nauka, Moscow, 1979).
[11] V. Blobel, Proc. 1984 CERN School of Computing, Aiguablava, Spain, CERN 85-09 (1984) p. 88.
[12] W.R. Burrus and V.V. Verbinsky, Nucl. Instr. and Meth. 67 (1969) 181.
[13] G. Backus and F. Gilbert, Philos. Trans. Roy. Soc. 266 (1970) 123.
[14] V.A. Morozov, Regular Methods of Solution of Ill-posed Problems (Nauka, Moscow, 1987).
[15] J.V. Allaby et al., CERN-EP/89-79 (1989).