
Computers and Chemical Engineering 29 (2005) 941–948

An iterative method for model parameter identification. 1. Incorrect problem

Chr. Boyadjiev*, E. Dimitrova

Bulgarian Academy of Sciences, Institute of Chemical Engineering, "Acad. G. Bontchev" Str., Bl. 103, 1113 Sofia, Bulgaria

Received 17 July 2002; received in revised form 17 February 2004; accepted 3 August 2004. Available online 24 December 2004.

* Corresponding author. Tel.: +359 2 704 154; fax: +359 2 734 936. E-mail address: [email protected] (Chr. Boyadjiev).

Abstract

A method for model parameter identification as an incorrect (ill-posed) inverse problem solution has been proposed. An iterative regularization procedure and a numerical algorithm have been developed for the case of two-parameter models. The method has been tested in two cases, concerning linear and non-linear relationships between the objective function and the parameters. A random number generator has been employed to produce the "experimental" data used in the parameter identification procedure. The effects of the initial approximation on both the identified parameters and the regularization parameter have been investigated. A statistical approach for the analysis of the model adequacy has been proposed.

© 2004 Elsevier Ltd. All rights reserved.

Keywords: Model parameter identification; Incorrect inverse problems; Iterative method regularization; Model adequacy

1. Introduction

Mathematical modeling relevant to heat and mass transfer operations is based on the development of adequate mathematical structures employing the corresponding physical mechanisms. The model build-up requires values of coefficients that can be obtained only through processing of experimental data. Very often such processing is based on the solution of inverse problems, and especially of inverse identification problems. Very often this problem is incorrect (ill-posed), which implies a solution sensitivity with respect to the errors of the experimental data used (see, e.g. Beck & Arnold, 1977; Beck, Blackwell, & St. Clair, 1985; Glasko, 1994; Tikhonov & Arsenin, 1986; Tikhonov, Kal'ner, & Glasko, 1990).

Solutions of parameter identification problems can be obtained through minimization of variances (the least square function). This implies that the functional has to satisfy a condition of a certain minimal difference between the calculated values and the experimental data (see, e.g. Alifanov, 1994; Alifanov, Artiukhin, & Rumiantsev, 1988; Banks & Kunsch, 1989; Boyadjiev, 1993; Brakham, 1989; Chavent, 1973, 1980; Tikhonov et al., 1990). In some cases, additional a priori information about the functional minima exists, which allows different methods of solution to be developed, such as selection, quasi-solution, and substitution of the equation (see, e.g. Boyadjiev, 1993; Tikhonov & Arsenin, 1986; Tikhonov et al., 1990). In many cases, the inverse problems are essentially incorrect and regularization procedures are required (see, e.g. Alifanov et al., 1988; Boyadjiev, 1993; Tikhonov et al., 1990). Regularization implies the use of variational or iterative approaches. After the regularization, gradient methods are also employed for the minima search (see, e.g. Alifanov, 1974, 1977, 1983, 1994; Alifanov et al., 1988; Alifanov & Rumiantsev, 1980).

Usually, an iterative procedure stops when the iterative solution begins to diverge from the exact solution. At this moment, the number of the last iteration is accepted as a regularization parameter of the solution. In many practical cases, this approach leads to large differences between the iterative solution and the exact one. The aim of the present paper is to present a new iterative algorithm whose main idea lies in the minimization of the difference between the iterative and the exact solutions.


2. Problem formulation

Let us consider a model:

y = f(x, b),    (1)

where f is an objective function that can be expressed analytically, numerically, or through an operator (algorithm). Here, x = (x_1, …, x_m) is the vector of the independent variables, while b = (b_1, …, b_J) is the vector of the parameters. The parameters of the model (1) can be determined through processing of N experimental values of the objective function ŷ = (ŷ_1, …, ŷ_N). This requires a least square function to be defined:

Q(b) = Σ_{n=1}^{N} (y_n − ŷ_n)²,    (2)

where y_n = f(x_n, b) are the calculated values of the objective function (1), and x_n = (x_{1n}, …, x_{mn}) are the values of the independent variables obtained under different experimental conditions (regimes), n = 1, …, N. The parameters of the model (1) are determined by the condition that Q(b) reaches a minimum with respect to the parameters b = (b_1, …, b_J). Frequently, the determination of b is associated with many difficulties due to the problem incorrectness, coming from the solution sensitivity induced by the experimental errors, associated mainly with the determination of ŷ. The problem can be avoided if regularization methods are used. The regularization transforms an incorrect problem into a conditionally correct one. In the particular case considered here, the least square function minima can be obtained by variational or iterative methods (see Alifanov, 1994; Alifanov et al., 1988; Alifanov & Rumiantsev, 1980; Banks & Kunsch, 1989; Beck & Arnold, 1977; Glasko, 1994). When an iterative method is used to find the minimum of Q(b), a so-called iterative regularization procedure is required (see, e.g. Alifanov et al., 1988; Boyadjiev, 1993; Tikhonov et al., 1990).
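To fix notation for the numerical experiments below, the least square function (2) takes only a few lines of Python. This is a minimal sketch; the function name q and the calling convention for the user-supplied model f are illustrative choices, not from the paper:

```python
import numpy as np

def q(b, f, x, y_exp):
    """Least square function (2): sum of squared differences between
    the calculated values y_n = f(x_n, b) and the experimental ones."""
    y_calc = np.array([f(xn, b) for xn in x])
    return np.sum((y_calc - y_exp) ** 2)
```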

3. Incorrectness of the inverse problem

Let us consider the one-parameter model:

y = 1 − exp(−bx),    (3)

where y is the objective function, x is an independent variable and b is a parameter. Fig. 1 shows the relationship between the objective function and the model parameter at a constant value of the independent variable, x = x_0. Relationships of this type are typical for a number of heat and mass transfer process models.

Fig. 1. Objective function y for different values of the model parameter b at x = x_0 = const.

The plot in Fig. 1 permits the objective function y_0 to be obtained if the parameter b_0 is known; that is the direct problem solution. An inverse problem, however, looks for the value of the parameter b_0 when an experimental value of the objective function y_0 is known.

Consider Δy as the experimental error of the objective function. Fig. 1 shows that the error of the parameter identification depends on the magnitude of the objective function. At small values of the objective function the errors Δb_1 are small, which indicates that the inverse identification problem is a correct one. However, if the objective function values are large, the corresponding error Δb_2 is large too and the inverse problem is incorrect. At extremely large objective function values, enormous errors Δb_3 occur, which classify the inverse identification problem as essentially incorrect. The results shown in Fig. 1 indicate that the incorrectness of the inverse problem is not a result of the error size; the cause is the parameter sensitivity with respect to the experimental errors of the objective function.

4. Incorrectness of the least square function method

Let us consider the two-parameter model

y = 1 − b_1 exp(−b_2 x),    (4)

where b̄_1 = 1 and b̄_2 = 5 are the exact values of the parameters. The parameter identification problem will be solved with the help of artificial experimental data provided by a random number generator:

ŷ_n^{(1)} = (0.95 + 0.1 A_n) y_n,    ŷ_n^{(2)} = (0.9 + 0.2 A_n) y_n,    (5)

where A_n are random numbers within the interval [0, 1]. The values of y_n are obtained from the model (4) for x = 0.01n (n = 1, …, 100). The maximal relative errors of these "experimental" data (Δŷ) are ±5% and ±10%. The values of y_n, ŷ_n^{(1)} and ŷ_n^{(2)} are shown in Fig. 2. The plot shows that for 0 < x < 0.3 the inverse identification problem is correct, while for 0.31 < x < 0.65 it is incorrect. The problem becomes essentially incorrect for 0.66 < x < 1.
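The "experimental" data (5) are easy to reproduce. A short sketch, assuming NumPy's random generator in place of the (unspecified) generator used in the paper; the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)       # any source of A_n in [0, 1]
n = np.arange(1, 101)
x = 0.01 * n                         # x_n = 0.01 n, n = 1, ..., 100
y = 1.0 - np.exp(-5.0 * x)           # model (4) with exact b1 = 1, b2 = 5

a = rng.random(100)                  # random numbers A_n
y_exp_5 = (0.95 + 0.1 * a) * y       # maximal relative error +/-5%, Eq. (5)
y_exp_10 = (0.90 + 0.2 * a) * y      # maximal relative error +/-10%, Eq. (5)
```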


Fig. 2. Mathematical model and "experimental" data: [*] ŷ_n^{(1)}, values of y with a maximal "experimental" error of ±5%; [●] ŷ_n^{(2)}, values of y with a maximal "experimental" error of ±10%; [—] y = 1 − exp(−5x).


Fig. 3. The horizontals of the least square function Q (n = 1–30; Δŷ [%] = ±5); [●] b = [1; 5].

Fig. 4. The horizontals of the least square function Q (n = 31–65; Δŷ [%] = ±5); [●] b = [1; 5].

Fig. 5. The horizontals of the least square function Q (n = 66–100; Δŷ [%] = ±5); [●] b = [1; 5].

In the case of ±5% relative experimental errors, the least square function (2) yields the horizontal (contour) lines shown in Figs. 3–5 for different intervals of variation of x, for which the inverse problem is correct (Fig. 3), incorrect (Fig. 4) or essentially incorrect (Fig. 5). These results show that when the difference between the exact parameter value and the determined one (at the point of the function minimum) is very small, the least square method is correct (see Fig. 3). On the other hand, in cases of remarkably large differences the inverse problem is incorrect (see Fig. 4). In the extreme case, when the least square function has no minimum, the inverse problem is essentially incorrect (see Fig. 5). These results show that in cases of incorrect inverse problems, minimization procedures of least square functions do not provide solutions; additional conditions are required.

Consider a gradient method for the minimum search. If the iterative procedure is convergent, at each step the difference between the iterative solution and the exact one decreases as the procedure moves toward the least squares minimum. However, there is a step after which this difference begins to increase, and if it is sufficiently small at that point the iterative procedure should stop.

5. Regularization of the iterative method for parameter identification

Different iterative methods for minima search (the gradient ones too) are stable with respect to the experimental errors of the objective functions. However, after a certain number of iterations the difference between the iterative and the exact values of the parameters starts to increase. The latter requires this difference to be controlled at every step.


The following explanation stresses a method with a preliminarily defined accuracy of the parameter identification. The minimum of Q(b) is determined by a gradient method. During the search for the minimum, the difference between the iterative and the exact parameter values is controlled at each iteration step. Let us assume that the iteration procedure starts with an initial approximation b^{(0)} = (b_1^{(0)}, …, b_J^{(0)}). The values b_i = (b_{1i}, …, b_{Ji}), where i is the iteration number, result from the conditions imposed by a movement along the anti-gradient of the function Q(b):

b_{ji} = b_{j(i−1)} − β_{(i−1)} R_{j(i−1)},   j = 1, …, J,    (6)

where

R_{j(i−1)} = (∂Q/∂b_j)_{(i−1)} / [Σ_{j=1}^{J} (∂Q/∂b_j)²_{(i−1)}]^{1/2},   j = 1, …, J.    (7)

Here, β_i is the iteration step and β_0 = 10⁻² (an arbitrary small step value). The gradient of Q(b) gives:

(∂Q/∂b_j)_{(i−1)} = 2 Σ_{n=1}^{N} [f(x_n, b_{(i−1)}) − ŷ_n] (∂f(x_n, b)/∂b_j)_{(i−1)},    (8)

where ∂f/∂b_j has to be calculated analytically or numerically. Each iteration step is successful if two conditions are satisfied:

Q_{i−1} − Q_i = β_{(i−1)} Σ_{n=1}^{N} { [2 f(x_n, b_{(i−1)}) − 2 ŷ_n − β_{(i−1)} Σ_{j=1}^{J} R_j (∂f/∂b_j)_{(i−1)}] Σ_{j=1}^{J} R_j (∂f/∂b_j)_{(i−1)} } ≥ 0,    (9a)

(b_{j(i−1)} − b̄_j)² − (b_{ji} − b̄_j)² = β_{(i−1)} [2(b_{j(i−1)} − b̄_j) − β_{(i−1)} R_{j(i−1)}] R_{j(i−1)} ≥ 0,   j = 1, …, J.    (9b)

The first condition (9a) indicates that the iterative solution (b_i) approaches the solution at the function minimum (b*). The second condition (9b) controls the difference between the iterative solution (b_i) and the exact one (b̄). The divergence between them is an effect of the problem incorrectness, b̄ ≠ b* (see Figs. 3–5). The second condition (9b) leads to

2|b_{j(i−1)} − b̄_j| ≥ β_i |R_{j(i−1)}|,   j = 1, …, J,    (10)

where the values of b̄_j (j = 1, …, J) are unknown. They can be replaced by

|b_{j(i−1)} − b̄_j| = Δ_{j(i−1)}^{(0)} = |Δ_j^{(0)} b_{j(i−1)}|,   j = 1, …, J,    (11)

if the accuracy of the parameter identification is preliminarily defined (the desired accuracy). The desired accuracy Δ_{j(i−1)}^{(0)} is obtained at each step through the use of the initial value Δ_j^{(0)}:

Δ_j^{(0)} = γ |b_{j1} − b_{j0}| / |b_{j1}|,   j = 1, …, J.    (12)

The parameter γ is related to the desired accuracy (for example, γ = 0.9) and plays the role of a regularization parameter. From Eq. (6), it follows that:

β_i |R_{j(i−1)}| = |b_{j(i−1)} − b_{ji}|,   j = 1, …, J.    (13)

Thus, the second condition (9b) can be expressed as

2Δ_{j(i−1)}^{(0)} ≥ |b_{j(i−1)} − b_{ji}|,   j = 1, …, J.    (14)

The condition indicating the point where the iterative solution moves away from the exact one is:

|b_{j(i−1)} − b_{ji}| > 2Δ_{j(i−1)}^{(0)},   j = 1, …, J.    (15)

Hence, condition (15) permits a regularization of the parameter identification problem that leads to sufficiently exact values of the model parameters.

6. Iteration step determination and iteration stop criterion

The step can be modified at each iteration point. In the case of two (or three) successful iterations (Q_i − Q_{i−1} < 0, Q_{i−1} − Q_{i−2} < 0), the step should be enlarged twice:

β_{i+1} = 2β_i.    (16)

However, if Q_i − Q_{i−1} < 0 and Q_{i−1} − Q_{i−2} ≥ 0, the step should be kept unchanged:

β_{i+1} = β_i.    (17)

If the step is unsuccessful (Q_i − Q_{i−1} ≥ 0), it should be reduced twice:

β_{i+1} = β_i / 2.    (18)

The step should also be reduced when the iteration is unsuccessful and there is non-convergence towards the exact parameter values, i.e. when condition (15) is satisfied. The procedure stops after an unsuccessful iteration if the last step is smaller than the predefined accuracy:

|b_{j(i−1)} − b_{ji}| < Δ_{j(i−1)}^{(0)}.    (19)


In cases when the iterative procedure converges slowly, an increase of γ according to (14) improves the convergence.
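The step rules (16)–(18) map directly onto code. A minimal sketch, assuming the values Q_0, Q_1, … are kept in a list; the helper name next_step is illustrative:

```python
def next_step(beta, q_hist):
    """Iteration step control of Eqs. (16)-(18), given the history of Q."""
    if len(q_hist) >= 2 and q_hist[-1] >= q_hist[-2]:
        return 0.5 * beta   # unsuccessful step: halve, Eq. (18)
    if len(q_hist) >= 3 and q_hist[-2] < q_hist[-3]:
        return 2.0 * beta   # two successful steps in a row: double, Eq. (16)
    return beta             # a single successful step: keep, Eq. (17)
```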

7. Iterative algorithm

The results obtained permit an algorithm for the solution of the inverse identification problem to be built up (a condensed code sketch follows the list):

1. Put β_0 = 10⁻², γ = 0.9, b_{j0} = b_j^{(0)} (initial parameter values), j = 1, …, J.
2. Put i = 1.
3. Calculate y_{n(i−1)} = f(x_n, b_{i−1}), n = 1, …, N.
4. Calculate (∂f/∂b_j)_{(i−1)}, j = 1, …, J.
5. Calculate Q_{i−1} = Σ_{n=1}^{N} (y_{n(i−1)} − ŷ_n)².
6. Calculate (∂Q/∂b_j)_{(i−1)} = 2 Σ_{n=1}^{N} [y_{n(i−1)} − ŷ_n] (∂f/∂b_j)_{(i−1)}, j = 1, …, J.
7. Check if i = 1? Yes: go to 8; no: go to 10.
8. Calculate the parameters and accuracies: b_{ji} = b_{j(i−1)} − β_{(i−1)} R_{j(i−1)}, Δ_{ji}^{(0)} = γ (|b_{j1} − b_{j0}| / |b_{j1}|) |b_{ji}|, Δ_{ji} = |b_{ji} − b_{j(i−1)}|, j = 1, …, J.
9. Put i = i + 1 and go back to 3.
10. Check if Q_{i−1} − Q_{i−2} > 0? Yes: go to 11; no: go to 13.
11. Check if Δ_{j(i−1)} < Δ_{j(i−1)}^{(0)}? Yes: go to 17; no: go to 12.
12. Put β_{(i−1)} = (1/2)β_{(i−1)} and go back to 8.
13. Check if Δ_{j(i−1)} > 2Δ_{j(i−1)}^{(0)}? Yes: go to 12; no: go to 14.
14. Check if Q_{(i−2)} − Q_{(i−3)} > 0? Yes: go to 15; no: go to 16.
15. Put β_{i−1} = β_{i−2} and go back to 9.
16. Put β_{i−1} = 2β_{i−2} and go back to 9.
17. Stop.
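The step-numbered algorithm above can be condensed into the following Python sketch. It is a simplified reading, not a line-by-line transcription: the success test uses Q directly rather than the linearized form (9a), and the divergence and stop tests (15) and (19) are applied componentwise; all names are illustrative:

```python
import numpy as np

def identify_parameters(f, df, x, y_exp, b0, beta=1e-2, gamma=0.9,
                        max_iter=50000):
    """Iterative regularized identification (Sections 5-7, sketch).

    f(x, b)  : model prediction for one regime x.
    df(x, b) : array of derivatives (df/db_1, ..., df/db_J).
    Returns the last accepted parameter vector b*.
    """
    b = np.asarray(b0, dtype=float)

    def q_and_grad(b):
        resid = np.array([f(xn, b) for xn in x]) - y_exp
        grad = 2.0 * sum(rn * df(xn, b) for xn, rn in zip(x, resid))  # Eq. (8)
        return np.sum(resid ** 2), grad

    q_prev, grad = q_and_grad(b)
    delta0 = None
    successes = 0
    for _ in range(max_iter):
        r = grad / np.linalg.norm(grad)      # anti-gradient direction, Eq. (7)
        b_new = b - beta * r                 # Eq. (6)
        if delta0 is None:                   # the first step fixes the accuracy
            delta0 = gamma * np.abs((b_new - b) / b_new)   # Eq. (12)
        step = np.abs(b_new - b)
        acc = delta0 * np.abs(b)             # desired accuracy, Eq. (11)
        q_new, grad_new = q_and_grad(b_new)
        if q_new >= q_prev or np.any(step > 2.0 * acc):    # (9a) or (15) violated
            if np.all(step < acc):           # stop criterion, Eq. (19)
                break
            beta *= 0.5                      # Eq. (18)
            successes = 0
            continue
        successes += 1
        if successes >= 2:
            beta *= 2.0                      # Eq. (16); otherwise keep, Eq. (17)
        b, q_prev, grad = b_new, q_new, grad_new
    return b
```

For model (4), the arguments would be f = lambda xn, b: 1 - b[0]*np.exp(-b[1]*xn) and df = lambda xn, b: np.array([-np.exp(-b[1]*xn), b[0]*xn*np.exp(-b[1]*xn)]).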

8. Correct problem solution

Literature sources (see, e.g. Alifanov, 1974; Alifanov et al., 1988; Boyadjiev, 1993; Tikhonov et al., 1990) teach that every method for solving incorrect problems should solve correct ones as well. Therefore, the first solution of the inverse problem considered here corresponds to the interval 0 < x < 0.3. Consider the one- and two-parameter models (3) and (4). Fig. 2 shows models (3) and (4) with the exact parameters (b̄ = 5; b̄_1 = 1, b̄_2 = 5) together with the "experimental" data (5). The proposed algorithm was used for the solution of the identification problem and the results are summarized in Table 1.

Table 1
One- and two-parameter model solutions

Δŷ [%] | b*     | i   | b1*     | b2*    | i
±5     | 4.9678 | 337 | 1.0025  | 5.0674 | 128
±10    | 4.9351 | 339 | 0.99401 | 4.9218 | 172

8.1. Effect of the initial approximation

The efficiency of every iterative method for function minimization depends on the initial approximation. Parameter values obtained under the conditions imposed by different initial approximations are summarized in Table 2.

Table 2
Effect of the initial approximation (0 ≤ x ≤ 0.3, γ = 0.9)

b(0) | b*     | i    | b1(0) | b2(0) | b1*     | b2*    | i
1.0  | 4.9364 | 33   | 0.5   | 6.0   | 0.9289  | 4.9372 | 296
2.0  | 4.9543 | 63   | 0.7   | 6.0   | 0.99440 | 4.9611 | 240
4.0  | 4.9672 | 161  | 0.9   | 6.0   | 0.9955  | 4.9737 | 216
6.0  | 4.9678 | 337  | 1.1   | 6.0   | 1.0025  | 5.0674 | 128
8.0  | 4.9678 | 679  | 1.3   | 6.0   | 0.99344 | 4.9433 | 257
10.0 | 4.9678 | 1173 | 1.5   | 6.0   | 0.99548 | 4.9735 | 224

8.2. Effect of the regularization parameter

The iteration number depends on the regularization parameter value γ, and the efficiency of the minimization increases when the value of γ is increased. This effect is demonstrated by the data summarized in Table 3.

Table 3
Effect of γ

γ   | b*     | i    | γ   | b1*    | b2*    | i
0.9 | 4.9678 | 337  | 0.9 | 1.0025 | 5.0674 | 128
0.1 | 4.9676 | 3044 | 1.2 | 1.0108 | 5.1922 | 104

9. Incorrect problem solution

As commented before, if the "experimental" data are captured under conditions (regimes) corresponding to the interval 0.31 < x < 0.65, the parameter identification problem is ill-posed. The problem incorrectness is due to the solution sensitivity with respect to the "experimental" errors associated with the determination of the objective function ŷ. Consider the solutions of the parameter identification problem through minimization of the least square function (2), with x_n = 0.01n, n = 31, …, 65, i.e. 0.31 ≤ x ≤ 0.65. The solutions in two cases, the one-parameter model (b^{(0)} = 6, γ = 0.5) and the two-parameter model (b_1^{(0)} = 1.1, b_2^{(0)} = 6, γ = 0.05), are summarized in Table 4.

Table 4
Incorrect problem solution

Δŷ [%] | b*     | i    | b1*    | b2*    | i
±5     | 5.0614 | 1213 | 1.1797 | 5.4666 | 642
±10    | 5.1232 | 1217 | 1.3778 | 5.9106 | 416

Comparisons between the model predictions and the "experimental" data are illustrated by the plots in Figs. 6 and 7. These plots indicate very small differences between y_1 and y_2 and the exact model values y. Hence, the method permits an inverse problem to be solved even in cases when some of the "experimental" data are not "physically" sensible, i.e. when the data have no physical sense since ŷ_n^{(2)} > 1.

9.1. Effect of the initial approximation

The iteration numbers depend on the initial approximations b^{(0)}, b_1^{(0)}, b_2^{(0)} of the iterative procedure. The results for 0.31 < x < 0.65 are summarized in Table 5.

Fig. 6. One-parameter model and "experimental" data: [*] ŷ_n^{(1)}, "experimental" data with a maximal error of ±5%; [●] ŷ_n^{(2)}, "experimental" data with a maximal error of ±10%; [—] y = 1 − exp(−bx), b = 5; [– –] y_1 = 1 − exp(−b*x), b* = 5.0614; [- - -] y_2 = 1 − exp(−b*x), b* = 5.1232.

Fig. 7. Two-parameter model and "experimental" data: [*] ŷ_n^{(1)}, "experimental" data with a maximal error of ±5%; [●] ŷ_n^{(2)}, "experimental" data with a maximal error of ±10%; [—] y = 1 − b_1 exp(−b_2 x), b_1 = 1, b_2 = 5; [– –] y_1 = 1 − b_1* exp(−b_2* x), b_1* = 1.1797, b_2* = 5.4666; [- - -] y_2 = 1 − b_1* exp(−b_2* x), b_1* = 1.3778, b_2* = 5.9106.

Table 5
Effect of the initial approximation (0.31 ≤ x ≤ 0.65, γ = 0.5)

b(0) | b*     | i     | b1(0) | b2(0) | b1*    | b2*    | i
1.0  | 5.0280 | 25    | 0.5   | 6.0   | 1.1815 | 5.4708 | 704
2.0  | 5.0472 | 59    | 0.7   | 6.0   | 1.1853 | 5.4794 | 619
3.0  | 5.0562 | 133   | 0.9   | 6.0   | 1.1797 | 5.4666 | 668
4.0  | 5.0602 | 301   | 1.1   | 6.0   | 1.1797 | 5.4666 | 642
6.0  | 5.0614 | 1213  | 1.5   | 6.0   | 1.1815 | 5.4709 | 439
8.0  | 5.0614 | 5171  | 2.0   | 6.0   | 1.1798 | 5.4664 | 1479
10.0 | 5.0614 | 15829 | 3.0   | 6.0   | 1.1798 | 5.4665 | 2944
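Using the identify_parameters sketch and the synthetic data arrays from the earlier code fragments (all names illustrative), the two-parameter run of Table 4 would look like:

```python
mask = (n >= 31) & (n <= 65)        # regimes with 0.31 <= x <= 0.65
f  = lambda xn, b: 1.0 - b[0] * np.exp(-b[1] * xn)
df = lambda xn, b: np.array([-np.exp(-b[1] * xn),
                             b[0] * xn * np.exp(-b[1] * xn)])
b_star = identify_parameters(f, df, x[mask], y_exp_5[mask],
                             b0=[1.1, 6.0], gamma=0.05)
```

The exact iteration counts and parameter values will of course differ from Tables 4 and 5, since they depend on the particular random numbers A_n.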

9.2. Effect of the regularization parameter

The effect of γ (the initial accuracy value) on the iteration numbers is summarized in Table 6.

Table 6
Effect of γ

γ    | b*     | i     | γ    | b1*    | b2*    | i
0.05 | 5.0614 | 13823 | 0.05 | 1.1797 | 5.4666 | 642
0.5  | 5.0614 | 1213  | 0.5  | 1.2052 | 5.5246 | 139
1.2  | 5.0614 | 552   | 1.2  | 1.2375 | 5.5951 | 87

The results presented in Figs. 6 and 7 demonstrate that the differences between the exact model and the models derived through the parameter identifications are very small. On the other hand, the results in Table 5 show that the differences between the exact and the obtained values of the parameters are significant. The correctness of the parameter identification will therefore be tested below through a criterion of model adequacy (see, e.g. Vuchkov & Stoyanov, 1986).

10. Statistical analysis of model adequacy

A test of the model adequacy was performed through a statistical analysis. The parameters b* are derived through calculations based on experimental data, so they can also be regarded as random numbers. The same suggestion is valid for the objective function values calculated with these random parameter values. Moreover, both the parameters and the objective function also incorporate the effects of the model build-up, which implies a lack of knowledge concerning the mathematical structure employed (see, e.g. Bojanov & Vuchkov, 1973).

The model is assumed to be adequate if the variance of the experimental data error (S_ε) equals the variance of the model error (S). The tests were performed with experimental values of the objective function ŷ_k (k = 1, …, K) obtained under identical technological conditions (regimes), x = x^{(0)} = (x_1^{(0)}, …, x_m^{(0)}), where K = 5–10. The experimental data variance requires the mathematical expectation of y (m̃_y) to be estimated (see, e.g. Boyadjiev, 1993; Draper & Smith, 1966):

m̃_y = (1/K) Σ_{k=1}^{K} ŷ_k,    (20)

S_ε² = [1/(K − 1)] Σ_{k=1}^{K} (ŷ_k − m̃_y)².    (21)

The variance of the model error (see, e.g. Boyadjiev, 1993) is:

S² = [1/(N − J)] Σ_{n=1}^{N} (y_n − ŷ_n)² = Q/(N − J),    (22)

where N is the number of experimental data and J is the number of parameters. The model adequacy is defined by the variance ratio

F = S²/S_ε²,    (23)

where S² > S_ε² when S contains the error effects of both the model and the experimental data. The value of F is compared with the tabulated values (F_J) of the Fisher distribution (see, e.g. Draper & Smith, 1966). The condition of the model adequacy is

F ≤ F_J(α, ν, ν_ε),    (24)

where ν = N − J, ν_ε = K − 1 and α = 0.01–0.1.

The statistical analysis of the model adequacy was performed with 0 ≤ x ≤ 0.30 and the results are presented in Table 7. For these tests: N = 30, J = 1 (2), K = 10, x^{(0)} = 0.2 and α = 0.05. The results confirm the adequacy of the model.

Table 7
Statistical analysis of the model adequacy (0 ≤ x ≤ 0.3)

J | Δŷ [%] | b1*     | b2*    | γ   | Sε × 10⁻² | S × 10⁻² | F      | FJ
1 | ±5     | –       | 4.9678 | 0.9 | 1.7933    | 1.7071   | 0.9061 | 2.24
1 | ±10    | –       | 4.9351 | 0.9 | 3.5867    | 3.4139   | 0.9059 | 2.24
2 | ±5     | 1.0025  | 5.0674 | 0.9 | 1.7933    | 1.8354   | 1.0475 | 2.25
2 | ±10    | 0.99401 | 4.9218 | 0.9 | 3.5867    | 3.4434   | 0.9217 | 2.25

The statistical analysis of the cases corresponding to the incorrect inverse problem (0.31 ≤ x ≤ 0.65) was performed with N = 35, J = 1 (2), K = 10, x^{(0)} = 0.5 and α = 0.05 (see Table 8). The models are adequate despite the large differences between the calculated and the exact values of the model parameters (see Table 4).

Table 8
Statistical analysis of the model adequacy (0.31 ≤ x ≤ 0.65)

J | Δŷ [%] | b1*    | b2*    | γ    | Sε × 10⁻² | S × 10⁻² | F      | FJ
1 | ±5     | –      | 5.0614 | 0.5  | 2.6042    | 2.3588   | 0.8205 | 2.19
1 | ±10    | –      | 5.1232 | 0.5  | 5.2083    | 4.7328   | 0.8257 | 2.19
2 | ±5     | 1.1797 | 5.4666 | 0.05 | 2.6042    | 2.3656   | 0.8252 | 2.20
2 | ±10    | 1.3778 | 5.9106 | 0.05 | 5.2083    | 4.7349   | 0.8265 | 2.20
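The adequacy test of Eqs. (20)–(24) reduces to a variance ratio. A minimal sketch with illustrative names, using scipy.stats for the tabulated Fisher value of Eq. (24):

```python
import numpy as np
from scipy.stats import f as fisher

def is_adequate(y_rep, y_model, y_exp, n_params, alpha=0.05):
    """Model adequacy test, Eqs. (20)-(24).

    y_rep   : K replicate measurements at one regime x(0).
    y_model : calculated values y_n at the N regimes.
    y_exp   : experimental values at the N regimes.
    """
    k, n = len(y_rep), len(y_exp)
    m_y = np.mean(y_rep)                                  # Eq. (20)
    s_eps2 = np.sum((y_rep - m_y) ** 2) / (k - 1)         # Eq. (21)
    s2 = np.sum((y_model - y_exp) ** 2) / (n - n_params)  # Eq. (22)
    f_ratio = s2 / s_eps2                                 # Eq. (23)
    f_tab = fisher.ppf(1.0 - alpha, n - n_params, k - 1)  # F_J(alpha, nu, nu_eps)
    return f_ratio <= f_tab                               # Eq. (24)
```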

11. Comparison between correct and incorrect problems

The results obtained for the inverse identification problem solutions show a large difference between the least square functions in the correct and incorrect problem cases. The horizontals (contour lines) of the least square function are shown in Figs. 8 and 9. In all cases, Q is a ridge-type function. If the inverse problem is correct, the distance between the exact solution point and the minimum defined by the least square function is very small (see Fig. 8). In cases of incorrect problems, this distance is very large (see Fig. 9). Figs. 8 and 9 show the "road" of the iterative procedures from the initial parameter values (b_1^{(0)}, b_2^{(0)}) towards the parameter values at the last iterations (b_1*, b_2*). All these trajectories of the iterative solutions demonstrate the role of the second condition (9b).
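The contour maps of Figs. 8 and 9 can be reproduced with matplotlib. A sketch assuming the arrays x and y_exp_10 from the data fragment above; grid limits are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

b1g, b2g = np.meshgrid(np.linspace(0.4, 1.6, 150),
                       np.linspace(3.5, 6.5, 150))
mask = (x >= 0.31) & (x <= 0.65)          # incorrect interval, as in Fig. 9
q_grid = sum((1.0 - b1g * np.exp(-b2g * xn) - yn) ** 2
             for xn, yn in zip(x[mask], y_exp_10[mask]))
plt.contour(b1g, b2g, q_grid, levels=30)  # horizontals of Q
plt.plot(1.0, 5.0, 'ko')                  # exact parameters b = [1; 5]
plt.xlabel('b1'); plt.ylabel('b2')
plt.show()
```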

Fig. 8. Road of the iterative procedures (n = 1–30; Δŷ [%] = ±10); [●] b = [1; 5]; ([ ] b(0) = [0.6; 5.9], [+] b* = [0.9833; 4.8390]); ([ ] b(0) = [1.2; 4.1], [+] b* = [0.9691; 4.6444]).


Fig. 9. Road of the iterative procedures (n = 31–65; Δŷ [%] = ±10); [ ] b(0) = [0.6; 5.9]; [ ] b(0) = [1.6; 4.2]; [●] b = [1; 5]; [+] b* = [1.3724; 5.8999]; [+] b* = [1.3760; 5.9071].

In all the cases discussed above, the difference between correct and incorrect inverse identification problems is based on the distance between the points of the exact solutions and the least square function minima. In practice, however, exact parameter values do not exist, so a useful criterion for the inverse problem "diagnostics" is required. Table 9 summarizes solutions of correct and incorrect inverse problems based on different "experimental" data sets. It is clear that a large difference between the solutions can be used as a criterion of inverse problem incorrectness.

Table 9
Solutions of correct and incorrect problems using different "experimental" data sets (b1(0) = 1.1, b2(0) = 6)

Interval        | Data set | b1*    | b2*    | γ    | i
0 ≤ x ≤ 0.3     | 1        | 1.0025 | 5.0674 | 0.9  | 128
0 ≤ x ≤ 0.3     | 2        | 1.0115 | 5.1706 | 0.9  | 120
0 ≤ x ≤ 0.3     | 3        | 1.0068 | 5.1881 | 0.9  | 179
0.31 ≤ x ≤ 0.65 | 1        | 1.1564 | 5.2675 | 0.05 | 798
0.31 ≤ x ≤ 0.65 | 2        | 0.5789 | 3.7056 | 0.05 | 1803
0.31 ≤ x ≤ 0.65 | 3        | 1.1723 | 5.2624 | 0.05 | 776

12. Conclusions

An iterative method and algorithm for model parameter identification in cases of incorrect inverse problems are proposed. A large difference between the parameter values obtained by processing different experimental data sets is assumed as a criterion for inverse problem incorrectness. Solutions of model parameter identification problems through least square function minimization show large differences between the exact and the calculated (as function minima) parameter values. This difference cannot be explained by the experimental data errors alone; it also comes from the inverse problem incorrectness, which is mainly due to the parameter sensitivity with respect to the experimental data errors. Thus, a minimization of the least square function alone cannot be assumed to be a solution of the parameter identification problem. An additional condition for the inverse problem regularization is introduced in the proposed procedure. This condition permits least square function minimization to be employed for the solution of model parameter identification problems. The model adequacy was tested through a statistical analysis, which can be assumed as a criterion of the applicability of the proposed iterative method.

References

Alifanov, O. M. (1974). Solution of the inverse heat conduction problems by iterative methods. J. Eng. Phys. (Russ.), 26(4), 682–689.
Alifanov, O. M. (1977). Inverse heat transfer problems. J. Eng. Phys. (Russ.), 33(6), 972–981.
Alifanov, O. M. (1983). On the solution methods of the inverse incorrect problems. J. Eng. Phys. (Russ.), 45(5), 742–752.
Alifanov, O. M. (1994). Inverse heat transfer problems. Berlin: Springer-Verlag.
Alifanov, O. M., Artiukhin, E. A., & Rumiantsev, C. B. (1988). Extremal methods for incorrect problem solutions. Moscow: Nauka (in Russian).
Alifanov, O. M., & Rumiantsev, C. B. (1980). Regularization iterative algorithms for the inverse problem solutions of the heat conduction. J. Eng. Phys. (Russ.), 39(2), 252–253.
Banks, H. T., & Kunsch, K. (1989). Estimation techniques for distributed parameter systems. Birkhäuser.
Beck, J. V., & Arnold, K. I. (1977). Parameter estimation in engineering and science. New York: J. Wiley.
Beck, J. V., Blackwell, B., & St. Clair, C. R., Jr. (1985). Inverse heat conduction: Ill-posed problems. New York: J. Wiley-Interscience.
Bojanov, E. S., & Vuchkov, I. N. (1973). Statistical methods for modelling and optimization of multifactor objects. Sofia: Technics (in Bulgarian).
Boyadjiev, Chr. (1993). Fundamentals of modeling and simulation in chemical engineering and technology. Sofia: Bulgarian Academy of Sciences, Institute of Chemical Engineering (in Bulgarian).
Brakham, R. L., Jr. (1989). Scientific data analysis. Berlin: Springer-Verlag.
Chavent, G. (1973). Identification of distributed parameters. In Identification and system parameter estimation, Part 2, Proceedings of the 3rd IFAC Symposium (pp. 649–660). North-Holland.
Chavent, G. (1980). Identification of distributed parameter systems: About the output least square method, its implementation and identifiability. In Identification and system parameter estimation, Vol. 1, Proceedings of the 5th IFAC Symposium. Pergamon Press.
Draper, N. R., & Smith, H. (1966). Applied regression analysis. New York: J. Wiley.
Glasko, V. B. (1994). Inverse problems of mathematical physics (translated by A. Bincer). AIP.
Tikhonov, A. N., & Arsenin, V. I. (1986). Methods for solutions of incorrect problems. Moscow: Nauka (in Russian).
Tikhonov, A. N., Kal'ner, V. D., & Glasko, V. B. (1990). Mathematical modelling of technological processes and method of inverse problems. Moscow: Mashinostroenie (in Russian).
Vuchkov, I., & Stoyanov, S. (1986). Mathematical modelling and optimization of technological objects. Sofia: Technics (in Bulgarian).