Parametric sensitivity: A case study comparison


Computational Statistics and Data Analysis 53 (2009) 2640–2652


H. Sulieman a,∗, I. Kucuk a, P.J. McLellan b

a Department of Mathematics and Statistics, American University of Sharjah, P.O. Box 26666, Sharjah, United Arab Emirates
b Department of Chemical Engineering, Queen's University, Kingston, Ontario, Canada, K7L 3N6

Article history: Received 31 March 2008; Received in revised form 4 January 2009; Accepted 8 January 2009; Available online 18 January 2009

Abstract

This article presents a comparative analysis of three derivative-based parametric sensitivity approaches in multi-response regression estimation: marginal sensitivity, the profile-based approach developed by [Sulieman, H., McLellan, P.J., Bacon, D.W., 2004. A profile-based approach to parametric sensitivity in multiresponse regression models. Computational Statistics & Data Analysis, 45, 721–740] and the commonly used approach of the Fourier Amplitude Sensitivity Test (FAST). We apply the classical formulation of FAST, in which Fourier sine coefficients are utilized as sensitivity measures. Contrary to marginal sensitivity, the profile-based and FAST approaches provide sensitivity measures that account for model nonlinearity and are pertinent to both linear and nonlinear regression models. However, the primary difference between FAST and profile-based sensitivity is that traditional FAST fails to account for parameter dependencies in the model system, while these dependencies are considered in the analysis procedure of profile-based sensitivity through the re-estimation of the remaining model parameters conditional on the values of the parameter of interest. An example is discussed to illustrate the comparisons by applying the three sensitivity methods to a model described by a set of nonlinear differential equations. Some computational aspects are also explored. © 2009 Elsevier B.V. All rights reserved.

∗ Corresponding author. Tel.: +971 6 515 2928; fax: +971 6 515 2950. E-mail addresses: [email protected], [email protected] (H. Sulieman), [email protected] (I. Kucuk), [email protected] (P.J. McLellan).

0167-9473/$ – see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.csda.2009.01.003

1. Introduction

Parametric sensitivity analysis in multi-parameter regression models aims at assessing the sensitivity of model-predicted responses to variations in the model parameters. The numerical values of these parameters are usually unknown, and the primary goal in regression analysis is to estimate them from available experimental data. The uncertainties associated with the parameter estimates propagate into the model predictions via sensitivities. When sensitivity analysis is applied to an existing model of a physical system, the relative importance of the parameters can be established and the results can be used to strengthen the knowledge base by guiding subsequent research, in order to increase confidence in the model and its predictions.

Parametric sensitivity can be local or global. In local sensitivity, the information generated from the analysis is valid over small ranges of parameter uncertainty, where a linear approximation to the model function is assumed to be adequate. Local sensitivity techniques deliver derivative-related measures and fail to account for simultaneous changes in the values of model parameters. Global sensitivity, on the other hand, incorporates simultaneous changes of the parameter values over their respective ranges of uncertainty and evaluates the effect of a parameter of interest while all other parameters are varied as well. The majority of global sensitivity techniques are variance-based measures that apportion the output uncertainty to the uncertainty in the parameters. A few of these methods are model-independent and can be applied to nonlinear and nonmonotonic models. The main difficulty with these methods is that they are often computationally expensive and require a

large number of model evaluations. In this work, sensitivity analysis is carried out in the context of parameter estimation in multi-response regression models using three analysis procedures: marginal sensitivity, defined by the first-order partial derivatives of the predicted response with respect to the model parameters; the classical Fourier Amplitude Sensitivity Test (FAST); and profile-based sensitivity, introduced by Sulieman et al. (2001).

The conventional measure of local sensitivity is defined by the first-order partial derivatives of the model function with respect to the parameters, resulting from the linear approximation of the model function in the parameter space. The effect of a parameter of interest on the model predictions is determined locally by changing only the value of this parameter while all other parameters are held fixed at their nominal values. Parameter dependencies are therefore not accounted for by this local sensitivity measure, which Sulieman et al. (2001) called the marginal sensitivity coefficient. To surmount the drawbacks of the linear sensitivity assessment, Sulieman et al. (2001) proposed an alternative assessment procedure in which simultaneous perturbations in the values of all model parameters are achieved using the profiling scheme introduced by Bates and Watts (1988) for nonlinearity assessment of regression models. The resulting profile-based sensitivity measure, defined by the total derivative of the model function with respect to a parameter of interest, was shown to account for the nonlinear dependencies among the parameter estimates. The authors applied the profile-based measure to the nonlinear estimation of parameters in a single-response regression model and later extended the application to parameter estimation in multi-response regression models (Sulieman et al., 2004). A detailed discussion of the profile-based sensitivity measures is presented in Section 3.
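As a concrete illustration, marginal sensitivity coefficients can be approximated by finite differences. The two-response model below is a hypothetical example (not the Dow model of Section 5); a central difference perturbs only the parameter of interest while all other parameters are held fixed, exactly as the marginal measure prescribes:

```python
import math

# Hypothetical two-response model (illustrative only, not the Dow model):
# f1 decays with x, f2 is its complement; parameters theta = (theta1, theta2).
def model(x, theta):
    t1, t2 = theta
    return [t1 * math.exp(-t2 * x),          # response 1
            t1 * (1.0 - math.exp(-t2 * x))]  # response 2

def marginal_sensitivity(x, theta, i, h=1e-6):
    """Central-difference estimate of the J-vector of first-order partial
    derivatives with respect to theta[i], all other parameters fixed (MSC)."""
    up, lo = list(theta), list(theta)
    up[i] += h
    lo[i] -= h
    fu, fl = model(x, up), model(x, lo)
    return [(a - b) / (2.0 * h) for a, b in zip(fu, fl)]

theta_hat = (2.0, 0.5)
x0 = 1.0
msc_theta1 = marginal_sensitivity(x0, theta_hat, 0)  # w.r.t. theta1
msc_theta2 = marginal_sensitivity(x0, theta_hat, 1)  # w.r.t. theta2
```

For this model the derivatives are available analytically (e.g. the first element of `msc_theta1` should be e^{-θ2 x0}), which makes the finite-difference sketch easy to check.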
One of the most commonly used computational methods for the sensitivity of model outputs from complex model functions is the Fourier Amplitude Sensitivity Test (FAST), developed by Cukier et al. (1973, 1975, 1978) and Schaibly and Shuler (1973). Analogous to profile-based sensitivity, the FAST method perturbs the multidimensional parameter space simultaneously by using a suitably defined search curve, and it accounts for model nonlinearity in the parameters. A core feature of FAST is that the assessment of sensitivity can be carried out independently for each parameter using a single set of computational runs, because all of the terms in the Fourier expansion of the model function are mutually orthogonal. Contrary to profile-based sensitivity, FAST assumes independent model parameters and hence fails to account for their dependencies. Most recently, Xu and Gertner (2007) proposed a procedure, based on rank correlations, to extend the application of FAST to models with correlated parameters. A follow-up investigation to the work in this article is being considered to examine how the results here change when the extended FAST is applied. In the first formulation of FAST, the authors used the Fourier sine coefficient as a measure to assess the sensitivity of the solutions of chemical rate equations to the parameters given by the rate coefficients. They showed that the sine coefficient is related to the mean value of the partial derivative of the output concentration with respect to the rate coefficient of interest over perturbations of all other rate coefficients. The relationship between the Fourier sine coefficient and the partial derivative of the output function with respect to a parameter is revisited and further discussed in this article. Cukier et al. (1978) later extended the application of Fourier sensitivity analysis by developing a variance-based measure for the assessment of model uncertainty.
Since its development, FAST has been routinely used in the assessment of complicated dynamic systems, including climatic and meteorological models (Liu and Avissar, 1996; Rodriguez-Camind and Avissar, 1998), laser kinetics models (Colonna et al., 1994), transportation models for underground nuclear waste disposal (Saltelli et al., 1999, 2000a) and other systems. Recent reviews of the method can be found in Cacuci and Ionescu-Bujor (2004) and Saltelli et al. (2004).

This article presents a comparative analysis of the three sensitivity methods: marginal sensitivity, profile-based sensitivity and the conventional FAST approach. The three measures are applied to the parameter estimation of the Dow Chemical regression benchmark model formulated by Biegler et al. (1986). The multi-response chemical kinetic regression model consists of a set of nonlinear differential equations with four response variables and eight unknown parameters that are estimated from given experimental data. In Section 2 we describe the general multiresponse regression model and discuss the parameter estimation problem in such models. In Section 3 we discuss the profile-based sensitivity measure for parameter estimation in multiresponse regression models and its computational features. In Section 4 we review FAST and discuss the most recent developments in the application of this technique. Applications of marginal sensitivity, profile-based sensitivity and FAST to the Dow Chemical regression benchmark are presented in Section 5, and conclusions are summarized in Section 6.

2. The model

We consider the general multi-response regression model given by:

y_{nj} = f_j(x_n, Θ) + z_{nj},   n = 1, ..., N; j = 1, ..., J   (1)

where y_{nj} is the random variable associated with the measured value of the j-th response for the n-th experimental setting, and f_j is a known model function for the j-th response depending on some or all of the experimental settings x_n and on some or all of the parameters in Θ. The function f_j may or may not differ among the J responses. z_{nj} is the disturbance term and Θ is a p-element vector of unknown parameters. For a fixed set of experimental conditions, x_n, n = 1, ..., N, the expected responses f_j(x_n, Θ) depend only on Θ and can therefore be written as:

f(x_n, Θ) = H(Θ)   (2)


where f and H are N × J matrices of expected responses f_j(x_n, Θ). The parameter estimates Θ̂ are found by minimizing

d(Θ) = |Z'Z|   (3)

where Z is the residual matrix defined by:

Z(Θ) = Y − H(Θ)   (4)

Y is the N × J matrix of observations y_{nj}, and d(Θ) is called the determinant estimation criterion, developed by Box and Draper (1965). The disturbance term z_{nj} in Eq. (1) is assumed, following Box and Draper (1965), to have a normal distribution with zero mean for all n and j, independent for different n but with a fixed J × J variance-covariance matrix Σ when n is common. Bates and Watts (1987) developed a generalized Gauss-Newton method to optimize d(Θ) by explicitly deriving the gradient and Hessian of d(Θ). Unlike the Gauss-Newton method, the generalized Gauss-Newton method for multiresponse data need not result in a positive definite Hessian matrix. When the Hessian matrix is not positive definite, the optimal value of d(Θ) cannot be calculated. Bates and Watts (1987) proposed a procedure to restore positive definiteness of the Hessian by inflating the diagonal elements and modifying the Gauss step size. Guay and McLean (1995) discussed a number of approaches to optimize d(Θ) for models consisting of systems of ordinary differential equations. They proposed a modified Gauss-Newton optimization algorithm in which the model equations and the gradient and Hessian of d(Θ) are solved simultaneously using the Decoupled Direct Method (Dunker, 1984). Their approach is utilized in this work. Dependencies among responses are also a common problem in multiresponse parameter estimation. Dependencies can occur in the response data or in the expected responses, causing the residual matrix Z to become singular; this can result in the estimation procedure converging to a spurious optimum. Measures used to deal with this computational problem, such as modifying the estimation procedure to take account of these dependencies or simply removing the responses that are responsible for the dependencies from the data, will improve the conditioning of the sensitivity calculations.
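To make the criterion concrete, the following sketch evaluates d(Θ) = |Z'Z| for a toy two-response model (a hypothetical example, chosen so the 2 × 2 determinant has a closed form); with noise-free data the criterion vanishes at the generating parameter values and grows away from them:

```python
import math

# Toy illustration of the Box-Draper determinant criterion d(Theta) = |Z'Z|
# for a two-response model (illustrative, not the Dow model of Section 5).
def model(x, theta):
    t1, t2 = theta
    return [t1 * math.exp(-t2 * x), t1 * (1.0 - math.exp(-t2 * x))]

def det_criterion(theta, xs, Y):
    # Z = Y - H(theta) is the N x 2 residual matrix; |Z'Z| via the 2x2 formula.
    Z = [[y - h for y, h in zip(row, model(x, theta))]
         for x, row in zip(xs, Y)]
    a = sum(z[0] * z[0] for z in Z)  # (Z'Z)_11
    b = sum(z[0] * z[1] for z in Z)  # (Z'Z)_12 = (Z'Z)_21
    c = sum(z[1] * z[1] for z in Z)  # (Z'Z)_22
    return a * c - b * b

xs = [0.5, 1.0, 1.5, 2.0]
theta_true = (2.0, 0.5)
Y = [model(x, theta_true) for x in xs]   # noise-free "observations"

d_true = det_criterion(theta_true, xs, Y)  # zero at the generating values
d_off = det_criterion((2.2, 0.6), xs, Y)   # positive away from them
```

Minimizing this determinant over Θ is the estimation step that the generalized Gauss-Newton methods cited above carry out for realistically sized models.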
Further discussion on detecting and eliminating data dependencies in multiresponse estimation can be found in Seber and Wild (1989).

3. Profile-based sensitivity

Sulieman et al. (2004) developed a profile-based sensitivity measure to assess the sensitivity of the predicted responses from the model in Eq. (1) to the parameters Θ. Motivated by the profiling algorithm developed by Bates and Watts (1988) for constructing likelihood intervals for individual parameters in single-response nonlinear regression models, the measure quantifies the net change in a predicted response due to perturbations in a parameter of interest, after the remaining parameters are updated to their estimates conditioned on the perturbed value of the parameter of interest. Following the profiling perturbation scheme, the p-element vector of parameters Θ is partitioned as Θ = (θ_i, Θ_{-i}), where θ_i is the parameter of interest and Θ_{-i} is the (p−1)-element vector of the remaining parameters. The value of θ_i is varied across a specified range of uncertainty and, for each fixed value of θ_i, conditional estimates of the remaining parameters Θ_{-i}, denoted by Θ̃_{-i}, are obtained by minimizing d(Θ) over the (p−1)-dimensional parameter space. As a result, the J-element vector of profile-based sensitivity coefficients of the J response variables to θ_i, at a selected setting x_0 of the experimental conditions, is defined by the total derivative of the predicted response with respect to θ_i:

PSC_0(θ_i) = D H_0'(θ_i, Θ̃_{-i}(θ_i)) / Dθ_i   (5)

where H_0' = (f_1(x_0, Θ), f_2(x_0, Θ), ..., f_J(x_0, Θ))' is the J × 1 vector of response variables evaluated at x_0. Eq. (5) can be shown to equal:

PSC_0(θ_i) = ∂H_0'/∂θ_i − (∂H_0'/∂Θ_{-i}) [∂²d/∂Θ_{-i}∂Θ_{-i}']^{-1} (∂²d/∂θ_i∂Θ_{-i}) |_{Θ̃_{-i}}   (6)

where the vector ∂H_0'/∂θ_i has J elements, the first-order partial derivatives of the J model functions with respect to θ_i evaluated at x_0. Following Sulieman et al. (2001), these first-order partial derivatives are called marginal sensitivity coefficients. ∂H_0'/∂Θ_{-i}' is a J × (p−1) matrix consisting of the marginal sensitivity coefficients of the J model functions with respect to Θ_{-i} evaluated at x_0. The matrix ∂²d/∂Θ_{-i}∂Θ_{-i}' is the (p−1) × (p−1) sub-matrix of the Hessian matrix of d(Θ), and ∂²d/∂θ_i∂Θ_{-i} is a (p−1)-element vector of the Hessian terms corresponding to θ_i and Θ_{-i}. The (r,s)-th component of the Hessian matrix is given by:

∂²d/∂θ_s∂θ_r = |Z'Z| { tr(U_s)tr(U_r) − tr[U_s U_r] + tr[(Z'Z)^{-1}(Z_s'Z_r + Z_r'Z_s)] + tr[(Z'Z)^{-1}(Z'Z_{rs} + Z_{sr}'Z)] }   (7)

H. Sulieman et al. / Computational Statistics and Data Analysis 53 (2009) 2640–2652

2643

where

U_r = (Z'Z)^{-1}(Z'Z_r + Z_r'Z)
Z_r = ∂Z/∂θ_r = −∂H(Θ)/∂θ_r
Z_{sr} = ∂²Z/∂θ_s∂θ_r = −∂²H(Θ)/∂θ_s∂θ_r,   r, s = 1, ..., i−1, i+1, ..., p

and tr(A) denotes the trace of a matrix A.
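The total-derivative definition in Eq. (5) can be mimicked numerically: fix θ_i, re-estimate the remaining parameter conditionally, and difference the resulting prediction. The single-response model below is invented for illustration (its parameters are strongly correlated, and the conditional estimate has a closed form because the model is linear in θ_2); the gap between the marginal and profile-based values shows the size of the correction term:

```python
# Illustrative single-response model, linear in th2 so the conditional
# estimate has a closed form (a hypothetical example, not from the article):
#   f(x; th1, th2) = th1*x + th1*th2*x**2
def f(x, th1, th2):
    return th1 * x + th1 * th2 * x * x

xs = [0.5, 1.0, 1.5, 2.0, 2.5]
th_true = (1.0, 0.8)
ys = [f(x, *th_true) for x in xs]  # noise-free "observations"

def cond_est_th2(th1):
    """Conditional least-squares estimate of th2 given th1 (the profiling
    step); closed form since the residuals are linear in th2."""
    num = sum((y - th1 * x) * (th1 * x * x) for x, y in zip(xs, ys))
    den = sum((th1 * x * x) ** 2 for x in xs)
    return num / den

x0, h = 2.0, 1e-5
th1_hat, th2_hat = th_true  # with noise-free data the estimates equal the truth

# Marginal sensitivity: partial derivative with th2 held fixed
msc = x0 + th2_hat * x0 * x0
# Profile-based sensitivity: total derivative along th2 = cond_est_th2(th1)
psc = (f(x0, th1_hat + h, cond_est_th2(th1_hat + h))
       - f(x0, th1_hat - h, cond_est_th2(th1_hat - h))) / (2.0 * h)
```

Here `msc` is 5.2 while `psc` is an order of magnitude smaller: an increase in θ_1 is largely compensated by the conditional re-estimate of θ_2, which is precisely the dependency effect the correction term below captures.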

The (p−1) elements of the vector ∂²d/∂θ_i∂Θ_{-i} in Eq. (6) are obtained by taking s = i and r = 1, ..., i−1, i+1, ..., p in Eq. (7). Because there are only (p−1)p/2 distinct Hessian terms in the (p−1) × (p−1) sub-matrix, the total number of Hessian terms to be evaluated is (p−1) + (p−1)p/2 = (p−1)(p+2)/2. The computation of these terms is simplified by using the QR decomposition of the matrix Z. For diagnostic purposes, the quantities in the equations above are evaluated at the determinant estimate values (θ̂_i, Θ̃_{-i}(θ̂_i)). For further details of the above results the reader is referred to Sulieman et al. (2001, 2004).

Eq. (6) can be viewed as

PSC_0(θ_i) = MSC_0(θ_i) + correction term.   (8)

The correction term is the product of the matrix ∂H_0'/∂Θ_{-i}', containing the marginal effects of Θ_{-i} on the predicted responses at x_0, and the vector containing the slopes of Θ̃_{-i} with respect to θ_i. These slopes measure the changes in the values of the parameters Θ_{-i} caused by changes in the value of θ_i. Eq. (6) indicates that these slopes consist of two components: the pairwise dependencies among the elements of Θ̃_{-i}, ∂²d/∂Θ_{-i}∂Θ_{-i}', and the dependencies between θ̂_i and Θ̃_{-i}, ∂²d/∂θ_i∂Θ_{-i}. According to Eq. (7), the two sets of dependencies are computed using the second-order derivatives of the model functions with respect to Θ_{-i}, so as to account for partial model nonlinearity.

Eq. (8) reveals that, for a given parameterization, model formulation and design, the vector of profile-based sensitivity coefficients, PSC, simultaneously accounts for the marginal effects of a parameter of interest on the J predicted responses, adjusted for the correlation structure among the parameter estimates, and therefore provides more complete information on the sensitivity behavior of the predicted responses than the marginal sensitivity coefficients MSC. The reliability of the sensitivity information provided by MSC depends on the magnitude of the correction term, which in turn depends on the extent of the dependencies between the parameter estimates and/or the degree of nonlinearity of the model functions with respect to the parameters.

Following Sulieman et al. (2004), the j-th element of the profile-based sensitivity vector PSC_0(θ_i) is scaled by the estimated standard errors of the j-th predicted response and the i-th parameter estimate to produce the dimensionless profile-based sensitivity coefficient:

(se(θ̂_i)/se(ĥ_{0j})) PSC_{0j}(θ̂_i)   (9)

where ĥ_{0j} is the predicted value of the j-th response evaluated at x_0 and (θ̂_i, Θ̃_{-i}(θ̂_i)), and PSC_{0j}(θ̂_i) is the j-th component of PSC_0(θ_i) evaluated at (θ̂_i, Θ̃_{-i}(θ̂_i)). Following Bates and Watts (1988), se(θ̂_i) and se(ĥ_{0j}) are calculated using appropriate terms in the inverse Hessian matrix of the determinant criterion and, for se(ĥ_{0j}), the gradient of the model function. Similar scaling is applied to the j-th element of the marginal sensitivity vector to give:

(se(θ̂_i)/se(ĥ_{0j})) MSC_{0j}(θ̂_i).   (10)

For notational convenience, the subscript 0 is dropped and the notations PSC_j(θ_i) and MSC_j(θ_i) are used in the rest of the article to denote the j-th elements of the vectors of dimensionless sensitivity coefficients for θ_i evaluated at x_0. We close this section with a synopsis of how to implement profile-based sensitivity analysis for a given multiresponse model formulation, design and parameterization.
1. For given experimental data, perform unconditional parameter estimation using the determinant criterion in Eq. (3) to obtain the parameter estimates Θ̂ and corresponding standard errors.
2. For a parameter of interest θ_i, set θ_i at a specified value. In this work we use θ_i = θ̂_i.
3. Use the determinant criterion to obtain the conditional estimates Θ̃_{-i}(θ̂_i) of the p−1 remaining parameters and the standard errors of the predicted responses.
4. Evaluate the marginal sensitivity coefficients ∂H_0'/∂θ_i at (θ̂_i, Θ̃_{-i}(θ̂_i)).


5. Evaluate the Hessian terms given in Eq. (7) for all r, s = 1, ..., i−1, i+1, ..., p. As stated earlier, there are (p−1)(p+2)/2 terms to be evaluated. QR decomposition of the matrix Z can be used to simplify the computations.
6. Use the results of steps 4 and 5 to calculate the profile-based sensitivity coefficient given in Eq. (6) at a given design point x_0.
7. Use the transformation in Eq. (9) to obtain dimensionless sensitivity coefficients.
8. Repeat steps 2 to 7 for i = 1, 2, ..., p.
With p parameters in the model, the synopsis requires p repetitions of the same conditional optimization routine, one for each specified value of the elements of Θ. This means no additional computational machinery is required when the synopsis is applied to each of the p parameters; the same computations are simply called repetitively. In addition, to facilitate convergence and reduce the number of model evaluations needed in the conditional estimation of the p−1 parameters in step 3, the initial values of these parameters are set to their unconditional estimates obtained in step 1.
Note that the above synopsis can be repeated for a range of values of θ_i. The purpose of the analysis in this case would be to investigate the nonlinearity of the model function in terms of the profile sensitivity coefficients. A plot of the resulting profile-based sensitivity coefficient versus θ_i provides a sensitivity profile in the parameter space at a given x_0. Such a sensitivity profile is a nonlinearly-scaled version of the profile trace used by Bates and Watts (1988) to assess nonlinearity in parameter estimation. For diagnostic and comparative purposes, we applied the synopsis only at Θ̂.

4. Review of FAST

Cukier et al. (1973, 1975) and Schaibly and Shuler (1973) developed this diagnostic technique to investigate the sensitivity of the solutions of large-scale chemical reaction systems to uncertainties in rate coefficients.
Because of theoretical ambiguities in the first formulation of this method, Cukier et al. (1978) reformulated the procedure to make it more understandable and to facilitate its application to systems other than chemical reaction systems. The principle of FAST is that the model-predicted response is expanded into a Fourier series, and the Fourier coefficients are used to derive sensitivity information about the model predictions with respect to the parameters, in addition to the partial variance contributed by individual parameters to the total variance of the model-predicted response. The key idea of the FAST method is to convert the p-dimensional integral over Θ-space into a one-dimensional integral along a suitably defined search variable s, using the transformations:

θ_i = g_i(u_i),   i = 1, ..., p   (11)

where the variable u_i serves to vary θ_i and is defined by:

u_i = G_i(sin w_i s),   i = 1, ..., p   (12)

where the w_i are incommensurate frequencies, i.e., Σ_{i=1}^{p} ζ_i w_i ≠ 0 for any integers ζ_i. With an appropriate choice of the transformation G, variation of s over the range −π ≤ s ≤ π generates a curve that traverses the p-dimensional parameter space in a manner consistent with the assumed probability distribution of Θ. The relative rate of traversal in each θ_i direction is proportional to the corresponding frequency w_i. The set of frequencies w = (w_1, w_2, ..., w_p) cannot be truly incommensurate and is therefore appropriately selected as a finite set of integer frequencies. Choosing integer frequencies introduces two types of error into the analysis. First, the search curve becomes periodic and does not pass arbitrarily close to every point in the parameter space, i.e., it is not a space-filling curve. Second, harmonics of the fundamental frequencies used for the parameters interfere with one another. This implies that the Fourier coefficients associated with the i-th parameter θ_i do not reflect the sensitivity of the model function to θ_i alone; rather, they reflect sensitivity to possible combinations of θ_i and the other parameters. With a judicious choice of the interference factor M, the error due to interference can be avoided or minimized. The factor M postpones the interference to harmonics of order higher than M, and harmonics up to order M are therefore included in the analysis. The order M is generally taken as 4 or higher in order to obtain sufficient accuracy of the calculations (McRae et al., 1982; Saltelli et al., 1999, 2000b). Cukier et al. (1975) tabulated frequency sets free of interference to 4th order for up to 50 parameters. The j-th predicted response function at x_0, H_j, becomes periodic in s and can therefore be expanded in a Fourier series as follows:

H_j(s) = Σ_{k=−∞}^{∞} (A_j(k) sin(ks) + B_j(k) cos(ks))   (13)

where:

A_j(rw_i) = (1/2π) ∫_{−π}^{π} H_j(s) sin(rw_i s) ds   (14)

B_j(rw_i) = (1/2π) ∫_{−π}^{π} H_j(s) cos(rw_i s) ds   (15)


A_j(rw_i) for r = 1, 2, ... and B_j(rw_i) for r = 0, 1, 2, ... are the Fourier coefficients for the fundamental frequency w_i and its harmonics at x_0. Using Parseval's theorem, Cukier et al. (1973) established a relation between the marginal sensitivity coefficient and the Fourier sine coefficient; see the details in the following subsection. A few years later, Cukier et al. (1978) used the Fourier coefficients given in Eqs. (14) and (15) to develop a variance-based measure of the uncertainty in the j-th predicted response at x_0 due to the uncertainty in the i-th parameter. It is defined by the partial variance:

ρ_{ij} = σ_{ij}² / σ_j²

where

σ_j² = 2 Σ_{q=1}^{∞} (A_j²(q) + B_j²(q))

and

σ_{ij}² = 2 Σ_{r=1}^{M} (A_j²(rw_i) + B_j²(rw_i)).

σ_j² is the total variance of the j-th predicted response at x_0 and σ_{ij}² is the part of σ_j² that arises from the i-th parameter θ_i. For an incommensurate frequency set, ρ_{ij} gives the fraction of the total variance of the j-th predicted response at x_0 due to the uncertainty in θ_i.
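A minimal numerical sketch of these partial variances, using the simplest admissible search curve θ_i = sin(w_i s) (a special case of Eq. (12)) and an invented linear test function; the frequencies and coefficients are illustrative choices, not values from the article:

```python
import math

w = [5, 11]              # integer frequencies assigned to theta1, theta2
M = 4                    # interference factor
Ns = 2 * M * max(w) + 1  # Nyquist lower bound of Eq. (18): 89 points

def model(th1, th2):
    # theta1 enters with the larger coefficient, so it should dominate
    return 3.0 * th1 + 1.0 * th2

# Equally spaced s points over one period of the search curve
s_pts = [-math.pi + 2.0 * math.pi * (l + 0.5) / Ns for l in range(Ns)]
ys = [model(math.sin(w[0] * s), math.sin(w[1] * s)) for s in s_pts]

def fourier(k):
    """Discrete approximations of the Fourier coefficients (14)-(15)."""
    A = sum(y * math.sin(k * s) for y, s in zip(ys, s_pts)) / Ns
    B = sum(y * math.cos(k * s) for y, s in zip(ys, s_pts)) / Ns
    return A, B

def partial_var(wi):
    # harmonics r*wi up to order M, as in the definition of sigma_ij^2
    return 2.0 * sum(a * a + b * b
                     for a, b in (fourier(r * wi) for r in range(1, M + 1)))

total = 2.0 * sum(a * a + b * b for a, b in (fourier(k) for k in range(1, Ns // 2)))
S1 = partial_var(w[0]) / total  # fraction of variance due to theta1
S2 = partial_var(w[1]) / total  # fraction of variance due to theta2
```

With these choices S1 = 0.9 and S2 = 0.1, matching the 9:1 variance split implied by the coefficients: for this linear function the variance of 3θ_1 is nine times that of θ_2.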

The transformations G_i in Eq. (12) are determined so that the two integrals (14) and (15) in s-space are equal to the corresponding ones in Θ-space. Various transformations have been proposed (Koda et al., 1979; Lu and Mohanty, 2001; Saltelli et al., 1999). The transformation function proposed by Saltelli et al. (1999) for the uniform distribution, recently used by Haaker and Verheijen (2004) and Deflandre et al. (2006), is applied here. It is given by:

u_i = G_i = (2/π) ((θ_i^u − θ_i^l)/(θ_i^u + θ_i^l)) arcsin(sin w_i s)   (16)

where θ_i^u and θ_i^l are the upper and lower bounds of the uniform parameter distribution. The function g_i in Eq. (11) is best chosen to be:

g_i(u_i) = θ̂_i e^{u_i},   −∞ < u_i < ∞   (17)

where θ̂_i is the available estimate of the parameter θ_i. The u_i's are considered to be independent random variables with respective probability distributions, determined by the choice of the transformation G_i. Saltelli et al. (1999) showed that the required number of model evaluations has a lower limit determined by the Nyquist criterion, resulting in the inequality:

N_s ≥ 2M w_max + 1   (18)

where N_s is the number of equally spaced s points, M is the order of the frequency set w and w_max is the maximum frequency. A higher value of M requires a frequency set with a higher value of w_max and thus a larger N_s. Deflandre et al. (2006) examined the effect of the sample size N_s by comparing the sensitivity results obtained with M = 4 and M = 8. They found the two sets of results very similar and attributed this similarity to the fact that the amplitude of the Fourier coefficients decreases as the order of their harmonics increases. The accuracy of the FAST analysis depends on the choices of N_s, M and w_max, which in turn depend largely on the number of uncertain parameters p in the model and the complexity of the model formulation. FAST can become computationally expensive for large-scale system models; for such systems, Lu and Mohanty (2001) suggested using a computationally less expensive analysis as a screening method to select the most influential parameters at an early stage of the analysis.

4.1. Relation to marginal sensitivity coefficients

The authors of FAST centered their efforts on the first-order Fourier sine coefficients A_j(w_i) in Eq. (14), corresponding to r = 1, as sensitivity measures. They showed that

A_j(w_i) = (2/b_i) ⟨∂H_j(u)/∂u_i⟩   (19)

where b_i is a distributional parameter for u_i, and ⟨∂H_j(u)/∂u_i⟩ is the expected value of ∂H_j(u)/∂u_i over the u-space probability distribution.
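The averaging character of Eq. (19) can be checked numerically on a toy function (everything below is an invented illustration; taking u_i = sin(w_i s) directly absorbs the distributional constant b_i). For a function linear in u_1 the first-order sine coefficient would be exactly one half of the local derivative at u = 0; the exponential nonlinearity pushes it above that value, which is the contribution of the remainder term R discussed next:

```python
import math

w1, w2 = 5, 11   # illustrative integer frequencies
Ns = 2001        # dense sampling of one period of the search variable s
s_pts = [-math.pi + 2.0 * math.pi * (l + 0.5) / Ns for l in range(Ns)]

def H(u1, u2):
    return math.exp(u1) + u2  # nonlinear in u1, linear in u2

# First-order Fourier sine coefficient at u1's frequency (Eq. (14), r = 1)
A1 = sum(H(math.sin(w1 * s), math.sin(w2 * s)) * math.sin(w1 * s)
         for s in s_pts) / Ns

# If H were replaced by its linearization at u = 0 (slope dH/du1 = 1),
# the same computation would give exactly 0.5.
linearized_A1 = 0.5
```

The computed `A1` exceeds 0.5 because the sine coefficient averages the derivative e^{u_1} over the whole curve rather than evaluating it at the center point.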


Eq. (19) suggests that the Fourier coefficients A_j(w_i) are directly related to the marginal sensitivity of the predicted response at x_0 with respect to the i-th parameter, averaged over the uncertainties in all the parameters. This appealing relation does not, however, yield a sharp sensitivity measure unless the integrand ∂H_j(u)/∂u_i is positive throughout the parameter space. This implies that large values of A_j(w_i) do not necessarily mean large sensitivity of the model prediction at x_0. Despite this loss of absolute interpretability, the measure A_j(w_i) provides a method of calculation that is both simple to use and relatively fast, because the determination of relative sensitivity in the multidimensional parameter space is reduced to a one-dimensional integral. By comparing the A_j(w_i) calculated for different parameters in the same model, we obtain useful information on the relative sensitivity of these parameters.

Cukier et al. (1975) established a more direct relation between the sensitivity measures A_j(w_i) and the marginal sensitivity coefficients ∂H_j(u)/∂u_i. They showed, using a moment representation of the probability distribution function and a Taylor series expansion of the model function, that

A_j(w_i) = (2/b_i) ⟨∂H_j(u)/∂u_i⟩ = (2/b_i) ∂H_j(u)/∂u_i |_{u=0} + R   (20)

where ∂Hj(u)/∂ui |u=0 is the value of the marginal sensitivity coefficient when u = 0 (corresponding to Θ = Θ̂ in Eq. (17)), and R is a remainder term containing the first-order moments of the higher-order terms in the Taylor series expansion of ∂Hj(u)/∂ui. R will be small only if the higher-order terms are essentially small, meaning a predicted response function that is linear with respect to the parameters. Thus Eq. (20) indicates that the Aj(wi) are nonlinear sensitivity measures that can be valid over broad ranges of parameter uncertainty.

Eq. (20) illustrates that Aj(wi) is directly related to the local measure of sensitivity given by the marginal analysis while it is truly global in nature: Aj(wi) handles simultaneous perturbations in the parameter values and has broad ranges of validity due to its nonlinear nature. Following Saltelli et al. (2006), Aj(wi) is called a hybrid global-local measure.

The parallel between Eqs. (20) and (8) clearly reveals that the profile-based and FAST sensitivity methods both provide corrected versions of the measure provided by the marginal analysis. The PSC measure in Eq. (8) adjusts the marginal sensitivity coefficient MSC for the nonlinear dependencies among parameter estimates. Using second-order derivative information of the model function, PSC accounts for model nonlinearity in two parts: the first lies in the second-order derivatives with respect to the p − 1 parameters in the conditional estimation problem; the second lies in the second-order derivatives with respect to the parameter of interest and the remaining parameters. The only second-order derivative information not incorporated in the analysis is that corresponding to the parameter of interest alone, because the PSC is a first-order sensitivity coefficient. Eq. (20) demonstrates that the Fourier amplitude A(wi) adjusts the marginal sensitivity coefficient for the nonlinearity in the model function but not for the dependencies among parameters. However, A(wi) accounts for the full nonlinearity, using derivative information to higher orders than that accounted for by the PSC; it also incorporates second-order derivative information with respect to the parameter of interest, which is not incorporated by the PSC.

The difference between the MSC and PSC values is directly attributed to the magnitude of the correction term measuring the nonlinear dependencies among parameters. The difference between MSC and A(wi) is attributed to the magnitude of R, which involves the higher-order terms in the above expansion, without identification of the particular terms that contribute the most to the difference. Explaining the difference between PSC and A(wi), on the other hand, is more delicate: the difference can arise because of the correlations among parameter estimates or because of model nonlinearity. PSC accounts for parameter dependencies and for nonlinearity in the remaining parameters up to second-order derivative information, but it does not account for nonlinearity with respect to the parameter of interest. A(wi) accounts for model nonlinearity using derivative terms of all orders, but it does not account for parameter dependencies. Separation of the effects of dependence and nonlinearity can only be achieved by assessing the magnitudes of the second-order terms in θi and the higher-order terms in the other parameters.

For a meaningful comparison between Aj and PSCj, the Fourier coefficients are scaled using the transformation in Eq. (9); this yields:

    (se(θ̂i)/se(ĥj)) Aj(wi).    (21)
Hereafter, the notation Aj(wi) will be used to indicate the scaled Fourier coefficients.

4.2. Implementation algorithm of FAST

The FAST method can be applied to any model described by a system of algebraic or differential equations. Applying FAST requires defining the wi and Gi and evaluating the original model equations at a sufficient number of points to allow an acceptable level of accuracy in the numerical evaluation of the integrals (14) and (15). The necessary steps are the following:
1. Select the appropriate transformation functions. The transformations given in Eqs. (16) and (17) for the uniform distribution are used here.
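As an illustration of step 1, one commonly used uniform-distribution search curve is the arcsine transformation of Saltelli et al. (1999); whether this coincides with the paper's Eqs. (16) and (17) is an assumption, and the values of θ̂, se and w below are placeholders:

```python
import numpy as np

# Arcsine search curve for a uniform parameter distribution
# (Saltelli et al., 1999): as s sweeps (-pi/2, pi/2), theta oscillates over
# [theta_hat - se, theta_hat + se] and fills that interval uniformly.
def uniform_curve(theta_hat, se, w, s):
    return theta_hat + se * (2.0 / np.pi) * np.arcsin(np.sin(w * s))

s = np.linspace(-np.pi / 2, np.pi / 2, 5001)
theta = uniform_curve(1.0, 0.2, 11, s)   # placeholder theta_hat, se, frequency

print(theta.min(), theta.max())          # spans [0.8, 1.2]
```

The triangular-wave shape of arcsin(sin(·)) is what produces a flat (uniform) marginal distribution for the sampled parameter, instead of the arcsine-density pile-up at the interval ends that a plain sinusoid would give.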


2. Assign a different frequency wi to each of the parameters. The frequency set has interference factor M; we used M = 4.
3. Select the number of model evaluations, generally given by Eq. (18). For a sample of size Ns, Cukier et al. (1978) and Koda et al. (1979) gave a set of uniformly spaced, symmetric points sl using the relation

    sl = π(2l − Ns − 1)/(2Ns),    l = 1, . . . , Ns;

   the range of s is restricted to −π/2 ≤ sl ≤ π/2 because of the applicable symmetry properties (Cukier et al., 1978).
4. Solve the model equations for each parameter combination sl defined by the transformations θi = gi(Gi(sin wi sl)). This is the most computationally demanding step in the analysis.
5. For a selected x0, calculate the Fourier coefficients in Eqs. (14) and (15) by numerical evaluation. Using the symmetry properties for odd integer frequencies w, McRae et al. (1982) provided simpler computational formulas for the Fourier coefficients, namely

    Aj(wi) = (1/Ns) [ Hj(N0) + Σq=1..N0−1 (Hj(N0 + q) − Hj(N0 − q)) sin(π r wi q / Ns) ]

   where r wi is odd; otherwise Aj(wi) = 0. Similarly,

    Bj(wi) = (1/Ns) [ Hj(N0) + Σq=1..N0−1 (Hj(N0 + q) + Hj(N0 − q)) cos(π r wi q / Ns) ]

where r wi is even; otherwise Bj(wi) = 0. Here N0 = (Ns + 1)/2, the index of the parameter combination corresponding to sl = 0.
6. Scale the resulting Aj(wi) by the transformation in Eq. (21).

5. Example

The Dow Chemical Company formulated a model consisting of a set of differential equations to describe the kinetics of an isothermal batch reactor system. The model was first used by Biegler et al. (1986) to illustrate typical parameter estimation problems encountered in industry and was then re-formulated by Biegler and Tjoa (1991) as follows:

    dγ1(t)/dt = −k2 γ1 γ2 A

    dγ2(t)/dt = −k1 γ2 (x2 + 2x3 − x4 − 2γ1 + γ2 − γ3) − k2 γ1 γ2 A + (1/2) k−1 β1 (−x3 + x4 + γ1 − γ2) A

    dγ3(t)/dt = k1 (x3 − γ1 − γ3)(x2 + 2x3 − x4 − 2γ1 + γ2 − γ3) + k2 γ1 γ2 A − (1/2) k−1 β2 γ3 A

where

    A = (−2x3 + x4 + 2γ1 − γ2 + γ3) / (γ1 + β1 (−x3 + x4 + γ1 − γ2) + β2 γ3)

    β1 = K1 K2 / K3,    β2 = K2

    k1 = k10 exp[(−E1/R)(1/x1 − 1/T0)]
    k2 = k20 exp[(−E2/R)(1/x1 − 1/T0)]
    k−1 = k−10 exp[(−E−1/R)(1/x1 − 1/T0)]

and x = (x1, x2, T0)′, where x1 is the temperature of the system, x2 is the initial catalyst concentration (0.0131), and T0 = 342.15 K is a reference temperature. x3 is the initial concentration of γ1, γ1(0), and x4 is the initial concentration of γ2, γ2(0). γ(t) = (γ1(t), γ2(t), γ3(t))′ is the vector of state variables of dimension J = 3, and Θ = (k10, E1, k20, E2, k−10, E−1, β1, β2) is the vector of model parameters.

The data, given in Biegler et al. (1986), consist of three sets of experimental runs taken at different temperatures; the total number of measurements is 92. The measurements are the concentrations of four species: the first three correspond to γ1, γ2 and γ3, while the fourth is given by γ4 = γ1(0) − γ1 − γ3. There were 11 missing values of γ1 and 5 missing values of γ4.
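The rate constants above use a centered (reparameterized) Arrhenius form, which equals its pre-exponential factor exactly at the reference temperature T0. A small sketch, with placeholder k0 and E values; the gas-constant units are an assumption not stated in the text:

```python
import numpy as np

# Centered Arrhenius form: k = k0 at x1 = T0, a reparameterization that is
# commonly used to reduce the correlation between k0 and E in estimation.
T0 = 342.15   # K, reference temperature from the model description
R = 1.987     # cal/(mol K); units are an assumption, not given in the text

def rate_constant(k0, E, x1):
    return k0 * np.exp(-(E / R) * (1.0 / x1 - 1.0 / T0))

k_ref = rate_constant(2.98, 5514.0, T0)      # placeholder k0, E values
k_low = rate_constant(2.98, 5514.0, 313.15)  # evaluated at T = 40 C

print(k_ref, k_low)                          # k_ref equals k0 exactly
```

With a positive activation energy the rate constant falls below k0 at temperatures under T0, as expected physically.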


Table 1
Summary of parameter estimates for the isothermal batch reactor model.

    Parameter       Estimate ± St. Error
    (θ1) k10        2.98 ± 0.896
    (θ2) E1         5514 ± 907.31
    (θ3) k20        2.52 ± 0.39
    (θ4) E2         9596 ± 811.64
    (θ5) k−10       94.82 ± 37.51
    (θ6) E−1        12 360 ± 1820.60
    (θ7) β1         0.169 ± 0.02
    (θ8) β2         0.092 ± 0.0063

    Σ̂ = [  0.038                  ]
        [    ·       0.47         ]
        [ −0.016   −0.008    0.21 ]    (symmetric; lower triangle shown)

    d(Θ̂) = 4.596
Following Guay and McLean (1995), the data for the fourth species were excluded from the analysis because of the exact correlation of γ4 with responses γ1 and γ3, which would lead to singularity problems. We used the data for the first three species to estimate the eight parameters by the modified Gauss–Newton optimization algorithm proposed by Guay and McLean (1995). The method uses symbolic differentiation to extract second-order sensitivity information for use in parameter estimation under the determinant criterion. The calculations of the profile-based sensitivity measures are reproduced here, in addition to the FAST calculations, using MATLAB 6.5, Release 13. The resulting parameter estimates, along with their standard errors calculated using the Hessian of d(Θ) evaluated at Θ̂, together with Σ̂ and d(Θ̂), are reported in Table 1. The reported covariance matrix, Σ̂, represents an estimate of the within-run covariance structure, in accordance with the standard assumption about the disturbance term described in Section 1.

Following the synopsis described in Section 3, the vector-valued measures PSC and MSC were calculated for the eight parameters in the model at all design points in each set of experimental runs taken at a given temperature. The number of model evaluations needed to obtain convergence in the conditional estimation of the remaining parameters ranged between 14 and 18. At each iteration, evaluations of the first- and second-order sensitivity equations are needed: with three state variables and seven parameters, there are 21 first-order sensitivity equations and 84 second-order sensitivity equations. For the parameters β1 and β2 the determinant conditional estimation did not converge at first; changes to the step size of the Gauss increment were implemented to reach convergence. The resulting measures were then scaled using the transformations in Eqs. (9) and (10).
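The sensitivity-equation bookkeeping above (J = 3 states by 7 conditionally re-estimated parameters, giving 21 first-order sensitivities) can be illustrated on a toy ODE system; the right-hand side and parameter values below are hypothetical, and the sensitivities are approximated by central finite differences rather than by integrating sensitivity equations:

```python
import numpy as np

# 21 first-order sensitivities d y_j / d theta_i (J = 3 states, 7 parameters)
# for a toy linear kinetic system, via central finite differences around a
# hand-rolled RK4 integration.  Illustrative only: not the reactor model.

def rk4(f, y0, t_grid, theta):
    y = np.array(y0, dtype=float)
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = f(t0, y, theta)
        k2 = f(t0 + h / 2, y + h / 2 * k1, theta)
        k3 = f(t0 + h / 2, y + h / 2 * k2, theta)
        k4 = f(t1, y + h * k3, theta)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

def toy_rhs(t, y, theta):
    a, b, c, d, e, f_, g = theta          # 7 hypothetical rate parameters
    return np.array([-a * y[0] + b * y[1],
                     c * y[0] - (b + d) * y[1] + e * y[2],
                     f_ * y[1] - g * y[2]])

theta0 = np.array([1.0, 0.3, 0.5, 0.2, 0.1, 0.4, 0.6])
t = np.linspace(0.0, 1.0, 101)
y0 = [1.0, 0.0, 0.0]

S = np.empty((3, 7))                      # the 21 first-order sensitivities
for i in range(7):
    h = 1e-6 * max(1.0, abs(theta0[i]))
    tp, tm = theta0.copy(), theta0.copy()
    tp[i] += h
    tm[i] -= h
    S[:, i] = (rk4(toy_rhs, y0, t, tp) - rk4(toy_rhs, y0, t, tm)) / (2 * h)

print(S.shape)   # (3, 7)
```

The second-order count in the text follows the same arithmetic: 3 states times the 28 distinct parameter pairs from 7 parameters gives 84 second-order sensitivity equations.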
For the Fourier analysis, the transformations (16) and (17) for the uniform distribution were used. The lower and upper bounds for each parameter distribution are defined by θil = θ̂i − se(θ̂i) and θiu = θ̂i + se(θ̂i); these uncertainty ranges are shown in Table 1. The frequency set w = (23, 55, 77, 97, 107, 113, 121, 125) is taken from Schaibly and Shuler (1973), with order of interference M = 4. The corresponding number of model evaluations was Ns = 1001. The Fourier sine coefficient was calculated for each parameter at all time points in the three sets of runs at the three temperatures, and the resulting measures were scaled using the transformation in Eq. (21).

While a broad range of sensitivity behaviors is seen in the analysis of this model, the comparisons between the three measures of sensitivity (marginal, profile-based and FAST) exhibited common features. In general, some significant differences in the values of the three measures are observed during the early times of the reactions. Except at temperature T = 100 ◦C, these differences die out as the reactions proceed to steady state.

Fig. 1 displays the sensitivity plots of the FAST, marginal and profile-based sensitivity measures of the predicted values of an arbitrarily selected response, γ3, with respect to θ̂7 = β̂1 at the three temperatures. In the model, β̂1 represents the ratio of two equilibrium constants associated with the rapid acid–base reactions. The dashed, solid and dotted lines in each plot represent the values of the FAST, marginal and profile-based sensitivity coefficients, A3(w7), MSC3(θ̂7) and PSC3(θ̂7), respectively. In Fig. 2 the fitted concentration values for γ3 at the three temperatures are plotted. Because different time scales were used for the three sets of runs, the fitted values are plotted against a time index ranging from 1 to 32. Fig. 2 clearly shows that the reactions proceed to steady state at temperatures T = 40 ◦C (solid line, ◦) and T = 67 ◦C (solid line, ∗) but not at T = 100 ◦C (dashed line, +). The steady state is reached earlier at T = 40 ◦C than at T = 67 ◦C.
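The FAST computation described at the start of this section (the Schaibly–Shuler frequency set, M = 4, Ns = 1001, and McRae's formula from Section 4.2) can be sketched as follows; the model H here is a stand-in scalar function with placeholder θ̂ and standard errors, not the reactor model:

```python
import numpy as np

# FAST steps 2-5 (sketch): frequency set from Schaibly and Shuler (1973),
# M = 4, hence Ns = 2*M*max(w) + 1 = 1001; symmetric s_l grid; McRae et al.
# (1982) formula for the Fourier sine coefficients (r = 1, all w_i odd).
w = np.array([23, 55, 77, 97, 107, 113, 121, 125])
M = 4
Ns = 2 * M * w.max() + 1                       # 1001 model evaluations

l = np.arange(1, Ns + 1)
s = np.pi * (2 * l - Ns - 1) / (2 * Ns)        # -pi/2 < s_l < pi/2

theta_hat = np.ones(8)                         # placeholder estimates
se = np.full(8, 0.1)                           # placeholder standard errors
theta = theta_hat[:, None] + se[:, None] * np.sin(np.outer(w, s))

H = np.sum(theta**2, axis=0)                   # stand-in model response H(s_l)

N0 = (Ns + 1) // 2                             # s = 0 at l = N0
q = np.arange(1, N0)
A = np.array([(H[N0 - 1]
               + np.sum((H[N0 - 1 + q] - H[N0 - 1 - q])
                        * np.sin(np.pi * wi * q / Ns))) / Ns
              for wi in w])
print(A)                                       # one sine coefficient per parameter
```

Because the stand-in model treats the eight parameters symmetrically, the eight coefficients come out essentially equal; interference-free frequencies keep the cross-parameter contributions from aliasing into one another up to order M = 4.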


Fig. 1. Parametric sensitivities of the predicted concentrations of γ3 to βˆ 1 at the three temperatures. The dashed, solid and dotted lines join the values of A3 (w7 ), MSC 3 (θˆ7 ) and PSC 3 (θˆ7 ), respectively.

Fig. 2. The predicted concentrations of γ3 at T = 40 ◦ C (solid ◦), T = 67 ◦ C (solid ∗) and T = 100 ◦ C (dashed +).

derivative terms with respect to all other parameters. Because the three measures of sensitivity generally coincide at these two temperatures for all time points after point number 6 (corresponding to time 4.75 h) at T = 40 ◦C and point number 7 (time 5.33 h) at T = 67 ◦C, it can be concluded that the nonlinearity of the model is negligible there and that dependencies between β̂1 and the estimates of the remaining parameters have no influence on the way this parameter affects the predicted concentrations of γ3.

At the high temperature (T = 100 ◦C), the sensitivity profiles change dramatically. For all time points, the marginal sensitivity values MSC3(θ̂7) are essentially zero, suggesting that β̂1 is assessed marginally to have negligible impact on the predicted concentration of γ3. PSC3(θ̂7), however, shows significant sensitivity of γ3 to β̂1 at almost all time points, while the FAST values A3(w7) show considerable sensitivity at some of the early time points. Recall from Fig. 2 that the predicted γ3 does not proceed to steady state at this temperature; it is therefore expected to exhibit sensitive behavior to changes in the values of some or all parameters. The results of the profile-based sensitivity clearly demonstrate that the predicted γ3 has nonzero sensitivity to β̂1. According to the PSC3(θ̂7) values, re-estimation of the remaining parameters conditional on β̂1 to account for


Fig. 3. Parametric sensitivities of the predicted concentrations of γ2 to selected parameters at T = 40 ◦ C. The dashed, solid and dotted lines join the values of A2 (wi ), MSC 2 (θˆi ) and PSC 2 (θˆi ), respectively, for parameters i = 1, 3, 5, 6, 7, 8.

correlations would produce a significant change in the predicted response throughout the reaction time interval starting from time point number 7 (corresponding to time 2.5 h). This is likely a result of a shift in the steps governing the overall reaction as temperature is increased. The consequence for model development is that, given the above parameterization of the model equations, β1 continues to play a major role in the predicted concentration of γ3 at high temperature, indirectly, because of the correlations between the parameter estimates. Such an insightful result about the model would be difficult to attain using the FAST or marginal sensitivity approaches. The underlying nonlinearity of the model exhibits itself only at some of the early times of the reaction, as shown by the values of A3(w7).

The sensitivity plots in Fig. 3 show the values of the three sensitivity measures for γ2 with respect to k̂10, k̂20, k̂−10, Ê−1, β̂1 and β̂2, respectively, at T = 40 ◦C. For this response and temperature, the sensitivity plots of Ê1 and Ê2 are identical to those of k̂10 and k̂20, respectively, and hence are omitted from the figure. The plots show that the three measures become essentially zero after time point number 9 (corresponding to time 13.05 h). This time corresponds to the point where approximately 50% of the initial concentration of this species has been consumed. This suggests that the predicted concentrations of γ2 for the latter portion of the experiment have no significant sensitivity to any of the model parameters, regardless of the way in which the parameter values are changed. That is to say, for parameter estimation, increasing the level of precision in some or all of the parameter estimates would have minimal impact on the precision of the predicted concentrations at these time values. In the earlier times of the reaction (before 13.05 h), the relationships between the three measures of sensitivity vary according to the parameter considered.
For instance, for k̂10 the values of the FAST measure closely agree with the profile-based sensitivity coefficients, both showing low sensitivity of γ2 to this parameter. This behavior persists until time point 7, after which the FAST measure becomes almost identical to the marginal coefficient; the three measures coincide in value from time point 9 onwards. For Ê−1, the profile-based sensitivity results before point number 9 differ significantly from both the FAST and marginal sensitivity measures in value and behavior. For k̂−10, the three sensitivity measures agree in value and behavior throughout the reaction time interval. All of these plots share a common feature, however: the magnitude of the three sensitivity measures is largest at time point 7, indicating that improving the precision of any of the parameter estimates will produce the greatest improvement in the precision of the predicted response at this point. It is worth mentioning that this particular sensitivity behavior of the predicted concentration at time point 7 is not observed at the other two temperatures.

The sensitivity plots of Figs. 1 and 3 provide samples of the range of sensitivity behaviors observed for this example using the three sensitivity measures, and a number of general observations can be made. In order to have a comprehensive understanding of the effects of the parameters on the predicted responses, a significant amount of information must be considered: for example, sensitivity plots visualizing the impact of each of the eight parameters on each of the three responses at three different temperature settings. In addition, the sensitivity plots for a parameter of interest explicitly show where in the experimental design range the differences between the three measures of sensitivity are significant, implying a profound effect of parameter dependencies and/or model nonlinearity.
The range of sensitivity behavior changes dramatically and, given the complexity of the model, it is difficult to identify which parameters are


likely to have significant effects on the predicted concentrations simply by examining the model equations alone. Using an appropriate analysis procedure, computing the sensitivity coefficients in the nonlinear parameter estimation and plotting them against the design points provides a concise visual way to investigate the sensitivity behavior of the parameter estimation problem.

6. Conclusions

The reliability and accuracy of parametric sensitivity results greatly depend on the perturbation scheme used to vary the parameter values and on the underlying assumptions about the model and/or the parameters. In this paper, we have conducted a comparison between three methods of parametric sensitivity in a multi-response nonlinear parameter estimation setting. The three methods investigated are: the conventional marginal sensitivity coefficient, the profile-based parametric sensitivity measure developed by Sulieman et al. (2001, 2004), and the classical Fourier Amplitude Sensitivity Test (FAST). The measures provided by these three methods are first-order sensitivity measures that have different degrees of accuracy and validity. The marginal sensitivity coefficient is a local measure that fails to account for model nonlinearity and parameter dependencies. The classical FAST is a global method that accounts for model nonlinearity up to all derivative orders but fails to account for parameter estimate dependencies. This article focuses on the Fourier sine amplitude as a first-order sensitivity measure. The results show that the measure is directly linked to the local marginal sensitivity coefficient averaged over all ranges of parameter uncertainty, and so it can be considered a hybrid global-local sensitivity measure. The profile-based sensitivity method has been shown to provide a more reflective picture of the sensitivity behavior because it accounts for both parameter estimate correlations and model nonlinearity up to second-order derivatives in the remaining parameters.
Although the profile-based sensitivity coefficient, defined as the total derivative of the model function with respect to the parameter of interest, is an inherently local measure, it has a broader range of validity than the marginal sensitivity. This is because, through the re-estimation of the remaining model parameters conditional on the given values of the parameter of interest, it incorporates simultaneous perturbations in parameter values and accounts for parameter dependencies. In addition, through the use of second-order derivative information, profile-based sensitivity accounts for model nonlinearity.

The analysis results of the Dow chemical reaction model show that when both parameter estimate dependencies and model nonlinearity are negligible, the three measures produce sensitivity behaviors that are very similar. The comparison exercise carried out in this work also helps identify settings in the experimental design range for which the contributions of parameter dependencies and/or nonlinearity to the sensitivity behavior of the associated predicted responses are predominant.

Acknowledgments

The financial support of the American University of Sharjah is gratefully acknowledged. We also thank the anonymous reviewers for their helpful comments.

References

Bates, D.M., Watts, D.G., 1988. Nonlinear Regression Analysis and its Applications. Wiley, New York.
Bates, D.M., Watts, D.G., 1987. A generalized Gauss–Newton procedure for multi-response parameter estimation. SIAM Journal on Scientific and Statistical Computing 7 (1), 49–55.
Biegler, L.T., Tjoa, I.B., 1991. Simultaneous solution and optimization strategies for parameter estimation of differential-algebraic equation systems. Industrial & Engineering Chemistry Research 30, 376–385.
Biegler, L.T., Damiano, J.J., Blau, G.E., 1986. Nonlinear parameter estimation: A case study comparison. AIChE Journal 32 (1), 29–45.
Box, G.E.P., Draper, N., 1965. The Bayesian estimation of common parameters from several responses. Biometrika 52, 355–365.
Cacuci, D.G., Ionescu-Bujor, M., 2004. A comparative review of sensitivity and uncertainty analysis of large-scale systems II: Statistical methods. Nuclear Science and Engineering 147, 204–217.
Colonna, G., Longo, S., Esposito, F., Capitelli, M., 1994. Fourier-transform sensitivity analysis: Application to XeCl self-sustained discharge-laser kinetics. Applied Physics B: Lasers and Optics 59 (1), 61–72.
Cukier, R.I., Levine, H.B., Shuler, K.E., 1978. Nonlinear sensitivity analysis of multiparameter model systems: Review. Journal of Computational Physics 26, 1–42.
Cukier, R.I., Schaibly, J.H., Shuler, K.E., 1975. Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. III. Analysis of the approximations. Journal of Chemical Physics 63 (3), 1140–1149.
Cukier, R.I., Fortuin, C.M., Shuler, K.E., Petschek, A.G., Schaibly, J.H., 1973. Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. I. Theory. Journal of Chemical Physics 59 (8), 3873–3878.
Deflandre, A., Williams, R.J., Elorza, F.J., Mira, J., Boorman, D.B., 2006. Analysis of the QUESTOR water quality model using a Fourier amplitude sensitivity test (FAST) for two UK rivers. Science of the Total Environment 360 (1–3), 290–304.
Dunker, A.M., 1984. The decoupled direct method for calculating sensitivity coefficients in chemical kinetics. Journal of Chemical Physics 81, 2385–2393.
Guay, M., McLean, D.D., 1995. Optimization and sensitivity analysis for multiresponse parameter estimation in systems of ordinary differential equations. Computers & Chemical Engineering 19 (12), 1271–1285.
Haaker, M.P.R., Verheijen, P.J., 2004. Local and global sensitivity analysis for a reactor design with parameter uncertainty. Chemical Engineering Research and Design 82 (A5), 591–598.
Koda, M., McRae, G.J., Seinfeld, J.H., 1979. Automatic sensitivity analysis of kinetic mechanisms. International Journal of Chemical Kinetics 11, 427–444.
Liu, Y., Avissar, R., 1996. Sensitivity of shallow convective precipitation induced by land surface heterogeneities to dynamical and cloud microphysical parameters. Journal of Geophysical Research 101 (D3), 7477–7497.
Lu, Y., Mohanty, S., 2001. Sensitivity analysis of a complex proposed geologic waste disposal system using the Fourier amplitude sensitivity test method. Reliability Engineering & System Safety 72, 275–291.


McRae, G.J., Tilden, J.W., Seinfeld, J.H., 1982. Global sensitivity analysis—A computational implementation of the Fourier Amplitude Sensitivity Test (FAST). Computers & Chemical Engineering 6 (1), 15–25.
Rodriguez-Camino, E., Avissar, R., 1998. Comparison of three land-surface schemes with the Fourier amplitude sensitivity test (FAST). Tellus 50A (3), 313–332.
Saltelli, A., Ratto, M., Tarantola, S., Campolongo, F., 2006. Sensitivity analysis practices: Strategies for model-based inference. Reliability Engineering & System Safety 91, 1109–1125.
Saltelli, A., Tarantola, S., Campolongo, F., Ratto, M., 2004. Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models. Wiley, New York.
Saltelli, A., Chan, K., Scott, E.M., 2000a. Sensitivity Analysis. Wiley, New York.
Saltelli, A., Tarantola, S., Campolongo, F., 2000b. Sensitivity analysis as an ingredient of modeling. Statistical Science 15 (4), 337–395.
Saltelli, A., Tarantola, S., Chan, K., 1999. A quantitative model-independent method for global sensitivity analysis of model output. Technometrics 41, 39–56.
Seber, G.A.F., Wild, C.J., 1989. Nonlinear Regression. Wiley, New York.
Schaibly, J.H., Shuler, K.E., 1973. Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. II. Applications. Journal of Chemical Physics 59 (8), 3879–3888.
Sulieman, H., McLellan, P.J., Bacon, D.W., 2004. A profile-based approach to parametric sensitivity in multiresponse regression models. Computational Statistics & Data Analysis 45, 721–740.
Sulieman, H., McLellan, P.J., Bacon, D.W., 2001. A profile-based approach to parametric sensitivity analysis of nonlinear regression models. Technometrics 43 (4), 425–433.
Xu, C., Gertner, G., 2007. Extending a global sensitivity analysis technique to models with correlated parameters. Computational Statistics & Data Analysis 51 (12), 5579–5590.