Robust Experiment Design via Maximin Optimization

LUC PRONZATO AND ERIC WALTER

Laboratoire des Signaux et Systèmes, CNRS/École Supérieure d'Électricité, Plateau du Moulon, F-91190 Gif-sur-Yvette, France

Received 3 July 1987; revised 11 January 1988
ABSTRACT

When designing an experiment for estimating the parameters of a nonlinear model, the uncertainty in the nominal value of the parameters has to be taken into account. For this purpose a maximin methodology is used, where the parameters are assumed to belong to a known admissible domain, without any other hypothesis on their distribution. In some cases robust maximin design can be transformed into a simple problem of D-optimal design. When the maximin optimization problem cannot be avoided, an algorithmic procedure is proposed by which an approximate maximin-optimal experiment can be designed at a reasonable cost.
1. INTRODUCTION
Before collecting data to be used for estimating the parameters of a model, it is advisable to select the best feasible experiment (in a sense to be specified) to be performed on the system. This choice of experimental procedure, known in the literature as experiment design, is especially important when there are very few data to be collected and when the measurements are highly corrupted by noise, as is often the case in the biosciences. Two main questions have to be considered when designing an experiment. First, one has to choose a way of comparing any two given experiments so as to decide which is better. This is the criterion definition problem. Second, one needs an algorithm to find the best feasible experiment in the sense of this criterion (or at least a good approximation of it). The existence of reasonably simple algorithmic procedures thus appears as a sine qua non for the practical use of sophisticated criteria. The most commonly used, D-optimal design [1-4], consists in maximizing the determinant of the Fisher information matrix. This can be performed relatively simply by using a specific algorithm when one wants to optimize a design measure [1, 4], or by resorting to classical nonlinear programming algorithms when one is interested in discrete designs. In both cases, the D-optimal experiment generally depends on the value of the parameters to be estimated, which is of course unknown. The usual practice is then to design an experiment that is D-optimal for some reasonable nominal value of these parameters. Since the uncertain character of this nominal value is not taken into account, this can hardly be considered a robust approach, and it has raised some doubts among experimenters about the practical interest of optimal experiment design.

Several approaches have been proposed in the literature to overcome this difficulty. One of them is to design experiments in a sequential way [5-8] by alternating estimation of the parameters and experiment design. Each estimation step improves the knowledge of the system parameters, and this knowledge can then be used to improve the quality of the next experiment to be performed. However, it is not always possible to conduct multiple experiments on the same subject, and one is often interested in designing a single experiment. Moreover, when using sequential design, each experiment to be performed must be chosen at best, given the available information on the parameters. For these reasons, increasing attention has been devoted in recent years to nonsequential approaches that allow the determination of a single optimal experiment while taking into account some characterization of the parameter uncertainty. Two methodologies seem particularly attractive for that purpose. The first assumes that the prior probability density function of the parameters is known, and the criterion to be optimized is the mathematical expectation of some classical nonrobust optimality criterion over all the possible values of the parameters [9, 10]. We have described [11, 12] an algorithmic procedure based on a stochastic approximation technique that makes optimal discrete design with this approach almost as simple as with conventional nonrobust criteria. However, an experiment that is good on average may prove very poor for some particular values of the parameters associated with very low probability densities.

When one wants to avoid such a situation, one may prefer the second methodology, which assumes that the parameters belong to some known prior domain, on which no probability function needs to be defined. One would then like to optimize the worst possible performance of the experiment over the prior domain for the parameters [4, 9, 13-15]. The purpose of the present paper is to describe such a maximin-type approach, and to suggest some tools that can be used to design experiments that are optimal in the maximin sense at a reasonable computational cost. Section 2 briefly states the problem and defines the maximin criterion to be considered. Section 3 presents some properties of the corresponding optimal designs. Section 4 is devoted to maximin design for exponential regression models, which are frequently used in many fields, including the biosciences. Section 5 describes an algorithmic procedure for the determination of the optimal solution of the maximin problem. Various examples are given which allow a comparison with results obtained by optimizing mathematical expectations [12].

MATHEMATICAL BIOSCIENCES 89:161-176 (1988)
©Elsevier Science Publishing Co., Inc., 1988, 52 Vanderbilt Ave., New York, NY 10017. 0025-5564/88/$03.50
2. PROBLEM STATEMENT
Denote by y the N-dimensional vector of all available measurements on the process, by θ the p-dimensional vector of the parameters to be estimated, and by e the n-dimensional vector describing the experimental situation (for example, if one is interested in optimal sampling schedules, e may consist of the times at which data are collected). Under not too restrictive conditions, the maximum-likelihood estimator of the parameters based on y is known [10] to be asymptotically normally distributed, with a mean equal to the true value θ* of the parameters and a covariance M⁻¹(θ*, e), where M(θ*, e) is the Fisher information matrix, given by

M(θ*, e) = E_{y|θ*,e}{ [∂ ln π(y|θ, e)/∂θ] [∂ ln π(y|θ, e)/∂θᵀ] |_{θ=θ*} }.   (1)

In (1), π(y|θ, e) is the conditional probability density function of the measurements y when the model parameters take the value θ and the experimental conditions are described by e, and E_{y|θ*,e}{·} denotes the mathematical expectation of the bracketed quantity with respect to y, conditional on θ* and e. In what follows, e is assumed to belong to an admissible domain E, defined by

E = {e ∈ ℝⁿ | c_i(e) ≤ 0, i = 1, ..., l}.   (2)
The output error ε(θ, e), defined as

ε(θ, e) = y − y_m(θ, e),   (3)

where y_m(θ, e) is the vector of the model outputs associated with the observations y, is assumed to be such that ε(θ*, e) is a zero-mean noise whose probability density function f(ε) is independent of θ*. It is then easy to show that the Fisher information matrix can be written as

M(θ*, e) = Xᵀ(θ*, e) Σ⁻¹(e) X(θ*, e),   (4)

with

X(θ*, e) = ∂y_m(θ, e)/∂θᵀ |_{θ=θ*} = −∂ε(θ, e)/∂θᵀ |_{θ=θ*}   (5)

and

Σ⁻¹(e) = ∫ [∂ ln f(ε)/∂ε] [∂ ln f(ε)/∂εᵀ] f(ε) dε.   (6)
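For white Gaussian noise, (4) and (5) reduce to a product of output-sensitivity Jacobians that is easy to check numerically. The following sketch is our own illustration, not from the paper (function names and numerical values are ours): it computes M(θ, e) for a scalar exponential model with a single sample, using a central finite-difference Jacobian, and compares it with the closed form t² exp(−2θt)/σ² that appears in Example 1 below.

```python
import numpy as np

def model(theta, times):
    # Scalar exponential model y_m(theta, t) = exp(-theta t)
    return np.exp(-theta * times)

def jacobian(theta, times, h=1e-6):
    # Central finite-difference approximation of X = dy_m/dtheta, Equation (5)
    return ((model(theta + h, times) - model(theta - h, times)) / (2.0 * h)).reshape(-1, 1)

def fisher(theta, times, sigma2=1.0):
    # M = X' Sigma^{-1} X for white noise of variance sigma2, Equations (4) and (7)
    X = jacobian(theta, times)
    return X.T @ X / sigma2

theta, t = 2.0, np.array([0.5])
M = fisher(theta, t)
M_exact = t[0]**2 * np.exp(-2.0 * theta * t[0])   # closed form for one sample, sigma = 1
```

For models with several outputs or parameters the same two functions apply unchanged, with `jacobian` returning an N×p matrix.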
If the noise is also assumed to be white, then Σ(e) is diagonal and one obtains the well-known expression

M(θ*, e) = Σ_{i=1}^{N} (1/w_i) [∂y_{m,i}(θ, e)/∂θ] [∂y_{m,i}(θ, e)/∂θᵀ] |_{θ=θ*},   (7)

where y_{m,i}(θ, e) is the ith component of y_m(θ, e), and where w_i is the ith diagonal term of Σ(e). For instance, when the noise components are additive, white, and normally distributed N(0, σ²), w_i is equal to σ². It should be noted that, except when y_m(θ, e) is linear in θ, the Fisher information matrix depends on θ*, which is of course unknown. The asymptotic normality of maximum-likelihood estimators is the rationale for the choice of scalar functions of M(θ, e) as criteria for experiment design. The most classical, D-optimal experiment design, would ideally consist in maximizing the criterion

j_d(θ*, e) = det M(θ*, e)   (8)

with respect to e. Since θ* is not known, we need to use a criterion that does not depend upon it. The maximin approach to be considered here uses as such the following criterion (to be maximized):

j_mmd(e) = min_{θ∈Θ} det M(θ, e),   (9)
where Θ is the (supposed known) prior admissible set for θ. An experiment will be said to be MMD-optimal, and denoted by e_mmd, if it satisfies

e_mmd = Arg max_{e∈E} j_mmd(e),   (10)

or equivalently

e_mmd = Arg max_{e∈E} j_d(θ_s(e), e),   (11)

where

θ_s(e) = Arg min_{θ∈Θ} j_d(θ, e).   (12)

When θ_s does not depend on e, the maximin problem reduces to a D-optimal design problem (see Section 4 for examples).

Example 1. Suppose a single observation can be performed on a process whose output is described by

y(t) = exp(−θt) + ε(t),   (13)
where ε(t) is a white noise, normally distributed N(0, σ²). The Fisher information matrix for the experiment e = t is then given by

M(θ, t) = σ⁻² t² exp(−2θt).   (14)

The D-optimal experiment associated with a nominal value θ⁰ for the parameter θ is given by

t_d = 1/θ⁰.   (15)

If the prior admissible domain for θ is [a, b], then the MMD-optimal experiment is given by

t_mmd = 1/b.   (16)

Another maximin criterion [13] involves the D-efficiency of a design, and can be written as

j_mmde(e) = min_{θ∈Θ} [det M(θ, e)/det M(θ, e_d(θ))]^{1/p},   (17)

where p is the number of unknown parameters and e_d(θ) is the D-optimal experiment associated with the parameter vector θ (see also [24]). An experiment will be said to be MMDe-optimal if it satisfies
e_mmde = Arg max_{e∈E} j_mmde(e).   (18)

Example 2. The MMDe-optimal sampling time for the problem of Example 1 is given by

t_mmde = ln(b/a)/(b − a).   (19)

Examples 1 and 2 illustrate the fact that MMD-optimal design and MMDe-optimal design lead to completely different policies. It is therefore of interest to compare their respective merits. It must first be noted that the evaluation of the D-efficiency (17) for any given value of θ requires the computation of the associated D-optimal design e_d(θ), which makes the computation of an MMDe-optimal experiment far more complex than that of an MMD-optimal experiment. For a further comparison, Figure 1 presents the D-efficiency of t_mmd and t_mmde, given by (16) and (19), as a function of θ when θ belongs to Θ = [1, 10]. As could be expected, the efficiency of the MMDe-optimal experiment appears to be more evenly distributed over Θ than that of the
FIG. 1. D-efficiency of the MMD- and MMDe-optimal sampling times given by (16) and (19) as a function of θ; t_mmd = 0.1, t_mmde = 0.256.
MMD-optimal experiment. One could hastily conclude that the sampling time t_mmde is more robust than t_mmd. However, if one is interested in the (asymptotic) uncertainty in the estimate of θ as a function of its true value, it is illuminating to plot det M⁻¹(θ, e) as a function of θ for both sampling times (Figure 2). This evidences the fact that there is little to be gained in terms of uncertainty in choosing MMDe-optimality when θ tends to 1, whereas the improvement obtained in choosing MMD-optimality when θ tends to 10 is enormous. As far as reducing the worst possible uncertainty in θ is concerned, MMD-optimality appears to have definite advantages over MMDe-optimality.
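The comparison between the two sampling times is easy to reproduce numerically. The sketch below is our own illustration (σ = 1, variable names ours): it computes t_mmd and t_mmde for Θ = [1, 10] and evaluates the worst-case value of det M(θ, t) over Θ for each design.

```python
import numpy as np

def det_M(theta, t):
    # det M(theta, t) = t^2 exp(-2 theta t) for Example 1, sigma = 1, Equation (14)
    return t**2 * np.exp(-2.0 * theta * t)

a, b = 1.0, 10.0                      # prior admissible interval for theta
t_mmd = 1.0 / b                       # MMD-optimal sampling time, Equation (16)
t_mmde = np.log(b / a) / (b - a)      # MMDe-optimal sampling time, Equation (19)

theta = np.linspace(a, b, 1001)       # grid over Theta = [1, 10]
worst_mmd = det_M(theta, t_mmd).min()
worst_mmde = det_M(theta, t_mmde).min()
```

With a = 1 and b = 10 this gives t_mmd = 0.1 and t_mmde ≈ 0.256, the values of Figures 1 and 2, and the worst-case determinant of the MMD-optimal time is the larger of the two, as the discussion above predicts.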
3. PROPERTIES OF MMD-OPTIMAL DESIGNS

3.1. INFLUENCE OF A REPARAMETRIZATION OF THE MODEL
Among the attractive properties of D-optimal design is the fact that a D-optimal experiment is invariant under any nondegenerate transformation applied to the model parameters [2, 3]. As an example, suppose one is interested in designing an experiment for estimating the (micro) parameters
FIG. 2. det M⁻¹(θ, t) as a function of θ for the MMD- and MMDe-optimal sampling times given by (16) and (19); t_mmd = 0.1, t_mmde = 0.256.

θ of a compartmental model

dx/dt = A(θ)x + B(θ)u,   x(0) = x₀(θ),
y_m = C(θ)x,   (20)
where the input u is fixed (e.g. an impulse or step function). Assume the micro parameters are structurally locally identifiable, and an analytical expression for y can be calculated (e.g. a sum of real exponentials). One can then merely compute the D-optimal experiment for estimating the (macro) parameters involved in this analytical expression, and be sure that the D-optimal experiment for the macro parameters is also D-optimal for the micro parameters [provided, of course, the numerical values for the micro and macro parameters correspond to the same solution y(t)]. The numerical simulation of (20) can then be avoided when designing a D-optimal experiment for the micro parameters, and this reduces the computational cost quite considerably. It is therefore of interest to know whether the same type of property holds true for MMD-optimal designs. Unfortunately,
this is not always so, and MMD-optimal experiments are generally changed when a nondegenerate transformation is applied to the model parameters. To prove it, consider a reparametrization of the model defined by λ(θ), and assume θ transforms into λ. When estimating θ, the MMD-optimality criterion (9) can be written as

j_mmd(e) = min_{θ∈Θ} { [det(∂λ(θ)/∂θᵀ)]² det M(λ(θ), e) },   (21)

and unless the transformation λ(θ) is linear, the experiment maximizing (21) generally differs from the MMD-optimal experiment for the estimation of λ, which maximizes min_{λ∈Λ} det M(λ, e). This can be illustrated by considering Example 1 again.

Example 1 (continued). Let θ = λ² in the model (13), so that

y(t) = exp(−λ²t) + ε(t).   (22)

The Fisher information matrix relative to λ for a single observation at time t is given by

M(λ, t) = 4σ⁻² λ² t² exp(−2λ²t).   (23)
If the prior admissible domain is [0, b] for θ, and hence [0, b^{1/2}] for λ, then the MMD-optimal sampling time is still given by (16) when the parameter to be estimated is θ. On the other hand, there is no MMD-optimal sampling time when the parameter to be estimated is λ, for min_λ det M(λ, t) = 0 for any t.

3.2. REPLICATED EXPERIMENTS
It is well known that nonminimal D-optimal designs are most often obtained by replicating the experiment that corresponds to a minimal D-optimal design. This property has received a considerable amount of attention (see e.g. [3, 16]).

Example 1 (continued). If N measurements are allowed, the D-optimal sampling schedule consists of N independent samples taken at t_d as given by (15).

The following theorem indicates that there are situations where the MMD-optimal experiment can also be expected to consist of replications of a minimal design.

THEOREM 1

(i) If θ_s(e) as given by (12) does not depend on e, then the MMD-optimal experiment e_mmd is D-optimal for θ = θ_s.
(ii) If the criterion j_d(·, ·) admits a saddle point at (θ_s, e_s) ∈ Θ×E, then e_s is both MMD-optimal and D-optimal for θ = θ_s.
Proof. Part (i): trivial from (11). Part (ii): since (θ_s, e_s) is a saddle point, any (θ, e) in Θ×E satisfies

j_d(θ_s, e) ≤ j_d(θ_s, e_s) ≤ j_d(θ, e_s).   (24)

An MMD-optimal experiment satisfies

min_{θ∈Θ} j_d(θ, e_mmd) ≥ min_{θ∈Θ} j_d(θ, e_s) = j_d(θ_s, e_s).   (25)

Now, min_{θ∈Θ} j_d(θ, e_mmd) ≤ j_d(θ_s, e_mmd), and from (24)

j_d(θ_s, e_mmd) ≤ j_d(θ_s, e_s).   (26)

Equations (25) and (26) imply

min_{θ∈Θ} j_d(θ, e_s) = min_{θ∈Θ} j_d(θ, e_mmd),   (27)

so that e_s is MMD-optimal. From (24), e_s is also D-optimal for θ = θ_s.

As a consequence of Theorem 1, the MMD-optimal experiment will present the same properties of replication as the D-optimal experiment when the conditions of part (i) or (ii) are satisfied.

Example 1 (continued). If the admissible domain for θ is [a, b], then the solution of (12) is given by

θ_s = b,   (28)

whatever the number of samples allowed. Since θ_s does not depend on e, Theorem 1, part (i), applies, and the MMD-optimal experiment for N samples consists of N independent replications of a measurement at time 1/b. Note that if MMDe-optimality were used, one would no longer have replication [13].

3.3. MODELS THAT ARE PARTLY LINEAR WITH RESPECT TO THE PARAMETERS
Even if they are nonlinear in the parameters, models often have outputs that depend linearly on a subset of these parameters (see e.g. Section 4). The following theorem is then of interest.

THEOREM 2

Suppose the following hypotheses are satisfied:

H1: The model output satisfies

y_{m,i}(θ) = g_iᵀ(θ″, e) θ′,   i = 1, ..., N,   (29)

with

θ = (θ′ᵀ, θ″ᵀ)ᵀ,   (30)

where the jth entry of g_i(θ″, e) only depends on θ″_j.
H2: The noise is additive, white, and distributed independently of the parameters of the model.
H3: The admissible space for the parameters is such that the parameters in θ′ and in θ″ can be chosen independently.

Then the MMD-optimal experiment can be obtained with all components of θ′ fixed at 1.

Proof. Taking H1 and H2 into account, the Fisher information matrix can be written [17]

M(θ, e) = [I_{p/2} 0_{p/2}; 0_{p/2} D(θ′)] M((u_{p/2}ᵀ, θ″ᵀ)ᵀ, e) [I_{p/2} 0_{p/2}; 0_{p/2} D(θ′)],   (31)

where I_{p/2} is the (p/2)×(p/2) identity matrix, 0_{p/2} is the (p/2)×(p/2) null matrix, u_{p/2} is the (p/2)-dimensional vector with all entries equal to 1, and D(θ′) is the diagonal matrix diag{θ′_j, j = 1, ..., p/2}. The MMD-optimal experiment can therefore be obtained as

e_mmd = Arg max_{e∈E} ( min_{θ∈Θ} [Π_{j=1}^{p/2} (θ′_j)²] det M((u_{p/2}ᵀ, θ″ᵀ)ᵀ, e) ).   (32)

Taking H3 into account, one obtains from (32)

e_mmd = Arg max_{e∈E} ( min_{θ″} det M((u_{p/2}ᵀ, θ″ᵀ)ᵀ, e) ).   (33)

The search for an MMD-optimal experiment can thus often be conducted in a parameter space restricted to θ″, which results in considerable computational savings when using an algorithm such as that described in Section 5. Maximin or minimax criteria for experiment design were proposed some time ago [2, 4, 13], but the complexity of their optimization appears as a tremendous obstacle to their practical use. As will be seen in the next section, it is sometimes possible to take advantage of the model structure and parameter constraints to transform an MMD-optimal design problem into a simple D-optimal design problem for some known value of θ.
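The factorization used in the proof implies det M(θ, e) = [Π_j θ′_j]² det M((u_{p/2}ᵀ, θ″ᵀ)ᵀ, e), which can be checked numerically on a biexponential model of the kind studied in Section 4. The sketch below is our own illustration (unit noise variance; the sampling times and parameter values are hypothetical, chosen only so that the design matrix is well conditioned):

```python
import numpy as np

def det_M(A, B, l1, l2, times):
    # det of the Fisher matrix for y = A exp(-l1 t) + B exp(-l2 t), sigma = 1
    t = np.asarray(times)
    e1, e2 = np.exp(-l1 * t), np.exp(-l2 * t)
    # Jacobian with respect to (A, B, l1, l2); the last two columns scale with A and B
    X = np.column_stack([e1, e2, -A * t * e1, -B * t * e2])
    return np.linalg.det(X.T @ X)

t = np.array([0.0, 0.5, 1.5, 3.0])   # four hypothetical sampling times
A, B, l1, l2 = 2.0, 3.0, 2.0, 0.5    # hypothetical parameter values
lhs = det_M(A, B, l1, l2, t)
rhs = (A * B)**2 * det_M(1.0, 1.0, l1, l2, t)
```

Since the factor (AB)² does not depend on e, maximizing the worst-case determinant with the linear parameters fixed at 1 yields the same design, which is the content of Theorem 2.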
4. EXPONENTIAL REGRESSION MODELS
Exponential regression models play an important role in physics and in the biosciences. They correspond, for example, to the response of a compartmental model described by (20) to a zero input and a nonzero initial condition. Many of the models used in pharmacokinetics are sums of exponentials. This is why the result of Melas [18], which we recall now, seems of special importance.

THEOREM 3 [18]

Suppose the ith model output is given by

y_{m,i}(θ) = Σ_{j=1}^{p/2} θ′_j exp(−θ″_j e(i)),   (34)

where e(i) is a scalar characterizing the experimental situation for the ith measurement (for example the ith sampling time). Suppose the noise is additive, white, and distributed independently of the parameters of the model. Suppose the admissible domain for the nonlinear parameters is given by

Θ″ = {θ″ | θ″_1 ≤ θ″_max; θ″_j − θ″_{j+1} ≥ δ_j, j = 1, ..., p/2 − 1},   (35)

where θ″_max and the δ_j are known. Then for any experiment e with at least p measurements, θ_s given by (12) is such that

θ″_s = (θ″_max, θ″_max − δ_1, θ″_max − (δ_1 + δ_2), ..., θ″_max − (δ_1 + ... + δ_{p/2−1}))ᵀ.   (36)

If the conditions of Theorem 3 are satisfied, and if the admissible space for the parameters is such that the parameters in θ′ can be chosen independently from those in θ″, then, because of Theorem 2, Theorem 1(i) applies, and the MMD-optimal design becomes a classical D-optimal design for θ′ = u_{p/2} and θ″ given by (36). This MMD-optimal design will thus possess the same property of replicate samples as a D-optimal design.
Example 3. Consider the model

y_m(θ, t_i) = A exp(−λ_1 t_i) + B exp(−λ_2 t_i),   (37)

where θ = (A, B, λ_1, λ_2)ᵀ is the vector of the (macro) parameters to be estimated. Suppose the linear parameters A and B satisfy

A > 0,   B > 0,   (38)

and the nonlinear parameters λ_1 and λ_2 satisfy

λ_1 ≤ 10,   λ_1 − λ_2 ≥ 1.   (39)

Then the MMD-optimal experiment with four measurements coincides with the D-optimal experiment for

θ_s = (1, 1, 10, 9)ᵀ.   (40)

Optimizing the D-optimality criterion for this value of θ with a classical nonlinear programming algorithm, one obtains

t_mmd = (0, 0.049, 0.174, 0.409)ᵀ.   (41)

If 4q measurements are now allowed, the MMD-optimal experiment for the macro parameters consists in replicating q measurements at the times given by (41). To compare this result with what can be obtained by a stochastic approach, let us assume that λ_1 and λ_2 are uniformly distributed in [1, 10] with the restriction λ_1 − λ_2 ≥ 1. The approach described in [11, 12] then leads to the following EID-optimal design:

t_eid = (0, 0.054, 0.193, 0.447)ᵀ,   (42)

which is rather close to t_mmd as given by (41). The optimal experiment in the maximin sense can thus be expected to perform rather well on average.

While for some exponential regression models it is thus possible to transform MMD-optimal design into a conventional problem of D-optimal design, the maximin problem generally has to be handled as such. When such a transformation is not possible, the relaxation algorithm described in the next section can be used to find an approximate solution of the MMD-optimal design problem at a reasonable computational cost.
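The D-optimal computation behind (41) can be sketched with a standard local optimizer in place of the adaptive random search of [20, 21]; the result may then only be a local optimum, and the starting schedule, options, and function names below are our own assumptions, not the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_det_M(times, A=1.0, B=1.0, l1=10.0, l2=9.0):
    # -log det M(theta_s, e) for y = A exp(-l1 t) + B exp(-l2 t), sigma = 1
    t = np.sort(np.abs(np.asarray(times)))   # keep the schedule nonnegative and ordered
    e1, e2 = np.exp(-l1 * t), np.exp(-l2 * t)
    X = np.column_stack([e1, e2, -A * t * e1, -B * t * e2])
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return np.inf if sign <= 0 else -logdet

t0 = np.array([0.0, 0.06, 0.2, 0.5])         # rough starting schedule (ours)
res = minimize(neg_log_det_M, t0, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
t_opt = np.sort(np.abs(res.x))
```

Working with log det via `slogdet` avoids underflow, since the determinant itself can be very small for nearly collinear sensitivity columns such as those produced by λ_1 = 10 and λ_2 = 9.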
5. MAXIMIN OPTIMIZATION VIA RELAXATION
There are very few general-purpose algorithms for solving maximin or minimax problems in the literature, and even fewer subroutines available in standard scientific libraries. Most of them are restricted to situations where one of the two vector arguments with respect to which the optimization is performed belongs to a finite set of values. They therefore do not apply here, where both e and θ belong to infinite sets. Shimizu and Aiyoshi have proposed [19] a relaxation procedure involving the iterative construction of a set of representative values for one of the vector arguments (here θ), and the solution of a series of maximin or minimax problems where θ is restricted to this finite set of representative values. The initial maximin optimization problem (11)-(12) can be viewed as the maximization of the scalar α, subject to the constraint

min_{θ∈Θ} det M(θ, e) ≥ α.   (43)

This inequality is equivalent to

det M(θ, e) ≥ α,   ∀θ ∈ Θ,   (44)

and the maximin problem is an optimization problem with respect to e, subject to an infinite number of constraints. The procedure consists in relaxing the problem by taking into account only a finite number of constraints. The algorithm can be summarized as follows:

Step 1: Choose an initial parameter vector θ^(1), and define a first set of representative values S^(1) = {θ^(1)}. Set k = 1.
Step 2: Solve the current relaxed maximin problem

e^(k) = Arg max_{e∈E} min_{θ∈S^(k)} det M(θ, e).   (45)

Step 3: Solve the minimization problem

θ^(k+1) = Arg min_{θ∈Θ} det M(θ, e^(k)).   (46)

Step 4: If

det M(θ^(k+1), e^(k)) ≥ min_{θ∈S^(k)} det M(θ, e^(k)) − δ,   (47)

where δ is a small positive predetermined constant, then consider θ^(k+1) and e^(k) as (approximate) solutions of the maximin problem. Else include θ^(k+1) into S^(k) to form S^(k+1), increase k by one, and go to Step 2.

Shimizu and Aiyoshi have shown [19] that the procedure terminates in a finite number of iterations if the following assumptions (generally satisfied by minimax design problems) hold:

(i) det M(θ, e) is continuous in θ, differentiable with respect to e, and with partial derivatives continuous in e;
(ii) the admissible domain E is compact and such that

E = {e ∈ ℝⁿ | c_i(e) ≤ 0, i = 1, ..., l},   (48)

where the c_i are differentiable with respect to e, with partial derivatives continuous in e;
(iii) the admissible domain Θ for the parameters is nonempty and compact.
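To make Steps 1-4 concrete, here is a sketch of the relaxation procedure applied to the scalar problem of Example 1 (Θ = [1, 10], det M(θ, t) = t² exp(−2θt)), for which the MMD-optimal answer t_mmd = 1/b = 0.1 is known from (16). A bounded scalar optimizer stands in for the global optimizer of [20]; the initial value, bounds, and tolerance are our own choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def det_M(theta, t):
    # det M(theta, t) = t^2 exp(-2 theta t) for Example 1, sigma = 1, Equation (14)
    return t**2 * np.exp(-2.0 * theta * t)

theta_lo, theta_hi = 1.0, 10.0   # prior admissible set Theta = [1, 10]
t_lo, t_hi = 1e-3, 1.0           # admissible interval E for the sampling time
delta = 1e-6                     # small positive constant in the stopping test (47)

S = [5.0]                        # Step 1: initial set of representative values S^(1)
t_k = None
for k in range(20):
    # Step 2: relaxed maximin problem, theta restricted to the finite set S
    res = minimize_scalar(lambda t: -min(det_M(th, t) for th in S),
                          bounds=(t_lo, t_hi), method="bounded")
    t_k = res.x
    # Step 3: worst-case parameter for the current design t_k
    res = minimize_scalar(lambda th: det_M(th, t_k),
                          bounds=(theta_lo, theta_hi), method="bounded")
    theta_new = res.x
    # Step 4: stopping test (47); otherwise enlarge S and iterate
    if det_M(theta_new, t_k) >= min(det_M(th, t_k) for th in S) - delta:
        break
    S.append(theta_new)
```

On this example the set of representative values grows to approximately {5, 10} and the procedure stops with t ≈ 0.1, as predicted by (16).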
It must be noted that when one has to stop the procedure before the terminating condition (47) is satisfied, an approximate solution is nevertheless obtained, which satisfies a condition similar to (47) with a constant δ′ ≥ δ. Steps 2 and 3 require an optimization procedure. Since the functions involved are not necessarily unimodal, one is interested in their global optimum. For this reason we use a general-purpose global optimizer [20] based on an adaptive random search strategy [21].

Example 3 (continued). The MMD-optimal experiment for the parameters of Example 3 has been determined in Section 4 using the results of Melas. The same problem can serve as a test case for the relaxation algorithm. The initial value θ^(1) is arbitrarily set to the admissible value

θ^(1) = (1, 1, 8, 5)ᵀ.   (49)

Thanks to Theorem 2, the first two components of θ can be considered as fixed, and the optimization takes place only on the last two components. The successive representative values for θ found at Step 3 are

θ^(2) = (1, 1, 9.98, 8.98)ᵀ   (50)

and

θ^(3) = (1, 1, 10, 9)ᵀ.   (51)

Thus θ^(3) corresponds to θ_s as given by (40). The algorithm stops after three iterations of the relaxation procedure, and the MMD-optimal experiment is therefore given by (41).

Example 4. Consider now the same problem as in Example 3, but replace the constraints (39) by

9.5 ≤ λ_1 ≤ 10,   1 ≤ λ_2 ≤ 9,   (52)

so that Theorem 3 no longer applies. Using the relaxation algorithm, initialized at θ^(1) = (1, 1, 9.75, 9)ᵀ, we obtain a set of representative values with only two elements, θ^(1) and θ^(2) = (1, 1, 9.5, 9)ᵀ. The corresponding approximate MMD-optimal experiment is given by

t_mmd = (0, 0.050, 0.177, 0.412)ᵀ,   (53)

still very close to the value obtained in Example 3. Note that this maximin problem was solved with only four successive classical optimizations.
6. CONCLUSIONS
EID- and MMD-optimal designs appear as two complementary approaches to the robust design of experiments for estimating the parameters θ of a nonlinear model. Whereas EID-optimal design requires the knowledge of a prior probability density function for θ, MMD-optimal design requires the knowledge of a prior feasible region for θ. A first criterion for choosing between these two approaches is thus to consider which type of information is available, or which type of information can best be trusted. When both types of information are available, another criterion for such a choice is the importance the experimenter gives to the values of θ that are associated with very low values of the prior probability density function. It may sometimes be acceptable to perform poorly for these unlikely values of the parameters. EID-optimal design might then be preferred. Sometimes, on the other hand, one needs to be sure that the experiment will be acceptably good for any feasible value of the parameters. One might then be more interested in MMD-optimal design.

In this paper we have tried to convince the reader that MMD-optimal design might not be as difficult as it seems at first sight. There are special cases of importance where MMD-optimal design can be transformed into a simple D-optimal design problem for some known value of the parameters. When this is not possible, the combined use of a relaxation algorithm and a global optimizer allows one to find an approximate solution to the maximin problem at a reasonable cost. The methodology proposed could be extended to the design of discriminating experiments and to sequential design. For the latter problem, each estimation phase should provide an updated estimate of the prior feasible domain for the parameters. Methods recently developed for membership-set estimation could be used for this purpose [22, 23].

The authors wish to thank Professor C. Cobelli and Dr. K.
Thomaseth for their very helpful comments on an earlier version of Theorem 1.

REFERENCES

1. R. C. St. John and N. R. Draper, D-optimality for regression designs: A review, Technometrics 17(1):15-23 (1975).
2. V. V. Fedorov, Theory of Optimal Experiments, Academic, New York, 1972.
3. E. M. Landaw, Optimal Experiment Design for Biologic Compartmental Systems with Applications to Pharmacokinetics, Ph.D. Dissertation, Univ. of California, Los Angeles, 1980.
4. S. D. Silvey, Optimal Design, Chapman & Hall, London, 1980.
5. G. E. P. Box and W. G. Hunter, The experimental study of physical mechanisms, Technometrics 7(1):23-42 (1965).
6. N. R. Draper and W. G. Hunter, The use of prior distributions in the design of experiments for parameter estimation in non-linear situations, Biometrika 54:147-153 (1967).
7. D. Z. D'Argenio, Optimal sampling times for pharmacokinetic experiments, J. Pharmacokinetics and Biopharmaceutics 9(6):739-756 (1981).
8. J. J. DiStefano III, Algorithms, software and sequential optimal sampling schedule designs for pharmacokinetic and physiologic experiments, Math. Comput. Simulation 24:531-534 (1982).
9. V. V. Fedorov, Convex design theory, Math. Operationsforsch. Statist. Ser. Statist. 11(3):403-413 (1980).
10. G. C. Goodwin and R. L. Payne, Dynamic System Identification: Experiment Design and Data Analysis, Academic, New York, 1977.
11. L. Pronzato and E. Walter, Robust experiment design via stochastic approximation, Math. Biosci. 75:103-120 (1985).
12. E. Walter and L. Pronzato, How to design experiments that are robust to parameter uncertainty, in Proceedings of the 7th IFAC/IFORS Symposium on Identification and System Parameter Estimation, York, July 1985, pp. 921-926.
13. E. M. Landaw, Robust sampling designs for compartmental models under large prior eigenvalue uncertainties, in Mathematics and Computers in Biomedical Applications (J. Eisenfeld and C. DeLisi, Eds.), North-Holland, Amsterdam, 1985, pp. 181-187.
14. E. Walter and L. Pronzato, Robust experiment design: Between qualitative and quantitative identifiabilities, in Identifiability of Parametric Models (E. Walter, Ed.), Pergamon, Oxford, 1987, pp. 104-113.
15. L. Pronzato and E. Walter, Robust experiment design for nonlinear regression models, in Model-Oriented Data Analysis (V. Fedorov and H. Läuter, Eds.), Lecture Notes in Economics and Mathematical Systems, Vol. 297, Springer, Berlin, 1988, pp. 77-86.
16. M. J. Box, The occurrence of replications in optimal designs of experiments to estimate parameters in non-linear models, J. Roy. Statist. Soc. Ser. B 30:290-302 (1968).
17. L. Pronzato, Synthèse d'expériences robustes pour modèles à paramètres incertains, Thèse en sciences, Univ. Paris-Sud, Orsay, 1986.
18. V. B. Melas, Optimal designs for exponential regression, Math. Operationsforsch. Statist. Ser. Statist. 9(1):45-59 (1978).
19. K. Shimizu and E. Aiyoshi, Necessary conditions for min-max problems and algorithm by a relaxation procedure, IEEE Trans. Automat. Control AC-25(1):62-66 (1980).
20. L. Pronzato, E. Walter, A. Venot, and J. F. Lebruchec, A general purpose global optimizer: Implementation and applications, Math. Comput. Simulation 26:412-422 (1984).
21. G. A. Bekey and S. F. Masri, Random search techniques for optimization of nonlinear systems with many parameters, Math. Comput. Simulation 25(2):210-213 (1983).
22. G. Belforte and M. Milanese, Uncertainty interval evaluation in presence of unknown but bounded errors: Nonlinear families of models, in Proceedings IASTED International Symposium on Modelling, Identification and Control, Davos, 1981, pp. 75-79.
23. E. Walter and H. Piet-Lahanier, Robust nonlinear parameter estimation in the bounded noise case, in Proceedings of the 25th IEEE Conference on Decision and Control, Athens, 1986, pp. 1037-1042.
24. R. Süverkrüp, Optimization of sampling schedules for pharmacokinetic and biopharmaceutic studies, in Pharmacokinetics during Drug Development: Data Analysis and Evaluation Techniques (G. Bozler and J. M. van Rossum, Eds.), Gustav Fischer, Stuttgart, 1982.