Optimal and efficient designs for 2-parameter nonlinear models


Journal of Statistical Planning and Inference 124 (2004) 205 – 217


A.S. Hedayat^a,∗, J. Zhong^b, L. Nie^c

^a Department of Mathematics, University of Illinois at Chicago, Chicago, IL 60607, USA
^b Merck Research Laboratories, West Point, PA 19486, USA
^c University of Maryland at Baltimore County, Baltimore, MD 20250, USA

Received 10 October 2002; accepted 8 April 2003

Abstract

By Carathéodory's theorem, for a k-parameter nonlinear model, the minimum number of support points for any D-optimal design is between k and k(k+1)/2. Characterizing classes of models for which a D-optimal design sits on exactly k support points is of great theoretical interest. By utilizing the equivalence theorem, we identify classes of 2-parameter nonlinear models for which a D-optimal design is supported on exactly 2 points. We also introduce the theory of the maximum principle from differential equations into the design area and obtain some results on characterizing minimally supported nonlinear designs. Examples are given to demonstrate our results. Designs with the minimum number of support points may not always be suitable in practice. To alleviate this problem, we utilize some geometric and analytical methods to obtain efficient designs which provide more opportunity for model checking and protect against biases due to mis-specified initial parameter values.

Keywords: Nonlinear design; Optimal design; D-optimality; c-optimality

This research was primarily sponsored by the National Science Foundation Grant DMS-0103727, the National Cancer Institute Grant P01-CA48112-08, and the NIH grant 5 P50 AT00155 jointly funded by the National Center for Complementary and Alternative Medicine (NCCAM), the Office of Dietary Supplements (ODS), the National Institute of General Medical Sciences (NIGMS) and the Office of Research on Women's Health (ORWH). The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of NCCAM, ODS, NIGMS, or ORWH.

∗ Corresponding author. Tel.: +1-312-9964831; fax: +1-312-9963041. E-mail address: [email protected] (A.S. Hedayat).

1. Introduction

For linear models, optimal design theory has been developed over several decades, and some of the related results have been used in practice. On the other hand, due to its



complexities, very little research has been carried out on the optimality of designs for nonlinear models, although such models have proved useful in many practical problems. The major difficulty with nonlinear design is that the problem must be tackled anew for each individual nonlinear model. On the positive side, there is a great deal of overlap between the theory of optimal design for linear models and for nonlinear models. Originally used in linear optimal design theory, Carathéodory's theorem and the equivalence theorem of Kiefer and Wolfowitz (1960) still play important roles in nonlinear optimal design theory. The corresponding equivalence theorem for nonlinear models was established by White (1973). Wynn (1970) and Fedorov (1972) proposed algorithms for finding the support points of the optimal design and the related weights based on the equivalence theorem for linear models; their algorithms can also be used for nonlinear models. Even though these algorithms can help to find an optimal design, it is still of interest to characterize the structure of an optimal design.

In this paper, we only consider local D-optimal designs; see Chernoff (1953). By Carathéodory's theorem, for a k-parameter nonlinear model, the minimal number of support points for a D-optimal design measure (see Section 2 for design measures) is between k and k(k+1)/2. When we search for a D-optimal design measure, we therefore only need to search within the subclass of design measures with k to k(k+1)/2 support points. However, if we can determine that a D-optimal design is supported on k points, we can simplify the problem and restrict the search to a much smaller subclass. It is also of theoretical interest to find the exact minimal number of support points for nonlinear models. In particular, characterizing classes of k-parameter models for which a D-optimal design measure sits on exactly k support points is of great interest.

For some selected models, there are results on supports with the minimum number of points. Abdelbasit and Plackett (1983) demonstrated that a D-optimal design for the logistic model is supported on 2 points. Ford et al. (1992) considered nine cases and obtained optimal designs which are supported on 2 points. Hedayat et al. (1997) showed that optimal designs are supported on 2 points for the class of models they used for analyzing raw optical density data. The overall theme of these papers is to find an optimal design within a subclass of designs and then to verify the equivalence theorem for the underlying model. Mathew and Sinha (2001) avoided the use of the equivalence theorem and directly showed that a D-optimal design for the logistic model is supported on exactly 2 points. More generally, Vila (1991) provided a sufficient and quasi-necessary condition for a local D-optimal design in nonlinear regression models. For polynomial regression models, by using T-systems, Karlin and Studden (1966a) and Fedorov (1972) identified a class of k-parameter models for which a D-optimal design can only be supported on exactly k points. Karlin and Studden (1966b) provided general results on a class of information matrices. In contrast to these works, which focus on specific models, we shall introduce some simple characterization functions. By verifying the monotonicity of these characterization functions we can identify classes of 2-parameter models for which a D-optimal design is supported on exactly 2 points. In addition, we suggest tools based on the theory of the maximum principle from differential equations. The conditions we propose here are very easy to verify.


There is a Bayesian version of the equivalence theorem which has been utilized to obtain results on the minimum number of support points for nonlinear Bayesian designs. The reader is referred to the excellent review papers by Chaloner and Verdinelli (1995) and DasGupta (1996), and to Dette and Sahm (1998) for the minimum number of support points of minimax optimal designs in nonlinear regression models.

Theoretical characterization of optimal designs is very important. But it is well known that some optimal designs, especially minimally supported optimal designs, do not provide enough statistical flexibility for model checking. Moreover, designs with few support points are very sensitive to the initial values of the parameters. On the other hand, an optimal design can serve as a benchmark for comparing designs. Based on results on optimal designs, including both D-optimal and c-optimal designs, we will introduce in this paper some geometric and analytical methods to obtain efficient designs.

In Section 2, we introduce the class of nonlinear models under consideration and the related design spaces. In Section 3, we introduce a general method based on characterization functions and obtain some interesting results. In Section 4, we briefly introduce the theory of the maximum principle and present some new results. We demonstrate some of our design techniques via three examples in Section 5. Additional geometric and analytical methods for finding efficient designs are given in Section 6.

2. Models and the related design spaces

We will concentrate on 2-parameter models. Instead of considering the models themselves, we will work with the following information matrix:

$$
I(\alpha,\beta;x)=\begin{pmatrix}\sum_{i=1}^{p} n_i\, g(\theta_i) & \sum_{i=1}^{p} n_i x_i\, g(\theta_i)\\ \sum_{i=1}^{p} n_i x_i\, g(\theta_i) & \sum_{i=1}^{p} n_i x_i^2\, g(\theta_i)\end{pmatrix},\tag{1}
$$

where x1, …, xp are the p support points, ni is the number of observations collected at xi, θi = α + βxi, α and β are the unknown parameters to be estimated, and g(θi) is a strictly positive function relating the mean and the variance of the model under consideration. As Ford et al. (1992) pointed out, 2-parameter generalized linear models and some 2-parameter nonlinear regression models have information matrices of the same type as (1). For a class of general 2-parameter nonlinear models yi = f(θi) + εi, if we only know the first two moments, the mean f(θi) and the variance v(θi) of yi, and use the extended least squares (ELS) estimation method to estimate α and β, the general form of the information matrix (1) still applies, in which


g(θ) = f′(θ)²/v(θ) + v′(θ)²/(2v(θ)²). All these models have the same type of information matrix as (1), only with a different g(θ).

We observe that det(I(α, β; x)) = β⁻² det(I(α, β; θ)), so the two determinants differ only by a factor that does not depend on the design. For a fixed sample size n = Σ_{i=1}^p ni, maximizing det(I(α, β; x)), which is equivalent to maximizing log det(I(α, β; x)), therefore amounts to maximizing det((1/n) I(α, β; θ)). We can rewrite (1/n) I(α, β; θ) as

$$
\frac{1}{n}\, I(\alpha,\beta;\theta)=\sum_{i=1}^{p} q_i \begin{pmatrix}\sqrt{g(\theta_i)}\\ \theta_i\sqrt{g(\theta_i)}\end{pmatrix}\bigl(\sqrt{g(\theta_i)},\ \theta_i\sqrt{g(\theta_i)}\bigr),\tag{2}
$$

where qi = ni/n. Therefore, the induced design space is G = {(√g(θ), θ√g(θ)) : θ ∈ Θ}, where Θ = [v1, v2]; v1 can be −∞ and v2 can be +∞. Because of the discrete nature of qi = ni/n, the search for an optimal design becomes very complicated. Here we follow the Kiefer approximation theory and allow qi to be any number between 0 and 1; this lets us use tools from calculus. The associated design is referred to as a design measure ξ, and the corresponding information matrix (2) will be denoted by M(ξ). We will concentrate on D-optimal design measures based on the information matrix M(ξ) with Θ = (−∞, +∞). Note that the design space depends completely on the curve G. For the existence of an optimal design, we need lim_{θ→∞} g(θ)θ² to be bounded. Further, we assume that g(θ) is a unimodal function with a continuous second derivative, which is the case in many practical situations.

3. Results based on characterization functions

By the equivalence theorems of Kiefer and Wolfowitz (1960) and White (1973), the following must be satisfied by a D-optimal design measure ξ0:

$$
d(\theta,\xi_0)=g(\theta)\,(a+2b\theta+c\theta^2)\ \begin{cases}=2 & \text{if } \theta \text{ is a support point of } \xi_0,\\ \le 2 & \text{otherwise,}\end{cases}
$$

where

$$
I^{-1}(\alpha,\beta;\xi_0)=\begin{pmatrix}a & b\\ b & c\end{pmatrix}.
$$

We first consider the case where g(θ) is a symmetric function about 0. This assumption implies that the unique mode θ0 of g(θ) is 0. In the sequel we say that a D-optimal design is a symmetric D-optimal design if, whenever θ is a support point of the design with weight qi, then −θ is also a support point with the same weight qi.

Lemma 1. If g(θ) is a symmetric function about 0, then a D-optimal design is a symmetric design.

The proof of Lemma 1 is simple and therefore omitted. Symmetric designs have been considered by many authors, including Lin and Studden (1988) and Mathew and Sinha (2001).
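As a quick numerical companion to the equivalence theorem above, the following sketch builds the normalized information matrix M(ξ) of (2) for a candidate symmetric two-point design and evaluates d(θ, ξ0) on a grid to confirm that it never exceeds 2. It is only an illustration, not part of the paper: the choice g(θ) = exp(−θ²) (Example 2 below), the candidate support ±1/√2 and the grid are assumptions.

```python
import numpy as np

def info_matrix(thetas, weights, g):
    """Normalized information matrix M(xi) of eq. (2) in the induced space."""
    M = np.zeros((2, 2))
    for t, q in zip(thetas, weights):
        x = np.array([np.sqrt(g(t)), t * np.sqrt(g(t))])
        M += q * np.outer(x, x)
    return M

def d_value(theta, thetas, weights, g):
    """Variance function d(theta, xi) = g(theta) * (a + 2*b*theta + c*theta**2)."""
    a, b, c = np.linalg.inv(info_matrix(thetas, weights, g)).ravel()[[0, 1, 3]]
    return g(theta) * (a + 2 * b * theta + c * theta**2)

# Assumed example: g(theta) = exp(-theta^2) and the candidate symmetric
# two-point design on +/- 1/sqrt(2) with equal weights.
g = lambda t: np.exp(-t**2)
support, w = [-1 / np.sqrt(2), 1 / np.sqrt(2)], [0.5, 0.5]

grid = np.linspace(-5.0, 5.0, 2001)
dvals = np.array([d_value(t, support, w, g) for t in grid])
print("max d over grid:", dvals.max())                                 # should not exceed 2 (up to rounding)
print("d at support:", [d_value(t, support, w, g) for t in support])   # equals 2 at the support points
```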


Theorem 2. The number of support points of any D-optimal design is two if the corresponding g(θ) is a symmetric function about 0 and h(θ) = θ² + 2θ g(θ)/g′(θ) is a monotone function on (0, +∞).

Proof. Lemma 1 implies that a D-optimal design ξ0 must be a symmetric design, and thus d(θ, ξ0) becomes d(θ, ξ0) = (a + cθ²) g(θ). A stationary point of d(θ, ξ0) is a solution of the equation

$$
d'(\theta,\xi_0)=a\,g'(\theta)+c\,\bigl(\theta^2 g'(\theta)+2\theta g(\theta)\bigr)=0.
$$

If θ ≠ 0, it must be a solution of

$$
\theta^2+\frac{2\theta g(\theta)}{g'(\theta)}=-\frac{a}{c}.\tag{3}
$$

Because g(θ) is a symmetric function about 0, θ² + 2θ g(θ)/g′(θ) is also symmetric about 0. Since θ² + 2θ g(θ)/g′(θ) is monotone on (0, +∞), there are at most three stationary points (including θ = 0) of d(θ, ξ0) in (−∞, +∞). These three stationary points must include at least two local maximum points, because at least two support points are required. However, the three stationary points cannot simultaneously be local maximum points, since there must be a local minimum point between any two local maximum points. Therefore, the number of support points of ξ0 is two.
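The monotonicity condition of Theorem 2 is usually easiest to check symbolically. Below is a minimal sketch; the particular choice g(θ) = (1 + θ²)^(−k) is an assumption (it anticipates Example 1 in Section 5).

```python
import sympy as sp

theta, k = sp.symbols("theta k", real=True)

# Assumed example: g(theta) = (1 + theta^2)^(-k), cf. Example 1 in Section 5.
g = (1 + theta**2)**(-k)

# Characterization function of Theorem 2 and its derivative.
h = theta**2 + 2 * theta * g / sp.diff(g, theta)
h_prime = sp.simplify(sp.diff(h, theta))
print(h_prime)   # expect an expression equivalent to 2*theta*(k - 1)/k, so h is monotone on (0, +oo) for k >= 1
```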
For the rest of this section, we consider a general g(θ) without the symmetry assumption. Consider all the stationary points of d(θ, ξ0), i.e. the solutions of d′(θ, ξ0) = 0. If g′(θ) ≠ 0, the stationary points of d(θ, ξ0) satisfy

$$
L(\theta)=c\left(\theta^2+\frac{2\theta g(\theta)}{g'(\theta)}\right)+2b\left(\theta+\frac{g(\theta)}{g'(\theta)}\right)+a=0.
$$

Thus, the set of stationary points of d(θ, ξ0) is a subset of {θ : L(θ) = 0} ∪ {θ : g′(θ) = 0}. We find that L(θ) = 0 is equivalent to L1(θ) + a/c = 0, where

$$
L_1(\theta)=\theta^2+\frac{2\theta g(\theta)}{g'(\theta)}+\frac{2b}{c}\left(\theta+\frac{g(\theta)}{g'(\theta)}\right).
$$

In each monotone interval of L1(θ) there is at most one solution of L1(θ) = −a/c. To find the number of monotone intervals of L1(θ), we investigate the number of stationary points of L1(θ), at which L1(θ) can possibly change monotonicity. When g′(θ) ≠ 0,

$$
L_1'(\theta)=2\left(1+\Bigl(\frac{g(\theta)}{g'(\theta)}\Bigr)'\right)\left(\theta+\frac{b}{c}\right)+\frac{2\,g(\theta)}{g'(\theta)}.
$$


If 1 + (g(θ)/g′(θ))′ = 0, then L1′(θ) = 2g(θ)/g′(θ) ≠ 0, since g(θ) > 0. Therefore, L1′(θ) = 0 is equivalent to J(θ) + b/c = 0, where

$$
J(\theta)=\theta+\frac{g(\theta)/g'(\theta)}{1+\bigl(g(\theta)/g'(\theta)\bigr)'}.
$$

We thus obtain the following theorem.

Theorem 3. The number of support points of any D-optimal design is two if θ0 is the only stationary point of g(θ) and J(θ) + b/c = 0 has at most one solution in R \ {θ0}.

Proof. If J(θ) + b/c = 0 has at most one solution, then L1′(θ) = 0 has at most one root. Therefore, L1(θ) has at most three monotone intervals (note that L1(θ) may be discontinuous at θ = θ0), which means that d(θ, ξ0) has at most four stationary points (at most one in each monotone interval, plus θ = θ0). If d(θ, ξ0) had three local maximum points, there would have to be two local minimum points between them, giving at least five stationary points, which contradicts the fact that there are at most four. Therefore, d(θ, ξ0) has at most two local maximum points.
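The condition of Theorem 3 can likewise be checked symbolically for a given g. A small sketch, again under the assumed choice g(θ) = exp(−θ²) (Example 2 in Section 5), for which J turns out to be monotone, so J(θ) + b/c = 0 has at most one root for any b/c:

```python
import sympy as sp

theta = sp.symbols("theta", real=True)

# Assumed example: g(theta) = exp(-theta^2), cf. Example 2 in Section 5.
g = sp.exp(-theta**2)

u = g / sp.diff(g, theta)                         # g/g'
J = sp.simplify(theta + u / (1 + sp.diff(u, theta)))
J_prime = sp.simplify(sp.diff(J, theta))

print(J)        # expect an expression equivalent to theta - theta/(1 + 2*theta**2)
print(J_prime)  # a rational expression that is nonnegative for all theta, so J is monotone on the real line
```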

4. Results based on the theory of the maximum principle

In this section, we provide some results based on the theory of the maximum principle in differential equations. These results hold for any g(θ), as long as it is symmetric about 0. We first briefly introduce the maximum principle. The following Theorem 4 and Remark 5 are from Protter and Weinberger (1967).

Theorem 4 (maximum principle). Let u(x) satisfy the differential inequality

$$
u''(x)+k(x)\,u'(x)\ \ge\ 0,\qquad a<x<b,\tag{4}
$$

with k(x) bounded on every subinterval [a′, b′] ⊂ (a, b). If u(x) ≤ M and the maximum M is attained at an interior point c of (a, b), then u(x) = M for all x ∈ (a, b).

Remark 5. A function satisfying inequality (4): (1) cannot have a local maximum at an interior point; (2) can have at most one local minimum at an interior point.

Lemma 1 implies that a D-optimal design ξ0 must be a symmetric design, and consequently d(θ, ξ0) = (a + cθ²) g(θ). Different choices of k(θ) can be explored; we propose the following two:

(1) k1(θ) = −g″(θ)/g′(θ),
(2) k2(θ) = −(2θ g′(θ) + g(θ))/(θ g(θ)).


We can verify that

$$
d''(\theta,\xi_0)+k_1(\theta)\,d'(\theta,\xi_0)=2c\,H_1(\theta),\qquad
d''(\theta,\xi_0)+k_2(\theta)\,d'(\theta,\xi_0)=(a+c\theta^2)\,H_2(\theta),
$$

where

$$
H_1(\theta)=2\theta g'(\theta)+g(\theta)-\frac{\theta\,g''(\theta)\,g(\theta)}{g'(\theta)},\qquad
H_2(\theta)=g''(\theta)-\frac{g'(\theta)\,\bigl(2\theta g'(\theta)+g(\theta)\bigr)}{\theta\,g(\theta)}.
$$

By applying the maximum principle to the function −d(θ, ξ0), we obtain the following result.

Theorem 6. The number of support points of any D-optimal design is two if the corresponding g(θ) is a symmetric function about 0 and either H1(θ) ≤ 0 or H2(θ) ≤ 0 on (0, +∞).

Proof. Because g(θ) is symmetric about 0, H1(θ) and H2(θ) are also symmetric about 0. Applying the maximum principle to −d(θ, ξ0) shows that d(θ, ξ0) has at most two local maximum points on (−∞, +∞).
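H1 and H2 are again straightforward to compute symbolically for a candidate g. A minimal sketch; the family g(θ) = (1 + θ²)^(−k) is an assumption, chosen so that the output can be compared with the closed forms given in Example 1 below.

```python
import sympy as sp

theta, k = sp.symbols("theta k", real=True, positive=True)

# Assumed example: g(theta) = (1 + theta^2)^(-k), cf. Example 1 in Section 5.
g = (1 + theta**2)**(-k)
g1, g2 = sp.diff(g, theta), sp.diff(g, theta, 2)

H1 = 2 * theta * g1 + g - theta * g2 * g / g1
H2 = g2 - g1 * (2 * theta * g1 + g) / (theta * g)

print(sp.simplify(H1))   # expect an expression equivalent to -2*(k - 1)*theta**2/(1 + theta**2)**(k + 1)
print(sp.simplify(H2))   # expect an expression equivalent to -4*k*(k - 1)*theta**2/(1 + theta**2)**(k + 2)
```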

5. Examples

Example 1. Consider the case where g(θ) = (1 + θ²)^{−k}, where k ≥ 1. This g(θ) may correspond to a linear or a nonlinear model; it is a symmetric, unimodal function with 0 as its unique mode. Let h(θ) = θ² + 2θ g(θ)/g′(θ); then h′(θ) = 2θ(k − 1)/k. Therefore h′(θ) ≥ 0 when θ > 0, so h(θ) is monotone on (0, +∞), and by Theorem 2 the number of support points of any D-optimal design for this model can only be 2. Let H1(θ) and H2(θ) be the two functions defined in Section 4. It can be shown that

$$
H_1(\theta)=-\frac{2(k-1)\,\theta^2}{(1+\theta^2)^{k+1}},\qquad
H_2(\theta)=-\frac{4k(k-1)\,\theta^2}{(1+\theta^2)^{k+2}}.
$$

By Theorem 6 as well, the number of support points of any D-optimal design for this model can only be 2.

Example 2. This example comes from a linear regression model in Fedorov (1972). Here g(θ) = exp(−θ²), which is symmetric about zero and satisfies lim_{θ→∞} g(θ)θ² = 0, and h(θ) = θ² − 1. By Theorem 2 the number of support points of the D-optimal


designs for this model can only be 2, the result given by Fedorov (1972). We can also obtain this result from Theorem 3, since 0 is the only stationary point of g(θ) and J(θ) = θ − θ/(1 + 2θ²) is monotone on (−∞, ∞).

Example 3. Consider the case where g(θ) = (s + tθ²)^k exp(−lθ²), with s > 0, t ≥ 0, l > 0, and k = 0 or k ≤ −1. This g(θ) may correspond to a linear or a nonlinear model, and it is not difficult to verify that g(θ) is a symmetric, unimodal function with 0 as its unique mode. Let h(θ) = θ² + 2θ g(θ)/g′(θ). It is easy to verify that

$$
h'(\theta)=2\theta\,\frac{k^2t^2-2kstl-2kt^2l\theta^2+l^2s^2+2l^2st\theta^2+l^2t^2\theta^4+kt^2}{(-kt+sl+tl\theta^2)^2}.
$$

Therefore h′(θ) ≥ 0 when θ > 0, so h(θ) is monotone on the interval (0, +∞). By Theorem 2 the number of support points of any D-optimal design for this model can only be 2.
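For Example 2, once Theorem 2 (or Theorem 6) guarantees a symmetric two-point D-optimal design, the optimal support point can be located by a one-dimensional search over det M(ξ). The sketch below is only illustrative; the use of scipy and the search interval are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

g = lambda t: np.exp(-t**2)       # g of Example 2

# For a symmetric two-point design on {-t, t} with equal weights,
# det M(xi) = g(t) * (t**2 * g(t)), so it suffices to maximize t**2 * g(t)**2 over t > 0.
res = minimize_scalar(lambda t: -(t**2) * g(t) ** 2, bounds=(1e-6, 10.0), method="bounded")
print(res.x)                      # about 0.707, i.e. the support points are roughly +/- 1/sqrt(2)
```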

6. Efficient designs

Consider a general 2-parameter linear or nonlinear model. In some situations, we might be interested in estimating a particular linear combination c1α + c2β of the parameters, where α and β are the parameters of the model. A c-optimal design is the design which minimizes the variance of the best unbiased estimator of c1α + c2β. When the data come from a binary distribution and the parameter of interest is EDp, the pth percentile, c-optimal designs are also called EDp designs. Wu (1988) provided general results concerning c-optimal designs for binary data. Ford et al. (1992) studied locally D- and c-optimal designs for generalized linear models. Hedayat et al. (1999) and Hedayat et al. (2002) considered EDp-optimal designs for raw optical density data, where they also studied the efficiency and robustness of p-point equally spaced and equally weighted designs. Here, we deal only with the type of information matrix (1) defined in Section 2, without specifying the models. We also assume that g(θ) is a strictly positive continuous function and that lim_{θ→+∞} g(θ)θ² = 0. Moreover, as in Wu (1988), we assume that G contains a convex set. In Section 6.3, we shall further assume that g(θ) satisfies the conditions of Theorem 2.

Following Ford et al. (1992), the c-optimal nonlinear design problem can be transformed into a c-optimal design problem for the following linear model on the induced design space G:

$$
y_i = X_{i1}\alpha + X_{i2}\beta + \varepsilon_i,\qquad i=1,\dots,p,\tag{5}
$$

where εi ∼ N(0, σ²), Xi^T = (Xi1, Xi2) = (√g(θi), θi√g(θi)), and

$$
G=\bigl\{(\sqrt{g(\theta)},\ \theta\sqrt{g(\theta)}) : \theta\in(-\infty,\infty)\bigr\}.
$$

The corresponding parameter of interest can be written as c1∗α + c2∗β. Using Elfving's method, we construct G∗, the convex hull of G ∪ (−G), where −G is the


reflection of G about the origin. If the vector (c1∗, c2∗) or its extension intersects the boundary of G∗ at a point of G or −G, then the c-optimal design is a 1-point design; otherwise, it is a 2-point design.

Optimal designs may not be the best choice in practice. It is wise to trade some efficiency for robustness and to gain some flexibility for model checking. Consequently, the search for efficient designs with more support points is useful and indeed very desirable. Hedayat et al. (1997) considered D-efficient designs for their models: through numerical search, they obtained D-efficient designs in the subclass of equally spaced and equally weighted designs. Following Elfving (1952), we propose a general method to find multi-point D-efficient and c-efficient designs with a pre-specified level of efficiency. We define the c- and D-efficiency indexes of a design ξ as follows:

$$
e_c(\xi)=\frac{\min_{\text{unbiased}}\operatorname{Var}_{\xi^{*}}\bigl(\widehat{c_1^{*}\alpha+c_2^{*}\beta}\bigr)}{\min_{\text{unbiased}}\operatorname{Var}_{\xi}\bigl(\widehat{c_1^{*}\alpha+c_2^{*}\beta}\bigr)}\le 1,\qquad
e_D(\xi)=\frac{\det\bigl(I(\alpha,\beta;\xi)\bigr)}{\det\bigl(I(\alpha,\beta;\xi_0)\bigr)}\le 1,
$$

where the minima are over unbiased estimators, ξ∗ is the c-optimal design and ξ0 is a D-optimal design. A design ξ will be called c- or D-efficient if e_c(ξ) or e_D(ξ), respectively, is relatively high.

6.1. The 2-point c-efficient designs
Assume first that the vector c∗ = (c1∗, c2∗) intersects G, so that there exists a θ∗ such that c∗ = s(√g(θ∗), θ∗√g(θ∗)) for some s > 0. For notational convenience, we also assume that s = 1. The c-optimal design for estimating c1∗α + c2∗β in the induced design space is then a 1-point design supported on X∗ = (√g(θ∗), θ∗√g(θ∗)), which corresponds to the design supported on θ∗ in the original design space. When c1∗ = 0, the c-optimal design is a 2-point design since g(θ) > 0; this case will be covered in the next section. We propose the following method to find 100r% c-efficient 2-point designs. Draw any line through the point

$$
A=\sqrt{r}\,\bigl(\sqrt{g(\theta^{*})},\ \theta^{*}\sqrt{g(\theta^{*})}\bigr)
$$

that does not pass through the origin. The line intersects the boundary of G∗ at two points B and C, which correspond to θ1 and θ2. Because the line does not go through the origin, θ1 ≠ θ2 and the vectors B and C are linearly independent.

Theorem 7. The design ξ2,r with support points θ1 and θ2 and with weights q1 and q2 determined by A = q1B + q2C has 100r% c-efficiency.

Proof. Let ȳ1 and ȳ2 denote the means of the observations taken at the points B and C. By the definition of the efficiency index and based on the linear model (5),

$$
e_c(\xi_{2,r})
=\Bigl(n\min_{c^{*}=a_1B+a_2C}\operatorname{Var}(a_1\bar{y}_1+a_2\bar{y}_2)/\sigma^2\Bigr)^{-1}
=\Bigl(\min_{c^{*}=a_1B+a_2C}\Bigl(\frac{a_1^2}{q_1}+\frac{a_2^2}{q_2}\Bigr)\Bigr)^{-1},
$$

where we used the fact that the numerator of e_c equals σ²/n, since ξ∗ puts all its mass at θ∗ and c∗ = (√g(θ∗), θ∗√g(θ∗)). Since the two vectors B and C are linearly independent, there exists one and only one choice of (a1, a2), namely (a1, a2) = (1/√r)(q1, q2), such that c∗ = a1B + a2C. Therefore,

$$
e_c(\xi_{2,r})=\Bigl(\frac{q_1^2}{rq_1}+\frac{q_2^2}{rq_2}\Bigr)^{-1}=r.
$$

We mention two applications of Theorem 7.

Application 1: We want to include a particular point θ1 in the design and find another point so that the resulting 2-point design has at least 100r% efficiency. Link the two induced design points (√g(θ1), θ1√g(θ1)) and √r(√g(θ∗), θ∗√g(θ∗)), and extend the line until it intersects G again, say at (√g(θ2), θ2√g(θ2)). Note that, for this application, θ1 cannot be θ∗; otherwise the line would pass through the origin and might not intersect G at another point. According to Theorem 7, when θ1 ≠ θ∗, the design with support points θ1 and θ2 and with the corresponding weights q and 1 − q has 100r% efficiency. We can find θ2 by solving the collinearity equation

$$
\frac{\theta_2\sqrt{g(\theta_2)}-\theta_1\sqrt{g(\theta_1)}}{\sqrt{g(\theta_2)}-\sqrt{g(\theta_1)}}
=\frac{\sqrt{r}\,\theta^{*}\sqrt{g(\theta^{*})}-\theta_1\sqrt{g(\theta_1)}}{\sqrt{r}\,\sqrt{g(\theta^{*})}-\sqrt{g(\theta_1)}}.
$$

Application 2: For practical reasons, experimenters often prefer equally weighted designs. We then need to find two points θ1 and θ2 such that √r(√g(θ∗), θ∗√g(θ∗)) is the mid-point of the two corresponding induced design points. We can find θ1 and θ2 by solving the system of equations

$$
2\sqrt{r}\,\sqrt{g(\theta^{*})}=\sqrt{g(\theta_1)}+\sqrt{g(\theta_2)},\qquad
2\sqrt{r}\,\theta^{*}\sqrt{g(\theta^{*})}=\theta_1\sqrt{g(\theta_1)}+\theta_2\sqrt{g(\theta_2)}.
$$

By Theorem 7, the design with equal weights on θ1 and θ2 then has 100r% efficiency.
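The two equations of Application 2 can be solved numerically once g, θ∗ and r are fixed. The following sketch is only an illustration; the choices of g, θ∗, r and the starting values for the root finder are assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

g = lambda t: np.exp(-t**2)      # assumed weight function
theta_star, r = 1.0, 0.9         # assumed c-optimal point theta* and target efficiency r

def X(t):
    """Induced design point (sqrt(g(t)), t*sqrt(g(t)))."""
    return np.array([np.sqrt(g(t)), t * np.sqrt(g(t))])

def midpoint_eqs(v):
    t1, t2 = v
    # Application 2: sqrt(r) * X(theta*) must be the midpoint of X(t1) and X(t2).
    return X(t1) + X(t2) - 2.0 * np.sqrt(r) * X(theta_star)

t1, t2 = fsolve(midpoint_eqs, x0=[0.5, 1.5])
print(t1, t2)   # the equally weighted design on {t1, t2} has 100r% c-efficiency by Theorem 7
```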

6.2. The p-point c-efficient designs

We continue to consider the case in which the c-optimal designs are 1-point designs. As in the last section, we adjust the length of the vector c∗ so that (c1∗, c2∗) is on the boundary of G∗. We have the following theorem, which characterizes the support points of p-point c-efficient designs.

Theorem 8. The design ξp,r with support points θ1, …, θp and with weights q1, …, qp has at least 100r% c-efficiency if

$$
A=\sqrt{r}\,\bigl(\sqrt{g(\theta^{*})},\ \theta^{*}\sqrt{g(\theta^{*})}\bigr)=\sum_{i=1}^{p} q_i X_i,
$$

where Xi = (√g(θi), θi√g(θi)) is the induced design point corresponding to θi.

The proof of Theorem 8 is similar to the proof of Theorem 7. For given p and r, Theorem 8 can produce an infinite number of 100r% efficient p-point designs. In what follows, we confine our search to the class of equally spaced designs, which is used frequently in practice. For an equally spaced p-point design, the design points are θ1, θ1 + d, …, θ1 + (p − 1)d, so there are two unknowns, namely θ1 and the step d. If we fix q1, …, qp, there are two equations to be solved for θ1 and d:

$$
\sqrt{r}\,c_1^{*}=\sum_{i=1}^{p} q_i\sqrt{g(\theta_i)},\qquad
\sqrt{r}\,c_2^{*}=\sum_{i=1}^{p} q_i\theta_i\sqrt{g(\theta_i)},
$$

where θi = θ1 + (i − 1)d. As a special case, when the weights are equal, we have

$$
p\sqrt{r}\,c_1^{*}=\sum_{i=1}^{p}\sqrt{g(\theta_i)},\qquad
p\sqrt{r}\,c_2^{*}=\sum_{i=1}^{p}\theta_i\sqrt{g(\theta_i)}.
$$
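For the equal-weight case, the last two equations can be solved for θ1 and d with a standard root finder. The sketch below is illustrative only; g, p, r, the choice of c∗ (taken here as the induced point of an assumed θ∗ = 1) and the starting values are all assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

g = lambda t: np.exp(-t**2)                  # assumed weight function
p, r = 3, 0.9                                # assumed number of design points and target efficiency
theta_star = 1.0                             # assumed support point of the 1-point c-optimal design
c_star = np.array([np.sqrt(g(theta_star)), theta_star * np.sqrt(g(theta_star))])

def eqs(v):
    t1, d = v
    thetas = t1 + d * np.arange(p)           # equally spaced support points
    X = np.array([[np.sqrt(g(t)), t * np.sqrt(g(t))] for t in thetas])
    # Equal weights q_i = 1/p: the average induced point must equal sqrt(r) * c*.
    return X.mean(axis=0) - np.sqrt(r) * c_star

theta1, d = fsolve(eqs, x0=[0.8, 0.3])
print(theta1, d)   # the equally weighted design on theta1, theta1 + d, ..., theta1 + (p-1)d
                   # has at least 100r% c-efficiency by Theorem 8
```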
6.3. The D-efficient designs

In this subsection we assume, as stated at the beginning of Section 6, that g(θ) satisfies the conditions of Theorem 2 (in particular, g is symmetric about 0), so that a D-optimal design ξ0 is a symmetric 2-point design with support points ±θ1∗, weight 1/2 on each, and det(I(α, β; ξ0)) = (θ1∗ g(θ1∗))². By Lemma 1 we may restrict attention to symmetric designs. Consider a symmetric 4-point design ξ4,r with support points θ1 > 0, −θ1, θ2 > 0, −θ2 and weights ½q1, ½q1, ½q2, ½q2, where q1 + q2 = 1.

Theorem 9. The design ξ4,r has at least 100r% D-efficiency if

$$
\sqrt{r}\,\theta_1^{*}\, g(\theta_1^{*})=q_1\theta_1 g(\theta_1)+q_2\theta_2 g(\theta_2).
$$

Proof. For the design ξ4,r, we calculate det(I(α, β; ξ4,r)):

$$
\det\bigl(I(\alpha,\beta;\xi_{4,r})\bigr)=\bigl(q_1 g(\theta_1)+q_2 g(\theta_2)\bigr)\bigl(q_1\theta_1^2 g(\theta_1)+q_2\theta_2^2 g(\theta_2)\bigr)
\ \ge\ \bigl(q_1\theta_1 g(\theta_1)+q_2\theta_2 g(\theta_2)\bigr)^2
= r\bigl(\theta_1^{*} g(\theta_1^{*})\bigr)^2
= r\det\bigl(I(\alpha,\beta;\xi_0)\bigr).
$$

Hence e_D(ξ4,r) ≥ r, i.e. ξ4,r has at least 100r% D-efficiency.

Application: We want to include a particular point θ1 > 0 (and −θ1, due to symmetry) in the design and find another point θ2 > 0 (and −θ2) so that the resulting 4-point design has at least 100r% D-efficiency. Link the two points (g(θ1), θ1 g(θ1)) and √r(g(θ1∗), θ1∗ g(θ1∗)), and extend the line until it intersects the curve {(g(θ), θ g(θ)) : θ > 0} again, say at the point corresponding to θ2; the weights q1 and q2 are read off from the resulting convex-combination representation, and the second coordinates then give exactly the condition of Theorem 9.

Similarly, we can establish the following theorem for 2p-point symmetric D-efficient designs.

Theorem 10. The design ξ2p,r with support points θ1 > 0, −θ1, …, θp > 0, −θp and with weights ½q1, ½q1, …, ½qp, ½qp has at least 100r% D-efficiency if

$$
\sqrt{r}\,\theta_1^{*}\, g(\theta_1^{*})=\sum_{i=1}^{p} q_i\theta_i g(\theta_i).
$$
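Numerically, the condition of Theorem 9 is a one-dimensional root-finding problem in θ2 once θ1 and the weights are fixed, and the attained D-efficiency can then be checked from the determinant ratio. A minimal sketch; g, θ1∗, r, θ1 and q1 are assumptions chosen only for illustration (θ1∗ = 1/√2 corresponds to the assumed g(θ) = exp(−θ²)).

```python
import numpy as np
from scipy.optimize import brentq

g = lambda t: np.exp(-t**2)       # assumed weight function (Example 2)
theta_opt = 1 / np.sqrt(2)        # assumed support point of the symmetric 2-point D-optimal design for this g
r, theta1, q1 = 0.9, 0.5, 0.5     # assumed target efficiency, fixed point and its weight
q2 = 1 - q1

# Theorem 9 condition: q1*theta1*g(theta1) + q2*theta2*g(theta2) = sqrt(r)*theta_opt*g(theta_opt).
target = (np.sqrt(r) * theta_opt * g(theta_opt) - q1 * theta1 * g(theta1)) / q2
theta2 = brentq(lambda t: t * g(t) - target, theta_opt, 5.0)

# D-efficiency of the symmetric 4-point design on {+-theta1, +-theta2}.
det_eff = (q1 * g(theta1) + q2 * g(theta2)) * (q1 * theta1**2 * g(theta1) + q2 * theta2**2 * g(theta2))
det_opt = (theta_opt * g(theta_opt)) ** 2
print(theta2, det_eff / det_opt)  # the efficiency ratio should come out at least r
```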
7. Conclusion

In this article, we considered a class of 2-parameter nonlinear models in which the Fisher information matrix of the maximum likelihood estimates has a particular general structure. We proved that any nonlinear model which yields this structure for its Fisher information matrix, and whose g(θ) satisfies our characterization conditions, produces a D-optimal design supported on precisely two points. We also introduced methods for constructing D- and c-efficient designs which are supported on more than the minimum number of points. These efficient designs are more useful for practical applications than D-optimal designs supported on only two points, while the D-optimal designs provide a benchmark for optimality. A new tool based on the theory of the maximum principle in differential equations was introduced for characterizing D-optimal designs for nonlinear models.

Acknowledgements

The authors thank the editorial board and the referee for their several constructive comments on an earlier version of this article.


References

Abdelbasit, K.M., Plackett, R.L., 1983. Experimental design for binary data. J. Amer. Statist. Assoc. 78, 90–98.
Chaloner, K., Verdinelli, I., 1995. Bayesian experimental design: a review. Statist. Sci. 10, 273–304.
Chernoff, H., 1953. Locally optimal designs for estimating parameters. Ann. Math. Statist. 24, 586–602.
DasGupta, A., 1996. Review of optimal Bayes designs. In: Ghosh, S., Rao, C.R. (Eds.), Handbook of Statistics, Vol. 13. Elsevier, Amsterdam, pp. 1099–1148.
Dette, H., Sahm, M., 1998. Minimax optimal designs in nonlinear regression models. Statist. Sinica 8, 1249–1264.
Elfving, G., 1952. Optimum allocation for linear regression theory. Ann. Math. Statist. 23, 255–262.
Fedorov, V.V., 1972. Theory of Optimal Experiments. Academic Press, New York.
Ford, I., Torsney, B., Wu, C.F.J., 1992. The use of a canonical form in the construction of locally optimal designs for non-linear problems. J. Roy. Statist. Soc. Ser. B 54, 569–583.
Hedayat, A.S., Yan, B., Pezzuto, J.M., 1997. Modelling and identifying optimum designs for fitting dose-response curves based on raw optical density data. J. Amer. Statist. Assoc. 92, 1132–1140.
Hedayat, A.S., Yan, B., Pezzuto, J.M., 1999. Design for fitting non-linear dose-response curves. Proceedings of the Biopharmaceutical Section of the American Statistical Association, pp. 132–137.
Hedayat, A.S., Yan, B., Pezzuto, J.M., 2002. Optimal designs for estimating EDp based on raw optical density data. J. Statist. Plann. Inference 104, 161–174.
Karlin, S., Studden, W.J., 1966a. Optimal experimental designs. Ann. Math. Statist. 37, 783–815.
Karlin, S., Studden, W.J., 1966b. Tchebycheff Systems: With Applications in Analysis and Statistics. Interscience, New York.
Kiefer, J., Wolfowitz, J., 1960. The equivalence of two extremum problems. Canad. J. Math. 12, 363–366.
Lin, Y.B., Studden, W.J., 1988. Efficient Ds-optimal designs for multivariate polynomial regression on the q-cube. Ann. Statist. 16, 1225–1240.
Mathew, T., Sinha, B.K., 2001. Optimal designs for binary data under logistic regression. J. Statist. Plann. Inference 93, 295–307.
Protter, M.H., Weinberger, H.F., 1967. Maximum Principles in Differential Equations. Prentice-Hall Inc., New Jersey.
Vila, J.P., 1991. Local optimality of replications from a minimal D-optimal design in regression: a sufficient and quasi-necessary condition. J. Statist. Plann. Inference 29, 261–277.
White, L., 1973. An extension of the general equivalence theorem to nonlinear models. Biometrika 60, 345–348.
Wu, C.F.J., 1988. Optimal design for percentile estimation of a quantal response curve. In: Dodge, Y., Fedorov, V., Wynn, H.P. (Eds.), Optimal Design and Analysis of Experiments. Elsevier, Amsterdam, pp. 213–223.
Wynn, H.P., 1970. The sequential generation of D-optimal experimental designs. Ann. Math. Statist. 41, 1655–1664.