Bayesian-type estimators of change points


Journal of Statistical Planning and Inference 91 (2000) 195–208

www.elsevier.com/locate/jspi

Bayesian-type estimators of change points

Jaromír Antoch, Marie Hušková∗
Department of Statistics, Charles University, Sokolovská 83, CZ-186 75 Praha 8-Karlín, Czech Republic

Abstract

The purpose of this paper is to introduce Bayesian-type estimators of change point(s). These estimators have smaller variance than the related argmax-type estimators and can also be viewed as one-step estimators for which the argmax-type estimators serve as the preliminary ones. The Bayesian least-squares (LS)-type estimators are introduced and studied. Confidence intervals for the change point based on bootstrap approximation are also constructed. Finite sample performance is checked in a simulation study. We concentrate on the shift in location model; however, an appropriately modified procedure can be applied to other models and to other types of preliminary estimators as well. © 2000 Elsevier Science B.V. All rights reserved.

MSC: primary 62G20; secondary 62E20; 60F17

Keywords: Estimators of change point(s); Fixed and local changes; Location model; Bootstrap

1. Introduction

Let Y_1, …, Y_n be independent observations having distribution function (cdf) F_n for 1 ≤ i ≤ m and cdf G_n (≠ F_n) for m < i ≤ n, where m (< n) is unknown and called the change point. The cdfs F_n and G_n may be known or unknown and typically, but not necessarily, depend on n. Usually, it is assumed that the differences F_n − G_n (≠ 0) either do not depend on n or tend to 0 in a certain way as n → ∞. The former case is called fixed changes, the latter local changes.

∗ Corresponding author. Supported by Grant GAČR 201/97/1163. E-mail addresses: [email protected] (J. Antoch), [email protected] (M. Hušková).

A number of authors, e.g. Hinkley (1970), Hinkley and Hinkley (1970), Cobb (1978), Worsley (1986), Carlstein (1988), Siegmund (1988), Ritov (1990), Dümbgen (1991), Brodsky and Darkhovsky (1993), Ferger (1994a,b,c), Antoch et al. (1995) and Gombay et al. (1996), proposed various estimators of the change point m in different models. These estimators are usually defined as an argmax (or argmin) of a



0378-3758/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0378-3758(00)00178-6


sequence of functionals; for a survey see, e.g., Csörgő and Horváth (1997) or Antoch and Hušková (1999).

Our primary interest is to construct Bayesian estimators of the change point m and to investigate their limit distribution. In this paper, we develop Bayesian-like LS-estimators of the change point m, investigate their limit distributions and construct related confidence intervals. As a motivation we use the results of Chapter 7 in Ibragimov and Has'minskii (1981) on the random variables

V = argmax{W(s) − |s|/2; s ∈ R¹}   (1.1)

and

V_B = ( ∫_{−∞}^{+∞} s exp{W(s) − |s|/2} ds ) / ( ∫_{−∞}^{+∞} exp{W(s) − |s|/2} ds ),   (1.2)

where {W(s); s ∈ R¹} is a two-sided standard Wiener process, i.e.,

W(s) = W₁(−s),  s < 0;   W(s) = W₂(s),  s ≥ 0,   (1.3)

with {W₁(t); t ∈ [0, ∞)} and {W₂(t); t ∈ [0, ∞)} being independent standard Wiener processes. Ibragimov and Has'minskii (1981) showed that V is related to the maximum likelihood estimator of the parameter θ when we observe a Wiener process with no drift up to time t = θ and with unit drift afterwards; V_B is related to the Bayesian estimator of θ with respect to quadratic loss. Moreover, they showed that var V_B / var V ≈ 0.73. However, the authors of this paper do not know of any work where the explicit form of the distribution function of V_B is derived.

Typically, in the case of local changes, properly standardized argmax-type estimators converge in distribution to the random variable V, or more generally to argmax{W(s) − g(s); s ∈ R¹} with a certain nonrandom function g. This leads to the idea of constructing estimators of the change that converge (after proper standardization) in distribution to V_B. In this paper, we concentrate on the estimators related to V_B, which will be called Bayesian-type estimators. We study their limit behavior both under fixed and local changes and show that the bootstrap method provides a reasonable approximation to their distributions. This is of particular interest because, while the explicit form of the distribution function of V has been derived by several authors (see Stryhn (1996) for the latest development), no such form is known for V_B. Moreover, in the case of fixed changes the estimators are not asymptotically distribution free.

In Section 2, we propose and study Bayesian-type estimators of the change point m related to the LS-type argmax estimator. In Section 3, we point to their parallel M- and R-counterparts. The consistency of the bootstrap approximation is shown in Section 4. The paper is completed by the results of a simulation study.
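The variance comparison var V_B ≈ 0.73 var V can be illustrated numerically. The following sketch (not from the paper; the grid width, the truncation B and the replication count are arbitrary illustration choices) discretizes the two-sided Wiener process on [−B, B] and evaluates both functionals (1.1) and (1.2) on each path:

```python
import numpy as np

# Monte Carlo sketch of V = argmax{W(s) - |s|/2} and the Bayes-type
# functional V_B from (1.1)-(1.2), using a discretized two-sided
# Wiener process on [-B, B].
rng = np.random.default_rng(0)
B, h, reps = 30.0, 0.05, 2000
s = np.arange(-B, B + h, h)                      # grid over [-B, B]

V, VB = np.empty(reps), np.empty(reps)
for r in range(reps):
    steps = rng.normal(scale=np.sqrt(h), size=s.size)
    W = np.empty(s.size)
    # two-sided Wiener process: independent halves glued at s = 0
    W[s >= 0] = np.cumsum(steps[s >= 0]) - steps[s >= 0][0]
    W[s < 0] = np.cumsum(steps[s < 0][::-1])[::-1]
    X = W - np.abs(s) / 2
    V[r] = s[np.argmax(X)]                       # argmax functional (1.1)
    w = np.exp(X - X.max())                      # stabilized weights
    VB[r] = np.sum(s * w) / np.sum(w)            # Bayes functional (1.2)

print(VB.var() / V.var())                        # roughly 0.7
```

The integrals in (1.2) are approximated by Riemann sums; subtracting the running maximum before exponentiating avoids overflow without changing the ratio.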


2. Bayesian LS-estimator of a change in location model

We assume that the observations follow the model

Y_i = μ + e_i,  i = 1, …, m,
Y_i = μ + δ_n + e_i,  i = m + 1, …, n,   (2.1)

where m (< n), μ and δ_n are unknown parameters and e_1, …, e_n are independent identically distributed (iid) random variables with zero mean and finite, generally unknown, variance σ² > 0.

The argmax least-squares (LS-)type estimator m̂ of the parameter m is defined as

m̂ = argmax{ √(n/(k(n − k))) |S_k|; k = 1, …, n − 1 },   (2.2)

where

S_k = Σ_{i=1}^{k} (Y_i − Ȳ_n),  1 ≤ k ≤ n,   (2.3)

Ȳ_k = (1/k) Σ_{i=1}^{k} Y_i,  1 ≤ k ≤ n.   (2.4)

Next we introduce the Bayesian least-squares estimator m̃ of m, which is of the form

m̃ = ( Σ_{k=1}^{n} k exp{w_{kn}} ) / ( Σ_{k=1}^{n} exp{w_{kn}} ),   (2.5)

where

w_{kn} = q̂_n Σ_{i=k+1}^{m̂} ê_i − q̂_n² |m̂ − k|/2,  1 ≤ k < m̂,
w_{kn} = 0,  k = m̂,
w_{kn} = −q̂'_n Σ_{i=m̂+1}^{k} ê'_i − q̂'_n² |m̂ − k|/2,  m̂ < k ≤ n,   (2.6)

with

ê_i = (Y_i − Ȳ_m̂)/σ̂_m̂,  1 ≤ i ≤ m̂;   ê'_i = (Y_i − Ȳ'_m̂)/σ̂'_m̂,  m̂ < i ≤ n;   q̂_n = δ̂_n/σ̂_m̂;   q̂'_n = δ̂_n/σ̂'_m̂,   (2.7)

Ȳ'_k = (1/(n − k)) Σ_{i=k+1}^{n} Y_i,  1 ≤ k < n,   (2.8)

σ̂²_k = (1/k) Σ_{i=1}^{k} (Y_i − Ȳ_k)²,  1 ≤ k ≤ n,   (2.9)

σ̂'²_k = (1/(n − k)) Σ_{i=k+1}^{n} (Y_i − Ȳ'_k)²,  1 ≤ k < n,   (2.10)

δ̂_n = Ȳ'_m̂ − Ȳ_m̂.   (2.11)
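The estimators (2.2) and (2.5)–(2.7) are straightforward to implement. The sketch below is an illustration, not the paper's reference implementation: variances are computed with denominators k and n − k as in (2.9)–(2.10), the preliminary estimate is clamped away from the sample ends to avoid degenerate variance estimates (a convenience not in the paper), and the function names are ours.

```python
import numpy as np

def argmax_ls_change(y):
    # argmax LS-type estimator (2.2): maximize sqrt(n/(k(n-k))) |S_k|
    y = np.asarray(y, float)
    n = len(y)
    k = np.arange(1, n)
    S = np.cumsum(y - y.mean())[:-1]              # S_k for k = 1..n-1
    stat = np.sqrt(n / (k * (n - k))) * np.abs(S)
    return int(k[np.argmax(stat)])

def bayes_ls_change(y):
    # Bayesian LS-type estimator (2.5)-(2.6), with the argmax estimate
    # as the preliminary one; the clamp away from the ends is an
    # illustration convenience, not part of the paper's definition
    y = np.asarray(y, float)
    n = len(y)
    m = min(max(argmax_ls_change(y), 2), n - 2)
    y1, y2 = y[:m], y[m:]
    d = y2.mean() - y1.mean()                     # delta_hat_n (2.11)
    q1, q2 = d / y1.std(), d / y2.std()           # q_hat_n, q'_hat_n (2.7)
    e1 = (y - y1.mean()) / y1.std()               # e_hat_i (2.7)
    e2 = (y - y2.mean()) / y2.std()               # e'_hat_i (2.7)
    w = np.zeros(n)                               # w_kn for k = 1..n
    for k in range(1, n + 1):
        if k < m:
            w[k - 1] = q1 * e1[k:m].sum() - q1**2 * (m - k) / 2
        elif k > m:
            w[k - 1] = -q2 * e2[m:k].sum() - q2**2 * (k - m) / 2
    p = np.exp(w - w.max())                       # stabilized exp weights
    return float(np.sum(np.arange(1, n + 1) * p) / p.sum())
```

For data with a clear shift both estimators land near the true m; note that m̃ is generally non-integer, being a weighted average of candidate change points.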


Remark. The estimator m̃ can be rewritten in the form

m̃ = m̂ + ( Σ_{k=1}^{n} (k − m̂) exp{w_{kn}} ) / ( Σ_{k=1}^{n} exp{w_{kn}} ),

which can be interpreted as saying that m̃ is a one-step estimator with m̂ as the preliminary one:

{ exp{w_{kn}} / Σ_{j=1}^{n} exp{w_{jn}} }_{k=1}^{n}

can be viewed as a posterior distribution of m, and then m̃ is the Bayesian estimator of m with respect to quadratic loss. In particular, the weights defined above are approximations to the posterior distribution in the case where the e_i have the N(0, σ²) distribution and the prior is of the form P(m = k) = g(k/n), with g a smooth function on (0, 1).

In the sequel, we study the limit properties (as n → ∞) of the considered estimators. Namely, we formulate results on the limit behavior of the estimators m̂ and m̃ in two situations:

(i) δ_n = δ ≠ 0 is fixed (fixed change);
(ii) δ_n ≠ 0, δ_n → 0 as n → ∞ (local change).

Theorem 2.1 concerns fixed changes and Theorem 2.2 describes local changes.

Theorem 2.1 (Fixed change). Let the random variables Y_1, …, Y_n follow model (2.1) with δ_n = δ ≠ 0 fixed, and let e_1, …, e_n be iid random variables with zero mean, nonzero variance σ² and E|e_i|^{2+Δ} < ∞ for some Δ > 0. Assume, moreover, that

m = ⌊nγ⌋  with some γ ∈ (0, 1),   (2.12)

where ⌊a⌋ denotes the integer part of a.

• Then m̂ − m converges, as n → ∞, in distribution to

argmax{w_k; k = 0, ±1, ±2, …},   (2.13)

where

w_k = (δ/σ) Σ_{i=k+1}^{0} Z_i − (δ²/σ²) |k|/2,  −∞ < k < 0,
w_k = 0,  k = 0,
w_k = −(δ/σ) Σ_{i=1}^{k} Z_i − (δ²/σ²) k/2,  0 < k < ∞,   (2.14)

with {Z_i}_{i=−∞}^{∞} being iid distributed as e_1/σ.

• Also, m̃ − m converges, as n → ∞, in distribution to

( Σ_{k=−∞}^{∞} k exp{w_k} ) / ( Σ_{k=−∞}^{∞} exp{w_k} ).   (2.15)
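The limit laws (2.13) and (2.15) are easy to sample, which gives a quick numerical picture of the variance reduction achieved by the Bayesian-type estimator. A sketch (standard normal Z_i, δ/σ = 1 and the truncation of k to [−K, K] are illustration choices):

```python
import numpy as np

# Sample the fixed-change limit laws (2.13) and (2.15) with standard
# normal Z_i, q = delta/sigma = 1, truncating k to [-K, K].
rng = np.random.default_rng(0)
K, reps, q = 200, 4000, 1.0
k = np.arange(-K, K + 1)

amax = np.empty(reps)
bayes = np.empty(reps)
for r in range(reps):
    Z = rng.normal(size=2 * K + 1)
    w = np.empty(2 * K + 1)
    # w_k: two-sided random walk minus the drift q^2 |k| / 2, w_0 = 0
    w[K] = 0.0
    w[K + 1:] = -q * np.cumsum(Z[K + 1:]) - q**2 * np.arange(1, K + 1) / 2
    w[:K] = q * np.cumsum(Z[:K][::-1])[::-1] - q**2 * np.arange(K, 0, -1) / 2
    amax[r] = k[np.argmax(w)]                   # limit of m_hat - m (2.13)
    p = np.exp(w - w.max())
    bayes[r] = np.sum(k * p) / np.sum(p)        # limit of m_tilde - m (2.15)

print(bayes.var(), amax.var())                  # Bayes variance is smaller
```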


Theorem 2.2 (Local change). Let the random variables Y_1, …, Y_n follow model (2.1) with e_1, …, e_n iid random variables with zero mean, nonzero variance σ², E|e_i|^{2+Δ} < ∞ for some Δ > 0, and m = ⌊nγ⌋ with some unknown γ ∈ (0, 1). We assume, moreover, that

δ_n → 0  and  |δ_n| √n / √(log log n) → ∞  as n → ∞.   (2.16)

Then, as n → ∞,

(δ_n²/σ²)(m̂ − m) →_D V   (2.17)

and

(δ_n²/σ²)(m̃ − m) →_D V_B,   (2.18)

where V and V_B are defined in (1.1) and (1.2), respectively.

Proof. We first gather some material proved elsewhere.

Local alternatives. The result (2.17) on the limit distribution of m̂ under the local alternatives was proved in Antoch et al. (1995). It implies, as n → ∞, that

m̂ − m = O_P(δ_n^{−2}).

Then by a few standard steps we have, as n → ∞,

δ̂_n − δ_n = o_P(δ_n),  σ̂²_m̂ − σ² = O_P(n^{−β})  and  σ̂'²_m̂ − σ² = O_P(n^{−β})

with some β > 0.

Further, put q_n* = δ_n/σ and assume that m < m̂. Notice that the estimator m̃ will not change if the weights w_{kn} are replaced by the weights

w*_{kn} = w_{kn} − q_n* Σ_{i=m+1}^{m̂} e_i/σ − q_n*² (m̂ − m)/2.

Next, we define an auxiliary estimator by replacing q̂_n and q̂'_n by q_n*, and σ̂²_m̂ and σ̂'²_m̂ by σ². We denote this estimator by m̂_a and find that it can be expressed in the form

m̂_a = ( Σ_{k=1}^{n} k exp{v_{kn}} ) / ( Σ_{k=1}^{n} exp{v_{kn}} ),   (2.19)

where, with ē_m̂ = (1/m̂) Σ_{i=1}^{m̂} e_i and ē'_m̂ = (1/(n − m̂)) Σ_{i=m̂+1}^{n} e_i,

v_{kn} = q_n* Σ_{i=k+1}^{m} e_i/σ − q_n*² |m − k|/2 − q_n*² (m̂ − k) ( ē_m̂/δ_n + (m̂ − m)/m̂ ),  1 ≤ k < m,
v_{kn} = −q_n* Σ_{i=m+1}^{k} e_i/σ − q_n*² |m − k|/2 − q_n*² (k − m̂) ē_m̂/δ_n − q_n*² (m̂ − k) (m̂ − m)/m̂,  m ≤ k < m̂,
v_{kn} = −q_n* Σ_{i=m+1}^{k} e_i/σ − q_n*² |m − k|/2 + q_n* (k − m̂) ē'_m̂/σ,  m̂ < k ≤ n.


We show that, as n → ∞,

q_n²(m̂_a − m) = O_P(1),   (2.20)

q_n²(m̂_a − m) →_D V_B,   (2.21)

q_n²(m̃ − m̂_a) →_P 0.   (2.22)

These three assertions then imply (2.18).

We start with (2.20). Choose B_n such that, as n → ∞,

B_n → ∞  and  (|q_n| √n) / (B_n √(log log n)) → ∞.   (2.23)

By the Hájek–Rényi inequality we have

max_{|k−m| ≥ q_n^{−2} B_n} | (1/(q_n (m − k))) Σ_{i=min(k,m)+1}^{max(k,m)} e_i | = O_P(B_n^{−1/2}).   (2.24)

Moreover, by (2.17) we also have m̂ − m = O_P(q_n^{−2}), which in combination with (2.24) yields

v_{kn} = −(1/2) q_n² |m − k| (1 + o_P(1))

uniformly for |k − m| ≥ q_n^{−2} B_n. Hence, for every η ∈ (0, 1/2) and n large enough we have

Σ_{|k−m| ≥ q_n^{−2} B_n} |k − m| exp{v_{kn}} = O_P( Σ_{|k−m| ≥ q_n^{−2} B_n} |k − m| exp{−η q_n² |m − k|} ) = O_P(q_n^{−2} B_n exp{−η B_n}) = o_P(q_n^{−2})

and

Σ_{|k−m| ≥ q_n^{−2} B_n} exp{v_{kn}} = O_P( Σ_{|k−m| ≥ q_n^{−2} B_n} exp{−η q_n² |m − k|} ) = O_P(exp{−η B_n}) = o_P(1).

Standard steps give, as n → ∞,

v_{kn} − v_k = o_P(1)

uniformly for |k − m| ≤ q_n^{−2} B_n, where

v_k = sign(m − k) q_n Σ_{i=min(k,m)+1}^{max(k,m)} e_i/σ − q_n² |k − m|/2.   (2.25)

The last three results imply that

q_n²(m̂_a − m) = ( Σ_{|k−m| ≤ B_n q_n^{−2}} (k − m) q_n² exp{v_k} (1 + o_P(1)) + o_P(1) ) / ( Σ_{|k−m| ≤ B_n q_n^{−2}} exp{v_k} (1 + o_P(1)) + o_P(1) ).   (2.26)


The weak convergence of partial sums of independent random variables yields

{ q_n Σ_{i=m+⌊s q_n^{−2}⌋+1}^{m} e_i/σ; s ∈ [−B, 0] } →_{D[−B,0]} {W_1(−s); s ∈ [−B, 0]}   (2.27)

and

{ −q_n Σ_{i=m+1}^{m+⌊s q_n^{−2}⌋} e_i/σ; s ∈ [0, B] } →_{D[0,B]} {W_2(s); s ∈ [0, B]}   (2.28)

for every B > 0, where {W_1(s); s ∈ [0, ∞)} and {W_2(s); s ∈ [0, ∞)} are independent standard Wiener processes. Now, (2.26)–(2.28) imply (2.20) and (2.21).

In order to show (2.22), we realize that q̂_n − q_n = o_P(q_n) and also

Σ_{i=min(k,m)+1}^{max(k,m)} (ê_i − e_i/σ) = o_P( Σ_{i=min(k,m)+1}^{max(k,m)} e_i/σ ) + O_P( |m − k| (n^{−1/2} + δ_n/n) )

uniformly for 1 ≤ k ≤ n. Then going through the above part of the proof we find that (2.22) holds. We proceed quite analogously if m̂ < m. This completes the proof.

Proof of Theorem 2.1. Assertion (2.13) is proved in Antoch and Hušková (1999). Concerning the proof of (2.15), we proceed as in the proof of (2.18). We find that (2.13) implies m̂ − m = O_P(1), q̂_n − δ/σ = o_P(1) and q̂'_n − δ/σ = o_P(1), so that going through the proof of Theorem 2.2 we can conclude that all results hold true if δ is fixed. The details are omitted.

Remark 2.1. Going through the proof of (2.18) of Theorem 2.2, we can suggest several modifications of the estimator m̃ that have the same limit distribution. For instance, assertion (2.18) (respectively (2.13)) remains true if the estimators m̂ and q̂_n are replaced by any estimators m̄ and q̄_n with the properties

m̄ − m = O_P(q_n^{−2})  and  q̄_n − q_n = o_P(q_n).

Remark 2.2. The estimator

m̃̃ = ( Σ_{|k−m̂| ≤ B_n q̂_n^{−2}} k exp{w_{kn}} ) / ( Σ_{|k−m̂| ≤ B_n q̂_n^{−2}} exp{w_{kn}} ),   (2.29)

where {B_n} has the properties B_n ≤ min(m̂, n − m̂) and B_n → ∞, has the same limit distribution as m̃. This means that B_n can be chosen to tend very slowly to ∞. This is due to the fact that the weights exp{w_{kn}} for k not close to the change point m (and/or to the argmax estimator m̂) are negligible and do not influence the limit behavior of the considered estimator.
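The point of Remark 2.2, that the weights far from the argmax are negligible, can be seen on a single realization of the limit weights w_k from (2.14): truncating the Bayes sums to a window around the argmax changes the result only marginally. A sketch (q, K and the window size are illustration choices):

```python
import numpy as np

# exp{w_k} decays geometrically away from the argmax, so truncating
# the Bayes sums to a window changes almost nothing (Remark 2.2).
rng = np.random.default_rng(2)
K, q = 400, 1.0
k = np.arange(-K, K + 1)
Z = rng.normal(size=2 * K + 1)
w = np.empty(2 * K + 1)
w[K] = 0.0
w[K + 1:] = -q * np.cumsum(Z[K + 1:]) - q**2 * np.arange(1, K + 1) / 2
w[:K] = q * np.cumsum(Z[:K][::-1])[::-1] - q**2 * np.arange(K, 0, -1) / 2

p = np.exp(w - w.max())
full = np.sum(k * p) / np.sum(p)                 # full Bayes sum (2.15)
khat = k[np.argmax(w)]                           # argmax (2.13)
win = np.abs(k - khat) <= 40                     # truncation window
trunc = np.sum(k[win] * p[win]) / np.sum(p[win])
print(abs(full - trunc))                         # very small
```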


3. Bayesian M- and R-estimators of a change in location model

Antoch and Hušková (2000) studied Bayesian-like R- and M-estimators of the change point. Similarly to the results of the present paper, they showed that these estimators have smaller variances than the related argmax-type estimators. Confidence intervals for the change point based on exchangeability arguments and the bootstrap were constructed and the theoretical results were illustrated on a real data set.

4. Bootstrap approximation

We now turn to the problem of bootstrap approximations to the distribution function of the Bayesian estimator m̃ of m studied in Section 2. This will allow us to construct asymptotic approximations to the confidence intervals for m. Bootstrap approximation for the argmax-type estimators is discussed, e.g., in Dümbgen (1991), Antoch et al. (1995) and Antoch and Hušková (1999), as well as in the references mentioned there.

Bootstrap sampling scheme. Find the argmax estimator m̂ given by (2.2), the Bayesian-type estimator m̃ and the estimators Ȳ_m̃, Ȳ'_m̃, σ̃²_n, q̃_n and q̃'_n defined by (2.4), (2.8)–(2.11), respectively, with m̂ replaced by m̃ where necessary. Take two independent samples Y*_1, …, Y*_m̃ and Y*_{m̃+1}, …, Y*_n from the empirical cdfs of Y_1, …, Y_m̃ and Y_{m̃+1}, …, Y_n, respectively. Compute the bootstrap counterparts ẽ*_i of ê_i as

ẽ*_i = (Y*_i − Ȳ_m̃)/σ̃_m̃,  1 ≤ i ≤ m̃;   ẽ'*_i = (Y*_i − Ȳ'_m̃)/σ̃'_m̃,  m̃ < i ≤ n.

The bootstrap values w*_{kn} are obtained from

w*_{kn} = q̃_n Σ_{i=k+1}^{m̃} ẽ*_i − q̃_n² |m̃ − k|/2,  1 ≤ k < m̃,
w*_{kn} = 0,  k = m̃,
w*_{kn} = −q̃'_n Σ_{i=m̃+1}^{k} ẽ'*_i − q̃'_n² |m̃ − k|/2,  m̃ < k ≤ n.   (4.1)

Finally, the bootstrap estimator m̃* is defined by (2.5) with the weights w_{kn} replaced by their bootstrap counterparts w*_{kn}.

However, it is more convenient to use a slightly modified estimator, namely

m̃̃* = m̂* + ( Σ_{|k−m̂*| ≤ B_n q̃_n^{−2}} (k − m̂*) exp{w*_{kn}} ) / ( Σ_{|k−m̂*| ≤ B_n q̃_n^{−2}} exp{w*_{kn}} )   (4.2)

with B_n fulfilling, as n → ∞,

B_n ≤ min(m̃, n − m̃)  and  B_n → ∞.   (4.3)
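The sampling scheme above can be sketched in code. The illustration below resamples the two segments separately and returns a simple percentile interval for m from the bootstrap replicates of the Bayesian estimate, rather than the q̃-standardized interval developed below; all function names and tuning constants are ours, and the clamp on the preliminary estimate is a convenience not in the paper.

```python
import numpy as np

def bayes_change(y):
    # compact Bayesian LS change-point estimate: argmax step (2.2)
    # followed by the exponential reweighting (2.5)-(2.6)
    y = np.asarray(y, float)
    n = len(y)
    kk = np.arange(1, n)
    S = np.cumsum(y - y.mean())[:-1]
    m = int(kk[np.argmax(np.sqrt(n / (kk * (n - kk))) * np.abs(S))])
    m = min(max(m, 2), n - 2)                  # avoid degenerate segments
    y1, y2 = y[:m], y[m:]
    d = y2.mean() - y1.mean()
    q1, q2 = d / y1.std(), d / y2.std()
    e1 = (y - y1.mean()) / y1.std()
    e2 = (y - y2.mean()) / y2.std()
    w = np.zeros(n)
    for k in range(1, n + 1):
        if k < m:
            w[k - 1] = q1 * e1[k:m].sum() - q1**2 * (m - k) / 2
        elif k > m:
            w[k - 1] = -q2 * e2[m:k].sum() - q2**2 * (k - m) / 2
    p = np.exp(w - w.max())
    return float(np.sum(np.arange(1, n + 1) * p) / p.sum())

def bootstrap_ci(y, alpha=0.10, B=200, seed=0):
    # resample the two segments separately, as in the sampling scheme
    # above; percentile interval for m from the replicates of m_tilde
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    m = int(round(bayes_change(y)))
    y1, y2 = y[:m], y[m:]
    stars = [bayes_change(np.r_[rng.choice(y1, m), rng.choice(y2, len(y2))])
             for _ in range(B)]
    return tuple(np.quantile(stars, [alpha / 2, 1 - alpha / 2]))
```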


The estimator m̃̃* is the bootstrap counterpart of the estimator m̃̃ defined by (2.29). Of course, other bootstrap estimators can also be proposed.

We denote by P* and E* the probability and the expectation, respectively, corresponding to the bootstrap sampling scheme given Y_1, …, Y_n. In the following two theorems we examine the performance of

P*(q̃_n²(m̃* − m̃) ≤ x)  and  P*(q̃_n²(m̃̃* − m̃̃) ≤ x)

as estimators of

P(q_n²(m̃ − m) ≤ x)  and  P(q_n²(m̃̃ − m) ≤ x),

both for fixed and local changes.

Theorem 4.1 (Fixed change). Let the assumptions of Theorem 2.1 be satisfied. Then, as n → ∞,

P*(m̃* − m̃ ≤ x) − P(m̃ − m ≤ x) →_P 0,  x ∈ R¹,   (4.4)

P*(m̃̃* − m̃̃ ≤ x) − P(m̃̃ − m ≤ x) →_P 0,  x ∈ R¹,   (4.5)

with B_n satisfying (4.3).

Theorem 4.2 (Local change). Let the assumptions of Theorem 2.2 be satisfied. Then, as n → ∞,

sup_{x∈R¹} |P*(q̃_n²(m̃* − m̃) ≤ x) − P(q_n²(m̃ − m) ≤ x)| →_P 0,   (4.6)

sup_{x∈R¹} |P*(q̃_n²(m̃̃* − m̃̃) ≤ x) − P(q_n²(m̃̃ − m) ≤ x)| →_P 0,   (4.7)

where q̃_n and σ̃²_n are defined by (2.7) and (2.10), respectively, with m̂ replaced by m̃, and {B_n}_n satisfies (4.3).

Remark 4.1. Theorem 4.2 implies that, as n → ∞,

sup_{x∈R¹} |P*(q̃_n²(m̃* − m̃) ≤ x) − P(V_B ≤ x)| →_P 0.

The assertion remains true if the estimators m̃ and m̃* are replaced by m̃̃ and m̃̃*.

Remark 4.2. According to Theorems 4.1 and 4.2 the bootstrap 'works' both for fixed and for local changes. Both theorems enable us to construct a bootstrap approximation to the confidence interval for m. In the case of fixed changes, by Theorem 2.1 the limiting cdf of m̃ − m is not continuous and therefore the approximation to the quantiles does not exist for all α ∈ (0, 1). One then usually works with a nominal confidence level.


In the case of local changes, by Theorem 2.2 the resulting limit cdf of the estimator is continuous and therefore the described bootstrap sampling scheme also provides bootstrap approximations to the 100(1 − α)% quantile of the distribution of q̃_n²(m̃ − m) for arbitrary α ∈ (0, 1). This means that in the case of local changes the bootstrap approximation to the 100(1 − α)% confidence interval has the form

(m̃ − h*(α/2) q̃_n^{−2}, m̃ + h*(1 − α/2) q̃_n^{−2}),   (4.8)

where h*(α), α ∈ (0, 1), is the bootstrap approximation to the corresponding quantile. A similar bootstrap scheme can be used when working with the Bayesian M- and R-estimators; see Antoch and Hušková (2000) for details.

We give only a sketch of the proof of Theorem 4.2; the proof of Theorem 4.1 is omitted since it is very similar to that of Theorem 4.2.

Proof of Theorem 4.2. We follow the line of the proof of Theorem 2.2. However, in the bootstrap world m̃ is the true change point, which is known, and thus the situation is slightly simpler. Standard arguments give

E* ẽ*_i = 0,   (4.9)

E*|ẽ*_i|^r = (1/m̃) Σ_{i=1}^{m̃} |ẽ_i|^r,  i = 1, …, m̃,
E*|ẽ'*_i|^r = (1/(n − m̃)) Σ_{i=m̃+1}^{n} |ẽ'_i|^r,  i = m̃ + 1, …, n,   (4.10)

for r > 0, and also, with probability tending to 1,

E*|ẽ*_i|^r → E|e_i|^r / σ^r,  i = 1, …, n,  0 < r < 2 + Δ.   (4.11)

By the Hájek–Rényi inequality we have for every A > 0

P*( max_{|k−m̃| ≥ q̃_n^{−2} B_n} | (1/(q̃_n (m̃ − k))) Σ_{i=min(k,m̃)+1}^{max(k,m̃)} ẽ*_i | ≥ A ) ≤ (1/(A² B_n)) (var* ẽ*_1 + var* ẽ'*_n).

Applying this inequality with A = A_n, A_n → 0, A_n² B_n → ∞, and using (4.9)–(4.11), we see that with probability tending to 1

w*_{kn} ≤ −q̃_n² (1 − A_n) |m̃ − k|/2  for |k − m̃| ≥ q̃_n^{−2} B_n,

which further implies that there exists D > 0 such that, as n → ∞,

P( Σ_{|m̃−k| ≥ q̃_n^{−2} B_n} |k − m̃| exp{w*_{kn}} ≤ D exp{−B_n(1 − A_n)/2} ) → 1

and

P( Σ_{|m̃−k| ≥ q̃_n^{−2} B_n} exp{w*_{kn}} ≤ D exp{−B_n(1 − A_n)/2} ) → 1.


The weak convergence of partial sums of independent random variables yields, conditionally on Y_1, …, Y_n, for every B > 0, as n → ∞,

{ q̃_n Σ_{i=m̃+⌊s q̃_n^{−2}⌋+1}^{m̃} ẽ*_i; s ∈ [−B, 0] } →_{D[−B,0]} {W_1(−s); s ∈ [−B, 0]}   (4.12)

and

{ −q̃_n Σ_{i=m̃+1}^{m̃+⌊s q̃_n^{−2}⌋} ẽ'*_i; s ∈ [0, B] } →_{D[0,B]} {W_2(s); s ∈ [0, B]},   (4.13)

where {W_1(s); s ∈ [0, ∞)} and {W_2(s); s ∈ [0, ∞)} are independent standard Wiener processes. Applying (4.12) and (4.13), we finish the proof of (4.6) as in the proof of Theorem 2.2. The assertion (4.7) is evident from the proof of (4.6).

5. Simulations

To see the behavior of the suggested Bayes estimator, we prepared a simulation study. We used model (2.1) with n = 80; m = 20, 40 and 60; δ = 1 and 1.5; and normally and Laplace-distributed errors with σ = 1 and 1.5. Selected results are summarized in Tables 1–16. These tables provide the mean values and standard deviations of the

Table 1: n = 80, m = 20, μ = 0, δ = 1, L(e) ∼ N(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  21.51  20.99   1.08   1.02  −0.05  −0.02
Std    8.18   7.84   0.28   0.30   0.27   0.28

Table 2: n = 80, m = 20, μ = 0, δ = 1, L(e) ∼ N(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  25.67  25.15   1.17   1.06  −0.10  −0.02
Std   15.42  14.99   0.55   0.54   0.46   0.46

Table 3: n = 80, m = 20, μ = 0, δ = 1, L(e) ∼ L(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  21.48  20.95   1.08   1.02  −0.03  −0.01
Std    8.30   7.98   0.28   0.30   0.27   0.28


Table 4: n = 80, m = 20, μ = 0, δ = 1, L(e) ∼ L(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  25.72  25.25   1.16   1.05  −0.09  −0.01
Std   15.65  15.30   0.57   0.57   0.47   0.48

Table 5: n = 80, m = 20, μ = 0, δ = 1.5, L(e) ∼ N(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  20.21  19.71   1.54   1.50  −0.03  −0.01
Std    2.98   2.73   0.26   0.27   0.24   0.25

Table 6: n = 80, m = 20, μ = 0, δ = 1.5, L(e) ∼ N(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  21.51  20.99   1.62   1.53  −0.07  −0.02
Std    8.18   7.84   0.42   0.44   0.40   0.42

Table 7: n = 80, m = 20, μ = 0, δ = 1.5, L(e) ∼ L(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  20.17  19.65   1.53   1.51  −0.03  −0.01
Std    2.93   2.70   0.26   0.27   0.24   0.25

Table 8: n = 80, m = 20, μ = 0, δ = 1.5, L(e) ∼ L(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  21.48  20.95   1.62   1.53  −0.07  −0.02
Std    8.30   7.98   0.42   0.45   0.40   0.42

Table 9: n = 80, m = 40, μ = 0, δ = 1, L(e) ∼ N(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  40.00  39.94   1.06   1.01  −0.03  −0.01
Std    6.51   6.08   0.22   0.23   0.18   0.18

estimators of the change point based on 10 000 simulations for different setups. While m̂, δ̂ and μ̂ correspond to the classical estimator m̂ defined by (2.2), the values m̃, δ̃ and μ̃ correspond to the Bayes estimator m̃ defined by (2.5). The results for m = 60 are not presented, being parallel to those for m = 20. Our results clearly show the advantage of using the Bayesian estimator.
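A reduced version of the Table 1 setup (n = 80, m = 20, δ = 1, N(0, 1) errors) can be rerun as follows; the replication count is cut down from 10 000 to keep the sketch fast, and the argmax is restricted to 2 ≤ k ≤ n − 2 as a convenience to avoid degenerate variance estimates:

```python
import numpy as np

# Reduced rerun of the Table 1 comparison of m_hat (2.2) and
# m_tilde (2.5); reps is an illustration choice.
rng = np.random.default_rng(0)
n, m0, delta, reps = 80, 20, 1.0, 500

def estimates(y):
    # argmax LS estimate (2.2) and Bayes estimate (2.5) side by side
    kk = np.arange(2, n - 1)                    # k restricted to 2..n-2
    S = np.cumsum(y - y.mean())
    m = int(kk[np.argmax(np.sqrt(n / (kk * (n - kk))) * np.abs(S[kk - 1]))])
    y1, y2 = y[:m], y[m:]
    d = y2.mean() - y1.mean()
    q1, q2 = d / y1.std(), d / y2.std()
    e1 = (y - y1.mean()) / y1.std()
    e2 = (y - y2.mean()) / y2.std()
    w = np.zeros(n)
    for k in range(1, n + 1):
        if k < m:
            w[k - 1] = q1 * e1[k:m].sum() - q1**2 * (m - k) / 2
        elif k > m:
            w[k - 1] = -q2 * e2[m:k].sum() - q2**2 * (k - m) / 2
    p = np.exp(w - w.max())
    return m, float(np.sum(np.arange(1, n + 1) * p) / p.sum())

res = np.array([estimates(np.r_[np.zeros(m0), np.full(n - m0, delta)]
                          + rng.normal(size=n)) for _ in range(reps)])
print(res.mean(axis=0), res.std(axis=0))        # compare with Table 1
```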


Table 10: n = 80, m = 40, μ = 0, δ = 1, L(e) ∼ N(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  39.95  39.95   1.18   1.07  −0.10  −0.05
Std   12.21  11.71   0.41   0.43   0.34   0.34

Table 11: n = 80, m = 40, μ = 0, δ = 1, L(e) ∼ L(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  39.92  39.87   1.06   1.01  −0.03  −0.02
Std    6.50   6.21   0.22   0.24   0.19   0.19

Table 12: n = 80, m = 40, μ = 0, δ = 1, L(e) ∼ L(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  40.09  39.97   1.17   1.06  −0.09  −0.04
Std   11.93  11.48   0.44   0.46   0.35   0.35

Table 13: n = 80, m = 40, μ = 0, δ = 1.5, L(e) ∼ N(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  40.02  39.98   1.52   1.49  −0.01  −0.01
Std    2.69   2.46   0.22   0.23   0.17   0.17

Table 14: n = 80, m = 40, μ = 0, δ = 1.5, L(e) ∼ N(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  40.00  39.94   1.59   1.52  −0.05  −0.02
Std    6.51   6.09   0.32   0.35   0.27   0.27

Table 15: n = 80, m = 40, μ = 0, δ = 1.5, L(e) ∼ L(0, 1)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  40.01  39.96   1.53   1.49  −0.01  −0.01
Std    2.64   2.47   0.22   0.23   0.16   0.17

Table 16: n = 80, m = 40, μ = 0, δ = 1.5, L(e) ∼ L(0, 1.5²)

        m̂      m̃      δ̂      δ̃      μ̂      μ̃
Mean  39.92  39.87   1.60   1.52  −0.05  −0.02
Std    6.50   6.21   0.33   0.36   0.28   0.28


References

Antoch, J., Hušková, M., 1999. Estimators of changes. In: Ghosh, S. (Ed.), Nonparametrics, Asymptotics and Time Series. Marcel Dekker, New York, pp. 533–578.
Antoch, J., Hušková, M., 2000. Bayesian like M- and R-estimators of changes. Discussiones Mathematicae, Probability and Statistics 20, 115–134.
Antoch, J., Hušková, M., Veraverbeke, N., 1995. Change-point problem and bootstrap. J. Nonparam. Statist. 5, 123–144.
Brodsky, B.S., Darkhovsky, B.E., 1993. Nonparametric Methods in Change-Point Problems. Kluwer Academic Publishers, Dordrecht.
Carlstein, E., 1988. Nonparametric estimation of a change-point. Ann. Statist. 16, 188–197.
Cobb, G.W., 1978. The problem of the Nile: conditional solution to a change-point problem. Biometrika 65, 243–251.
Csörgő, M., Horváth, L., 1997. Limit Theorems in Change-Point Analysis. Wiley, New York.
Dümbgen, L., 1991. The asymptotic behavior of some nonparametric change-point estimators. Ann. Statist. 19, 1471–1494.
Ferger, D., 1994a. On the rate of almost sure convergence of Dümbgen change-point estimators. Statist. Probab. Lett. 25, 27–31.
Ferger, D., 1994b. Asymptotic distribution theory of change-point estimators and confidence intervals based on bootstrap approximation. Math. Methods Statist. 3, 362–378.
Ferger, D., 1994c. Change-point estimators in case of small disorders. J. Statist. Plann. Inference 40, 33–49.
Gombay, E., Horváth, L., Hušková, M., 1996. Estimators and tests for change in variances. Statist. Decisions 14, 145–159.
Hinkley, D.V., 1970. Inference about the change-point in a sequence of random variables. Biometrika 57, 1–17.
Hinkley, D.V., Hinkley, E.A., 1970. Inference about the change-point in a sequence of binomial variables. Biometrika 57, 477–488.
Ibragimov, I.A., Has'minskii, R.Z., 1981. Statistical Estimation. Asymptotic Theory. Springer, Heidelberg.
Ritov, Y., 1990. Asymptotic efficient estimation of the change-point with unknown distribution. Ann. Statist. 18, 1829–1839.
Siegmund, D., 1988. Confidence sets in change-point problems. Internat. Statist. Rev. 56, 31–48.
Stryhn, H., 1996. The location of the maximum of asymmetric two-sided Brownian motion with triangular drift. Statist. Probab. Lett. 29, 279–284.
Worsley, K.J., 1986. Confidence regions and tests for a change-point in a sequence of exponential family random variables. Biometrika 73, 91–104.