Implementing empirical characteristic function procedures


Statistics & Probability Letters 4 (1986) 65-67
North-Holland

March 1986

IMPLEMENTING EMPIRICAL CHARACTERISTIC FUNCTION PROCEDURES

A.H. WELSH

The University of Chicago, 5734 University Avenue, Chicago, IL 60637, USA

Received July 1985
Revised October 1985

Abstract: The problem of estimating the first positive zero of the real part of a characteristic function is discussed. Knowledge of the location of this zero is essential for the application of inferential procedures based on the empirical characteristic function. A simple explicit nonparametric estimator is proposed and shown to solve simultaneously both the finite sample and asymptotic problems.

1. Introduction

The recent work of Feuerverger and Mureika (1977), Csörgő (1981) and Marcus (1981) on the empirical characteristic function has led directly to the development of procedures which depend on the behavior of the empirical characteristic function at one or more data-determined points. These procedures include various tests (Csörgő and Heathcote (1982), Welsh (1984), Csörgő (1985a, b, c)) and the functional least squares estimation procedure (Csörgő (1983), Heathcote and Welsh (1983), Welsh and Nicholls (1986)). In order for these procedures to be implemented, a working interval in the neighborhood of the origin in which to choose the points must be identified. The efficacy of the procedures increases with the size of the interval, but if it becomes so large that the real part of the characteristic function vanishes, the procedures break down. The possible extent of the interval is determined by the zeroes of the characteristic function, and in this paper we consider the problem of estimating these zeroes. Specifically, let $X_1, \ldots, X_n$ be independent, identically distributed real random variables with common distribution function $F$ and characteristic function

$$c(t) = \int_{-\infty}^{\infty} \exp(itx)\, dF(x) = u(t) + iv(t), \qquad t \in \mathbb{R}.$$

Since $u(t)$ is symmetric, we consider the problem of estimating the first positive zero of $u(t)$,
$$a_0 = \inf\{t \geq 0 : u(t) = 0\}.$$
Let
$$C_n(t) = n^{-1} \sum_{j=1}^{n} \exp(itX_j) = U_n(t) + iV_n(t), \qquad t \in \mathbb{R},$$
and define the random variable
$$A_n = \inf\{t \geq 0 : U_n(t) = 0\}.$$
It is not hard to show (see the proof of the theorem below) that $A_n \to a_0$ almost surely as $n \to \infty$ provided $u(t) < 0$ for some $t < \infty$. Distributions for which this holds include the normal distribution with non-zero mean, the stable distributions other than the symmetric stable distributions with zero location, and the uniform distribution. For $n > 1$, it is impossible to calculate $A_n$ explicitly. Since the equation $u(t) = 0$ often does not have a unique root (this is the case for all the distributions listed above) and the first root may not even be isolated, standard approximation methods may fail asymptotically. Moreover, the requisite moment conditions may greatly restrict the applicability of such procedures. Chambers

0167-7152/86/$3.50 © 1986, Elsevier Science Publishers B.V. (North-Holland)



and Heathcote (1981) noted that
$$U_n(t) \geq 1 - t^2 m_2 / 2, \eqno(1.1)$$
where $m_2 = n^{-1}\sum_{j=1}^{n} X_j^2$, and derived the lower bound $(2/m_2)^{1/2}$ for $A_n$. This bound is very conservative and asymptotically requires $\int x^2\, dF(x) < \infty$.

In the next section, we propose a simple, safe non-parametric method of estimating $a_0$ by approximating $A_n$ which requires only a fractional moment condition on $F$.

In Figure 1, $u(t) = \exp(-t^2/2)$ is superimposed on a plot of $U_n(t)$ based on 1000 simulated standard normal variables. Although $u(t) > 0$ for $|t| < \infty$, $U_n(t)$ has three zeroes on $[0, 10]$. For each $n < \infty$, $U_n(t)$ is an almost periodic function and hence has infinitely many zeroes. It follows that any procedure based on the empirical characteristic function at a point beyond the first zero is unreliable, and this holds for any $n < \infty$, whether the point is fixed or random. Thus $A_n$ should always be determined, and the result below shows that our estimator estimates $A_n$ (finite or not) for fixed $n$ even when $a_0 = \infty$.

Fig. 1. Graph of the real part of the empirical characteristic function of 1000 normal random variables with the real part of the normal characteristic function superimposed.

2. The estimation procedure

Recall that $U_n(t)$ is uniformly continuous and $U_n(0) = 1$, so that $A_n > 0$. Let $s \in [0, A_n)$. Then, for any $t \in (s, A_n)$,
$$|U_n(t) - U_n(s)| = n^{-1}\Big|\sum_{j=1}^{n} \{\cos(tX_j) - \cos(sX_j)\}\Big| \leq 2n^{-1}\sum_{j=1}^{n} |\sin\{(t-s)X_j/2\}| \leq 2^{1-\alpha}|t-s|^{\alpha} m_{\alpha},$$
where $m_{\alpha} = n^{-1}\sum_{j=1}^{n} |X_j|^{\alpha}$, $0 < \alpha \leq 1$. Thus, for $t \in (s, A_n)$, $U_n(t) \geq U_n(s) - 2^{1-\alpha}|t-s|^{\alpha} m_{\alpha}$. The right hand side is an approximation of $U_n(t)$ on $(s, A_n)$ and its zero is an approximation of $A_n$. Hence, put $T_{n,0} = 0$ and
$$T_{n,k+1} = T_{n,k} + \left\{ \frac{U_n(T_{n,k})}{2^{1-\alpha} m_{\alpha}} \right\}^{1/\alpha}, \qquad k = 0, 1, 2, \ldots.$$
The properties of $\{T_{n,k}\}$ are summarized in the following Theorem.

Theorem. (i) For each fixed $n < \infty$, $T_{n,k} \uparrow A_n$ almost surely as $k \to \infty$.
(ii) If $u(t) < 0$ for some $|t| < \infty$ and
$$\int |x|^{\alpha}\, dF(x) < \infty$$
for some $0 < \alpha \leq 1$, then for $N$ sufficiently large,
$$\sup_{n \geq N} |T_{n,k} - A_n| \to 0$$
almost surely as $k \to \infty$. Moreover, $T_{n,k} \to a_0$ almost surely as $n, k \to \infty$.

Proof. (i) Notice that $\{T_{n,k}\}$ is a monotone increasing sequence which is bounded by $A_n$. Let $B_n$ be any number in $(0, A_n)$. Then set
$$\Delta_n = \inf\left\{ \left( \frac{U_n(t)}{2^{1-\alpha} m_{\alpha}} \right)^{1/\alpha} : 0 \leq t \leq B_n \right\}.$$

By the definition of $\Delta_n$, $\Delta_n > 0$ and $T_{n,k} > B_n$ for $k \geq [B_n/\Delta_n] + 1$, where $[x]$ denotes the integer part of $x$.

(ii) Let $\epsilon > 0$ be given. Choose $\delta > 0$ such that
$$\inf\{t \geq 0 : u(t) - \delta = 0\} > a_0 - \epsilon/2$$
and
$$\inf\{t \geq 0 : u(t) + \delta = 0\} < a_0 + \epsilon/2.$$
Then there exists an $N_1$ such that, with probability arbitrarily close to one, for all $n \geq N_1$,
$$u(t) - \delta \leq U_n(t) \leq u(t) + \delta \quad \text{uniformly in } t \in \mathbb{R}.$$
Hence, with probability arbitrarily close to one, for all $n \geq N_1$,
$$a_0 - \epsilon/2 \leq A_n \leq a_0 + \epsilon/2. \eqno(2.1)$$
Also, by the strong law of large numbers, there exists an $N_2$ such that, with probability arbitrarily close to one, for all $n \geq N_2$, $m_{\alpha} \leq \mu_{\alpha} + \delta$, where $\mu_{\alpha} = \int |x|^{\alpha}\, dF(x)$. Put $N = \max\{N_1, N_2\}$ and
$$\delta_N = \inf\left\{ \left( \frac{u(t) - \delta}{2^{1-\alpha}(\mu_{\alpha} + \delta)} \right)^{1/\alpha} : 0 \leq t \leq a_0 - \epsilon/2 \right\}.$$
But $\delta_N > 0$ by the choice of $\delta$, so that with probability arbitrarily close to one, for all $k \geq [(a_0 - \epsilon/2)/\delta_N] + 1$,
$$|T_{n,k} - A_n| < \epsilon. \eqno(2.2)$$
The second part of (ii) follows from (2.1) and (2.2). □

In practice, when $n$ is fixed, part (i) of the theorem is the important result and $\alpha = 1$ simplifies calculation; to use part (ii) of the theorem, it may be necessary to use $\alpha < 1$. If $n$ is finite, the effective initial value is $T_{n,1} = (n^{-1}\sum_{j=1}^{n} |X_j|)^{-1}$.

Other alternatives for $T_{n,0} = 0$ include $T_{n,0}^{(1)} = (2/m_2)^{1/2}$ and $T_{n,0}^{(2)} = 1/(2\max_j |X_j|)$. For the data represented in Figure 1, $T_{n,0}^{(2)}$ is of order 0.5 and $T_{n,0}^{(1)} = 1.422$; after one iteration, $T_{n,1} = 1.831$, which is an improvement. At multiples of 50 iterations from $T_{n,1}$, the sequence is 1.831, 3.653, 4.188, 4.421, 4.504, 4.531, 4.541, ..., so that after 300 iterations the estimate of the first zero is 4.541, at which point $U_n(T_{n,300}) = 6.2 \times 10^{-5}$.
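The iteration described above is straightforward to implement. The following Python sketch (not from the original paper; the function names are mine) uses $T_{n,0} = 0$ and $\alpha = 1$, so each step adds $U_n(T_{n,k})/m_1$, and applies the estimator to 1000 simulated standard normal variables in the spirit of Figure 1; the particular sample, and hence the estimate, will differ from the paper's.

```python
import numpy as np

def ecf_real(t, x):
    """Real part U_n(t) of the empirical characteristic function of the sample x."""
    return np.mean(np.cos(t * x))

def first_zero_estimate(x, alpha=1.0, iterations=300):
    """Approximate A_n = inf{t >= 0 : U_n(t) = 0} by iterating
    T_{n,k+1} = T_{n,k} + {U_n(T_{n,k}) / (2^(1-alpha) m_alpha)}^(1/alpha),
    starting from T_{n,0} = 0.  The Hoelder bound in Section 2 guarantees
    the iterates increase monotonically without overshooting A_n."""
    m_alpha = np.mean(np.abs(x) ** alpha)
    scale = 2.0 ** (1.0 - alpha) * m_alpha
    t = 0.0
    for _ in range(iterations):
        u = ecf_real(t, x)
        if u <= 0.0:  # numerical safety: in exact arithmetic u stays >= 0
            break
        t += (u / scale) ** (1.0 / alpha)
    return t

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)  # simulated data, as in Figure 1
t_hat = first_zero_estimate(x)
print(f"estimated first zero: {t_hat:.3f}, U_n there: {ecf_real(t_hat, x):.2e}")
```

Because the step never carries the iterate past $A_n$, $U_n$ evaluated at successive iterates remains non-negative and shrinks toward zero, which is why the safety break above should not trigger in practice.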

Acknowledgement

I am grateful to Stephen M. Stigler, the Editor and an anonymous referee for helpful comments which improved this paper. The referee also pointed out an error in the statement of the theorem.

References

[1] Chambers, R.L. and C.R. Heathcote (1981), On the estimation of slope and the identification of outliers in linear regression, Biometrika 68, 21-33.
[2] Csörgő, S. (1981), Limit behavior of the empirical characteristic function, Ann. Probab. 9, 130-144.
[3] Csörgő, S. (1983), The theory of functional least squares, J. Austral. Math. Soc. Ser. A 34, 336-355.
[4] Csörgő, S. (1985a), Testing for independence by the empirical characteristic function, J. Multivar. Anal., to appear.
[5] Csörgő, S. (1985b), Testing for stability, in: K. Sarkadi, ed., Colloquia Math. Soc. Bolyai, Goodness of Fit (North-Holland, Amsterdam), to appear.
[6] Csörgő, S. (1985c), Testing for linearity, Statist. Probab. Letters 3, 45-50.
[7] Csörgő, S. and C.R. Heathcote (1982), Some results concerning symmetric distributions, Bull. Austral. Math. Soc. 25, 327-335.
[8] Feuerverger, A. and R.A. Mureika (1977), The empirical characteristic function and its applications, Ann. Statist. 5, 88-97.
[9] Heathcote, C.R. and A.H. Welsh (1983), The robust estimation of autoregressive processes by functional least squares, J. Appl. Probab. 20, 737-753.
[10] Marcus, M.B. (1981), Weak convergence of the empirical characteristic function, Ann. Probab. 9, 194-201.
[11] Welsh, A.H. (1984), A note on scale estimates based on the empirical characteristic function and their application to test for normality, Statist. Probab. Letters 2, 345-348.
[12] Welsh, A.H. and D.F. Nicholls (1986), Robust estimation of regression models with dependent regressors: The functional least squares approach, Econometric Theory, to appear.