A class of autoregressive processes

Statistics and Probability Letters 78 (2008) 1355–1361
www.elsevier.com/locate/stapro

S.D. Krishnarani*, K. Jayakumar

Department of Statistics, University of Calicut, Kerala 673635, India

Received 30 June 2007; accepted 11 December 2007; available online 31 December 2007

Abstract

A new class of first-order autoregressive minification processes, which can generate a first-order autoregressive process with any given marginal distribution, is introduced. A necessary and sufficient condition for this process to be stationary is given. Some properties of the process are studied, and a second-order extension of the process is also discussed.
© 2008 Elsevier B.V. All rights reserved.

1. Introduction

A time series model that is not Gaussian is called a non-Gaussian time series model. The need for non-Gaussian autoregressive models has long been felt, since many naturally occurring time series are clearly non-Gaussian with a Markovian correlation structure. Tavares (1980) introduced an autoregressive minification process
$$X_n = \begin{cases} X_0 & n = 0,\\ k \min(X_{n-1},\, \varepsilon_n) & n \ge 1, \end{cases} \tag{1.1}$$
where k > 1 is a constant and {ε_n} is an innovation process of independent and identically distributed (i.i.d.) random variables such that {X_n} is a stationary Markov process with marginal distribution F_{X_0}(x). He considered the particular case where {ε_n, n = 1, 2, ...} is a sequence of i.i.d. exponential random variables with mean λ/(k − 1) and X_0 is exponential with mean λ. This model generates a first-order autoregressive exponential process with mean λ, and it is useful in hydrological applications. Lewis and McKenzie (1991) discussed minification processes and their transformations. They gave a necessary and sufficient condition for {X_n} in (1.1) to be a stationary process, and they showed that many of the important features of the process are invariant under monotone transformations.

In this paper, we develop a class of autoregressive models with the capability to define autoregressive models with any given distribution as marginal. In Section 2, we introduce an autoregressive process and give a necessary and sufficient condition for the process to be stationary. In Section 3, we define a general stationary Markov process with innovation and give examples. Some properties of this process are also studied, and a second-order extension of the process is given.

* Corresponding author.
E-mail addresses: krishna [email protected] (S.D. Krishnarani), [email protected] (K. Jayakumar).
0167-7152/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2007.12.019


2. A general class of autoregressive processes

Let F(·) be a non-degenerate distribution function with F(−∞) = 0 and F(∞) = 1. Consider the monotone transformation
$$\varphi(x) = \log \frac{F(x)}{\bar{F}(x)}, \tag{2.1}$$
where φ(−∞) = −∞, φ(∞) = ∞ and F̄(x) = 1 − F(x). Define the general Markov process as
$$X_n = \begin{cases} \varphi^{-1}[\varphi(X_{n-1}) - \log p] & \text{with probability } p,\\ \min[\varphi^{-1}(\varphi(X_{n-1}) - \log p),\, \varepsilon_n] & \text{with probability } 1 - p, \end{cases} \tag{2.2}$$

where {ε_n} is a sequence of i.i.d. random variables, ε_n is independent of X_i (i < n), and 0 < p < 1.

Theorem 2.1. Let X_0 have distribution function F. The process {X_n} in (2.2) is a strictly stationary Markov process if and only if the ε_n are i.i.d. with distribution function F.

Proof. Let the ε_n be i.i.d. random variables with distribution function F and let X_0 have distribution function F. Expressing (2.2) in terms of survival functions, we have
$$\bar{F}_{X_n}(x) = p\,\bar{F}_{X_{n-1}}[\varphi^{-1}(\varphi(x) + \log p)] + (1 - p)\,\bar{F}_{X_{n-1}}[\varphi^{-1}(\varphi(x) + \log p)]\,\bar{F}_{\varepsilon_n}(x). \tag{2.3}$$
When n = 1, this becomes
$$\bar{F}_{X_1}(x) = p\,\bar{F}_{X_0}[\varphi^{-1}(\varphi(x) + \log p)] + (1 - p)\,\bar{F}_{X_0}[\varphi^{-1}(\varphi(x) + \log p)]\,\bar{F}_{\varepsilon_1}(x).$$
Since φ(x) = log F(x)/F̄(x), we have F̄(x) = 1/(1 + e^{φ(x)}), and in particular F̄[φ^{-1}(φ(x) + log p)] = 1/(1 + p e^{φ(x)}). Using this in (2.3), we get
$$\bar{F}_{X_1}(x) = \frac{1}{1 + p e^{\varphi(x)}}\left[p + \frac{1 - p}{1 + e^{\varphi(x)}}\right] = \frac{1}{1 + e^{\varphi(x)}} = \bar{F}_{X_0}(x),$$
and hence X_1 =d X_0 (equality in distribution). Suppose that X_{n−1} has distribution function F. Then, in the same way, X_n =d X_{n−1} =d X_0.

Hence it follows by induction that the process (2.2) is stationary.

Conversely, suppose that {X_n} is stationary. Then
$$\bar{F}_X(x) = \bar{F}_X[\varphi^{-1}(\varphi(x) + \log p)]\,[p + (1 - p)P(\varepsilon > x)].$$
Since X_0 has distribution function F, we have
$$p + (1 - p)\bar{F}_\varepsilon(x) = \frac{\bar{F}(x)}{\bar{F}[\varphi^{-1}(\varphi(x) + \log p)]} = \frac{1/(1 + e^{\varphi(x)})}{1/(1 + p e^{\varphi(x)})} = \frac{1 + p e^{\varphi(x)}}{1 + e^{\varphi(x)}}.$$
That is, F̄_ε(x) = 1/(1 + e^{φ(x)}) = F̄(x), and hence the proof. □
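The characterization in Theorem 2.1 gives a direct recipe for simulating (2.2). The following is a minimal sketch (ours, not from the paper), assuming F is specified through φ, its inverse, and an inverse-transform sampler for the innovations; all function names are illustrative.

```python
# Minimal simulation sketch of the general process (2.2); names are ours.
import math
import random

def simulate(phi, phi_inv, draw_F, x0, p, n_steps):
    """Generate X_0, X_1, ..., X_{n_steps} from the recursion (2.2)."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        shifted = phi_inv(phi(x) - math.log(p))    # phi^{-1}[phi(X_{n-1}) - log p]
        if random.random() < p:
            x = shifted
        else:
            x = min(shifted, draw_F())             # minification branch
        path.append(x)
    return path

# Instance: standard logistic F, for which phi(x) = x and phi^{-1}(y) = y.
def logistic_draw():
    u = random.random()
    return math.log(u / (1.0 - u))                 # inverse-transform sample from F

path = simulate(lambda x: x, lambda y: y, logistic_draw, x0=0.0, p=0.5, n_steps=1000)
```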



Theorem 2.2. If the ε_n are i.i.d. with distribution function F and X_0 is arbitrary, then {X_n} in (2.2) converges in distribution to Z, where Z has the distribution function F.

Proof. Suppose that the ε_n are i.i.d. with distribution function F. Then from (2.2),
$$\bar{F}_{X_1}(x) = \bar{F}_{X_0}\left[\varphi^{-1}(\varphi(x) + \log p)\right]\left[p + \frac{1 - p}{1 + e^{\varphi(x)}}\right] = \bar{F}_{X_0}\left[\varphi^{-1}(\varphi(x) + \log p)\right]\frac{1 + p e^{\varphi(x)}}{1 + e^{\varphi(x)}}. \tag{2.4}$$
Similarly, when n = 2,
$$\bar{F}_{X_2}(x) = \bar{F}_{X_1}[\varphi^{-1}(\varphi(x) + \log p)]\left[p + (1 - p)\bar{F}_\varepsilon(x)\right] = \bar{F}_{X_1}\left[\varphi^{-1}(\varphi(x) + \log p)\right]\frac{1 + p e^{\varphi(x)}}{1 + e^{\varphi(x)}}. \tag{2.5}$$
Substituting (2.4) in (2.5), we get
$$\bar{F}_{X_2}(x) = \bar{F}_{X_0}\left[\varphi^{-1}(\varphi(x) + \log p^2)\right]\frac{1 + p^2 e^{\varphi(x)}}{1 + e^{\varphi(x)}}.$$
Suppose that
$$\bar{F}_{X_n}(x) = \bar{F}_{X_0}\left[\varphi^{-1}(\varphi(x) + \log p^n)\right]\frac{1 + p^n e^{\varphi(x)}}{1 + e^{\varphi(x)}}.$$
Then it can be proved that
$$\bar{F}_{X_{n+1}}(x) = \bar{F}_{X_0}\left[\varphi^{-1}(\varphi(x) + \log p^{n+1})\right]\frac{1 + p^{n+1} e^{\varphi(x)}}{1 + e^{\varphi(x)}},$$
so the displayed expression for F̄_{X_n}(x) holds for all n. As n → ∞, p^n → 0; hence φ^{-1}(φ(x) + log p^n) → −∞, F̄_{X_0}[φ^{-1}(φ(x) + log p^n)] → 1, and F̄_{X_n}(x) → 1/(1 + e^{φ(x)}) = F̄(x). Hence X_n →d Z, where Z has distribution function F. □
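A rough empirical illustration of Theorem 2.2 (our own experiment, not part of the paper): start the chain far from stationarity in the standard logistic case, where φ(x) = x, and compare the empirical survival function of X_n with the limit F̄(x) = 1/(1 + e^x).

```python
# Empirical check that the chain forgets an arbitrary X_0 (standard logistic case).
import math
import random

def logistic_chain_endpoint(x0, p, n_steps):
    x = x0
    for _ in range(n_steps):
        shifted = x - math.log(p)                      # phi^{-1}[phi(x) - log p] = x - log p
        if random.random() < p:
            x = shifted
        else:
            u = random.random()
            x = min(shifted, math.log(u / (1.0 - u)))  # logistic innovation
    return x

samples = [logistic_chain_endpoint(x0=25.0, p=0.5, n_steps=200) for _ in range(5000)]
for x in (-2.0, 0.0, 2.0):
    emp = sum(s > x for s in samples) / len(samples)
    print(f"x={x:+.1f}  empirical={emp:.3f}  limit={1.0 / (1.0 + math.exp(x)):.3f}")
```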



Theorem 2.3. Suppose that the ε_n have distribution function H. A necessary and sufficient condition for the first-order autoregressive process {X_n} in (2.2) to be stationary is
$$(1 - p)H(x) = P\left[X_0 \le x \mid X_0 > \varphi^{-1}(\varphi(x) + \log p)\right]. \tag{2.6}$$

Proof. Let the ε_n have distribution function H and suppose that {X_n} in (2.2) is stationary. Let G_i be the distribution function of X_i, i = 0, 1, 2, .... We have X_n =d X_0 for all n. From (2.2),
$$\bar{G}_1(x) = \bar{G}_0[\varphi^{-1}(\varphi(x) + \log p)]\,[p + (1 - p)\bar{H}(x)].$$
If X_1 =d X_0, then the above implies that
$$(1 - p)\bar{H}(x) = \frac{\bar{G}_0(x) - p\,\bar{G}_0[\varphi^{-1}(\varphi(x) + \log p)]}{\bar{G}_0[\varphi^{-1}(\varphi(x) + \log p)]}.$$
That is,
$$(1 - p)H(x) = \frac{\bar{G}_0[\varphi^{-1}(\varphi(x) + \log p)] - \bar{G}_0(x)}{\bar{G}_0[\varphi^{-1}(\varphi(x) + \log p)]} = P\left[X_0 \le x \mid X_0 > \varphi^{-1}(\varphi(x) + \log p)\right].$$
Conversely, suppose that (1 − p)H(x) = P[X_0 ≤ x | X_0 > φ^{-1}(φ(x) + log p)]. Writing y = φ^{-1}[φ(x) + log p], this means that
$$(1 - p)H(x) = \frac{\bar{G}_0(y) - \bar{G}_0(x)}{\bar{G}_0(y)}, \qquad\text{and thus}\qquad (1 - p)\bar{H}(x) = \frac{\bar{G}_0(x) - p\,\bar{G}_0(y)}{\bar{G}_0(y)}.$$
From (2.2), Ḡ_n(x) = Ḡ_{n−1}[φ^{-1}(φ(x) + log p)][p + (1 − p)H̄(x)]. For n = 1, Ḡ_1(x) = Ḡ_0(y)[p + (1 − p)H̄(x)]; substituting (1 − p)H̄(x) = [Ḡ_0(x) − p Ḡ_0(y)]/Ḡ_0(y), we get Ḡ_1(x) = Ḡ_0(x), that is, X_1 =d X_0. Suppose that X_n =d X_0; then Ḡ_{n+1}(x) = Ḡ_0(x). Thus X_n =d X_0 implies X_{n+1} =d X_0, and hence X_n =d X_0 for all n; the process is stationary. This completes the proof. □
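As a numeric sanity check of condition (2.6) (ours, not from the paper), take the standard logistic case with H = F, which Theorem 2.1 already identifies as stationary; both sides of (2.6) should then agree.

```python
# Check (2.6) in the standard logistic case: phi(x) = x, phi^{-1}(y) = y, H = F.
import math

def F(x):                          # standard logistic distribution function
    return 1.0 / (1.0 + math.exp(-x))

p = 0.3
for x in (-1.5, 0.0, 2.0):
    y = x + math.log(p)            # phi^{-1}(phi(x) + log p)
    lhs = (1.0 - p) * F(x)         # (1 - p) H(x) with H = F
    rhs = (F(x) - F(y)) / (1.0 - F(y))   # P[X_0 <= x | X_0 > y]
    print(f"x={x:+.1f}  lhs={lhs:.6f}  rhs={rhs:.6f}")
```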



3. The stationary Markov process with innovation

Definition 3.1. Let φ(x) = log F(x)/F̄(x). We say that {X_n} is a general stationary Markov process with innovation if
$$X_n = \begin{cases} \varphi^{-1}[\varphi(X_{n-1}) - \log p] & \text{with probability } p,\\ \min[\varphi^{-1}(\varphi(X_{n-1}) - \log p),\, \varepsilon_n] & \text{with probability } 1 - p, \end{cases} \tag{3.1}$$


where {ε_n} is a sequence of i.i.d. random variables with common distribution function F, ε_n is independent of X_i, i = 0, 1, 2, ..., n − 1, X_0 has distribution function F, and 0 < p < 1.

Now we study some properties of the process given in (3.1). The joint survival function of (X_n, X_{n+1}) is
$$P(X_n > x_n,\, X_{n+1} > x_{n+1}) = \begin{cases} \dfrac{1}{1 + e^{\varphi(x_{n+1})}} & 0 \le x_n \le \varphi^{-1}[\varphi(x_{n+1}) + \log p],\\[1ex] \dfrac{1 + p e^{\varphi(x_{n+1})}}{(1 + e^{\varphi(x_n)})(1 + e^{\varphi(x_{n+1})})} & 0 < \varphi^{-1}[\varphi(x_{n+1}) + \log p] < x_n < \infty. \end{cases} \tag{3.2}$$
It can easily be seen that the process {X_n} is not time reversible. Also, it can be seen that P(X_{n+1} > X_n) = (p + 1)/2; thus, as p increases, more up-runs can be observed in the process {X_n}.

The distribution of extremes of the process {X_n} in (3.1) is given below. Let N be a geometric random variable with probability mass function P(N = n) = p(1 − p)^{n−1}, n = 1, 2, .... Assuming that N is independent of the X_i (i = 1, 2, ...), we define the geometric minimum (maximum) as T_N = Min(X_1, ..., X_N) (M_N = Max(X_1, ..., X_N)). The distributions of the geometric minimum and maximum are
$$\bar{F}_{T_N}(x) = P[\operatorname{Min}(X_1, \ldots, X_N) > x] = \frac{1}{1 + \frac{1}{p} e^{\varphi(x)}}, \tag{3.3}$$
$$F_{M_N}(x) = P[\operatorname{Max}(X_1, \ldots, X_N) < x] = \frac{e^{\varphi(x)}}{\frac{1}{p} + e^{\varphi(x)}}. \tag{3.4}$$
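A Monte Carlo cross-check of (3.3) and (3.4) (our own sketch, not from the paper) in the standard logistic case, under the simplifying assumption that X_1, ..., X_N are treated as independent draws from F; N is geometric as above.

```python
# Monte Carlo check of (3.3)-(3.4), standard logistic F, treating X_i as i.i.d.
# (a simplifying assumption of this sketch, not a claim about the chain itself).
import math
import random

def logistic_draw():
    u = random.random()
    return math.log(u / (1.0 - u))

p, x, trials = 0.4, 0.7, 100_000
hits_min = hits_max = 0
for _ in range(trials):
    n = 1
    while random.random() >= p:           # N geometric: P(N = n) = p (1-p)^(n-1)
        n += 1
    xs = [logistic_draw() for _ in range(n)]
    hits_min += min(xs) > x
    hits_max += max(xs) < x

t = math.exp(x)                           # e^{phi(x)} = e^x for the logistic
print("T_N:", hits_min / trials, "vs", 1.0 / (1.0 + t / p))      # formula (3.3)
print("M_N:", hits_max / trials, "vs", t / (1.0 / p + t))        # formula (3.4)
```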

Some examples of {X_n} in (3.1) are given below.

Example 3.1. Let F(x) = 1 − 1/(1 + e^{αx}), −∞ < x < ∞. Then
$$X_n = \begin{cases} X_{n-1} - \frac{1}{\alpha}\log p & \text{with probability } p,\\ \operatorname{Min}\left(X_{n-1} - \frac{1}{\alpha}\log p,\, \varepsilon_n\right) & \text{with probability } 1 - p, \end{cases} \tag{3.5}$$
where {ε_n} is a sequence of i.i.d. logistic random variables with distribution function F. Then {X_n} defines a first-order autoregressive logistic process. Note that if we take the ε_n to be i.i.d. semi-logistic random variables, then (3.5) defines a stationary AR(1) process with a semi-logistic marginal, provided X_0 =d ε_1. Note that (3.5) is the logistic process studied in Arnold and Robertson (1989).

Example 3.2. Let F̄(x) = 1/(1 + x^α), x > 0, α > 0. Then (3.1) becomes
$$X_n = \begin{cases} p^{-1/\alpha} X_{n-1} & \text{with probability } p,\\ \operatorname{Min}\left(p^{-1/\alpha} X_{n-1},\, \varepsilon_n\right) & \text{with probability } 1 - p, \end{cases} \tag{3.6}$$
where {ε_n} is a sequence of i.i.d. Pareto III random variables with distribution function F. Note that (3.6) is the Pareto process studied in Yeh et al. (1988).
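A short simulation sketch of the Pareto process (3.6) (our own illustration; the scaling p^{−1/α} and the inverse-transform sampler follow directly from F̄(x) = 1/(1 + x^α)).

```python
# Simulation sketch of the Pareto III process (3.6).
import random

def pareto3_draw(alpha):
    u = random.random()
    return (u / (1.0 - u)) ** (1.0 / alpha)    # F^{-1}(u) for F(x) = 1 - 1/(1 + x^alpha)

def pareto_process(x0, p, alpha, n_steps):
    x, path = x0, [x0]
    scale = p ** (-1.0 / alpha)                # the p^{-1/alpha} factor in (3.6)
    for _ in range(n_steps):
        shifted = scale * x
        x = shifted if random.random() < p else min(shifted, pareto3_draw(alpha))
        path.append(x)
    return path

path = pareto_process(x0=1.0, p=0.5, alpha=2.0, n_steps=1000)
```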


Example 3.3. When F(x) = 1 − e^{−x}, x > 0, the process (3.1) takes the form
$$X_n = \begin{cases} \log\left[1 + \frac{1}{p}\left(e^{X_{n-1}} - 1\right)\right] & \text{with probability } p,\\ \operatorname{Min}\left(\log\left[1 + \frac{1}{p}\left(e^{X_{n-1}} - 1\right)\right],\, \varepsilon_n\right) & \text{with probability } 1 - p, \end{cases} \tag{3.7}$$
where {ε_n} is a sequence of i.i.d. exponential random variables with distribution function F(x) = 1 − e^{−x}, x > 0. That is, the model (3.1) defines a stationary exponential process when X_0 =d ε_1, where ε_1 has the exponential distribution.

Example 3.4. When f(x) = ½ e^{−|x|}, −∞ < x < ∞, the process (3.1) becomes
$$X_n = \begin{cases} \log\left[\frac{p - 1}{2p} + \frac{e^{|X_{n-1}|}}{p}\right] & \text{with probability } p,\\ \operatorname{Min}\left(\log\left[\frac{p - 1}{2p} + \frac{e^{|X_{n-1}|}}{p}\right],\, \varepsilon_n\right) & \text{with probability } 1 - p, \end{cases} \tag{3.8}$$
where {ε_n} is a sequence of i.i.d. Laplace random variables.

Example 3.5. When the X_n have the uniform distribution,
$$f(x) = \begin{cases} 1 & \text{if } 0 < x < 1,\\ 0 & \text{otherwise}, \end{cases}$$
then (3.1) becomes
$$X_n = \begin{cases} \dfrac{X_{n-1}}{p + (1 - p)X_{n-1}} & \text{with probability } p,\\[1ex] \operatorname{Min}\left(\dfrac{X_{n-1}}{p + (1 - p)X_{n-1}},\, \varepsilon_n\right) & \text{with probability } 1 - p, \end{cases} \tag{3.9}$$

where {ε_n} is a sequence of i.i.d. uniform (0, 1) random variables. The sample path behaviour of the uniform autoregressive process (3.9) is presented in Fig. 3.1; a simulation sketch that generates such paths is given below.

[Fig. 3.1. Sample path behaviour of the uniform autoregressive process; panels (a)–(d) correspond to p = 0.2, 0.4, 0.6 and 0.8.]
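Since the original figure is not reproduced here, the following sketch (ours) generates comparable sample paths of the uniform process (3.9) for the four values of p used in Fig. 3.1.

```python
# Sample paths of the uniform autoregressive process (3.9).
import random

def uniform_process(p, n_steps, x0=0.5):
    x, path = x0, [x0]
    for _ in range(n_steps):
        shifted = x / (p + (1.0 - p) * x)      # phi^{-1}[phi(X_{n-1}) - log p]
        x = shifted if random.random() < p else min(shifted, random.random())
        path.append(x)
    return path

for p in (0.2, 0.4, 0.6, 0.8):                 # the four panels of Fig. 3.1
    print(p, ["%.3f" % v for v in uniform_process(p, 10)])
```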

Now we develop an AR(2) process as a generalization of the model (3.1). Let φ(x) be as in (2.1). Define
$$X_n = \begin{cases} \varphi^{-1}(\varphi(X_{n-1}) - \log p_1) & \text{with probability } p_1,\\ \operatorname{Min}(\varphi^{-1}(\varphi(X_{n-1}) - \log p_1),\, \varepsilon_n) & \text{with probability } p_2,\\ \operatorname{Min}(\varphi^{-1}(\varphi(X_{n-2}) - \log p_1),\, \varepsilon_n) & \text{with probability } p_3, \end{cases} \tag{3.10}$$
where {ε_n} is a sequence of i.i.d. random variables.

Theorem 3.1. Let X_0 have distribution function F. The process {X_n} in (3.10) is stationary if and only if ε_n =d X_1 =d X_0.

Proof. In terms of survival functions, (3.10) is
$$\bar{F}_{X_n}(x) = \bar{F}_{X_{n-1}}[\varphi^{-1}(\varphi(x) + \log p_1)]\,[p_1 + p_2 \bar{F}_\varepsilon(x)] + \bar{F}_{X_{n-2}}[\varphi^{-1}(\varphi(x) + \log p_1)]\,p_3 \bar{F}_\varepsilon(x). \tag{3.11}$$
If the X_n are stationary, then
$$p_1 + (1 - p_1)\bar{F}_\varepsilon(x) = \frac{1 + p_1 e^{\varphi(x)}}{1 + e^{\varphi(x)}},$$
and hence F̄_ε(x) = 1/(1 + e^{φ(x)}).



Conversely, let X_0 =d X_1 =d ε_n, where the ε_n are i.i.d. with distribution function F. When n = 2, (3.11) becomes
$$\bar{F}_{X_2}(x) = \bar{F}_{X_0}[\varphi^{-1}(\varphi(x) + \log p_1)]\,[p_1 + p_2 \bar{F}_\varepsilon(x) + p_3 \bar{F}_\varepsilon(x)] = \frac{1}{1 + p_1 e^{\varphi(x)}}\left[p_1 + \frac{1 - p_1}{1 + e^{\varphi(x)}}\right] = \frac{1}{1 + e^{\varphi(x)}}.$$
That is, X_2 =d X_0. If we assume that X_{n−2} =d X_{n−1} =d X_0, then we can prove in the same way that X_n =d X_0. Hence the process {X_n} is stationary. □

Note that one may be able to define different extensions of the AR(1) process {X_n} in (3.1); the process defined in (3.10) is one such extension. As an illustration of this AR(2) process, we now define a second-order logistic process.


In this case, the process (3.10) takes the form
$$X_n = \begin{cases} X_{n-1} - \frac{1}{\alpha}\log p_1 & \text{with probability } p_1,\\ \operatorname{Min}\left(X_{n-1} - \frac{1}{\alpha}\log p_1,\, \varepsilon_n\right) & \text{with probability } p_2,\\ \operatorname{Min}\left(X_{n-2} - \frac{1}{\alpha}\log p_1,\, \varepsilon_n\right) & \text{with probability } p_3, \end{cases} \tag{3.12}$$
where 0 < p_i < 1, i = 1, 2, 3, p_1 + p_2 + p_3 = 1, {ε_n} is a sequence of i.i.d. logistic random variables with survival function
$$\bar{F}_\varepsilon(x) = \frac{1}{1 + e^{\alpha x}}, \qquad \alpha > 0,\ -\infty < x < \infty,$$
and X_1, X_0 =d ε_1.

References

Arnold, B.C., Robertson, C.A., 1989. Autoregressive logistic processes. J. Appl. Probab. 26, 524–531.
Lewis, P.A.W., McKenzie, E.D., 1991. Minification processes and their transformations. J. Appl. Probab. 28, 45–57.
Tavares, L.V., 1980. An exponential Markovian stationary process. J. Appl. Probab. 17, 1117–1120.
Yeh, H.C., Arnold, B.C., Robertson, C.A., 1988. Pareto processes. J. Appl. Probab. 25, 291–301.