On general periodic time-varying bilinear processes

Abdelouahab Bibi ∗, Ines Lescheb
Département de Mathématiques, Université Mentouri Constantine, Algeria

Economics Letters 114 (2012) 353–357

Article history: Received 11 October 2011; Accepted 15 November 2011; Available online 25 November 2011
JEL classification: C22; C19; C13
Keywords: Periodic models; Bilinear processes; Periodic bilinear processes

Abstract. We study in this note the class of bilinear processes with periodic time-varying coefficients. We give necessary and sufficient conditions ensuring the existence of strict and second order stationary solutions (in the periodic sense) and the existence of higher order moments. The given conditions can be applied to periodic ARMA or periodic GARCH models. The central limit theorem and the law of the iterated logarithm (LIL) for higher order sample moments are established.

© 2011 Elsevier B.V. All rights reserved.

∗ Corresponding author. E-mail addresses: [email protected] (A. Bibi), [email protected] (I. Lescheb).
doi:10.1016/j.econlet.2011.11.013

1. The model and its Markovian representation

Bilinear time series models, introduced by Granger and Andersen (1978), are certainly among the most popular nonlinear models capable of successfully modelling non-Gaussian data. The rapid development of these models has rested on the assumption of time-invariant parameters. In many applications, this assumption may not be appropriate when the series to be modelled exhibits structural changes and non-stationary behaviour. One possible way forward is to consider a bilinear model whose coefficients vary with time according to some specific structure. Periodic or almost periodic coefficients are perhaps the most popular structures used for time series exhibiting structural changes in regime at known dates. In this paper, we consider a general bilinear model (X_t)_{t∈Z}, Z := {0, ±1, ±2, …}, defined on some probability space (Ω, ℑ, P), which exhibits periodically time-varying coefficients, i.e.,

X_t = Σ_{i=1}^{p} a_i(t) X_{t−i} + Σ_{j=0}^{q} b_j(t) e_{t−j} + Σ_{i=1}^{P} Σ_{j=1}^{Q} c_{ij}(t) X_{t−i} e_{t−j}   (1.1)

denoted by PBL(p, q, P, Q), where (e_t)_{t∈Z} is a sequence of independent and identically distributed (i.i.d.) random variables defined on the same probability space (Ω, ℑ, P) with E{log⁺ |e_t|} < +∞, where log⁺(x) := max{log(x), 0}, x > 0, and e_l is independent of X_k, k < l. The parameters (a_i(t))_{1≤i≤p}, (b_i(t))_{0≤i≤q} and (c_{ij}(t))_{1≤i≤P, 1≤j≤Q} are periodic in t with period s, i.e., a_i(t) = a_i(t + ks), b_i(t) = b_i(t + ks) and c_{ij}(t) = c_{ij}(t + ks) for any (t, k) ∈ Z × Z. Of course, the model (1.1) includes some popular models, namely periodic ARMA (PARMA) models and a large class of periodic GARCH models (see Kristensen, 2009 and Bibi and Lessak, 2009 for further discussion). These models, which are globally non-stationary but stationary within each season, have become an appealing tool for investigating both non-Gaussianity and distinct ''seasonal'' patterns appearing, for instance, in hydrology, climatology and econometrics, and they continue to attract growing interest from researchers.

By introducing the state r = (p + q)-vector 𝐗_t := (X_t, X_{t−1}, …, X_{t−p+1}, e_t, …, e_{t−q+1})′ and assuming, without loss of generality, that P = p, Eq. (1.1) can be expressed in the following Markovian representation:

X_t = H′ 𝐗_t and 𝐗_t = A_t 𝐗_{t−1} + e_t b_t   (1.2)

where H = (1, 0, …, 0)′ ∈ R^r, A_t := A_0(t) + Σ_{j=1}^{Q} A_j(t) e_{t−j}, and (A_j(t), b_t)_{0≤j≤Q} is a sequence of appropriate pairs of periodic square r × r matrices A_j(t) and r-vectors b_t. Eq. (1.2) is the same as the defining relation for random coefficient periodic autoregressive models (see Franses and Paap, 2011 and Aknouche and Guerbyenne, 2009).




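To fix ideas, the PBL(1, 0, 1, 1) special case of (1.1) with b_0(t) = 1 reads X_t = a_1(t)X_{t−1} + e_t + c_{11}(t)X_{t−1}e_{t−1} and is straightforward to simulate. The sketch below is illustrative only: the period-4 coefficients and the standard Gaussian innovations are assumptions, not values taken from the paper.

```python
import math
import random

def simulate_pbl_1011(a, c, n, burn_in=500, seed=0):
    """Simulate X_t = a[t % s]*X_{t-1} + e_t + c[t % s]*X_{t-1}*e_{t-1},
    i.e. PBL(1,0,1,1) with s-periodic coefficients and b_0(t) = 1,
    driven by i.i.d. standard Gaussian innovations (an assumption)."""
    rng = random.Random(seed)
    s = len(a)
    x, e_prev = 0.0, rng.gauss(0.0, 1.0)
    path = []
    for t in range(burn_in + n):
        e = rng.gauss(0.0, 1.0)
        x = a[t % s] * x + e + c[t % s] * x * e_prev
        e_prev = e
        if t >= burn_in:
            path.append(x)
    return path

# Illustrative period-4 coefficients; season 0 is "explosive" on its own,
# but the remaining seasons pull the trajectory back.
a = [1.1, 0.3, 0.2, 0.1]
c = [0.2, 0.1, 0.1, 0.1]
path = simulate_pbl_1011(a, c, n=2000)
print(len(path))
```

Note that the simulated path stays stable even though season 0 has |a_1| > 1; this is exactly the within-period averaging that the conditions of Section 2 make precise.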

In this paper, we are interested first in the existence of causal solutions, i.e., solutions such that X_t is measurable with respect to ℑ_t^{(e)} := σ(e_{t−j}, j ≥ 0), and in their probabilistic properties. Second, we focus on some interesting statistical properties such as the central limit theorem and the law of the iterated logarithm for higher order empirical moments. Some notation is used throughout the paper: I_(n) is the n × n identity matrix; for any n × m matrix M = (m_ij) and γ ∈ ]0, 1], define the matrix operator |M|^γ by |M|^γ := (|m_ij|^γ). It is obvious that |MX|^γ ≤ |M|^γ |X|^γ for all X ∈ R^m and |Σ_i M_i|^γ ≤ Σ_i |M_i|^γ, where M ≤ N means m_ij ≤ n_ij for all i and j, with N = (n_ij). ⊗ is the usual Kronecker product of matrices and M^⊗r = M ⊗ M ⊗ ⋯ ⊗ M (r times). If M is a square matrix, ρ(M) denotes the maximum modulus of the eigenvalues of M.

2. Strict and second order periodic stationarity

Iterating (1.2) s times, we obtain

𝐗_t = A(t) 𝐗_{t−s} + η(t)   (2.1)

in which A(t) = Π_{i=0}^{s−1} A_{t−i} and η(t) = Σ_{k=0}^{s−1} ( Π_{i=0}^{k−1} A_{t−i} ) b_{t−k} e_{t−k}, where, as usual, empty products are set equal to I_(r). It is worth noting that, since (e_t)_{t∈Z} is an i.i.d. process, (A(t), η(t))_{t∈Z} is a strictly stationary and ergodic process. Processes similar to (𝐗_t)_{t∈Z} in (2.1) have been examined by Brandt (1986), Bougerol and Picard (1992) and Basrak et al. (2002), and thus we may apply their results to derive conditions for the existence of the so-called periodically stationary and ergodic solutions to (1.2) and hence to (1.1). For this purpose, let ∥·∥ denote any operator norm on the sets of r × r and r × 1 matrices. Define the top Lyapunov exponent associated with the strictly stationary and ergodic sequence of random matrices A = (A(t))_{t∈Z} by

γ^(s)(A) = inf_{t>0} (1/t) E{ log ∥ Π_{j=0}^{t−1} A(s(t−j)) ∥ }.

Since both E{log⁺ ∥A(t)∥} and E{log⁺ ∥η(t)∥} are finite, it follows from Brandt (1986) (see also Bougerol and Picard, 1992) that the unique, causal, strictly periodically stationary (s.p.s.) and periodically ergodic (p.e.) solution of (2.1) is given by

𝐗_t^{(1)} = Σ_{k≥0} ( Π_{j=0}^{k−1} A(t − js) ) η(t − ks)

whenever γ^(s)(A) < 0. It is worth noting that γ^(s)(A) < 0 ⟹ γ(A) < 0, where γ(A) = inf_{n>0} (1/n) E{ log ∥ Π_{j=0}^{n−1} A_{t−j} ∥ }, and thus (1.2) admits a solution given by 𝐗_t^{(2)} = Σ_{k≥0} ( Π_{j=0}^{k−1} A_{t−j} ) b_{t−k} e_{t−k}, which can be expressed in the following Volterra expansion:

𝐗_t^{(2)} = b_t e_t + Σ_{r=1}^{∞} Σ_{l_1,…,l_r≥1} Σ_{0≤k_i≠k_{i+1}≤Q} Σ_{i_1=0}^{l_1−1} ⋯ Σ_{i_r=0}^{l_r−1} A_{k_1}(t − i_1) ⋯ A_{k_r}(t − i_r) b_{t−|l|} e_{(l,k)}   (2.2)

where e_{(l,k)} = ( Π_{i: k_i≠0} Π_{s'=0}^{l_i−1} e_{t−k_i−|l^{(i−1)}|−s'} ) e_{t−|l^{(r)}|}, l^{(r)} = (l_1, …, l_r) and |l^{(r)}| = Σ_{i=1}^{r} l_i.

For γ^(s)(A) < 0 to be also necessary, the model (1.2) furthermore needs to be controllable and observable. Recall that Eq. (1.2) is said to be controllable (respectively observable) if rank{C_r(t)} = r (resp. rank{O_r(t)} = r), where C_r(t) := [υ_1(t) ⋮ υ_2(t) ⋮ ⋯ ⋮ υ_r(t)] (resp. O_r(t) := [O_1(t) ⋮ O_2(t) ⋮ ⋯ ⋮ O_r(t)]), with υ_1(t) = b_t and υ_j(t), j ≥ 2, defined recursively by υ_j(t) = [A_i(t)υ_{j−1}(t−1), i = 0, …, Q], 2 ≤ j ≤ r (resp. O_1(t) = H′, O_j(t) = [O_{j−1}(t−1)A_i(t), i = 0, …, Q], 2 ≤ j ≤ r). This discussion is summarized in the following theorem.

Theorem 2.1. Let (𝐗_t)_{t∈Z} be the process defined by (2.1). Then
1. γ^(s)(A) < 0 is a sufficient condition for (2.1) (respectively (1.2)) to have a unique, causal, s.p.s. and p.e. solution given by the series 𝐗_t^{(1)} (respectively 𝐗_t^{(2)}). Moreover, the series 𝐗_t^{(1)} and 𝐗_t^{(2)} converge absolutely a.s., 𝐗_t^{(1)} = 𝐗_t^{(2)} a.s., and the process ((𝐗′_{st}, 𝐗′_{st+1}, …, 𝐗′_{s(t+1)−1})′)_{t∈Z} is strictly stationary.
2. If, furthermore, the model (1.2) is controllable and observable, then γ^(s)(A) < 0 is a necessary condition for the existence of an s.p.s. solution of (1.2).

Proof.
1. The sufficient condition is a consequence of Theorem 1 of Brandt (1986) and Theorem 1.1 of Bougerol and Picard (1992). On the other hand, for any n ≥ 1, set 𝐗_t^{(1)}(n) = Σ_{k=0}^{n} ( Π_{j=0}^{k−1} A(t−js) ) η(t−ks) and 𝐗_t^{(2)}(n) = Σ_{k=0}^{n} ( Π_{j=0}^{k−1} A_{t−j} ) b_{t−k} e_{t−k}. Then it can be shown that for any v ∈ {0, …, s−1},

𝐗_t^{(2)}(sn+v) = 𝐗_t^{(1)}(n) + Σ_{k=sn+1}^{sn+v} ( Π_{j=0}^{k−1} A_{t−j} ) b_{t−k} e_{t−k}
= 𝐗_t^{(1)}(n) + ( Π_{j=0}^{n} A(t−js) ) Σ_{k=0}^{v−1} ( Π_{j=1}^{k+1} A_{t−sn−j} ) b_{t−k−1} e_{t−k−sn−1}.

Thus ∥𝐗_t^{(1)}(n) − 𝐗_t^{(2)}(sn+v)∥ ≤ ∥ Π_{j=0}^{n} A(t−js) ∥ · K(t) → 0 as n → +∞ for some sequence K(t) of positive and bounded random variables. For the last statement, see Bibi and Lessak (2009).
2. If there exists a causal s.p.s. solution of (1.2) given by the series 𝐗_t^{(2)}, then lim_{k→∞} ∥ H′ ( Π_{i=0}^{k−1} A_{t−i} ) b_{t−k} ∥ = 0 a.s. for all t ∈ Z. By the controllability and observability conditions we obtain lim_{t→∞} ∥ Π_{i=0}^{t−1} A(s(t−i)) ∥ = 0 a.s., and the result follows from Lemma 3.4 of Bougerol and Picard (1992). □

Example 2.1. For the PBL(1, 0, 1, 1) model with b_0(t) = 1, we obtain Π_{i=0}^{s−1} A_{t−i} = Π_{i=0}^{s−1} (a_1(t−i) + c_{11}(t−i)e_{t−i}). Hence PBL(1, 0, 1, 1) admits a causal s.p.s. and p.e. solution if and only if γ^(s)(A) := Σ_{i=0}^{s−1} E{log |a_1(i) + c_{11}(i)e_0|} < 0. It is worth noting that the existence of ''explosive regimes'' (i.e., E{log |a_1(i) + c_{11}(i)e_0|} > 0 for some season i) does not preclude the existence of an s.p.s. solution.

Example 2.2 (PARMA models). For a PARMA(p, q) model with b_0(t) = 1, the sufficient condition reduces to ρ( Π_{i=0}^{s−1} A_0(i) ) < 1. The last condition becomes also necessary if the model is controllable and observable.

Now, we focus our attention on necessary and sufficient conditions ensuring the existence of a square-integrable solution of (1.1). A sufficient condition was given recently by Bibi and Lessak (2009). Necessary conditions for the square-integrability of solutions of general PBL(p, q, p, Q) models have not been obtained, to the best of our knowledge.

These solution processes, when they exist, belong to the class of periodically correlated (p.c.) processes, characterized by E{X_{t+s}} = E{X_t} and Cov(X_{t+s}, X_{k+s}) = Cov(X_t, X_k) for all t, k (see Gladyshev, 1961). Periodically correlated processes have received particular interest because of their connection with multivariate second order stationary processes. Indeed, define the v-th component of the s-dimensional vector X(t) by X_t(v) := X_{st+v}, v = 0, …, s−1, and write µ(v) = E{X_t(v)} and γ_v(h) = Cov(X_t(v), X_t(v−h)) for all nonnegative h. Then the process (X_t)_{t∈Z} is p.c. with period s if, and only if, (X(t))_{t∈Z} is a second order stationary process. In the special case when Q = 1 and q = 0 with b_0(t) ≠ 0, Bibi and Moon-Ho (2006) showed that if E{e_t} = 0 and E{e_t²} = σ², then PBL(p, 0, p, 1) has a causal p.c. solution whenever

λ(1) = ρ( Π_{v=0}^{s−1} ( A_0^{⊗2}(v) + σ² A_1^{⊗2}(v) ) ) < 1.   (2.3)

In what follows, we show how to derive a condition analogous to (2.3) for the existence of a p.c. solution of the general model (1.1), or equivalently of (1.2). The following theorem gives sufficient and necessary conditions for the existence of a p.c. solution of Eq. (1.2).

Theorem 2.2. Assume that

E{e_t²} = σ², E{e_t^{2(Q+1)}} < +∞ and E{e_t^{2j+1}} = 0 for any 0 ≤ j ≤ Q.   (2.4)

If

λ(Q) = ρ( Π_{v=0}^{s−1} ( A_0^{⊗2}(v) + σ² Σ_{j=1}^{Q} A_j^{⊗2}(v) ) ) < 1/Q,   (2.5)

then the infinite series (2.2) converges in mean square and absolutely a.s. and constitutes the unique, causal, s.p.s. and p.e. solution of (1.2). Conversely, if (1.2) is controllable and observable and has a causal p.c. solution, then

λ(Q) = ρ( Π_{v=0}^{s−1} ( A_0^{⊗2}(v) + σ² Σ_{j=1}^{Q} A_j^{⊗2}(v) ) ) < 1.   (2.6)

Sketch of proof of Theorem 2.2. We use the same approach as in Ispany (1997). We first define an equivalence relation on the pairs indexing the processes e_{(l,k)} by (l, k) ∼ (l′, k′) ⇔ E{e_{(l,k)} e_{(l′,k′)}} ≠ 0, and let Q(l, k) = ♯{(l′, k′) : (l, k) ∼ (l′, k′)}. It can be seen that (l, k) ∼ (l′, k′) ⇒ |k| = |k′|, and hence Q(l, k) ≤ Q^{|k|}. With this notation, we have from (2.2), after some tedious algebra,

E{X_t²} ≤ K Σ_{(l,k)} Q^{|k|} ∥ Σ_{i_1=0}^{l_1−1} ⋯ Σ_{i_r=0}^{l_r−1} H′ A_{k_1}(t−i_1) ⋯ A_{k_r}(t−i_r) b_{t−|l|} ∥² σ^{2♯{i: k_i≠0}},

and the result follows from Lemma 4.3 of Ispany (1997). □

Corollary 2.1. For the model PBL(1, 0, 1, 1), the sufficient Condition (2.5) reduces to Π_{v=0}^{s−1} (a_1²(v) + σ² c_{11}²(v)) < 1, which is also necessary, if b_0(t) ≠ 0, for the existence of a causal, p.c. and p.e. solution.

Proof. In this case, Q = 1, A_0(t) = a_1(t) and A_1(t) = c_{11}(t), so Π_{v=0}^{s−1} ( A_0^{⊗2}(v) + σ² A_1^{⊗2}(v) ) = Π_{v=0}^{s−1} ( a_1²(v) + σ² c_{11}²(v) ). If b_0(t) ≠ 0, the model is controllable and hence the sufficient Condition (2.5) is also necessary. □

Corollary 2.2. For the PARMA(p, q) case, Condition (2.5) reduces to ρ( Π_{v=0}^{s−1} A_0^{⊗2}(v) ) < 1.

Proof. Straightforward and hence omitted. □

For the so-called subdiagonal PBL(p, q, p, Q) processes, i.e., when c_{ij}(t) = 0 if i < j in (1.1), an elegant Markovian representation can be given for which some interesting properties are established. Indeed, Eq. (2.1) can be split up into

𝐗_t = A_0(t) 𝐗_{t−1} + Σ_{j=1}^{Q} A_j(t) 𝐗_{t−j} e_{t−j} + e_t b_t   (2.7)

where in (2.7) the first row of the matrices A_j(t) becomes (c_{jj}(t), …, c_{pj}(t), 0, …, 0), j = 1, …, Q. By a simple extension of Pham (1985), we obtain the following Markovian representation

X_t = H′ 𝐙_t and 𝐙_t = M_t(e_t) 𝐙_{t−1} + b_t(e_t)   (2.8)

where 𝐙_t := (𝐗′_t, e_t 𝐗′_t, e_{t−1} 𝐗′_{t−1}, …, e_{t−Q+1} 𝐗′_{t−Q+1})′, H ∈ R^r with r = (p + q)(Q + 1), and M_t(e_t) and b_t(e_t) are finite order polynomials in e_t. More precisely, M_t(e_t) = M_1(t) + e_t M_2(t) and b_t(e_t) = e_t b_1(t) + e_t² b_2(t), where {b_1(t), b_2(t)} and {M_1(t), M_2(t)} are appropriate periodic r-vectors and r × r matrices. For the representation (2.8) we keep the same notation as in (2.1), i.e., 𝐙_t = M(t) 𝐙_{t−s} + ξ(t), where M(t) = Π_{i=0}^{s−1} M_{t−i}(e_{t−i}) and ξ(t) = Σ_{k=0}^{s−1} ( Π_{i=0}^{k−1} M_{t−i}(e_{t−i}) ) b_{t−k}(e_{t−k}). Hence, we have the following corollary.

Corollary 2.3. Let γ^(s)(M) be the Lyapunov exponent associated with the sequence of random matrices M := (M(t))_{t∈Z}, and assume that κ_δ := E{|e_0|^δ} < +∞ for some δ > 0. Then, if γ^(s)(M) < 0, there is δ* ∈ ]0, 1[ such that E{|X_t|^{δ*}} < +∞.

Proof. The proof is the same as in Basrak et al. (2002), choosing a norm ∥·∥ satisfying ∥M∥^δ ≤ ∥ |M|^δ ∥ for some δ > 0. □

Proposition 2.1. Suppose that κ_{2(m+1)} < ∞ for some integer m ≥ 1. Then the following statements are equivalent: (i) E{𝐙_t^{⊗2m}} < +∞; (ii) ρ( Π_{v=0}^{s−1} E{(M_1(v) + e_0 M_2(v))^{⊗2m}} ) < 1.

Proof. The proof follows closely the arguments of Theorem 2.2. □

Proposition 2.2. Assume that E{e_0²} = σ² < +∞. Then ρ( Π_{v=0}^{s−1} ( M_1^{⊗2}(v) + σ² M_2^{⊗2}(v) ) ) < 1 is a sufficient condition for both (2.8) and (2.7) to have unique, causal, p.c. solutions given by 𝐙_t = Σ_{k≥0} ( Π_{j=0}^{k−1} M_{t−j}(e_{t−j}) ) b_{t−k}(e_{t−k}) and X_t = H′ 𝐙_t, where the above series converges in mean square and absolutely a.s. The variance–covariance matrix Γ_v(0) = E{(𝐙_t(v) − µ(v))(𝐙_t(v) − µ(v))′} is given by

Γ_v^{⊗2}(0) = ( M_1^{⊗2}(v) + σ² M_2^{⊗2}(v) ) Γ_{v−1}^{⊗2}(0) + Σ_v^{⊗2}(0)   (2.9)

where Σ_v(0) is the covariance matrix of e_t(v) b_1(v) + (e_t²(v) − σ²) b_2(v) + e_t(v) M_2(v) µ(v). Moreover, the solution process is unique, s.p.s. and p.e.

Conversely, a necessary condition for the existence of a p.c. solution is that there exists a covariance matrix Γ_v(0) solving Eq. (2.9).

Proof. Straightforward and hence omitted. □
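For the PBL(1, 0, 1, 1) case, the second-order condition of Corollary 2.1 is a finite product that can be evaluated directly. The period-4 coefficients and σ² = 1 below are illustrative assumptions, not values from the paper.

```python
def pbl_1011_second_order_product(a, c, sigma2=1.0):
    """Corollary 2.1's quantity prod_v (a_1(v)^2 + sigma^2 * c_11(v)^2) for
    PBL(1,0,1,1); a causal p.c. solution exists iff it is < 1 (b_0(t) = 1)."""
    prod = 1.0
    for a_v, c_v in zip(a, c):
        prod *= a_v * a_v + sigma2 * c_v * c_v
    return prod

# Illustrative coefficients: despite a_1(0) = 1.1 > 1 in one season,
# the product over the full period is far below 1.
lam = pbl_1011_second_order_product([1.1, 0.3, 0.2, 0.1], [0.2, 0.1, 0.1, 0.1])
print(lam < 1.0)
```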

3. Limit theorems

The central limit theorem (CLT) and the law of the iterated logarithm (LIL) for the sample moments of bilinear processes have been established by Liu (1992), using results originated by Hall and Heyde (1980) in the context of stationary and ergodic processes. This technique can be adapted to our model. For this purpose, throughout this section we assume that the causality assumptions (2.4) and (2.5) are satisfied. The following theorem (cf. Hall and Heyde, 1980, Theorem 5.4 and Corollary 5.4) plays an important role in establishing both the CLT and the LIL for sample moments of periodic bilinear processes. For the model (1.1), defined by the first component of (𝐗_t)_{t∈Z} satisfying (1.2), set Y_t = X_t − E{X_t}. The first partial sum of interest is S̃_n(v) = Σ_{t=0}^{n−1} (X_{st+v} − µ(v)) = Σ_{t=0}^{n−1} X_{st+v} − nµ(v).

Theorem 3.1. Consider the model (1.1) and its vectorial representation (1.2). Under the causality assumptions (2.4) and (2.5), the following limit laws hold: n^{−1/2} S̃_n(v) ⇝ N(0, σ²(v)), where σ²(v) := lim_{n→∞} E{(n^{−1/2} S̃_n(v))²} < +∞, and if σ²(v) > 0, then lim sup_{n→∞} (2n log log n)^{−1/2} S̃_n(v) = σ(v) and lim inf_{n→∞} (2n log log n)^{−1/2} S̃_n(v) = −σ(v) almost surely.

Proof. To prove Theorem 3.1, we use the same approach as in Liu (1992). We first define the R^r-valued processes S_n(t) = A_t S_{n−1}(t−1) + e_t b_t 1_{{n≥0}} and, for all n ∈ N, ∆_n(t) = S_n(t) − S_{n−1}(t) = ( Π_{i=0}^{n−1} A_{t−i} ) b_{t−n} e_{t−n}. It is easily seen that, for all n ≥ 0, S_n(t) and ∆_n(t) are respectively ℑ_t^{(e)}- and ℑ_{t−1}^{(e)}-measurable, and for any m > Q,

𝐗_t = e_t b_t + Σ_{k=1}^{m−Q−1} ( Π_{i=0}^{k−1} A_{t−i} ) b_{t−k} e_{t−k} + Σ_{k=m−Q}^{∞} ( Π_{i=0}^{k−1} A_{t−i} ) b_{t−k} e_{t−k}
= ∆_0(t) + Σ_{k=1}^{m−Q−1} ∆_k(t) + Σ_{k=m−Q}^{∞} ∆_k(t).

Hence, we have

E{Y_t | ℑ_{t−m}^{(e)}} = E{ H′(𝐗_t − E{𝐗_t}) | ℑ_{t−m}^{(e)} } = E{ H′ Σ_{k=m−Q}^{∞} ( ∆_k(t) − E{∆_k(t)} ) | ℑ_{t−m}^{(e)} }.

By using the inequalities E{(E{Y|ℑ} − E{Y})²} ≤ Var(Y) and E{∆_n(t)∆′_n(t)} ≤ C λ(Q)^{n/2} (cf. Liu, 1992), we deduce that, for m ≥ Q,

Σ_{m=1}^{∞} ( E{(E{Y_t | ℑ_{t−m}})²} )^{1/2} ≤ Σ_{m=1}^{∞} ( Var( H′ Σ_{k≥m−Q} ∆_k(t) ) )^{1/2} ≤ 2 Σ_{m=1}^{∞} Σ_{n≥m−Q} ( Var(H′ ∆_n(t)) )^{1/2} ≤ C Σ_{m≥1} ( λ(Q)^{m/2} )^{1/2} < +∞.

This completes the proof. □

It is well known by now that, even for stationary series, the first and second empirical moments are not sufficient to distinguish between linearity and nonlinearity, so resort to higher order moments is unavoidable. We therefore consider the limiting laws of the partial sums S̃_n^{(k(r))}(v) = Σ_{t=0}^{n−1} X_t(v) X_t(v+k_1) ⋯ X_t(v+k_r) − n µ_{k(r)}(v), where k(r) := (k_1, …, k_r) and µ_{k(r)}(v) = E{X_t(v) X_t(v+k_1) ⋯ X_t(v+k_r)}. By using exactly the same argument as in Theorem 3.1, we have the following general result.

Theorem 3.2. Consider the model (1.1) and its vectorial representation (1.2). Assume that E{|e_0|^{rQ}} < +∞ and E{e_0^j} = 0 for every odd positive integer j ≤ rQ. Under the causality assumptions (2.4) and (2.5), the following limit laws hold: n^{−1/2} S̃_n^{(k(r))}(v) ⇝ N(0, σ²_{k(r)}(v)) as n → +∞, where σ²_{k(r)}(v) := lim_{n→∞} E{(n^{−1/2} S̃_n^{(k(r))}(v))²} < +∞, and if σ²_{k(r)}(v) > 0, then lim sup_{n→∞} (2n log log n)^{−1/2} S̃_n^{(k(r))}(v) = σ_{k(r)}(v) and lim inf_{n→∞} (2n log log n)^{−1/2} S̃_n^{(k(r))}(v) = −σ_{k(r)}(v) almost surely.
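Theorem 3.1 can be illustrated by simulation: draw replications of the normalized seasonal sum n^{−1/2} S̃_n(v) and inspect their empirical distribution. The sketch below does this for an assumed PBL(1, 0, 1, 1) specification with Gaussian innovations (illustrative choices, not from the paper); the seasonal means µ(v) solve µ(v) = a_1(v)µ(v−1) + σ²c_{11}(v), a small calculation using E{X_{t−1}e_{t−1}} = σ² when b_0(t) = 1.

```python
import math
import random

def periodic_means(a, c, sigma2=1.0, iters=200):
    """Seasonal means mu(v) of the PBL(1,0,1,1) example, computed as the
    fixed point of mu(v) = a[v]*mu(v-1) + sigma2*c[v]; the recursion uses
    E{X_{t-1} e_{t-1}} = sigma2, which holds when b_0(t) = 1."""
    s = len(a)
    mu = [0.0] * s
    for _ in range(iters):
        for v in range(s):
            mu[v] = a[v] * mu[v - 1] + sigma2 * c[v]
    return mu

def normalized_seasonal_sums(a, c, v=1, n=500, reps=200, seed=0):
    """Monte Carlo draws of n^{-1/2} * sum_t (X_{st+v} - mu(v)), the
    statistic of Theorem 3.1, for an assumed PBL(1,0,1,1) model with
    i.i.d. standard Gaussian innovations."""
    rng = random.Random(seed)
    s, mu = len(a), periodic_means(a, c)
    burn = 50 * s                       # burn-in, a multiple of s
    stats = []
    for _ in range(reps):
        x, e_prev, acc = 0.0, rng.gauss(0.0, 1.0), 0.0
        for t in range(burn + n * s):
            e = rng.gauss(0.0, 1.0)
            x = a[t % s] * x + e + c[t % s] * x * e_prev
            e_prev = e
            if t >= burn and t % s == v:
                acc += x - mu[v]
        stats.append(acc / math.sqrt(n))
    return stats

stats = normalized_seasonal_sums([1.1, 0.3, 0.2, 0.1], [0.2, 0.1, 0.1, 0.1])
m = sum(stats) / len(stats)
var = sum((z - m) ** 2 for z in stats) / len(stats)
print(len(stats), var > 0.0)
```

As the theorem predicts, the draws have an empirical mean near zero and a stable variance as n grows; this is a numerical sanity check, not a proof.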

Acknowledgements

This research was partially supported by the Algerian Ministry of Higher Education and Scientific Research, contract PNR2011/8/u250/738-CNEPRU.

References

Aknouche, A., Guerbyenne, H., 2009. Periodic stationarity of random coefficient periodic autoregressions. Statistics & Probability Letters 79, 990–996.
Basrak, B., Davis, R.A., Mikosch, T., 2002. Regular variation of GARCH processes. Stochastic Processes and their Applications 99, 95–115.
Bibi, A., Lessak, R., 2009. On stationarity and β-mixing of periodic bilinear processes. Statistics & Probability Letters 79, 79–87.
Bibi, A., Moon-Ho, R., 2006. Estimation of periodic bilinear time series models. Communications in Statistics – Theory and Methods 35, 1745–1756.
Bougerol, P., Picard, N., 1992. Strict stationarity of generalized autoregressive processes. The Annals of Probability 20 (4), 1714–1730.
Brandt, A., 1986. The stochastic equation Y_{n+1} = A_n Y_n + B_n with stationary coefficients. Advances in Applied Probability 18, 211–220.
Franses, P.H., Paap, R., 2011. Random-coefficients periodic autoregressions. Statistica Neerlandica 65 (1), 101–115.
Gladyshev, E.G., 1961. Periodically correlated random sequences. Soviet Mathematics 2, 385–388.

Granger, C.W., Andersen, A.P., 1978. An Introduction to Bilinear Time Series Models. Vandenhoeck & Ruprecht, Göttingen.
Hall, P., Heyde, C.C., 1980. Martingale Limit Theory and its Applications. Academic Press.
Ispany, M., 1997. On stationarity of additive bilinear state-space representation of time series. In: Csiszár, I., et al. (Eds.), Stochastic Differential and Difference Equations. Birkhäuser, pp. 143–155.
Kristensen, D., 2009. On stationarity and ergodicity of the bilinear model with applications to the GARCH models. Journal of Time Series Analysis 30 (1), 125–144.
Liu, J., 1992. On stationarity and asymptotic inference of bilinear time series models. Statistica Sinica 2, 479–494.
Pham, D.T., 1985. Bilinear Markovian representation of bilinear models. Stochastic Processes and their Applications 20, 295–306.