Journal of Statistical Planning and Inference 140 (2010) 1529–1541
Integer-valued autoregressive processes with periodic structure

Magda Monteiro (a), Manuel G. Scotto (b,*), Isabel Pereira (b)

(a) Escola Superior de Tecnologia e Gestão de Águeda, Research Unit 'Matemática e Aplicações', Universidade de Aveiro, Portugal
(b) Departamento de Matemática, Research Unit 'Matemática e Aplicações', Universidade de Aveiro, Campo Universitário de Santiago, 3810-193 Aveiro, Portugal
Article history: Received 14 May 2008; received in revised form 10 December 2009; accepted 11 December 2009; available online 22 December 2009.

Abstract

In this paper the periodic integer-valued autoregressive model of order one with period $T$, driven by a periodic sequence of independent Poisson-distributed random variables, is studied in some detail. Basic probabilistic and statistical properties of this model are discussed. Moreover, parameter estimation is also addressed. Specifically, the methods of estimation under analysis are the method of moments, least squares-type and likelihood-based ones. Their performance is compared through a simulation study.

Keywords: Count processes; Periodic autocovariances; Binomial thinning
1. Introduction

Since their introduction by Bennett (1958) and Gladyshev (1961, 1963), much attention has been given to periodically correlated (or cyclostationary) processes, partially because of their wide applicability to hydrology (Vecchia, 1985; Salas, 1993; Tesfaye et al., 2006), economics (Franses, 1994; Franses and Paap, 2004; Haldrup et al., 2007), meteorology (Bloomfield et al., 1994; Lund et al., 1995, 2007), and signal processing (Gardner et al., 2006). Further examples can be found in Hurd and Miamee (2007) and the references therein. A large part of the literature on this topic is devoted to the so-called periodic autoregressive moving average (PARMA) models, which are extensions of the commonly used ARMA models with parameters that vary periodically in time. The basic probabilistic properties of PARMA models, as well as statistical inference and forecasting techniques, have been addressed by Lund et al. (2006), Shao (2006), Basawa et al. (2004), Shao and Ni (2004), Basawa and Lund (2001), Lund and Basawa (2000), and Bentarzi and Hallin (1994), among other authors.

It is worth mentioning that all references given in the previous paragraph deal with the case of continuous-valued (i.e. conventional) periodically correlated processes. In contrast, the analysis of periodically correlated series of counts has not received much attention in the literature. This paper aims to contribute in this direction. Motivation to consider discrete data models comes from the need to account for the discrete nature of certain data sets, often counts of events, objects or individuals.
Examples of applications can be found in the analysis of the number of rainy days (Cui and Lund, 2009), time series of counts generated from stock transactions (Quoreshi, 2006), where each transaction refers to a trade between a buyer and a seller of a volume of stocks at a given price, statistical process control (Weiß, 2009), telecommunications (Weiß, 2008), and also in the analysis of optimal alarm systems (Monteiro et al., 2008), experimental biology (Zhou and Basawa, 2005), social science (McCabe and Martin, 2005), and queueing systems (Ahn et al., 2000).
* Corresponding author. Tel./fax: +351 234370670. E-mail address: [email protected] (M.G. Scotto).
0378-3758/$ - see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.jspi.2009.12.015
Fig. 1. Monthly number of short-term unemployed people in Penamacor County (Portugal), from January 1997 to December 2007.
In this paper, we investigate basic probabilistic and statistical properties of the Periodic INteger-valued AutoRegressive process of order one with period $T$ (hereafter PINAR$(1)_T$) defined by the recursive equation

$$X_t = \phi_t \circ X_{t-1} + Z_t, \quad t \in \mathbb{N}, \qquad (1)$$

with $\phi_t = \alpha_j \in (0,1)$ for $t = j + kT$ ($j = 1, \ldots, T$; $k \in \mathbb{N}_0$), where the thinning operator $\circ$ is defined as

$$\phi_t \circ X_{t-1} \stackrel{d}{=} \sum_{i=1}^{X_{t-1}} U_{i,t}(\phi_t),$$

with $(U_{i,t}(\phi_t))$, for $i = 1, 2, \ldots$, a periodic sequence of independent Bernoulli random variables (r.v.'s) with success probability $P(U_{i,t}(\phi_t) = 1) = \phi_t$. Furthermore, it is assumed that $(Z_t)$ constitutes a periodic sequence of independent Poisson-distributed random variables with mean $\upsilon_t$ ($Z_t \sim P(\upsilon_t)$), where $\upsilon_t = \lambda_j$ for $t = j + kT$ ($j = 1, \ldots, T$; $k \in \mathbb{N}_0$), independent of $X_{t-1}$ and $\phi_t \circ X_{t-1}$. To avoid ambiguity, $T$ is taken as the smallest positive integer satisfying (1).

It is important to stress that the discreteness of the process $(X_t)$ is ensured by the $\circ$-operator, since this operator preserves the discrete nature of the variates and acts as the analogue of the standard multiplication used in the continuous-valued moving average model. The concept of thinning is well known in classical probability theory and has long been in use in the Bienaymé–Galton–Watson branching process literature as well as in the theory of stopped-sum distributions.

Potential applications of INAR processes with periodic structure can be found in the analysis of demographic or labor-market data sets. As an illustrative example, Fig. 1 displays the time series plot of the monthly number of short-term unemployed people in Penamacor County (Portugal), from January 1997 to December 2007. Fig. 2 displays the sample means and sample variances by month of the year, and the autocorrelation function. Further examples can be found in the analysis of international tourism demand (Brännäs et al., 2002; Brännäs and Nordström, 2006). In general, such data exhibit a strong form of periodic variation over the day of the week in addition to a strong seasonal variation over the year, with a peak in July–August.

The rest of the paper is organized as follows: in Section 2, we demonstrate the existence of an almost surely unique non-negative integer-valued periodically stationary process satisfying (1).
Expressions for the periodic mean and autocovariance of the periodically stationary distribution are also given. Parameter estimation is covered in Section 3. In Section 4 the results are illustrated through a simulation study. Finally, some concluding remarks are given in Section 5.

2. Basic properties of the PINAR model

The analysis of the existence and uniqueness of a periodically stationary and causal PINAR$(1)_T$ process can be carried out via the multivariate integer-valued autoregressive process introduced by Latour (1997). Note that Eq. (1) admits the representation

$$\mathbf{Y}_t = \mathbf{A} \circ \mathbf{Y}_{t-1} + \mathbf{f}_t \qquad (2)$$
Fig. 2. Sample means (top row, left) and variances (top row, right) by months of the year and the autocorrelation function (bottom row, left) of the monthly number of unemployed people in Penamacor County (Portugal), from January 1997 to December 2007.
with $\mathbf{Y}_t = (X_{1+tT}, X_{2+tT}, \ldots, X_{T+tT})'$, where $'$ denotes matrix transpose, $\mathbf{A}$ is the $T \times T$ matrix whose only non-zero entries lie in its last column, namely

$$\mathbf{A} = \begin{pmatrix} 0 & \cdots & 0 & \alpha_1 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & \prod_{k=0}^{T-1} \alpha_{T-k} \end{pmatrix},$$

with $(j,T)$ entry $\prod_{k=0}^{j-1} \alpha_{j-k}$, and

$$\mathbf{f}_t = \mathbf{B} \circ \mathbf{Z}_t = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
\alpha_2 & 1 & 0 & \cdots & 0 & 0 \\
\alpha_3\alpha_2 & \alpha_3 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
\prod_{j=0}^{T-3}\alpha_{T-1-j} & \prod_{j=0}^{T-4}\alpha_{T-1-j} & \prod_{j=0}^{T-5}\alpha_{T-1-j} & \cdots & 1 & 0 \\
\prod_{j=0}^{T-2}\alpha_{T-j} & \prod_{j=0}^{T-3}\alpha_{T-j} & \prod_{j=0}^{T-4}\alpha_{T-j} & \cdots & \alpha_T & 1
\end{pmatrix} \circ \begin{pmatrix} Z_{1+tT} \\ Z_{2+tT} \\ \vdots \\ Z_{T+tT} \end{pmatrix},$$

with $(\mathbf{f}_t)$ a sequence of i.i.d. integer-valued random vectors, independent of the operators, with finite mean $E(\mathbf{f}_t) = \mathbf{B}(\lambda_1, \lambda_2, \ldots, \lambda_T)'$ and $V(\mathbf{f}_t) = \mathbf{B}\,\Sigma_Z\,\mathbf{B}'$, where $\Sigma_Z = V(\mathbf{Z}_t) = \mathrm{diag}(\lambda_1, \ldots, \lambda_T)$. In view of the fact that the eigenvalues of the matrix $\mathbf{A}$ are less than one in modulus and that $\mathbf{f}_t$ is independent of $\mathbf{Y}_s$ ($s < t$), it follows from Proposition 3.1 in Latour (1997, p. 236) that there exists an almost surely unique non-negative integer-valued stationary process satisfying (2).

Next we obtain the periodically stationary distribution of $(X_t)$. First, however, we prove the following result. For simplicity of notation we define

$$\beta_{t,i} = \begin{cases} \prod_{j=0}^{i-1} \phi_{t-j}, & i > 0, \\ 1, & i = 0, \end{cases}$$
which can be rewritten as

$$\beta_{t,i} = \begin{cases} \beta_{t,j}\,\beta_{T,T}^{k}, & i = j + kT,\ j = 1, 2, \ldots, T, \\ 1, & i = 0. \end{cases}$$
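As a numerical illustration (a sketch only, not from the paper; assuming numpy, with function names ours), the recursion (1) can be simulated via binomial thinning, and the matrix $\mathbf{A}$ of representation (2) assembled from the products $\beta_{j,j}$; its spectral radius equals $\beta_{T,T} < 1$:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_pinar1(alpha, lam, n, x0=0):
    """Simulate X_1, ..., X_n from recursion (1): the thinning phi_t o X_{t-1}
    is a Binomial(X_{t-1}, phi_t) draw (a sum of X_{t-1} Bernoulli(phi_t)
    variables) and Z_t ~ Poisson(lambda_j) for t in season j."""
    T = len(alpha)
    x, prev = np.empty(n, dtype=np.int64), x0
    for t in range(n):
        j = t % T                              # season j+1 of period T
        prev = rng.binomial(prev, alpha[j]) + rng.poisson(lam[j])
        x[t] = prev
    return x

def build_A(alpha):
    """T x T matrix A of representation (2): only its last column is non-zero,
    row j holding beta_{j,j} = alpha_j * alpha_{j-1} * ... * alpha_1."""
    T = len(alpha)
    A = np.zeros((T, T))
    A[:, T - 1] = np.cumprod(alpha)            # beta_{1,1}, ..., beta_{T,T}
    return A

alpha, lam = [0.85, 0.50, 0.76, 0.63], [4.0, 1.0, 3.0, 2.0]   # values of Section 4
path = simulate_pinar1(alpha, lam, n=2000)
rho = max(abs(np.linalg.eigvals(build_A(alpha))))             # equals beta_{T,T}
```

For these parameter values $\beta_{T,T} \approx 0.203$, so the eigenvalue condition needed for Proposition 3.1 of Latour (1997) holds.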
Note that the sequence $(\beta_{t,i})$ is periodic with period $T$.

Proposition 2.1. For a fixed value of $j = 1, \ldots, T$, with $T \in \mathbb{N}$, the process $(X_t)$ for $t = j + kT$ and $k \in \mathbb{N}_0$ is an irreducible, aperiodic and positive recurrent (and hence ergodic) Markov chain. Moreover, the stationary distribution of $(X_t)$ is given by that of

$$V_j = \sum_{m=1}^{+\infty} \sum_{a=0}^{T-1} (\beta_{j,j}\,\beta_{T,a}\,\beta_{T,T}^{m-1}) \circ Z_{T(m+1)-a} + \sum_{m=0}^{j-1} \beta_{j,m} \circ Z_{j-m},$$

where the series converges almost surely and also in $L^2$.

Proof. See Appendix A.

Now we are prepared to obtain the periodic mean and autocovariance function of $(X_t)$.

Lemma 2.1. For a fixed value of $j = 1, \ldots, T$, with $T \in \mathbb{N}$, $t = j + kT$ and $k \in \mathbb{N}_0$,

$$\mu_j = \mu_t = E(X_t) = V(X_t) = \frac{\sum_{k=0}^{j-1} \beta_{j,k}\,\lambda_{j-k} + \beta_{j,j} \sum_{i=0}^{T-j-1} \beta_{T,i}\,\lambda_{T-i}}{1 - \beta_{T,T}}, \qquad (3)$$

with the convention $\sum_{i=0}^{-1} = 0$. Moreover, for $j = 1, \ldots, T$ and $h \ge 0$, $\gamma_{j+kT}(h) = \gamma_j(h) = \beta_{j+h,h}\,\mu_j$ and $\gamma_{j+kT}(-h) = \gamma_j(-h) = \beta_{j+kT,h}\,\mu_{j+kT-h}$.

Note that the mean $\mu_j$ can be calculated recursively through the expression
$$\mu_j = \beta_{j,j}\,\mu_T + \sum_{k=0}^{j-1} \beta_{j,k}\,\lambda_{j-k}, \quad j = 1, \ldots, T.$$

Moreover, in contrast to the autocovariance function of a stationary series, $\gamma_j(\cdot)$ is not symmetric in $h$; however, $\gamma_t(-h) = \gamma_{t-h}(h)$ and $\gamma_t(h) = \gamma_{t+h}(-h)$. Furthermore, since $h$ can be written in the form $h = i + mT$, for some $i \in \{1, \ldots, T\}$ and $m \in \mathbb{N}_0$, the autocovariance function takes the form $\gamma_j(h) = \beta_{T,T}^{m}\,\beta_{j+i,i}\,\mu_j$ and $\gamma_j(-h) = \beta_{T,T}^{m}\,\beta_{j+T,i}\,\mu_{j+T-i}$.

The main result of this section is given in Theorem 2.1 below.

Theorem 2.1. The marginal distribution of $(X_t)$, with $t = j + kT$ for a fixed value of $j = 1, \ldots, T$, with $T \in \mathbb{N}$, and $k \in \mathbb{N}_0$, is Poisson with mean $\mu_j$ if and only if $(Z_t)$ forms a sequence of independent Poisson-distributed random variables with mean $\lambda_j$.

Proof. See Appendix A.

3. Parameter estimation

Let $(X_1, \ldots, X_{NT})$ be a sequence of r.v.'s satisfying (1), with $\boldsymbol{\theta} = (\alpha_1, \lambda_1, \ldots, \alpha_T, \lambda_T)$ the vector of unknown parameters. The notation used assumes $N$ complete cycles of observations. Without loss of generality we assume that $X_0 = x_0$. The methods of estimation analyzed in this section are the method of moments, least squares-type and likelihood-based ones.

3.1. Moments-based estimators

In this section we discuss Yule–Walker (YW) estimators for the vector of parameters $\boldsymbol{\theta}$, which consist of the solution to the Yule–Walker-type equations

$$\begin{cases} \mu_i = \alpha_i\,\mu_{i-1} + \lambda_i, \\ \gamma_{i+kT}(1) = \beta_{i+1,1}\,\sigma_i^2, \end{cases} \qquad (4)$$
for $i = 1, \ldots, T$ and $\mu_0 = \mu_T$, with $\gamma_{i+kT}(1)$, $\mu_i$, and $\sigma_i^2$ replaced by their corresponding empirical counterparts. Hence, the YW-estimators $\hat{\alpha}_{i,YW}$ and $\hat{\lambda}_{i,YW}$ of $\alpha_i$ and $\lambda_i$, for $i = 1, \ldots, T$, are given by

$$\hat{\alpha}_{i,YW} = \begin{cases} \dfrac{N \sum_{k=0}^{N-1} (X_{kT} - \bar{X}_{T,N})(X_{1+kT} - \bar{X}_{1,N})}{(N-1) \sum_{k=0}^{N} (X_{kT} - \bar{X}_{T,N})^2}, & i = 1, \\[2ex] \dfrac{\sum_{k=0}^{N-1} (X_{i-1+kT} - \bar{X}_{i-1,N})(X_{i+kT} - \bar{X}_{i,N})}{\sum_{k=0}^{N-1} (X_{i-1+kT} - \bar{X}_{i-1,N})^2}, & i = 2, \ldots, T, \end{cases}$$

and

$$\hat{\lambda}_{i,YW} = \begin{cases} \bar{X}_{1,N} - \hat{\alpha}_1\,\bar{X}_{T,N}, & i = 1, \\ \bar{X}_{i,N} - \hat{\alpha}_i\,\bar{X}_{i-1,N}, & i = 2, \ldots, T, \end{cases}$$

with

$$\bar{X}_{i,N} = \begin{cases} \dfrac{1}{N} \sum_{k=0}^{N-1} X_{i+kT}, & i = 1, \ldots, T-1, \\[2ex] \dfrac{1}{N+1} \sum_{k=0}^{N} X_{kT}, & i = T. \end{cases}$$
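As a quick numerical check (a sketch, not from the paper; function names ours), the YW estimates for seasons $i = 2, \ldots, T$ are lag-one sample regressions between consecutive seasons of the same cycle:

```python
import numpy as np

rng = np.random.default_rng(0)

def yule_walker_pinar(x, T):
    """Yule-Walker-type estimates of (alpha_i, lambda_i) for seasons
    i = 2, ..., T from a series x holding N complete cycles (Eq. (4)).
    The i = 1 case, which pairs season T with season 1 of the *next*
    cycle, is omitted here for brevity."""
    x = np.asarray(x, dtype=float)
    N = len(x) // T
    seasons = x[: N * T].reshape(N, T)      # column j holds season j+1
    a_hat = np.full(T, np.nan)
    l_hat = np.full(T, np.nan)
    for i in range(1, T):                   # seasons 2, ..., T
        xi, xim1 = seasons[:, i], seasons[:, i - 1]
        ci, cim1 = xi - xi.mean(), xim1 - xim1.mean()
        a_hat[i] = (cim1 @ ci) / (cim1 @ cim1)          # sample regression slope
        l_hat[i] = xi.mean() - a_hat[i] * xim1.mean()   # lambda_i = mu_i - alpha_i mu_{i-1}
    return a_hat, l_hat

# check against a long simulated path (binomial thinning + Poisson innovations)
alpha, lam = [0.85, 0.50, 0.76, 0.63], [4.0, 1.0, 3.0, 2.0]
x, prev = np.empty(40000, dtype=np.int64), 0
for t in range(40000):
    j = t % 4
    prev = rng.binomial(prev, alpha[j]) + rng.poisson(lam[j])
    x[t] = prev
a_hat, l_hat = yule_walker_pinar(x, T=4)
```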
3.2. Conditional least squares estimators (CLS)

The CLS-estimators $\hat{\boldsymbol{\theta}}_{CLS} = (\hat{\alpha}_{1,CLS}, \hat{\lambda}_{1,CLS}, \ldots, \hat{\alpha}_{T,CLS}, \hat{\lambda}_{T,CLS})$ of $\boldsymbol{\theta}$ are obtained by minimizing the expression

$$Q(\boldsymbol{\theta}) = \sum_{k=0}^{N-1} \sum_{i=1}^{T} (X_{i+kT} - \alpha_i X_{i+kT-1} - \lambda_i)^2,$$

yielding

$$\hat{\alpha}_{i,CLS} = \frac{N \sum_{k=0}^{N-1} X_{i+kT} X_{i-1+kT} - \sum_{k=0}^{N-1} X_{i+kT} \sum_{k=0}^{N-1} X_{i-1+kT}}{N \sum_{k=0}^{N-1} X_{i-1+kT}^2 - \left(\sum_{k=0}^{N-1} X_{i-1+kT}\right)^2}$$

and

$$\hat{\lambda}_{i,CLS} = N^{-1} \left( \sum_{k=0}^{N-1} X_{i+kT} - \hat{\alpha}_{i,CLS} \sum_{k=0}^{N-1} X_{i-1+kT} \right)$$

for $i = 1, \ldots, T$. It is easy to check that $\hat{\alpha}_{i,YW} = \hat{\alpha}_{i,CLS}$ for $i = 2, \ldots, T$, whereas $\hat{\lambda}_{i,YW} = \hat{\lambda}_{i,CLS}$ for $i = 2, \ldots, T-1$. However, it follows from straightforward modifications of the proof of Theorem 3 in Freeland and McCabe (2005) that $\hat{\alpha}_{1,CLS} = \hat{\alpha}_{1,YW} + o_p(1)$, $\hat{\lambda}_{1,CLS} = \hat{\lambda}_{1,YW} + o_p(1)$ and $\hat{\lambda}_{T,CLS} = \hat{\lambda}_{T,YW} + o_p(1)$.

In obtaining the asymptotic distribution of $\hat{\boldsymbol{\theta}}_{CLS}$, we first prove the following lemma.

Lemma 3.1. Let $S_N(\boldsymbol{\theta}) = \sum_{k=0}^{N-1} \xi_k(\boldsymbol{\theta})$ with

$$\xi_k(\boldsymbol{\theta}) = \sum_{i=1}^{T} U_{i+kT}\,\frac{\partial U_{i+kT}}{\partial \boldsymbol{\theta}},$$

where $U_{i+kT} = X_{i+kT} - \alpha_i X_{i+kT-1} - \lambda_i$. As $N \to \infty$,

$$N^{-1}\,\frac{\partial S_N(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \stackrel{P}{\longrightarrow} A_1 \qquad (5)$$
with

$$A_1 = \begin{pmatrix} C_1 & 0_2 & \cdots & 0_2 \\ 0_2 & C_2 & \cdots & 0_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0_2 & 0_2 & \cdots & C_T \end{pmatrix}, \qquad C_i = \begin{pmatrix} \mu_{i-1} + \mu_{i-1}^2 & \mu_{i-1} \\ \mu_{i-1} & 1 \end{pmatrix},$$

for $i = 1, \ldots, T$, with $\mu_0 = \mu_T$. Moreover,

$$N^{-1} \sum_{k=0}^{N-1} \xi_k(\boldsymbol{\theta})\,\xi_k(\boldsymbol{\theta})' \stackrel{P}{\longrightarrow} A_2 \qquad (6)$$
with

$$A_2 = \begin{pmatrix} \Omega_1 & 0_2 & \cdots & 0_2 \\ 0_2 & \Omega_2 & \cdots & 0_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0_2 & 0_2 & \cdots & \Omega_T \end{pmatrix},$$

where

$$\Omega_i = \begin{pmatrix} \alpha_i(1-\alpha_i)\mu_{i-1,3} + \lambda_i\,\mu_{i-1,2} & \alpha_i(1-\alpha_i)\mu_{i-1,2} + \lambda_i\,\mu_{i-1} \\ \alpha_i(1-\alpha_i)\mu_{i-1,2} + \lambda_i\,\mu_{i-1} & \alpha_i(1-\alpha_i)\mu_{i-1} + \lambda_i \end{pmatrix}$$

and $\mu_{i,j} = E[X_{i+kT}^j]$, $\mu_{0,j} = E[X_{kT}^j]$, $\alpha_0 = \alpha_T$ and $\lambda_0 = \lambda_T$, for $i = 1, \ldots, T$. Finally,

$$N^{-1} S_N(\boldsymbol{\theta}) \stackrel{P}{\longrightarrow} \vec{0} \qquad (7)$$

with $\vec{0} = (0, \ldots, 0)'$ of dimension $2T \times 1$.

Proof. See Appendix A.

The following result establishes the asymptotic distribution of the CLS-estimators.

Theorem 3.1. Under the conditions above,

$$N^{1/2}(\hat{\boldsymbol{\theta}}_{CLS} - \boldsymbol{\theta}) \stackrel{D}{\longrightarrow} N(\vec{0},\; A_1^{-1} A_2 A_1^{-1}).$$
Proof. See Appendix A.

3.3. Weighted conditional least squares estimators (WCLS)

The WCLS-estimators $\hat{\boldsymbol{\theta}}_{WCLS} = (\hat{\alpha}_{1,WCLS}, \hat{\lambda}_{1,WCLS}, \ldots, \hat{\alpha}_{T,WCLS}, \hat{\lambda}_{T,WCLS})$ of $\boldsymbol{\theta}$ are obtained by minimizing the sum of the squared errors between the observations and their conditional means, weighted by the inverse of the conditional variance:

$$Q(\boldsymbol{\theta}) = \sum_{k=0}^{N-1} \sum_{i=1}^{T} \frac{(X_{i+kT} - \alpha_i X_{i+kT-1} - \lambda_i)^2}{\hat{\alpha}_i(1-\hat{\alpha}_i)X_{i+kT-1} + \hat{\lambda}_i},$$

yielding

$$\hat{\alpha}_{i,WCLS} = \frac{\displaystyle\sum_{k=0}^{N-1} \frac{X_{i+kT} X_{i+kT-1}}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})} \sum_{k=0}^{N-1} \frac{1}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})} - \sum_{k=0}^{N-1} \frac{X_{i+kT}}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})} \sum_{k=0}^{N-1} \frac{X_{i+kT-1}}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})}}{\displaystyle\sum_{k=0}^{N-1} \frac{X_{i+kT-1}^2}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})} \sum_{k=0}^{N-1} \frac{1}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})} - \left(\sum_{k=0}^{N-1} \frac{X_{i+kT-1}}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})}\right)^2}$$

and

$$\hat{\lambda}_{i,WCLS} = \frac{\displaystyle\sum_{k=0}^{N-1} \frac{X_{i+kT}}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})} - \hat{\alpha}_{i,WCLS} \sum_{k=0}^{N-1} \frac{X_{i+kT-1}}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})}}{\displaystyle\sum_{k=0}^{N-1} \frac{1}{\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1})}},$$

with $\hat{V}(\hat{\boldsymbol{\theta}}; X_{i+kT-1}) = \hat{\alpha}_{i,CLS}(1-\hat{\alpha}_{i,CLS})X_{i+kT-1} + \hat{\lambda}_{i,CLS}$, for $i = 1, \ldots, T$.

3.4. Conditional maximum likelihood estimation (CML)

The conditional likelihood function for the PINAR$(1)_T$ model can be shown to be

$$L(\boldsymbol{\theta} \mid X_1, \ldots, X_{NT}) = \prod_{k=0}^{N-1} \prod_{i=1}^{T} p_i(x_{i+kT} \mid x_{i-1+kT})$$
with

$$p_i(x_{i+kT} \mid x_{i-1+kT}) = e^{-\lambda_i} \sum_{m=0}^{\min(x_{i+kT},\, x_{i-1+kT})} \binom{x_{i-1+kT}}{m}\, \alpha_i^m (1-\alpha_i)^{x_{i-1+kT}-m}\, \frac{\lambda_i^{x_{i+kT}-m}}{(x_{i+kT}-m)!}.$$
The CML-estimators $\hat{\boldsymbol{\theta}}_{CML} = (\hat{\alpha}_{1,CML}, \hat{\lambda}_{1,CML}, \ldots, \hat{\alpha}_{T,CML}, \hat{\lambda}_{T,CML})$ are obtained by maximizing the conditional log-likelihood function

$$l(\boldsymbol{\theta} \mid X_1, \ldots, X_{NT}) = \sum_{k=0}^{N-1} \sum_{i=1}^{T} \ln(p_i(x_{i+kT} \mid x_{i-1+kT})),$$
yielding the systems

$$\begin{cases} \displaystyle\sum_{k=0}^{N-1} (x_{i+kT} - \alpha_i\,x_{i-1+kT}) = \lambda_i \sum_{k=0}^{N-1} \frac{p_i(x_{i+kT}-1 \mid x_{i-1+kT})}{p_i(x_{i+kT} \mid x_{i-1+kT})}, \\[2ex] \displaystyle\sum_{k=0}^{N-1} \frac{p_i(x_{i+kT}-1 \mid x_{i-1+kT})}{p_i(x_{i+kT} \mid x_{i-1+kT})} = N, \end{cases} \qquad (8)$$

for $i = 1, \ldots, T$. In order to solve this system, numerical procedures have to be employed. Note, however, that the CML estimates of the $\lambda_i$'s are readily available from those of the $\alpha_i$'s through the expression

$$\hat{\lambda}_{i,CML} = \frac{1}{N} \sum_{k=0}^{N-1} (x_{i+kT} - \hat{\alpha}_{i,CML}\,x_{i-1+kT}).$$
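In practice, rather than solving (8) directly, one can maximize the conditional log-likelihood numerically. A minimal sketch for a single season (assuming scipy is available; function names are ours):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb, factorial

rng = np.random.default_rng(7)

def trans_prob(x1, x0, a, lam):
    """p_i(x1 | x0): Binomial(x0, a) thinning convolved with a
    Poisson(lam) innovation, as in the conditional likelihood above."""
    m = np.arange(min(x1, x0) + 1)
    return float(np.sum(comb(x0, m) * a**m * (1.0 - a)**(x0 - m)
                        * np.exp(-lam) * lam**(x1 - m) / factorial(x1 - m)))

def cml_one_season(y, z):
    """Maximize the conditional log-likelihood of one season over pairs
    (z_k, y_k) = (X_{i-1+kT}, X_{i+kT}); returns (alpha_i, lambda_i)."""
    def nll(theta):
        a, lam = theta
        return -sum(np.log(trans_prob(yi, zi, a, lam)) for yi, zi in zip(y, z))
    res = minimize(nll, x0=np.array([0.5, 1.0]),
                   bounds=[(1e-3, 1 - 1e-3), (1e-3, None)])
    return res.x

# sanity check on a plain INAR(1) path (T = 1) with alpha = 0.6, lambda = 2
x, prev = np.empty(500, dtype=np.int64), 0
for t in range(500):
    prev = rng.binomial(prev, 0.6) + rng.poisson(2.0)
    x[t] = prev
a_hat, l_hat = cml_one_season(x[1:], x[:-1])
```

For the full PINAR$(1)_T$ model one would run this seasonwise, since the log-likelihood separates over $i$.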
The large sample distribution of the CML-estimators is given below.

Theorem 3.2. The CML-estimators are asymptotically normal, i.e.

$$\sqrt{N} \begin{pmatrix} \hat{\alpha}_{1,CML} - \alpha_1 \\ \hat{\lambda}_{1,CML} - \lambda_1 \\ \vdots \\ \hat{\alpha}_{T,CML} - \alpha_T \\ \hat{\lambda}_{T,CML} - \lambda_T \end{pmatrix} \stackrel{D}{\longrightarrow} N(\vec{0},\; I^{-1}), \qquad (9)$$

where

$$I = \begin{pmatrix} M_1 & 0_2 & \cdots & 0_2 \\ 0_2 & M_2 & \cdots & 0_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0_2 & 0_2 & \cdots & M_T \end{pmatrix}, \qquad M_i = \begin{pmatrix} -E\left[\dfrac{\partial^2 l(\boldsymbol{\theta})}{\partial \alpha_i^2}\right] & -E\left[\dfrac{\partial^2 l(\boldsymbol{\theta})}{\partial \alpha_i \partial \lambda_i}\right] \\[2ex] -E\left[\dfrac{\partial^2 l(\boldsymbol{\theta})}{\partial \alpha_i \partial \lambda_i}\right] & -E\left[\dfrac{\partial^2 l(\boldsymbol{\theta})}{\partial \lambda_i^2}\right] \end{pmatrix},$$

with the expectations given explicitly below.
Table 1. Sample mean and mean square error (in brackets) for the model with $\boldsymbol{\alpha} = (0.85, 0.50, 0.76, 0.63)$ and $\boldsymbol{\lambda} = (4, 1, 3, 2)$.

           α1            α2            α3            α4            λ1            λ2            λ3            λ4
n = 80
  YW    0.737 (0.039) 0.423 (0.018) 0.706 (0.036) 0.597 (0.028) 4.717 (1.965) 1.771 (1.728) 3.291 (1.356) 2.264 (1.488)
  CLS   0.740 (0.037) 0.423 (0.018) 0.706 (0.036) 0.597 (0.028) 4.697 (1.882) 1.771 (1.728) 3.291 (1.356) 2.255 (1.488)
  WCLS  0.741 (0.037) 0.422 (0.018) 0.706 (0.036) 0.597 (0.027) 4.692 (1.886) 1.786 (1.685) 3.289 (1.356) 2.259 (1.452)
  CML   0.838 (0.009) 0.493 (0.006) 0.769 (0.013) 0.649 (0.011) 3.995 (0.577) 1.078 (0.434) 2.885 (0.524) 1.846 (0.547)
n = 400
  YW    0.837 (0.007) 0.492 (0.003) 0.754 (0.008) 0.628 (0.006) 4.080 (0.326) 1.079 (0.297) 3.040 (0.277) 2.018 (0.311)
  CLS   0.837 (0.007) 0.492 (0.003) 0.754 (0.008) 0.628 (0.006) 4.078 (0.323) 1.079 (0.297) 3.040 (0.277) 2.020 (0.310)
  WCLS  0.838 (0.007) 0.492 (0.003) 0.754 (0.008) 0.627 (0.006) 4.077 (0.321) 1.083 (0.276) 3.039 (0.275) 2.023 (0.305)
  CML   0.854 (0.002) 0.502 (0.002) 0.764 (0.003) 0.635 (0.002) 3.960 (0.131) 0.975 (0.129) 2.975 (0.108) 1.963 (0.125)
n = 2000
  YW    0.847 (0.002) 0.499 (0.001) 0.759 (0.002) 0.629 (0.001) 4.017 (0.074) 1.014 (0.071) 3.005 (0.056) 2.004 (0.065)
  CLS   0.847 (0.002) 0.499 (0.001) 0.759 (0.002) 0.629 (0.001) 4.016 (0.074) 1.014 (0.071) 3.005 (0.057) 2.005 (0.065)
  WCLS  0.847 (0.002) 0.499 (0.001) 0.760 (0.001) 0.629 (0.001) 4.015 (0.073) 1.017 (0.064) 3.003 (0.055) 2.004 (0.062)
  CML   0.849 (0.001) 0.500 (0.0003) 0.761 (0.001) 0.631 (0.0004) 4.003 (0.028) 0.998 (0.026) 2.993 (0.024) 1.991 (0.025)
The expectations appearing in $M_i$ are

$$E\left[\frac{\partial^2 l(\boldsymbol{\theta})}{\partial \alpha_i^2}\right] = \frac{N}{\alpha_i^2(1-\alpha_i)^2} \sum_{x_{i+T}=0}^{+\infty} \sum_{x_{i-1+T}=0}^{+\infty} P(X_{i-1+T} = x_{i-1+T}) \Big\{ [(2\alpha_i - 1)x_{i+T} - \alpha_i^2\,x_{i-1+T}]\, p_i(x_{i+T} \mid x_{i-1+T}) + 2(1-\alpha_i)\lambda_i\, p_i(x_{i+T}-1 \mid x_{i-1+T}) + \lambda_i^2\, p_i(x_{i+T}-2 \mid x_{i-1+T}) - \lambda_i^2\, \frac{p_i^2(x_{i+T}-1 \mid x_{i-1+T})}{p_i(x_{i+T} \mid x_{i-1+T})} \Big\},$$

$$E\left[\frac{\partial^2 l(\boldsymbol{\theta})}{\partial \alpha_i \partial \lambda_i}\right] = \frac{N}{\alpha_i(1-\alpha_i)} \sum_{x_{i+T}=0}^{+\infty} \sum_{x_{i-1+T}=0}^{+\infty} P(X_{i-1+T} = x_{i-1+T}) \Big\{ p_i(x_{i+T}-1 \mid x_{i-1+T}) + \lambda_i\, p_i(x_{i+T}-2 \mid x_{i-1+T}) - \lambda_i\, \frac{p_i^2(x_{i+T}-1 \mid x_{i-1+T})}{p_i(x_{i+T} \mid x_{i-1+T})} \Big\}$$

and

$$E\left[\frac{\partial^2 l(\boldsymbol{\theta})}{\partial \lambda_i^2}\right] = N \sum_{x_{i+T}=0}^{+\infty} \sum_{x_{i-1+T}=0}^{+\infty} P(X_{i-1+T} = x_{i-1+T}) \Big\{ p_i(x_{i+T}-2 \mid x_{i-1+T}) - \frac{p_i^2(x_{i+T}-1 \mid x_{i-1+T})}{p_i(x_{i+T} \mid x_{i-1+T})} \Big\}.$$
Proof. See Appendix A.
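The asymptotic covariance $A_1^{-1} A_2 A_1^{-1}$ of Theorem 3.1 can be assembled in closed form once the periodic means are known, since the Poisson marginal (Theorem 2.1) fixes the moments $\mu_{i,j}$. A sketch (function names ours, not from the paper):

```python
import numpy as np

def poisson_moments(mu):
    """First three raw moments of a Poisson(mu) random variable."""
    return mu, mu + mu**2, mu + 3 * mu**2 + mu**3

def cls_blocks(alpha, lam, sweeps=200):
    """2 x 2 diagonal blocks of the CLS sandwich A1^{-1} A2 A1^{-1}
    (Theorem 3.1); block i uses the marginal moments of season i-1
    (season 0 meaning season T)."""
    T = len(alpha)
    mu = np.zeros(T)
    for _ in range(sweeps):                 # fixed point of mu_j = alpha_j mu_{j-1} + lam_j
        for j in range(T):
            mu[j] = alpha[j] * mu[j - 1] + lam[j]
    blocks = []
    for i in range(T):
        m1, m2, m3 = poisson_moments(mu[i - 1])
        C = np.array([[m2, m1], [m1, 1.0]])               # C_i (note m2 = mu + mu^2)
        v = alpha[i] * (1.0 - alpha[i])
        O = np.array([[v * m3 + lam[i] * m2, v * m2 + lam[i] * m1],
                      [v * m2 + lam[i] * m1, v * m1 + lam[i]]])  # Omega_i
        Cinv = np.linalg.inv(C)
        blocks.append(Cinv @ O @ Cinv)
    return blocks

blocks = cls_blocks([0.85, 0.50, 0.76, 0.63], [4.0, 1.0, 3.0, 2.0])
```

Each block is positive definite, as expected for an asymptotic covariance matrix.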
Fig. 3. Boxplots of the biases for $\boldsymbol{\alpha}$ for the set of parameters $\boldsymbol{\alpha} = (0.85, 0.50, 0.76, 0.63)$ and $\boldsymbol{\lambda} = (4, 1, 3, 2)$.
4. Simulation study

The aim of this section is to illustrate the theoretical findings of Section 3 and to assess the small, moderate and large sample behavior of the YW, CLS, WCLS and CML estimators. Throughout the analysis it is assumed that $T = 4$. The simulation study considers the parameter combination $\boldsymbol{\alpha} = (0.85, 0.50, 0.76, 0.63)$ and $\boldsymbol{\lambda} = (4, 1, 3, 2)$. We simulated time series of length $n = NT = 80, 400, 2000$, with 1000 independent replicates each. The results are summarized in Table 1. A closer look at the table shows the superiority of the CML method in terms of both bias and mean square error (MSE) as the sample size increases. The simulation study also reveals that there is no gain in considering the WCLS method over the CLS method, the differences between their sample means and mean square errors being negligible. Note that the YW, CLS and WCLS estimates of $\boldsymbol{\alpha}$, componentwise, tend to be biased to the left and negatively skewed; see Fig. 3. The bias and skewness, however, are reduced as the sample size increases. This is in contrast with the YW, CLS and WCLS
Fig. 4. Boxplots of the biases for $\boldsymbol{\lambda}$ for the set of parameters $\boldsymbol{\alpha} = (0.85, 0.50, 0.76, 0.63)$ and $\boldsymbol{\lambda} = (4, 1, 3, 2)$.
estimates of $\boldsymbol{\lambda}$, which tend to be biased to the right and positively skewed, especially for the largest values of the $\lambda$'s. As expected, both the bias and the skewness are also reduced as the sample size increases; see Fig. 4. This is in agreement with the asymptotic properties of the estimators: asymptotic unbiasedness and consistency.
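The Monte Carlo design above can be reproduced in outline. A sketch estimating the bias and MSE of the CLS estimator of $\alpha_2$ (with a reduced replicate count to keep the runtime down; the paper uses 1000 replicates):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, lam, T = [0.85, 0.50, 0.76, 0.63], [4.0, 1.0, 3.0, 2.0], 4

def simulate(n):
    """One PINAR(1)_4 path of length n (binomial thinning + Poisson innovations)."""
    x, prev = np.empty(n, dtype=np.int64), 0
    for t in range(n):
        j = t % T
        prev = rng.binomial(prev, alpha[j]) + rng.poisson(lam[j])
        x[t] = prev
    return x

def cls_alpha2(x):
    """CLS estimate of alpha_2: OLS slope of X_{2+kT} on X_{1+kT}."""
    y, z = x[1::T].astype(float), x[0::T].astype(float)
    n = len(y)
    return (n * (y @ z) - y.sum() * z.sum()) / (n * (z @ z) - z.sum() ** 2)

est = np.array([cls_alpha2(simulate(400)) for _ in range(200)])
bias = est.mean() - alpha[1]
mse = np.mean((est - alpha[1]) ** 2)
```

For $n = 400$ this should land near the Table 1 values for CLS at that sample size (mean around 0.49, MSE of a few thousandths).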
5. Conclusions

This paper has presented the periodic integer-valued autoregressive model of order one with period $T$, driven by a periodic sequence of independent Poisson-distributed random variables, extending the conventional periodic autoregressive model. The stationarity and ergodicity of the process were established. Method of moments, least squares-type and likelihood-based estimators of the model parameters were derived and their asymptotic properties obtained. Moreover, an important issue in fitting conventional PARMA models to periodic time series lies with parsimony: even very simple PARMA models can have an inordinately large number of parameters. This is also true when dealing with PINAR models. Therefore, the development of procedures for dimensionality reduction is a pressing problem. This remains a topic of future research.
Acknowledgment The authors would like to express their gratitude to an anonymous referee for all helpful comments and constructive criticism.
Appendix A
Proof of Proposition 2.1. It is easy to see that $(X_t)$ is a Markov chain on $\mathbb{N}_0$ with one-step transition probabilities

$$P_{ab}^{(1)} = P(X_t = b \mid X_{t-1} = a) = \sum_{m=0}^{\min(b,a)} \binom{a}{m}\, \phi_t^m (1-\phi_t)^{a-m}\, e^{-\upsilon_t}\, \frac{\upsilon_t^{b-m}}{(b-m)!} > 0. \qquad (10)$$
From (10) it follows that $(X_t)$ is an irreducible and aperiodic Markov chain. Thus, it is either positive recurrent or $\lim_{k\to+\infty} P_{ab}^{(j+kT)} = 0$ for any $a, b \in \mathbb{N}_0$. By iterating Eq. (1) and rearranging terms, it follows that

$$X_{j+kT} \stackrel{d}{=} \beta_{j+kT,\,j+kT} \circ X_0 + \sum_{m=1}^{k} \sum_{a=0}^{T-1} (\beta_{j,j}\,\beta_{T,a}\,\beta_{T,T}^{m-1}) \circ Z_{T(m+1)-a} + \sum_{m=0}^{j-1} \beta_{j,m} \circ Z_{j-m} = Y_{j+kT},\ \text{say}. \qquad (11)$$
The equality in distribution holds both unconditionally and conditionally given $X_0 = x_0$, say. Note that the first term on the right-hand side of (11) is $o_p(1)$, both unconditionally and conditionally. Next we show that $Y_{j+kT}$ converges almost surely. For all $\varepsilon > 0$ and $k, n \in \mathbb{N}_0$,

$$P\left[\max_{1 \le l \le n} |Y_{j+kT} - Y_{j+T(k+l)}| > \varepsilon\right] \le \varepsilon^{-1} E\left[\beta_{j,j}\beta_{T,T}^{k} \circ X_0 - \beta_{j,j}\beta_{T,T}^{k+n} \circ X_0 + \sum_{m=1}^{n}\sum_{a=0}^{T-1} (\beta_{j,j}\,\beta_{T,a}\,\beta_{T,T}^{m+k-1}) \circ Z_{T(m+k+1)-a}\right]$$
$$= \varepsilon^{-1}\left(\beta_{j,j}\,\beta_{T,T}^{k}(1 - \beta_{T,T}^{n})\,E(X_0) + \beta_{j,j}\,\beta_{T,T}^{k-1} \sum_{m=1}^{n}\sum_{a=0}^{T-1} \beta_{T,a}\,\beta_{T,T}^{m}\,\lambda_{T-a}\right) \to 0 \quad \text{as } k \to +\infty.$$

Note that the variable $V_j^{(k)} = \sum_{m=1}^{k}\sum_{a=0}^{T-1} (\beta_{j,j}\,\beta_{T,a}\,\beta_{T,T}^{m-1}) \circ Z_{T(m+1)-a} + \sum_{m=0}^{j-1} \beta_{j,m} \circ Z_{j-m}$ converges in distribution (both unconditionally and conditionally) to the same limit. Hence, it follows that

$$\lim_{k\to+\infty} P(X_{j+kT} = b \mid X_0 = a) = \lim_{k\to+\infty} P(Y_{j+kT} = b \mid X_0 = a) = \lim_{k\to+\infty} P(V_j^{(k)} = b) = \lim_{k\to+\infty} P(Y_{j+kT} = b) = \lim_{k\to+\infty} P(X_{j+kT} = b),$$

since $V_j^{(k)}$ is independent of $X_0$. Suppose now that $\lim_{k\to+\infty} P_{ab}^{(j+kT)} = 0 = P(Y_j = b)$ for any $a, b \in \mathbb{N}_0$, where $Y_j$ represents the almost sure limit of $Y_{j+kT}$. However, this contradicts the fact that $P(Y_j < \infty) = 1$. This proves the first part of the proposition. Almost sure convergence of $V_j^{(k)}$ follows by the same argument as above; we skip the details. Finally, in order to prove the
convergence in $L^2$, we proceed as follows:

$$E[(V_j^{(k)} - V_j^{(k+n)})^2] = \sum_{m=1}^{n} \sum_{\substack{l=1 \\ l \ne m}}^{n} \sum_{a=0}^{T-1} \sum_{b=0}^{T-1} \beta_{j,j}^2\,\beta_{T,a}\,\beta_{T,b}\,\beta_{T,T}^{m+l+2k-2}\,\lambda_{T-a}\,\lambda_{T-b} + \sum_{m=1}^{n} \sum_{a=0}^{T-1} \left(\beta_{j,j}\,\beta_{T,a}\,\beta_{T,T}^{m+k-1}\,\lambda_{T-a} + \beta_{j,j}^2\,\beta_{T,a}^2\,\beta_{T,T}^{2m+2k-2}\,\lambda_{T-a}^2\right) + \sum_{m=1}^{n} \sum_{a=0}^{T-1} \sum_{\substack{b=0 \\ b \ne a}}^{T-1} \beta_{j,j}^2\,\beta_{T,a}\,\beta_{T,b}\,\beta_{T,T}^{2m+2k-2}\,\lambda_{T-a}\,\lambda_{T-b}.$$

After some tedious calculations, we obtain

$$E[(V_{j+kT} - V_{j+T(k+n)})^2] = \beta_{T,T}^{2k+1}\,\beta_{j,j}^2\,(\text{constant}) + \beta_{T,T}^{k}\,\beta_{j,j}\,(\text{constant}) + \beta_{T,T}^{2k}\,\beta_{j,j}^2\,(\text{constant}) \to 0, \quad k \to +\infty.$$

Therefore, for all $n > 0$, $E[|V_{j+kT} - V_{j+T(k+n)}|^2] \to 0$ as $k \to +\infty$. This concludes the proof. □
Proof of Theorem 2.1. We first prove the 'if' part: assume that $Z_{j+kT} \sim P(\lambda_j)$. Note that $X_{j+kT}$ can be rewritten as

$$X_{j+kT} = \beta_{T,T} \circ X_{j+T(k-1)} + \underbrace{\sum_{i=0}^{j-1} \beta_{j,i} \circ Z_{j-i} + \sum_{i=0}^{T-j-1} \beta_{j,j}\,\beta_{T,i} \circ Z_{T-i}}_{Y_j}.$$

Since $\sum_{m=0}^{kT-1} \beta_{j+kT,m} \circ Z_{j+kT-m} \sim P((1-\beta_{T,T}^{k})\mu_j)$, it follows that $Y_j \sim P((1-\beta_{T,T})\mu_j)$. Moreover, the probability generating function of $X_{j+kT}$ takes the form $P_{X_{j+kT}}(r) = P_{X_{j+T(k-1)}}(1-\beta_{T,T}+\beta_{T,T}r)\,P_{Y}(r)$, which implies that

$$\frac{P_{X_{j+kT}}(r)}{P_{X_{j+T(k-1)}}(1-\beta_{T,T}+\beta_{T,T}r)} = \frac{e^{-\mu_j(1-r)}}{e^{-\mu_j(1-(1-\beta_{T,T}+\beta_{T,T}r))}} = e^{-\mu_j(1-\beta_{T,T})(1-r)},$$

which leads us to conclude that $P_{X_{j+kT}}(r) = e^{-\mu_j(1-r)}$ is the probability generating function of a Poisson-distributed random variable with mean $\mu_j$.

Finally, we prove the 'only if' part. If $X_{j+kT}$ is a Poisson random variable, then

$$e^{-\mu_j(1-r)} = P_{X_{j+T(k-1)}}(1-\beta_{T,T}+\beta_{T,T}r)\,P_Y(r) = e^{-\mu_j \beta_{T,T}(1-r)}\,P_Y(r),$$

providing that

$$P_Y(r) = \prod_{i=0}^{j-1} e^{-\beta_{j,i}\lambda_{j-i}(1-r)} \prod_{i=0}^{T-j-1} e^{-\beta_{j,j}\beta_{T,i}\lambda_{T-i}(1-r)} \qquad (12)$$
$$= \prod_{i=0}^{j-1} P_{Z_{j-i}}(1-\beta_{j,i}+\beta_{j,i}r) \prod_{i=0}^{T-j-1} P_{Z_{T-i}}(1-\beta_{j,j}\beta_{T,i}+\beta_{j,j}\beta_{T,i}r), \qquad (13)$$

which is the probability generating function of a Poisson r.v. of mean $\sum_{m=0}^{j-1}\beta_{j,m}\lambda_{j-m} + \beta_{j,j}\sum_{m=0}^{T-j-1}\beta_{T,m}\lambda_{T-m} = (1-\beta_{T,T})\mu_j$. By (12) and (13) it follows that

$$P_{Z_{j-i}}(1-\beta_{j,i}+\beta_{j,i}r) = e^{-\lambda_{j-i}(1-(1-\beta_{j,i}+\beta_{j,i}r))} \quad \text{and} \quad P_{Z_{T-i}}(1-\beta_{j,j}\beta_{T,i}+\beta_{j,j}\beta_{T,i}r) = e^{-\lambda_{T-i}(1-(1-\beta_{j,j}\beta_{T,i}+\beta_{j,j}\beta_{T,i}r))}.$$

Thus, $P_{Z_{j-i}}(s) = e^{-\lambda_{j-i}(1-s)}$ and $P_{Z_{T-i}}(r) = e^{-\lambda_{T-i}(1-r)}$, which implies $Z_i \sim P(\lambda_i)$ for $i = 1, \ldots, T$. This completes the proof. □
Proof of Lemma 3.1. First, note that $E[N^{-1}\,\partial S_N(\boldsymbol{\theta})/\partial\boldsymbol{\theta}] = A_1$. Thus the proof of (5) is concluded if we show that

$$V\left[N^{-1}\frac{\partial S_N(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right] = E\left[\left(N^{-1}\frac{\partial S_N(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right)^2\right] - A_1^2 \to 0_{2T}. \qquad (14)$$

In proving (14) observe that

$$N^{-2}\left(\frac{\partial S_N(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right)^2 = \begin{pmatrix} M_{1,N} & 0_2 & \cdots & 0_2 \\ 0_2 & M_{2,N} & \cdots & 0_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0_2 & 0_2 & \cdots & M_{T,N} \end{pmatrix}$$

with

$$M_{i,N} = N^{-2} \sum_{n=0}^{N-1} \sum_{k=0}^{N-1} \begin{pmatrix} X_{i-1+nT}^2 X_{i-1+kT}^2 + X_{i-1+nT} X_{i-1+kT} & X_{i-1+nT}^2 X_{i-1+kT} + X_{i-1+nT} \\ X_{i-1+nT} X_{i-1+kT}^2 + X_{i-1+kT} & X_{i-1+nT} X_{i-1+kT} + 1 \end{pmatrix}.$$
After some tedious but straightforward calculations it is possible to prove that, for $r, w = 0, 1, 2$,

$$\lim_{N\to+\infty} N^{-2} \sum_{n=0}^{N-1} \sum_{k=0}^{N-1} E[X_{i-1+nT}^r X_{i-1+kT}^w] = \frac{1}{2}\left(I_{\{r=0\}} + I_{\{r=1\}}\mu_{i-1,1} + I_{\{r=2\}}\mu_{i-1,2}\right)\mu_{i-1,w} + \frac{1}{2}\left(I_{\{w=0\}} + I_{\{w=1\}}\mu_{i-1,1} + I_{\{w=2\}}\mu_{i-1,2}\right)\mu_{i-1,r}.$$

Hence,

$$\lim_{N\to+\infty} E[M_{i,N}] = C_i^2 = \begin{pmatrix} \mu_{i-1,2}^2 + \mu_{i-1,1}^2 & \mu_{i-1,2}\,\mu_{i-1,1} + \mu_{i-1,1} \\ \mu_{i-1,2}\,\mu_{i-1,1} + \mu_{i-1,1} & \mu_{i-1,1}^2 + 1 \end{pmatrix}$$

for $i = 1, \ldots, T$, providing

$$\lim_{N\to+\infty} E\left[\left(N^{-1}\frac{\partial S_N(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}} - A_1\right)\left(N^{-1}\frac{\partial S_N(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}} - A_1\right)'\right] = 0_{2T}.$$

This proves the result in (14). To prove (6) we proceed as follows: by symmetry of the matrix in (6),

$$N^{-1} \sum_{k=0}^{N-1} \xi_k(\boldsymbol{\theta})\,\xi_k(\boldsymbol{\theta})' = N^{-1} \sum_{k=0}^{N-1} \begin{pmatrix} C_{k11} & C_{k12} & \cdots & C_{k1T} \\ C_{k12} & C_{k22} & \cdots & C_{k2T} \\ \vdots & \vdots & \ddots & \vdots \\ C_{k1T} & C_{k2T} & \cdots & C_{kTT} \end{pmatrix} \qquad (15)$$

with

$$C_{kij} = \begin{pmatrix} U_{i+kT} X_{i-1+kT}\, U_{j+kT} X_{j-1+kT} & U_{i+kT} X_{i-1+kT}\, U_{j+kT} \\ U_{i+kT}\, U_{j+kT} X_{j-1+kT} & U_{i+kT}\, U_{j+kT} \end{pmatrix}, \quad j \ge i.$$

Note that the $(m,l)$-th entry of the matrix $C_{kij}$, denoted by $(C_{kij})_{ml}$, for $i, j = 1, \ldots, T$, $j \ne i$, and $l, m = 1, 2$, can be expressed in the form $(C_{kij})_{ml} = U_{i+kT} X_{i-1+kT}^{2-l}\, U_{j+kT} X_{j-1+kT}^{2-m}$, whereas $(C_{kii})_{ml} = U_{i+kT}^2 X_{i-1+kT}^{4-m-l}$. Consequently,

$$E[(C_{kij})_{ml}] = E\big[U_{i+kT} X_{i-1+kT}^{2-l} X_{j-1+kT}^{2-m}\, \underbrace{E[U_{j+kT} \mid X_{i-1+kT}, U_{i+kT}, X_{j-1+kT}]}_{0}\big] = 0$$

and

$$E[(C_{kii})_{ml}] = E\big[X_{i-1+kT}^{4-m-l}\, \underbrace{E[(X_{i+kT} - \alpha_i X_{i-1+kT} - \lambda_i)^2 \mid X_{i-1+kT}]}_{V[X_{i+kT} \mid X_{i-1+kT}]}\big] = \alpha_i(1-\alpha_i)\mu_{i-1,(5-m-l)} + \lambda_i\,\mu_{i-1,(4-m-l)}.$$

Thus, it follows that $E[N^{-1}\sum_{k=0}^{N-1} \xi_k(\boldsymbol{\theta})\,\xi_k(\boldsymbol{\theta})'] = A_2$. Again, after some tedious but straightforward calculations it can be proved that

$$E\left[\left(N^{-1}\sum_{k=0}^{N-1}\xi_k(\boldsymbol{\theta})\,\xi_k(\boldsymbol{\theta})' - A_2\right)\left(N^{-1}\sum_{k=0}^{N-1}\xi_k(\boldsymbol{\theta})\,\xi_k(\boldsymbol{\theta})' - A_2\right)'\right] \to 0_{2T},$$

as $N \to \infty$. This proves the result in (6). The proof of Lemma 3.1 is complete upon showing the result in (7). To that end, first note that $E[U_{i+kT}] = 0$ and $E[U_{i+kT} X_{i-1+kT}] = E[X_{i-1+kT}\,E[U_{i+kT} \mid X_{i-1+kT}]] = 0$, for $i = 1, \ldots, T$, leading to $E[N^{-1}S_N(\boldsymbol{\theta})] = \vec{0}$. In order to calculate the variance–covariance matrix of $N^{-1}S_N(\boldsymbol{\theta})$, the following preliminary results are needed:

$$N^{-1}S_N(\boldsymbol{\theta})\,(N^{-1}S_N(\boldsymbol{\theta}))' = N^{-2} \sum_{n=0}^{N-1} \sum_{\substack{k=0 \\ k \ne n}}^{N-1} \xi_n(\boldsymbol{\theta})\,\xi_k(\boldsymbol{\theta})' + N^{-2} \sum_{n=0}^{N-1} \xi_n(\boldsymbol{\theta})\,\xi_n(\boldsymbol{\theta})'$$

with

$$\sum_{n=0}^{N-1} \sum_{\substack{k=0 \\ k \ne n}}^{N-1} \xi_n(\boldsymbol{\theta})\,\xi_k(\boldsymbol{\theta})' = \sum_{n=0}^{N-1} \sum_{\substack{k=0 \\ k \ne n}}^{N-1} \begin{pmatrix} F_{1,1} & F_{1,2} & \cdots & F_{1,T} \\ F_{2,1} & F_{2,2} & \cdots & F_{2,T} \\ \vdots & \vdots & \ddots & \vdots \\ F_{T,1} & F_{T,2} & \cdots & F_{T,T} \end{pmatrix},$$

where the $F_{i,j}$, for $i, j = 1, \ldots, T$, are zero-mean $2 \times 2$ matrices with $(m,l)$-th entry given by

$$(F_{i,j})_{ml} = U_{i+nT} X_{i-1+nT}^{2-l}\, U_{j+kT} X_{j-1+kT}^{2-m}, \quad m, l = 1, 2.$$
Thus, the variance–covariance matrix of $N^{-1}S_N(\boldsymbol{\theta})$ takes the form

$$E[N^{-1}S_N(\boldsymbol{\theta})\,(N^{-1}S_N(\boldsymbol{\theta}))'] = N^{-2} \sum_{n=0}^{N-1} E[\xi_n(\boldsymbol{\theta})\,\xi_n(\boldsymbol{\theta})'] = N^{-1}A_2 \to 0_{2T},$$

as $N \to \infty$. The proof is complete. □
Proof of Theorem 3.1. Let $\mathcal{F}_{k-1} = \sigma(X_{kT}, X_{kT-1}, \ldots)$. Since $E[\xi_k(\boldsymbol{\theta}) \mid \mathcal{F}_{k-1}] = \vec{0}$, it turns out that $(\xi_k(\boldsymbol{\theta}))$ is a zero-mean martingale difference sequence with respect to $(\mathcal{F}_k)$. By (6) it follows that the central limit theorem for martingales (Hall and Heyde, 1980) applies to $S_N(\boldsymbol{\theta})$, providing

$$N^{-1/2} S_N(\boldsymbol{\theta}) \stackrel{D}{\longrightarrow} N(\vec{0}, A_2), \quad N \to \infty. \qquad (16)$$

Now consider the first-order Taylor expansion

$$S_N(\hat{\boldsymbol{\theta}}_{CLS}) = S_N(\boldsymbol{\theta}) + \frac{\partial S_N(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\,(\hat{\boldsymbol{\theta}}_{CLS} - \boldsymbol{\theta}) + R_N, \qquad (17)$$

where the remainder term satisfies $R_N = O_p(1)$ as $N \to \infty$. Now use $S_N(\hat{\boldsymbol{\theta}}_{CLS}) = 0$ in (17) to get

$$N^{1/2}(\hat{\boldsymbol{\theta}}_{CLS} - \boldsymbol{\theta}) = -\left(N^{-1}\frac{\partial S_N(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)^{-1} N^{-1/2} S_N(\boldsymbol{\theta}) + o_p(1). \qquad (18)$$

By (5) in Lemma 3.1, Eq. (16) and Slutsky's theorem applied to Eq. (18), the result follows. □
Proof of Theorem 3.2. The proof follows easily as a generalization, for the periodic case, of the arguments given by Franke and Seligmann (1993, pp. 325–326). We omit the details. &
References

Ahn, S., Gyemin, L., Jongwoo, J., 2000. Analysis of the M/D/1-type queue based on an integer-valued autoregressive process. Oper. Res. Lett. 27, 235–241.
Basawa, I.V., Lund, R.B., 2001. Large sample properties of parameter estimates for periodic ARMA models. J. Time Ser. Anal. 22, 651–666.
Basawa, I.V., Lund, R.B., Shao, Q., 2004. First-order seasonal autoregressive processes with periodically varying parameters. Statist. Probab. Lett. 67, 299–306.
Bennett, W.R., 1958. Statistics of regenerative digital transmission. Bell Syst. Tech. J. 37, 1501–1542.
Bentarzi, M., Hallin, M., 1994. On the invertibility of periodic moving-average models. J. Time Ser. Anal. 15, 263–268.
Bloomfield, P., Hurd, H.L., Lund, R.B., 1994. Periodic correlation in stratospheric ozone data. J. Time Ser. Anal. 15, 127–150.
Brännäs, K., Nordström, J., 2006. Tourist accommodation effects of festivals. Tourism Econ. 12, 291–302.
Brännäs, K., Hellström, J., Nordström, J., 2002. A new approach to modelling and forecasting monthly guest nights in hotels. Internat. J. Forecast. 18, 19–30.
Cui, Y., Lund, R.B., 2009. A new look at time series of counts. Biometrika 96, 781–792.
Franke, J., Seligmann, T., 1993. Conditional maximum likelihood estimates for INAR(1) processes and their application to modelling epileptic seizure counts. In: Subba Rao, T. (Ed.), Developments in Time Series Analysis. Chapman & Hall, London, pp. 310–330.
Franses, P.H., 1994. A multivariate approach to modeling univariate seasonal time series. J. Econometrics 63, 133–151.
Franses, P.H., Paap, R., 2004. Periodic Time Series. Oxford University Press, Oxford.
Freeland, R.K., McCabe, B., 2005. Asymptotic properties of CLS estimators in the Poisson AR(1) model. Statist. Probab. Lett. 73, 147–153.
Gardner, W.A., Napolitano, A., Paura, L., 2006. Cyclostationarity: half a century of research. Signal Process. 86, 639–697.
Gladyshev, E.G., 1961. Periodically correlated random sequences. Soviet Math. 2, 385–388.
Gladyshev, E.G., 1963. Periodically and almost periodically correlated random processes with continuous time parameter. Theory Probab. Appl. 8, 173–177.
Haldrup, N., Hyllerberg, S., Pons, G., Sansó, A., 2007. Common periodic correlation features and the interaction of stocks and flows in daily airport data. J. Bus. Econom. Statist. 25, 21–32.
Hall, P., Heyde, C.C., 1980. Martingale Limit Theory and its Applications. Academic Press, New York.
Hurd, H.L., Miamee, A., 2007. Periodically Correlated Random Sequences: Spectral Theory and Practice. Wiley, New Jersey.
Latour, A., 1997. The multivariate GINAR(p) process. Adv. Appl. Probab. 29, 228–248.
Lund, R.B., Basawa, I.V., 2000. Recursive prediction and likelihood evaluation for periodic ARMA models. J. Time Ser. Anal. 21, 75–93.
Lund, R.B., Hurd, H.L., Bloomfield, P., Smith, R.L., 1995. Climatological time series with periodic correlation. J. Climate 11, 2787–2809.
Lund, R.B., Shao, Q., Basawa, I.V., 2006. Parsimonious periodic time series modeling. Aust. N. Z. J. Statist. 48, 33–47.
Lund, R.B., Wang, X.L., Lu, Q.Q., Reeves, J., Gallagher, C., Feng, Y., 2007. Changepoint detection in periodic and autocorrelated time series. J. Climate 11, 5178–5190.
McCabe, B.P.M., Martin, G.M., 2005. Bayesian prediction of low count time series. Internat. J. Forecast. 21, 315–330.
Monteiro, M., Pereira, I., Scotto, M.G., 2008. Optimal alarm systems for count processes. Comm. Statist. Theory Methods 37, 3054–3076.
Quoreshi, A.M.M.S., 2006. Bivariate time series modelling of financial count data. Comm. Statist. Theory Methods 35, 1343–1358.
Salas, J.D., 1993. Analysis and modeling of hydrologic time series. In: Maidment, D.R. (Ed.), Handbook of Hydrology. McGraw-Hill, New York, pp. 12–15 (Chapter 19).
Shao, Q., Ni, P.P., 2004. Least-squares estimation and ANOVA for periodic autoregressive time series. Statist. Probab. Lett. 69, 287–297.
Shao, Q., 2006. Mixture periodic autoregressive time series models. Statist. Probab. Lett. 76, 609–618.
Tesfaye, Y.G., Meerschaert, M.M., Anderson, P.L., 2006. Identification of PARMA models and their application to the modeling of riverflows. Water Resour. Res. 42, W01419 (11pp.).
Vecchia, A.V., 1985. Periodic autoregressive-moving average (PARMA) modelling with applications to water resources. Water Res. Bull. 21, 721–730.
Weiß, C.H., 2008. The combined INAR(p) models for time series of counts. Statist. Probab. Lett. 78, 1817–1822.
Weiß, C.H., 2009. Controlling jumps in correlated processes of Poisson counts. Appl. Stochastic Models Bus. Ind. 25, 551–564.
Zhou, J., Basawa, I.V., 2005. Least-squares estimation for bifurcation autoregressive processes. Statist. Probab. Lett. 74, 77–88.