ARTICLE IN PRESS Journal of Statistical Planning and Inference 140 (2010) 1110–1124
Random sampling of long-memory stationary processes

Anne Philippe (a), Marie-Claude Viano (b)

(a) Université de Nantes, Laboratoire de Mathématiques Jean Leray, UMR CNRS 6629, 2 rue de la Houssinière - BP 92208, 44322 Nantes Cedex 3, France
(b) Laboratoire Paul Painlevé, UMR CNRS 8524, UFR de Mathématiques - Bât. M2, Université Lille 1, 59655 Villeneuve d'Ascq Cedex, France
Article history: Received 7 December 2008; received in revised form 14 September 2009; accepted 24 October 2009; available online 3 November 2009.

Abstract. This paper investigates the second order properties of a stationary process after random sampling. While a short memory process always gives rise to a short memory one, we prove that long memory can disappear when the sampling law has heavy enough tails. We prove that under rather general conditions the existence of the spectral density is preserved by random sampling. We also investigate the effects of deterministic sampling on seasonal long memory. © 2009 Elsevier B.V. All rights reserved.

MSC: 60G10; 60G12; 62M10; 62M15

Keywords: Aliasing; FARIMA processes; Generalised fractional processes; Regularly varying covariance; Seasonal long-memory; Spectral density
1. Introduction

The effects of random sampling on the second order characteristics, and more specially on the memory, of a stationary second order discrete time process are the subject of this paper. We start from X = (X_n)_{n≥0}, a stationary discrete time second order process with covariance sequence σ_X(h), and a random walk (T_n)_{n≥0} independent of X. The sampling intervals Δ_j = T_j − T_{j−1} are independent identically distributed integer random variables with common probability law S. We fix T_0 = 0. Throughout the paper we consider the sampled process Y defined by

    Y_n = X_{T_n},   n = 0, 1, ....    (1.1)
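The construction (1.1) is straightforward to simulate. The sketch below (Python) builds a trajectory of X, a random walk (T_n), and the sampled process Y_n = X_{T_n}; the AR(1) process standing in for X and the geometric law for S are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi=0.6):
    """Simulate n steps of a stationary AR(1) process (illustrative stand-in for X)."""
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1 - phi**2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def sample_along_random_walk(x, deltas):
    """Return Y_n = X_{T_n}, with T_0 = 0 and T_n = Delta_1 + ... + Delta_n."""
    t = np.concatenate(([0], np.cumsum(deltas)))   # the random walk (T_n)
    t = t[t < len(x)]                              # keep instants inside the trajectory
    return x[t]

x = ar1(10_000)
deltas = rng.geometric(p=0.5, size=5_000)          # S = geometric law on {1, 2, ...}
y = sample_along_random_walk(x, deltas)
print(len(y), y[:3])
```

The same skeleton (replacing `ar1` and the law of `deltas`) reproduces every sampling scheme considered in the paper.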
The particular case where S = δ_k, the Dirac measure at point k, shall be referred to as deterministic sampling (some authors prefer systematic or periodic sampling). Either because of practical constraints or because it can be used as a way to reduce serial dependence, deterministic sampling is quite an old and common practice in all areas using time series, such as econometrics, signal processing, climatology and astronomy.

E-mail address: [email protected] (A. Philippe, corresponding author). doi:10.1016/j.jspi.2009.10.011
Introducing some randomness in the sampling procedure, as a possible method to avoid the well known aliasing phenomenon, was proposed by several authors. See, without exhaustivity, Beutler (1970), Masry (1971) and Shapiro and Silverman (1960), where several random schemes are introduced. The particular idea of sampling along a random walk was first proposed in Shapiro and Silverman (1960), where simple conditions for alias-free sampling are given. Random sampling can also be viewed as an interesting way to model data observed at irregularly spaced moments, as is the case in numerous applications. Think for example of observations of sunspots where missing data correspond to weather phenomena, or, in industrial processes, of missing values caused by instrument failures. Several examples of natural occurrence of deterministic or random sampling are well detailed in Greenshtein and Ritov (2000).

In the domain of time series analysis, attention has been particularly paid to the effect of sampling on parametric families of processes. After studies devoted to deterministic sampling (see Brewer, 1975; Niemi, 1984; Sram and Wei, 1986, among others), the effect of random sampling on ARMA or ARIMA processes is investigated in Kadi et al. (1994) and Oppenheim (1982). The main result is that the ARMA structure is preserved, the order of the autoregressive part being never increased after sampling. Precisely, if A(L)X_n = B(L)ε_n and A_1(L)Y_n = B_1(L)Z_n are the minimal representations of X and of the sampled process Y, the roots of the polynomial A_1(z) belong to the set {Σ_{j≥1} S(j) r_k^j}, where the r_k's are the roots of A(z). As S(1) ≠ 1 implies |Σ_{j≥1} S(j) r_k^j| < |r_k|, a consequence is that the convergence to zero of σ_Y(h) is strictly faster than that of σ_X(h). In other words, sampling an ARMA process along any random walk shortens its memory. In Oppenheim (1982) it is even pointed out that some ARMA processes can give rise to a white noise through a well chosen sampling law.
One can wonder whether deterministic or random sampling also reduces long memory. Most papers dealing with this question only consider deterministic sampling. Let us mention Chambers (1998), Hwang (2000), Souza and Smith (2002), Souza (2005) and Souza et al. (2006), where can be found a detailed study of deterministic sampling and time aggregation of the FARIMA(p,d,q) process, with related important statistical questions such as the estimation of the memory parameter d from the sampled process. Concerning the memory, it is proved in Chambers (1998) that when the spectral density of (X_n) has the typical form

    f(λ) = L(λ)|λ|^{−2d},   0 < d < 1/2,

where L is bounded and slowly varying near zero, the same holds for the process obtained by any deterministic sampling. In other words, contrary to what happens in the case of short memory processes like ARMA processes, long memory seems to be unchanged by deterministic sampling. The question of long-memory processes observed at random instants remains rarely tackled. We note in a recent paper (Bardet and Bertrand, 2008) a wavelet-based estimation of the spectral density of a continuous time Gaussian process having the above typical form at frequency zero. There, the sampling law has a finite second moment, which means, in view of our results (see Section 3.1 below), that the sampled process keeps the same memory parameter as the initial one. Concentrating on the second order properties of the processes, in the present paper we give answers to the following questions:
- When the covariance sequence of the basic process X_n is summable (or square summable), is it also the case for the covariance of the sampled process?
- When the spectrum of X_n is absolutely continuous, is it still the case for the sampled process?
- Is it possible to obtain short memory from long memory?
- Can deterministic sampling modify the singular behaviour of the spectral density near zero?

In all the sequel, a second order stationary process X is said to have long memory if its covariance sequence is non-summable:

    Σ_{h≥0} |σ_X(h)| = ∞,    (1.2)

a definition including the FARIMA family and also the family of generalised fractional processes (see Gray et al., 1989; Leipus and Viano, 2000), whose densities can present singularities at non-zero frequencies. A question under study in this paper is what can be said of

    Σ_{h≥0} |σ_Y(h)| = Σ_{h≥0} |E(σ_X(T_h))|    (1.3)

depending on whether (1.2) is satisfied or not. In Section 2 we prove preservation of L^p-convergence of the covariance and of absolute continuity of the spectrum. We show in particular that short memory is always preserved. The main results of the paper concern changes of memory when sampling a process with regularly varying covariance. They are gathered in Section 3. We show that the intensity of memory of such processes is preserved if E(T_1) = Σ_j jS(j) < ∞, while this intensity decreases when E(T_1) = ∞. For a sufficiently heavy tailed S, the sampled process has short memory, which is somehow not surprising since, with a heavy tailed sampling law, the sampling intervals can be quite large.
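The identity σ_Y(h) = E(σ_X(T_h)) appearing in (1.3) can be checked numerically. The sketch below (Python) compares a Monte Carlo estimate of E(σ_X(T_h)) with the closed form available when X is an AR(1) process and S is geometric, namely σ_Y(h) = σ_X(0)(E φ^{T_1})^h; both model choices are illustrative assumptions, and the geometric decay of σ_Y illustrates the claim above that random sampling shortens ARMA memory:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, p = 0.6, 0.5                        # AR(1) parameter; geometric sampling law
sigma0 = 1.0 / (1.0 - phi**2)            # sigma_X(0) for this AR(1)

def sigma_X(h):
    """AR(1) covariance: sigma_X(h) = phi^|h| * sigma_X(0)."""
    return phi ** np.abs(h) * sigma0

def sigma_Y_mc(h, n=200_000):
    """Monte Carlo estimate of sigma_Y(h) = E(sigma_X(T_h)), T_h a sum of h geometric steps."""
    T_h = rng.geometric(p, size=(n, h)).sum(axis=1)
    return sigma_X(T_h).mean()

# closed form for this example: E(phi^{T_h}) = (E phi^{T_1})^h, with
# E(phi^{T_1}) = p*phi / (1 - (1-p)*phi)  (geometric probability generating function)
g = p * phi / (1 - (1 - p) * phi)
for h in (1, 2, 5):
    print(h, sigma_Y_mc(h), sigma0 * g**h)
```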
In Section 4 we consider processes presenting long memory with seasonal effects, and investigate the particular effects of deterministic sampling. We show that in some cases the seasonal effects can totally disappear after sampling. We also give examples where the form |λ|^{−2d} of the spectral density at zero frequency is modified by deterministic sampling.

2. Some features unchanged by random sampling

Taking p = 1 in Proposition 1 below confirms an intuitive claim: random sampling cannot produce long memory from short memory. Propositions 3–5 state that, at least in all situations investigated in this paper, random sampling preserves the existence of a spectral density.

2.1. Preservation of summability of the covariance

Proposition 1. (i) Let p ≥ 1. If Σ |σ_X|^p < ∞, the same holds for σ_Y. (ii) In the particular case p ∈ [1,2], both processes X and Y have spectral densities, linked by the relation

    f_Y(λ) = (1/2π) Σ_{j=−∞}^{+∞} e^{−ijλ} ∫_{−π}^{π} (Ŝ(θ))^{|j|} f_X(θ) dθ,

where Ŝ(θ) = E(e^{iθT_1}) is the characteristic function of S.

Proof. The covariance sequence of the sampled process is given by

    σ_Y(0) = σ_X(0),
    σ_Y(h) = E(σ_X(T_h)) = Σ_{j=h}^{∞} σ_X(j) S^{*h}(j),   h ≥ 1,    (2.1)

where S^{*h}, the h-times convolution of S with itself, is the probability distribution of T_h. As the sequence T_h is strictly increasing, σ_X(T_h) is almost surely a sub-sequence of σ_X(h). Then, (i) follows from

    Σ_h |E(σ_X(T_h))|^p ≤ E( Σ_h |σ_X(T_h)|^p ) ≤ Σ_h |σ_X(h)|^p.

The proof of (ii) is immediate, using (i):

    f_Y(λ) = (1/2π) Σ_{j∈Z} e^{−ijλ} σ_Y(j) = (1/2π) Σ_{j∈Z} e^{−ijλ} E(σ_X(T_{|j|})) = (1/2π) Σ_{j∈Z} e^{−ijλ} ∫_{−π}^{π} E(e^{iT_{|j|}θ}) f_X(θ) dθ,
where the series converges in L^2([−π,π]) when p ≠ 1, the covariance being then square-summable without being summable. □

2.2. Preservation of the existence of a spectral density

In this part we assume that X has a spectral density denoted by f_X. Firstly, it is well known that the existence of a spectral density is preserved by deterministic sampling (see (4.3) below). Secondly, from Proposition 1 it follows that, for any sampling law, the spectral density of Y exists when the covariance of X is square summable. It should also be noticed that the proof that the ARMA structure is preserved by random sampling (Oppenheim, 1982; see also Kadi et al., 1994, for the multivariate case) gives an explicit form of the spectral density of Y when X is an ARMA process.

The three propositions below show that preservation of the existence of a spectral density by random sampling holds for all the models considered in the present paper. The proofs are based on the properties, recalled in Appendix A.2, of the Poisson kernel

    P_s(t) = (1/2π) (1 − s²)/(1 − 2s cos t + s²),   s ∈ [0,1[,    (2.2)

and on the representation of the covariance of the sampled process given in Lemma 2.

Lemma 2. For all j ≥ 0,

    σ_Y(j) = lim_{r→1⁻} ∫_{−π}^{π} e^{ijθ} g(r,θ) dθ,    (2.3)

where

    g(r,θ) = (1/4π) ∫_{−π}^{π} f_X(λ) ( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) ) dλ    (2.4)
           = (1/4) ∫_{−π}^{π} f_X(λ) ( 1/π + P_{rρ}(τ − θ) + P_{rρ}(τ + θ) ) dλ,    (2.5)

and where

    Ŝ(λ) = ρ(λ) e^{iτ(λ)},    (2.6)

also denoted ρe^{iτ} for the sake of shortness.

Proof. The proof of the lemma is relegated to Appendix A.1. □

Proposition 3. If f_X is bounded in a neighbourhood of zero, the sampled process Y has a spectral density given by

    f_Y(θ) = lim_{r→1⁻} (1/4π) ∫_{−π}^{π} f_X(λ) ( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) ) dλ.    (2.7)
Proof. Firstly, it is easily seen that, if θ ≠ 0, g(r,θ) has a limit as r → 1⁻. Hence, the proof of the proposition simply consists in exchanging the limit and the integration in (2.3), in order to show that

    σ_Y(j) = ∫_{−π}^{π} e^{ijθ} lim_{r→1⁻} g(r,θ) dθ,

implying that

    f_Y(θ) = lim_{r→1⁻} g(r,θ),

and thus (2.7) according to (2.4). Now we prove that the conditions of Lebesgue's theorem hold for (2.3). As we can suppose that the sampling is not deterministic,

    |Ŝ(λ)| < 1,   ∀λ ∈ ]0,π]

(see Feller, 1971). Hence, thanks to the continuity of |Ŝ(λ)|,

    sup_{|λ|>ε} |Ŝ(λ)| < 1,   ∀ε > 0.

The integral (2.4) is split into two parts. Choosing ε such that f_X is bounded on I_ε = [−ε,ε], and using the fact that the real part of the integrand in (2.4) is positive (see (A.7)),

    ∫_{I_ε} f_X(λ) Re( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) ) dλ
        ≤ ( sup_{λ∈I_ε} f_X(λ) ) ∫_{−π}^{π} Re( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) ) dλ,

which leads, thanks to Lemma 13, to

    ∫_{I_ε} f_X(λ) Re( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) ) dλ ≤ 4π sup_{λ∈I_ε} f_X(λ).    (2.8)

Now,

    ∫_{I_ε^c} f_X(λ) Re( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) ) dλ = π ∫_{I_ε^c} f_X(λ) ( 1/π + P_{rρ}(τ − θ) + P_{rρ}(τ + θ) ) dλ.

Applying (A.8) with s = rρ(λ) < ρ(λ), t = τ(λ) ∓ θ and Z = 1 − sup_{|λ|>ε} |Ŝ(λ)| yields

    π ∫_{I_ε^c} f_X(λ) ( 1/π + P_{rρ}(τ − θ) + P_{rρ}(τ + θ) ) dλ ≤ (1 + 2/Z) ∫_{I_ε^c} f_X(λ) dλ ≤ (1 + 2/Z) ∫_{−π}^{π} f_X(λ) dλ.    (2.9)

Gathering (2.8) and (2.9) leads to the result via Lebesgue's theorem. □
In the next two propositions, the spectral density of X is allowed to be unbounded at zero. Their proofs are in the Appendix. We first suppose that the sampling law has a finite expectation. It shall be proved in Section 3.1 that in this case the intensity of memory of X is preserved.

Proposition 4. If E(T_1) < ∞ and if the spectral density of X has the form

    f_X(λ) = |λ|^{−2d} φ(λ),    (2.10)
where φ is non-negative, integrable and bounded in a neighbourhood of zero, and where 0 ≤ d < 1/2, then the sampled process has a spectral density f_Y given by (2.7).

Proof. See Appendix A.3. □

The case E(T_1) = ∞ is treated in the next proposition, under the extra assumption (2.11), meaning that S(j) is regularly varying at infinity. We shall see in Section 3.2 (see Proposition 7) how the parameter d is transformed when the sampling law satisfies condition (2.11). In particular we shall see that if 1 < γ < 3 − 4d the covariance of the sampled process is square summable, implying the absolute continuity of the spectral measure of Y. The proposition below shows that this property holds for every γ ∈ ]1,2[.

Proposition 5. Assume that the spectral density of X has the form (2.10). If the distribution S satisfies the following condition:

    P(T_1 = j) = S(j) ~ c j^{−γ}   as j → ∞,    (2.11)

where 1 < γ < 2 and with c > 0, then the conclusion of Proposition 4 is still valid.

Proof. See Appendix A.4. □

3. Random sampling of processes with regularly varying covariances

The propositions below are valid for the family of stationary processes whose covariance σ_X(h) decays arithmetically, up to a slowly varying factor. In the first paragraph we show that if T_1 has a finite expectation, the rate of decay of the covariance is unchanged by sampling. We then see that, when E(T_1) = ∞, the memory of the sampled process is reduced according to the largest finite moment of T_1.

3.1. Preservation of the memory when E(T_1) < ∞
Proposition 6. Assume that the covariance of X is of the form

    σ_X(h) = h^{−α} L(h),

where 0 < α < 1 and where L is slowly varying at infinity and ultimately monotone (see Bingham et al., 1987). If E(T_1) < ∞, then the covariance of the sampled process satisfies

    σ_Y(h) ~ h^{−α} (E(T_1))^{−α} L(h)   as h → ∞.

Proof. We prove that, as h → ∞,

    σ_X(T_h)/(h^{−α}L(h)) = (T_h/h)^{−α} L(T_h)/L(h) → (E(T_1))^{−α}   a.s.    (3.1)

and that, for h large enough,

    σ_X(T_h)/(h^{−α}L(h)) ≤ 1.    (3.2)

Since T_h is the sum of independent and identically distributed random variables, T_h = Σ_{j=1}^{h} Δ_j with common distribution T_1 ∈ L^1, the law of large numbers leads to

    T_h/h → E(T_1)   a.s.    (3.3)

Now

    L(T_h)/L(h) → 1   a.s.;    (3.4)

indeed, T_h ≥ h because the intervals Δ_j ≥ 1 for all j, and (3.3) implies that for h large enough T_h ≤ 2E(T_1)h. Therefore, using the fact that L is ultimately monotone, we have

    L(h) ≤ L(T_h) ≤ L(2E(T_1)h)    (3.5)

if L is ultimately increasing (and the reversed inequalities if L is ultimately decreasing). Finally (3.5) directly leads to (3.4) since L is slowly varying at infinity (see Theorem 1.2.1 in Bingham et al., 1987). Clearly, (3.3) and (3.4) imply the convergence (3.1).
In order to prove (3.2), we write

    σ_X(T_h)/(h^{−α}L(h)) = (T_h/h)^{−α/2} · (T_h^{−α/2} L(T_h))/(h^{−α/2} L(h)).

As α > 0 and T_h ≥ h,

    (T_h/h)^{−α/2} ≤ 1.

Moreover, h^{−α/2}L(h) is decreasing for h large enough (see Bingham et al., 1987), so that

    (T_h^{−α/2} L(T_h))/(h^{−α/2} L(h)) ≤ 1.

These two last inequalities lead to (3.2). Since σ_Y(h) = E(σ_X(T_h)), we conclude the proof by applying Lebesgue's theorem, and we get

    σ_Y(h)/(h^{−α}L(h)) → (E(T_1))^{−α}.   □
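Proposition 6 can be illustrated numerically: with σ_X(h) = h^{−α} (so L ≡ 1) and a sampling law with finite mean, h^α E(σ_X(T_h)) should approach (E T_1)^{−α}. A small Monte Carlo sketch in Python (the geometric sampling law and the parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, p = 0.4, 0.5      # covariance exponent; geometric sampling law with E(T_1) = 1/p = 2

def sigma_Y_over_scale(h, n=20_000):
    """Monte Carlo estimate of sigma_Y(h)/h^{-alpha} = h^alpha * E(T_h^{-alpha})."""
    T_h = rng.geometric(p, size=(n, h)).sum(axis=1).astype(float)
    return h**alpha * (T_h ** (-alpha)).mean()

limit = (1.0 / p) ** (-alpha)            # (E T_1)^{-alpha}
for h in (10, 100, 400):
    print(h, sigma_Y_over_scale(h), limit)
```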
3.2. Decrease of memory when E(T_1) = ∞

If E(T_1) = ∞, it is known that T_h/h → ∞, implying that the limit in (3.1) is zero. In other words, in this case the convergence to zero of σ_Y(h) can be faster than h^{−α}. The aim of this section is to give a precise evaluation of this rate of convergence.

Proposition 7. Assume that the covariance of X satisfies

    |σ_X(h)| ≤ c h^{−α},    (3.6)

where 0 < α < 1. If

    liminf_{x→∞} x^β P(T_1 > x) > 0    (3.7)

for some β ∈ (0,1] (implying E(T_1^β) = ∞), then

    |σ_Y(h)| ≤ C h^{−α/β}.    (3.8)

Proof. From hypothesis (3.6),

    |σ_Y(h)| ≤ E(|σ_X(T_h)|) ≤ c E(T_h^{−α}).

Then,

    E(T_h^{−α}) = Σ_{j=0}^{∞} P(T_h ≤ j)( j^{−α} − (j+1)^{−α} ) ≤ α Σ_{j=h}^{∞} P(T_h ≤ j) j^{−α−1}.    (3.9)

From hypothesis (3.7) on the tail of the sampling law, it follows that, for large enough h and for all j ≥ h,

    P(T_h ≤ j) ≤ P( max_{1≤l≤h} Δ_l ≤ j ) = P(T_1 ≤ j)^h ≤ (1 − Cj^{−β})^h ≤ e^{−Ch j^{−β}}.    (3.10)

Gathering (3.9) and (3.10) then gives

    E(T_h^{−α}) ≤ α Σ_{j=h}^{∞} j^{−α−1} e^{−Ch j^{−β}}.

The last sum has the same asymptotic behaviour as

    ∫_h^{∞} x^{−α−1} e^{−Ch x^{−β}} dx = (1/β) h^{−α/β} ∫_0^{h^{1−β}} u^{α/β−1} e^{−Cu} du,

and the result follows since

    ∫_0^{h^{1−β}} u^{α/β−1} e^{−Cu} du → ∫_0^{∞} u^{α/β−1} e^{−Cu} du   as h → ∞, when β < 1,
    ∫_0^{h^{1−β}} u^{α/β−1} e^{−Cu} du = ∫_0^{1} u^{α−1} e^{−Cu} du,   when β = 1.   □
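The rate h^{−α/β} of Proposition 7 can be observed in simulation. The sketch below (Python) draws T_1 from an integer law with tail index β = γ − 1 via the inverse transform used in Section 3.3, estimates E(T_h^{−α}) by Monte Carlo, and compares the log-log slope across h with −α/β; the specific parameter and sample-size values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, gamma = 0.3, 1.5
beta = gamma - 1.0                        # tail exponent of T_1, here 0.5

def draw_T(h, n):
    """n draws of T_h, the steps having P(Delta = j) ~ c j^{-gamma}."""
    u = 1.0 - rng.random((n, h))                    # uniform on (0, 1]
    steps = np.floor(u ** (1.0 / (1.0 - gamma)))    # inverse transform, values in {1, 2, ...}
    return steps.sum(axis=1)

def mean_T_pow(h, n=10_000):
    return (draw_T(h, n) ** (-alpha)).mean()

hs = np.array([25, 100, 400])
vals = np.array([mean_T_pow(h) for h in hs])
slope = np.polyfit(np.log(hs), np.log(vals), 1)[0]
print("empirical slope:", slope, " theoretical -alpha/beta:", -alpha / beta)
```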
The next proposition states that the bound in Proposition 7 is sharp under some additional hypotheses.
Proposition 8. Assume that

    σ_X(h) = h^{−α} L(h),

where 0 < α < 1 and where L is slowly varying at infinity and ultimately monotone. If

    β := sup{γ : E(T_1^γ) < ∞} ∈ ]0,1],    (3.11)

then, for every ε > 0,

    σ_Y(h) ≥ C_1 h^{−α/β−ε}.    (3.12)

Proof. Let ε > 0. We have

    σ_X(T_h)/h^{−α/β−ε} = (T_h^{−α−βε/2}/h^{−α/β−ε}) T_h^{βε/2} L(T_h) = (T_h/h^δ)^{−α−βε/2} T_h^{βε/2} L(T_h),

where

    δ = (α/β + ε)/(α + βε/2) = (1/β) (α + βε)/(α + βε/2).

Using Proposition 1.3.6 in Bingham et al. (1987),

    T_h^{βε/2} L(T_h) → +∞   a.s.   as h → ∞.

Moreover, we have δ > 1/β. Therefore, the assumption (3.11) implies E(T_1^{1/δ}) < ∞ with 1/δ < 1. Then we can apply the law of large numbers of Marcinkiewicz–Zygmund (see Stout, 1974, Theorem 3.2.3), which yields

    T_h/h^δ → 0   a.s.   as h → ∞.    (3.13)

Therefore, by applying Fatou's lemma,

    σ_Y(h)/h^{−α/β−ε} → ∞   as h → ∞.   □
3.3. Example: the FARIMA processes

A FARIMA(p,d,q) process is defined from a white noise (ε_n)_n, a parameter d ∈ ]0,1/2[ and two polynomials A(z) = z^p + a_1 z^{p−1} + ... + a_p and B(z) = z^q + b_1 z^{q−1} + ... + b_q, non-vanishing on the domain |z| ≥ 1, by

    X_n = B(L) A^{−1}(L) (I − L)^{−d} ε_n,    (3.14)

where L is the back-shift operator LU_n = U_{n−1}. The FARIMA(0,d,0) process W_n = (I − L)^{−d} ε_n introduced by Granger (Gray et al., 1989) is specially popular. It is well known (see Brockwell and Davis, 1991) that the covariance of the FARIMA(p,d,q) process satisfies

    σ_X(k) ~ c k^{2d−1}   as k → ∞,    (3.15)

allowing to apply the results of Sections 3.1 and 3.2.

Proposition 9. Sampling a FARIMA process (3.14) with a sampling law S such that, for some γ > 1,

    S(j) = P(T_1 = j) ~ c j^{−γ}   as j → ∞,    (3.16)

leads to a process (Y_n)_n whose auto-covariance function satisfies the following properties:

(i) if γ > 2,

    σ_Y(h) ~ C h^{2d−1};    (3.17)

(ii) if γ ≤ 2,

    C_1 h^{(2d−1)/(γ−1)−ε} ≤ σ_Y(h) ≤ C_2 h^{(2d−1)/(γ−1)},   ∀ε > 0.    (3.18)

Consequently, the sampled process (Y_n)_n has the same memory parameter if γ ≥ 2, a reduced long memory parameter if 2(1−d) ≤ γ < 2, and short memory if 1 < γ < 2(1−d).
Proof. We omit the proof, which is an immediate consequence of Proposition 6 for (i), and of Propositions 7 and 8 with β = γ − 1 and α = 1 − 2d for (ii). □

We illustrate this last result by simulating and sampling FARIMA(0,d,0) processes. Simulations of the trajectories are based on the moving average representation of the FARIMA (see Bardet et al., 2002). The sampling distribution is

    P(S = k) = ∫_k^{k+1} (γ−1) t^{−γ} dt ~ C k^{−γ},    (3.19)

which is simulated using the fact that when u is uniformly distributed on [0,1], the integer part of u^{1/(1−γ)} is distributed according to (3.19).

In Fig. 1 are depicted the empirical auto-covariances of a trajectory of the process X and of two sampled trajectories Y_1 and Y_2. The memory parameter of X is d = 0.35, and the parameters of the two sampling distributions are, respectively, γ_1 = 2.8 and γ_2 = 1.9. According to Proposition 9, the process Y_1 has the same memory parameter as X, while the memory intensity is reduced for the process Y_2.

Then we estimate the memory parameters of the sampled processes by using the FEXP procedure introduced by Moulines and Soulier (1999) and Bardet et al. (2003). The FEXP estimator is adapted to processes having a spectral density; from Propositions 4 and 5, this is the case for our sampled processes. From Proposition 9, the memory parameter of Y is

    d_Y = d                         if γ ≥ 2,
    d_Y = (d − 1 + γ/2)/(γ − 1)     otherwise.    (3.20)

Fig. 2 shows the estimate and a confidence interval for d_Y. Two values, d = 0.1 (lower curves) and d = 0.35 (upper curves), of the memory parameter of X are considered. In abscissa, the parameter γ of the sampling distribution is allowed to vary between 1.7 and 3.3.

In the case d = 0.1, short memory (corresponding to negative values of d_Y) is obtained for γ ≤ 1.8. In the case d = 0.35, sampling distributions leading to short memory are too heavy tailed to allow tractable simulations. Notice that the systematic underestimation of d_Y seems to confirm Souza's diagnosis in Souza et al. (2006).
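The inverse-transform recipe for (3.19) is straightforward to implement and check: if u is uniform on [0,1], then u^{1/(1−γ)} has the Pareto density (γ−1)t^{−γ} on [1,∞), and its integer part follows (3.19). A sketch in Python (the value of γ and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
gamma = 1.9

def sample_S(n):
    """Draw n variables with P(S = k) = integral_k^{k+1} (gamma-1) t^{-gamma} dt, k = 1, 2, ..."""
    u = 1.0 - rng.random(n)                  # uniform on (0, 1]
    return np.floor(u ** (1.0 / (1.0 - gamma))).astype(np.int64)

s = sample_S(200_000)
# P(S = k) = k^{1-gamma} - (k+1)^{1-gamma}; compare with empirical frequencies
for k in (1, 2, 5):
    exact = k ** (1.0 - gamma) - (k + 1) ** (1.0 - gamma)
    print(k, (s == k).mean(), exact)
```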
[Figure 1: three sample auto-correlation plots (lags 0 to 25), with panels titled "Unobserved process", "gamma = 2.8" and "gamma = 1.9".]

Fig. 1. Auto-covariance functions of X (left) and of two sub-sampled processes corresponding to γ = 2.8 (middle) and γ = 1.9 (right). The number of observed values of the sub-sampled processes is equal to 5000.
[Figure 2: estimated long memory parameter plotted against γ ∈ [1.7, 3.3], for d = 0.1 and d = 0.35.]

Fig. 2. Estimation of the long memory parameter for d = 0.1 and 0.35 as a function of the parameter γ. The confidence regions are obtained using 500 independent replications. The circles represent the theoretical values of the parameter d_Y defined in (3.20). For each γ, the estimation of d_Y is evaluated on 5000 observations.
4. Sampling generalised fractional processes

Consider now the family of generalised fractional processes whose spectral density has the form

    f_X(λ) = |Φ(λ)|² | Π_{j=1}^{M} (e^{iλ} − e^{iθ_j})^{−d_j} (e^{iλ} − e^{−iθ_j})^{−d_j} |²,    (4.1)

with exponents d_j in ]0,1/2[, and where |Φ(λ)|² is the spectral density of an ARMA process. See Gray et al. (1989), Leipus and Viano (2000), and Oppenheim et al. (2000) for results and references on this class of processes. Taking M = 1 and θ_1 = 0, it is clear that this family includes the FARIMA processes, which belong to the framework of Section 3.

4.1. Decrease of memory by heavy-tailed sampling law

It is proved in Leipus and Viano (2000) that, as h → ∞, the covariance sequence is a sum of periodic sequences damped by a regularly varying factor:

    σ_X(h) = h^{2d−1} ( Σ ( c_{1,j} cos(hθ_j) + c_{2,j} sin(hθ_j) ) + o(1) ),    (4.2)

where d = max{d_j, j = 1,...,M} and where the sum extends over the indexes corresponding to d_j = d. Hence, these models generally present seasonal long memory, and for them the regular form σ_X(h) = h^{2d−1}L(h) is lost. Precisely, this regularity remains only when, in (4.1), d = max{d_j, j = 1,...,M} corresponds to the unique frequency θ = 0. In all other cases, from (4.2), it has to be replaced by

    σ_X(h) = O(h^{2d−1}).

From the previous comments it is clear that Propositions 6 and 8 are no longer valid in the case of seasonal long memory. This means that in this situation it is not certain that long memory is preserved when the sampling law has a finite expectation; in fact this is an open question. Nevertheless, it is true that long memory can be lost when the sampling law has an infinite expectation. Indeed, Proposition 7 applies with α = 1 − 2d, where d = max{d_j, j = 1,...,M}, and the following result holds:

Proposition 10. When sampling a process with spectral density (4.1) along a random walk such that the sampling law satisfies (3.16), the obtained process has short memory as soon as 1 < γ < 2(1−d), where d = max{d_j, j = 1,...,M}.

4.2. The effect of deterministic sampling

In a first step we just suppose that X has a spectral density f_X which is unbounded at one or several frequencies (we call them singular frequencies). This is clearly the case for the generalised fractional processes, and the examples shall be taken in this family. We investigate the effect of deterministic sampling on the number and the values of the singularities. Let us suppose that S = δ_k, the law concentrated at k. It is well known that if X has a spectral density, the spectral density of the
sampled process Y = (X_{kn})_n is

    f_Y(λ) = (1/(2ℓ+1)) Σ_{j=−ℓ}^{ℓ} f_X( (λ − 2πj)/(2ℓ+1) )   if k = 2ℓ+1,

    f_Y(λ) = (1/2ℓ) ( Σ_{j=−ℓ+1}^{ℓ−1} f_X( (λ − 2πj)/(2ℓ) ) + f_X( (λ − 2πℓ sgn(λ))/(2ℓ) ) )   if k = 2ℓ.    (4.3)
Proposition 11. Let the spectral density of (X_n)_n have N_X ≥ 1 singular frequencies. Then, denoting by N_Y the number of singular frequencies of the spectral density of the sampled process (Y_n = X_{kn})_n:

- 1 ≤ N_Y ≤ N_X;
- N_Y < N_X if and only if f_X has at least two singular frequencies λ_0 ≠ λ_1 such that, for some integer number j*,

    λ_0 − λ_1 = 2πj*/k.
l 2pj0 2‘ þ 1
¼ l0
and
l 2pj1 2‘ þ1
¼ l1 ;
which happens if and only if l0 l1 is a multiple of the basic frequency 2p=ð2‘ þ1Þ.
&
Proposition 11 is illustrated by the following examples, showing the various effects of the aliasing phenomenon when sampling generalised fractional processes.

Example 1. Consider f_X(λ) = |e^{iλ} − 1|^{−2d}, the spectral density of the FARIMA(0,d,0) process, and take k = 2ℓ+1. From (4.3), the sampled spectral density is

    f_Y(λ) = (1/(2ℓ+1)) Σ_{j=−ℓ}^{ℓ} |e^{i(λ−2πj)/(2ℓ+1)} − 1|^{−2d}.

Clearly, f_Y has, on [−π, π[, only one singularity, at λ = 0, associated with the same memory parameter d as f_X: the memories of X and Y have the same intensity. This was already noticed in Chambers (1998) and Hwang (2000).

In the three following examples we take k = 3.

Example 2. Consider the following spectral density of a seasonal long-memory process (see Oppenheim et al., 2000):

    f_X(λ) = |e^{iλ} − e^{2iπ/3}|^{−2d} |e^{iλ} − e^{−2iπ/3}|^{−2d}.

The sampled spectral density is

    f_Y(λ) = (1/3) Σ_{j=−1}^{1} |e^{i(λ−2πj)/3} − e^{2iπ/3}|^{−2d} |e^{i(λ−2πj)/3} − e^{−2iπ/3}|^{−2d},

and is everywhere continuous on [−π, π], except at λ = 0 where

    f_Y(λ) ~ c|λ|^{−2d}.

In other words the memory is preserved, but the seasonal effect disappears after sampling.

Example 3. We consider a spectral density having a singularity at zero and two other ones:

    f_X(λ) = |e^{iλ} − e^{2iπ/3}|^{−2d_1} |e^{iλ} − e^{−2iπ/3}|^{−2d_1} |e^{iλ} − 1|^{−2d_0}.
The sampled spectral density is now

    f_Y(λ) = (1/3) Σ_{j=−1}^{1} |e^{i(λ−2πj)/3} − e^{2iπ/3}|^{−2d_1} |e^{i(λ−2πj)/3} − e^{−2iπ/3}|^{−2d_1} |e^{i(λ−2πj)/3} − 1|^{−2d_0},

which has only one singularity, at λ = 0, with exponent −2 max{d_0, d_1}. More precisely,

    f_Y(λ) = L(λ)|λ|^{−2 max{d_0, d_1}},

where L is continuous and non-vanishing on [−π, π]. Taking d_1 > d_0 shows that deterministic sampling can change the local behaviour of the spectral density at zero frequency. This completes Chambers' result in Chambers (1998).

Example 4. Consider now a case of two seasonalities (±π/4 and ±3π/4) associated with two different memory parameters d_1 and d_2:

    f_X(λ) = |e^{iλ} − e^{iπ/4}|^{−2d_1} |e^{iλ} − e^{−iπ/4}|^{−2d_1} |e^{iλ} − e^{3iπ/4}|^{−2d_2} |e^{iλ} − e^{−3iπ/4}|^{−2d_2}.

It is easily checked that f_Y has the same singular frequencies as f_X, with an exchange of the memory parameters:

    f_Y(λ) ~ c|λ ∓ π/4|^{−2d_2}   near ±π/4,
    f_Y(λ) ~ c|λ ∓ 3π/4|^{−2d_1}   near ±3π/4.
Appendix A

A.1. Proof of Lemma 2

Let us consider the two z-transforms of the bounded sequence σ_Y(j):

    σ̂_Y^−(z) = Σ_{j=0}^{∞} z^j σ_Y(j),   |z| < 1,

and

    σ̂_Y^+(z) = Σ_{j=0}^{∞} z^{−j} σ_Y(j),   |z| > 1.

On the first hand, from the representation

    σ_Y(j) = E(σ_X(T_j)) = ∫_{−π}^{π} f_X(λ) Ŝ(λ)^j dλ,

we have

    σ̂_Y^−(z) = Σ_{j=0}^{∞} z^j ∫_{−π}^{π} f_X(λ)(Ŝ(λ))^j dλ = ∫_{−π}^{π} f_X(λ)/(1 − zŜ(λ)) dλ,   |z| < 1,    (A.1)

    σ̂_Y^+(z) = Σ_{j=0}^{∞} z^{−j} ∫_{−π}^{π} f_X(λ)(Ŝ(λ))^j dλ = ∫_{−π}^{π} f_X(λ)/(1 − Ŝ(λ)/z) dλ,   |z| > 1.    (A.2)

On the second hand, let C_r be the circle |z| = r. If 0 < r < 1, for all j ≥ 0,

    (1/2iπ) ∮_{C_r} σ̂_Y^−(z) z^{−j−1} dz = (1/2π) ∫_{−π}^{π} (re^{iθ})^{−j} σ̂_Y^−(re^{iθ}) dθ
        = Σ_{l=0}^{∞} r^{l−j} σ_Y(l) (1/2π) ∫_{−π}^{π} e^{i(l−j)θ} dθ = σ_Y(j),    (A.3)

and, similarly, if r > 1,

    (1/2iπ) ∮_{C_r} σ̂_Y^+(z) z^{j−1} dz = (1/2π) ∫_{−π}^{π} (re^{iθ})^{j} σ̂_Y^+(re^{iθ}) dθ
        = Σ_{l=0}^{∞} r^{j−l} σ_Y(l) (1/2π) ∫_{−π}^{π} e^{i(j−l)θ} dθ = σ_Y(j).    (A.4)

Gathering (A.1) with (A.3), and (A.2) with (A.4), leads to

    σ_Y(j) = (1/2π) ∫_{−π}^{π} e^{−ijθ} r^{−j} ( ∫_{−π}^{π} f_X(λ)/(1 − re^{iθ}Ŝ(λ)) dλ ) dθ   if r < 1,

    σ_Y(j) = (1/2π) ∫_{−π}^{π} e^{ijθ} r^{j} ( ∫_{−π}^{π} f_X(λ)/(1 − Ŝ(λ)/(re^{iθ})) dλ ) dθ   if r > 1.    (A.5)

Changing the integrand in (A.5) into its conjugate when r < 1 (the first member being real), and r into 1/r when r > 1, both expressions take the common form

    σ_Y(j) = r^{−j} ∫_{−π}^{π} e^{ijθ} g(r,θ) dθ,   ∀r ∈ [0,1[,    (A.6)

where g(r,θ) is defined in (2.4). As the first member does not depend on r, (2.3) is proved.

Let us now prove (2.5). Firstly,

    Im( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) )

is an odd function of λ. Hence, the imaginary part of the integrand in (2.4) disappears after integration. Secondly,

    Re( 1/(1 − re^{iθ}Ŝ(λ)) + 1/(1 − re^{−iθ}Ŝ(λ)) )
        = 1 + (1/2) ( (1 − r²|Ŝ(λ)|²)/|1 − re^{iθ}Ŝ(λ)|² + (1 − r²|Ŝ(λ)|²)/|1 − re^{−iθ}Ŝ(λ)|² )
        = 1 + π P_{rρ}(τ + θ) + π P_{rρ}(τ − θ),    (A.7)
and the proof is over. A.2. A few technical results Hereafter we recall some properties used in the paper. Lemma 12. The Poisson kernel (2.2) satisfies 2pPs ðtÞr
1 s2 ð1 sÞ2
0 o d ojtjr
p 2
¼
1 þs 2 r ; 1s Z
8s o 1 Z with s o 1;
ðA:8Þ
¼)Ps ðtÞ o Ps ðdÞ;
2p sup Ps ðtÞ ¼ 0oso1
1 ; jsin tj
ðA:9Þ
8t 2 p=2; p=2½:
ðA:10Þ
Proof. The bound (A.8) is direct. Inequality (A.9) comes from the monotonicity of the kernel with respect to s. Let us prove (A.10): @ s2 cos t 2s þ cos t : ðPs ðtÞÞ ¼ 2 @s ð1 þ s2 2scos tÞ2 It is easily checked that the numerator has one single root s0 ¼ ð1 jsin tjÞ=cos t in [0,1[, and Ps0 ðtÞ ¼ jsin tj1 .
&
The following result comes from spectral considerations.

Lemma 13. With $r=r(\lambda)$ and $t=t(\lambda)$ defined in (2.6), for all $r\in[0,1[$ and $\theta\in\,]-\pi,\pi[$,

$$ g^*(r,\theta):=\frac12\int_{-\pi}^{\pi}\bigl[P_{rr(\lambda)}(t(\lambda)-\theta)+P_{rr(\lambda)}(t(\lambda)+\theta)\bigr]\,d\lambda=1. \qquad (A.11) $$

Proof. Notice first that if $f_X\equiv1$, the process $X$ is a white noise with $\operatorname{Var}(X_1)=2\pi$. Hence, the sampled process is also a white noise with variance $2\pi$. Applying (2.4) to a white noise, we get

$$ 2\pi\delta_0(j)=r^{-j}\int_{-\pi}^{\pi}e^{ij\theta}g^*(r,\theta)\,d\theta \qquad (A.12) $$

for all $r\in[0,1[$. Since $r^{-j}\delta_0(j)=\delta_0(j)$, we can rewrite (A.12) as

$$ 2\pi\delta_0(j)=\int_{-\pi}^{\pi}e^{ij\theta}g^*(r,\theta)\,d\theta. $$

This means that $g^*(r,\cdot)$ is the Fourier transform of $(2\pi\delta_0(j))_j$. Consequently $g^*(r,\theta)\equiv1$. □
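Lemma 13 can also be checked numerically. The geometric sampling law below, with characteristic function $\hat S(\lambda)=pe^{i\lambda}/(1-qe^{i\lambda})$, is an illustrative choice, not one used in the paper:

```python
import cmath
import math

# Illustrative geometric sampling law S(j) = p*q**(j-1) on {1,2,...}
p, q = 0.3, 0.7
r, theta = 0.5, 1.1                        # any r in [0,1[ and theta in ]-pi,pi[

def P(rho, u):
    # Poisson kernel (2.2)
    return (1 - rho * rho) / (2 * math.pi * (1 + rho * rho - 2 * rho * math.cos(u)))

# midpoint quadrature of the integral in (A.11)
N = 20000
w = 2 * math.pi / N
total = 0.0
for k in range(N):
    lam = -math.pi + (k + 0.5) * w
    S = p * cmath.exp(1j * lam) / (1 - q * cmath.exp(1j * lam))
    rho, t = r * abs(S), cmath.phase(S)
    total += 0.5 * (P(rho, t - theta) + P(rho, t + theta))
total *= w
```

The computed integral equals $1$ up to quadrature error, for any admissible $(r,\theta)$.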
A.3. Proof of Proposition 4

As for Proposition 3, the proof consists in finding an integrable function $g(\theta)$ such that

$$ |g(r,\theta)|\le g(\theta),\qquad \forall\, r\in\,]0,1[,\ \theta\in\,]-\pi,\pi[. \qquad (A.13) $$
For that purpose, we need the following estimation of $\hat S(\lambda)$ near zero:

$$
|1-\hat S(\lambda)|=\Bigl|(1-e^{i\lambda})\sum_{j\ge1}S(j)\,\frac{1-e^{ij\lambda}}{1-e^{i\lambda}}\Bigr|
=|1-e^{i\lambda}|\,\Bigl|\sum_{j\ge1}S(j)\,e^{i(j-1)\lambda/2}\,\frac{\sin(j\lambda/2)}{\sin(\lambda/2)}\Bigr|
\le|1-e^{i\lambda}|\sum_{j\ge1}j\,S(j)
=2|\sin(\lambda/2)|\sum_{j\ge1}j\,S(j)\le|\lambda|\sum_{j\ge1}j\,S(j)=C|\lambda|. \qquad (A.14)
$$
Now we use the fact that

$$ |u|\le u_0\le 1 \;\Longrightarrow\; |1-u|\ge 1-u_0 \quad\text{and}\quad |\sin(\arg(1-u))|\le u_0. $$

Since $\arcsin(x)\le\pi x/2$ on $[0,1]$, this implies $|\arg(1-u)|\le\pi u_0/2$. From this and inequality (A.14), applied with $u=1-\hat S(\lambda)$ and $u_0=C|\lambda|$, we obtain

$$ |\lambda|\le\frac{1}{C} \;\Longrightarrow\; |t(\lambda)|\le\frac{\pi C|\lambda|}{2}. \qquad (A.15) $$
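Both (A.14) and (A.15) are easy to confirm numerically for a concrete finite-mean sampling law; the geometric law and the tolerances below are illustrative assumptions:

```python
import cmath
import math

# Illustrative geometric sampling law S(j) = p*q**(j-1), with mean C = 1/p;
# the truncation at N terms is harmless since q**N is negligible.
p, q = 0.3, 0.7
C = 1 / p                                  # C = sum_j j*S(j) = E(Delta_1)
N = 1000

def Shat(lam):
    return sum(p * q ** (j - 1) * cmath.exp(1j * j * lam) for j in range(1, N + 1))

ok14 = ok15 = True
for k in range(1, 101):
    lam = k / 100 / C                      # lambda in ]0, 1/C]
    S = Shat(lam)
    ok14 = ok14 and (abs(1 - S) <= C * lam + 1e-9)                        # (A.14)
    ok15 = ok15 and (abs(cmath.phase(S)) <= math.pi * C * lam / 2 + 1e-9)  # (A.15)
```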
For a fixed $\theta>0$ let $\lambda_0$ be such that $f^*$ is bounded on $[-\lambda_0,\lambda_0]$ (recall that $f_X(\lambda)=|\lambda|^{-2d}f^*(\lambda)$). Denoting

$$ b(\theta)=\min\Bigl\{\lambda_0,\ \frac{1}{C},\ \frac{\theta}{\pi C}\Bigr\}, $$

we separate $[-\pi,\pi]$ into four intervals:

$$ [-\pi,-b(\theta)[\,,\qquad [-b(\theta),0[\,,\qquad [0,b(\theta)],\qquad ]b(\theta),\pi]. $$

In the sequel we only deal with the two last intervals, and concerning the integrand in (2.5) we only treat the part $P_{rr}(t-\theta)$:

$$ I_1(\theta)+I_2(\theta)=\int_0^{b(\theta)}f_X(\lambda)P_{rr(\lambda)}(t(\lambda)-\theta)\,d\lambda+\int_{b(\theta)}^{\pi}f_X(\lambda)P_{rr(\lambda)}(t(\lambda)-\theta)\,d\lambda. $$
Bounding $I_1$: From (A.15), since $b(\theta)\le\theta/(\pi C)$, we have $|t(\lambda)|\le\theta/2$, which implies

$$ \frac{\theta}{2}\le|t(\lambda)-\theta|. $$

Via (A.9) and (A.10), this leads to

$$ P_{rr}(t-\theta)\le P_{rr}(\theta/2)\le\frac{1}{2\theta}. $$

Consequently

$$ I_1(\theta)\le\frac{\sup_{[0,b(\theta)]}f^*(\lambda)}{2\theta}\int_0^{b(\theta)}\lambda^{-2d}\,d\lambda=\frac{\sup_{[0,b(\theta)]}f^*(\lambda)}{2\theta}\,\frac{(b(\theta))^{1-2d}}{1-2d}\le C_1\,\theta^{-2d}, $$

since $b(\theta)\le\theta/(\pi C)$ and $1-2d>0$.

Bounding $I_2$: When $\lambda>b(\theta)$, we have $\lambda^{-2d}\le C_2\max\{\theta^{-2d},1\}$ for some constant $C_2$. Hence

$$ I_2(\theta)\le C_2\max\{\theta^{-2d},1\}\int_{-\pi}^{\pi}f^*(\lambda)P_{rr}(\theta-t)\,d\lambda. \qquad (A.16) $$
Since $f^*$ is bounded in a neighbourhood of zero, the arguments used to prove Proposition 3 show that the integral in (A.16) is bounded by a constant. Finally $I_1+I_2$ is bounded by an integrable function $g(\theta)$ and the proposition is proved.

A.4. Proof of Proposition 5

The following lemma gives the local behaviour of $\hat S(\lambda)$ under assumption (2.11).

Lemma 14. Under assumption (2.11), i.e. $S(j)\sim c\,j^{-\gamma}$ with $1<\gamma<2$ and $c>0$,

$$ (1-\hat S(\lambda))\sim Z\,|\lambda|^{\gamma-1}\quad\text{if }\lambda\to0, \qquad (A.17) $$

where $\operatorname{Re}(Z)>0$ and $\operatorname{Im}(Z)<0$.
Proof. From the assumption on $S(j)$,

$$ \sum_{j\ge1}S(j)(1-\cos(j\lambda))\underset{\lambda\to0}{\sim}c\sum_{j\ge1}j^{-\gamma}(1-\cos(j\lambda)) $$

and

$$ \sum_{j\ge1}S(j)\sin(j\lambda)\underset{\lambda\to0}{\sim}c\sum_{j\ge1}j^{-\gamma}\sin(j\lambda). $$

Then, using well known results of Zygmund (1959, pp. 186 and 189),

$$ c\sum_{j\ge1}j^{-\gamma}(1-\cos(j\lambda))\underset{\lambda\to0}{\sim}\frac{c\,\Gamma(2-\gamma)\sin(\pi\gamma/2)}{\gamma-1}\,|\lambda|^{\gamma-1}=:c_1|\lambda|^{\gamma-1} $$

and

$$ c\sum_{j\ge1}j^{-\gamma}\sin(j\lambda)\underset{\lambda\to0}{\sim}c\,\Gamma(1-\gamma)\cos\Bigl(\frac{\pi\gamma}{2}\Bigr)|\lambda|^{\gamma-1}=:c_2|\lambda|^{\gamma-1}. $$

It is clear that $c_1>0$, and $c_2>0$ follows from the fact that $\Gamma(x)<0$ for $x\in\,]-1,0[$ and $\cos(\pi\gamma/2)<0$. Hence $1-\hat S(\lambda)\sim(c_1-ic_2)|\lambda|^{\gamma-1}$, which is (A.17) with $Z=c_1-ic_2$. □
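The two asymptotic equivalences borrowed from Zygmund (1959) can be probed numerically; the exponent $\gamma=1.5$, the frequency and the truncation level below are arbitrary test choices (with $c=1$), and the loose tolerances account for truncation and for next-order terms:

```python
import math

gamma_ = 1.5                 # any exponent in ]1, 2[
lam = 0.003                  # small frequency
N = 400000                   # truncation level of the series

sin_sum = sum(j ** -gamma_ * math.sin(j * lam) for j in range(1, N + 1))
cos_sum = sum(j ** -gamma_ * (1 - math.cos(j * lam)) for j in range(1, N + 1))

# predicted constants c1, c2 from the proof of Lemma 14 (with c = 1)
c1 = math.gamma(2 - gamma_) * math.sin(math.pi * gamma_ / 2) / (gamma_ - 1)
c2 = math.gamma(1 - gamma_) * math.cos(math.pi * gamma_ / 2)

rel_cos = abs(cos_sum / (c1 * lam ** (gamma_ - 1)) - 1)
rel_sin = abs(sin_sum / (c2 * lam ** (gamma_ - 1)) - 1)
```

In particular both predicted constants come out positive, consistently with $\operatorname{Re}(Z)>0$ and $\operatorname{Im}(Z)<0$.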
In the sequel we take $\lambda>0$. From this lemma, if $\lambda$ is small enough (say $0\le\lambda\le\lambda_0$),

$$ c_3\lambda^{\gamma-1}\le t(\lambda)\le c_3'\lambda^{\gamma-1} \qquad (A.18) $$

and

$$ 1-c_4\lambda^{\gamma-1}\le r(\lambda)\le 1-c_4'\lambda^{\gamma-1}, \qquad (A.19) $$

where the constants are positive. For a fixed $\theta>0$ we deduce from (A.18)

$$ \lambda<\min\Bigl\{\lambda_0,\ \Bigl(\frac{\theta}{c_3'}\Bigr)^{1/(\gamma-1)}\Bigr\}\quad\text{implies}\quad 0<\theta-c_3'\lambda^{\gamma-1}\le\theta-t(\lambda). \qquad (A.20) $$
After choosing $\lambda_1\le\lambda_0$ such that $f^*$ is bounded on $[0,\lambda_1]$, define

$$ c(\theta)=\min\Bigl\{\lambda_1,\ \Bigl(\frac{\theta}{c_3'}\Bigr)^{1/(\gamma-1)}\Bigr\}. $$

Then we split $[-\pi,\pi]$ into six intervals:

$$ [-\pi,-\lambda_1[\,,\quad [-\lambda_1,-c(\theta)/2[\,,\quad [-c(\theta)/2,0[\,,\quad [0,c(\theta)/2],\quad ]c(\theta)/2,\lambda_1],\quad ]\lambda_1,\pi]. $$
We only consider the integral on the three last domains and the part $P_{rr}(t-\theta)$ of the integrand in (2.5). When $\lambda\in[0,c(\theta)/2]$, inequality (A.20) and properties (A.9) and (A.10) of the Poisson kernel lead to

$$ P_{rr}(t-\theta)\le P_{rr}(\theta-c_3'\lambda^{\gamma-1})\le\frac{C}{\theta-c_3'\lambda^{\gamma-1}}, $$

whence

$$ I_1=\int_0^{c(\theta)/2}f_X(\lambda)P_{rr}(\theta-t)\,d\lambda\le C_1'\int_0^{c(\theta)/2}\frac{\lambda^{-2d}}{\theta-c_3'\lambda^{\gamma-1}}\,d\lambda\le C_1'\int_0^{\frac12(\theta/c_3')^{1/(\gamma-1)}}\frac{\lambda^{-2d}}{\theta-c_3'\lambda^{\gamma-1}}\,d\lambda=C_2'\,\theta^{(1-2d)/(\gamma-1)-1}\int_0^{2^{1-\gamma}}\frac{u^{(1-2d)/(\gamma-1)-1}}{1-u}\,du, $$

after the change of variable $u=c_3'\lambda^{\gamma-1}/\theta$. Since $(1-2d)/(\gamma-1)-1>-1$, the last integral is finite, implying

$$ I_1\le C_3'\,\theta^{(1-2d)/(\gamma-1)-1}, $$
which is an integrable function of $\theta$. Thanks to (A.8) and to the r.h.s. of (A.19), we have on the interval $]c(\theta)/2,\lambda_1]$

$$ P_{rr}(\theta-t)\le\frac{C_3'}{\lambda^{\gamma-1}}. $$

Since $f^*$ is bounded on this domain,

$$ I_2\le C_3'\sup_{0<\lambda\le\lambda_1}f^*(\lambda)\int_{c(\theta)/2}^{\lambda_1}\lambda^{1-2d-\gamma}\,d\lambda=\frac{C_3'\sup_{0<\lambda\le\lambda_1}f^*(\lambda)}{2-2d-\gamma}\Bigl(\lambda_1^{2-2d-\gamma}-(c(\theta)/2)^{2-2d-\gamma}\Bigr) $$
$$ \le C_4'\Bigl(\lambda_1^{2-2d-\gamma}+\Bigl(\min\Bigl\{\lambda_1,\ \bigl(\theta/c_3'\bigr)^{1/(\gamma-1)}\Bigr\}\Bigr)^{2-2d-\gamma}\Bigr), $$

where the function between brackets is integrable because

$$ \frac{2-2d-\gamma}{\gamma-1}=\frac{1-2d}{\gamma-1}-1>-1. $$

Finally,

$$ I_3=\int_{\lambda_1}^{\pi}f_X(\lambda)P_{rr}(\theta-t)\,d\lambda\le\lambda_1^{-2d}\int_{-\pi}^{\pi}f^*(\lambda)P_{rr}(\theta-t)\,d\lambda, $$
which has already been treated since $f^*$ is bounded near zero. Gathering the above results on $I_1$, $I_2$ and $I_3$ completes the proof.

References

Bardet, J.M., Bertrand, P., 2008. Wavelet analysis of a continuous-time Gaussian process observed at random times and the application to the estimation of the spectral density. Preprint.
Bardet, J.M., Lang, G., Oppenheim, G., Philippe, A., Taqqu, M., 2002. Generators of long-range dependent processes: a survey. In: Doukhan, P., Taqqu, M., Oppenheim, G. (Eds.), Long-Range Dependence: Theory and Applications. Birkhäuser, Boston.
Bardet, J.M., Lang, G., Oppenheim, G., Philippe, A., Stoev, S., Taqqu, M., 2003. Semi-parametric estimation of the long-range dependent processes: a survey. In: Doukhan, P., Taqqu, M., Oppenheim, G. (Eds.), Long-Range Dependence: Theory and Applications. Birkhäuser, Boston.
Beutler, R.J., 1970. Alias-free randomly timed sampling of stochastic processes. IEEE Trans. Inform. Theory IT-16 (2), 147–152.
Bingham, N.H., Goldie, C.M., Teugels, J.L., 1987. Regular Variation. In: Encyclopedia of Mathematics. Cambridge University Press, Cambridge.
Brewer, K., 1975. Some consequences of temporal aggregation and systematic sampling for ARMA or ARMAX models. J. Econom. 1, 133–154.
Brockwell, P.J., Davis, R.A., 1991. Time Series: Theory and Methods. Springer, New York.
Chambers, M., 1998. Long-memory and aggregation in macroeconomic time-series. Internat. Econom. Rev. 39, 1053–1072.
Feller, W., 1971. An Introduction to Probability Theory and Its Applications, vol. II. Wiley, New York.
Gray, H.L., Zhang, N.F., Woodward, W.A., 1989. On generalized fractional processes. J. Time Ser. Anal. 10, 233–257.
Greenshtein, E., Ritov, Y., 2000. Sampling from a stationary process and detecting a change in the mean of a stationary distribution. Bernoulli 6, 679–697.
Hwang, S., 2000. The effects of systematic sampling and temporal aggregation on discrete time long memory processes and their finite sample properties. Econometric Theory 16, 347–372.
Kadi, A., Oppenheim, G., Viano, M.-C., 1994. Random aggregation of uni and multivariate linear processes. J. Time Ser. Anal. 15, 31–44.
Leipus, R., Viano, M.-C., 2000. Modelling long-memory time series with finite or infinite variance: a general approach. J. Time Ser. Anal. 21, 61–74.
Masry, E., 1971. Random sampling and reconstruction of spectra. Inform. and Control 19 (4), 275–288.
Moulines, E., Soulier, Ph., 1999. Log-periodogram regression of time series with long range dependence. Ann. Statist. 27 (4), 1415–1439.
Niemi, H., 1984. The invertibility of sampled and aggregate ARMA models. Metrika 31, 43–50.
Oppenheim, G., 1982. Échantillonnage aléatoire d'un processus ARMA. C. R. Acad. Sci. Paris Sér. I Math. 295 (5), 403–406.
Oppenheim, G., Ould Haye, M., Viano, M.-C., 2000. Long-memory with seasonal effects. Statist. Inference Stochastic Process. 3, 53–68.
Shapiro, H.S., Silverman, R.A., 1960. Alias-free sampling of random noise. J. Soc. Indust. Appl. Math. 8 (2), 225–248.
Souza, L.R., Smith, J., 2002. Bias in the memory parameter for different sampling rates. Internat. J. Forecast. 18, 299–313.
Souza, L.R., 2005. A note on Chambers' "long-memory and aggregation in macroeconomic time series". Internat. Econom. Rev. 46, 1059–1062.
Souza, L.R., Smith, J., Souza, R.C., 2006. Convex combinations of long-memory estimates from different sampling rates. Comput. Statist. 21, 399–413.
Stram, D.O., Wei, W.S., 1986. Temporal aggregation in the ARIMA processes. J. Time Ser. Anal. 7, 279–292.
Stout, W.F., 1974. Almost Sure Convergence. Academic Press, New York.
Zygmund, A., 1959. Trigonometric Series, vol. 1. Cambridge University Press, Cambridge.