Limit theorems for the pre-averaged Hayashi–Yoshida estimator with random sampling

Yuta Koike
University of Tokyo, Graduate School of Mathematical Sciences, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan

Stochastic Processes and their Applications 124 (2014) 2699–2753

Received 2 February 2013; received in revised form 24 October 2013; accepted 15 March 2014; available online 28 March 2014

Highlights
• We propose a modification of the pre-averaged Hayashi–Yoshida estimator.
• We show consistency under a general sampling setup.
• We show asymptotic mixed normality allowing some dependence between variables.

Abstract
We focus on estimating the integrated covariance of two diffusion processes observed in a nonsynchronous manner. The observation data are contaminated by noise, which may depend on time and on the latent diffusion processes, while the sampling times may also depend on the observed processes. In a high-frequency setting, we consider a modified version of the pre-averaged Hayashi–Yoshida estimator, and we show that this estimator is consistent and asymptotically mixed normal, and attains the optimal rate of convergence.
© 2014 Elsevier B.V. All rights reserved.

Keywords: Hayashi–Yoshida estimator; Integrated covariance; Market microstructure noise; Nonsynchronous observations; Pre-averaging; Stable convergence; Strong predictability

1. Introduction

In financial econometrics, measuring the covariation of two assets is a central problem because it serves as a basis for many areas of finance, such as risk management, portfolio allocation and hedging strategies. In recent years there has been considerable development of statistical approaches to this problem using high-frequency data. Such approaches were pioneered

E-mail addresses: [email protected], [email protected].
http://dx.doi.org/10.1016/j.spa.2014.03.008
0304-4149/© 2014 Elsevier B.V. All rights reserved.


by Andersen and Bollerslev [2] and Barndorff-Nielsen and Shephard [5], and their methods are based on semimartingale theory. In fact, no-arbitrage characterizations of asset prices suggest that price processes must follow a semimartingale (see [14] for instance). Recently, however, it has become commonly recognized that at ultra-high frequencies financial data are contaminated by market microstructure noise such as rounding errors, bid–ask bounces and misprints. Motivated by this, statistical inference for semimartingales observed at high frequency with additive observation noise has become an active research area during the past decade. Since in this paper we are interested in statistical inference for two assets observed at high frequency, we face another important problem: the data may be observed in a nonsynchronous manner. The classical theory of stochastic calculus suggests that the so-called realized covariance can be used for measuring the covariation of two assets if the sampling is synchronous. A naive idea is therefore to fix a sampling frequency (e.g. every five minutes), generate new data sampled on this fixed grid by the previous-tick interpolation scheme, and compute the realized covariance from the synchronized data. However, Hayashi and Yoshida [19] showed that this method suffers from a serious bias known as the Epps effect, described in [15], so a different approach is needed. [19] proposed the so-called Hayashi–Yoshida estimator, which coincides with the realized covariance in the synchronous case and is a consistent estimator of the quadratic covariation of two discretely observed continuous semimartingales even in the nonsynchronous case. The asymptotic theory of the Hayashi–Yoshida estimator has been developed further in [18,20,21,13].
Another important theoretical approach to nonsynchronicity, a Fourier analytic one, has been developed in Malliavin and Mancino [28,29] and Clément and Gloter [11]; in addition, Ogihara and Yoshida [31] have recently developed a quasi-likelihood analysis of nonsynchronously observed diffusions in a parametric setting.

In this paper we consider two diffusion processes observed in a nonsynchronous manner and contaminated by microstructure noise. Our aim is to estimate the integrated covariance of the diffusion processes in a high-frequency setting, coping with both the observation noise and the nonsynchronous sampling simultaneously. Recently, various authors have proposed hybrid approaches that combine a method to de-noise the data with another method to deal with the nonsynchronicity. In one direction, one first uses the refresh sampling method to synchronize the data and then constructs a noise-robust estimator. This method was first applied in Barndorff-Nielsen et al. [4], where the realized kernel method proposed in [3] was used for de-noising, and was further developed by [1,8,22,37] and [39] using other de-noising methods. Another direction uses a Hayashi–Yoshida type approach to deal with the nonsynchronicity. Bibinger [6] proposed to first synchronize the data by a Hayashi–Yoshida type synchronization called the pseudo-aggregation algorithm; in a second step, a multiscale type estimator like that in [38] is constructed from the synchronized data. The resulting estimator is called the generalized multiscale estimator. Christensen et al. [8] proposed to first de-noise the data by the pre-averaging method introduced in [34] (and further studied in [24]); after that, they construct a Hayashi–Yoshida type estimator, called the pre-averaged Hayashi–Yoshida estimator, from the pre-averaged data. Recently, Corsi et al. [12] proposed a new estimator which is not a hybrid one.

Our estimation approach is based on the pre-averaged Hayashi–Yoshida estimator described above, but we modify this estimator slightly for a technical reason. In Christensen et al. [9] the associated central limit theorem for that estimator was shown, but it is restricted to the case where the observation times are deterministic (or random but independent of the observed processes), and some sampling schemes important in practice, like Poisson sampling schemes,


are excluded. In fact, the asymptotic variance given in their theorem has a quite complex form which depends on the special structure of the sampling times considered in that paper, so the author suspects that this result cannot be extended to more general sampling schemes involving Poisson sampling. For this reason, we first partly synchronize the sampling times; after that, we construct the pre-averaged Hayashi–Yoshida estimator from the new data. Then we can compute the asymptotic variance thanks to Lemma 10.2 in Section 3. This procedure is done in the same spirit as the discussion in Section 6.3 of [7]. In addition to the above problems, we also consider dependence between the sampling times and the observed processes. This is motivated by recent studies on this topic in the absence of microstructure noise; see [16,17,21,27,32] for details. Robert and Rosenbaum [36] also consider this type of problem in a framework different from the model with additive observation noise, which they call the model with uncertainty zones. In this paper, we show a central limit theorem for the estimation error of the proposed estimator in the situation explained above (see Theorem 3.1). Usually a certain blocking technique is used in the proofs of central limit theorems for pre-averaging estimators (see [24,33,25,9]). In this paper, however, we do not rely on such a technique but on one used in [21] for the proof of the central limit theorem for the Hayashi–Yoshida estimator. This is based on Lemma 4.2, which tells us that to first order the estimation error process of our estimator is asymptotically equivalent to the process M^n defined in Section 4, which has a structure similar to that of the estimation error process of the Hayashi–Yoshida estimator. This enables us to apply arguments that mimic those in [21].
The only thing different from [21] is the computation of the asymptotic variance process, but we can get past this problem thanks to the modification explained above. Lemma 4.2 has another important implication for the asymptotic theory of our estimator: we can deduce a law of large numbers for our estimator, which can be regarded as a counterpart to Theorem 2.3 in [18]. This is presented in Section 5.1.

The organization of this paper is as follows. In Section 2 we introduce the mathematical model and explain the construction of our estimator. In Section 3 the main result of this paper is stated. Section 4 provides a brief sketch of the proof of the main result, while in Section 5 we deal with some topics related to the statistical application of our estimator. Most of the proofs are deferred to Sections 6–14.

2. The setting

We start by introducing an appropriate stochastic basis on which our observation data are defined. Let B^(0) = (Ω^(0), F^(0), F^(0) = (F^(0)_t)_{t∈R_+}, P^(0)) be a stochastic basis. For any t ∈ R_+ we have a transition probability Q_t(ω^(0), dz) from (Ω^(0), F^(0)_t) into R^2, which satisfies

∫ z Q_t(ω^(0), dz) = 0.  (2.1)

We endow the space Ω^(1) = (R^2)^[0,∞) with the product Borel σ-field F^(1) and with the probability Q(ω^(0), dω^(1)) given by the product ⊗_{t∈R_+} Q_t(ω^(0), ·). We write (ε_t)_{t∈R_+} for the "canonical process" on (Ω^(1), F^(1)) and define the filtration F^(1)_t = σ(ε_s; s ≤ t). Then we consider the stochastic basis B = (Ω, F, F = (F_t)_{t∈R_+}, P) defined as follows:

Ω = Ω^(0) × Ω^(1),  F = F^(0) ⊗ F^(1),  F_t = ∩_{s>t} F^(0)_s ⊗ F^(1)_s,
P(dω^(0), dω^(1)) = P^(0)(dω^(0)) Q(ω^(0), dω^(1)).


Any variable or process defined on either Ω^(0) or Ω^(1) can be considered in the usual way as a variable or process on Ω.

Next we introduce our observation data. Let X and Y be two continuous semimartingales on B^(0). We also have two sequences of F^(0)-stopping times (S^i)_{i∈Z_+} and (T^j)_{j∈Z_+} that are increasing a.s., with

S^i ↑ ∞ and T^j ↑ ∞.  (2.2)

As a matter of convenience we set S^{-1} = T^{-1} = 0. These stopping times implicitly depend on a parameter n ∈ N, which represents the frequency of the observations. Denote by (b_n) a sequence of positive numbers tending to 0 as n → ∞, and let ξ′ be a constant satisfying 0 < ξ′ < 1. In this paper we will always assume that

r_n(t) := sup_{i∈Z_+} (S^i ∧ t − S^{i−1} ∧ t) ∨ sup_{j∈Z_+} (T^j ∧ t − T^{j−1} ∧ t) = o_p(b_n^{ξ′})  (2.3)

as n → ∞ for any t ∈ R_+. The processes X and Y are observed at the sampling times (S^i) and (T^j) with observation errors (ε^X_{S^i})_{i∈Z_+} and (ε^Y_{T^j})_{j∈Z_+} respectively, where ε_t = (ε^X_t, ε^Y_t) for each t. Thus the observation data X = (X_{S^i})_{i∈Z_+} and Y = (Y_{T^j})_{j∈Z_+} take the form

X_{S^i} = X_{S^i} + ε^X_{S^i},  Y_{T^j} = Y_{T^j} + ε^Y_{T^j}.

Now we explain the construction of our estimator. First we introduce some notation. We choose a sequence k_n of integers and a number θ ∈ (0, ∞) satisfying

k_n = θ b_n^{−1/2} + o(b_n^{−1/4})  (2.4)

(for example k_n = ⌈θ b_n^{−1/2}⌉). We also choose a continuous function g : [0, 1] → R which is piecewise C^1 with a piecewise Lipschitz derivative g′ and satisfies

g(0) = g(1) = 0,  ψ_HY := ∫_0^1 g(x) dx ≠ 0  (2.5)

(for example g(x) = x ∧ (1 − x)). We associate the random intervals I^i = [S^{i−1}, S^i) and J^j = [T^{j−1}, T^j) with the sampling schemes (S^i) and (T^j), and refer to I = (I^i)_{i∈N} and J = (J^j)_{j∈N} as the sampling designs for X and Y. We introduce the pre-averaged observation data of X and Y based on the sampling designs I and J, respectively, as follows:

X̄(I)^i = ∑_{p=1}^{k_n−1} g(p/k_n) (X_{S^{i+p}} − X_{S^{i+p−1}}),
Ȳ(J)^j = ∑_{q=1}^{k_n−1} g(q/k_n) (Y_{T^{j+q}} − Y_{T^{j+q−1}}),  i, j = 0, 1, … .
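As a minimal numerical sketch (ours, not the paper's code) of the pre-averaging map above, the following implements the weighted sums of increments with the example weight g(x) = x ∧ (1 − x) and the example window choice k_n = ⌈θ b_n^{−1/2}⌉ from (2.4); all function names are our own.

```python
import numpy as np

def g(x):
    # Example weight function from (2.5): g(x) = x ∧ (1 - x)
    return np.minimum(x, 1.0 - x)

def choose_kn(b_n, theta=1.0):
    # Window length suggested after (2.4): k_n = ceil(theta * b_n^{-1/2})
    return int(np.ceil(theta * b_n ** -0.5))

def pre_average(obs, kn, weight=g):
    """Pre-averaged data: for each i,
    sum_{p=1}^{kn-1} weight(p/kn) * (obs[i+p] - obs[i+p-1])."""
    obs = np.asarray(obs, dtype=float)
    w = weight(np.arange(1, kn) / kn)   # weights g(p/kn), p = 1, ..., kn-1
    incr = np.diff(obs)                 # successive increments of the observations
    return np.array([w @ incr[i:i + kn - 1] for i in range(len(obs) - kn + 1)])
```

For a linearly increasing series (all increments equal to 1) and kn = 3, each pre-averaged value equals g(1/3) + g(2/3) = 2/3, which is a convenient sanity check.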

The following quantity was introduced in Christensen et al. [8]:

Definition 2.1 (Pre-averaged Hayashi–Yoshida Estimator). The pre-averaged Hayashi–Yoshida estimator, or pre-averaged HY estimator, of X and Y associated with the sampling designs I and J is the process

PHY(X, Y; I, J)^n_t = (1/(ψ_HY k_n)^2) ∑_{i,j=0; S^{i+k_n} ∨ T^{j+k_n} ≤ t}^∞ X̄(I)^i Ȳ(J)^j 1_{{[S^i, S^{i+k_n}) ∩ [T^j, T^{j+k_n}) ≠ ∅}},  t ∈ R_+.

Remark. In order to improve the finite-sample performance of the above estimator, it is efficient to replace the quantity ψ_HY with (1/k_n) ∑_{p=1}^{k_n−1} g(p/k_n) in the above definition. Such adjustments often appear in the literature on pre-averaging estimators.

As mentioned in Section 1, we modify the above pre-averaged HY estimator by applying an interpolation method similar to the refresh sampling method, for a technical reason. The following notion was introduced to this area in [4]:

Definition 2.2 (Refresh Time). The first refresh time of the sampling designs I and J is defined as R^0 = S^0 ∨ T^0, and subsequent refresh times as

R^k := min{S^i | S^i > R^{k−1}} ∨ min{T^j | T^j > R^{k−1}},  k = 1, 2, … .

We introduce a new sampling scheme by a kind of next-tick interpolation to the refresh times. That is, we define S̃^0 := S^0, T̃^0 := T^0, and

S̃^k := min{S^i | S^i > R^{k−1}},  T̃^k := min{T^j | T^j > R^{k−1}},  k = 1, 2, … .

Then we create new sampling designs as follows:

Ĩ^k := [S̃^{k−1}, S̃^k),  J̃^k := [T̃^{k−1}, T̃^k),  Ĩ := (Ĩ^i)_{i∈N},  J̃ := (J̃^j)_{j∈N}.

For the sampling designs Ĩ and J̃ obtained in this manner, we will consider the pre-averaged Hayashi–Yoshida estimator P̃HY(X, Y)^n := PHY(X, Y; Ĩ, J̃)^n.

3. Main results

We start by introducing some notation and conditions in order to state our main result. We write the canonical decompositions of X and Y as follows:

X = A^X + M^X,  Y = A^Y + M^Y.  (3.1)

Here, A^X and A^Y are continuous F^(0)-adapted processes with locally finite variation, while M^X and M^Y are continuous F^(0)-local martingales.

Next, let N^n_t = ∑_{k=1}^∞ 1_{{R^k ≤ t}} for each t ∈ R_+ and Γ^k = [R^{k−1}, R^k) for each k ∈ N. Let H^n = (H^n_t)_{t∈R_+} be a sequence of filtrations of F to which N^n is adapted, and for each n and each ρ ≥ 0 we define the processes χ^n and G(ρ)^n by

χ^n_s = P(S̃^k = T̃^k | H^n_{R^{k−1}}),  G(ρ)^n_s = E[(b_n^{−1}|Γ^k|)^ρ | H^n_{R^{k−1}}]

when s ∈ Γ^k. The following condition is necessary to compute the asymptotic variance of the estimation error of our estimator explicitly; it ensures that the quantities appearing in the asymptotic variance indeed converge. For a sequence (X^n) of càdlàg processes and a càdlàg

process X, we write X^n →^{Sk.p.} X if (X^n) converges to X in probability for the Skorokhod topology. Recall that ξ′ is given in (2.3).

[A1′] (i) For each n, we have a càdlàg H^n-adapted process G^n and a random subset N^0_n of N such that (#N^0_n)_{n∈N} is tight, G(1)^n_{R^{k−1}} = G^n_{R^{k−1}} for any k ∈ N − N^0_n, and there exists a càdlàg F^(0)-adapted process G satisfying that G and G_− do not vanish and that G^n →^{Sk.p.} G as n → ∞.
(ii) There exists a constant ρ ≥ 1/ξ′ such that (sup_{0≤s≤t} G(ρ)^n_s)_{n∈N} is tight for all t > 0.
(iii) For each n, we have a càdlàg H^n-adapted process χ′^n and a random subset N′_n of N such that (#N′_n)_{n∈N} is tight, χ^n_{R^{k−1}} = χ′^n_{R^{k−1}} for any k ∈ N − N′_n, and there exists a càdlàg F^(0)-adapted process χ such that χ′^n →^{Sk.p.} χ as n → ∞.

Remark 3.1. Conditions of the kind [A1′](i)–(ii) appear in [4,17,32]. The condition [A1′](iii) is satisfied, for example, when (S^i) = (T^j) (a synchronous case) with χ ≡ 1, or when S^i ≠ T^j for all i, j ≥ 1 (a completely nonsynchronous case) with χ ≡ 0.

Next, we introduce the following strong predictability condition for the sampling designs, which is an analog of the condition [A2] in [21]. Let ξ be a positive constant satisfying 1/2 < ξ < 1.

[A2] For every n and every i ∈ N, S^i and T^i are G^(n)-stopping times, where G^(n) = (G^(n)_t)_{t∈R_+} is the filtration given by G^(n)_t = F^(0)_{(t − b_n^{ξ−1/2})_+} for t ∈ R_+.

Remark 3.2. When S^i and T^i are independent of X and Y, we may assume that S^i and T^i are F^(0)_0-measurable without loss of generality. This is because X and Y are still continuous semimartingales with the canonical decompositions (3.1) with respect to the filtration generated by F^(0) and S^0, T^0, S^1, T^1, … (for all n). Note that the condition [A1′] uses the filtration H^n, different from F^(0), so that enlargements of the filtration F^(0) cause no problem for [A1′]. This implies that we can always assume that sampling times independent of X and Y satisfy the condition [A2]. Another example of sampling times satisfying [A2] (and [A1′]) is given as follows. Let (S^i) and (T^j) be arbitrary sequences of F^(0)-stopping times satisfying [A1′]. Then the sequences (S^i + b_n^{ξ−1/2}) and (T^j + b_n^{ξ−1/2}) obviously satisfy [A1′] and [A2]. This example shows that [A2] intuitively means that sampling times are determined a short time (= b_n^{ξ−1/2}) after changes of the latent processes. In applications, this type of delay occurs while a trader is asking the broker to trade in a financial market. On the other hand, [A2] rules out some naturally interesting situations. For example, it rules out the case where the S^i are the successive hitting times of a spatial grid by X, a case considered in [16,27] in a univariate pure diffusion setting.

For a process f and a stopping time σ, we denote by f^σ the process (f_{σ∧t})_{t∈R_+}. The following conditions are analogs of the conditions [A3] and [A4] in [21]:

[A3] For each V, W = X, Y, [V, W] is absolutely continuous with a càdlàg derivative, and for the density process f = [V, W]′ there is a sequence (σ_k) of F^(0)-stopping times such that


σ_k ↑ ∞ as k → ∞ and for every k and any λ > 0, we have a positive constant C_{k,λ} satisfying

E[|f^{σ_k}_{τ_1} − f^{σ_k}_{τ_2}|^2 | F_{τ_1∧τ_2}] ≤ C_{k,λ} E[|τ_1 − τ_2|^{1−λ} | F_{τ_1∧τ_2}]  (3.2)

for any bounded F^(0)-stopping times τ_1 and τ_2, and f is adapted to H^n.

[A4] ξ ∨ (9/10) < ξ′ and (2.3) holds for every t ∈ R_+.

The following conditions, which are analogs of the conditions [A5] and [A6] in [21], are necessary to deal with the drift parts. For a (random) interval I and a time t, we write I(t) = I ∩ [0, t).

[A5] A^X and A^Y are absolutely continuous with càdlàg derivatives, and there is a sequence (σ_k) of F^(0)-stopping times such that σ_k ↑ ∞ as k → ∞ and for every k we have a positive constant C_k and λ_k ∈ (0, 3/4) satisfying

E[|f^{σ_k}_t − f^{σ_k}_τ|^2 | F_{τ∧t}] ≤ C_k E[|t − τ|^{1−λ_k} | F_{τ∧t}]  (3.3)

for every t > 0 and any bounded F^(0)-stopping time τ, for the density processes f = (A^X)′ and (A^Y)′.

[A6] For each t ∈ R_+, b_n^{−1} H_n(t) = O_p(1) as n → ∞, where H_n(t) = ∑_{k=1}^∞ |Γ^k(t)|^2.

The following condition is a regularity condition for the exogenous noise process:

[N] (∫|z|^8 Q_t(dz))_{t∈R_+} is a locally bounded process, and the covariance matrix process

Ψ_t(ω^(0)) = ∫ z z* Q_t(ω^(0), dz)  (3.4)

is càdlàg, quasi-left continuous and adapted to H^n for every n. Furthermore, there is a sequence (σ^k) of F^(0)-stopping times such that σ^k ↑ ∞ as k → ∞ and for every k and any λ > 0, we have a positive constant C_{k,λ} satisfying

E[|Ψ^{ij}_{σ^k ∧ t} − Ψ^{ij}_{σ^k ∧ (t−h)_+}|^2 | F_{(t−h)_+}] ≤ C_{k,λ} h^{1−λ}  (3.5)

for every i, j ∈ {1, 2} and every t, h > 0.

Remark 3.3. The inequalities (3.2), (3.3) and (3.5) are satisfied when w(f; h, t) = O_p(h^{1/2−λ}) as h → 0 for every t, λ ∈ (0, ∞), for example. Here, for a real-valued function x on R_+, the modulus of continuity on [0, T] is denoted by w(x; δ, T) = sup{|x(t) − x(s)|; s, t ∈ [0, T], |s − t| ≤ δ} for T, δ > 0. This is the original condition in [21]. Another such example is the case where there exist an F^(0)-adapted process B with locally integrable variation and a locally square-integrable martingale L such that f = B + L, and both the predictable compensator of the variation process of B and the predictable quadratic variation of L are absolutely continuous with locally bounded derivatives. This type of condition is familiar in the context of the estimation of volatility-type quantities; see [17,25] for instance. Furthermore, in both cases f is càdlàg and quasi-left continuous.

For two real-valued bounded measurable functions α, β on R, define the function ψ_{α,β} on R by

ψ_{α,β}(x) = ∫_0^1 ∫_{x+u−1}^{x+u+1} α(u) β(v) dv du.


We extend the functions g and g′ to the whole real line by setting g(x) = g′(x) = 0 for x ∉ [0, 1]. Then we put

κ := ∫_{−2}^{2} ψ_{g,g}(x)^2 dx,  κ̃ := ∫_{−2}^{2} ψ_{g′,g′}(x)^2 dx,  κ̄ := ∫_{−2}^{2} ψ_{g,g′}(x)^2 dx.
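As a rough numerical check (ours, not part of the paper), the constants κ, κ̃, κ̄ can be approximated by direct quadrature for the example weight g(x) = x ∧ (1 − x); the grid sizes below are illustrative, and all function names are our own.

```python
import numpy as np

def g(x):
    # Example weight g(x) = x ∧ (1 - x), extended by 0 outside [0, 1]
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x <= 1), np.minimum(x, 1 - x), 0.0)

def g_prime(x):
    # Piecewise derivative of g: +1 on (0, 1/2), -1 on (1/2, 1), 0 elsewhere
    x = np.asarray(x, dtype=float)
    return np.where((x > 0) & (x < 0.5), 1.0,
                    np.where((x >= 0.5) & (x < 1), -1.0, 0.0))

def psi(alpha, beta, x, m=200):
    # ψ_{α,β}(x) = ∫_0^1 du ∫_{x+u-1}^{x+u+1} dv α(u) β(v), midpoint rule
    u = (np.arange(m) + 0.5) / m                  # midpoints on [0, 1]
    w = -1.0 + (np.arange(m) + 0.5) * (2.0 / m)   # midpoints on [-1, 1]
    v = x + u[:, None] + w[None, :]               # inner variable v = x + u + w
    return float((alpha(u)[:, None] * beta(v)).sum() * (1.0 / m) * (2.0 / m))

def kappa_const(alpha, beta, m=200, nx=160):
    # ∫_{-2}^{2} ψ_{α,β}(x)^2 dx by the midpoint rule on [-2, 2]
    xs = -2.0 + (np.arange(nx) + 0.5) * (4.0 / nx)
    return sum(psi(alpha, beta, x, m) ** 2 for x in xs) * (4.0 / nx)
```

With these, κ ≈ kappa_const(g, g), κ̃ ≈ kappa_const(g_prime, g_prime), κ̄ ≈ kappa_const(g, g_prime), and for this g one also has ψ_HY = ∫_0^1 g(x) dx = 1/4, which is useful when evaluating the asymptotic variance (3.6) numerically.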

We denote by D(R_+) the space of càdlàg functions on R_+ equipped with the Skorokhod topology. A sequence of random elements X^n defined on a probability space (Ω, F, P) is said to converge stably in law to a random element X defined on an appropriate extension (Ω̃, F̃, P̃) of (Ω, F, P) if E[Y g(X^n)] → E[Y g(X)] for any F-measurable and bounded random variable Y and any bounded and continuous function g. We then write X^n →^{d_s} X. A sequence (X^n) of stochastic processes is said to converge to a process X uniformly on compacts in probability (abbreviated ucp) if, for each t > 0, sup_{0≤s≤t} |X^n_s − X_s| →^p 0 as n → ∞. We then write X^n →^{ucp} X.

Now we are ready to state our main result.

Theorem 3.1. Suppose [A1′], [A2]–[A6] and [N] are satisfied. Then

b_n^{−1/4} { P̃HY(X, Y)^n − [X, Y] } →^{d_s} ∫_0^· w_s dW̃_s  in D(R_+)

as n → ∞, where W̃ is a one-dimensional standard Wiener process (defined on an extension of B) independent of F, and w is given by

w_s^2 = ψ_HY^{−4} [ θκ { [X]′_s [Y]′_s + ([X, Y]′_s)^2 } G_s + θ^{−3} κ̃ { Ψ^{11}_s Ψ^{22}_s + (Ψ^{12}_s)^2 χ_s } G_s^{−1} + θ^{−1} κ̄ { [X]′_s Ψ^{22}_s + [Y]′_s Ψ^{11}_s + 2 [X, Y]′_s Ψ^{12}_s χ_s } ].  (3.6)

A sketch of the proof is given in the next section.

4. Stable convergence of the estimation error

In this section we briefly sketch the proof of the main theorem. First we introduce some notation. For processes V and W, V • W denotes the integral (either stochastic or ordinary) of V with respect to W. For any semimartingale V and any (random) interval I, we define the processes V(I)_t and I_t by V(I)_t = ∫_0^t 1_I(s−) dV_s and I_t = 1_I(t), respectively. We denote by Φ the set of all real-valued piecewise Lipschitz functions α on R satisfying α(x) = 0 for any x ∉ [0, 1]. For a function α on R we write α^n_p = α(p/k_n) for each n ∈ N and p ∈ Z. For any semimartingale V, any sampling design D = (D^i)_{i∈N} and any α ∈ Φ, we define the process V̄_α(D)^i_t for each i ∈ N by

V̄_α(D)^i_t = ∑_{p=0}^{k_n−1} α^n_p V(D^{i+p})_t.

We introduce the following auxiliary regularity conditions:

[C1] For each of the processes V = A^X, A^Y, [X], [Y] and [X, Y], V is absolutely continuous and V′ − V′_0 is a locally bounded process.
[C2] (∫|z|^2 Q_t(dz))_{t∈R_+} is a locally bounded process.
[C3] b_n N^n_t = O_p(1) as n → ∞ for every t.


We define Ψ by (3.4) whenever we have [C2]. Note that the conditions [C1]–[C3] are satisfied under [A1′], [A2]–[A6] and [N], as follows. [C2] obviously follows from [N], while [A1′] yields [C3] (see Lemma 11.2). Moreover, [A3] and [A5] imply the condition [C1]. In fact, we can prove the following result; the proof is given in Section 6.

Lemma 4.1. Let f be a càdlàg F^(0)-adapted process and suppose that there is a sequence (σ_k) of F^(0)-stopping times such that σ_k ↑ ∞ as k → ∞ and that for every k we have a positive constant C_k and λ_k ∈ [0, 1] satisfying

E[|f^{σ_k}_t − f^{σ_k}_s|^2 | F_{t∧s}] ≤ C_k E[|t − s|^{1−λ_k} | F_{t∧s}]

for every t, s ≥ 0. Then f − f_0 is a locally bounded process.

Let

E^X_t = −(1/k_n) ∑_{p=1}^∞ ε^X_{S̃^p} 1_{{S̃^p ≤ t}},  E^Y_t = −(1/k_n) ∑_{q=1}^∞ ε^Y_{T̃^q} 1_{{T̃^q ≤ t}}.

E^X and E^Y are obviously purely discontinuous locally square-integrable martingales on B if [C2] holds (note that both (S̃^i) and (T̃^j) are F^(0)-stopping times). Furthermore, if Ψ is càdlàg and quasi-left continuous and both (S̃^i) and (T̃^j) are F^(0)-predictable times, then we have

⟨E^X⟩_t = (1/k_n^2) ∑_{p=1}^∞ Ψ^{11}_{S̃^p} 1_{{S̃^p ≤ t}},  ⟨E^Y⟩_t = (1/k_n^2) ∑_{q=1}^∞ Ψ^{22}_{T̃^q} 1_{{T̃^q ≤ t}},

⟨E^X, E^Y⟩_t = (1/k_n^2) ∑_{p,q=1}^∞ Ψ^{12}_{S̃^p} 1_{{S̃^p = T̃^q ≤ t}}.

For each i, j ∈ Z_+, let

Ī^i = [S̃^i, S̃^{i+k_n}),  J̄^j = [T̃^j, T̃^{j+k_n}),  K̄^{ij} = 1_{{Ī^i ∩ J̄^j ≠ ∅}},

and define the process K̄^{ij}_t by K̄^{ij}_t = 1_{{Ī^i(t) ∩ J̄^j(t) ≠ ∅}} (recall that I(t) denotes I ∩ [0, t) for a random interval I and a time t). Furthermore, for any semimartingales V, W and any α, β ∈ Φ, set

L̄_{α,β}(V, W)^{ij} = V̄_α(Ĩ)^i_− • W̄_β(J̃)^j + W̄_β(J̃)^j_− • V̄_α(Ĩ)^i

for each i, j ∈ N. We define the processes M(k)^n_t (k = 1, 2, 3, 4) by

M(1)^n_t = (1/(ψ_HY k_n)^2) ∑_{i,j=1}^∞ K̄^{ij}_t L̄_{g,g}(X, Y)^{ij}_t,
M(2)^n_t = (1/(ψ_HY k_n)^2) ∑_{i,j=1}^∞ K̄^{ij}_t L̄_{g′,g′}(E^X, E^Y)^{ij}_t,
M(3)^n_t = (1/(ψ_HY k_n)^2) ∑_{i,j=1}^∞ K̄^{ij}_t L̄_{g,g′}(X, E^Y)^{ij}_t,
M(4)^n_t = (1/(ψ_HY k_n)^2) ∑_{i,j=1}^∞ K̄^{ij}_t L̄_{g′,g}(E^X, Y)^{ij}_t,

and set M^n_t = ∑_{k=1}^4 M(k)^n_t.

Then we have the following lemma.


Lemma 4.2. Suppose that (2.3) and [C1]–[C3] are satisfied. Then

b_n^{−γ} ( P̃HY(X, Y)^n − [X, Y] − M^n ) →^{ucp} 0

as n → ∞ for any γ < ξ′ − 1/2.

We give a proof of Lemma 4.2 in Section 7. The above lemma implies that we may consider M^n instead of the estimation error of our estimator as long as ξ′ > 3/4. Furthermore, we note the following result (shown in Section 6):

Lemma 4.3. Let V, W be two semimartingales and g, h two real functions on R. For any i, j ∈ Z_+ and any t ∈ R_+, K̄^{ij}_t L̄_{g,h}(V, W)^{ij}_t = K̄^{ij}_− • L̄_{g,h}(V, W)^{ij}_t.

We momentarily assume that X and Y are continuous local martingales. Lemma 4.3 implies that M^n is a locally square-integrable martingale. Therefore, we can define the quantities V^n_t := ⟨M^n⟩_t and V^n_{N,t} := ⟨M^n, N⟩_t for a locally square-integrable martingale N. Then we consider the following condition.

[A1*] There exists an F-adapted, nondecreasing, continuous process (V_t)_{t∈R_+} such that b_n^{−1/2} V^n_t →^p V_t as n → ∞ for every t.

The next two lemmas imply that the above two conditions are sufficient for our stable convergence problem. They are proved in Sections 8 and 9, respectively.

Lemma 4.4. Suppose that [A1*], [A4] and [C1]–[C3] are satisfied. Then for any square-integrable martingale N orthogonal to (X, Y) we have b_n^{−1/4} ⟨M^n, N⟩_t →^p 0 as n → ∞ for every t.

Lemma 4.5. Suppose that [A4], [C1]–[C3] and [N] are satisfied. Then ∑_{s: 0≤s≤t} |b_n^{−1/4} ΔM^n_s|^4 →^p 0 as n → ∞ for any t > 0.

We consider the following conditions:

[B1] b_n^{−1/4} V^n_{N,t} →^p 0 as n → ∞ for every t and any N ∈ {X, Y}.
[W] There exists an F-predictable process w such that V_· = ∫_0^· w_s^2 ds.
[SC] b_n^{−1/4} M^n →^{d_s} M in D(R_+) as n → ∞, where M = ∫_0^· w_s dW̃_s, w is some predictable process, and W̃ is a one-dimensional standard Wiener process (defined on an extension of B) independent of F.

Then we obtain the following result (shown in Section 6):

Proposition 4.1. Suppose that [A1*], [B1], [A4], [C1]–[C3], [N] and [W] are fulfilled. Then [SC] holds.

The remaining problem is to check the conditions [A1*], [B1] and [W]. Following [21], we introduce some notation and conditions. For any locally square-integrable martingales M, N, M′, N′ and any α, β, α′, β′ ∈ Φ, let

V_{α,β;α′,β′}(M, N; M′, N′)^{iji′j′}_t := ⟨M̄_α(Ĩ)^i, M̄′_{α′}(Ĩ)^{i′}⟩_t ⟨N̄_β(J̃)^j, N̄′_{β′}(J̃)^{j′}⟩_t + ⟨M̄_α(Ĩ)^i, N̄′_{β′}(J̃)^{j′}⟩_t ⟨M̄′_{α′}(Ĩ)^{i′}, N̄_β(J̃)^j⟩_t.


Then we introduce the following condition:

[B2] For any M, M′ ∈ {X, E^X}, any N, N′ ∈ {Y, E^Y} and any α, β, α′, β′ ∈ Φ,

b_n^{−1/2} ∑_{i,j,i′,j′} (K̄^{ij} K̄^{i′j′})_− • ⟨L̄_{α,β}(M, N)^{ij}, L̄_{α′,β′}(M′, N′)^{i′j′}⟩_t
  = b_n^{−1/2} ∑_{i,j,i′,j′} (K̄^{ij} K̄^{i′j′})_− • V_{α,β;α′,β′}(M, N; M′, N′)^{iji′j′}_t + o_p(k_n^4)

as n → ∞ for every t ∈ R_+. Let

V̄^{n,1}_t = (1/(ψ_HY k_n)^4) ∑_{p,q} ψ_{g,g}((q−p)/k_n)^2 { [X](Ĩ^p)_t [Y](J̃^q)_t + [X, Y](Ĩ^p)_t [X, Y](J̃^q)_t },

V̄^{n,2}_t = (1/(ψ_HY k_n)^4) ∑_{p,q} ψ_{g′,g′}((q−p)/k_n)^2 { Ψ^{11}_{S̃^p} Ψ^{22}_{T̃^q} 1_{{S̃^p ∨ T̃^q ≤ t}} + Ψ^{12}_{R^p} Ψ^{12}_{R^q} 1_{{S̃^p = T̃^p ≤ t}} 1_{{S̃^q = T̃^q ≤ t}} },

V̄^{n,3}_t = (1/(ψ_HY^4 k_n^2)) ∑_{p,q} ψ_{g,g′}((q−p)/k_n)^2 [X](Ĩ^p)_t Ψ^{22}_{T̃^q} 1_{{T̃^q ≤ t}},

V̄^{n,4}_t = (1/(ψ_HY^4 k_n^2)) ∑_{p,q} ψ_{g,g′}((p−q)/k_n)^2 [Y](J̃^q)_t Ψ^{11}_{S̃^p} 1_{{S̃^p ≤ t}},

and

V̄^{n,34}_t = (1/(ψ_HY^4 k_n^2)) ∑_{p,q} ψ_{g,g′}((q−p)/k_n)^2 [X, Y](Ĩ^p)_t Ψ^{12}_{R^q} 1_{{S̃^q = T̃^q ≤ t}},

and set V̄^n_t = ∑_{l=1}^4 V̄^{n,l}_t + 2 V̄^{n,34}_t. Then we have the following proposition, which enables us to work with V̄^n, a more tractable process than V^n. The proof is given in Section 10.

Proposition 4.2. Suppose that [A4], [N], [C1]–[C3] and [B2] hold. Suppose also that S^i and T^i are F^(0)-predictable times for all i. Then V^n_t = V̄^n_t + o_p(b_n^{1/2}) as n → ∞ for all t ∈ R_+.

We modify [A1*].

[A1] There exists an F-adapted, nondecreasing, continuous process (V_t)_{t∈R_+} such that b_n^{−1/2} V̄^n_t →^p V_t as n → ∞ for every t.

By Proposition 4.2, we can rephrase Proposition 4.1 as follows.

Proposition 4.3. Suppose that [A1], [A4], [B1], [B2], [C1]–[C3], [N] and [W] for V in [A1] are satisfied. Then [SC] holds.

The condition [W] for V in [A1] can be checked with the following lemma, which is proved in Section 11.

Lemma 4.6. Suppose that [A1′], [A3], [A4] and [N] hold. Then

(a) b_n^{−1/2} ∑_{p,q=1}^∞ ψ_{g,g}((q−p)/k_n)^2 ⟨X⟩(Ĩ^p)_t ⟨Y⟩(J̃^q)_t →^p θκ ∫_0^t ⟨X⟩′_s ⟨Y⟩′_s G_s ds,

(b) b_n^{−1/2} ∑_{p,q=1}^∞ ψ_{g,g}((q−p)/k_n)^2 ⟨X, Y⟩(Ĩ^p)_t ⟨X, Y⟩(J̃^q)_t →^p θκ ∫_0^t (⟨X, Y⟩′_s)^2 G_s ds,

(c) b_n^{−1/2} (1/k_n^4) ∑_{p,q=1}^∞ ψ_{g′,g′}((q−p)/k_n)^2 Ψ^{11}_{S̃^p} Ψ^{22}_{T̃^q} 1_{{S̃^p ∨ T̃^q ≤ t}} →^p θ^{−3} κ̃ ∫_0^t Ψ^{11}_s Ψ^{22}_s G_s^{−1} ds,

(d) b_n^{−1/2} (1/k_n^4) ∑_{p,q=1}^∞ ψ_{g′,g′}((q−p)/k_n)^2 Ψ^{12}_{S̃^p} 1_{{S̃^p = T̃^p ≤ t}} Ψ^{12}_{T̃^q} 1_{{S̃^q = T̃^q ≤ t}} →^p θ^{−3} κ̃ ∫_0^t (Ψ^{12}_s)^2 χ_s G_s^{−1} ds,

(e) b_n^{−1/2} (1/k_n^2) ∑_{p,q=1}^∞ ψ_{g,g′}((q−p)/k_n)^2 ⟨X⟩(Ĩ^p)_t Ψ^{22}_{T̃^q} 1_{{T̃^q ≤ t}} →^p θ^{−1} κ̄ ∫_0^t ⟨X⟩′_s Ψ^{22}_s ds,

(f) b_n^{−1/2} (1/k_n^2) ∑_{p,q=1}^∞ ψ_{g,g′}((p−q)/k_n)^2 ⟨Y⟩(J̃^q)_t Ψ^{11}_{S̃^p} 1_{{S̃^p ≤ t}} →^p θ^{−1} κ̄ ∫_0^t ⟨Y⟩′_s Ψ^{11}_s ds,

(g) b_n^{−1/2} (1/k_n^2) ∑_{p,q=1}^∞ ψ_{g,g′}((q−p)/k_n)^2 ⟨X, Y⟩(Ĩ^p)_t Ψ^{12}_{R^q} 1_{{S̃^q = T̃^q ≤ t}} →^p θ^{−1} κ̄ ∫_0^t ⟨X, Y⟩′_s Ψ^{12}_s χ_s ds

as n → ∞ for every t.

The following proposition, which is an analog of Proposition 5.1 in [21], gives a sufficient condition for [B2]. The proof is given in Section 12.

Proposition 4.4. [B2] holds true under [A2]–[A4], [N] and [C3].

It still remains to check the asymptotic orthogonality condition [B1]. However, this turns out to be the same kind of task as verifying [B2]. This phenomenon is also seen in [21].

Theorem 4.1. Suppose that X and Y are continuous semimartingales given by (3.1).
(a) If [A1]–[A6], [N], [C3] and [W] are satisfied, then [SC] holds.
(b) If [A1′], [A2]–[A6] and [N] are satisfied, then [SC] holds for w given by (3.6).

It is worth remarking that neither [A5] nor [A6] is necessary for local martingales, as seen in [21].

Theorem 4.2. Suppose that X and Y are continuous local martingales.
(a) If [A1]–[A4], [N], [C3] and [W] are satisfied, then [SC] holds.
(b) If [A1′], [A2]–[A4] and [N] are satisfied, then [SC] holds for w given by (3.6).

Theorems 4.1 and 4.2 are proved in Section 13.

Proof of Theorem 3.1. The desired result follows from Lemma 4.2 and Theorem 4.1(b).



5. Some related topics for statistical application

5.1. Consistency

In order to obtain our main theorem, we needed to impose a kind of predictability condition, such as [A2], on the sampling scheme. In fact, it is still an active research area to develop asymptotic theories for estimators of volatility-type quantities when the sampling scheme is random and depends on the observed processes, even when neither nonsynchronicity nor microstructure noise is present; see [16,27] for instance. The situation, however, changes dramatically when we restrict our attention to the consistency of the estimators. As is well known, the classical realized covariance is a consistent estimator of the integrated covariance whenever the sampling scheme consists of stopping times and the mesh size of the sampling times tends to 0. Furthermore, Hayashi and Kusuoka [18] verified such a result for the Hayashi–Yoshida estimator in the presence of nonsynchronicity of the sampling scheme. The following theorem, which is a by-product of Lemma 4.2, tells us that such a result is still valid for our estimator:

Theorem 5.1. Suppose (2.3) and [C1]–[C3] are satisfied. Then $\widehat{PHY}(\mathsf X,\mathsf Y)^n\xrightarrow{ucp}[X,Y]$ as $n\to\infty$, provided that $\xi'>1/2$.
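As an illustrative numerical companion to this consistency statement, the following self-contained simulation is a sketch of my own, not code from the paper: it builds a plain pre-averaged Hayashi–Yoshida estimate (without the modification studied in this paper) from nonsynchronous, noisy Poisson samples of two correlated Brownian motions and compares it with the true integrated covariance $\rho=0.5$. All parameter choices (intensities, the tuning constant in $k_n$, the noise level) are hypothetical.

```python
import numpy as np

def preaveraged_hy(S, T, Xobs, Yobs, kn):
    """Pre-averaged Hayashi-Yoshida estimator (illustrative sketch).

    S, T       : increasing arrays of sampling times for the two assets
    Xobs, Yobs : noisy observations at those times
    kn         : pre-averaging window length (assumed even here)
    """
    g = lambda x: np.minimum(x, 1.0 - x)   # weight g(x) = x ∧ (1 - x)
    w = g(np.arange(1, kn) / kn)           # g(p/kn), p = 1..kn-1
    psi_hy = 0.25                          # ∫_0^1 g(s) ds; sum of w is exactly kn/4 for even kn

    # pre-averaged returns: Xbar[i] = sum_p g(p/kn) * (Xobs[i+p] - Xobs[i+p-1])
    Xbar = np.correlate(np.diff(Xobs), w, mode="valid")
    Ybar = np.correlate(np.diff(Yobs), w, mode="valid")

    # Hayashi-Yoshida sum over pairs of overlapping pre-averaging windows;
    # the window of Xbar[i] spans [S[i], S[i+kn-1]], similarly for Ybar[j]
    cumY = np.concatenate([[0.0], np.cumsum(Ybar)])
    Tw_end = T[kn - 1:]                    # right endpoints of the Y windows
    est = 0.0
    for i in range(Xbar.size):
        lo, hi = S[i], S[i + kn - 1]
        j0 = np.searchsorted(Tw_end, lo, side="right")            # first j with T[j+kn-1] > lo
        j1 = min(np.searchsorted(T, hi, side="left"), Ybar.size)  # first j with T[j] >= hi
        if j1 > j0:
            est += Xbar[i] * (cumY[j1] - cumY[j0])
    return est / (psi_hy * kn) ** 2

def simulate_once(rng, n_fine=100_000, rho=0.5, lam1=6000, lam2=4000, noise_sd=0.005):
    # latent correlated Brownian motions on a fine grid over [0, 1]
    dt = 1.0 / n_fine
    dW = rng.standard_normal((2, n_fine)) * np.sqrt(dt)
    X = np.concatenate([[0.0], np.cumsum(dW[0])])
    Y = np.concatenate([[0.0], np.cumsum(rho * dW[0] + np.sqrt(1 - rho**2) * dW[1])])
    grid = np.linspace(0.0, 1.0, n_fine + 1)
    # nonsynchronous Poisson sampling plus i.i.d. microstructure noise
    S = np.cumsum(rng.exponential(1 / lam1, 2 * lam1)); S = np.concatenate([[0.0], S[S <= 1.0]])
    T = np.cumsum(rng.exponential(1 / lam2, 2 * lam2)); T = np.concatenate([[0.0], T[T <= 1.0]])
    Xobs = X[np.searchsorted(grid, S)] + noise_sd * rng.standard_normal(S.size)
    Yobs = Y[np.searchsorted(grid, T)] + noise_sd * rng.standard_normal(T.size)
    kn = 2 * int(np.ceil(0.4 * np.sqrt(min(S.size, T.size))))  # kn ∝ sqrt(n), forced even
    return preaveraged_hy(S, T, Xobs, Yobs, kn)

rng = np.random.default_rng(42)
estimates = [simulate_once(rng) for _ in range(6)]
print(np.mean(estimates))  # should land near the true integrated covariance rho = 0.5
```

The choice $g(x)=x\wedge(1-x)$ makes $\sum_{p=1}^{k_n-1}g(p/k_n)=k_n/4$ exactly for even $k_n$, so $\psi_{HY}=1/4$ gives an exact normalization in this sketch; edge truncation of the first and last windows induces a small downward bias of order $k_n/n$.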

The proof is given in Section 14.

Remark. Our estimator is consistent even when the microstructure noise is m-dependent, as is the original pre-averaged Hayashi–Yoshida estimator of [9]. In fact, our estimator is identical to theirs if one regards the sequences $(\mathsf X_{\hat S^i})_{i\in\mathbb Z_+}$ and $(\mathsf Y_{\hat T^j})_{j\in\mathbb Z_+}$ as a set of observation data, and their proof of consistency does not require any special structure for the sampling times. Therefore, we can apply the same argument as in Remark 3.3 of [9] to prove the consistency of our estimator in the m-dependent noise case. To prove the asymptotic mixed normality, we will need some additional conditions for identifying the asymptotic variance of the estimator. This is similar to the case of the original estimator, as stated in Remark 4.5 of [9].

5.2. Poisson sampling with a random change point

As an illustrative example of a sampling scheme satisfying the conditions [A1′], [A2], [A4] and [A6], we shall discuss Poisson sampling with a random change point, which was also discussed in [21]. First we construct the stochastic basis $\mathcal B^{(0)}$ appropriate for the present situation. Let $(\Omega',\mathcal F',(\mathcal F'_t),P')$ be a stochastic basis, and suppose that the semimartingales X and Y are defined on this basis. Suppose also that $\Psi$ is $(\mathcal F'_t)$-adapted. Furthermore, on an auxiliary probability space $(\Omega'',\mathcal F'',P'')$, there are mutually independent standard Poisson processes $(N^k_t)$, $(\bar N^k_t)$ $(k=1,2)$. Then we construct $\mathcal B^{(0)}=(\Omega^{(0)},\mathcal F^{(0)},\mathbf F^{(0)}=(\mathcal F^{(0)}_t)_{t\in\mathbb R_+},P^{(0)})$ by

$\Omega^{(0)}=\Omega'\times\Omega'',\qquad \mathcal F^{(0)}=\mathcal F'\otimes\mathcal F'',\qquad \mathcal F^{(0)}_t=\mathcal F'_t\otimes\mathcal F'',\qquad P^{(0)}=P'\times P''.$

Next we construct our sampling schemes. For each $k=1,2$, let $p^k,\bar p^k\in(0,\infty)$ and let $\tau^k$ be an $(\mathcal F'_t)$-stopping time. Define $(\check S^i)$ and $(\bar S^i)$ as the arrival times of the point processes $N^{n,1}=(N^1_{np^1t})$ and $\bar N^{n,1}=(\bar N^1_{n\bar p^1t})$ respectively. Let $\eta\in(0,\frac12)$ and set $\tau^1_n=\tau^1+n^{-\eta}$. Then we define $(S^i)$ sequentially by $S^0=0$ and

$S^i=\inf_{l,m}\Big(\check S^l_{\{S^{i-1}<\check S^l\le\tau^1_n\}}\wedge\bar S^m_{\{S^{i-1}\vee\tau^1_n<\bar S^m\}}\Big),\qquad i=1,2,\dots.$

Here, for a stopping time T with respect to a filtration $(\mathcal F_t)$ and a set $A\in\mathcal F_T$, we define $T_A$ by $T_A(\omega)=T(\omega)$ if $\omega\in A$ and $T_A(\omega)=\infty$ otherwise. $(T^j)$ is defined in the same way using $N^{n,2}=(N^2_{np^2t})$, $\bar N^{n,2}=(\bar N^2_{n\bar p^2t})$ and $\tau^2_n=\tau^2+n^{-\eta}$ instead of $N^{n,1}$, $\bar N^{n,1}$ and $\tau^1_n$ respectively.

Roughly speaking, $(S^i)$ (resp. $(T^j)$) is a Poisson sampling scheme with intensity $np^1$ (resp. $np^2$) independent of X and Y until the time $\tau^1_n$ (resp. $\tau^2_n$), while its intensity changes to $n\bar p^1$ (resp. $n\bar p^2$) after the time $\tau^1_n$ (resp. $\tau^2_n$). Thus $(S^i)$ (resp. $(T^j)$) depends on X and Y only via $\tau^1_n$ (resp. $\tau^2_n$), and it is determined $n^{-\eta}$ later than X and Y. Therefore, $(S^i)$ and $(T^j)$ seem to satisfy [A2]. Note that sampling times independent of X and Y can always be assumed to satisfy [A2], as stated in Remark 3.2 (the construction of the stochastic basis in this subsection reflects this fact). In fact, [A2] can be verified with $\xi=\eta+1/2$ in a manner similar to the proof of Lemma 8.1 of [21].


Now we verify the conditions [A1′], [A4] and [A6]. First, [A6] is obviously satisfied. Next, since $r_n(t)=O_p(\log n/n)$ as $n\to\infty$ for any $t>0$ by Corollary 1 in [35], (2.3) holds for any $\xi'\in(0,1)$; hence [A4] holds true. Finally, let $\mathbf H^n$ be the filtration generated by the $\sigma$-field $\mathcal F'$ and the processes $\sum_i 1_{\{S^i\le t\}}$ and $\sum_j 1_{\{T^j\le t\}}$. Then [A1′] is verified by the following proposition.

Proposition 5.1. We have [A1′] with $\chi\equiv0$ and

$G_s=\Big(\frac{1}{p^1}+\frac{1}{p^2}-\frac{1}{p^1+p^2}\Big)1_{\{s<\tau^1\wedge\tau^2\}}+\Big(\frac{1}{\bar p^1}+\frac{1}{p^2}-\frac{1}{\bar p^1+p^2}\Big)1_{\{\tau^1\le s<\tau^2\}}+\Big(\frac{1}{p^1}+\frac{1}{\bar p^2}-\frac{1}{p^1+\bar p^2}\Big)1_{\{\tau^2\le s<\tau^1\}}+\Big(\frac{1}{\bar p^1}+\frac{1}{\bar p^2}-\frac{1}{\bar p^1+\bar p^2}\Big)1_{\{\tau^1\vee\tau^2\le s\}}.\qquad(5.1)$
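The constants in (5.1) have a transparent interpretation: by memorylessness, after each refresh time the waits to the next arrival in the two streams are independent exponentials, so a refresh duration is their maximum, whose mean is $1/\lambda^1+1/\lambda^2-1/(\lambda^1+\lambda^2)$. A quick Monte Carlo check of that elementary identity (the intensities below are hypothetical):

```python
import numpy as np

p1, p2 = 2.0, 3.0   # hypothetical intensities of the two Poisson streams
rng = np.random.default_rng(0)

# After a refresh time, the residual waits are independent Exp(p1), Exp(p2),
# so a refresh duration is the maximum of the two.
durations = np.maximum(rng.exponential(1 / p1, 10**6),
                       rng.exponential(1 / p2, 10**6))

analytic = 1 / p1 + 1 / p2 - 1 / (p1 + p2)  # E[max(E1, E2)] for independent exponentials
print(durations.mean(), analytic)
```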

Proof. First, it is evident that [A1′](ii)–(iii) and (v) with $\chi\equiv0$ hold true. Next, let $\lambda^1,\lambda^2\in(0,\infty)$ and consider a Poisson process $(\tilde N_t)$ with intensity $\lambda^1+\lambda^2$. Moreover, let $(\eta_k)_{k\in\mathbb N}$ be an i.i.d. sequence of random variables independent of $\tilde N$ with $P(\eta_1=1)=1-P(\eta_1=0)=\lambda^1/(\lambda^1+\lambda^2)$, and set $\tilde N^1_t=\sum_{k=1}^{\tilde N_t}\eta_k$ and $\tilde N^2_t=\tilde N_t-\tilde N^1_t$. Then a short calculation shows that, for each $k=1,2$, $\tilde N^k$ is a Poisson process with intensity $\lambda^k$; furthermore, $\tilde N^1$ and $\tilde N^2$ are independent. Using this fact, we can show that Theorem 2 in [10] implies that $G(1)^n_{R^{k-1}}=G^n_{R^{k-1}}$ for any $k\in\mathbb N-N^0_n$. Here $G^n$ denotes the process defined by the right-hand side of (5.1) with $\tau^1$ and $\tau^2$ replaced by $\tau^1_n$ and $\tau^2_n$, and $N^0_n=\{k\,|\,R^{k-1}\le\tau^1_n<R^k\}\cup\{k\,|\,R^{k-1}\le\tau^2_n<R^k\}$. Since $\#N^0_n\le2$ and $G^n\xrightarrow{Sk.p.}G$ as $n\to\infty$, we conclude that [A1′](i) with (5.1) holds true. □

5.3. A round-off error model

In this subsection we illustrate an example of a microstructure noise model involving rounding effects. It is a version of Example 2 in [24]. The round-off error is known as one of the sources of the Epps effect; see [30]. Suppose that the observation data is given as follows:

$\mathsf X_{S^i}=\gamma_X\lceil(X_{S^i}+u^X_i)/\gamma_X\rfloor,\qquad \mathsf Y_{T^j}=\gamma_Y\lceil(Y_{T^j}+u^Y_j)/\gamma_Y\rfloor.\qquad(5.2)$

Here $\gamma_X,\gamma_Y>0$, $(u^X_i)$ and $(u^Y_j)$ are mutually independent i.i.d. sequences of random variables independent of X and Y, and for a real number x we denote by $\lceil x\rfloor$ the unique integer a such that $a-1/2\le x<a+1/2$. Suppose that $u^X_i$ and $u^Y_j$ are uniform over $[-\gamma_X/2,\gamma_X/2]$ and $[-\gamma_Y/2,\gamma_Y/2]$ respectively. Then this model can be accommodated to our framework in the following way. For a real number x, let $\mu^X_x$ be the Bernoulli distribution taking the values $\gamma_X(\delta^X_x+\operatorname{sign}(-\delta^X_x))$ and $\gamma_X\delta^X_x$ with probabilities $|\delta^X_x|$ and $1-|\delta^X_x|$, where $\delta^X_x=\lceil x/\gamma_X\rfloor-x/\gamma_X$ and $\operatorname{sign}(a)$ equals 1 if $a\ge0$ and $-1$ otherwise. We define $\mu^Y_x$ similarly, with X replaced by Y. After that, we define $Q_t(\omega^{(0)},\mathrm dx\,\mathrm dy)=\mu^X_{X_t(\omega^{(0)})}(\mathrm dx)\,\mu^Y_{Y_t(\omega^{(0)})}(\mathrm dy)$. We can easily confirm (2.1) and (5.2). Moreover, since the function $x\mapsto|\lceil x\rfloor-x|$ is Lipschitz, the condition [N] holds if we have [C1].
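The two-point law $\mu^X_x$ can be checked directly against the mechanism (5.2): with u uniform on $[-\gamma/2,\gamma/2]$, the round-off error $\gamma\lceil(x+u)/\gamma\rfloor-x$ takes exactly the two values above with the stated probabilities (and has mean zero). A sketch — the helper names below are mine, and the sample point $x=1.37$, tick $\gamma=0.25$ are arbitrary:

```python
import numpy as np

def round_half(y):
    # ⌈y⌋ = the unique integer a with a - 1/2 <= y < a + 1/2
    return np.floor(y + 0.5)

def mu_support(x, gamma):
    # two-point law of the round-off error, as defined in the text
    delta = round_half(x / gamma) - x / gamma
    sgn = 1.0 if -delta >= 0 else -1.0
    vals = (gamma * (delta + sgn), gamma * delta)
    probs = (abs(delta), 1 - abs(delta))
    return vals, probs

x, gamma = 1.37, 0.25
rng = np.random.default_rng(1)
u = rng.uniform(-gamma / 2, gamma / 2, 10**6)
err = gamma * round_half((x + u) / gamma) - x   # observed error from the mechanism (5.2)

vals, probs = mu_support(x, gamma)
emp = [np.mean(np.isclose(err, v)) for v in vals]
print(list(zip(vals, probs)), emp)  # empirical frequencies should match probs
```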


6. Proof of some auxiliary results

Proof of Lemma 4.1. Without loss of generality, we may assume that $f_0=0$. Define the process $H=(H_t)_{t\in\mathbb R_+}$ by $H_t=\sup_{0\le s\le t}E[|f_s|\,|\,\mathcal F^{(0)}_0]$. Evidently H is non-decreasing and satisfies $E[|f_T|]\le E[|H_T|]$ for every bounded $\mathbf F^{(0)}$-stopping time T. Further, H is $\mathbf F^{(0)}$-predictable because $H_t$ is $\mathcal F^{(0)}_0$-measurable for every t. The above inequality implies the local boundedness of H, so f is locally bounded by the Lenglart inequality. □

Proof of Lemma 4.3. By integration by parts we have

$\bar K^{ij}_t\bar L_{g,h}(V,W)^{ij}_t=\bar K^{ij}_-\bullet\bar L_{g,h}(V,W)^{ij}_t+\bar L_{g,h}(V,W)^{ij}_-\bullet\bar K^{ij}_t+[\bar K^{ij},\bar L_{g,h}(V,W)^{ij}]_t,$

hence it is sufficient to show that $\bar L_{g,h}(V,W)^{ij}_-\bullet\bar K^{ij}_t=[\bar K^{ij},\bar L_{g,h}(V,W)^{ij}]_t=0$. $\bar K^{ij}_t$ is a step function starting from 0 at $t=0$ and jumping to 1 at $t=R^\vee(i,j)$ when $\bar I^i\cap\bar J^j\ne\emptyset$, where $R^\vee(i,j)=\hat S^i\vee\hat T^j$; that is, $\bar K^{ij}_t$ can be rewritten as $\bar K^{ij}_t=\bar K^{ij}1_{(R^\vee(i,j),\infty)}(t)$. So $\bar L_{g,h}(V,W)^{ij}_-\bullet\bar K^{ij}_t=\bar L_{g,h}(V,W)^{ij}_{R^\vee(i,j)\wedge t-}\bar K^{ij}_t$ and $[\bar K^{ij},\bar L_{g,h}(V,W)^{ij}]_t=\bar K^{ij}_t\Delta\bar L_{g,h}(V,W)^{ij}_{R^\vee(i,j)\wedge t}$. However, $\bar L_{g,h}(V,W)^{ij}_t=0$ for $t\le R^\vee(i,j)$ by its definition. This implies that $\bar L_{g,h}(V,W)^{ij}$ does not jump at $R^\vee(i,j)$. □

Proof of Proposition 4.1. We apply Theorem 2-2 of [23]. Eqs. (2.8)–(2.10) and (2.12) in [23] are satisfied with B = 0, F = V and G = 0 due to the assumptions and Lemma 4.4. Moreover, Lemma 4.5 yields Eq. (2.26) in [23] due to the definition of the compensator of a random measure. This completes the proof. □

7. Proof of Lemma 4.2

Throughout this section, we fix a constant $\gamma$ such that $\gamma<\xi'-1/2$. First we note some elementary properties of refresh times.

Proposition 7.1. The following statements are true.
(a) $\tilde S^k\vee\tilde T^k=R^k$ for every k.
(b) $(\tilde S^i<\tilde T^j)\Rightarrow(i\le j)$ and $(\tilde S^i>\tilde T^j)\Rightarrow(i\ge j)$ for every i, j.

Proof. (a) Obvious.
(b) Since $\tilde T^j\le R^j<\tilde S^{j+1}$, $(\tilde S^i<\tilde T^j)$ implies $\tilde S^i<\tilde S^{j+1}$, hence $i\le j$. This gives the former statement; by symmetry we also obtain the latter. □

Since $(\bar I^i\cap\bar J^j\ne\emptyset)\Rightarrow(|i-j|\le k_n)$ by Proposition 7.1(b), we have

$\sum_{j=0}^\infty\bar K^{ij}\le 2k_n+1,\qquad \sum_{i=0}^\infty\bar K^{ij}\le 2k_n+1\qquad(7.1)$

for any $i,j\in\mathbb Z_+$.

Next, note that for the proof we may assume the boundedness of the $\mathcal F^{(0)}_0$-measurable random variable $V'_0$ in the condition [C1]. In fact, for any $\mathcal F^{(0)}_0$-measurable random variable $\zeta_0$ and any positive number K, $1_{\{|\zeta_0|\le K\}}X$ and $1_{\{|\zeta_0|\le K\}}Y$ are also continuous semimartingales on $\mathcal B^{(0)}$ and satisfy [C1] as far as X and Y do. Moreover, $P(X\ne1_{\{|\zeta_0|\le K\}}X\text{ or }Y\ne1_{\{|\zeta_0|\le K\}}Y)\le P(|\zeta_0|>K)\to0$ as $K\to\infty$. Therefore, a standard localization argument allows us to assume


the boundedness of $V'_0$. In addition, another standard localization procedure allows us to systematically replace the conditions [C1]–[C3] by the following strengthened versions:

[SC1] [C1] holds, and $(A^X)'$, $(A^Y)'$, $[X]'$, $[Y]'$ and $[X,Y]'$ are bounded.
[SC2] [C2] holds, and $(\int|z|^2Q_t(\mathrm dz))_{t\in\mathbb R_+}$ is a bounded process.
[SC3] There is a positive constant K such that $b_nN^n(t)\le K$ for all n and t.

We write $\bar r_n=b_n^{\xi'}$. Next, let $\upsilon_n=\inf\{t\,|\,r_n(t)>\bar r_n\}$, and define a sequence $(\tilde S^i)_{i\in\mathbb Z_+}$ sequentially by

$\tilde S^i=\begin{cases}S^i & \text{if }S^i<\upsilon_n,\\ \tilde S^{i-1}+\bar r_n & \text{otherwise.}\end{cases}$

Then $(\tilde S^i)$ is obviously a sequence of $\mathbf F^{(0)}$-stopping times satisfying (2.2) and $\sup_{i\in\mathbb N}(\tilde S^i-\tilde S^{i-1})\le\bar r_n$. Furthermore, for any $t>0$ we have $P(\bigcup_i\{\tilde S^i\wedge t\ne S^i\wedge t\})\le P(\upsilon_n<t)\to0$ as $n\to\infty$ by (2.3). Replacing $(S^i)$ with $(T^j)$, we can construct a sequence $(\tilde T^j)$ in a similar manner. This argument implies that we may also assume that

$\sup_{t\in\mathbb R_+}r_n(t)\le\bar r_n\qquad(7.2)$

by an appropriate localization procedure.

Set $\Delta(g)^n_p=g^n_{p+1}-g^n_p$ for every n, p. For a process $V=(V_t)_{t\in\mathbb R_+}$, let

$\bar V_g(\tilde I)^i_t=\sum_{p=0}^{k_n-1}k_n\Delta(g)^n_pV(\tilde I^{i+p})_t,\qquad \bar V_g(\tilde J)^j_t=\sum_{q=0}^{k_n-1}k_n\Delta(g)^n_qV(\tilde J^{j+q})_t$

for each $t\in\mathbb R_+$ and $i,j\in\mathbb N$.

Lemma 7.1. Suppose $A^X$, $A^Y$, $[X]$ and $[Y]$ are absolutely continuous with locally bounded derivatives, and suppose also that (7.2) holds. Then, almost surely,

$\limsup_{n\to\infty}\sup_{i\in\mathbb N}\frac{|\bar X_g(\tilde I)^i_t|}{\sqrt{2k_n\bar r_n\log\frac{1}{\bar r_n}}}\le L\sup_{0\le s\le t}\sqrt{[X]'_s},\qquad \limsup_{n\to\infty}\sup_{j\in\mathbb N}\frac{|\bar Y_g(\tilde J)^j_t|}{\sqrt{2k_n\bar r_n\log\frac{1}{\bar r_n}}}\le L\sup_{0\le s\le t}\sqrt{[Y]'_s}\qquad(7.3)$

for any $t>0$, where L is a positive constant which depends only on g.

Proof. Combining the representation of a continuous local martingale as a time-changed Brownian motion with Lévy's theorem on the uniform modulus of continuity of Brownian motion, we obtain

$\limsup_{\delta\to+0}\sup_{\substack{s,u\in[0,t]\\|s-u|\le\delta}}\frac{|X_s-X_u|}{\sqrt{2\delta\log\frac1\delta}}\le\sup_{0\le s\le t}\sqrt{[X]'_s}.$

Since $\bar X_g(\tilde I)^i_t=-\sum_{p=0}^{k_n-1}\Delta(g)^n_p(X_{\tilde S^{i+p}\wedge t}-X_{\tilde S^i\wedge t})$ and $|\Delta(g)^n_p|\le k_n^{-1}\|g'\|_\infty$ by the piecewise Lipschitz continuity of g, we obtain the first inequality in (7.3). By symmetry we also obtain the second inequality of (7.3). □
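The proof above rests on two elementary facts about the discrete weight differences $\Delta(g)^n_p$ that are used repeatedly in this section: they are $O(1/k_n)$ for Lipschitz g, and they telescope to $g(1)-g(0)=0$. A quick numeric check for the standard choice $g(x)=x\wedge(1-x)$ (my example weight; the arguments only need g piecewise Lipschitz with $g(0)=g(1)=0$):

```python
import numpy as np

kn = 64
g = lambda x: np.minimum(x, 1.0 - x)
gn = g(np.arange(kn + 1) / kn)   # g^n_p = g(p/kn), p = 0..kn
dg = np.diff(gn)                 # Δ(g)^n_p = g^n_{p+1} - g^n_p

print(dg.sum())                  # telescoping: g(1) - g(0) = 0
print(np.abs(dg).max(), 1.0 / kn)  # |Δ(g)^n_p| <= Lip(g)/kn = 1/kn for this g
```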


We can strengthen Lemma 7.1 by a localization if we assume that (7.2) and [SC1] hold. Hence, in the remainder of this section we always assume that there exist a positive constant K and a positive integer $n_0$ such that

$\sup_{i\in\mathbb N}\frac{|\bar X_g(\tilde I)^i_t(\omega)|}{\sqrt{2k_n\bar r_n|\log b_n|}}+\sup_{j\in\mathbb N}\frac{|\bar Y_g(\tilde J)^j_t(\omega)|}{\sqrt{2k_n\bar r_n|\log b_n|}}\le K\qquad(7.4)$

for all $t>0$ and $\omega\in\Omega$ whenever $n\ge n_0$; moreover, we only consider sufficiently large n such that $n\ge n_0$. Let

$\mathrm I_t:=\frac{1}{(\psi_{HY}k_n)^2}\sum_{i,j=1}^\infty\bar X_g(\tilde I)^i_t\bar Y_g(\tilde J)^j_t\bar K^{ij}_t,\qquad \mathrm{II}_t:=\frac{1}{(\psi_{HY}k_n)^2}\sum_{i,j=1}^\infty\hat E^X_g(\tilde I)^i_t\hat E^Y_g(\tilde J)^j_t\bar K^{ij}_t,$

$\mathrm{III}_t:=\frac{1}{(\psi_{HY}k_n)^2}\sum_{i,j=1}^\infty\bar X_g(\tilde I)^i_t\hat E^Y_g(\tilde J)^j_t\bar K^{ij}_t,\qquad \mathrm{IV}_t:=\frac{1}{(\psi_{HY}k_n)^2}\sum_{i,j=1}^\infty\hat E^X_g(\tilde I)^i_t\bar Y_g(\tilde J)^j_t\bar K^{ij}_t.$

The following lemma tells us that the edge effects are negligible. Throughout the discussion, for (random) sequences $(x_n)$ and $(y_n)$, $x_n\lesssim y_n$ means that there exists a (non-random) constant $C\in[0,\infty)$ such that $x_n\le Cy_n$ for large n. We denote by $E_0$ the conditional expectation given $\mathcal F^{(0)}$, i.e. $E_0[\cdot]:=E[\cdot\,|\,\mathcal F^{(0)}]$.

Lemma 7.2. Suppose that [SC1]–[SC3] and (7.2) are satisfied. Then $b_n^{-\gamma}\{\widehat{PHY}(\mathsf X,\mathsf Y)^n-(\mathrm I+\mathrm{II}+\mathrm{III}+\mathrm{IV})\}\xrightarrow{ucp}0$ as $n\to\infty$.

Proof. We can rewrite $\widehat{PHY}(\mathsf X,\mathsf Y)^n_t$ and $\mathrm I_t+\mathrm{II}_t+\mathrm{III}_t+\mathrm{IV}_t$ as

$\widehat{PHY}(\mathsf X,\mathsf Y)^n_t=\frac{1}{(\psi_{HY}k_n)^2}\Big\{\sum_{i,j=1}^\infty\bar{\mathsf X}(\tilde I)^i_t\bar{\mathsf Y}(\tilde J)^j_t\bar K^{ij}1_{\{\tilde S^{i+k_n}\vee\tilde T^{j+k_n}\le t\}}+\sum_{i,j:\,i=0\text{ or }j=0}\bar{\mathsf X}(\tilde I)^i\bar{\mathsf Y}(\tilde J)^j\bar K^{ij}1_{\{\tilde S^{i+k_n}\vee\tilde T^{j+k_n}\le t\}}\Big\}$

and

$\mathrm I_t+\mathrm{II}_t+\mathrm{III}_t+\mathrm{IV}_t=\frac{1}{(\psi_{HY}k_n)^2}\sum_{i,j=1}^\infty\bar{\mathsf X}(\tilde I)^i_t\bar{\mathsf Y}(\tilde J)^j_t\bar K^{ij}1_{\{\tilde S^i\vee\tilde T^j\le t\}},$

where $\bar{\mathsf X}(\tilde I)^i_t=\bar X_g(\tilde I)^i_t+\hat E^X_g(\tilde I)^i_t$ and $\bar{\mathsf Y}(\tilde J)^j_t=\bar Y_g(\tilde J)^j_t+\hat E^Y_g(\tilde J)^j_t$. Hence we can decompose the target quantity as

$(\mathrm I_t+\mathrm{II}_t+\mathrm{III}_t+\mathrm{IV}_t)-\widehat{PHY}(\mathsf X,\mathsf Y)^n_t=\frac{1}{(\psi_{HY}k_n)^2}\Big\{\sum_{i,j=1}^\infty\bar{\mathsf X}(\tilde I)^i_t\bar{\mathsf Y}(\tilde J)^j_t\bar K^{ij}1_{\{\tilde S^i\vee\tilde T^j\le t<\tilde S^{i+k_n}\vee\tilde T^{j+k_n}\}}-\sum_{i=1}^\infty\bar{\mathsf X}(\tilde I)^i_t\bar{\mathsf Y}(\tilde J)^0_t\bar K^{i0}1_{\{\tilde S^{i+k_n}\vee\tilde T^{k_n}\le t\}}-\sum_{j=1}^\infty\bar{\mathsf X}(\tilde I)^0_t\bar{\mathsf Y}(\tilde J)^j_t\bar K^{0j}1_{\{\tilde S^{k_n}\vee\tilde T^{j+k_n}\le t\}}-\bar{\mathsf X}(\tilde I)^0_t\bar{\mathsf Y}(\tilde J)^0_t1_{\{\tilde S^{k_n}\vee\tilde T^{k_n}\le t\}}\Big\}=:A_{1,t}+A_{2,t}+A_{3,t}+A_{4,t}.$


Consider $A_{1,t}$. The Schwarz inequality yields

$E_0\Big[\sup_{0\le s\le t}|A_{1,s}|\Big]\le\frac{1}{(\psi_{HY}k_n)^2}\sup_{0\le s\le t}\sum_{i,j=1}^\infty E_0\Big[\sup_{0\le u\le t}\big|\bar{\mathsf X}_g(\tilde I)^i_u\big|^2\Big]^{1/2}E_0\Big[\sup_{0\le u\le t}\big|\bar{\mathsf Y}_g(\tilde J)^j_u\big|^2\Big]^{1/2}\bar K^{ij}_s1_{\{\tilde S^i\vee\tilde T^j\le s<\tilde S^{i+k_n}\vee\tilde T^{j+k_n}\}}.$

Since both $E^X$ and $E^Y$ are martingales on $(\Omega^{(1)},\mathcal F^{(1)},\mathbf F^{(1)},Q(\omega^{(0)},\mathrm dz))$ for each $\omega^{(0)}$, we have

$E_0\Big[\sup_{0\le u\le t}\big|\hat E^X_g(\tilde I)^i_u\big|^2\Big]\lesssim k_n^{-1},\qquad E_0\Big[\sup_{0\le u\le t}\big|\hat E^Y_g(\tilde J)^j_u\big|^2\Big]\lesssim k_n^{-1}\qquad(7.5)$

by the Doob inequality and [SC2]. Combining this with (7.4), we obtain

$E_0\Big[\sup_{0\le s\le t}|A_{1,s}|\Big]\lesssim\frac{\bar r_n|\log b_n|}{k_n}\sup_{0\le s\le t}\sum_{i,j=1}^\infty\bar K^{ij}_s1_{\{\tilde S^i\vee\tilde T^j\le s<\tilde S^{i+k_n}\vee\tilde T^{j+k_n}\}}\lesssim k_n\bar r_n|\log b_n|,$

and thus we conclude that $b_n^{-\gamma}\sup_{0\le s\le t}|A_{1,s}|\to^p0$ as $n\to\infty$. Similarly we can show that $b_n^{-\gamma}\sup_{0\le s\le t}|A_{l,s}|\to^p0$ as $n\to\infty$ for $l=2,3,4$. This completes the proof of the lemma. □

Let $\hat K^{pq}_t=1_{\{\tilde I^p(t)\cap\tilde J^q(t)\ne\emptyset\}}$ for each p, q, t. Then Proposition 7.1 yields

$\sum_{q=1}^\infty\hat K^{pq}_t\le3,\qquad\sum_{p=1}^\infty\hat K^{pq}_t\le3\qquad(7.6)$

for every p, q, t. Furthermore, we have the following result (recall that $\Phi$ is the set of all real-valued piecewise Lipschitz functions $\alpha$ on $\mathbb R$ satisfying $\alpha(x)=0$ for any $x\notin[0,1]$):

Lemma 7.3. Suppose that [SC1]–[SC3] and (7.2) are satisfied. Then we have

$\sup_{0\le s\le t}\Big|\sum_{i,j=1}^\infty\bar K^{ij}_s\sum_{p,q=0}^{k_n-1}\alpha^n_p\beta^n_qL(V,W)^{i+p,j+q}_s\hat K^{i+p,j+q}_s\Big|=o_p(k_n^2\cdot b_n^\gamma)\qquad(7.7)$

as $n\to\infty$ for any $\alpha,\beta\in\Phi$, $V\in\{M^X,E^X,A^X\}$, $W\in\{M^Y,E^Y,A^Y\}$ and $t>0$, where $L(V,W)^{pq}_t=V(\tilde I^p)_-\bullet W(\tilde J^q)_t+W(\tilde J^q)_-\bullet V(\tilde I^p)_t$ for each p, q, t.

Proof. First, by a localization procedure based on a representation of a continuous local martingale as a time-changed Brownian motion and Lévy's theorem on the uniform modulus of continuity of Brownian motion, we may assume that there exist positive constants K and $\delta_0$ such that

$\sup_{\substack{s,u\in[0,t]\\|s-u|\le\delta}}\frac{|M^X_s-M^X_u|+|M^Y_s-M^Y_u|}{\sqrt{2\delta|\log\delta|}}\le K\qquad(7.8)$

whenever $0<\delta<\delta_0$. Moreover, we only consider sufficiently large n such that $\bar r_n<\delta_0$.


First we consider the case that V ∈ {M X , E X } and W ∈ {M Y , EY }. When 1 ≤ p ≤ kn − 1 j ≤   j+q−1 <   j+q ≤   j+kn and 1 ≤ q ≤ kn − 1, we have  Si ∨ T S i+ p−1 ∨ T S i+ p ∧ T S i+kn ∧ T p q   on { I ∩ J ̸= ∅}, hence we can decompose the target quantity as ∞  i, j=1

=

k n −1

ij K¯ s

i+ p, j+q

α np βqn L(V, W )s

si+ p, j+q K

p,q=0 ∞  i, j=1



 



+

p,q>0

ij K¯ s

i+ p, j+q

α np βqn L(V, W )s

si+ p, j+q =: A1,s + A2,s . K

p=0 or q=0

Consider A1,s first. By an argument similar to the proof of Lemma 4.3, we can rewrite it as A1,s =

∞ k n −1 

i+ p, j+q

− α np βqn K

i+ p, j+q

• L(V, W )s

,

i, j=1 p,q=1

and thus A1,· is a locally square-integrable martingale because both V and W are locally square2γ integrable martingales. Therefore, it is sufficient to prove ⟨A1,· ⟩s = o p (bn · kn4 ) as n → ∞ for any s > 0 due to the Lenglart inequality. Since   p−1 q−1 ∞    n n −p,q • L(V, W )sp,q , A1,s = α β K p−i q− j

p,q=1

i=( p−kn +1)∨1 j=(q−kn +1)∨1

we have ⟨A1,· ⟩s .

kn4 +

  ∞

  ′ ′ −pq K −p q • V ( K I p )− V ( I p )− • ⟨W ⟩( Jq )

p, p ′ ,q=1 ∞ 

s

  ′ −pq • W ( Jq )− W ( Jq ′ )− • ⟨V ⟩( −pq K I p) K s

p,q,q ′ =1

=: I + II. If V ̸= E X or W ̸= EY , (7.8), [SC2], the Schwarz inequality and (7.6) yield E 0 [I] . kn4r¯n | log bn |E 0 [⟨W ⟩s ] . ′ X ϵX 1 Y q −2 Y )2 ′ On the other hand, since E X ( I p )t E X ( I p )t = kn−2 ϵ ′ q S p∨ p ≤t} and [E ]( J )t = kn (ϵT Sp  S p { 1{Tq ≤t} , we have



X X Y 2 ′ q E X ( I p )− E X ( I p )− • [EY ]( Jq )s = kn−4 ϵ ϵ S p′ (ϵT q ) 1{ S p∨ p
(7.9)

Therefore, [SC2]–[SC3], the Schwarz inequality and (7.6) yield   ∞    ′q ′ pq p − K − • E X ( E0  K I p )− E X ( I p )− • [E X ]( Jq )  . kn−2 . p, p ′ ,q=1

s

Note that Proposition 4.50 in [26], [SC1]–[SC3] and the above estimates imply that I = o p (kn4r¯n ). By symmetry we also obtain II = o p (kn4r¯n ). Consequently, we conclude that ⟨A1,· ⟩s = 2γ o p (bn · kn4 ) because ξ ′ /2 − (ξ ′ − 1/2) = (1 − ξ ′ )/2 > 0.


i+ p, j+q Next consider A2,s . Since integration by parts yields L(V, W )s = V ( I i+ p )s W ( Jj+q )s i+ p j+q − I− J− • [V, W ]s , (7.8), [SC1]–[SC3], the Schwarz inequality and (7.6) imply that   ∞   ti+ p, j+q K E 0 sup |A2,s | . r¯n | log bn | 0≤s≤t

i, j=1

0≤ p,q≤kn −1 p=0 or q=0

ξ ′ −1/2

. r¯n | log bn | · kn bn−1 = kn2 · bn

| log bn |.

γ

Hence we obtain sup0≤s≤t |A2,s | = o p (kn2 · bn ). Consequently, we conclude that (7.7) holds. Next we consider the case that V = A X . Since integration by parts yields i+ p, j+q

L(V, W )s

= V ( I i+ p )s W ( Jj+q )s ,

(7.8), [SC1]–[SC3], the Schwarz inequality and (7.6) imply that    k ∞   n −1   i j i+ p, j+q i+ p, j+q n n  K¯ s α p βq L(V, W )s Ks E 0 sup    0≤s≤t i, j=1 p,q=0 .

∞ k n −1   ti+ p, j+q | K I i+ p (t)| r¯n | log bn |

.

ξ ′ /2  bn | log bn | · kn2 .

i, j=1 p,q=0

Since $\xi'/2>\xi'-1/2$, we conclude that (7.7) holds. By symmetry we also obtain (7.7) in the case that $W=A^Y$. This completes the proof of the lemma. □

Lemma 7.4. Suppose that (7.2) and [SC1]–[SC3] are satisfied. Then

(a) $b_n^{-\gamma}\{\mathrm I-[X,Y]-M(1)^n\}\xrightarrow{ucp}0$,
(b) $b_n^{-\gamma}\{\mathrm{II}-M(2)^n\}\xrightarrow{ucp}0$,
(c) $b_n^{-\gamma}\{\mathrm{III}-M(3)^n\}\xrightarrow{ucp}0$,
(d) $b_n^{-\gamma}\{\mathrm{IV}-M(4)^n\}\xrightarrow{ucp}0$

as n → ∞. Proof. (a) By integration by parts we have It − M(1)nt =

1

∞ 

(ψ H Y kn )2

i, j=1

ij K¯ t [ X¯ g ( I)i , Y¯g (J) j ]t .

Since [ X¯ g ( I)i , Y¯g (J) j ]t =

k n −1

i+ p j+q I− J− ) • [X, Y ]t g np gqn (

p,q=0

=

i+k n −1 n −1 j+k   p=i

n p q g np−i gq− j ( I− J− ) • [X, Y ]t ,

q=i

we obtain It − M(1)nt

=

1 (ψ H Y kn )2

∞ 



p 

q 

p,q=1 i=( p−kn +1)∨1 j=(q−kn +1)∨1 p q × ( I− J− ) • [X, Y ]t .

 n ¯ ij g np−i gq− j Kt


q and T q−1 <  On { I p ∩ Jq ̸= ∅} we have  S p−1 < T S p , hence for i ∈ {( p−kn +1)∨1, . . . , p−1} i   j <  and j ∈ {(q −kn +1)∨1, . . . , q −1} we have S < T j+kn −1 and T S i+kn −1 , so that K¯ i j = 1. Therefore, for p, q ≥ kn we have  2 p q k n −1   i j n n n g g K¯ t = g p−i q− j

i

i=1

i=( p−kn +1)∨1 j=(q−kn +1)∨1

p q p q on { I p ∩ Jq ̸= ∅} because g(0) = 0. Since ( I− J− ) • [X, Y ]t = 1{ I p ∩ Jq ̸=∅} ( I− J− ) • [X, Y ]t , we obtain

k n −1

1

It − M(1)nt =

(ψ H Y kn )2 



+

2

∞ 

gin

p,q=kn

i=1 p 

p q 1{ I p ∩ Jq ̸=∅} ( I− J− ) • [X, Y ]t 

q 

 p,q: p∧q
i=( p−kn +1)∨1 j=(q−kn +1)∨1

n K¯ i j  1 p q g np−i gq− j t { I p ∩ Jq ̸=∅} ( I− J− ) • [X, Y ]t



=: B1,t + B2,t .

Since

 kn −1 n 2 g i i=1



= 1 + O(kn−1 ) by the Lipschitz continuity of g and   ∞     γ   p q  J ) • [X, Y ] − [X, Y ] 1{  ( I sup   = O p (kn r¯n ) = o p bn p q  s s − − I ∩ J = ̸ ∅}  0≤s≤t  p,q=k 1

(ψ H Y kn )2

n

γ

by [SC1] and (7.2), we have sup0≤s≤t |B1,s − [X, Y ]s | = o p (bn ). Moreover, [SC1] and (7.2) also γ yield sup0≤s≤t |B2,s | = o p (bn ). Consequently, we complete the proof of (a). (b) Since IIt =

=

1

∞ k n −1 

(ψ H Y kn )2

i, j=1 p,q=0

(ψ H Y kn )2

q 

p 

∞ 

1

ij

kn ∆(g)np kn ∆(g)qn E X ( I p )t EY ( Jq )t K¯ t

n kn ∆(g)np−i kn ∆(g)q− j

p,q=1 i=( p−kn +1)∨1 j=(q−kn +1)∨1

ij × E ( I p )t EY ( Jq )t K¯ t , X

we can decompose it as IIt =



1 (ψ H Y kn )2 +

∞ 



+

p,q: p∧q≥kn

  tpq E X (  c( p, q)t K I p )t EY ( Jq )t p∧q
tpq )E X (  c( p, q)t (1 − K I p )t EY ( Jq )t



p,q=1

=: II1,t + II2,t + II3,t , where  c( p, q)t =

p 

q 

i=( p−kn +1)∨1 j=(q−kn +1)∨1

ij

n ¯ kn ∆(g)np−i kn ∆(g)q− j Kt .


When ( p − kn + 1) ∨ 1 ≤ i ≤ p − 1 and (q − kn + 1) ∨ 1 ≤ j ≤ q − 1, we have  j ≤  q−1 <  q ≤   j+kn on { Si ∨ T S p−1 ∨ T Sp ∨ T S i+kn ∨ T I p ∩ Jq ̸= ∅}, hence we obtain p−1 q−1     1 n II1,t = kn ∆(g)np−i kn ∆(g)q− j (ψ H Y kn )2 p,q: p∧q≥k i=( p−k +1)∨1 j=(q−k +1)∨1 n

n

q−1 

pq + (kn ∆(g)n0 )2 K¯ t +

n

n ¯ pj kn ∆(g)n0 kn ∆(g)q− j Kt

j=(q−kn +1)∨1 p−1 

+

iq kn ∆(g)np−i kn ∆(g)n0 K¯ t



tpq E X ( K I p )t EY ( Jq )t .

i=( p−kn +1)∨1

v

 n −1 Note that w=(v−kn +1)∨1 ∆(g)nv−w = kw=0 ∆(g)nw = g(1) − g(0) = 0 when v ≥ kn and g is Lipschitz continuous, we have  1  pq |E X ( |II1,t | . K I p )t EY ( Jq )t | kn p,q: p∧q≥k t n   1  X p 2  Y q 2 . |E ( I )t | + |E ( J )t | , kn p q hence we obtain E[sup0≤s≤t |II1,s |] . kn−1 by the Doob inequality and [SC2]–[SC3]. Therefore, we conclude that 1/2

sup |II1,s | = O p (bn ).

(7.10)

0≤s≤t

On the other hand, since g is Lipschitz continuous and ( I p ∩ Jq ̸= ∅) ⇒ (| p − q| ≤ 1) by k n k n X p 2 Y q 2 Proposition 7.1(b), we have |II2,t | . p=1 |E ( I )t | + q=1 |E ( J )t | , hence the Doob   ξ ′ −1/2 inequality and [SC2] yield E sup0≤s≤t |II2,s | . bn . Therefore, we obtain γ

sup |II2,s | = o p (bn ).

(7.11)

0≤s≤t

Now we estimate II3,t . Since II3,t =

1

∞ 

(ψ H Y kn )2

i, j=1

ij K¯ t

k n −1

kn ∆(g)np kn ∆(g)qn E X ( I i+ p )EY ( Jj+q )t 1{ I i+ p ∩ Jj+q =∅} ,

p,q=0

we can decompose it as II3,t =

1

∞ 

(ψ H Y kn )2

i, j=1

+

(g ′ )np ∆2 (g)qn

ij K¯ t

k n −1

 (g ′ )np (g ′ )qn

p,q=0

+ ∆2 (g)np (g ′ )qn + ∆2 (g)np ∆2 (g)qn

× E X ( I i+ p )t EY ( Jj+q )t 1{ I i+ p ∩ Jj+q =∅} (1)

(2)

(3)

(4)

=: II3,t + II3,t + II3,t + II3,t , where ∆2 (g)np = kn ∆(g)np − (g ′ )np .



Consider $\mathrm{II}^{(1)}_{3,t}$ first. Integration by parts yields

$\hat E^X(\tilde I^{i+p})_t\hat E^Y(\tilde J^{j+q})_t1_{\{\tilde I^{i+p}\cap\tilde J^{j+q}=\emptyset\}}=L(E^X,E^Y)^{i+p,j+q}_t1_{\{\tilde I^{i+p}\cap\tilde J^{j+q}=\emptyset\}},$

(1)

hence we obtain sup0≤s≤t |II3,s − M(2)ns | = o p (bn ) by Lemma 7.3 and linearity of integration. (2) Next we consider II3,t . Since K¯ i j = 0 if |i − j| > kn due to Proposition 7.1(b), we have   k  n −1  1  (2) ′ n X i+ p  (g ) E ( I ) sup |II3,s | ≤ sup  s p  (ψ H Y kn )2 i, j:|i− j|≤k 0≤s≤t  p=0 0≤s≤t n   k  n −1   ∆2 (g)qn EY ( Jj+q )s  , × sup   0≤s≤t  q=0 hence the Schwarz inequality and the Doob inequality yield    1/2 k n −1  4 (2) ′ n 2 X i+ p 2 E sup |II3,s | ≤ |(g ) p | E[|E ( I )t | ] (ψ H Y kn )2 i, j:|i− j|≤k p=0 0≤s≤t n  1/2 k n −1 2 n 2 Y j+q 2 × |∆ (g)q | E[|E ( J )t | ] . q=0

|∆2 (g)qn |

because of the piecewise Lipschitz continuity of g ′ , we obtain     n −1 n −1  k 1  k (2) Y j+q 2 X i+ p 2 E[|E ( J )t | ] , E sup |II3,s | . 2 E[|E ( I )t | ] + kn 0≤s≤t j q=0 i p=0   (2) hence [SC2]–[SC3] imply that E sup0≤s≤t |II3,s | . kn−1 . Consequently, we conclude that Since

.

(2)

kn−1

(3)

1/4

1/4

sup0≤s≤t |II3,s | = o p (bn ). Similarly we can show sup0≤s≤t |II3,s | = o p (bn ) and sup0≤s≤t (4)

1/4

|II3,s | = o p (bn ), and thus we conclude that −γ

ucp

bn {II3,· − M(2)n } −−→ 0.

(7.12)

Consequently, (7.10)–(7.12) yield −γ

ucp

bn {II − M(2)n } −−→ 0

(7.13)

as n → ∞. q−1  p−1 n (c) Note that i=( p−kn +1)∨1 j=(q−kn +1)∨1 g np−i kn ∆(g)q− j = 0 when p ∧ q ≥ kn , we can adopt an argument similar to the proof of (b). (d) Similar to the proof of (c).  Proof of Lemma 4.2. The claim of Lemma 4.2 follows immediately from Lemmas 7.2 and 7.4.  8. Proof of Lemma 4.4 By a localization procedure, we may assume [SC1]–[SC3] instead of [C1]–[C3] respectively. We will follow the strategy used in [24,25]. Fix a t ∈ R+ and let N be the set of all square−1/4 integrable martingales orthogonal to (X, Y ) and satisfying bn ⟨Mn , N ⟩t → p 0 as n → ∞.


Then N is a closed subset of the Hilbert space M⊥ 2 of all square-integrable martingales orthogonal to (X, Y ) by [A1∗ ] and the Kunita–Watanabe inequality. Let N be in the set N 0 of all square-integrable martingales on B (0) orthogonal to (X, Y ). Then it is easy to check that ⟨E X , N ⟩ = ⟨EY , N ⟩ = 0. Hence we have ⟨Mn , N ⟩ = 0 because N is orthogonal to (X, Y ), so that N ∈ N . Consequently, we conclude that N 0 ⊂ N . Let N be in the set N 1 of all square-integrable martingales having N∞ = f (ϵt1 , . . . , ϵtq ),

(8.1)

where f is any bounded Borel function on R2q , t1 < · · · < tq and q ≥ 1. Then it is easy to check that N takes the following form (by convention t0 = 0 and tq+1 = ∞): tl ≤ t < tl+1 ⇒ Nt = M(l; ϵt1 , . . . , ϵtl )t for l = 0, . . . , q, and where M(l; z 1 , . . . , zl ) is a version of the martingale   (0) M(l; z 1 , . . . , zl )t = E (0) f (z 1 , . . . , zl , zl+1 , . . . , z q )Q tl+1 (dzl+1 ) · · · Q tq (dz q )|Ft (with obvious conventions when l = 0 and l = q), which is measurable in (z 1 , . . . , zl , ω(0) ). In particular N has a locally finite variation, hence N is a purely discontinuous local martingale because of Lemma I-4.14 of [26] and  ij ij j [ K¯ • {V¯α ( I)i • W¯ β (J) j }, N ]t = K¯ s V¯α ( I)i ∆W¯ β (J)s ∆Ns −

s



s:0≤s≤t

=



j ij I)itl ∆W¯ β (J)tl ∆Ntl K¯ tl V¯α (

l:tl ≤t

for any semimartingales V, W , α, β ∈ Φ and t ∈ R+ . Therefore, for any V ∈ {X, E X }, we have ij

[ K¯ − • {V¯α ( I)i− • Y¯β (J) j }, N ]t = 0, and note that from the boundedness of N we also have      k n −1  i j   ij ′ n Y i  1 i Y  j   ¯ ¯ ¯ ¯ ¯ (g )k ϵtl 1{Tˆ j+k =tl }  . |[ K − • {Vα (I)− • Eg′ (J ) }, N ]t | . K tl Vα (I)tl     k n l:t ≤t k=0 l

Hence, the Schwarz inequality and (7.1) yield   ∞     i j i Y j E  [ K¯ − • {V¯α ( I)− • E¯ g′ (J) }, N ]t  i, j=1    2 1/2  1/2  ∞ ∞   1 k     n −1    ij  ij ≤ E K¯ tl |V¯α ( I)itl |2 E  K¯ tl  (g ′ )nk ϵtYl 1{Tˆ j+k =tl }       k n k=0 l:tl ≤t i, j=1 i, j=1     1/2  2   ∞ ∞ k     1/2  n −1     i 2 ′ n Y  ¯ . E |Vα (I)tl | E  (g )k ϵtl 1{Tˆ j+k =tl }   .  j=1  k=0   l:t ≤t i=1 l

  ∞ [SC1]–[SC3] imply that i=1 E |V¯α ( I)itl |2 . kn . Moreover, since  2  n −1 n −1  k  k  (g ′ )nk ϵtYl 1{Tˆ j+k =tl }  = |(g ′ )nk ϵtYl |2 1{Tˆ j+k =tl } ≤ kn ∥g ′ ∥∞ |ϵtYl |2 ,   k=0  j j k=0

Y. Koike / Stochastic Processes and their Applications 124 (2014) 2699–2753

2723

we conclude that   ∞     ij i Y  j  ¯ ¯ ¯ [ K − • {Vα (I)− • Eg′ (J ) }, N ]t  . qkn . E   i, j=1 Consequently, we obtain ∞ 

ij [ K¯ − • { L¯ α ( I)i− • M¯ β (J) j }, N ]t = O p (kn )

i, j=1

for any (L , α) ∈ {(X, g), (E X , g ′ )} and (M, β) ∈ {(Y, g), (EY , g ′ )}. By symmetry we also obtain ∞ 

j ij I)i }, N ]t = O p (kn ) [ K¯ − • { M¯ β (J)− • L¯ α (

i, j=1

for any (L , α) ∈ {(X, g), (E X , g ′ )} and (M, β) ∈ {(Y, g), (EY , g ′ )}. Consequently, we obtain 1/4 [Mn , N ] = O p (kn−1 ) = o p (bn ). Since ⟨Mn , N ⟩ is the predictable compensator of [Mn , N ] by Proposition I-4.50 of [26], we conclude that N ∈ N . ⊥ Since N 0 ∪ N 1 is a total subset of M⊥ 2 , we conclude that N = M2 . This completes the proof of the lemma.  9. Proof of Lemma 4.5 Exactly as in Section 7, we can use a localization procedure for the proof, and which allows us to replace the conditions [A4] and [N] by the following strengthened versions: 9 [SA4] ξ ∨ 10 < ξ ′ and (7.2) holds.  [SN] We have [N], and the process ( |z|8 Q t (dz))t∈R+ is bounded. Furthermore, for any λ > 0 there exists a positive constant Cλ such that    ij ij E |Ψt − Ψ(t−h)+ |F(t−h)+ ≤ Cλ h 1/2−λ

for every i, j ∈ {1, 2} and every t, h > 0. Proof of Lemma 4.5. By a localization procedure, we may assume that [SC1]–[SC3], [SA4], [SN] and (7.4) hold. Since Lemma 4.1 and Eq. I-4.36 in [26] yield    ij 1 i i ¯  ¯  ¯ Y′ (J)sj + K¯ si j Y( ¯ X′ (J)sj , ¯ s X( ∆Mns = K I) ∆ E I) ∆ E s s g g (ψ H Y kn )2 i, j ¯  ¯ J)sj = Y¯g (J)sj + E¯ Y′ (J)sj , it is sufficient to prove that where X( I)is = X¯ g ( I)is + E¯ gX′ ( I)is and Y( g  4 bn−1   ¯ i j ¯  i ¯ Y  j  K s X(I)s ∆Eg′ (J )s  → p 0,   kn8 0≤s≤t  i, j  4 bn−1   ¯ i j ¯  j ¯ X  i  K s Y(J )s ∆Eg′ (I)s  → p 0   kn8 0≤s≤t  i, j

(9.1)


as n → ∞ for any t > 0. Since  i, j

n −1 1  k ij ¯  i j ij ¯  i Y K¯ s X( I)s ∆E¯ Yg′ (J)s = − I)s ϵT j+q 1{T j+q =s} (g ′ )qn K¯ s X( kn i, j q=0

=−

q ∞ ∞   1  n Y ¯  ¯ i jq X( (g ′ )q− I)iTq , ϵT 1 q  j KT  kn q=1 q {T =s} i=1 j=(q−k +1)∨1 n

we have  4     i j j i Y ¯  K¯ s X( I)s ∆E¯ g′ (J)s    i, j  4  q ∞  ∞   4  1   ij ¯  i  Y ′ n ¯ = 4 ϵTq 1{Tq =s}  (g )q− j K Tq X(I)Tq    i=1 j=(q−k +1)∨1 kn q=1 n

q′

q ̸= T  if q ̸= q ′ . Moreover, since K¯ i j ≡ 0 when |i − j| > kn due to because T Proposition 7.1(b), we have 4      j i j i Y ¯  K¯ s X( I)s ∆E¯ g′ (J)s     i, j ≤ ∥g ′ ∥4∞ kn2

∞  4  Y ϵT 1{Tq =s} q q=1



= ∥g ′ ∥4∞ kn2

k n −1 

i, j:|i− j|≤kn q=0

q 



j=(q−kn +1)∨1 i:|i− j|≤kn

Y ϵT j+q

   ¯  i 4 X(I)Tq 

4 4  ¯  i  X(I)T j+q  1{T j+q =s} ,

hence we obtain  4    i j  j ¯  K¯ s X( I)is ∆E¯ Yg′ (J)s     0≤s≤t i, j 

≤ ∥g ′ ∥4∞ kn2

k n −1 

i, j:|i− j|≤kn q=0

Y ϵT j+q

4 4  ¯  i  X(I)T j+q ∧t  1{T j+q ≤t} .

Therefore, the Schwarz inequality and [SN] yield   4     i j  j ¯  E0  K¯ s X( I)is ∆E¯ Yg′ (J)s      0≤s≤t i, j  k n −1



. kn2

i, j:|i− j|≤kn q=0

 8 1/2 ¯  i  E 0 X( I)T j+q ∧t  1{T j+q ≤t} .

Now, the Burkholder–Davis–Gundy inequality and [SN] imply   k 8   n −1     ′ n X 2 E 0 E¯ X′ ( I)i j+q  . k −8 E 0  (g ) ϵ i+q  1 i+ p g

 T

∧t

n

p=0

p  S

{S

4  ≤t}

 . kn−4 .


Combining this with (7.4) and [SC3], we conclude that
$$E_0\Bigl[\sum_{0\le s\le t}\Bigl|\sum_{i,j}\bar K^{ij}_s\bar{\mathbf X}(\mathcal I)^i_s\,\Delta\bar{\mathsf E}^Y_{g'}(\mathcal J)^j_s\Bigr|^4\Bigr]\lesssim k_n^4b_n^{-1}\bigl(k_n\bar r_n|\log b_n|\bigr)^2=k_n^6b_n^{2\xi'-1}|\log b_n|^2,$$
and thus we obtain
$$E_0\Bigl[\frac{b_n^{-1}}{k_n^8}\sum_{0\le s\le t}\Bigl|\sum_{i,j}\bar K^{ij}_s\bar{\mathbf X}(\mathcal I)^i_s\,\Delta\bar{\mathsf E}^Y_{g'}(\mathcal J)^j_s\Bigr|^4\Bigr]\lesssim b_n^{2\xi'-1}|\log b_n|^2=o(1)$$
because $\xi'>9/10$. Consequently, we have proved the first equation of (9.1). By symmetry we also obtain the second equation of (9.1), hence we complete the proof. □

10. Proof of Proposition 4.2

First we remark some elementary properties of the function $\psi_{\alpha,\beta}$.

Lemma 10.1. Let $\alpha,\beta\in\Phi$.
(a) $\psi_{\beta,\alpha}(x)=\psi_{\alpha,\beta}(-x)$ for all $x\in\mathbb R$.
(b) $\psi_{\alpha,\beta}(x)=0$ for $x\notin[-2,2]$.
(c) $\psi_{\alpha,\beta}$ is differentiable, and $\psi'_{\alpha,\beta}=\psi_{\alpha,\beta'}$ if $\beta$ is piecewise $C^1$.
(d) $\psi_{\alpha,\alpha}$ is an even function. Furthermore, $\psi_{\alpha,\alpha'}$ and $\psi_{\alpha',\alpha}$ are odd functions if $\alpha$ is piecewise $C^1$.

Proof. Since $(x+u-1\le v\le x+u+1)\Leftrightarrow(-x+v-1\le u\le -x+v+1)$ and $\alpha(w)=\beta(w)=0$ if $w\notin[0,1]$, Fubini's theorem yields (a). (b) and (c) are obvious. (d) immediately follows from (a) and (b). □

Next, set
$$c_{\alpha,\beta}(p,q)=\frac{1}{k_n^2}\sum_{i=(p-k_n+1)\vee1}^{p}\ \sum_{j=(q-k_n+1)\vee1}^{q}\alpha\Bigl(\frac{p-i}{k_n}\Bigr)\beta\Bigl(\frac{q-j}{k_n}\Bigr)\bar K^{ij}$$
for $\alpha,\beta\in\Phi$. The following lemma, which is a counterpart of Lemma 7.2 of Christensen et al. [9], is the key to the calculation of the asymptotic variance of the estimation error of our estimator:

Lemma 10.2. Let $\alpha,\beta:\mathbb R\to\mathbb R$ be two piecewise Lipschitz functions satisfying $\alpha(x)=\beta(x)=0$ if $x\notin[0,1]$. Then we have
$$\sup_{p,q:\,p,q\ge k_n}\Bigl|c_{\alpha,\beta}(p,q)-\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\Bigr|=O_p\bigl(b_n^{1/2}\bigr)$$
as $n\to\infty$.

Proof. For $p,q\ge k_n$, we can rewrite $c_{\alpha,\beta}(p,q)$ as
$$c_{\alpha,\beta}(p,q)=\frac{1}{k_n^2}\sum_{i,j=0}^{k_n-1}\alpha\Bigl(\frac{i}{k_n}\Bigr)\beta\Bigl(\frac{j}{k_n}\Bigr)\bar K^{p-i,q-j}.$$


By definition, we have $\bar K^{p-i,q-j}=1_{\{\widehat S^{p-i}<\widehat T^{q-j+k_n},\ \widehat S^{p-i+k_n}>\widehat T^{q-j}\}}$. Moreover, by Proposition 7.1(b) we have
$$(\widehat S^{p-i}<\widehat T^{q-j+k_n},\ \widehat S^{p-i+k_n}>\widehat T^{q-j})\Rightarrow(q-p+i-k_n\le j\le q-p+i+k_n),$$
$$(q-p+i-k_n\le j\le q-p+i+k_n)\Rightarrow(\widehat S^{p-i}\le\widehat T^{q-j+k_n},\ \widehat S^{p-i+k_n}\ge\widehat T^{q-j}).$$
Hence we obtain
$$c_{\alpha,\beta}(p,q)=\frac{1}{k_n^2}\sum_{i=0}^{k_n-1}\alpha\Bigl(\frac{i}{k_n}\Bigr)\sum_{j=[(q-p+i-k_n)\wedge(k_n-1)]\vee0}^{[(q-p+i+k_n)\vee0]\wedge(k_n-1)}\beta\Bigl(\frac{j}{k_n}\Bigr)+O_p\bigl(b_n^{1/2}\bigr)$$
uniformly in $p,q$. Note that, since $q-p+i-k_n\le q-p+i+k_n$ and $\alpha(x)=\beta(x)=0$ if $x\notin[0,1]$, we have
$$c_{\alpha,\beta}(p,q)=\frac{1}{k_n^2}\sum_{i=0}^{k_n-1}\alpha\Bigl(\frac{i}{k_n}\Bigr)\sum_{j=q-p+i-k_n}^{q-p+i+k_n}\beta\Bigl(\frac{j}{k_n}\Bigr)+O_p\bigl(b_n^{1/2}\bigr)$$
uniformly in $p,q$. Therefore, the piecewise Lipschitz continuity of $\alpha$ and $\beta$ completes the proof. □

Now we show the main body of the proof of Proposition 4.2.

Lemma 10.3. Let $\alpha,\beta,\alpha',\beta'\in\Phi$ and let $(M^n)$, $(N^n)$, $(M'^n)$ and $(N'^n)$ be four sequences of locally square-integrable martingales such that
$$\langle M^n\rangle_t=O_p(1),\quad\langle N^n\rangle_t=O_p(1),\quad\langle M'^n\rangle_t=O_p(1),\quad\langle N'^n\rangle_t=O_p(1)\qquad(10.1)$$
and
$$\sup_{p\in\mathbb N}\langle M^n\rangle(\widehat I^p)_t=o_p\bigl(b_n^{\xi'}\bigr),\quad\sup_{q\in\mathbb N}\langle N^n\rangle(\widehat J^q)_t=o_p\bigl(b_n^{\xi'}\bigr),\quad\sup_{p\in\mathbb N}\langle M'^n\rangle(\widehat I^p)_t=o_p\bigl(b_n^{\xi'}\bigr),\quad\sup_{q\in\mathbb N}\langle N'^n\rangle(\widehat J^q)_t=o_p\bigl(b_n^{\xi'}\bigr)\qquad(10.2)$$
as $n\to\infty$ for any $t\in\mathbb R_+$. Then we have
$$\frac{1}{k_n^4}\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet V^{iji'j'}_{\alpha,\beta;\alpha',\beta'}(M^n,N^n;M'^n,N'^n)_t
=\sum_{p,q=1}^{\infty}\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\psi_{\alpha',\beta'}\Bigl(\frac{q-p}{k_n}\Bigr)\langle M^n,M'^n\rangle(\widehat I^p)_t\langle N^n,N'^n\rangle(\widehat J^q)_t$$
$$+\sum_{p,q=1}^{\infty}\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\psi_{\alpha',\beta'}\Bigl(\frac{p-q}{k_n}\Bigr)\langle M^n,N'^n\rangle(\widehat I^p)_t\langle M'^n,N^n\rangle(\widehat J^q)_t+o_p\bigl(b_n^{1/2}\bigr)$$
as $n\to\infty$ for any $t\in\mathbb R_+$.

Proof. First note that $\bar K^{ij}_t\bar K^{i'j'}_t$ is a step function which starts from 0 at $t=0$ and jumps to $+1$ at $t=R^\vee(i\vee i',j\vee j')$ when $\bar I^i\cap\bar J^j\ne\emptyset$ and $\bar I^{i'}\cap\bar J^{j'}\ne\emptyset$, and that $V^{iji'j'}_{\alpha,\beta;\alpha',\beta'}(M^n,N^n;M'^n,N'^n)_t=0$ if $t\le R^\vee(i\vee i',j\vee j')$, so we have
$$(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet V^{iji'j'}_{\alpha,\beta;\alpha',\beta'}(M^n,N^n;M'^n,N'^n)_t=\bar K^{ij}_t\bar K^{i'j'}_tV^{iji'j'}_{\alpha,\beta;\alpha',\beta'}(M^n,N^n;M'^n,N'^n)_t$$
by integration by parts. Therefore, we have
$$\frac{1}{k_n^4}\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet V^{iji'j'}_{\alpha,\beta;\alpha',\beta'}(M^n,N^n;M'^n,N'^n)_t
=\frac{1}{k_n^4}\sum_{i,j,i',j'}\sum_{p,q,p',q'=1}^{k_n-1}\bar K^{ij}\bar K^{i'j'}\alpha^n_p\beta^n_q(\alpha')^n_{p'}(\beta')^n_{q'}$$
$$\times\bigl\{(\widehat I^{i+p}_-\widehat I^{i'+p'}_-\bullet\langle M^n,M'^n\rangle_t)(\widehat J^{j+q}_-\widehat J^{j'+q'}_-\bullet\langle N^n,N'^n\rangle_t)
+(\widehat I^{i+p}_-\widehat J^{j'+q'}_-\bullet\langle M^n,N'^n\rangle_t)(\widehat I^{i'+p'}_-\widehat J^{j+q}_-\bullet\langle M'^n,N^n\rangle_t)\bigr\}$$
$$=\sum_{p,q,p',q'=1}^{\infty}c_{\alpha,\beta}(p,q)c_{\alpha',\beta'}(p',q')\bigl\{(\widehat I^{p}_-\widehat I^{p'}_-\bullet\langle M^n,M'^n\rangle_t)(\widehat J^{q}_-\widehat J^{q'}_-\bullet\langle N^n,N'^n\rangle_t)
+(\widehat I^{p}_-\widehat J^{q'}_-\bullet\langle M^n,N'^n\rangle_t)(\widehat I^{p'}_-\widehat J^{q}_-\bullet\langle M'^n,N^n\rangle_t)\bigr\}
=:D_{1,t}+D_{2,t}.$$
Since $\widehat I^p\cap\widehat I^{p'}=\widehat J^q\cap\widehat J^{q'}=\emptyset$ if $p\ne p'$ and $q\ne q'$, we have
$$D_{1,t}=\sum_{p,q=1}^{\infty}c_{\alpha,\beta}(p,q)c_{\alpha',\beta'}(p,q)\langle M^n,M'^n\rangle(\widehat I^p)_t\langle N^n,N'^n\rangle(\widehat J^q)_t.$$
Moreover, since $|c_{\alpha,\beta}(p,q)|,|c_{\alpha',\beta'}(p,q)|\lesssim1$ and $c_{\alpha,\beta}(p,q)=c_{\alpha',\beta'}(p,q)=0$ if $|p-q|>2k_n$ due to Proposition 7.1(b), (10.2) and the Kunita–Watanabe inequality imply that
$$D_{1,t}=\sum_{p,q:\,p,q\ge k_n,\,|p-q|\le2k_n}c_{\alpha,\beta}(p,q)c_{\alpha',\beta'}(p,q)\langle M^n,M'^n\rangle(\widehat I^p)_t\langle N^n,N'^n\rangle(\widehat J^q)_t+o_p\bigl(b_n^{1/2}\bigr),$$
hence Lemma 10.2, (10.1), (10.2) and the Kunita–Watanabe inequality yield
$$D_{1,t}=\sum_{p,q=1}^{\infty}\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\psi_{\alpha',\beta'}\Bigl(\frac{q-p}{k_n}\Bigr)\langle M^n,M'^n\rangle(\widehat I^p)_t\langle N^n,N'^n\rangle(\widehat J^q)_t+o_p\bigl(b_n^{1/2}\bigr).$$
On the other hand, an argument similar to the above yields
$$D_{2,t}=\sum_{p,q,p',q'=1}^{\infty}\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\psi_{\alpha',\beta'}\Bigl(\frac{q'-p'}{k_n}\Bigr)\langle M^n,N'^n\rangle(\widehat I^p\cap\widehat J^{q'})_t\langle M'^n,N^n\rangle(\widehat I^{p'}\cap\widehat J^{q})_t+o_p\bigl(b_n^{1/2}\bigr).$$
Since $\widehat I^p\cap\widehat J^q=\emptyset$ if $|p-q|>1$ by Proposition 7.1(b), we obtain
$$D_{2,t}=\sum_{p,q':\,|q'-p|\le1}\ \sum_{p',q:\,|p'-q|\le1}\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\psi_{\alpha',\beta'}\Bigl(\frac{q'-p'}{k_n}\Bigr)\langle M^n,N'^n\rangle(\widehat I^p\cap\widehat J^{q'})_t\langle M'^n,N^n\rangle(\widehat I^{p'}\cap\widehat J^{q})_t+o_p\bigl(b_n^{1/2}\bigr);$$
however, since $\psi_{\alpha,\beta}$ and $\psi_{\alpha',\beta'}$ are Lipschitz continuous, an argument similar to the above yields
$$D_{2,t}=\sum_{p,q=1}^{\infty}\ \sum_{p':\,|p'-q|\le1}\ \sum_{q':\,|q'-p|\le1}\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\psi_{\alpha',\beta'}\Bigl(\frac{p-q}{k_n}\Bigr)\langle M^n,N'^n\rangle(\widehat I^p\cap\widehat J^{q'})_t\langle M'^n,N^n\rangle(\widehat I^{p'}\cap\widehat J^{q})_t+o_p\bigl(b_n^{1/2}\bigr).$$
Therefore, we conclude that
$$D_{2,t}=\sum_{p,q=1}^{\infty}\psi_{\alpha,\beta}\Bigl(\frac{q-p}{k_n}\Bigr)\psi_{\alpha',\beta'}\Bigl(\frac{p-q}{k_n}\Bigr)\langle M^n,N'^n\rangle(\widehat I^p)_t\langle M'^n,N^n\rangle(\widehat J^q)_t+o_p\bigl(b_n^{1/2}\bigr),$$
and thus we complete the proof of the lemma. □



Proof of Proposition 4.2. Note that $\langle V,\mathsf E^W\rangle=0$ for every $V,W\in\{X,Y\}$. Then [B2], Lemma 10.3 and the fact that both $\psi_{g,g}$ and $\psi_{g',g'}$ are even functions while $\psi_{g,g'}$ is an odd function yield $\langle\mathbf M(l)^n\rangle_t=\bar V^{n,l}_t+o_p(b_n^{1/2})$ for $l=1,2,3,4$ and $\langle\mathbf M(3)^n,\mathbf M(4)^n\rangle_t=\bar V^{n,34}_t+o_p(b_n^{1/2})$ as $n\to\infty$ for any $t\in\mathbb R_+$, as well as
$$\langle\mathbf M(1)^n,\mathbf M(l)^n\rangle_t=o_p\bigl(b_n^{1/2}\bigr),\quad l=2,3,4,\qquad\text{and}\qquad\langle\mathbf M(2)^n,\mathbf M(l')^n\rangle_t=o_p\bigl(b_n^{1/2}\bigr),\quad l'=3,4,$$
as $n\to\infty$ for any $t\in\mathbb R_+$. Consequently, we obtain the desired result. □



11. Proof of Lemma 4.6

Before starting the proof, we strengthen the condition [A3] as follows (note Lemma 4.1 and the argument in the beginning of Section 7):

[SA3] For each $V,W=X,Y$, $[V,W]$ is absolutely continuous with a càdlàg bounded derivative adapted to $\mathbf H^n$, and for any $\lambda>0$ there exists a positive constant $C_\lambda$ such that
$$E\bigl[|f_{\tau_1}-f_{\tau_2}|^2\,\big|\,\mathcal F_{\tau_1\wedge\tau_2}\bigr]\le C_\lambda E\bigl[|\tau_1-\tau_2|^{1-\lambda}\,\big|\,\mathcal F_{\tau_1\wedge\tau_2}\bigr]$$
for any bounded $\mathbf F^{(0)}$-stopping times $\tau_1$ and $\tau_2$, where $f=[V,W]'$ denotes the density process.

First we prove some lemmas. The following lemma is a version of Lemma 2.3 of Fukasawa [16]:

Lemma 11.1. Suppose that for each $n\in\mathbb N$ we have a sequence $(\tau^n_k)$ of $\mathbf H^n$-stopping times and a sequence $(\zeta^n_k)$ of random variables such that $\zeta^n_k$ is adapted to $\mathcal H^n_{\tau^n_k}$ for every $k$. Let $\rho>1$ and $t\ge0$, and set $N(\tau)^n_t=\sum_{k=1}^{\infty}1_{\{\tau^n_k\le t\}}$.
(a) If $\sum_{k=1}^{N(\tau)^n_t+1}E\bigl[|\zeta^n_k|^{2\wedge\rho}\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\to^p0$ as $n\to\infty$, then $\sum_{k=1}^{N(\tau)^n_t+1}\bigl(\zeta^n_k-E\bigl[\zeta^n_k\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\bigr)\to^p0$ as $n\to\infty$.
(b) If $b_nN(\tau)^n_t=O_p(1)$ as $n\to\infty$ and $b_n\sum_{k=1}^{N(\tau)^n_t+1}E\bigl[|\zeta^n_k|^{\rho}\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]=O_p(1)$ as $n\to\infty$, then
$$b_n\sum_{k=1}^{N(\tau)^n_t+1}\bigl(\zeta^n_k-E\bigl[\zeta^n_k\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\bigr)\to^p0$$
as $n\to\infty$.

Proof. Let $\varpi=2\wedge\rho$ and let $T$ be a bounded stopping time with respect to the filtration $(\mathcal H^n_{\tau^n_k})_{k\in\mathbb Z_+}$. Then the Burkholder–Davis–Gundy inequality and the $C_p$-inequality yield
$$E\Bigl[\Bigl|\sum_{k=1}^{T}\bigl(\zeta^n_k-E\bigl[\zeta^n_k\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\bigr)\Bigr|^{\varpi}\Bigr]\le CE\Bigl[\sum_{k=1}^{T}\Bigl(|\zeta^n_k|^{\varpi}+\Bigl|E\bigl[\zeta^n_k\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\Bigr|^{\varpi}\Bigr)\Bigr]$$
for some positive constant $C$ independent of $n$. Since $E[\sum_{k=1}^{T}|\zeta^n_k|^{\varpi}]=E[\sum_{k=1}^{T}E[|\zeta^n_k|^{\varpi}\,|\,\mathcal H^n_{\tau^n_{k-1}}]]$ by the optional stopping theorem and $|E[\zeta^n_k\,|\,\mathcal H^n_{\tau^n_{k-1}}]|^{\varpi}\le E[|\zeta^n_k|^{\varpi}\,|\,\mathcal H^n_{\tau^n_{k-1}}]$ by the Hölder inequality, we obtain
$$E\Bigl[\Bigl|\sum_{k=1}^{T}\bigl(\zeta^n_k-E\bigl[\zeta^n_k\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\bigr)\Bigr|^{\varpi}\Bigr]\le2CE\Bigl[\sum_{k=1}^{T}E\bigl[|\zeta^n_k|^{\varpi}\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\Bigr].$$
Therefore, noting that $N(\tau)^n_t+1$ is a stopping time with respect to the filtration $(\mathcal H^n_{\tau^n_k})$, (a) holds due to the Lenglart inequality. On the other hand, since
$$\sum_{k=1}^{N(\tau)^n_t+1}E\bigl[|\zeta^n_k|^{\varpi}\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\le\bigl(N(\tau)^n_t+1\bigr)^{1-\varpi/\rho}\Bigl(\sum_{k=1}^{N(\tau)^n_t+1}E\bigl[|\zeta^n_k|^{\rho}\,\big|\,\mathcal H^n_{\tau^n_{k-1}}\bigr]\Bigr)^{\varpi/\rho}$$
by the Hölder inequality, (b) holds due to the Lenglart inequality and the fact that $\varpi>1$. □
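Lemma 11.1 is a law of large numbers for arrays of conditionally centred summands. A minimal simulation sketch of part (b), under hypothetical choices not tied to the sampling scheme of the paper ($b_n=1/n$, $N(\tau)^n_t=n$, and $\zeta^n_k$ i.i.d. exponential, so that $E[\zeta^n_k\mid\mathcal H^n_{\tau^n_{k-1}}]=1$ and both hypotheses of (b) hold with $\rho=2$):

```python
import numpy as np

rng = np.random.default_rng(0)

def centered_scaled_sum(n):
    # b_n * sum_{k<=n} (zeta_k - E[zeta_k | past]) with b_n = 1/n and
    # zeta_k i.i.d. Exp(1); the conditional expectation is identically 1
    zeta = rng.exponential(1.0, size=n)
    return (zeta - 1.0).sum() / n

# both hypotheses of (b) hold (b_n * n = 1, bounded conditional rho-th moments),
# so the centred, b_n-scaled sum tends to zero in probability
assert abs(centered_scaled_sum(10**6)) < 0.01
```

The convergence here occurs at the martingale rate $n^{-1/2}$; the Lenglart-inequality argument in the proof delivers exactly this kind of control without any independence assumption.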



The following lemma is a version of Lemma 2.2 of Hayashi et al. [17].

Lemma 11.2. Suppose that [A1′](i)–(ii) holds. Suppose also that there are a sequence $(H^n)_{n\in\mathbb N}$ of càdlàg $\mathbf H^n$-adapted processes and a càdlàg process $H$ such that $H^n\xrightarrow{Sk.p.}H$ as $n\to\infty$. Then we have
$$b_n\sum_{k=1}^{N^n_t+1}H^n_{R^{k-1}}\to^p\int_0^t\frac{H_s}{G_s}\,ds$$
as $n\to\infty$ for any $t\in\mathbb R_+$. In particular it holds that $b_nN^n_t\to^p\int_0^t G_s^{-1}\,ds$ as $n\to\infty$ for any $t\in\mathbb R_+$.

Proof. First we show that [C3] holds. Take a positive number $L$ arbitrarily. Then we have
$$E\Bigl[\sum_{k=1}^{\infty}1_{\{R^k\le t,\,G(1)^n_{R^k}\le L,\,G^n_{R^k}\ge L^{-1},\,\#\mathcal N^n_0\le L\}}\Bigr]
=E\Bigl[\sum_{k=1}^{\infty}\frac{G^n_{R^k}}{G^n_{R^k}}1_{\{R^k\le t,\,G(1)^n_{R^k}\le L,\,G^n_{R^k}\ge L^{-1},\,\#\mathcal N^n_0\le L\}}\Bigr]$$
$$\le E\Bigl[\sum_{k=1}^{\infty}\frac{G(1)^n_{R^k}}{G^n_{R^k}}1_{\{R^k\le t,\,G(1)^n_{R^k}\le L,\,G^n_{R^k}\ge L^{-1}\}}\Bigr]+L
=b_n^{-1}E\Bigl[\sum_{k=1}^{\infty}\frac{|\Gamma^{k+1}|}{G^n_{R^k}}1_{\{R^k\le t,\,G(1)^n_{R^k}\le L,\,G^n_{R^k}\ge L^{-1}\}}\Bigr]+L$$
$$\le Lb_n^{-1}E\Bigl[\sum_{k=1}^{\infty}|\Gamma^{k+1}|1_{\{R^k\le t,\,G(1)^n_{R^k}\le L\}}\Bigr]+L
\le Lb_n^{-1}E\Bigl[\sum_{k=1}^{N^n_t}|\Gamma^k|\Bigr]+Lb_n^{-1}E\bigl[|\Gamma^{N^n_t+1}|1_{\{G(1)^n_{R^{N^n_t}}\le L\}}\bigr]+L
\le Lb_n^{-1}t+L^2+L.$$
Therefore, by Chebyshev's inequality we obtain
$$\limsup_{K\to\infty}\limsup_{n\to\infty}P\bigl(b_nN^n_t>K\bigr)
\le\limsup_{n\to\infty}P\Bigl(\bigcup_{k=1}^{\infty}\{R^k\le t\}\cap\{G(1)^n_{R^k}\le L,\,G^n_{R^k}\ge L^{-1},\,\#\mathcal N^n_0\le L\}^c\Bigr).$$
On the other hand, since
$$\bigcup_{k=1}^{\infty}\{R^k\le t\}\cap\{G(1)^n_{R^k}\le L,\,G^n_{R^k}\ge L^{-1},\,\#\mathcal N^n_0\le L\}^c
\subset\Bigl\{\sup_{0\le s\le t}G(1)^n_s>L\Bigr\}\cup\Bigl\{\inf_{0\le s\le t}G^n_s<L^{-1}\Bigr\}\cup\{\#\mathcal N^n_0>L\}$$
and, noting that
$$G(1)^n_t\le\{G(\rho)^n_t\}^{1/\rho}\qquad(11.1)$$
by the Hölder inequality, [A1′](i)–(ii) yield
$$\limsup_{L\to\infty}\limsup_{n\to\infty}P\Bigl(\bigcup_{k=1}^{\infty}\{R^k\le t\}\cap\{G(1)^n_{R^k}\le L,\,G^n_{R^k}\ge L^{-1},\,\#\mathcal N^n_0\le L\}^c\Bigr)=0.$$
Consequently, we obtain [C3].

Next, since $b_n\sum_{k=1}^{N^n_t+1}\bigl|\frac{H^n_{R^{k-1}}}{G^n_{R^{k-1}}}\bigr|^{\rho}E\bigl[\bigl(b_n^{-1}|\Gamma^k|\bigr)^{\rho}\,\big|\,\mathcal H^n_{R^{k-1}}\bigr]=O_p(1)$ and [C3] hold, Lemma 11.1(b) yields
$$b_n\sum_{k=1}^{N^n_t+1}\frac{H^n_{R^{k-1}}}{G^n_{R^{k-1}}}\bigl(b_n^{-1}|\Gamma^k|-G(1)^n_{R^{k-1}}\bigr)=o_p(1).$$
Therefore, by [A1′](i)–(ii) and noting that $(\#\mathcal N^n_0)_{n\in\mathbb N}$ is tight, we conclude that
$$b_n\sum_{k=1}^{N^n_t+1}H^n_{R^{k-1}}-\sum_{k=1}^{N^n_t+1}\frac{H^n_{R^{k-1}}}{G^n_{R^{k-1}}}|\Gamma^k|\to^p0$$
as $n\to\infty$. Since Theorem VI-6.22 of [26] implies that
$$\sum_{k=1}^{N^n_t+1}\frac{H^n_{R^{k-1}}}{G^n_{R^{k-1}}}|\Gamma^k|\to^p\int_0^t\frac{H_s}{G_s}\,ds$$
as n → ∞, we complete the proof. □
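Lemma 11.2 states that Riemann sums along the random sampling times converge to a time integral weighted by $1/G$. A small simulation sketch under hypothetical choices (i.i.d. exponential spacings with mean $b_n$, so that $G\equiv1$, and $H_s=\cos s$):

```python
import numpy as np

rng = np.random.default_rng(1)

def riemann_along_sampling(t, b_n):
    # sampling times R^k with i.i.d. Exp(mean b_n) spacings, so G = 1;
    # b_n * sum_{k : R^{k-1} <= t} H(R^{k-1}) should approach int_0^t H_s ds
    spacings = rng.exponential(b_n, size=int(3 * t / b_n))
    R = np.concatenate(([0.0], np.cumsum(spacings)))
    return b_n * np.cos(R[R <= t]).sum()

t = 2.0
approx = riemann_along_sampling(t, 1e-5)
assert abs(approx - np.sin(t)) < 0.05   # int_0^t cos(s) ds = sin(t)
```

Taking $H\equiv1$ in the same sketch would illustrate the second assertion, $b_nN^n_t\to^p\int_0^tG_s^{-1}\,ds$.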



Proof of Lemma 4.6. By a localization procedure, we may assume that [SA3] and [SN] hold instead of [A3] and [N], respectively (note Lemma 4.1 and the argument in the beginning of Section 7). Recall that for a function $\alpha$ on $\mathbb R$ we write $\alpha^n_p=\alpha(p/k_n)$ for each $n\in\mathbb N$ and $p\in\mathbb Z$.


(a) Without loss of generality, we may assume that $G(1)^n\to G$ a.s. as $n\to\infty$ in $\mathbb D(\mathbb R_+)$. First we show that
$$\sum_{p,q=1}^{\infty}|(\psi_{g,g})^n_{q-p}|^2[X](\widehat I^p)_t[Y](\widehat J^q)_t=\sum_{p,q=1}^{\infty}|(\psi_{g,g})^n_{q-p}|^2[X](\widehat I^p)_t[Y](\Gamma^q)_t+o_p\bigl(b_n^{1/2}\bigr).\qquad(11.2)$$
We have
$$\sum_{p,q=1}^{\infty}|(\psi_{g,g})^n_{q-p}|^2[X](\widehat I^p)_t\bigl\{[Y](\widehat J^q)_t-[Y](\Gamma^q)_t\bigr\}
=\sum_{p,q=1}^{\infty}|(\psi_{g,g})^n_{q-p}|^2[X](\widehat I^p)_t\bigl\{([Y]_{\widehat T^q\wedge t}-[Y]_{R^q\wedge t})-([Y]_{\widehat T^{q-1}\wedge t}-[Y]_{R^{q-1}\wedge t})\bigr\}$$
$$=\sum_{p,q=1}^{\infty}\bigl\{|(\psi_{g,g})^n_{q-p}|^2-|(\psi_{g,g})^n_{q+1-p}|^2\bigr\}[X](\widehat I^p)_t\bigl([Y]_{\widehat T^q\wedge t}-[Y]_{R^q\wedge t}\bigr).$$
Therefore, [SA3], [A4] and the fact that $\psi_{g,g}$ is Lipschitz continuous and vanishes outside $[-2,2]$ by Lemma 10.1(b) imply the desired claim. By symmetry, we also obtain
$$\sum_{p,q=1}^{\infty}|(\psi_{g,g})^n_{q-p}|^2[X](\widehat I^p)_t[Y](\widehat J^q)_t=\sum_{p,q=1}^{\infty}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y](\Gamma^q)_t+o_p\bigl(b_n^{1/2}\bigr),\qquad(11.3)$$
hence we have
$$\sum_{p,q=1}^{\infty}|(\psi_{g,g})^n_{q-p}|^2[X](\widehat I^p)_t[Y](\widehat J^q)_t
=\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y](\Gamma^q)_t
+\sum_{p,q:\,p>q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y](\Gamma^q)_t+o_p\bigl(b_n^{1/2}\bigr).$$
Consider the first term on the right-hand side of the above equation. Let $\upsilon_n=(t+1)\wedge\inf\{s\mid r_n(s)>\bar r_n\}$, $\widetilde R^k=R^k\wedge\upsilon_n$ and $\widetilde\Gamma^k=[\widetilde R^{k-1},\widetilde R^k)$. Then obviously $\upsilon_n$ is an $\mathbf F^{(0)}$-stopping time and $\sup_k|\widetilde\Gamma^k(t)|\le2\bar r_n$. Therefore, we have
$$E\bigl[[Y](\widetilde\Gamma^q)_t-[Y]'_{\widetilde R^{q-1}\wedge t}|\widetilde\Gamma^q(t)|\,\big|\,\mathcal F_{\widetilde R^{q-1}\wedge t}\bigr]
\le E\Bigl[\int_{\widetilde R^{q-1}\wedge t}^{(\widetilde R^{q-1}+2\bar r_n)\wedge t}\bigl|[Y]'_u-[Y]'_{\widetilde R^{q-1}\wedge t}\bigr|\,du\,\Big|\,\mathcal F_{\widetilde R^{q-1}\wedge t}\Bigr]\lesssim b_n^{\frac32\xi'-\lambda}$$
for any $\lambda>0$ by the Schwarz inequality and [SA3]. Hence we obtain
$$E\Bigl[\Bigl|\sum_{p=1}^{\infty}[X](\widetilde\Gamma^p)_t\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2\bigl\{[Y](\widetilde\Gamma^q)_t-[Y]'_{\widetilde R^{q-1}}|\widetilde\Gamma^q(t)|\bigr\}\Bigr|;\,\upsilon_n>t\Bigr]
\lesssim b_n^{\frac32\xi'-\lambda}E\Bigl[\sum_{p=1}^{\infty}[X](\widetilde\Gamma^p)_t\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2\Bigr]=o\bigl(b_n^{\frac32\xi'-\lambda-\frac12}\bigr)$$
by Lemma 10.1(b). Since $P(\upsilon_n\le t)\to0$ as $n\to\infty$ by [A4], we conclude that
$$\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y](\Gamma^q)_t
=\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}|\Gamma^q(t)|+o_p\bigl(b_n^{1/2}\bigr).\qquad(11.4)$$
Next, [SC1] yields
$$\Bigl|\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}\bigl\{|\Gamma^q(t)|-|\Gamma^q|1_{\{R^{q-1}\le t\}}\bigr\}\Bigr|
\lesssim\sum_{p=1}^{N^n_t}|(\psi_{g,g})^n_{N^n_t+1-p}|^2|\Gamma^p(t)|\,|\Gamma^{N^n_t+1}|.$$
Take $L>0$ arbitrarily. Then we have
$$b_n^{-1/2}E\Bigl[\sum_{p=1}^{N^n_t}|(\psi_{g,g})^n_{N^n_t+1-p}|^2|\Gamma^p(t)|\,|\Gamma^{N^n_t+1}|;\,G(1)^n_t\le L\Bigr]
=b_n^{1/2}E\Bigl[\sum_{p=1}^{N^n_t}|(\psi_{g,g})^n_{N^n_t+1-p}|^2|\Gamma^p(t)|\,G(1)^n_t;\,G(1)^n_t\le L\Bigr]
\le2k_nb_n^{1/2}\|\psi_{g,g}\|_\infty^2LE[r_n(t)]$$
by Lemma 10.1(b). Since $E[r_n(t)]\to0$ as $n\to\infty$ by the bounded convergence theorem, for any $\eta>0$ we obtain
$$\limsup_nP\Bigl(b_n^{-1/2}\sum_{p=1}^{N^n_t}|(\psi_{g,g})^n_{N^n_t+1-p}|^2|\Gamma^p(t)|\,|\Gamma^{N^n_t+1}|>\eta\Bigr)\le\limsup_nP\bigl(G(1)^n_t>L\bigr).$$
Since (11.1) and [A1′](ii) imply that $\limsup_nP(G(1)^n_t>L)\to0$ as $L\to\infty$, we conclude that
$$\sum_{p=1}^{N^n_t}|(\psi_{g,g})^n_{N^n_t+1-p}|^2|\Gamma^p(t)|\,|\Gamma^{N^n_t+1}|=o_p\bigl(b_n^{1/2}\bigr),$$
and thus we obtain
$$\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}|\Gamma^q(t)|
=\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}|\Gamma^q|1_{\{R^{q-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).\qquad(11.5)$$


On the other hand, [SC1], [SC3] and Lemma 10.1(b) yield a bound for
$$b_n\sum_{q}\Bigl|\sum_{p:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t\Bigr|^{\varpi}E\bigl[\bigl(b_n^{-1}|\Gamma^q|\bigr)^{\varpi}\,\big|\,\mathcal H^n_{R^{q-1}}\bigr]1_{\{R^{q-1}\le t\}},$$
where $\varpi=2\wedge\rho$; hence by [A1′](ii), the Hölder inequality, (2.3) and the fact that $\varpi\xi'\ge1$ we obtain
$$b_n\sum_{q}\Bigl|\sum_{p:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t\Bigr|^{\varpi}E\bigl[\bigl(b_n^{-1}|\Gamma^q|\bigr)^{\varpi}\,\big|\,\mathcal H^n_{R^{q-1}}\bigr]1_{\{R^{q-1}\le t\}}\to^p0.$$
Therefore, we have
$$\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}|\Gamma^q|1_{\{R^{q-1}\le t\}}
=b_n\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}G(1)^n_{R^{q-1}}1_{\{R^{q-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr),$$
and thus [A1′](i)–(ii), (11.1), [SC1], (2.3) and Lemma 10.1(b) imply that
$$\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}|\Gamma^q|1_{\{R^{q-1}\le t\}}
=b_n\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}G^n_{R^{q-1}}1_{\{R^{q-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).\qquad(11.6)$$
Now we show that
$$b_n\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y]'_{R^{q-1}}G^n_{R^{q-1}}1_{\{R^{q-1}\le t\}}
=b_n\sum_{p=1}^{\infty}[X](\Gamma^p)_t[Y]'_{R^{p-1}}G^n_{R^{p-1}}\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^21_{\{R^{q-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).\qquad(11.7)$$
For a càdlàg function $x$ on $\mathbb R_+$ and an interval $I\subset\mathbb R_+$, set $w(x;I)=\sup_{s,t\in I}|x(s)-x(t)|$. Moreover, define
$$w'(x;\delta,T)=\inf\Bigl\{\max_{i\le r}w\bigl(x;[t_{i-1},t_i)\bigr)\,\Big|\,0=t_0<\cdots<t_r=T,\ \inf_{i<r}(t_i-t_{i-1})\ge\delta\Bigr\}$$
for each $\delta,T>0$. Note that $w'$ is the same quantity as defined in Eq. VI-1.8 of [26]; $w'(x;\delta,T)$ measures the modulus of continuity of $x$ taking the discontinuity points of $x$ into account, and it can be regarded as an analog of the modulus of continuity for continuous functions; see Chapter VI-1 of [26] for details. Then we have $\lim_{\delta\downarrow0}\sup_{n\in\mathbb N}w'(F^n;\delta,T)=0$ a.s. for any $T>0$ by Theorem VI-1.14(b) of [26], where $F^n=[Y]'G^n$. Therefore, for any $\eta>0$ we can take a positive (random) number $\delta$ such that $\sup_{n\in\mathbb N}w'(F^n;\delta,t)<\eta$ a.s. Then we can take (random) points $\xi^n_i$ such that $0=\xi^n_0<\xi^n_1<\cdots<\xi^n_{\bar m_n}=t$, $\inf_{i<\bar m_n}(\xi^n_i-\xi^n_{i-1})\ge\delta$ and $\max_{m\le\bar m_n}w(F^n;[\xi^n_{m-1},\xi^n_m))<\eta$.

Let $\Xi^n=\{\xi^n_m\mid m=1,\ldots,\bar m_n\}$. Then
$$b_n\Bigl|\sum_{p=1}^{\infty}[X](\Gamma^p)_t\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2\bigl(F^n_{R^{q-1}}-F^n_{R^{p-1}}\bigr)1_{\{R^{q-1}\le t\}}\Bigr|
\le b_n\sum_{p=1}^{\infty}[X](\Gamma^p)_t\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2\cdot2\eta
+b_n\sum_{p\in\mathcal I_n}[X](\Gamma^p)_t\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2\cdot2(F^n)^*_t,$$
where $\mathcal I_n=\{p\in\mathbb N\mid[R^p,R^{p+2k_n})\cap\Xi^n\ne\emptyset\}$ and $(F^n)^*_t=\sup_{n\in\mathbb N}\sup_{s\in[0,t]}|F^n_s|$, due to Lemma 10.1(b). Hence there exists a positive constant $C$ such that
$$b_n\Bigl|\sum_{p=1}^{\infty}[X](\Gamma^p)_t\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2\bigl(F^n_{R^{q-1}}-F^n_{R^{p-1}}\bigr)1_{\{R^{q-1}\le t\}}\Bigr|
\le C\bigl\{\eta+\bar r_n(\#\mathcal I_n)(F^n)^*_t\bigr\}b_n^{1/2}$$
by [SC1] and Lemma 10.1(b). Now [A1′](i) implies that $(F^n)^*_t<\infty$, while $\bar m_n<t/\delta+1$ because $\inf_{i<\bar m_n}(\xi^n_i-\xi^n_{i-1})\ge\delta$. Moreover, for sufficiently large $n$ we have $\#\mathcal I_n\le4k_n\bar m_n$, so that $\bar r_n\#\mathcal I_n\le4k_n\bar r_n\bar m_n\to0$; since $\eta>0$ was arbitrary, we obtain (11.7). Moreover, noting that the terms with $R^{p-1}\le t<R^{q-1}$ are negligible, we have
$$b_n\sum_{p=1}^{\infty}[X](\Gamma^p)_t[Y]'_{R^{p-1}}G^n_{R^{p-1}}\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^21_{\{R^{q-1}\le t\}}
=b_n\sum_{p=1}^{\infty}[X](\Gamma^p)_t[Y]'_{R^{p-1}}G^n_{R^{p-1}}\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^21_{\{R^{p-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).$$
Therefore, by an argument similar to the above we obtain
$$b_n\sum_{p=1}^{\infty}[X](\Gamma^p)_t[Y]'_{R^{p-1}}G^n_{R^{p-1}}\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^21_{\{R^{q-1}\le t\}}
=b_n^2\sum_{p=1}^{\infty}[X]'_{R^{p-1}}[Y]'_{R^{p-1}}\bigl(G^n_{R^{p-1}}\bigr)^21_{\{R^{p-1}\le t\}}\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2+o_p\bigl(b_n^{1/2}\bigr).\qquad(11.8)$$


Consequently, we obtain
$$\sum_{p,q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y](\Gamma^q)_t
=b_n^2\sum_{p=1}^{\infty}[X]'_{R^{p-1}}[Y]'_{R^{p-1}}|G^n_{R^{p-1}}|^21_{\{R^{p-1}\le t\}}\sum_{q:\,p<q}|(\psi_{g,g})^n_{q-p}|^2+o_p\bigl(b_n^{1/2}\bigr)$$
$$=b_n^{3/2}\theta\int_0^2\psi_{g,g}(x)^2\,dx\sum_{p=1}^{\infty}[X]'_{R^{p-1}}[Y]'_{R^{p-1}}|G^n_{R^{p-1}}|^21_{\{R^{p-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr)
=b_n^{1/2}\theta\int_0^2\psi_{g,g}(x)^2\,dx\int_0^t[X]'_s[Y]'_sG_s\,ds+o_p\bigl(b_n^{1/2}\bigr)$$
due to Lemma 11.2. By symmetry we also obtain
$$\sum_{p,q:\,p>q}|(\psi_{g,g})^n_{q-p}|^2[X](\Gamma^p)_t[Y](\Gamma^q)_t
=b_n^{1/2}\theta\int_{-2}^0\psi_{g,g}(x)^2\,dx\int_0^t[X]'_s[Y]'_sG_s\,ds+o_p\bigl(b_n^{1/2}\bigr).$$
After all, we complete the proof of (a).

(b) Similar to the proof of (a).

(c) Since $\widehat S^k<\widehat T^l$ and $\widehat T^k<\widehat S^l$ if $k<l$, and $\widehat S^k\vee\widehat T^k=R^k$ by Proposition 7.1, we have
$$\frac{1}{k_n^4}\sum_{p,q=1}^{\infty}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{11}_{\widehat S^p}\Psi^{22}_{\widehat T^q}1_{\{\widehat S^p\vee\widehat T^q\le t\}}
=\frac{1}{k_n^4}\sum_{q=2}^{\infty}\sum_{p:\,p<q}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{11}_{\widehat S^p}\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}$$
$$+\frac{1}{k_n^4}\sum_{p=2}^{\infty}\sum_{q:\,q<p}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{11}_{\widehat S^p}\Psi^{22}_{\widehat T^q}1_{\{\widehat S^p\le t\}}
+\frac{\psi_{g',g'}(0)^2}{k_n^4}\sum_{k=1}^{\infty}\Psi^{11}_{\widehat S^k}\Psi^{22}_{\widehat T^k}1_{\{R^k\le t\}}.$$
Noting that $\Psi$ is càdlàg, by an argument similar to the above we obtain
$$\frac{1}{k_n^4}\sum_{q=2}^{\infty}\sum_{p:\,p<q}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{11}_{\widehat S^p}\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}
=\frac{1}{k_n^4}\sum_{q=2k_n+1}^{\infty}\Psi^{11}_{\widehat T^q}\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}\sum_{p=q-2k_n}^{q-1}|(\psi_{g',g'})^n_{q-p}|^2+o_p\bigl(b_n^{1/2}\bigr)$$
$$=\frac{1}{k_n^3}\sum_{q=2k_n+1}^{\infty}\Psi^{11}_{\widehat T^q}\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}\int_0^2\psi_{g',g'}(x)^2\,dx+o_p\bigl(b_n^{1/2}\bigr).$$
Noting that $b_n\sum_{q=1}^{\infty}1_{\{\widehat T^q\le t\}}=b_n\sum_{k=1}^{\infty}1_{\{R^k\le t\}}+O_p(b_n)$, Lemma 11.2 yields
$$\frac{1}{k_n^4}\sum_{q=2}^{\infty}\sum_{p:\,p<q}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{11}_{\widehat S^p}\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}
=b_n^{1/2}\theta^{-3}\int_0^2\psi_{g',g'}(x)^2\,dx\int_0^t\Psi^{11}_s\Psi^{22}_sG^{-1}_s\,ds+o_p\bigl(b_n^{1/2}\bigr).$$
By symmetry we also obtain
$$\frac{1}{k_n^4}\sum_{p=2}^{\infty}\sum_{q:\,q<p}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{11}_{\widehat S^p}\Psi^{22}_{\widehat T^q}1_{\{\widehat S^p\le t\}}
=b_n^{1/2}\theta^{-3}\int_{-2}^0\psi_{g',g'}(x)^2\,dx\int_0^t\Psi^{11}_s\Psi^{22}_sG^{-1}_s\,ds+o_p\bigl(b_n^{1/2}\bigr).$$
Since $\sum_{k=1}^{\infty}\Psi^{11}_{\widehat S^k}\Psi^{22}_{\widehat T^k}1_{\{R^k\le t\}}=O_p(b_n^{-1})$, we conclude that
$$\frac{1}{k_n^4}\sum_{p,q=1}^{\infty}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{11}_{\widehat S^p}\Psi^{22}_{\widehat T^q}1_{\{\widehat S^p\vee\widehat T^q\le t\}}
=b_n^{1/2}\theta^{-3}\kappa\int_0^t\Psi^{11}_s\Psi^{22}_sG^{-1}_s\,ds+o_p\bigl(b_n^{1/2}\bigr).$$

(d) First, Proposition 7.1(a), [SC2]–[SC3], Lemma 10.1(b) and an argument similar to the proof of (11.7) imply that
$$\frac{1}{k_n^4}\sum_{p,q=1}^{\infty}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{12}_{\widehat S^p}1_{\{\widehat S^p=\widehat T^p\le t\}}\Psi^{21}_{\widehat T^q}1_{\{\widehat S^q=\widehat T^q\le t\}}
=\frac{1}{k_n^4}\sum_{p,q:\,p\ne q}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{12}_{R^{p-1}}1_{\{\widehat S^p=\widehat T^p\}}\Psi^{21}_{R^{q-1}}1_{\{\widehat S^q=\widehat T^q\}}1_{\{R^{p\wedge q-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).$$
On the other hand, noting that $0\le1_{\{\widehat S^k=\widehat T^k\}}\le1$, by an argument similar to that in the proof of (a), using [SC2] and [A1′](iii) instead of [SC1] and [A1′](i) respectively, we can show that
$$\frac{1}{k_n^4}\sum_{q=2}^{\infty}\sum_{p:\,p<q}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{12}_{R^{p-1}}1_{\{\widehat S^p=\widehat T^p\}}\Psi^{21}_{R^{q-1}}1_{\{\widehat S^q=\widehat T^q\}}1_{\{R^{p-1}\le t\}}
=\frac{1}{k_n^4}\sum_{q=2}^{\infty}\sum_{p:\,p<q}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{12}_{R^{p-1}}\Psi^{21}_{R^{p-1}}\bigl(\chi'^n_{R^{p-1}}\bigr)^21_{\{R^{p-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).$$
By symmetry we also obtain
$$\frac{1}{k_n^4}\sum_{p=2}^{\infty}\sum_{q:\,q<p}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{12}_{R^{p-1}}1_{\{\widehat S^p=\widehat T^p\}}\Psi^{21}_{R^{q-1}}1_{\{\widehat S^q=\widehat T^q\}}1_{\{R^{q-1}\le t\}}
=\frac{1}{k_n^4}\sum_{p=2}^{\infty}\sum_{q:\,q<p}|(\psi_{g',g'})^n_{q-p}|^2\Psi^{12}_{R^{q-1}}\Psi^{21}_{R^{q-1}}\bigl(\chi'^n_{R^{q-1}}\bigr)^21_{\{R^{q-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).$$
Therefore, Lemma 11.2 yields the desired result.

(e) By an argument similar to the proof of (11.2), we can show that
$$\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X](\widehat I^p)_t\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}
=\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X](\Gamma^p)_t\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}+o_p\bigl(b_n^{1/2}\bigr).$$
On the other hand, we have
$$\Bigl|\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X](\Gamma^p)_t\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q>t\}}\Bigr|
\le\frac{1}{k_n^2}\sum_{p:\,R^{p-1}\le t}[X](\Gamma^p)_t\sum_{q:\,|q-p|\le2k_n}|(\psi_{g,g'})^n_{q-p}|^2\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q>t\}},$$
hence by [A4] and [SC1] we obtain
$$\Bigl|\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X](\Gamma^p)_t\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q>t\}}\Bigr|=O_p(\bar r_n).$$
Therefore, we conclude that
$$\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X](\widehat I^p)_t\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}
=\frac{1}{k_n^2}\sum_{p=1}^{\infty}[X](\Gamma^p)_t\sum_{q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2\Psi^{22}_{\widehat T^q}+o_p\bigl(b_n^{1/2}\bigr);$$
hence, noting that $\Psi$ is càdlàg, an argument similar to the proof of (11.7) yields
$$\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X](\widehat I^p)_t\Psi^{22}_{\widehat T^q}1_{\{\widehat T^q\le t\}}
=\frac{1}{k_n^2}\sum_{p=2k_n+1}^{\infty}[X](\Gamma^p)_t\Psi^{22}_{R^{p-1}}\sum_{q=p-2k_n}^{p+2k_n}|(\psi_{g,g'})^n_{q-p}|^2+o_p\bigl(b_n^{1/2}\bigr)
=b_n^{1/2}\theta^{-1}\kappa\int_0^t[X]'_s\Psi^{22}_s\,ds+o_p\bigl(b_n^{1/2}\bigr).$$
This completes the proof of (e).

(f) Similar to the proof of (e).

(g) An argument similar to that in the proof of (e) and the fact that $\Psi$ is càdlàg yield
$$\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X,Y](\widehat I^p)_t\Psi^{12}_{R^q}1_{\{\widehat S^q=\widehat T^q\le t\}}
=\frac{1}{k_n^2}\sum_{p,q=1}^{\infty}|(\psi_{g,g'})^n_{q-p}|^2[X,Y](\widehat I^p)_t\Psi^{12}_{R^{q-1}}1_{\{\widehat S^q=\widehat T^q\}}1_{\{R^{q-1}\le t\}}+o_p\bigl(b_n^{1/2}\bigr).$$
Therefore, we can show the desired result by an argument similar to the proof of (a). □



12. Proof of Proposition 4.4

First note that an argument similar to the one in the first part of Section 12 of [21] allows us to assume that (7.2) and $\frac{9}{10}<\xi<\xi'<1$ hold. Furthermore, in the following we only consider sufficiently large $n$ such that
$$k_n\bar r_n<b_n^{\xi-1/2}.\qquad(12.1)$$
Recall that $R^\vee(i,j)=\widehat S^i\vee\widehat T^j$ for each $i,j$. In addition to this notation, we further introduce the notation $R^\wedge(i,j)=\widehat S^i\wedge\widehat T^j$.

Lemma 12.1. Suppose that [A2] and [SA4] are satisfied. Let $i,j\in\mathbb Z_+$ and let $\tau$ be a $\mathbf G^{(n)}$-stopping time. Then for any $A\in\mathcal G^{(n)}_\tau$ we have
$$A\cap\bigl\{\tau\le R^\vee(i+k_n,j+k_n)\bigr\}\cap\bigl\{\bar I^i(\tau)\cap\bar J^j(\tau)\ne\emptyset\bigr\}\in\mathcal F_{R^\wedge(i,j)}.$$

Proof. Let
$$B=\bigl\{\tau\le R^\vee(i+k_n,j+k_n)\bigr\},\qquad C=\bigl\{\bar I^i(\tau)\cap\bar J^j(\tau)\ne\emptyset\bigr\}.$$

It is sufficient to show that $A\cap B\cap C\cap D\in\mathcal F_u$ for any $u\in\mathbb R_+$, where $D=\{R^\wedge(i,j)\le u\}$. On $C$ we have
$$R^\vee(i+k_n,j+k_n)-R^\wedge(i,j)\le|\bar I^i|\vee|\bar J^j|\vee(\widehat S^{i+k_n}-\widehat T^j)\vee(\widehat T^{j+k_n}-\widehat S^i)\le k_n\bar r_n,$$
hence
$$R^\vee(i+k_n,j+k_n)=\bigl\{R^\vee(i+k_n,j+k_n)-R^\wedge(i,j)\bigr\}+R^\wedge(i,j)\le R^\wedge(i,j)+k_n\bar r_n,$$
and thus we have
$$C\cap D=C\cap D\cap\bigl\{R^\vee(i+k_n,j+k_n)\le u+k_n\bar r_n\bigr\}.$$
Since $A\in\mathcal G^{(n)}_\tau$ and $C\in\mathcal G^{(n)}_\tau$, we have $(A\cap C)\cap B\in\mathcal G^{(n)}_{R^\vee(i+k_n,j+k_n)}$. Therefore, we obtain
$$A\cap B\cap C\cap\bigl\{R^\vee(i+k_n,j+k_n)\le u+k_n\bar r_n\bigr\}\in\mathcal G^{(n)}_{u+k_n\bar r_n};$$
however, $\mathcal G^{(n)}_{u+k_n\bar r_n}=\mathcal F_{(u+k_n\bar r_n-b_n^{\xi-\frac12})_+}\subset\mathcal F_u$ by (12.1). This together with the fact that $\{R^\wedge(i,j)\le u\}\in\mathcal F_u$ implies $A\cap B\cap C\cap D\in\mathcal F_u$. □

Lemma 12.2. Suppose that [A2] and [SA4] are satisfied. Let $Z=(Z_t)_{t\in\mathbb R_+}$ be a $\mathbf G^{(n)}$-adapted process. Let $i,j\in\mathbb Z_+$, $p,q\in\{0,1,\ldots,k_n-1\}$ and let $\tau$ be a $\mathbf G^{(n)}$-stopping time. Then both $\bar K^{ij}_{\tau-}\widehat I^{i+p}_{\tau-}Z_\tau$ and $\bar K^{ij}_{\tau-}\widehat J^{j+q}_{\tau-}Z_\tau$ are $\mathcal F_{R^\wedge(i,j)}$-measurable.

Proof. On $\{\bar I^i(\tau)\cap\bar J^j(\tau)=\emptyset\}\cup\{\tau>R^\vee(i+k_n,j+k_n)\}$ we have $\bar K^{ij}_{\tau-}\widehat I^{i+p}_{\tau-}=0$. Therefore, for any Borel measurable set $B$ we have
$$\bigl\{\bar K^{ij}_{\tau-}\widehat I^{i+p}_{\tau-}Z_\tau\in B\bigr\}
=\Bigl(\{0\in B\}\cap\bigl[\{\bar I^i(\tau)\cap\bar J^j(\tau)=\emptyset\}\cup\{\tau>R^\vee(i+k_n,j+k_n)\}\bigr]\Bigr)
\cup\Bigl(\bigl\{\bar K^{ij}_{\tau-}\widehat I^{i+p}_{\tau-}Z_\tau\in B\bigr\}\cap\bigl\{\bar I^i(\tau)\cap\bar J^j(\tau)\ne\emptyset\bigr\}\cap\bigl\{\tau\le R^\vee(i+k_n,j+k_n)\bigr\}\Bigr),$$
so we obtain $\{\bar K^{ij}_{\tau-}\widehat I^{i+p}_{\tau-}Z_\tau\in B\}\in\mathcal F_{R^\wedge(i,j)}$ by Lemma 12.1, because $\bar K^{ij}_{\tau-}\widehat I^{i+p}_{\tau-}Z_\tau$ is $\mathcal G^{(n)}_\tau$-measurable by construction. By symmetry we can also show that $\bar K^{ij}_{\tau-}\widehat J^{j+q}_{\tau-}Z_\tau$ is $\mathcal F_{R^\wedge(i,j)}$-measurable. □

In the remainder of this section, we fix $\alpha,\beta,\alpha',\beta'\in\Phi$. For each $t\in\mathbb R_+$, let
$$\Xi^{iji'j'}_t:=\sum_{q,q'=0}^{k_n-1}\beta^n_q\beta'^n_{q'}\bar K^{ij}_t\bar K^{i'j'}_t\widehat J^{j+q}_t\widehat J^{j'+q'}_t
\qquad\text{and}\qquad
\Lambda^{iji'j'}_t:=k_n^{-1}\sum_{p',q=0}^{k_n-1}\alpha'^n_{p'}\beta^n_q\bar K^{ij}_t\bar K^{i'j'}_t\widehat I^{i'+p'}_t\widehat J^{j+q}_t.$$


We introduce an auxiliary condition.

[H] For each $n\in\mathbb N$ we have four square-integrable martingales $M^n$, $N^n$, $M'^n$ and $N'^n$ satisfying the following conditions:
(i) There exists a positive constant $C$ such that
$$E\bigl[[M^n](\widehat I^i)^4_t\,\big|\,\mathcal F_{\widehat S^{i-1}}\bigr]+E\bigl[[M'^n](\widehat I^i)^4_t\,\big|\,\mathcal F_{\widehat S^{i-1}}\bigr]+E\bigl[[N^n](\widehat J^j)^4_t\,\big|\,\mathcal F_{\widehat T^{j-1}}\bigr]+E\bigl[[N'^n](\widehat J^j)^4_t\,\big|\,\mathcal F_{\widehat T^{j-1}}\bigr]\le C\bar r_n^4$$
for any $t\in\mathbb R_+$, $n\in\mathbb N$ and $i,j\in\mathbb N$.
(ii) For each $n\in\mathbb N$ we have
$$\langle M^n,M'^n\rangle=H(1)D(1)^n\bullet B(1)^n,\qquad\langle N^n,N'^n\rangle=H(2)D(2)^n\bullet B(2)^n,$$
$$\langle M^n,N'^n\rangle=H(3)D(3)^n\bullet B(3)^n,\qquad\langle M'^n,N^n\rangle=H(4)D(4)^n\bullet B(4)^n,$$
where $H(k)$ is a càdlàg bounded $\mathbf F^{(0)}$-adapted process, $D(k)^n$ is a bounded $\mathbf G^{(n)}$-adapted process and $B(k)^n$ is a deterministic nondecreasing process or a $\mathbf G^{(n)}$-adapted point process for each $k\in\{1,2,3,4\}$.
(iii) For any $\lambda>0$ there is a positive constant $K_\lambda$ satisfying
$$\max_{k\in\{1,2,3,4\}}\sup_{0\le s\le t}E\bigl[|H(k)_s-H(k)_{(s-h)_+}|^2\,\big|\,\mathcal F_{(s-h)_+}\bigr]\le K_\lambda h^{1-\lambda}$$
for any $t,h>0$.
(iv) For each $T\in\mathbb R_+$ there is a positive constant $C_T$ such that $\max_{k\in\{1,2,3,4\}}B(k)^n_T\le C_T$ for all $n$. Moreover, there exists a positive constant $C$ such that
$$B(1)^n(\widehat I^p)_t\vee B(2)^n(\widehat J^q)_t\vee B(3)^n(\widehat I^p)_t\vee B(3)^n(\widehat J^q)_t\vee B(4)^n(\widehat I^p)_t\vee B(4)^n(\widehat J^q)_t\le C\bar r_n$$
for any $p,q,n\in\mathbb N$ and any $t>0$.

For simplicity of notation, set
$$\mathbb U^{ii'}_t:=\bar M^n_\alpha(\widehat{\mathcal I})^i_t\bar M'^n_{\alpha'}(\widehat{\mathcal I})^{i'}_t-\langle\bar M^n_\alpha(\widehat{\mathcal I})^i,\bar M'^n_{\alpha'}(\widehat{\mathcal I})^{i'}\rangle_t,\qquad
\mathbb V^{ij}_t:=\bar M^n_\alpha(\widehat{\mathcal I})^i_t\bar N'^n_{\beta'}(\widehat{\mathcal J})^j_t-\langle\bar M^n_\alpha(\widehat{\mathcal I})^i,\bar N'^n_{\beta'}(\widehat{\mathcal J})^j\rangle_t$$
for each $i,i',j\in\mathbb N$ when we assume the condition [H]. In addition, for an $\mathbf F$-adapted process $Z$, we write $\widetilde Z_t:=Z_{(t-b_n^{\xi-1/2})_+}$. Then $\widetilde Z_t$ is clearly $\mathbf G^{(n)}$-adapted.

Lemma 12.3. Suppose that [A2], [SA4] and [H] hold. Let $r,s\in\mathbb R_+$ and $i,j,i',j',k,l,k',l'\in\mathbb Z_+$.
(a) For $|i-k|\ge k_n-1$, $|i'-k|\ge k_n-1$, $|i-k'|\ge k_n-1$ and $|i'-k'|\ge k_n-1$,
$$E\Bigl[\int_0^t\!\!\int_0^t\Xi^{iji'j'}_{s-}\Xi^{klk'l'}_{r-}\mathbb U^{ii'}_{s-}\mathbb U^{kk'}_{r-}\widetilde H(2)_sD(2)^n_s\widetilde H(2)_rD(2)^n_r\,dB(2)^n_s\,dB(2)^n_r\Bigr]=0.$$
(b) For $|i-k|\ge k_n-1$, $|i-l'|\ge k_n-1$, $|j'-k|\ge k_n-1$ and $|j'-l'|\ge k_n-1$,
$$E\Bigl[\int_0^t\!\!\int_0^t\Lambda^{iji'j'}_{s-}\Lambda^{klk'l'}_{r-}\mathbb V^{ij'}_{s-}\mathbb V^{kl'}_{r-}\widetilde H(4)_sD(4)^n_s\widetilde H(4)_rD(4)^n_r\,dB(4)^n_s\,dB(4)^n_r\Bigr]=0.$$

Proof. (a) Without loss of generality, we may assume that $i\ge i'\vee k\vee k'$, so that $i\ge k\vee k'+k_n-1$. Since $B(2)^n$ is a deterministic nondecreasing process or a $\mathbf G^{(n)}$-adapted point process, it is sufficient to show that
$$E\bigl[\Xi^{iji'j'}_{\sigma-}\Xi^{klk'l'}_{\tau-}\mathbb U^{ii'}_{\sigma-}\mathbb U^{kk'}_{\tau-}\widetilde H(2)_\sigma D(2)^n_\sigma\widetilde H(2)_\tau D(2)^n_\tau1_{\{\sigma\vee\tau\le t\}}\bigr]=0\qquad(12.2)$$
for any bounded $\mathbf G^{(n)}$-stopping times $\sigma,\tau$. Lemma 12.2 implies that $\Xi^{iji'j'}_{\sigma-}\widetilde H(2)_\sigma D(2)^n_\sigma1_{\{\sigma\le t\}}$ is $\mathcal F_{R^\wedge(i',j\wedge j')}$-measurable and $\Xi^{klk'l'}_{\tau-}\widetilde H(2)_\tau D(2)^n_\tau1_{\{\tau\le t\}}$ is $\mathcal F_{R^\wedge(k\wedge k',l\wedge l')}$-measurable. Moreover, $\mathbb U^{kk'}_{\tau-}$ is $\mathcal F_{\widehat S^{k\vee k'+k_n-1}}$-measurable by definition. Since $R^\wedge(i',j\wedge j')\le\widehat S^i$ and $R^\wedge(k\wedge k',l\wedge l')\le\widehat S^k\le\widehat S^i$, we obtain
$$E\bigl[\Xi^{iji'j'}_{\sigma-}\Xi^{klk'l'}_{\tau-}\mathbb U^{ii'}_{\sigma-}\mathbb U^{kk'}_{\tau-}\widetilde H(2)_\sigma D(2)^n_\sigma\widetilde H(2)_\tau D(2)^n_\tau1_{\{\sigma\vee\tau\le t\}}\bigr]
=E\Bigl[\Xi^{iji'j'}_{\sigma-}\Xi^{klk'l'}_{\tau-}\mathbb U^{kk'}_{\tau-}\widetilde H(2)_\sigma D(2)^n_\sigma\widetilde H(2)_\tau D(2)^n_\tau1_{\{\sigma\vee\tau\le t\}}E\bigl[\mathbb U^{ii'}_{\sigma-}\,\big|\,\mathcal F_{\widehat S^i}\bigr]\Bigr].$$
Since $\mathbb U^{ii'}$ is a martingale by definition, the optional sampling theorem provides
$$E\bigl[\mathbb U^{ii'}_{\sigma-}\,\big|\,\mathcal F_{\widehat S^i}\bigr]=\lim_{\delta\downarrow0}E\bigl[\mathbb U^{ii'}_{(\sigma-\delta)_+}\,\big|\,\mathcal F_{\widehat S^i}\bigr]=\lim_{\delta\downarrow0}\mathbb U^{ii'}_{\widehat S^i\wedge(\sigma-\delta)_+}=0,$$
which concludes the proof of (a). (b) Similar to the proof of (a). □



Lemma 12.4. Suppose that [A2], [SA4] and [H] are satisfied. Let $t\in\mathbb R_+$. Then:
(a) There is a positive constant $C$ such that
$$\sum_{i,i',j,j'}\int_0^t\Xi^{iji'j'}_{s-}\,dB(2)^n_s\le Ck_n^4,\qquad\sum_{i',j,j'}\int_0^t\Xi^{iji'j'}_{s-}\,dB(2)^n_s\le C\bar r_nk_n^4.$$
(b) There is a positive constant $C$ such that
$$\sum_{i,i',j,j'}\int_0^t\Lambda^{iji'j'}_{s-}\,dB(4)^n_s\le Ck_n^4,\qquad\sum_{i',j,j'}\int_0^t\Lambda^{iji'j'}_{s-}\,dB(4)^n_s\le C\bar r_nk_n^4.$$

Proof. Since $\sum_i\bar K^{ij}_{s-}\lesssim k_n$, we have $\sum_{i,i',j,j'}\Xi^{iji'j'}_{s-}\lesssim k_n^2\sum_{q,q'=0}^{k_n-1}\sum_{j,j'}\widehat J^{j+q}_{s-}\widehat J^{j'+q'}_{s-}$. Therefore, we obtain
$$\sum_{i,i',j,j'}\int_0^t\Xi^{iji'j'}_{s-}\,dB(2)^n_s\lesssim k_n^2\sum_{q,q'=0}^{k_n-1}\sum_{j,j'}\int_0^t\widehat J^{j+q}_{s-}\widehat J^{j'+q'}_{s-}\,dB(2)^n_s\le k_n^2\sum_{q,q'=0}^{k_n-1}B(2)^n_t\lesssim k_n^4$$
by [H](iv). On the other hand, since
$$\sum_{i',j,j'}\Xi^{iji'j'}_{s-}\lesssim k_n\sum_{q,q'=0}^{k_n-1}\sum_{j,j'}\bar K^{ij}_{s-}\widehat J^{j+q}_{s-}\widehat J^{j'+q'}_{s-}$$
because $\sum_{i'}\bar K^{i'j'}_{s-}\lesssim k_n$, we have
$$\sum_{i',j,j'}\int_0^t\Xi^{iji'j'}_{s-}\,dB(2)^n_s\lesssim k_n\sum_{q,q'=0}^{k_n-1}\sum_{j,j'}\int_0^t\bar K^{ij}_{s-}\widehat J^{j+q}_{s-}\widehat J^{j'+q'}_{s-}\,dB(2)^n_s\le k_n^2\sum_{q=0}^{k_n-1}\sum_j\bar K^{ij}_tB(2)^n(\widehat J^{j+q})_t,$$
because $\sum_{j'}\widehat J^{j+q}_{s-}\widehat J^{j'+q'}_{s-}\le\widehat J^{j+q}_{s-}$ and $\bar K^{ij}_{s-}\le\bar K^{ij}_t$ for $s\in[0,t]$. Therefore, we obtain $\sum_{i',j,j'}\int_0^t\Xi^{iji'j'}_{s-}\,dB(2)^n_s\lesssim\bar r_nk_n^4$ due to [H](iv). Consequently, we complete the proof of (a). Similarly we can also prove (b). □

Lemma 12.5. Suppose that [A2], [SA4] and [H] are Letr ∈ [2, 4].  satisfied. ′ r ii r  ′ (a) There exists a positive constant Cr such that E |U |t F S i∧i ≤ Cr (kn r¯n ) for any t ∈ R+ and any i, i ′ ∈ N.    (b) There exists a positive constant Cr′ such that E |Vi j |rt F R ∧ (i, j) ≤ Cr′ (kn r¯n )r for any t ∈ R+ and any i, j ∈ N. kn −1 n 2 n i+ p  )t , we have ⟨ M¯ αn (I)i ⟩t . kn r¯n . Proof. (a) First, since ⟨ M¯ αn (I)i ⟩t = p=0 (α p ) ⟨M ⟩( I ′ Similarly we can show ⟨ M¯ α′n′ (I)i ⟩t . kn r¯n . Therefore, by the Kunita–Watanabe inequality we obtain    ′ r   r ′ E ⟨ M¯ αn (I)i , M¯ α′n′ (I)i ⟩t  F S i∧i . (kn r¯n ) . Next, the Burkholder–Davis–Gundy inequality and [H](i) yield  r    k  n −1   ¯ n i 2r  n 2 n i+ p  (α p ) [M ]( I )t E  Mα (I)  F F Si . E Si p=0

$$\le k_n^{r-1}\sum_{p=0}^{k_n-1}(\alpha^n_p)^{2r}E\Big[[M^n](\hat I^{i+p})^r_t\,\Big|\,\mathcal F_{S^i}\Big]\lesssim(k_n\bar r_n)^r.$$
Similarly we can also show $E\big[|\bar M'^n_{\alpha'}(\hat I)^{i'}_t|^{2r}\,\big|\,\mathcal F_{S^{i'}}\big]\lesssim(k_n\bar r_n)^r$. Therefore, by the Schwarz inequality we obtain
$$E\Big[\big|\bar M^n_\alpha(\hat I)^i_t\,\bar M'^n_{\alpha'}(\hat I)^{i'}_t\big|^r\,\Big|\,\mathcal F_{S^{i\wedge i'}}\Big]\lesssim(k_n\bar r_n)^r.$$
Consequently, we complete the proof of (a).
(b) Similar to the proof of (a). $\Box$

Lemma 12.6. Suppose [A2], [SA4] and [H]. Then we have
$$b_n^{-1/2}\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\langle\bar L^{ij}_{\alpha,\beta}(M^n,N^n),\bar L^{i'j'}_{\alpha',\beta'}(M'^n,N'^n)\rangle_t=b_n^{-1/2}\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet V^{iji'j'}_{\alpha,\beta;\alpha',\beta'}(M^n,N^n;M'^n,N'^n)_t+o_p\big(k_n^4\big)$$

as $n\to\infty$ for every $t\in\mathbb R_+$.

Proof. We decompose the target quantity as
$$\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\langle\bar L^{ij}_{\alpha,\beta}(M^n,N^n),\bar L^{i'j'}_{\alpha',\beta'}(M'^n,N'^n)\rangle_t-\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet V^{iji'j'}_{\alpha,\beta;\alpha',\beta'}(M^n,N^n;M'^n,N'^n)_t=\Delta_{1,t}+\Delta_{2,t}+\Delta_{3,t}+\Delta_{4,t},$$


where
$$\Delta_{1,t}=\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\{\bar M^n_\alpha(\hat I)^i_-\,\bar M'^n_{\alpha'}(\hat I)^{i'}_-\}\bullet\langle\bar N^n_\beta(\hat J)^j,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle\big)_t-\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\langle\bar M^n_\alpha(\hat I)^i,\bar M'^n_{\alpha'}(\hat I)^{i'}\rangle_-\bullet\langle\bar N^n_\beta(\hat J)^j,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle\big)_t,$$
$$\Delta_{2,t}=\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\{\bar N^n_\beta(\hat J)^j_-\,\bar N'^n_{\beta'}(\hat J)^{j'}_-\}\bullet\langle\bar M^n_\alpha(\hat I)^i,\bar M'^n_{\alpha'}(\hat I)^{i'}\rangle\big)_t-\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\langle\bar N^n_\beta(\hat J)^j,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle_-\bullet\langle\bar M^n_\alpha(\hat I)^i,\bar M'^n_{\alpha'}(\hat I)^{i'}\rangle\big)_t$$
and
$$\Delta_{3,t}=\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\{\bar M^n_\alpha(\hat I)^i_-\,\bar N'^n_{\beta'}(\hat J)^{j'}_-\}\bullet\langle\bar N^n_\beta(\hat J)^j,\bar M'^n_{\alpha'}(\hat I)^{i'}\rangle\big)_t-\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\langle\bar M^n_\alpha(\hat I)^i,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle_-\bullet\langle\bar N^n_\beta(\hat J)^j,\bar M'^n_{\alpha'}(\hat I)^{i'}\rangle\big)_t,$$
$$\Delta_{4,t}=\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\{\bar N^n_\beta(\hat J)^j_-\,\bar M'^n_{\alpha'}(\hat I)^{i'}_-\}\bullet\langle\bar M^n_\alpha(\hat I)^i,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle\big)_t-\sum_{i,j,i',j'}(\bar K^{ij}_-\bar K^{i'j'}_-)\bullet\big(\langle\bar N^n_\beta(\hat J)^j,\bar M'^n_{\alpha'}(\hat I)^{i'}\rangle_-\bullet\langle\bar M^n_\alpha(\hat I)^i,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle\big)_t.$$

Consider $\Delta_{1,t}$ first. By the use of associativity and linearity of integration, we can rewrite $\Delta_{1,t}$ as
$$\Delta_{1,t}=\sum_{i,j,i',j'}\big(\bar K^{ij}_-\bar K^{i'j'}_-U^{ii'}_-\big)\bullet\langle\bar N^n_\beta(\hat J)^j,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle_t.$$
Moreover, we have
$$\langle\bar N^n_\beta(\hat J)^j,\bar N'^n_{\beta'}(\hat J)^{j'}\rangle_t=\sum_{q,q'=0}^{k_n-1}\beta^n_q\beta'^n_{q'}\big(\hat J^{j+q}_-\hat J^{j'+q'}_-\big)\bullet\langle N^n,N'^n\rangle_t=\sum_{q,q'=0}^{k_n-1}\beta^n_q\beta'^n_{q'}\big(\hat J^{j+q}_-\hat J^{j'+q'}_-H(2)D(2)^n\big)\bullet B(2)^n_t,$$
hence we obtain
$$\Delta_{1,t}=\sum_{i,j,i',j'}\int_0^t\Xi^{iji'j'}_{s-}U^{ii'}_{s-}H(2)_sD(2)^n_s\,dB(2)^n_s.$$

Let $R_t:=H(2)_t-\bar H(2)_t$. Then we have $(\Delta_{1,t})^2=\mathrm I+\mathrm{II}+\mathrm{III}+\mathrm{IV}$, where
$$\mathrm I=\sum_{i,i',j,j'}\sum_{k,k',l,l'}\int_0^t\int_0^t\Xi^{iji'j'}_{s-}\Xi^{klk'l'}_{r-}U^{ii'}_{s-}U^{kk'}_{r-}\bar H(2)_sD(2)^n_s\,\bar H(2)_rD(2)^n_r\,dB(2)^n_s\,dB(2)^n_r,$$
$$\mathrm{II}=\sum_{i,i',j,j'}\sum_{k,k',l,l'}\int_0^t\int_0^t\Xi^{iji'j'}_{s-}\Xi^{klk'l'}_{r-}U^{ii'}_{s-}U^{kk'}_{r-}\bar H(2)_sD(2)^n_s\,R_rD(2)^n_r\,dB(2)^n_s\,dB(2)^n_r,$$
$$\mathrm{III}=\sum_{i,i',j,j'}\sum_{k,k',l,l'}\int_0^t\int_0^t\Xi^{iji'j'}_{s-}\Xi^{klk'l'}_{r-}U^{ii'}_{s-}U^{kk'}_{r-}R_sD(2)^n_s\,\bar H(2)_rD(2)^n_r\,dB(2)^n_s\,dB(2)^n_r,$$
$$\mathrm{IV}=\sum_{i,i',j,j'}\sum_{k,k',l,l'}\int_0^t\int_0^t\Xi^{iji'j'}_{s-}\Xi^{klk'l'}_{r-}U^{ii'}_{s-}U^{kk'}_{r-}R_sD(2)^n_s\,R_rD(2)^n_r\,dB(2)^n_s\,dB(2)^n_r.$$

The condition [H](ii) and the Schwarz inequality yield
$$|\mathrm{II}|\lesssim\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}U^{ii'}_{s-}|\,dB(2)^n_s\bigg)\bigg(\sum_{k,k',l,l'}\int_0^t|\Xi^{klk'l'}_{r-}U^{kk'}_{r-}R_r|\,dB(2)^n_r\bigg)\le\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}|\,dB(2)^n_s\bigg)^{1/2}\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^2\,dB(2)^n_s\bigg)^{1/2}\times\bigg(\sum_{k,k',l,l'}\int_0^t|\Xi^{klk'l'}_{r-}||U^{kk'}_{r-}|^2\,dB(2)^n_r\bigg)^{1/2}\bigg(\sum_{k,k',l,l'}\int_0^t|\Xi^{klk'l'}_{r-}||R_r|^2\,dB(2)^n_r\bigg)^{1/2},$$
hence by Lemma 12.4(a) and the Schwarz inequality we obtain
$$E[|\mathrm{II}|]\lesssim k_n^2\bigg\{E\bigg[\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^2\,dB(2)^n_s\bigg)^2\bigg]\,E\bigg[\sum_{k,k',l,l'}\int_0^t|\Xi^{klk'l'}_{r-}||R_r|^2\,dB(2)^n_r\bigg]\bigg\}^{1/2}.$$
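Both Schwarz steps above are instances of the same elementary weighted inequality, recorded here for clarity (with nonnegative weights $a_m$ and arbitrary reals $u_m$; in the application the weights are $|\Xi_{s-}|\,dB(2)^n_s$ and the "sums" are sums combined with integrals):

```latex
\sum_m a_m |u_m|
\;=\; \sum_m a_m^{1/2}\,\bigl(a_m^{1/2}|u_m|\bigr)
\;\le\; \Bigl(\sum_m a_m\Bigr)^{1/2}\Bigl(\sum_m a_m u_m^2\Bigr)^{1/2}.
```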

By the Schwarz inequality and Lemma 12.4(a) we have
$$E\bigg[\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^2\,dB(2)^n_s\bigg)^2\bigg]\le E\bigg[\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}|\,dB(2)^n_s\bigg)\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^4\,dB(2)^n_s\bigg)\bigg]\lesssim k_n^4\,E\bigg[\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^4\,dB(2)^n_s\bigg].$$
Moreover, since $B(2)^n$ is a deterministic nondecreasing process or a $\mathcal G^{(n)}$-adapted point process, by Lemma 12.2 we have
$$E\bigg[\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^4\,dB(2)^n_s\bigg]=E\bigg[\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}|\,E\Big[|U^{ii'}_{s-}|^4\,\Big|\,\mathcal F_{S^{i\wedge i'}}\Big]\,dB(2)^n_s\bigg],$$


hence by Lemma 12.5(a) and Lemma 12.4(a) we obtain
$$E\bigg[\bigg(\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^2\,dB(2)^n_s\bigg)^2\bigg]\lesssim k_n^8(k_n\bar r_n)^4.$$
On the other hand, since $B(2)^n$ is a deterministic nondecreasing process or a $\mathcal G^{(n)}$-adapted point process, by Lemma 12.2 we obtain
$$E\bigg[\sum_{k,k',l,l'}\int_0^t|\Xi^{klk'l'}_{r-}||R_r|^2\,dB(2)^n_r\bigg]=E\bigg[\sum_{k,k',l,l'}\int_0^t|\Xi^{klk'l'}_{r-}|\,E\Big[|R_r|^2\,\Big|\,\mathcal G^{(n)}_r\Big]\,dB(2)^n_r\bigg],$$
and thus [H](iii) and Lemma 12.4(a) yield
$$E\bigg[\sum_{k,k',l,l'}\int_0^t|\Xi^{klk'l'}_{r-}||R_r|^2\,dB(2)^n_r\bigg]\lesssim b_n^{(\xi-\frac12)(1-\lambda)}k_n^4$$
for any $\lambda>0$. Consequently, we conclude that $E[|\mathrm{II}|]\lesssim(k_n\bar r_n)^2b_n^{(\xi-\frac12)\frac{1-\lambda}{2}}k_n^8$, and thus we obtain
$$b_n^{-1}|\mathrm{II}|=O_p\Big(b_n^{-1+2(\xi'-\frac12)+(\xi-\frac12)\frac{1-\lambda}{2}}k_n^8\Big)=O_p\Big(b_n^{2(\xi'-\xi)+\frac52(\xi-\frac{9}{10})-\frac{\lambda}{2}(\xi-\frac12)}k_n^8\Big).$$
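As a quick numerical sanity check of the exponent identity in the last display (the identity is pure arithmetic; the sample values of $\xi$, $\xi'$, $\lambda$ below are hypothetical and only need to satisfy $\xi'>\xi>9/10$ with $\lambda>0$ small):

```python
# Check: -1 + 2(xi' - 1/2) + (xi - 1/2)(1 - lam)/2
#      = 2(xi' - xi) + (5/2)(xi - 9/10) - (lam/2)(xi - 1/2),
# and that this exponent is positive, so b_n^exponent -> 0.

def lhs(xi, xip, lam):
    return -1 + 2 * (xip - 0.5) + (xi - 0.5) * (1 - lam) / 2

def rhs(xi, xip, lam):
    return 2 * (xip - xi) + 2.5 * (xi - 0.9) - (lam / 2) * (xi - 0.5)

checks = [(0.91, 0.95, 0.01), (0.95, 0.99, 0.05), (0.901, 0.902, 0.001)]
for xi, xip, lam in checks:
    assert abs(lhs(xi, xip, lam) - rhs(xi, xip, lam)) < 1e-9
    assert lhs(xi, xip, lam) > 0  # positive exponent => o_p(k_n^8) after dividing by b_n
```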

Since $\xi'>\xi>9/10$ and $\lambda>0$ can be taken arbitrarily small, we conclude that $b_n^{-1}\mathrm{II}=o_p(k_n^8)$. In a similar manner, we can show that $b_n^{-1}\mathrm{III}=o_p(k_n^8)$ and $b_n^{-1}\mathrm{IV}=o_p(k_n^8)$.
Next, we evaluate $E[\mathrm I]$. In light of Lemma 12.3(a), the terms contribute to the sum only when $|i-k|\wedge|i'-k|<k_n$ or $|i-k'|\wedge|i'-k'|<k_n$. Therefore, [H](ii) and the Schwarz inequality yield
$$|E[\mathrm I]|\lesssim\bigg(\sum_{\substack{i,i',k,k'\\|i-k|<k_n}}+\sum_{\substack{i,i',k,k'\\|i'-k|<k_n}}+\sum_{\substack{i,i',k,k'\\|i-k'|<k_n}}+\sum_{\substack{i,i',k,k'\\|i'-k'|<k_n}}\bigg)\sum_{j,j',l,l'}E\bigg[\int_0^t\int_0^t|\Xi^{iji'j'}_{s-}\Xi^{klk'l'}_{r-}U^{ii'}_{s-}U^{kk'}_{r-}|\,dB(2)^n_s\,dB(2)^n_r\bigg]=:A_1+A_2+A_3+A_4.$$
Consider $A_1$. We rewrite it as
$$A_1=\sum_{i,k:|i-k|<k_n}E\bigg[\bigg(\sum_{i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}U^{ii'}_{s-}|\,dB(2)^n_s\bigg)\bigg(\sum_{k',l,l'}\int_0^t|\Xi^{klk'l'}_{s-}U^{kk'}_{s-}|\,dB(2)^n_s\bigg)\bigg].$$
Then by the Schwarz inequality and the inequality of arithmetic and geometric means we obtain
$$A_1\le k_n\sum_iE\bigg[\bigg(\sum_{i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}U^{ii'}_{s-}|\,dB(2)^n_s\bigg)^2\bigg].$$


Therefore, by the Schwarz inequality and the second inequality of Lemma 12.4(a) we obtain
$$A_1\lesssim k_n\bar r_nk_n^4\,E\bigg[\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^2\,dB(2)^n_s\bigg].$$
Since $B(2)^n$ is a deterministic nondecreasing process or a $\mathcal G^{(n)}$-adapted point process, by Lemma 12.2 we have
$$E\bigg[\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}||U^{ii'}_{s-}|^2\,dB(2)^n_s\bigg]=E\bigg[\sum_{i,i',j,j'}\int_0^t|\Xi^{iji'j'}_{s-}|\,E\Big[|U^{ii'}_{s-}|^2\,\Big|\,\mathcal F_{S^{i\wedge i'}}\Big]\,dB(2)^n_s\bigg],$$
hence by Lemma 12.5(a) and Lemma 12.4(a) we obtain $A_1\lesssim k_n\bar r_n\,k_n^8(k_n\bar r_n)^2=k_n^8(k_n\bar r_n)^3$. Similarly we can also show that $A_l\lesssim k_n^8(k_n\bar r_n)^3$ for $l\in\{2,3,4\}$. Consequently, we have $|E[\mathrm I]|\lesssim k_n^8(k_n\bar r_n)^3$, so that we conclude that $b_n^{-1}|E[\mathrm I]|=o(k_n^8)$ because $b_n^{-1}(k_n\bar r_n)^3=O(b_n^{3\xi'-\frac52})=o(1)$.
After all, we conclude that $b_n^{-1/2}\Delta_{1,t}=o_p(k_n^4)$. By symmetry, we also obtain $b_n^{-1/2}\Delta_{2,t}=o_p(k_n^4)$.
Next we consider $\Delta_{3,t}$. By the use of associativity and linearity of integration, we have
$$\Delta_{3,t}=\sum_{i,j,i',j'}\big(\bar K^{ij}_-\bar K^{i'j'}_-V^{ij'}_-\big)\bullet\langle\bar M'^n_{\alpha'}(\hat I)^{i'},\bar N^n_\beta(\hat J)^j\rangle_t.$$
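The rate claim $b_n^{-1}(k_n\bar r_n)^3=O(b_n^{3\xi'-5/2})=o(1)$ can be checked by the same bookkeeping; the code below assumes the standing orders $k_n\asymp b_n^{-1/2}$ and $\bar r_n=O(b_n^{\xi'})$ (recalled from the sampling setup, not re-derived here):

```python
# b_n-exponent of b_n^{-1} * (k_n * rbar_n)^3 when k_n ~ b_n^{-1/2}
# and rbar_n ~ b_n^{xi'}:  -1 + 3*(xi' - 1/2) = 3*xi' - 5/2.
# This is positive (so the quantity is o(1)) exactly when xi' > 5/6,
# and in particular for xi' > 9/10 as assumed in the text.

def exponent(xip):
    return -1 + 3 * (xip - 0.5)

assert abs(exponent(5 / 6)) < 1e-9                     # vanishes at the boundary xi' = 5/6
assert all(exponent(x) > 0 for x in (0.91, 0.95, 1.0))  # positive on the assumed range
```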

Moreover, we have
$$\langle\bar M'^n_{\alpha'}(\hat I)^{i'},\bar N^n_\beta(\hat J)^j\rangle_t=\sum_{p,q=0}^{k_n-1}\alpha'^n_p\beta^n_q\big(\hat I^{i'+p}_-\hat J^{j+q}_-\big)\bullet\langle M'^n,N^n\rangle_t=\sum_{p,q=0}^{k_n-1}\alpha'^n_p\beta^n_q\big\{\hat I^{i'+p}_-\hat J^{j+q}_-H(4)D(4)^n\big\}\bullet B(4)^n_t,$$
hence we obtain
$$\Delta_{3,t}=\sum_{i,j,i',j'}\int_0^t\Lambda^{iji'j'}_{s-}V^{ij'}_{s-}H(4)_sD(4)^n_s\,dB(4)^n_s.$$
Therefore, using Lemma 12.3(b), Lemma 12.4(b) and Lemma 12.5(b) instead of Lemma 12.3(a), Lemma 12.4(a) and Lemma 12.5(a) respectively, we can adopt an argument similar to the above one. After all, we conclude that $b_n^{-1/2}\Delta_{3,t}=o_p(k_n^4)$. By symmetry, we also obtain $b_n^{-1/2}\Delta_{4,t}=o_p(k_n^4)$. $\Box$

Proof of Proposition 4.4. By a localization procedure, we may assume that [SA3], [SA4], [SN] and [SC3] hold instead of [A3], (7.2), [N] and [C3] respectively. Lemma 12.6 implies that it is sufficient to prove that the condition [H] holds for $M^n,M'^n\in\{X,E^X\}$ and $N^n,N'^n\in\{Y,E^Y\}$, but this immediately follows from [SA3], [SA4], [SN] and [SC3]. $\Box$


13. Proof of Theorems 4.1 and 4.2

Before starting the proof, we strengthen the conditions [A5] and [A6] as follows (note Lemma 4.1 and the argument in the beginning of Section 7):

[SA5] We have [A5], and the processes $A^X$, $A^Y$, $(A^X)'$ and $(A^Y)'$ are bounded. Furthermore, there exist a positive constant $C$ and $\lambda\in(0,3/4)$ satisfying
$$E\big[|f_t-f_{\tau\wedge t}|^2\,\big|\,\mathcal F_{\tau\wedge t}\big]\le C|t-\tau|^{1-\lambda}\tag{13.1}$$
for every $t>0$ and any bounded $\mathcal F^{(0)}$-stopping time $\tau$, for the density processes $f=(A^X)'$ and $(A^Y)'$.

[SA6] There exists a positive constant $C$ such that $b_n^{-1}H_n(t)\le C$ for every $t$.

The following lemma can be shown by arguments analogous to those used in the proofs of Lemmas 13.1 and 13.2 in [21].

Lemma 13.1. Suppose that [A2] and [SA4] are satisfied. Let $(M^n)$ be a sequence of square-integrable martingales such that there exists a positive constant $C_1$ satisfying
$$\sup_{q\in\mathbb N}\langle M^n\rangle(\hat J^q)_t\le C_1\bar r_n\tag{13.2}$$
for any $t\in\mathbb R_+$ and any $n\in\mathbb N$. Let $\alpha,\beta\in\Phi$ and let $A$ be an $\mathcal F^{(0)}$-adapted process with a bounded derivative such that there are a positive constant $C_2$ and a constant $\lambda\in(0,3/4)$ satisfying
$$E\big[(A'_t-A'_{\tau\wedge t})^2\,\big|\,\mathcal F_{\tau\wedge t}\big]\le C_2|t-\tau|^{1-\lambda}\tag{13.3}$$
for any $t\in\mathbb R_+$ and any bounded $\mathcal F^{(0)}$-stopping time $\tau$. Let $D^n$ be a càdlàg $\mathcal G^{(n)}$-adapted process for each $n$ and suppose that $\sup_n|D^n|$ is a bounded process. Set $A^n=D^n\bullet A$ and define
$$\mathrm I_t=\sum_{i,j=1}^\infty\bar K^{ij}_-\bullet\{\bar A^n_\alpha(\hat I)^i_-\bullet\bar M^n_\beta(\hat J)^j\}_t,\qquad\mathrm{II}_t=\sum_{i,j=1}^\infty\bar K^{ij}_-\bullet\{\bar M^n_\beta(\hat J)^j_-\bullet\bar A^n_\alpha(\hat I)^i\}_t$$
for each $t\in\mathbb R_+$. Then
(a) $b_n^{-1/4}\sup_{0\le s\le t}|\mathrm I_s|=o_p(k_n^2)$ for every $t$.
(b) $b_n^{-1/4}|\mathrm{II}_t|=o_p(k_n^2)$ for every $t$.
(c) Suppose that [SC1]-[SC2] and [SA6] are fulfilled. Then $b_n^{-1/4}\sup_{0\le s\le t}|\mathrm{II}_s|=o_p(k_n^2)$ for every $t$ if $M^n\in\{M^Y,E^Y\}$.

Proof. (a) The process $\mathrm I$ is clearly a locally square-integrable martingale with the predictable quadratic variation
$$\langle\mathrm I\rangle_t=\sum_{i,i',j,j'=1}^\infty\bar K^{ij}_-\bar K^{i'j'}_-\bar A^n_\alpha(\hat I)^i_-\bar A^n_\alpha(\hat I)^{i'}_-\bullet\langle\bar M^n_\beta(\hat J)^j,\bar M^n_\beta(\hat J)^{j'}\rangle_t.$$
Since
$$\langle\bar M^n_\beta(\hat J)^j,\bar M^n_\beta(\hat J)^{j'}\rangle_t=\sum_{q,q'=0}^{k_n-1}\beta^n_q\beta^n_{q'}\big(\hat J^{j+q}_-\hat J^{j'+q'}_-\big)\bullet\langle M^n\rangle_t,$$


we have
$$\langle\mathrm I\rangle_t=\sum_{i,i',j,j'=1}^\infty\sum_{q,q'=0}^{k_n-1}\beta^n_q\beta^n_{q'}\bar K^{ij}_-\bar K^{i'j'}_-\bar A^n_\alpha(\hat I)^i_-\bar A^n_\alpha(\hat I)^{i'}_-\hat J^{j+q}_-\hat J^{j'+q'}_-\bullet\langle M^n\rangle_t=\sum_{i,i',q=1}^\infty\sum_{j,j'=(q-k_n+1)\vee1}^{q}\beta^n_{q-j}\beta^n_{q-j'}\bar K^{ij}_-\bar K^{i'j'}_-\bar A^n_\alpha(\hat I)^i_-\bar A^n_\alpha(\hat I)^{i'}_-\hat J^q_-\bullet\langle M^n\rangle_t.$$
Moreover, since $|\bar A^n_\alpha(\hat I)^i_s|\lesssim\sum_{p=0}^{k_n-1}|\hat I^{i+p}(t)|$ and $\bar K^{ij}_s\le\bar K^{ij}_t$ if $s\le t$, we have
$$\langle\mathrm I\rangle_t\lesssim\sum_{i,i',q=1}^\infty\sum_{j,j'=(q-k_n+1)\vee1}^{q}\sum_{p,p'=0}^{k_n-1}\bar K^{ij}_t\bar K^{i'j'}_t|\hat I^{i+p}(t)||\hat I^{i'+p'}(t)|\,\langle M^n\rangle(\hat J^q)_t,$$
and thus [SA4] and (13.2) yield
$$\langle\mathrm I\rangle_t\lesssim k_n\bar r_n^2\sum_{i,i',q=1}^\infty\sum_{j,j'=(q-k_n+1)\vee1}^{q}\sum_{p=0}^{k_n-1}\bar K^{ij}_t\bar K^{i'j'}_t|\hat I^{i+p}(t)|.$$

Since $\sum_{i'=1}^\infty\bar K^{i'j'}_t\lesssim k_n$, we have
$$\langle\mathrm I\rangle_t\lesssim k_n^3\bar r_n^2\sum_{i,q=1}^\infty\sum_{j=(q-k_n+1)\vee1}^{q}\sum_{p=0}^{k_n-1}\bar K^{ij}_t|\hat I^{i+p}(t)|=k_n^3\bar r_n^2\sum_{p,q=0}^{k_n-1}\sum_{i=1}^\infty|\hat I^{i+p}(t)|\sum_{j=1}^\infty\bar K^{ij}_t.$$
Since $\sum_{i=1}^\infty|\hat I^{i+p}(t)|\le t$ and $\sum_{j=1}^\infty\bar K^{ij}_t\lesssim k_n$, we obtain $\langle\mathrm I\rangle_t\lesssim k_n^6\bar r_n^2$, and thus we have
$$b_n^{-1/2}\langle\mathrm I\rangle_t=O_p\big(k_n^4\cdot b_n^{2\xi'-3/2}\big)=o_p(k_n^4)$$
because $\xi'>9/10$. The Lenglart inequality implies that $b_n^{-1/4}\sup_{0\le s\le t}|\mathrm I_s|=o_p(k_n^2)$ as desired.
(b) We rewrite the target quantity as

$$\mathrm{II}_t=\sum_{i,j=1}^\infty\sum_{p=0}^{k_n-1}\alpha^n_p\bar K^{ij}_-\bar M^n_\beta(\hat J)^j_-\hat I^{i+p}_-\bullet\big(D^n\bullet A\big)_t=\sum_{i,j=1}^\infty\sum_{p=0}^{k_n-1}\alpha^n_pA'_{T^j}\int_0^t\bar K^{ij}_s\hat I^{i+p}_sD^n_s\bar M^n_\beta(\hat J)^j_s\,ds+\sum_{i,j=1}^\infty\sum_{p=0}^{k_n-1}\alpha^n_p\int_0^t\bar K^{ij}_s\hat I^{i+p}_sD^n_s\bar M^n_\beta(\hat J)^j_s\{A'_s-A'_{T^j}\}\,ds=:\mathrm{II}_{1,t}+\mathrm{II}_{2,t}.$$

First we claim that $b_n^{-1/4}\mathrm{II}_{1,t}=o_p(k_n^2)$ as $n\to\infty$. Since $A'_{T^j}A'_{T^{j'}}\bar K^{ij}_s\hat I^{i+p}_sD^n_s\bar K^{i'j'}_u\hat I^{i'+p'}_uD^n_u$ is $\mathcal F_{T^{j\vee j'}}$-measurable due to Lemma 12.2 and $E[\bar M^n_\beta(\hat J)^j_s\bar M^n_\beta(\hat J)^{j'}_u\,|\,\mathcal F_{T^{j\vee j'}}]=0$ if $|j-j'|>k_n-1$ due to the optional sampling theorem, we have
$$E[\mathrm{II}_{1,t}^2]=\sum_{i,i'}\sum_{j,j':|j-j'|\le k_n}\sum_{p,p'=0}^{k_n-1}\alpha^n_p\alpha^n_{p'}E\bigg[A'_{T^j}A'_{T^{j'}}\int_0^t\int_0^t\bar K^{ij}_s\hat I^{i+p}_sD^n_s\bar K^{i'j'}_u\hat I^{i'+p'}_uD^n_u\bar M^n_\beta(\hat J)^j_s\bar M^n_\beta(\hat J)^{j'}_u\,ds\,du\bigg]$$


$$\lesssim\sum_{i,i'}\sum_{j,j':|j-j'|\le k_n}\sum_{p,p'=0}^{k_n-1}\int_0^t\int_0^tE\Big[\bar K^{ij}_s\hat I^{i+p}_s\bar K^{i'j'}_u\hat I^{i'+p'}_u\Big(\big|\bar M^n_\beta(\hat J)^j_s\big|^2+\big|\bar M^n_\beta(\hat J)^{j'}_u\big|^2\Big)\Big]\,ds\,du.$$
On the other hand, the optional sampling theorem and (13.2) yield
$$E\big[|\bar M^n_\beta(\hat J)^j_s|^2\,\big|\,\mathcal F_{T^j}\big]=E\big[\langle\bar M^n_\beta(\hat J)^j\rangle_s\,\big|\,\mathcal F_{T^j}\big]=\sum_{q=0}^{k_n-1}(\beta^n_q)^2E\big[\langle M^n\rangle(\hat J^{j+q})_t\,\big|\,\mathcal F_{T^j}\big]\lesssim k_n\bar r_n.\tag{13.4}$$
Since $\bar K^{ij}_s\hat I^{i+p}_s\bar K^{i'j'}_u\hat I^{i'+p'}_u$ is $\mathcal F_{T^{j\wedge j'}}$-measurable by Lemma 12.2, we obtain
$$E[\mathrm{II}_{1,t}^2]\lesssim k_n\bar r_n\,E\bigg[\sum_{i,i'}\sum_{j,j':|j-j'|\le k_n}\sum_{p,p'=0}^{k_n-1}\bar K^{ij}_t|\hat I^{i+p}(t)|\bar K^{i'j'}_t|\hat I^{i'+p'}(t)|\bigg].$$

Therefore, [SA4] and (7.1) imply that
$$E[\mathrm{II}_{1,t}^2]\lesssim k_n^4\bar r_n^2\,E\bigg[\sum_{i,j}\sum_{p=0}^{k_n-1}\bar K^{ij}_t|\hat I^{i+p}(t)|\bigg]=k_n^4\bar r_n^2\,E\bigg[\sum_{p=0}^{k_n-1}\sum_{i=1}^\infty|\hat I^{i+p}(t)|\sum_{j=1}^\infty\bar K^{ij}_t\bigg].$$
Since $\sum_{i=1}^\infty|\hat I^{i+p}(t)|\le t$ and $\sum_{j=1}^\infty\bar K^{ij}_t\lesssim k_n$, we conclude that $E[\mathrm{II}_{1,t}^2]\lesssim k_n^6\bar r_n^2$, hence $b_n^{-1/4}\mathrm{II}_{1,t}=o_p(k_n^2)$.
Next we estimate $\mathrm{II}_{2,t}$. By Lemma 12.2 we have
$$E[|\mathrm{II}_{2,t}|]\lesssim E\bigg[\sum_{i,j=1}^\infty\sum_{p=1}^{k_n-1}\int_0^t\bar K^{ij}_s\hat I^{i+p}_s\,E\Big[\big|\bar M^n_\beta(\hat J)^j_s(A'_s-A'_{T^j})\big|\,\Big|\,\mathcal F_{T^j}\Big]\,ds\bigg],$$
hence by the Schwarz inequality and (13.4) we obtain
$$E[|\mathrm{II}_{2,t}|]\lesssim(k_n\bar r_n)^{1/2}\,E\bigg[\sum_{i,j=1}^\infty\sum_{p=1}^{k_n-1}\int_0^t\bar K^{ij}_s\hat I^{i+p}_s\Big\{E\Big[(A'_s-A'_{T^j})^2\,\Big|\,\mathcal F_{T^j}\Big]\Big\}^{1/2}\,ds\bigg].$$
Moreover, if $\bar I^i\cap\bar J^j\neq\emptyset$ we have $|s-T^j|\le|S^{i+k_n}-T^j|\vee|T^j-S^i|\le(S^{i+k_n}-S^i)+(T^{j+k_n}-T^j)\le2k_nr_n(t)$ for any $s\in\bar I^i$. Hence by [SA4] and (13.3) we conclude that
$$E[|\mathrm{II}_{2,t}|]\lesssim(k_n\bar r_n)^{1-\frac\lambda2}\,E\bigg[\sum_{i,j=1}^\infty\sum_{p=1}^{k_n-1}\int_0^t\bar K^{ij}_s\hat I^{i+p}_s\,ds\bigg]\le(k_n\bar r_n)^{1-\frac\lambda2}\,E\bigg[\sum_{p=1}^{k_n-1}\sum_{i=1}^\infty|\hat I^{i+p}(t)|\sum_{j=1}^\infty\bar K^{ij}_t\bigg]$$
for some $\lambda\in(0,3/4)$. Since $\sum_{i=1}^\infty|\hat I^{i+p}(t)|\le t$ and $\sum_{j=1}^\infty\bar K^{ij}_t\lesssim k_n$, we conclude that $E[|\mathrm{II}_{2,t}|]\lesssim k_n^2(k_n\bar r_n)^{1-\lambda/2}$, hence $b_n^{-1/4}\mathrm{II}_{2,t}=O_p\big(k_n^2b_n^{(\xi'-1/2)(1-\lambda/2)-1/4}\big)=o_p(k_n^2)$ because $\xi'>9/10$ and $\lambda\in(0,3/4)$.
Consequently, we obtain $b_n^{-1/4}\mathrm{II}_t=o_p(k_n^2)$ as $n\to\infty$ for every $t$.
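The admissible range of $\lambda$ here is exactly what makes the $\mathrm{II}_2$ exponent positive; a small numerical check (the sample values are hypothetical and only required to satisfy $\xi'>9/10$ and $\lambda\in(0,3/4)$):

```python
# The II_2 estimate needs (xi' - 1/2)(1 - lam/2) - 1/4 > 0.
# At the boundary xi' = 9/10, lam = 3/4 the expression equals
# (2/5)(1 - 3/8) - 1/4 = 0, so it is strictly positive on the
# open ranges xi' > 9/10, lam < 3/4.

def gap(xip, lam):
    return (xip - 0.5) * (1 - lam / 2) - 0.25

assert abs(gap(0.9, 0.75)) < 1e-9   # boundary of both ranges
assert gap(0.91, 0.74) > 0
assert gap(0.95, 0.5) > 0
```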

(c) In light of (b), it is sufficient to show that $(b_n^{-1/4}k_n^{-2}\mathrm{II})_{n\in\mathbb N}$ is C-tight.
Fix a $T>0$. Rewrite $\mathrm{II}$ as
$$\mathrm{II}_t=\sum_{j=1}^\infty\{\bar M^n_\beta(\hat J)^j_-\Upsilon^j_-\}\bullet A_t,$$
where $\Upsilon^j_s=\sum_{i=1}^\infty\sum_{p=0}^{k_n-1}\alpha^n_p\bar K^{ij}_s\hat I^{i+p}_sD^n_s$ for each $s\in\mathbb R_+$. Then for $0\le s<t\le T$
$$b_n^{-1/4}|\mathrm{II}_t-\mathrm{II}_s|\le b_n^{-1/4}\sum_{j=1}^\infty\int_s^t\big|\bar M^n_\beta(\hat J)^j_u\big|\,|A'_u|\,|\Upsilon^j_u|\,du\le\bigg(\sum_{j=1}^\infty\int_s^tb_n^{-1/2}\big|\bar M^n_\beta(\hat J)^j_u\big|^2|\Upsilon^j_u|\,du\bigg)^{1/2}\bigg(\sum_{j=1}^\infty\int_s^t|A'_u|^2|\Upsilon^j_u|\,du\bigg)^{1/2}\lesssim k_n(t-s)^{1/2}\Theta_n(T)^{1/2},$$
where
$$\Theta_n(\cdot)=\sum_{j=1}^\infty\int_0^\cdot b_n^{-1/2}\big|\bar M^n_\beta(\hat J)^j_u\big|^2|\Upsilon^j_u|\,du.$$

Since $\Upsilon^j_u$ is $\mathcal F_{T^j}$-measurable due to Lemma 12.2,
$$E[\Theta_n(T)]=E\bigg[\sum_{j=1}^\infty\int_0^TE\Big[b_n^{-1/2}\big|\bar M^n_\beta(\hat J)^j_u\big|^2\,\Big|\,\mathcal F_{T^j}\Big]|\Upsilon^j_u|\,du\bigg]=E\bigg[\sum_{j=1}^\infty\int_0^Tb_n^{-1/2}\langle\bar M^n_\beta(\hat J)^j\rangle_u|\Upsilon^j_u|\,du\bigg]\le E\bigg[b_n^{-1/2}\sum_{p=0}^{k_n-1}\sum_{i,j=1}^\infty\langle\bar M^n_\beta(\hat J)^j\rangle_T\bar K^{ij}_T|\hat I^{i+p}(T)|\bigg]\le E\bigg[b_n^{-1/2}\sum_{p,q=0}^{k_n-1}\sum_{i,j=1}^\infty\bar K^{ij}_T|\hat I^{i+p}(T)|\,\langle M^n\rangle(\hat J^{j+q})_T\bigg].$$

If $M^n=M^Y$, [SC1] yields
$$b_n^{-1/2}\sum_{p,q=0}^{k_n-1}\sum_{i,j=1}^\infty\bar K^{ij}_T|\hat I^{i+p}(T)|\,\langle M^n\rangle(\hat J^{j+q})_T\lesssim b_n^{-1/2}\sum_{p,q=0}^{k_n-1}\sum_{i,j=1}^\infty\bar K^{ij}_T|\hat I^{i+p}(T)||\hat J^{j+q}(T)|\lesssim b_n^{-1}\sum_{p,q=0}^{k_n-1}\bigg(\sum_{i=1}^\infty|\hat I^{i+p}(T)|^2+\sum_{j=1}^\infty|\hat J^{j+q}(T)|^2\bigg)\le4k_n^2\cdot b_n^{-1}H_n(T),$$


and thus by [SA6] we obtain $E[\Theta_n(T)]\lesssim k_n^2$. On the other hand, if $M^n=E^Y$, [SC2] yields
$$b_n^{-1/2}\sum_{p,q=0}^{k_n-1}\sum_{i,j=1}^\infty\bar K^{ij}_T|\hat I^{i+p}(T)|\,\langle M^n\rangle(\hat J^{j+q})_T\lesssim k_n^{-1}\sum_{p,q=0}^{k_n-1}\sum_{i,j=1}^\infty\bar K^{ij}_T|\hat I^{i+p}(T)|1_{\{T^{j+q}\le T\}}\lesssim\sum_{p=0}^{k_n-1}\sum_{i=1}^\infty|\hat I^{i+p}(T)|\sum_{j=1}^\infty\bar K^{ij}_T\lesssim k_n^2,$$
and thus again we obtain $E[\Theta_n(T)]\lesssim k_n^2$. After all, for any $\eta>0$ we have
$$\sup_{n\in\mathbb N}P\big(w(b_n^{-1/4}k_n^{-2}\mathrm{II};\delta,T)\ge\eta\big)\le\sup_{n\in\mathbb N}P\big(k_n^{-1}\delta^{1/2}\Theta_n(T)^{1/2}\ge\eta\big)\le\eta^{-2}\delta\sup_{n\in\mathbb N}k_n^{-2}E[\Theta_n(T)]\to0$$
as $\delta\to0$ if $M^n\in\{M^Y,E^Y\}$, and thus we complete the proof of (c). $\Box$

Lemma 13.2. Suppose that [A2]-[A5], [N] and [C3] hold. Then:
(a) Suppose [A6] holds. Then $b_n^{-1/4}\{\mathsf M^n-\widetilde{\mathsf M}^n\}\xrightarrow{ucp}0$ as $n\to\infty$, where
$$\widetilde{\mathsf M}^n_t=\frac{1}{(\psi_{HY}k_n)^2}\sum_{i,j=0}^\infty\bar K^{ij}_t\Big(\bar L^{ij}_{g,g}(M^X,M^Y)_t+\bar L^{ij}_{g',g'}(E^X,E^Y)_t+\bar L^{ij}_{g,g'}(M^X,E^Y)_t+\bar L^{ij}_{g',g}(E^X,M^Y)_t\Big).$$
(b) $b_n^{-1/4}\langle\widetilde{\mathsf M}^n,N\rangle_t\to^p0$ as $n\to\infty$ for any $N\in\{M^X,M^Y\}$ and every $t$.

Proof. By a localization procedure, we can assume [SA3]-[SA6], [SN] and [SC3] instead of [A3]-[A6], [N] and [C3] respectively (note Lemma 4.1 and the argument in the beginning of Section 7). Then, (a) immediately follows from Lemma 13.1. On the other hand, for $N\in\{M^X,M^Y\}$ we have
$$\langle\widetilde{\mathsf M}^n,N\rangle_t=\frac{1}{(\psi_{HY}k_n)^2}\sum_{i,j=0}^\infty\Big[\bar K^{ij}_-\bullet\big\{\bar M^X_g(\hat I)^i_-\bullet\langle M^Y,N\rangle_g(\hat J)^j\big\}_t+\bar K^{ij}_-\bullet\big\{\bar M^Y_g(\hat J)^j_-\bullet\langle M^X,N\rangle_g(\hat I)^i\big\}_t+\bar K^{ij}_-\bullet\big\{\bar E^Y_{g'}(\hat J)^j_-\bullet\langle M^X,N\rangle_g(\hat I)^i\big\}_t+\bar K^{ij}_-\bullet\big\{\bar E^X_{g'}(\hat I)^i_-\bullet\langle M^Y,N\rangle_g(\hat J)^j\big\}_t\Big],$$
hence Lemma 13.1(b) yields $\langle\widetilde{\mathsf M}^n,N\rangle_t=o_p(b_n^{1/4})$ because (13.3) for $A=\langle L,M\rangle$ ($L,M=M^X,M^Y$) holds by [SA3]. $\Box$

Proof of Theorem 4.1. Note that since [A1'] implies [C3] due to Lemma 11.2, in all cases (a), (b) and (c) the conditions [A2]-[A6], [N] and [C3] hold; hence Lemma 13.2(a) allows us to consider $\widetilde{\mathsf M}^n$ instead


of $\mathsf M^n$. Moreover, [B2] holds by Proposition 4.4, [C1] holds by [A3], and for the case (b) [A1] and [W] with $w$ given by (3.6) hold by Lemma 4.6. Therefore, we complete the proof due to Lemma 13.2(b) and Proposition 4.2. $\Box$

Proof of Theorem 4.2. Since we do not need the condition [A6] in order to verify the condition [B1] (see the proof of Lemma 13.1(b) and Lemma 13.2), an argument similar to the proof of Theorem 4.1 completes the proof. $\Box$

14. Proof of Theorem 5.1

By a localization procedure, we may assume that [SC1]-[SC3], (7.2) and (7.4) hold. According to Lemma 4.2, it is sufficient to prove $\mathsf M^n\xrightarrow{ucp}0$. Therefore, it is sufficient to show that
$$\sup_{0\le s\le t}\bigg|\sum_{i,j}\bar K^{ij}_s\big\{\bar V_\alpha(\hat I)^i_-\bullet\bar W_\beta(\hat J)^j\big\}_s\bigg|=o_p(k_n^2),\qquad\sup_{0\le s\le t}\bigg|\sum_{i,j}\bar K^{ij}_s\big\{\bar W_\beta(\hat J)^j_-\bullet\bar V_\alpha(\hat I)^i\big\}_s\bigg|=o_p(k_n^2)\tag{14.1}$$
as $n\to\infty$ for any $(V,\alpha)\in\{(M^X,g),(E^X,g'),(A^X,g)\}$, $(W,\beta)\in\{(M^Y,g),(E^Y,g'),(A^Y,g)\}$ and $t>0$.
Consider the first equation of (14.1). Let $\mathrm H_s=\sum_{i,j}\bar K^{ij}_s\{\bar V_\alpha(\hat I)^i_-\bullet\bar W_\beta(\hat J)^j\}_s$. First we assume that $(W,\beta)\in\{(M^Y,g),(E^Y,g')\}$. By an argument similar to the proof of Lemma 4.3, we can rewrite $\mathrm H_s$ as $\mathrm H_s=\sum_{i,j}\bar K^{ij}_-\bullet\{\bar V_\alpha(\hat I)^i_-\bullet\bar W_\beta(\hat J)^j\}_s$, and thus $\mathrm H$ is a locally square-integrable martingale. Therefore, it is sufficient to prove $\langle\mathrm H\rangle_t=o_p(k_n^4)$ as $n\to\infty$ for any $t>0$ due to the Lenglart inequality. Since $[\bar W_\beta(\hat J)^j,\bar W_\beta(\hat J)^{j'}]_t=0$ if $|j-j'|>k_n$, we have
$$[\mathrm H]_s=\sum_{i,j,i',j':|j-j'|\le k_n}\bar K^{ij}_-\bar K^{i'j'}_-\bar V_\alpha(\hat I)^i_-\bar V_\alpha(\hat I)^{i'}_-\bullet\big[\bar W_\beta(\hat J)^j,\bar W_\beta(\hat J)^{j'}\big]_s.$$

Hence, (7.4), (7.5), (7.9), the Schwarz inequality and the fact that $\sum_i\bar K^{ij}\lesssim k_n$ for every $j$ yield
$$E_0\big[[\mathrm H]_s\big]\lesssim k_n^2\cdot k_n\bar r_n|\log b_n|\,E_0\bigg[\sum_{j,j':|j-j'|\le k_n}\big[\bar W_\beta(\hat J)^j,\bar W_\beta(\hat J)^{j'}\big]_s\bigg],$$

and thus [SC1]-[SC3], the Kunita-Watanabe inequality and the inequality of arithmetic and geometric means imply that $E_0[[\mathrm H]_s]\lesssim k_n^4\cdot k_n\bar r_n|\log b_n|=o(k_n^4)$ because $\xi'>1/2$. Therefore, we obtain the desired result due to Proposition 4.50 in [26].
Next we assume that $(W,\beta)=(A^Y,g)$. Then, [SC1], (7.4), (7.5) and the Schwarz inequality yield
$$E_0\Big[\sup_{0\le s\le t}|\mathrm H_s|\Big]\lesssim k_n\bar r_n|\log b_n|\sum_{i,j}\bar K^{ij}_t\sum_{q=0}^{k_n-1}|\hat J^{j+q}(t)|,$$
hence the fact that $\sum_i\bar K^{ij}\lesssim k_n$ for every $j$ implies $E_0[\sup_{0\le s\le t}|\mathrm H_s|]\lesssim k_n\bar r_n|\log b_n|\cdot k_n^2=o(k_n^2)$ because $\xi'>1/2$. Consequently, the first equation of (14.1) holds. By symmetry we also obtain the second equation of (14.1), and thus we complete the proof. $\Box$


Acknowledgments

The author would like to express his gratitude to Professor Nakahiro Yoshida, who supervised his research and offered valuable and helpful discussions. Thanks are also due to two anonymous referees for their insightful suggestions that substantially improved this paper. This work was partly supported by a Grant-in-Aid for JSPS Fellows and the Program for Leading Graduate Schools, MEXT, Japan.

References

[1] Y. Aït-Sahalia, J. Fan, D. Xiu, High-frequency covariance estimates with noisy and asynchronous financial data, J. Amer. Statist. Assoc. 105 (2010) 1504-1517.
[2] T.G. Andersen, T. Bollerslev, Answering the skeptics: yes, standard volatility models do provide accurate forecasts, Internat. Econom. Rev. 4 (1998) 885-905.
[3] O.E. Barndorff-Nielsen, P.R. Hansen, A. Lunde, N. Shephard, Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise, Econometrica 76 (2008) 1481-1536.
[4] O.E. Barndorff-Nielsen, P.R. Hansen, A. Lunde, N. Shephard, Multivariate realised kernels: consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading, J. Econometrics 162 (2011) 149-169.
[5] O.E. Barndorff-Nielsen, N. Shephard, Econometric analysis of realized volatility and its use in estimating stochastic volatility, J. R. Stat. Soc. Ser. B Stat. Methodol. 64 (2002) 253-280.
[6] M. Bibinger, Efficient covariance estimation for asynchronous noisy high-frequency data, Scand. J. Stat. 38 (2011) 23-45.
[7] M. Bibinger, An estimator for the quadratic covariation of asynchronously observed Itô processes with noise: asymptotic distribution theory, Stochastic Process. Appl. 122 (2012) 2411-2453.
[8] K. Christensen, S. Kinnebrock, M. Podolskij, Pre-averaging estimators of the ex-post covariance matrix in noisy diffusion models with non-synchronous data, J. Econometrics 159 (2010) 116-133.
[9] K. Christensen, M. Podolskij, M. Vetter, On covariation estimation for multivariate continuous Itô semimartingales with noise in non-synchronous observation schemes, CREATES Research Paper 2011-53, Aarhus University, 2011.
[10] E. Cinlar, R.A. Agnew, On the superposition of point processes, J. R. Stat. Soc. Ser. B Stat. Methodol. 30 (1968) 576-581.
[11] E. Clément, A. Gloter, Limit theorems in the Fourier transform method for the estimation of multivariate volatility, Stochastic Process. Appl. 121 (2011) 1097-1124.
[12] F. Corsi, S. Peluso, F. Audrino, Missing in asynchronicity: a Kalman-EM approach for multivariate realized covariance estimation, Discussion Paper 2012-2, University of St. Gallen, 2012.
[13] A. Dalalyan, N. Yoshida, Second-order asymptotic expansion for a non-synchronous covariation estimator, Ann. Inst. Henri Poincaré Probab. Stat. 47 (2011) 748-789.
[14] F. Delbaen, W. Schachermayer, A general version of the fundamental theorem of asset pricing, Math. Ann. 300 (1994) 463-520.
[15] T.W. Epps, Comovements in stock prices in the very short run, J. Amer. Statist. Assoc. 74 (1979) 291-298.
[16] M. Fukasawa, Realized volatility with stochastic sampling, Stochastic Process. Appl. 120 (2010) 829-852.
[17] T. Hayashi, J. Jacod, N. Yoshida, Irregular sampling and central limit theorems for power variations: the continuous case, Ann. Inst. Henri Poincaré Probab. Stat. 47 (2011) 1197-1218.
[18] T. Hayashi, S. Kusuoka, Consistent estimation of covariation under nonsynchronicity, Stat. Inference Stoch. Process. 11 (2008) 93-106.
[19] T. Hayashi, N. Yoshida, On covariance estimation of non-synchronously observed diffusion processes, Bernoulli 11 (2005) 359-379.
[20] T. Hayashi, N. Yoshida, Asymptotic normality of a covariance estimator for nonsynchronously observed diffusion processes, Ann. Inst. Statist. Math. 60 (2008) 357-396.
[21] T. Hayashi, N. Yoshida, Nonsynchronous covariation process and limit theorems, Stochastic Process. Appl. 121 (2011) 2416-2454.
[22] S. Ikeda, A bias-corrected rate-optimal estimator of the covariation matrix of multiple security returns with dependent noise, Working paper, 2010.
[23] J. Jacod, On continuous conditional Gaussian martingales and stable convergence in law, in: Séminaire de Probabilités XXXI, in: Lecture Notes in Mathematics, vol. 1655, Springer, Berlin, New York, 1997, pp. 232-246.


[24] J. Jacod, Y. Li, P.A. Mykland, M. Podolskij, M. Vetter, Microstructure noise in the continuous case: the pre-averaging approach, Stochastic Process. Appl. 119 (2009) 2249-2276.
[25] J. Jacod, M. Podolskij, M. Vetter, Limit theorems for moving averages of discretized processes plus noise, Ann. Statist. 38 (2010) 1478-1545.
[26] J. Jacod, A.N. Shiryaev, Limit Theorems for Stochastic Processes, second ed., Springer, 2003.
[27] Y. Li, P.A. Mykland, E. Renault, L. Zhang, X. Zheng, Realized volatility when sampling times are possibly endogenous, 2009. Available at SSRN: http://ssrn.com/abstract=1525410.
[28] P. Malliavin, M.E. Mancino, Fourier series method for measurement of multivariate volatilities, Finance Stoch. 6 (2002) 49-61.
[29] P. Malliavin, M.E. Mancino, A Fourier transform method for nonparametric estimation of multivariate volatility, Ann. Statist. 37 (2009) 1983-2010.
[30] M.C. Münnix, R. Schäfer, T. Guhr, Impact of the tick-size on financial returns and correlations, Phys. A 389 (2010) 4828-4843.
[31] T. Ogihara, N. Yoshida, Quasi-likelihood analysis for stochastic regression models with nonsynchronous observations, 2012. arXiv:1212.4911v1.
[32] P.C.B. Phillips, J. Yu, Information loss in volatility measurement with flat price trading, Cowles Foundation for Research in Economics, Yale University, 2008 (unpublished paper).
[33] M. Podolskij, M. Vetter, Bipower-type estimation in a noisy diffusion setting, Stochastic Process. Appl. 119 (2009) 2803-2831.
[34] M. Podolskij, M. Vetter, Estimation of volatility functionals in the simultaneous presence of microstructure noise and jumps, Bernoulli 15 (2009) 634-658.
[35] S.I. Resnick, R.J. Tomkins, Almost sure stability of maxima, J. Appl. Probab. 10 (1973) 387-401.
[36] C.Y. Robert, M. Rosenbaum, Volatility and covariation estimation when microstructure noise and trading times are endogenous, Math. Finance 22 (2012) 133-164.
[37] R.T. Varneskov, Flat-top realized kernel estimation of quadratic covariation with non-synchronous and noisy asset prices, CREATES Research Paper 2011-35, Aarhus University, 2011.
[38] L. Zhang, Efficient estimation of stochastic volatility using noisy observations: a multi-scale approach, Bernoulli 12 (2006) 1019-1043.
[39] L. Zhang, Estimating covariation: Epps effect, microstructure noise, J. Econometrics 160 (2011) 33-47.