3 Asymptotic Efficiency
Statisticians compare sequences of estimators with one another. This book focuses on one possible way of comparing these sequences of estimators at infinity, i.e. as the sample size grows. Firstly, let us recall the notion of Cramér–Rao bound in dimension one. The following proposition is set for a regular statistical experiment generated by a sample of independent and identically distributed random variables,
$$\left( \prod_{i=1}^{n} \mathbb{R}, \ \bigotimes_{i=1}^{n} \mathcal{B}, \ (P_{n,\vartheta},\ \vartheta \in \Theta) \right), \quad n \ge 1. \qquad [3.1]$$
Here, $P_{n,\vartheta} = \bigotimes_{i=1}^{n} P_\vartheta$, where $(P_\vartheta,\ \vartheta \in \Theta)$ is dominated by a σ-finite measure $\nu$ on $\mathcal{B}$ and
$$\frac{dP_\vartheta}{d\nu}(x) = f(\vartheta, x), \quad x \in \mathbb{R}.$$
Consequently, in this experiment,
$$\frac{dP_{n,\vartheta}}{d\nu^{\otimes n}}(x) = f_n(\vartheta, x) = \prod_{i=1}^{n} f(\vartheta, x_i), \quad x = (x_1, \dots, x_n) \in \prod_{i=1}^{n} \mathbb{R}. \qquad [3.2]$$
PROPOSITION 3.1.– For a fixed sample size $n \in \mathbb{N}$, let $T_n$ be an unbiased estimator of $\vartheta \in \Theta \subset \mathbb{R}$, i.e. $E_\vartheta(T_n) = \vartheta$ for all $\vartheta \in \Theta$. Under the assumptions of theorem 2.2, we have the Cramér–Rao inequality
$$E_\vartheta\left( n (T_n - \vartheta)^2 \right) = n \operatorname{Var}_\vartheta(T_n) \ge I(\vartheta)^{-1} \quad \text{for all } \vartheta \in \Theta. \qquad [3.3]$$
Proof. By the Cauchy–Schwarz inequality,
$$\operatorname{Var}_\vartheta(T_n) \ge \frac{\left( \operatorname{Cov}_\vartheta(T_n, S_\vartheta^n) \right)^2}{\operatorname{Var}_\vartheta(S_\vartheta^n)}.$$
Under regularity conditions, the score satisfies $E_\vartheta(S_\vartheta^n) = 0$, as in equation [2.17], and $\operatorname{Var}_\vartheta(S_\vartheta^n) = E_\vartheta\left( (S_\vartheta^n)^2 \right) = n I(\vartheta)$. Moreover, since the estimator is unbiased, namely $E_\vartheta(T_n) = \vartheta$ for all $\vartheta \in \Theta \subset \mathbb{R}$, we have
$$\operatorname{Cov}_\vartheta(T_n, S_\vartheta^n) = \int_{\prod_{i=1}^{n} \mathbb{R}} T_n(x) S_\vartheta^n(x) f_n(\vartheta, x) \, \nu^{\otimes n}(dx) = \int_{\prod_{i=1}^{n} \mathbb{R}} T_n(x) \frac{\partial}{\partial \vartheta} f_n(\vartheta, x) \, \nu^{\otimes n}(dx)$$
$$= \frac{\partial}{\partial \vartheta} \int_{\prod_{i=1}^{n} \mathbb{R}} T_n(x) f_n(\vartheta, x) \, \nu^{\otimes n}(dx) = \frac{\partial}{\partial \vartheta} E_\vartheta(T_n) = \frac{\partial}{\partial \vartheta} \vartheta = 1.$$
Combining the three previous displays gives $\operatorname{Var}_\vartheta(T_n) \ge (n I(\vartheta))^{-1}$, which is the claim. □
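Before turning to superefficiency, the bound [3.3] can be illustrated numerically. The following R sketch assumes a Poisson(ϑ) sample as an arbitrary example of a regular model, for which $I(\vartheta) = 1/\vartheta$ and the empirical mean is an unbiased estimator attaining the bound; the values of ϑ, n and the number M of Monte Carlo replications are arbitrary choices.

# Monte Carlo illustration of the Cramer-Rao bound [3.3] for a
# Poisson(theta) sample, where I(theta) = 1/theta and the empirical
# mean is unbiased and attains the bound.
set.seed(1)
theta <- 3; n <- 100; M <- 1e5
xbar <- replicate(M, mean(rpois(n, lambda = theta)))
c(n * var(xbar), theta)   # n * Var_theta(T_n) vs I(theta)^{-1} = theta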
The proof above can be extended to the multidimensional case (see [MON 97, theorem 3 p. 94]) and to biased sequences of estimators (see [IBR 81, theorem 7.3 p. 75]). It is worth mentioning that the previous proposition only considers unbiased estimators and a fixed sample size. The next example shows that, even in regular statistical experiments generated by a sample of independent and identically distributed random variables, there are sequences of estimators such that
$$\lim_{n \to \infty} E_\vartheta\left( n (T_n - \vartheta)^2 \right) \le I(\vartheta)^{-1} \quad \text{for all } \vartheta \in \Theta$$
and, for a specific value $\vartheta_0$,
$$\lim_{n \to \infty} E_{\vartheta_0}\left( n (T_n - \vartheta_0)^2 \right) < I(\vartheta_0)^{-1}.$$
Such sequences are called superefficient sequences of estimators at $\vartheta_0$.

EXAMPLE 3.1.– Let us consider the Gaussian shift statistical experiment described in examples 2.1 and 2.14. In this setting, the sequence of maximum likelihood estimators is given by the empirical means $(\overline{X}_n,\ n \ge 1)$.
The sequence is $(\sqrt{n})_n$-consistent and asymptotically normal with variance $\sigma^2(\vartheta) = 1$ for all $\vartheta \in \mathbb{R}$. It should be noted that this sequence of estimators is unbiased. In the following, the Hodges example is constructed. Let us define
$$T_n = \begin{cases} \overline{X}_n & \text{if } |\overline{X}_n| > n^{-\frac{1}{4}}, \\ \alpha \overline{X}_n & \text{if } |\overline{X}_n| \le n^{-\frac{1}{4}}, \end{cases}$$
with $|\alpha| < 1$. The sequence of estimators $(T_n,\ n \ge 1)$ is asymptotically normal with variance $\widetilde{\sigma}^2(\vartheta) = \sigma^2(\vartheta) = 1$ for all $\vartheta \ne 0$ and $\widetilde{\sigma}^2(0) = \alpha^2 < 1 = \sigma^2(0)$. The sequence $(T_n,\ n \ge 1)$ is consequently superefficient at $\vartheta = 0$. Indeed, on the one hand, for $\vartheta \ne 0$ and large $n$, we get $|\vartheta| - n^{-\frac{1}{4}} > c > 0$. By the triangle inequality,
$$P_{n,\vartheta}\left( T_n \ne \overline{X}_n \right) = P_{n,\vartheta}\left( |\overline{X}_n| \le n^{-\frac{1}{4}} \right) = P_{n,\vartheta}\left( |\overline{X}_n - \vartheta + \vartheta| \le n^{-\frac{1}{4}} \right)$$
$$\le P_{n,\vartheta}\left( |\vartheta| - |\overline{X}_n - \vartheta| \le n^{-\frac{1}{4}} \right) = P_{n,\vartheta}\left( |\overline{X}_n - \vartheta| \ge |\vartheta| - n^{-\frac{1}{4}} \right)$$
$$\le P_{n,\vartheta}\left( |\overline{X}_n - \vartheta| > c \right) \longrightarrow 0 \quad \text{as } n \longrightarrow \infty$$
since the sequence of empirical means $(\overline{X}_n,\ n \ge 1)$ is consistent. In other words, the sequence of estimators $(T_n,\ n \ge 1)$ behaves like $(\overline{X}_n,\ n \ge 1)$ at infinity for all $\vartheta \ne 0$ and is asymptotically normal with variance $\widetilde{\sigma}^2(\vartheta) = \sigma^2(\vartheta) = 1$. On the other hand, for $\vartheta = 0$,
$$P_{n,0}\left( T_n \ne \alpha \overline{X}_n \right) = P_{n,0}\left( |\overline{X}_n| > n^{-\frac{1}{4}} \right) = P_{n,0}\left( \sqrt{n} \, |\overline{X}_n - 0| > n^{\frac{1}{4}} \right) \longrightarrow 0$$
as $n \longrightarrow \infty$ since, under $P_{n,0}$, the sequence $(\overline{X}_n,\ n \ge 1)$ is asymptotically normal with rate $\sqrt{n}$ and, consequently, $(\sqrt{n})_n$-consistent. Consequently, the sequence of estimators $(T_n,\ n \ge 1)$ behaves like $(\alpha \overline{X}_n,\ n \ge 1)$ at infinity and is asymptotically normal with variance $\widetilde{\sigma}^2(0) = \alpha^2 < 1$ for $\vartheta = 0$.
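The superefficiency phenomenon of example 3.1 can be observed by simulation. The following R sketch estimates $E_\vartheta(n(T_n - \vartheta)^2)$ by Monte Carlo; the sample size, the value α = 0.5 and the number M of replications are arbitrary choices, and the empirical mean is simulated directly through its N(ϑ, 1/n) law.

set.seed(1)

# Hodges estimator built from the empirical mean of n observations
hodges <- function(xbar, n, alpha = 0.5) {
  ifelse(abs(xbar) > n^(-1/4), xbar, alpha * xbar)
}

# Monte Carlo estimate of E_theta[ n * (T_n - theta)^2 ]
risk <- function(theta, n, M = 1e4, alpha = 0.5) {
  xbar <- rnorm(M, mean = theta, sd = 1 / sqrt(n))  # law of the empirical mean
  Tn   <- hodges(xbar, n, alpha)
  mean(n * (Tn - theta)^2)
}

n <- 1e4
risk(theta = 1, n)            # about 1 = I(theta)^{-1}: no improvement
risk(theta = 0, n)            # about alpha^2 = 0.25 < 1: superefficiency at 0
risk(theta = 4 / sqrt(n), n)  # well above 1: degraded risk close to 0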
Even if the Hodges sequence of estimators defined in example 3.1 is superefficient at 0, it can be shown that it is not a uniform improvement of the sequence of maximum likelihood estimators, as will be described later on. One possible definition of asymptotic efficiency is based on the following minimax property, which is satisfied by the sequence of empirical means $(\overline{X}_n,\ n \ge 1)$ and not by the sequence $(T_n,\ n \ge 1)$ of the previous example 3.1. For a $(\varphi(n, \vartheta_0))_n$-consistent sequence of estimators $(T_n,\ n \ge 1)$, the van Trees inequality
$$\liminf_{C \to \infty} \liminf_{n \to \infty} \sup_{|\vartheta - \vartheta_0| < C \varphi(n, \vartheta_0)^{-1}} E_\vartheta\left( \ell\left( \varphi(n, \vartheta_0) (T_n - \vartheta) \right) \right) \ge c(\vartheta_0) \qquad [3.4]$$
is considered in the following. Here, the risk function $\ell$ is a symmetric non-negative quasi-convex function with $\lim_{|z| \to \infty} e^{-\varepsilon |z|^2} \ell(z) = 0$ for all $\varepsilon > 0$. For instance, in the one-dimensional case, the cost function $\ell(x) = x^2$ can be taken, so that [3.4] gives an asymptotic lower bound for the rescaled variance of the sequence of estimators in the statistical experiment considered. It is worth emphasizing that the aforementioned inequality can be proved in numerous statistical experiments. A sequence of estimators is said to be asymptotically efficient if it reaches the lower bound in [3.4]. For most regular experiments, the rate will be $\varphi(n, \vartheta) = \sqrt{n}$ and will not depend on $\vartheta$. However, singular experiments can have other rates (see example 3.4), and these rates can depend on the parameter. It is worth mentioning that all estimators (not only unbiased ones) compete in this notion of efficiency.

EXAMPLE 3.2.– Let us consider the Gaussian shift statistical experiment described in examples 2.1 and 2.14, and the two sequences of estimators described in example 3.1, namely the sequence of empirical means $(\overline{X}_n,\ n \ge 1)$, which is also the sequence of maximum likelihood estimators, and the Hodges sequence of estimators $(T_n,\ n \ge 1)$ with $\alpha = 0$ (see also [NIC 13], where this example is treated). It can easily be shown that, for all $n \ge 1$ and all $\vartheta \in \Theta$,
$$E_\vartheta\left( n \left( \overline{X}_n - \vartheta \right)^2 \right) = 1 \qquad [3.5]$$
and, consequently, for all $C > 0$, all $n \ge 1$ and all $\vartheta_0 \in \Theta$,
$$\sup_{|\vartheta - \vartheta_0| < \frac{C}{\sqrt{n}}} E_\vartheta\left( n \left( \overline{X}_n - \vartheta \right)^2 \right) = 1, \qquad [3.6]$$
which shows that the sequence of empirical means $(\overline{X}_n,\ n \ge 1)$ reaches the lower bound in [3.4] and is asymptotically efficient. In this sense, the sequence of maximum likelihood estimators $(\overline{X}_n,\ n \ge 1)$ behaves better than the sequence $(T_n,\ n \ge 1)$. Indeed, for $C > 0$, let us consider the similar quantity for the Hodges sequence of estimators $(T_n,\ n \ge 1)$ in the vicinity of $\vartheta_0 = 0$. It can be shown that
$$\sup_{|\vartheta| \le \frac{C}{\sqrt{n}}} E_\vartheta\left( n (T_n - \vartheta)^2 \right) \ge E_{\frac{C}{\sqrt{n}}}\left( n \left( T_n - \frac{C}{\sqrt{n}} \right)^2 \right) \ge C^2 \, E_{\frac{C}{\sqrt{n}}}\left( \mathbb{1}_{\{T_n = 0\}} \right) \ge C^2 \left( 1 - P_{n, \frac{C}{\sqrt{n}}}(T_n \ne 0) \right).$$
It has been shown that $P_{n,0}(T_n \ne 0) \longrightarrow 0$ as $n$ tends to infinity. We admit that $P_{n,0}$ and $P_{n,\frac{C}{\sqrt{n}}}$ are contiguous (see [ROU 72] for a definition) and behave similarly at infinity, so that $P_{n,\frac{C}{\sqrt{n}}}(T_n \ne 0) \longrightarrow 0$ as well. Consequently, for all $C > 0$,
$$\lim_{n \to \infty} \sup_{|\vartheta| < \frac{C}{\sqrt{n}}} E_\vartheta\left( n (T_n - \vartheta)^2 \right) \ge C^2$$
and
$$\lim_{C \to \infty} \lim_{n \to \infty} \sup_{|\vartheta| < \frac{C}{\sqrt{n}}} E_\vartheta\left( n (T_n - \vartheta)^2 \right) = \infty,$$
which shows that the Hodges sequence is clearly not asymptotically efficient.
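The blow-up of this local minimax risk can also be seen numerically. The following R sketch estimates the risk of the Hodges estimator (with α = 0) at the edge point ϑ = C/√n, which lower-bounds the supremum over |ϑ| ≤ C/√n; the values of n, M and the grid of C values are arbitrary choices.

set.seed(1)
n <- 1e6; M <- 1e4

# risk of the Hodges estimator (alpha = 0) at theta = C/sqrt(n),
# following the lower bound derived in example 3.2
local_risk <- function(C) {
  theta <- C / sqrt(n)
  xbar  <- rnorm(M, mean = theta, sd = 1 / sqrt(n))  # law of the empirical mean
  Tn    <- ifelse(abs(xbar) > n^(-1/4), xbar, 0)     # Hodges with alpha = 0
  mean(n * (Tn - theta)^2)
}

sapply(c(1, 5, 10, 20), local_risk)  # grows like C^2: the local risk blows up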
3.1. Likelihood ratio, local asymptotic properties of the likelihoods and the van Trees inequality

Let $\Theta$ be an open subset of $\mathbb{R}^p$ and let $\nu_n$ be a σ-finite measure on $\bigotimes_{i=1}^{n} \mathcal{B}$. The statistical experiment
$$\left( \prod_{i=1}^{n} \mathbb{R}, \ \bigotimes_{i=1}^{n} \mathcal{B}, \ (P_{n,\vartheta},\ \vartheta \in \Theta) \right), \quad n \ge 1, \qquad [3.7]$$
dominated by $\nu_n$ with Radon–Nikodym derivatives
$$\frac{dP_{n,\vartheta}}{d\nu_n}(x) = f_n(\vartheta, x), \quad x \in \prod_{i=1}^{n} \mathbb{R},$$
is considered. Let us consider the sequences of non-degenerate matrices $(\varphi(n, \vartheta),\ n \ge 1)$ such that, for all $\vartheta \in \Theta$, the minimum eigenvalue $\lambda_*(n, \vartheta)$ increases to $\infty$ when $n \to \infty$. We define, for $u \in U_{n,\vartheta} = \left\{ v \in \mathbb{R}^p ;\ \vartheta + \varphi(n, \vartheta)^{-1} v \in \Theta \right\}$, the likelihood ratio associated with the sequence $(\varphi(n, \vartheta),\ n \ge 1)$ by
$$Z_{n,\vartheta}(u) : x \longmapsto \frac{dP_{n, \vartheta + \varphi(n,\vartheta)^{-1} u}}{dP_{n,\vartheta}}(x). \qquad [3.8]$$
Here, for $x \in \prod_{i=1}^{n} \mathbb{R}$,
$$\frac{dP_{n, \vartheta + \varphi(n,\vartheta)^{-1} u}}{dP_{n,\vartheta}}(x) = \frac{f_n(\vartheta + \varphi(n,\vartheta)^{-1} u, x)}{f_n(\vartheta, x)} \, \mathbb{1}_{\{f_n(\vartheta,x) > 0\}} + \infty \cdot \mathbb{1}_{\{f_n(\vartheta,x) = 0\}}. \qquad [3.9]$$
For a homogeneous statistical experiment, the likelihood ratio can be reduced to
$$Z_{n,\vartheta}(u) : x \longmapsto \frac{f_n(\vartheta + \varphi(n,\vartheta)^{-1} u, x)}{f_n(\vartheta, x)} \, \mathbb{1}_{\{f_n(\vartheta,x) > 0\}}, \quad x \in \prod_{i=1}^{n} \mathbb{R}.$$
3.1.1. Local asymptotic (mixing) normal property of the likelihoods
A statistical experiment satisfies the local asymptotic mixing normal (LAMN) property of the likelihoods at $\vartheta \in \Theta$ with rate $(\varphi(n, \vartheta),\ n \ge 1)$ if there is a symmetric positive definite (possibly random) matrix $I(\vartheta)$ such that
$$Z_{n,\vartheta}(u) = \exp\left( u^* \zeta_n - \frac{1}{2} u^* I_n(\vartheta) u + r_n(u, \vartheta) \right) \qquad [3.10]$$
with, as $n \to \infty$,
$$r_n(u, \vartheta) \longrightarrow 0 \quad \text{and} \quad (\zeta_n, I_n(\vartheta)) \longrightarrow (\zeta, I(\vartheta))$$
in law under $P_{n,\vartheta}$. Moreover, conditionally on $I(\vartheta)$, the random vector $\zeta$ is centered Gaussian with variance $I(\vartheta)$. When $I(\vartheta)$ is deterministic in [3.10], the statistical experiment satisfies the local asymptotic normal (LAN) property of the likelihoods at $\vartheta \in \Theta$. Heuristically, statistical experiments which satisfy the local asymptotic normality property of the likelihoods at $\vartheta \in \Theta$ behave locally and asymptotically like the Gaussian shift experiment, as seen in the next example.

EXAMPLE 3.3.– Let us consider the statistical experiment of observing a sample of independent and identically distributed Gaussian random variables of mean $\vartheta \in (a, b) \subset \mathbb{R}$ and variance 1 described in example 2.1 (Gaussian shift experiment). In this statistical experiment,
$$\frac{dP_{n,\vartheta}}{d\nu^{\otimes n}}(x) = (2\pi)^{-\frac{n}{2}} \exp\left( -\frac{1}{2} \sum_{i=1}^{n} (x_i - \vartheta)^2 \right), \quad x = (x_1, \dots, x_n) \in \prod_{i=1}^{n} \mathbb{R}, \quad \vartheta \in \Theta,$$
and, consequently,
$$\log \frac{dP_{n, \vartheta + \frac{u}{\sqrt{n}}}}{dP_{n,\vartheta}}(x) = -\frac{1}{2} \sum_{i=1}^{n} \left( \left( x_i - \vartheta - \frac{u}{\sqrt{n}} \right)^2 - (x_i - \vartheta)^2 \right) = u \cdot \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (x_i - \vartheta) - \frac{u^2}{2}.$$
Here, $I(\vartheta) = 1$ and $\left( \mathcal{L}\left( \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (X_i - \vartheta) \ \middle|\ P_{n,\vartheta} \right),\ n \ge 1 \right)$ converges weakly to the Gaussian distribution $\mathcal{N}(0, I(\vartheta))$ by the classical central limit theorem. It is worth emphasizing that there is no remainder term in the LAN expansion [3.10] for the Gaussian shift experiment. Different statistical experiments which satisfy the local asymptotic normality property of the likelihoods are presented in section 3.2.

3.1.2. The van Trees inequality

This section outlines the two main theorems for determining an asymptotic van Trees inequality and defines the notion of efficiency in statistical experiments. They are due to Le Cam and Jeganathan.

THEOREM 3.1.– For a statistical experiment satisfying the LAMN property of the likelihoods at the point $\vartheta_0$, we have the following asymptotic van Trees inequality, for any $(\varphi(n, \vartheta_0))_n$-consistent sequence of estimators $(T_n,\ n \ge 1)$:
$$\liminf_{C \to \infty} \liminf_{n \to \infty} \sup_{|\vartheta - \vartheta_0| < C \varphi(n, \vartheta_0)^{-1}} E_\vartheta\left( \ell\left( \varphi(n, \vartheta_0) (T_n - \vartheta) \right) \right) \ge E\left( \ell\left( I(\vartheta_0)^{-\frac{1}{2}} \xi \right) \right) \qquad [3.11]$$
where ξ is a standard Gaussian random variable independent of I(ϑ0 ). Proof. A general proof can be found in [JEG 83]. The proof for the LAN setting can be found in [HÁJ 72] or [LEC 72]. In the following, the LAN property of the likelihoods is shown for different statistical experiments: statistical experiments generated by a sample
of independent and identically distributed random variables, a sample of independent but non-homogeneous random variables, a homogeneous ergodic Markov chain and a sample of dependent Gaussian random variables.

3.2. LAN property for different statistical experiments

3.2.1. Statistical experiments generated by a sample of independent and identically distributed random variables

Let $\Theta$ be an open subset of $\mathbb{R}$ and $\nu$ be a σ-finite measure on $\mathcal{B}$. The statistical experiment generated by a sample of independent and identically distributed random variables is considered and is defined by
$$\left( \prod_{i=1}^{n} \mathbb{R}, \ \bigotimes_{i=1}^{n} \mathcal{B}, \ (P_{n,\vartheta},\ \vartheta \in \Theta) \right), \quad n \ge 1,$$
with $P_{n,\vartheta} = \bigotimes_{i=1}^{n} P_\vartheta$ and Radon–Nikodym derivatives
$$\frac{dP_\vartheta}{d\nu}(x) = f(\vartheta, x), \quad x \in \mathbb{R}, \quad \vartheta \in \Theta.$$
Let us consider the dimension one case $p = 1$. Under the regularity conditions of theorem 2.2, the local asymptotic normality of the likelihoods holds at $\vartheta \in \Theta$ with rate $\varphi(n) = \sqrt{n}$, and the Fisher information $I(\vartheta)$ is defined by [2.18]. Indeed, the Taylor expansion of $\log f\left( \vartheta + \frac{u}{\sqrt{n}}, x \right)$ gives
$$\log f\left( \vartheta + \frac{u}{\sqrt{n}}, x \right) = \log f(\vartheta, x) + \frac{u}{\sqrt{n}} \frac{\partial}{\partial \vartheta} \log f(\vartheta, x) + \frac{u^2}{2n} \frac{\partial^2}{\partial \vartheta^2} \log f(\vartheta_n^*, x)$$
for $x \in \mathbb{R}$ and some $\vartheta < \vartheta_n^* < \vartheta + \frac{u}{\sqrt{n}}$, say. Consequently, for $x = (x_1, \dots, x_n) \in \prod_{i=1}^{n} \mathbb{R}$,
$$\log \frac{dP_{n, \vartheta + \frac{u}{\sqrt{n}}}}{dP_{n,\vartheta}}(x) = \log \frac{f_n\left( \vartheta + \frac{u}{\sqrt{n}}, x \right)}{f_n(\vartheta, x)} = \frac{u}{\sqrt{n}} \sum_{i=1}^{n} \frac{\partial}{\partial \vartheta} \log f(\vartheta, x_i) - \frac{u^2}{2} I(\vartheta) + r_n(u, \vartheta)$$
with
$$r_n(u, \vartheta) = \frac{u^2}{2} \left( \frac{1}{n} \sum_{i=1}^{n} \frac{\partial^2}{\partial \vartheta^2} \log f(\vartheta_n^*, x_i) + I(\vartheta) \right).$$
On the one hand, the classical central limit theorem gives that the sequence of distributions $\left( \mathcal{L}\left( \frac{1}{\sqrt{n}} \sum_{i=1}^{n} S_\vartheta(x_i) \ \middle|\ P_{n,\vartheta} \right),\ n \ge 1 \right)$ converges weakly to the Gaussian distribution $\mathcal{N}(E_\vartheta(S_\vartheta), \operatorname{Var}_\vartheta(S_\vartheta)) = \mathcal{N}(0, I(\vartheta))$. On the other hand, the convergence to zero in probability of the remainder can be shown using the uniform law of large numbers [BIL 95]; it can be noted that convergence in probability and convergence in law toward a constant are equivalent, and the local asymptotic normality property of the likelihoods is proved.

The local asymptotic normality can also be stated for a family of probability measures $(P_\vartheta,\ \vartheta \in \Theta)$ dominated by $\nu$ and $L^2(\nu)$-differentiable, i.e. the function $g(u, x) = \sqrt{f(u, x)}$ is differentiable at the point $u = \vartheta$ in $L^2(\nu)$ for all $\vartheta \in \Theta$, namely there is a function $\psi : \Theta \times \mathbb{R} \longrightarrow \mathbb{R}$ such that, for all $\vartheta \in \Theta$,
$$\int \psi^2(\vartheta, x) \, \nu(dx) < \infty \qquad [3.12]$$
and
$$\lim_{h \to 0,\ \vartheta + h \in \Theta} \int \left( \frac{g(\vartheta + h, x) - g(\vartheta, x)}{h} - \psi(\vartheta, x) \right)^2 \nu(dx) = 0. \qquad [3.13]$$
The reader can refer to Appendix 2 for properties and examples. It is also assumed that the $L^2(\nu)$-derivative of $\sqrt{f}$, denoted by $\psi$, is continuous with respect to $\vartheta$ in $L^2(\nu)$ for all $\vartheta \in \Theta$. The following result is known as the second Le Cam lemma.

THEOREM 3.2.– For a regular independent and identically distributed statistical experiment and under the previous assumptions, the LAN property
of the likelihoods holds at $\vartheta \in \Theta$ with rate $\varphi(n) = \sqrt{n}$ and the Fisher information $I(\vartheta)$ is defined by
$$I(\vartheta) = 4 \int \psi^2(\vartheta, x) \, \nu(dx). \qquad [3.14]$$
Proof. The proof can be found in [IBR 81]. □
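The formula [3.14] can be checked numerically. The following R sketch approximates the $L^2(\nu)$-derivative ψ by a finite difference and computes $4\int \psi^2$ for the Gaussian location model N(ϑ, 1), an arbitrary choice of regular model whose Fisher information equals 1; the step h is an arbitrary choice.

# Numerical check of I(theta) = 4 * integral(psi^2) [3.14] for the
# N(theta, 1) location model, with psi approximated by a central
# finite difference of sqrt(f(theta, x)) in theta.
fisher_info <- function(theta, h = 1e-5) {
  psi <- function(x) {
    (sqrt(dnorm(x, mean = theta + h)) - sqrt(dnorm(x, mean = theta - h))) / (2 * h)
  }
  4 * integrate(function(x) psi(x)^2, lower = -Inf, upper = Inf)$value
}
fisher_info(0)    # close to 1, the Fisher information of N(theta, 1)
fisher_info(2.5)  # location model: the information does not depend on theta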
It is worth mentioning that, for a regular positive function $f$, the function $\psi(\vartheta, \cdot)$ coincides with
$$x \longmapsto \frac{\partial}{\partial \vartheta} \sqrt{f(\vartheta, x)}$$
for $\nu$-almost all $x \in \mathbb{R}$.
3.2.2. Statistical experiments generated by a sample of independent random variables

Let $\nu$ be a σ-finite measure on $\mathcal{B}$, $\Theta$ be an open subset of $\mathbb{R}$ and
$$\left( P_\vartheta^i,\ \vartheta \in \Theta \right), \quad i \ge 1,$$
be a sequence of families of probability measures dominated by $\nu$. We consider the statistical experiment generated by a sample of independent random variables defined by
$$\left( \prod_{i=1}^{n} \mathbb{R}, \ \bigotimes_{i=1}^{n} \mathcal{B}, \ (P_{n,\vartheta},\ \vartheta \in \Theta) \right), \quad n \ge 1,$$
with $P_{n,\vartheta} = \bigotimes_{i=1}^{n} P_\vartheta^i$. Let us denote, for $i \ge 1$, the Radon–Nikodym derivatives
$$\frac{dP_\vartheta^i}{d\nu}(x) = f^i(\vartheta, x), \quad x \in \mathbb{R}, \quad \vartheta \in \Theta.$$
As in the previous section, we suppose that the functions $g_i(u, x) = \sqrt{f^i(u, x)}$, $i \ge 1$, are differentiable at the point $u = \vartheta$ in $L^2(\nu)$ for all $\vartheta \in \Theta$, i.e. there are functions $\psi_i : \Theta \times \mathbb{R} \longrightarrow \mathbb{R}$ such that, for all $\vartheta \in \Theta$,
$$\int \psi_i^2(\vartheta, x) \, \nu(dx) < \infty \qquad [3.15]$$
and
$$\lim_{h \to 0,\ \vartheta + h \in \Theta} \int \left( \frac{g_i(\vartheta + h, x) - g_i(\vartheta, x)}{h} - \psi_i(\vartheta, x) \right)^2 \nu(dx) = 0. \qquad [3.16]$$
The derivatives (in the mean square sense), denoted by $\psi_i(\vartheta, x)$, are assumed to be continuous with respect to $\vartheta$ in $L^2(\nu)$ for all $\vartheta \in \Theta$. For all $i \ge 1$, we define
$$I_i(\vartheta) = 4 \int \psi_i^2(\vartheta, x) \, \nu(dx)$$
and
$$\Psi^2(n, \vartheta) = \sum_{i=1}^{n} I_i(\vartheta),$$
which are finite due to [3.15]. Moreover, we assume that:

1) for all $\vartheta \in \Theta$ and $n \ge 1$,
$$\Psi^2(n, \vartheta) > 0; \qquad [3.17]$$

2) for any $k > 0$ and $u$ such that $|u| < k$ and $\vartheta + u \Psi^{-1}(n, \vartheta) \in \Theta$,
$$\lim_{n \to \infty} \sup_{|u| < k} \sum_{i=1}^{n} \int \left( \psi_i\left( \vartheta + u \Psi^{-1}(n, \vartheta), x \right) - \psi_i(\vartheta, x) \right)^2 \nu(dx) = 0; \qquad [3.18]$$

3) Lindeberg's condition is satisfied: for every $\varepsilon > 0$,
$$\lim_{n \to \infty} \frac{1}{\Psi^2(n, \vartheta)} \sum_{i=1}^{n} E_\vartheta\left( \left( \frac{\psi_i(\vartheta, X_i)}{\sqrt{f^i(\vartheta, X_i)}} \right)^2 \mathbb{1}_{\left\{ \left| \frac{\psi_i(\vartheta, X_i)}{\sqrt{f^i(\vartheta, X_i)}} \right| > \varepsilon \Psi(n, \vartheta) \right\}} \right) = 0. \qquad [3.19]$$
THEOREM 3.3.– For regular independent statistical experiments and under the previous assumptions, the LAN property of the likelihoods holds at $\vartheta \in \Theta$ with rate $\varphi(n, \vartheta) = \Psi(n, \vartheta)$.

Proof. The proof can be found in [IBR 81]. □
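To fix ideas on the rate $\Psi(n, \vartheta)$, consider the following R sketch for a hypothetical regression-type model $X_i \sim \mathcal{N}(\vartheta t_i, 1)$, not taken from this book, for which $I_i(\vartheta) = t_i^2$ and hence $\Psi^2(n, \vartheta) = \sum_{i=1}^n t_i^2$; the design $(t_i)$ is an arbitrary choice, made here so that the rate differs from $\sqrt{n}$.

# X_i ~ N(theta * t_i, 1): the Fisher information of observation i is
# t_i^2, so Psi^2(n, theta) = sum(t_i^2) and Psi does not depend on theta.
n   <- 1000
t_i <- sqrt(1:n)          # design points growing with i
Psi <- sqrt(sum(t_i^2))   # the local scale is theta + u / Psi
c(Psi = Psi, sqrt_n = sqrt(n))  # here Psi is of order n, faster than sqrt(n)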
3.2.3. Statistical experiments generated by a homogeneous Markov chain

Let $\nu$ be a σ-finite measure on $\mathcal{B}$ and $\Theta$ be an open subset of $\mathbb{R}$. The observations $(X_0, X_1, \dots, X_n)$ are the first elements of a homogeneous Markov chain that is strictly stationary and ergodic (see [ROU 72]), whose law depends on a parameter $\vartheta \in \Theta$. We denote by $p(x; \vartheta)$ the initial distribution (and stationary distribution) and by $p(y; x, \vartheta)$ the transition probability density function of the Markov chain, which are absolutely continuous with respect to $\nu$ for all $\vartheta \in \Theta$. Then, we consider the statistical experiment
$$\left( \prod_{i=1}^{n} \mathbb{R}, \ \bigotimes_{i=1}^{n} \mathcal{B}, \ (P_{n,\vartheta},\ \vartheta \in \Theta) \right), \quad n \ge 1.$$
Let us denote by
$$\frac{dP_{2,\vartheta}}{d\nu^{\otimes 2}} = p(y; x, \vartheta) \, p(x; \vartheta)$$
the Radon–Nikodym derivative of $P_{2,\vartheta}$ with respect to $\nu^{\otimes 2}$. We suppose this function to be continuous on $\Theta$ for $\nu^{\otimes 2}$-almost all $z = (x, y)$. Moreover, the function $g(u, z) = \sqrt{p(y; x, u)}$ is supposed to be differentiable at the point $u = \vartheta$ in $L^2(p(\cdot\,; \vartheta)\nu^{\otimes 2})$ for all $\vartheta \in \Theta$, i.e. there is a function $\psi : \Theta \times \mathbb{R}^2 \longrightarrow \mathbb{R}$ such that, for all $\vartheta \in \Theta$,
$$\int_{\mathbb{R}^2} \psi^2(\vartheta, z) \, p(x; \vartheta) \, \nu^{\otimes 2}(dz) < \infty \qquad [3.20]$$
and
$$\lim_{h \to 0,\ \vartheta + h \in \Theta} \int_{\mathbb{R}^2} \left( \frac{g(\vartheta + h, z) - g(\vartheta, z)}{h} - \psi(\vartheta, z) \right)^2 p(x; \vartheta) \, \nu^{\otimes 2}(dz) = 0. \qquad [3.21]$$
Finally, the function ψ is assumed to be continuous with respect to ϑ in L2 (ν ⊗2 ) for all ϑ ∈ Θ.
In the following, the Fisher information is denoted by
$$I(\vartheta) = 4 \int_{\mathbb{R}^2} \psi^2(\vartheta, z) \, p(x; \vartheta) \, \nu^{\otimes 2}(dz), \qquad [3.22]$$
which is finite due to [3.20].

THEOREM 3.4.– For homogeneous Markov chain experiments and under the previous assumptions, the LAN property of the likelihoods holds at $\vartheta \in \Theta$ with rate $\varphi(n) = \sqrt{n}$ and the Fisher information is defined in [3.22].

Proof. The proof can be found in [ROU 72]. □
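As an illustration, consider a stationary Gaussian AR(1) chain $X_{k+1} = \vartheta X_k + \varepsilon_{k+1}$, an arbitrary choice of ergodic homogeneous Markov chain not taken from this book. For this chain, the score of the transition density is $X_k(X_{k+1} - \vartheta X_k)$ and the Fisher information [3.22] reduces to $E(X^2) = 1/(1 - \vartheta^2)$, which the following R sketch estimates empirically.

# Stationary Gaussian AR(1): the Fisher information of the transition
# is estimated by the empirical mean of the squared score.
set.seed(1)
theta <- 0.6; n <- 1e5
x <- as.numeric(arima.sim(model = list(ar = theta), n = n))
score <- x[-n] * (x[-1] - theta * x[-n])   # d/dtheta log p(X_{k+1}; X_k, theta)
c(mean(score^2), 1 / (1 - theta^2))        # both close to I(theta)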
3.2.4. Fractional Gaussian noise statistical experiment

This section deals with the example of a statistical experiment generated by a dependent Gaussian sample. Depending on the observation scheme, the statistical experiment is regular or singular, but both schemes satisfy the LAN property of the likelihoods.

EXAMPLE 3.4.– Let us consider example 2.10, which describes the statistical experiment of discrete observations of fractional Gaussian noise with mesh $\Delta_n$. The parameter $\vartheta = (H, \sigma)$ is to be estimated. When $\Delta_n = \Delta > 0$ is fixed, the LAN property of the likelihoods has been established in [COH 13]. The rate is $\varphi(n) = \sqrt{n} \, I_2$ with a non-degenerate Fisher information matrix. For $\Delta_n \longrightarrow 0$, the joint estimation is singular. However, the LAN property of the likelihoods has also been established recently in [BRO 17] with a family of non-diagonal rate matrices $\varphi(n, \vartheta)$ depending on the parameter $\sigma$, namely
$$\liminf_{n \to \infty} E_{(H,\sigma)}\left( \ell\left( \varphi_n(\vartheta) \begin{pmatrix} \widetilde{H}_n - H \\ \widetilde{\sigma}_n - \sigma \end{pmatrix} \right) \right) \ge E\left( \ell\left( I(\vartheta_0)^{-\frac{1}{2}} \xi \right) \right)$$
for proper cost functions $\ell$ and any $(\varphi_n(\vartheta))_n$-consistent sequence of estimators $((\widetilde{H}_n, \widetilde{\sigma}_n)^*,\ n \ge 1)$ of $H$ and $\sigma$. By taking
$$\varphi(n, \vartheta) = \sqrt{n} \begin{pmatrix} 1 & 0 \\ \sigma \log \Delta_n & 1 \end{pmatrix}$$
and $\ell((x, y)^*) = x^2$, we get, for instance, the asymptotic lower bound of $n E_{(H,\sigma)}[(\widetilde{H}_n - H)^2]$. This shows that the efficient rate is $\sqrt{n}$ and gives the efficient variance for the estimation of $H$ (when $H$ and $\sigma$ are unknown) on the one hand. For
$$\varphi(n, \vartheta) = -\frac{\sqrt{n}}{\sigma \log \Delta_n} \begin{pmatrix} -\sigma \log \Delta_n & -1 \\ 0 & 1 \end{pmatrix}$$
and $\ell((x, y)^*) = y^2$, we get the asymptotic lower bound of $\frac{n}{(\log \Delta_n)^2} E_{(H,\sigma)}[(\widetilde{\sigma}_n - \sigma)^2]$. It can be seen that the efficient rate is $\frac{\sqrt{n}}{\log |\Delta_n|}$ and the efficient variance is computable for the estimation of $\sigma$ (when $H$ and $\sigma$ are unknown) on the other hand.
3.3. Asymptotic efficiency of some sequences of estimators

For a statistical experiment satisfying the LAN property of the likelihoods and under proper assumptions (that are not detailed in this book; see [IBR 81] for precise statements), the sequences of maximum likelihood and Bayesian estimators are asymptotically normal and efficient. These assumptions can be checked for different statistical experiments. Statistical experiments generated by an independent and identically distributed sample or by an independent sample are treated in [IBR 81]. Consistency, asymptotic normality and efficiency of the sequence of maximum likelihood estimators for some homogeneous Markov chains are mentioned in [VER 98] and the references therein. For the statistical experiment generated by a dependent Gaussian sample described in example 3.4, asymptotic efficiency of the sequence of maximum likelihood estimators is proved in [DAH] for the large-sample observation scheme and in [BRO 17] for the high-frequency observation scheme.
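These efficiency statements can be illustrated by simulation. The following R sketch checks by Monte Carlo that the maximum likelihood estimator attains the Cramér–Rao-type lower bound in a regular experiment; an exponential sample with rate ϑ is assumed (an arbitrary choice), for which the MLE is $1/\overline{X}_n$ and $I(\vartheta) = 1/\vartheta^2$, and the values of ϑ, n and M are arbitrary choices.

# Monte Carlo check of asymptotic efficiency of the MLE for an
# exponential sample with rate theta: n * Var(MLE) should approach
# I(theta)^{-1} = theta^2.
set.seed(1)
theta <- 2; n <- 1e3; M <- 1e4
mle <- replicate(M, 1 / mean(rexp(n, rate = theta)))
c(n * var(mle), theta^2)   # empirical n * Var vs the bound I(theta)^{-1}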