Journal of Econometrics 45 (1990) 367-384. North-Holland

BOUNDS FOR EXACT MOMENTS OF ESTIMATORS IN THE ERRORS-IN-VARIABLES MODEL AND SIMULTANEOUS EQUATIONS

Ralph FRIEDMANN*

Universität Bielefeld, 4800 Bielefeld, West Germany

Received September 1988, final version received March 1989
This paper presents some useful inequalities for confluent hypergeometric functions. These inequalities are applied to the finite-sample bias and mean square error of the least squares estimator of the slope coefficient in a simple linear functional errors-in-variables model. Using the fact that both the ordinary and two-stage least squares estimators of a structural coefficient in a simultaneous equation model can be considered equivalently as least squares estimators in appropriately defined errors-in-variables models, we evaluate bounds for the exact bias and mean square error of these estimators as well.
1. Introduction

In this paper we consider finite-sample moments of the least squares estimator of the slope coefficient in a simple linear functional errors-in-variables model and of the ordinary least squares (OLS) and two-stage least squares (TSLS) estimators of a structural coefficient in a simultaneous equation model. The structural equation being estimated is assumed to include two endogenous variables, and the number of included or excluded exogenous variables as well as the number of equations are restricted only by the condition that the equation is over-identified by zero restrictions on the structural coefficients. Richardson and Wu (1970) derived and analyzed the exact distribution of the least squares estimator in the errors-in-variables model. They expressed the bias and mean square error in terms of the confluent hypergeometric function. Similar results were obtained for the OLS and TSLS estimators in simultaneous equations by several authors, such as Sawa (1968, 1972), Richardson (1968), and Richardson and Wu (1971). In fact, the distribution functions of the OLS and TSLS estimators have the same form as the distribution of the least squares estimator in the errors-in-variables model. Anderson (1976) clarified the connections of the estimation of linear functional relationships with structural coefficient estimation in simultaneous equations. In particular he showed that

*I am indebted to an anonymous referee for very helpful suggestions which led to a significant revision and extension of an earlier version of this paper.

0304-4076/90/$3.50 © 1990, Elsevier Science Publishers B.V. (North-Holland)
for a linear functional relationship that is satisfied by true reduced form coefficients, least squares estimation using the 'observable' estimates of the reduced form coefficients is equivalent to TSLS estimation in the simultaneous econometric model. In section 4 we shall consider differently defined errors-in-variables models with the least squares estimator being equivalent to the OLS estimator in one case and to the TSLS estimator in the other case.

The exact results for the considered estimators are not very tractable. The common strategy is to derive asymptotic expansions of the exact distributions to obtain more insight into the properties of the estimator from approximate results for large sample size, large values of the noncentrality parameter, or a large number of excluded exogenous variables [see, for example, Richardson and Wu (1970), Sawa (1972), and Kunitomo (1980)]. In this paper the exact results will be used in an alternative way. The analysis aims at simple inequalities for the exact moments instead of asymptotic expansions. The derived bounds for the bias and mean square error are based on the monotonicity of appropriate functions involving the hypergeometric function.

One result concerning the least squares estimator in the errors-in-variables model is that, if N > 2 holds for the sample size N, the ratio of the exact small-sample bias to the well-known asymptotic bias lies between (N − 3)/(N − 1) and one. This ratio is proved to be a monotonically decreasing function of the noncentrality parameter τ. For large values of the noncentrality parameter [τ ≥ (N − 3)/2 > 0] the upper bound for the finite-sample to asymptotic bias ratio can be improved such that it is restricted to a narrow band above (N − 3)/(N − 1). The bounds for the mean square error are of a similar nature but slightly more complicated.

The article is organized as follows.
The linear functional errors-in-variables model and the expressions for the exact moments of the least squares estimator are stated in section 2. The absolute value of the relative bias is proved to be a monotonically increasing function of the sample size N. The formulas for the exact bias and the mean square error of the least squares estimator are transferred from the case of uncorrelated errors with equal variances to the general case of normal errors with arbitrary covariance matrix. In section 3 we derive the inequalities for hypergeometric functions and evaluate the bounds for the exact bias and mean square error of the least squares estimator in the errors-in-variables model. An extension of the results to the OLS estimator and TSLS estimator of a structural coefficient in a simultaneous equation model is given in section 4.

2. The errors-in-variables model

The simple linear functional errors-in-variables model is given by the linear equation relating the unobservable, nonstochastic variables η_i and ξ_i,

    η_i = α + βξ_i,        i = 1, ..., N,        (1)
and the observation equations

    y_i = η_i + ε_i,
    x_i = ξ_i + δ_i,        i = 1, ..., N,   N ≥ 2.        (2)
For the random measurement errors ε_i and δ_i, i = 1, ..., N, it is assumed that (ε_i, δ_i) are independently and identically distributed two-dimensional normal variates with mean zero and positive definite variance-covariance matrix

    Σ = ( σ11  σ12 )
        ( σ12  σ22 ).        (3)
The sample size is N, and so the sample of observations (y_i, x_i) consists of N independent drawings from bivariate normal distributions with mean vectors (α + βξ_i, ξ_i)', i = 1, ..., N, and covariance matrix Σ. The substitution of the eqs. (2) into (1) yields

    y_i = α + βx_i + u_i,        i = 1, ..., N,        (4)

where the disturbance is u_i = ε_i − βδ_i. Because of the dependence between u_i and x_i, the classical least squares estimator b,

    b = Σ_i (x_i − x̄)(y_i − ȳ) / Σ_i (x_i − x̄)²,        (5)

where x̄ = Σ_i x_i/N and ȳ = Σ_i y_i/N are the sample means, is biased.

2.1. Reduction to a scalar covariance matrix
In order to simplify the exposition in the following, we perform a linear transformation of the observation eqs. (2) in such a way that the covariance matrix (3) is reduced to a scalar matrix. Let θ denote the regression coefficient of ε_i on δ_i and ω² the ratio of the conditional variance of ε_i given δ_i to the variance of δ_i,

    θ = σ12/σ22,        ω = [(σ11σ22 − σ12²)/σ22²]^(1/2).        (6)

We make a nonsingular linear transformation such that y_i, η_i, ε_i are transformed
into

    y_i* = (y_i − θx_i)/ω,
    η_i* = (η_i − θξ_i)/ω,        i = 1, ..., N,        (7)
    ε_i* = (ε_i − θδ_i)/ω,

and x_i, ξ_i, δ_i remain unchanged. Using the transformed variables we have an equivalent expression of the errors-in-variables model (1), (2):

    η_i* = α* + β*ξ_i,
    y_i* = η_i* + ε_i*,        i = 1, ..., N,        (8)
    x_i = ξ_i + δ_i,

where (ε_i*, δ_i), i = 1, ..., N, are independently and identically distributed two-dimensional normal variates with mean zero and scalar covariance matrix σ22·I₂, and

    α* = α/ω,        β* = (β − θ)/ω.        (9)
The least squares estimator b* of β* in the transformed model (8) is related to the least squares estimator b in the original model in the same way as β* is related to β by (9), that is,

    b = θ + ωb*.        (10)

Therefore, assuming β* ≠ 0, the bias of b can be expressed as

    E(b) − β = (β − θ) [E(b*) − β*]/β*,        (11)

and the mean square error of b can be expressed as

    E(b − β)² = (β − θ)² E(b* − β*)²/β*².        (12)
Using (11) and (12), expressions for the exact relative bias and the relative mean square error of the least squares estimator b* under the restrictive assumption of uncorrelated errors with equal variances can immediately be transferred into expressions for the bias and mean square error in the general case with arbitrary error covariance matrix Σ.¹

¹Halperin and Gurian (1971) and Mittag (1987) generalized the results by Richardson and Wu to the case of correlated errors using different methods; their results are equivalent to those implied by (11), (12), (13), and (19).
2.2. Exact moments of the least squares estimator

Richardson and Wu (1970) investigated the small-sample properties of the least squares estimator b*. For the relative bias they derived the expression

    [E(b*) − β*]/β* = −exp(−z) 1F1(n/2 − 1, n/2, z),        (13)

where 1F1(a, b, z) denotes the confluent hypergeometric function [see Slater (1960, p. 2)],

    n = N − 1,        (14)

and

    z = Σ_i (ξ_i − ξ̄)²/(2σ22) = (n/2)τ,        τ = Σ_i (ξ_i − ξ̄)²/(nσ22).        (15)
Notice that the noncentrality parameter τ can be interpreted as the signal-to-noise variance ratio of the independent variable x_i = ξ_i + δ_i. The relative bias of the least squares estimator b* in the errors-in-variables model with uncorrelated errors lies between zero and minus one, and for large samples it can be approximated by

    [E(b*) − β*]/β* = −[1/(1 + τ)][1 − 2τ²/(n(1 + τ)²)] + O(n⁻²),        (16)

which implies the well-known expression for the asymptotic relative bias

    lim_{n→∞} [E(b*) − β*]/β* = −1/(1 + τ).        (17)
Based on numerical computations, Richardson and Wu made the conjecture that the absolute value of the relative bias is a monotonically increasing function of n. In fact this conjecture, which obviously holds to the order of approximation by (16), is also true for the exact bias, because for

    h(a) = exp(−aτ) 1F1(a − 1, a, aτ),

one obtains

    ∂h/∂a > 0        if   a > 0.        (18)

For the proof of (18) see appendix A.
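The exact expression (13) and the monotonicity result (18) are easy to check numerically. The following sketch uses Python with scipy's confluent hypergeometric function `hyp1f1` (an implementation choice, not part of the paper); the values of n and τ are arbitrary illustrations:

```python
# Absolute value of the exact relative bias, eqs. (13)-(15):
# h(n/2) = exp(-z) 1F1(n/2 - 1, n/2, z), with z = (n/2) tau.
import numpy as np
from scipy.special import hyp1f1

def abs_relative_bias(n, tau):
    """|E(b*) - beta*|/|beta*| for n = N - 1 and signal-to-noise ratio tau."""
    z = 0.5 * n * tau
    return np.exp(-z) * hyp1f1(0.5 * n - 1.0, 0.5 * n, z)

if __name__ == "__main__":
    # For fixed tau the absolute relative bias increases with n, cf. (18),
    # and stays below the asymptotic value 1/(1 + tau), cf. (17).
    tau = 1.0
    for n in (4, 8, 16, 32, 64):
        print(n, abs_relative_bias(n, tau))
```

For τ = 1 the printed values increase monotonically with n while remaining below the asymptotic relative bias 1/(1 + τ) = 0.5, in line with (17) and (18).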
The relative mean square error of b* was derived by Richardson and Wu as

    E(b* − β*)²/β*² = [1 + (1/β*)²] [1/(n − 2)] exp(−z) 1F1(n/2 − 1, n/2, z)
                      + [(n − 3)/(n − 2)] exp(−z) 1F1(n/2 − 2, n/2, z).        (19)
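A simple Monte Carlo experiment on the transformed model (8) can serve as a plausibility check on the hypergeometric expressions (13) and (19). The sketch below (Python with numpy/scipy; the design N = 14, τ = 1, β* = 1 and the seed are arbitrary choices, not values from the paper) compares simulated moments of b* with the exact formulas:

```python
# Monte Carlo check of the exact relative bias (13) and relative MSE (19)
# in the errors-in-variables model (8) with sigma22 = 1.
import numpy as np
from scipy.special import hyp1f1

def exact_relative_bias(n, tau):
    z = 0.5 * n * tau
    return -np.exp(-z) * hyp1f1(0.5 * n - 1.0, 0.5 * n, z)

def exact_relative_mse(n, tau, beta_star):
    z = 0.5 * n * tau
    F1 = np.exp(-z) * hyp1f1(0.5 * n - 1.0, 0.5 * n, z)  # k = 1 term
    F2 = np.exp(-z) * hyp1f1(0.5 * n - 2.0, 0.5 * n, z)  # k = 2 term
    return (1.0 + beta_star ** -2) * F1 / (n - 2.0) + (n - 3.0) / (n - 2.0) * F2

def simulate(N=14, tau=1.0, beta_star=1.0, reps=200_000, seed=0):
    n = N - 1
    rng = np.random.default_rng(seed)
    xi = np.arange(N, dtype=float)
    xi -= xi.mean()
    xi *= np.sqrt(n * tau / np.sum(xi ** 2))   # fixes sum (xi_i - xi-bar)^2 = n*tau
    x = xi + rng.standard_normal((reps, N))                # x_i = xi_i + delta_i
    y = beta_star * xi + rng.standard_normal((reps, N))    # y*_i, with alpha* = 0
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    b = np.sum(xc * yc, axis=1) / np.sum(xc ** 2, axis=1)  # least squares slope b*
    rel_bias = (b.mean() - beta_star) / beta_star
    rel_mse = np.mean((b - beta_star) ** 2) / beta_star ** 2
    return rel_bias, rel_mse
```

With 200,000 replications the simulated relative bias and relative mean square error agree with (13) and (19) to well within Monte Carlo error.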
3. Bounds for the bias and mean square error

The functions involved in the expressions (13) and (19) for the bias and mean square error have the form

    exp(−z) 1F1(a − k, a, z).

We shall derive inequalities for this type of function. Consider the real functions

    g(z) = exp(−z) 1F1(a − k, a, z)(a + z)^k,        (20)

    f(z) = exp(−z) 1F1(a − k, a, z)z^k.        (21)
The inequalities of interest are implied by the following proposition, stating that g is a monotonically decreasing function and f is a monotonically increasing function of z, z > 0, with appropriate parameter values a and k.

Proposition 1. For any pair of parameters k, a, where k is a natural number and a is a real number with a ≥ k, the real function g(z) satisfies for z > 0:

    ∂g/∂z < 0,        (22)

    lim_{z→0} g(z) = g(0) = a^k,        (23)

    lim_{z→∞} g(z) = (a − k)_k,        (24)

and, if a ≥ k + 1, the real function f(z) satisfies for z > 0:

    ∂f/∂z > 0,        (25)

    lim_{z→0} f(z) = f(0) = 0,        (26)

    lim_{z→∞} f(z) = (a − k)_k,        (27)
where the symbol (α)_k denotes the quantity

    (α)_k = α(α + 1)···(α + k − 1),        k = 1, 2, ... .        (28)

Proof. See appendix B.

From (22) to (27) we conclude the following inequalities:

    (a − k)_k/(a + z)^k < exp(−z) 1F1(a − k, a, z) < { a^k/(a + z)^k,
                                                      (a − k)_k/z^k   (the latter if a ≥ k + 1). }        (29)
Substituting n/2 for a, and (n/2)τ for z according to (15), we obtain

    (n − 2)(n − 4)···(n − 2k)/[n(1 + τ)]^k < exp(−z) 1F1(n/2 − k, n/2, z)        (30)
        < { 1/(1 + τ)^k,                               (30a)
            (n − 2)(n − 4)···(n − 2k)/(nτ)^k. }        (30b)

The ratios of the lower bound to the upper bounds (30a) and (30b) are given by

    (n − 2)(n − 4)···(n − 2k)/n^k,        (31a)

    [τ/(1 + τ)]^k,        (31b)

respectively. Hence, with increasing n the ratio (31a) tends to one, while with increasing τ the ratio (31b) tends to one. Thus for fixed n the upper bound (30b) is sharper than (30a) for large values of τ; a sufficient condition is τ ≥ (n − 2)/2. The bounds for the finite-sample bias of the least squares estimators b* and b are implied by (11), (13), and (30) with k = 1.

Proposition 2. The absolute value of the exact bias of b* in the errors-in-variables model (8) and of b in the errors-in-variables model (1), (2) is bounded by
    (n − 2)/[n(1 + τ)] ≤ |E(b*) − β*|/|β*| = |E(b) − β|/|β − θ| ≤ { 1/(1 + τ)       if n ≥ 2,
                                                                   (n − 2)/(nτ)    if n ≥ 4, }        (32)

where 1/(1 + τ) is the absolute value of the asymptotic relative bias (17) and θ = σ12/σ22. The ratio q of the bias to the asymptotic bias,

    q = [E(b) − β] / lim_{N→∞} [E(b) − β],

is a decreasing function of τ, bounded above by one, and, if n ≥ 4,

    (n − 2)/n < q < (n − 2)(1 + τ)/(nτ).        (33)

Lower and upper bounds for the mean square error (19) of b* and of the mean square error (12) of b can be derived by (30) with k = 1, 2. The results for the mean square error are given in Proposition 3.
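The inequalities (30) and the behavior of the bias ratio q in (33) can be verified numerically; the following sketch (Python with scipy's `hyp1f1`; the parameter grid is an arbitrary illustration) does so:

```python
# Check of the bounds (30) for k = 1, 2 and of the bias ratio q of (33).
import numpy as np
from scipy.special import hyp1f1

def F(n, k, tau):
    """exp(-z) 1F1(n/2 - k, n/2, z) with z = (n/2) tau."""
    z = 0.5 * n * tau
    return np.exp(-z) * hyp1f1(0.5 * n - k, 0.5 * n, z)

def bounds(n, k, tau):
    """Lower bound and upper bounds (30a), (30b)."""
    prod = 1.0
    for j in range(1, k + 1):
        prod *= n - 2.0 * j                  # (n-2)(n-4)...(n-2k)
    lower = prod / (n * (1.0 + tau)) ** k
    upper_a = 1.0 / (1.0 + tau) ** k         # (30a)
    upper_b = prod / (n * tau) ** k          # (30b), requires n >= 2k + 2
    return lower, upper_a, upper_b

def bias_ratio(n, tau):
    """q = (1 + tau) exp(-z) 1F1(n/2 - 1, n/2, z), the ratio bounded in (33)."""
    return (1.0 + tau) * F(n, 1, tau)
```

Evaluating `F` on a grid confirms that it always lies strictly between the lower bound and both upper bounds, and that q decreases in τ within the band ((n − 2)/n, 1).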
Proposition 3. The mean square errors of the least squares estimators b* and b are bounded by

    [1 + (1/β*)²]/[n(1 + τ)] + (n − 3)(n − 4)/[n²(1 + τ)²]
        ≤ E(b* − β*)²/β*² = E(b − β)²/(β − θ)²
        ≤ { [1 + (1/β*)²]/[(n − 2)(1 + τ)] + (n − 3)/[(n − 2)(1 + τ)²]   if n ≥ 4,        (34a)
            [1 + (1/β*)²]/(nτ) + (n − 3)(n − 4)/(n²τ²)                   if n ≥ 6, }      (34b)

where β* = (β − θ)/ω.
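A numerical consistency check of Proposition 3 against the exact expression (19) is straightforward (Python with scipy; the grid of n, τ and the choice β* = 1 are illustrative):

```python
# Verify numerically that the exact relative MSE (19) lies between the
# lower bound and the upper bounds (34a), (34b) of Proposition 3.
import numpy as np
from scipy.special import hyp1f1

def rel_mse(n, tau, beta_star):
    """Exact relative mean square error, eq. (19)."""
    z = 0.5 * n * tau
    F1 = np.exp(-z) * hyp1f1(0.5 * n - 1.0, 0.5 * n, z)
    F2 = np.exp(-z) * hyp1f1(0.5 * n - 2.0, 0.5 * n, z)
    c = 1.0 + beta_star ** -2
    return c * F1 / (n - 2.0) + (n - 3.0) / (n - 2.0) * F2

def mse_bounds(n, tau, beta_star):
    """Lower bound and upper bounds (34a), (34b); (34b) requires n >= 6."""
    c = 1.0 + beta_star ** -2
    lower = c / (n * (1.0 + tau)) + (n - 3.0) * (n - 4.0) / (n * (1.0 + tau)) ** 2
    upper_a = c / ((n - 2.0) * (1.0 + tau)) + (n - 3.0) / ((n - 2.0) * (1.0 + tau) ** 2)
    upper_b = c / (n * tau) + (n - 3.0) * (n - 4.0) / (n * tau) ** 2
    return lower, upper_a, upper_b
```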
With increasing sample size n the ratio of the lower bound to the upper bound (34a) tends to one, and with increasing τ the ratio of the lower bound to the upper bound (34b) approaches one.

4. Simultaneous equation model

Consider the first structural equation,

    y1 = y2 β + Z1 γ1 + u,        (35)
in a system of G ≥ 2 simultaneous equations relating G endogenous and K exogenous variables. The N × 1 vectors y1 and y2 denote observations on endogenous variables; Z is a N × K matrix of observations on K exogenous variables, partitioned as Z = (Z1, Z2), where Z1 is a N × K1 matrix of included exogenous variables and Z2 is a N × K2 matrix of excluded ones, that is, the K2 × 1 vector γ2 is restricted to zero. We assume that K = K1 + K2 ≤ N and K2 ≥ 2; thus eq. (35) is overidentified by zero restrictions on the structural coefficients. The scalar β and the K1 × 1 vector γ1 are the coefficients to be estimated. The N × 1 vector u is a normal disturbance vector with mean zero and scalar covariance matrix. The observation matrix Z of exogenous variables is assumed to be nonstochastic and of rank K. The reduced form of the structural equation system is assumed to exist, with the reduced form equations for y1 and y2 being written as
    y1 = Z1 Π11 + Z2 Π12 + u1,
    y2 = Z1 Π21 + Z2 Π22 + u2,        (36)

where Π11, Π21 are K1 × 1 vectors and Π12, Π22 are K2 × 1 vectors of constant coefficients. The random vector (u1', u2')' is distributed as multivariate normal with zero mean vector and positive definite covariance matrix

    ( σ11 I_N   σ12 I_N )
    ( σ12 I_N   σ22 I_N ).        (37)
We shall evaluate bounds for the exact bias and mean square error of the OLS estimator β̂1 and the TSLS estimator β̂2 of β,

    β̂_i = y2'A_i y1 / y2'A_i y2,        i = 1, 2,        (38)

where

    A1 = I_N − Z1(Z1'Z1)⁻¹Z1',
    A2 = Z2*(Z2*'Z2*)⁻¹Z2*',        (39)

with

    Z2* = A1 Z2.        (40)
Expressions for the finite-sample bias and mean square error of β̂1 and β̂2 have been derived and analyzed by Richardson (1968), Richardson and Wu (1971), and Sawa (1972). It may be worthwhile, however, to consider β̂1 and β̂2, following Anderson (1976), as least squares estimators in linear functional errors-in-variables models, such that the results of section 3 immediately extend to β̂1, β̂2.² The basic linear relationship on which the following errors-in-variables models will be built is given by the restriction on the reduced form coefficients

    Π12 = Π22 β,        (41)

which is implied by restricting γ2 identically equal to zero.
4.1. The OLS estimator

Let Q denote an orthogonal N × N matrix of normalized eigenvectors of the symmetric idempotent matrix A1 of rank N − K1, partitioned as Q = (Q1, Q2), where Q1 is the N × (N − K1) matrix consisting of the eigenvectors corresponding to the N − K1 characteristic roots equal to one. Then

    Q1'A1Q1 = Q1'Q1 = I_{N−K1},
    Q1Q1' = A1,        (42)
    Q1'Z1 = 0.

²Interrelations between estimators in the errors-in-variables model and in the simultaneous equation model have also been analyzed by Kunitomo (1980) and Schneeweiss (1985).

We define
the nonstochastic (N − K1) × 1 vectors η⁰ and ξ⁰ as

    η⁰ = Q1'(y1 − u1) = Q1'Z2 Π12,
    ξ⁰ = Q1'(y2 − u2) = Q1'Z2 Π22.        (43)

Notice that the linear functional relation η⁰ = ξ⁰β follows by a linear transformation of (41). Hence, by virtue of (41) and (43), a linear functional errors-in-variables model is given by

    Q1'y1 = η⁰ + Q1'u1,
    Q1'y2 = ξ⁰ + Q1'u2,        (44)

where the error vector (u1'Q1, u2'Q1) is distributed as multivariate normal with zero mean vector and positive definite covariance matrix

    ( σ11 I_{N−K1}   σ12 I_{N−K1} )
    ( σ12 I_{N−K1}   σ22 I_{N−K1} ).        (45)
The only difference between the mathematical structure of the model (44) and that of the model (1), (2) is that (44) is a linear homogeneous relation while (1) is inhomogeneous. It should be noted, however, that the inhomogeneous linear relation (1) can be transformed into a homogeneous one with n = N − 1 observations and independent errors, such that (44), (45) fits exactly the set-up of the original errors-in-variables model. The least squares estimator b of β in (44) is equivalent to the OLS estimator β̂1:

    b = (Q1'y2)'Q1'y1 / (Q1'y2)'Q1'y2 = y2'A1y1 / y2'A1y2 = β̂1.        (46)
Consequently the parameters n, z, τ, which were defined by (14), (15), take the following values in the model (44):

    n = n1 = N − K1 ≥ K2,        (47)

    z = z1 = (n1/2)τ1,        (48)

    τ = τ1 = ξ⁰'ξ⁰/(n1σ22) = Π22'Z2'A1Z2Π22/(n1σ22).        (49)
Using these parameters we obtain the bounds for the exact bias and mean square error of β̂1 from Propositions 2 and 3 as a corollary. We will reproduce the expressions with the parameters n1 and τ1 in order to facilitate comparison with the bounds for the TSLS moments.

Corollary 1. The finite-sample bias of the OLS estimator β̂1 in the structural eq. (35) is bounded by

    (n1 − 2)/[n1(1 + τ1)] ≤ |E(β̂1) − β|/|β − θ| ≤ { 1/(1 + τ1),                     (50a)
                                                    (n1 − 2)/(n1τ1)   if n1 ≥ 4, }   (50b)

where θ = σ12/σ22.
The mean square error of β̂1 is bounded by

    [1 + (1/β*)²]/[n1(1 + τ1)] + (n1 − 3)(n1 − 4)/[n1²(1 + τ1)²]
        ≤ E(β̂1 − β)²/(β − θ)²
        ≤ { [1 + (1/β*)²]/[(n1 − 2)(1 + τ1)] + (n1 − 3)/[(n1 − 2)(1 + τ1)²]   if n1 ≥ 4,        (51a)
            [1 + (1/β*)²]/(n1τ1) + (n1 − 3)(n1 − 4)/(n1²τ1²)                  if n1 ≥ 6, }      (51b)

where β* = (β − θ)/ω, ω² = (σ11σ22 − σ12²)/σ22².
With increasing sample size the ratios of the lower bounds (50),(51) to the respective upper bounds (50a),(51a) tend to one, and with increasing τ1 the ratios of the lower bounds in (50),(51) to the respective upper bounds (50b),(51b) tend to one. The upper bounds (50b),(51b) are sharper than (50a),(51a), respectively, if τ1 ≥ (n1 − 2)/2 ≥ 2.
4.2. The TSLS estimator

Following Anderson (1976) we express the TSLS estimator β̂2 as a least squares estimator in an errors-in-variables model. We start with another linear transformation of the basic relation (41). Let R denote a nonsingular K2 × K2 matrix which is a square root of Z2*'Z2*, that is,

    RR' = Z2*'Z2* = Z2'A1Z2.        (52)

Then for the linear relationship of the K2 × 1 vectors R'Π12, R'Π22,

    R'Π12 = R'Π22 β,        (53)

we obtain the 'observation' equations, using the least squares estimates of the reduced form coefficients,

    Π̂_i2 = (Z2*'Z2*)⁻¹Z2*'y_i = Π_i2 + (Z2*'Z2*)⁻¹Z2*'u_i,        i = 1, 2,        (54)

as

    R'Π̂12 = R'Π12 + w1,
    R'Π̂22 = R'Π22 + w2,        (55)

where the K2 × 1 error vectors w1, w2 are given as w_i = R'(Z2*'Z2*)⁻¹Z2*'u_i, i = 1, 2, and (w1', w2') is distributed as multivariate normal with zero mean vector and covariance matrix

    ( σ11 I_{K2}   σ12 I_{K2} )
    ( σ12 I_{K2}   σ22 I_{K2} ).        (56)
The TSLS estimator β̂2 is equivalent to the least squares estimator b of β in the errors-in-variables model (53), (55):

    b = Π̂22'RR'Π̂12 / Π̂22'RR'Π̂22 = y2'A2y1 / y2'A2y2 = β̂2.        (57)

For the parameters in the linear functional errors-in-variables model (53), (55) we obtain

    n = n2 = K2,        (58)

    z = z2 = Π22'RR'Π22/(2σ22) = Π22'Z2'A1Z2Π22/(2σ22) = ξ⁰'ξ⁰/(2σ22),        (59)

    τ = τ2 = Π22'Z2'A1Z2Π22/(K2σ22).        (60)

Notice that the parameter z takes the same value in the TSLS case as in the OLS case; the noncentrality parameters τ1 (49) and τ2 (60) are related by

    τ2 = [(N − K1)/K2] τ1 = (n1/K2)τ1.        (61)
If one adopts the usual assumption that τ1 converges to some positive finite number with increasing sample size N, then τ2 obviously tends to infinity with increasing sample size. Because of this dependence between τ2 and the sample size we substitute τ2 by (61) in the resultant formulas for the TSLS estimator.

Corollary 2. The finite-sample bias of the TSLS estimator β̂2 in the structural eq. (35) is bounded by

    (K2 − 2)/(K2 + n1τ1) ≤ |E(β̂2) − β|/|β − θ| ≤ { K2/(K2 + n1τ1),                  (62a)
                                                   (K2 − 2)/(n1τ1)   if K2 ≥ 4, }   (62b)
where θ = σ12/σ22. The mean square error of β̂2 is bounded by

    [1 + (1/β*)²]/(K2 + n1τ1) + (K2 − 3)(K2 − 4)/(K2 + n1τ1)²
        ≤ E(β̂2 − β)²/(β − θ)²
        ≤ { K2[1 + (1/β*)²]/[(K2 − 2)(K2 + n1τ1)] + (K2 − 3)K2²/[(K2 − 2)(K2 + n1τ1)²]   if K2 ≥ 4,        (63a)
            [1 + (1/β*)²]/(n1τ1) + (K2 − 3)(K2 − 4)/(n1τ1)²                              if K2 ≥ 6, }      (63b)

where β* = (β − θ)/ω, ω² = (σ11σ22 − σ12²)/σ22².
With increasing model size, measured by K2, the ratios of the lower bounds (62),(63) to the respective upper bounds (62a),(63a) tend to one; for fixed model size K2, with increasing n1τ1, the ratios of the lower bounds (62),(63) to the respective upper bounds (62b),(63b) tend to one. The upper bounds (62b),(63b) are sharper than (62a),(63a), respectively, if n1τ1 ≥ (K2 − 2)K2/2.
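To illustrate Corollaries 1 and 2, the exact absolute relative biases of β̂1 and β̂2 can be evaluated from (13) with the parameter substitutions (47)-(49) and (58)-(61). The design constants below (N, K1, K2, τ1) are hypothetical values chosen for illustration, not taken from the paper:

```python
# Exact |relative bias| of the OLS estimator (n = n1 = N - K1, tau = tau1)
# and of the TSLS estimator (n = K2, tau = tau2 = n1*tau1/K2), via eq. (13).
import numpy as np
from scipy.special import hyp1f1

def abs_relative_bias(n, tau):
    z = 0.5 * n * tau
    return np.exp(-z) * hyp1f1(0.5 * n - 1.0, 0.5 * n, z)

def ols_tsls_bias(N=50, K1=3, K2=4, tau1=1.0):
    n1 = N - K1
    tau2 = n1 * tau1 / K2          # relation (61)
    return abs_relative_bias(n1, tau1), abs_relative_bias(K2, tau2)
```

In this over-identified design the TSLS bias factor is an order of magnitude smaller than the OLS one, reflecting the opposite behavior of the two noncentrality parameters as n1τ1 grows.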
Appendix A

For the real function h(a) = exp(−aτ) 1F1(a − 1, a, aτ), a > 0, with parameter τ > 0, we find

    ∂h/∂a = exp(−aτ)[∂ 1F1(a − 1, a, aτ)/∂a − τ 1F1(a − 1, a, aτ)].        (A.1)

The confluent hypergeometric function in (A.1) is given by

    1F1(a − 1, a, aτ) = Σ_{s=0}^∞ [(a − 1)/(a + s − 1)] (aτ)^s/s!,        (A.2)

and its first derivative with respect to a is

    ∂ 1F1(a − 1, a, aτ)/∂a = Σ_{s=1}^∞ s[a + (a − 1)(a + s − 1)]/(a + s − 1)² · a^(s−1)τ^s/s!.        (A.3)

Substituting (A.2) and (A.3) in (A.1) leads to

    ∂h/∂a = exp(−aτ) Σ_{s=2}^∞ s(s − 1)/[(a + s − 1)²(a + s − 2)] · a^(s−1)τ^s/s!

          = aτ² exp(−aτ) Σ_{s=0}^∞ (aτ)^s/[(a + s + 1)²(a + s)s!].        (A.4)

This result shows that the first derivative is positive for a > 0. Hence h(n/2), the absolute value of the relative bias, is a monotonically increasing function of n.

Appendix B: Proof of Proposition 1

(22):
Since [see Slater (1960, p. 15)]

    (d/dz) exp(−z) 1F1(a − k, a, z) = −(k/a) exp(−z) 1F1(a − k, a + 1, z),

the derivative of g(z) = exp(−z) 1F1(a − k, a, z)(a + z)^k is

    ∂g/∂z = (k/a)(a + z)^(k−1) exp(−z)[a 1F1(a − k, a, z) − (a + z) 1F1(a − k, a + 1, z)].

Using the recurrence relation (2.2.2) in Slater (1960, p. 19) we have

    a 1F1(a − k, a, z) − (a + z) 1F1(a − k, a + 1, z) = −[(k + 1)/(a + 1)] z 1F1(a − k, a + 2, z),

hence

    ∂g/∂z = −[k(k + 1)/(a(a + 1))] z(a + z)^(k−1) exp(−z) 1F1(a − k, a + 2, z) < 0,

if z > 0, with parameters a ≥ k, k > 0.

(23): Obvious.

(24): The asymptotic expansion of the confluent hypergeometric function for large z, z > 0, yields [see Slater (1960, p. 60)]

    1F1(a − k, a, z) = [Γ(a)/Γ(a − k)] exp(z) z^(−k)[1 + O(z⁻¹)],

hence

    lim_{z→∞} g(z) = lim_{z→∞} [Γ(a)/Γ(a − k)][(a + z)/z]^k = Γ(a)/Γ(a − k) = (a − k)_k,

because k is a natural number and a ≥ k.

(25): From (2.1.15) in Slater (1960, p. 15) it follows that the derivative of f(z) = exp(−z) 1F1(a − k, a, z)z^k is

    ∂f/∂z = kz^(k−1) exp(−z) 1F1(a − k − 1, a, z),

with parameters a ≥ k + 1, k > 0. Hence the first derivative of f is positive for z > 0.

(26): Obvious.

(27): Since

    lim_{z→∞} f(z)/g(z) = lim_{z→∞} [z/(a + z)]^k = lim_{z→∞} (1 + a/z)^(−k) = 1,

(27) is implied by (24).
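The series (A.4) and the closed forms for ∂g/∂z and ∂f/∂z used in appendix B can be cross-checked against finite differences; a sketch in Python with scipy (the parameter values are arbitrary illustrations):

```python
# Finite-difference checks of dh/da from (A.4) and of the closed forms
# for dg/dz and df/dz derived in appendix B.
import numpy as np
from scipy.special import hyp1f1, gammaln

def h(a, tau):
    return np.exp(-a * tau) * hyp1f1(a - 1.0, a, a * tau)

def dh_da_series(a, tau, terms=200):
    # a tau^2 exp(-a tau) sum_s (a tau)^s / ((a+s+1)^2 (a+s) s!), cf. (A.4)
    s = np.arange(terms, dtype=float)
    log_t = s * np.log(a * tau) - gammaln(s + 1.0)   # log of (a tau)^s / s!
    w = 1.0 / ((a + s + 1.0) ** 2 * (a + s))
    return a * tau ** 2 * np.exp(-a * tau) * np.sum(np.exp(log_t) * w)

def g(a, k, z):
    return np.exp(-z) * hyp1f1(a - k, a, z) * (a + z) ** k

def f(a, k, z):
    return np.exp(-z) * hyp1f1(a - k, a, z) * z ** k

def dg_dz(a, k, z):
    # dg/dz = -k(k+1)/(a(a+1)) z (a+z)^(k-1) exp(-z) 1F1(a-k, a+2, z) < 0
    return (-k * (k + 1.0) / (a * (a + 1.0)) * z * (a + z) ** (k - 1)
            * np.exp(-z) * hyp1f1(a - k, a + 2.0, z))

def df_dz(a, k, z):
    # df/dz = k z^(k-1) exp(-z) 1F1(a-k-1, a, z) > 0 for a >= k + 1
    return k * z ** (k - 1) * np.exp(-z) * hyp1f1(a - k - 1.0, a, z)
```

Central differences of h, g, and f reproduce the three closed forms to high accuracy, and the signs confirm (18), (22), and (25).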
References

Anderson, T.W., 1976, Estimation of linear functional relationships: Approximate distributions and connections with simultaneous equations in econometrics, Journal of the Royal Statistical Society B 38, 1-31.
Halperin, Max and Joan Gurian, 1971, A note on estimation in straight line regression when both variables are subject to error, Journal of the American Statistical Association 66, 587-589.
Kunitomo, Naoto, 1980, Asymptotic expansions of the distributions of estimators in a linear functional relationship and simultaneous equations, Journal of the American Statistical Association 75, 693-700.
Mittag, Hans-Joachim, 1987, Modifizierte Kleinst-Quadrate-Schätzung im Modell mit fehlerbehafteten Daten, Mathematical Systems in Economics 109 (Athenäum, Frankfurt).
Richardson, David H., 1968, The exact distribution of a structural coefficient estimator, Journal of the American Statistical Association 63, 1214-1226.
Richardson, David H. and De-Min Wu, 1970, Least squares and grouping method estimators in the errors in variables model, Journal of the American Statistical Association 65, 724-748.
Richardson, David H. and De-Min Wu, 1971, A note on the comparison of ordinary and two-stage least squares estimators, Econometrica 39, 973-981.
Sawa, Takamitsu, 1968, The exact sampling distribution of ordinary least squares and two-stage least squares estimators, Journal of the American Statistical Association 64, 923-937.
Sawa, Takamitsu, 1972, Finite-sample properties of the k-class estimators, Econometrica 40, 653-680.
Schneeweiss, Hans, 1985, Estimating linear relations with errors in the variables: The merging of two approaches, in: H. Schneeweiss and H. Strecker, eds., Contributions to econometrics and statistics today - In memoriam Günther Menges (Springer, Heidelberg) 207-221.
Slater, L.J., 1960, Confluent hypergeometric functions (Cambridge University Press, Cambridge).