Mixed estimation in covariance structure models


Statistics & Probability Letters 4 (1986) 157-159
North-Holland

June 1986

Douglas G. BONETT

Received January 1986; Revised March 1986

Abstract: The mixed estimator that has been widely used in linear regression models is applied to the general covariance structure model. This application provides a simple alternative to the Bayesian estimation, constrained estimation and inequality constrained estimation methods that have been proposed in the covariance structure model literature.

Keywords: covariance structures, cross-validation, inequality constraints, mixed estimation, stochastic linear restrictions, Wald test.

1. Introduction

Let y be a 1 × p observable multivariate normal random vector with mean zero and covariance matrix Σ. Let S be the observable random matrix

S = Y'Y/n                                                    (1)

where Y is an n × p matrix of n independent replicates of the random vector y. The general covariance structure model may be written as

S = Σ(θ) + E                                                 (2)

where Σ(θ) is the expected value of S expressed as a function of q ≤ p(p+1)/2 unknown and unobservable parameters θ, and E is an unobservable random matrix of deviations from expectation.

Many familiar models are defined by the specification of Σ(θ) in (2). Setting Σ(θ) = σ²V, where θ = σ² and V is known, defines the sphericity model (Anderson, 1958, Section 10.7.1). Setting Σ(θ) = ΛΦΛ' + Ψ, where θ contains the unknown elements in Λ, Φ, and Ψ, defines the general factor analysis model (Anderson, 1958, Section 14.7). Setting

Σ(θ) = Λ(B⁻¹ΓΦΓ'B'⁻¹ + Ψ)Λ' + Θ                              (3)

where θ contains the unknown elements in Λ, B, Γ, Φ, Ψ, and Θ, defines the general covariance structure model described by Bentler (1980) and Joreskog (1981), which represents the covariance structure underlying many important statistical models such as the latent variable econometric model, the simplex and circumplex models, and the components of covariance model. Best asymptotically normal (BAN) estimators of θ have been given by Joreskog (1970), based on the method of maximum likelihood, and by Browne (1974), based on the method of generalized least squares. Browne (1982, 1984) defines a distribution-free BAN estimator of θ for applications where the multivariate normality of y cannot be assumed.

In most applications of (2), the researcher will have prior information regarding the parameter space of θ. Prior information may be expressible in terms of the familiar general linear hypothesis h = Hθ, where H is an r × q matrix of known constants and h is an r × 1 vector of known constants. In this note, we consider a stochastic general linear hypothesis (see Judge et al., 1982, Section 3.2.3) of the form

h = Hθ + ω                                                   (4)

where h is an observable random vector that is asymptotically multivariate normal with mean Hθ and full rank covariance matrix Ω. It is assumed
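The setup in (1) and (2) can be checked numerically. The sketch below is our own toy illustration, not code from the note; the covariance matrix, sample size, and seed are arbitrary. It draws n replicates of y and verifies that S = Y'Y/n fluctuates around its expectation Σ:

```python
# Toy check of (1)-(2): the sample matrix S = Y'Y/n varies around Sigma.
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 5000
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])   # arbitrary "true" covariance
L = np.linalg.cholesky(Sigma)
Y = rng.standard_normal((n, p)) @ L.T  # n independent replicates of y
S = Y.T @ Y / n                        # equation (1)
E = S - Sigma                          # deviations from expectation, as in (2)
```

With n = 5000 the entries of E are small, illustrating that E is pure sampling error.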


that the elements in ω are stochastically independent of the elements in E. We will consider here the problem of defining an efficient estimator of θ subject to the general stochastic linear constraints specified in (4). In the linear regression model, Theil and Goldberger (1961) define an estimator of the unknown parameter vector subject to general stochastic linear constraints and refer to such an estimator as a 'mixed' estimator. We will define a mixed estimator of θ for the general covariance structure model by direct application of the Theil and Goldberger results.

2. Estimation and hypothesis testing

Define an unrestricted estimator of θ as θ̂ = f(s), where s is the vector of nonduplicated elements in the observable random matrix S and f(s) is a twice differentiable vector-valued function of s such that plim(θ̂ − θ) = 0. The asymptotic covariance matrix of s is given by Anderson (1958, Theorem 4.2.4), and application of the multivariate delta method gives the asymptotic covariance matrix of θ̂, which we denote as Σ_θ̂. Since θ̂ is an observable random vector, we may apply the Theil and Goldberger (1961) results directly to obtain an augmented linear model

    [ θ̂ ]   [ I ]       [ e ]
    [ h ] = [ H ] θ  +  [ ω ]                                (5)

where e = θ̂ − θ is an unobservable random vector of disturbances. One may easily verify that the estimator

θ* = (Σ̂_θ̂⁻¹ + H'Ω̂⁻¹H)⁻¹(Σ̂_θ̂⁻¹θ̂ + H'Ω̂⁻¹h)                    (6)

minimizes the generalized least squares quadratic loss function given by Theil and Goldberger (1961), and an estimator of the asymptotic covariance matrix of θ* is

Σ̂_θ* = (Σ̂_θ̂⁻¹ + H'Ω̂⁻¹H)⁻¹                                   (7)

where Σ̂_θ̂ and Ω̂ are estimators of Σ_θ̂ and Ω. To evaluate the general stochastic linear hypothesis we apply the following statistic of Theil (1963):

W = (h − Hθ̂)'(HΣ̂_θ̂H' + Ω̂)⁻¹(h − Hθ̂).                        (8)
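As a concrete sketch of (6)-(8), the function below combines an unrestricted estimate θ̂ with stochastic prior information h = Hθ + ω. This is our own toy illustration, not code from the note; the function name and the numeric inputs are assumptions for the example.

```python
# Hedged sketch of the Theil-Goldberger mixed estimator, equations (6)-(8).
import numpy as np

def mixed_estimator(theta_hat, Sigma_hat, H, h, Omega_hat):
    """Mixed estimator for prior information h = H theta + omega.

    Returns theta_star (6), its estimated covariance (7), and the
    compatibility statistic W (8), asymptotically chi-square with
    rank(H) degrees of freedom when the prior information holds.
    """
    Si = np.linalg.inv(Sigma_hat)   # Sigma_theta_hat^{-1}
    Oi = np.linalg.inv(Omega_hat)   # Omega^{-1}
    cov_star = np.linalg.inv(Si + H.T @ Oi @ H)              # (7)
    theta_star = cov_star @ (Si @ theta_hat + H.T @ Oi @ h)  # (6)
    r = h - H @ theta_hat
    W = r @ np.linalg.inv(H @ Sigma_hat @ H.T + Omega_hat) @ r  # (8)
    return theta_star, cov_star, W

# Toy example: two parameters, one stochastic constraint theta1 + theta2 ~ 1.
theta_hat = np.array([0.4, 0.5])
Sigma_hat = np.diag([0.04, 0.04])
H = np.array([[1.0, 1.0]])
h = np.array([1.0])
Omega_hat = np.array([[0.01]])
theta_star, cov_star, W = mixed_estimator(theta_hat, Sigma_hat, H, h, Omega_hat)
```

The mixed estimate is pulled toward satisfying the constraint, and its estimated variances are no larger than those of the unrestricted estimate, reflecting the extra prior information.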


Since θ̂ and h are independent asymptotically multivariate normal random vectors, h − Hθ̂ is asymptotically multivariate normal with covariance matrix (HΣ_θ̂H' + Ω). Thus, W converges to a central chi-square distribution with rank[H] = r degrees of freedom when (4) is correct. Clearly, implementation of (6)-(8) may be carried out when θ̂ and Σ̂_θ̂ have been obtained by the maximum likelihood, generalized least squares, or distribution-free methods referenced earlier.

3. Summary

The mixed estimator that has been applied extensively in the linear regression model (see Toutenburg, 1982) will have important applications in the general covariance structure model. For example, in a cross-validation study, a researcher may want to impose the constraints h = Hθ + ω in the validation sample, where H is an identity matrix, h is a vector of parameter estimators defined for a calibration sample, and Ω̂ is the estimator covariance matrix defined for the calibration sample. Application of (8) then gives a cross-validation test. In applications where general exact linear constraints are required, (6)-(8) will provide simple alternatives to more involved methods that have been proposed in the literature (Lee, 1980; Lee and Bentler, 1980) by setting Ω⁻¹ = cI, where c is a sufficiently large constant, for example c = 10⁶ × tr(Σ̂_θ̂⁻¹). The mixed estimator may also be viewed from a Bayesian standpoint (see Theil, 1971, pp. 670-672) and can be applied in the general covariance structure model in a manner analogous to the Bayesian estimation described by Martin and McDonald (1975) and Lee (1981). The mixed estimator, like a Bayesian estimator, is a function of both sample and prior information, but combines the two sources of information without the use of Bayes' formula. The mixed estimator has also been used to impose inequality constraints in linear regression models (see Toutenburg, 1982) and can be similarly applied in covariance structure models as a simple alternative to the more elaborate methods described by Rindskopf (1983, 1984).
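The device for imposing exact linear constraints, replacing Ω⁻¹ by cI with c large, can be sketched as follows. This is our own toy illustration under assumed inputs; the function name and numbers are not from the note.

```python
# Hedged sketch: exact constraints as a limiting case of the mixed
# estimator (6), obtained by setting Omega^{-1} = c*I with c large.
import numpy as np

def constrained_mixed(theta_hat, Sigma_hat, H, h, c):
    Si = np.linalg.inv(Sigma_hat)
    Oi = c * np.eye(len(h))          # Omega^{-1} = cI
    cov = np.linalg.inv(Si + H.T @ Oi @ H)
    return cov @ (Si @ theta_hat + H.T @ Oi @ h)

theta_hat = np.array([0.4, 0.5])
Sigma_hat = np.diag([0.04, 0.04])
H = np.array([[1.0, 1.0]])           # constraint: theta1 + theta2 = 1
h = np.array([1.0])
c = 1e6 * np.trace(np.linalg.inv(Sigma_hat))  # magnitude suggested in the note
theta_c = constrained_mixed(theta_hat, Sigma_hat, H, h, c)
```

For large c the constraint Hθ* = h holds up to numerical tolerance, so no specialized constrained-optimization routine is needed.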


References

Anderson, T.W. (1958), An Introduction to Multivariate Statistical Analysis (Wiley, New York).
Bentler, P.M. (1980), Multivariate analysis with latent variables: Causal modeling, Annual Review of Psychology 31, 419-456.
Browne, M.W. (1974), Generalized least squares estimators in the analysis of covariance structures, South African Statistical Journal 8, 1-24.
Browne, M.W. (1982), Covariance structures, in D.M. Hawkins, ed., Topics in Applied Multivariate Analysis (Cambridge University Press, Cambridge) pp. 72-141.
Browne, M.W. (1984), Asymptotically distribution-free methods for the analysis of covariance structures, British Journal of Mathematical and Statistical Psychology 37, 62-83.
Joreskog, K.G. (1970), A general method for the analysis of covariance structures, Biometrika 57, 239-251.
Joreskog, K.G. (1981), Analysis of covariance structures, Scandinavian Journal of Statistics 8, 65-92.
Judge, G.G., W.E. Griffiths, R.C. Hill and T.-C. Lee (1982), Theory and Practice of Econometrics (Wiley, New York).
Lee, S.-Y. (1980), Estimation of covariance structure models with parameters subject to functional constraints, Psychometrika 45, 309-345.


Lee, S.-Y. (1981), A Bayesian approach to confirmatory factor analysis, Psychometrika 46, 153-160.
Lee, S.-Y. (1985), On testing functional constraints in structural equation models, Biometrika 72, 125-131.
Lee, S.-Y. and Bentler, P.M. (1980), Some asymptotic properties of constrained generalized least squares estimation in covariance structure models, South African Statistical Journal 14, 121-126.
Martin, J.K. and R.P. McDonald (1975), Bayesian analysis in unrestricted factor analysis: A treatment of Heywood cases, Psychometrika 40, 505-517.
Rindskopf, D. (1983), Parameterizing inequality constraints on unique variances in linear structural models, Psychometrika 48, 73-83.
Rindskopf, D. (1984), Using phantom and imaginary latent variables to parameterize constraints in linear structural models, Psychometrika 49, 37-47.
Theil, H. (1963), On the use of incomplete prior information in regression analysis, Journal of the American Statistical Association 58, 401-414.
Theil, H. (1971), Principles of Econometrics (Wiley, New York).
Theil, H. and A.S. Goldberger (1961), On pure and mixed estimation in economics, International Economic Review 2, 65-78.
Toutenburg, H. (1982), Prior Information in Linear Models (Wiley, New York).
