A limiting property for the powers of a non-negative, reducible matrix


Structural Change and Economic Dynamics, vol. 4, no. 2, 1993

A LIMITING PROPERTY FOR THE POWERS OF A NON-NEGATIVE, REDUCIBLE MATRIX

ERIK DIETZENBACHER^1

Considering a dynamic multisector model, the behaviour of A^k is examined for a non-negative matrix A as k becomes large. It is well known that for a primitive matrix A, the matrix (A/λ)^k converges to yq'/(q'y), where λ denotes the dominant eigenvalue, and where y and q' are the right and left eigenvectors associated with λ. In this note, the same result is shown to hold, under certain conditions, when A is reducible with primitive diagonal block submatrices. Under weaker conditions A^k/(e'A^k e) is proved to converge to yq'/[(e'y)(q'e)], where e denotes the summation vector. The results are interpreted in terms of dynamic multisector models and interindustry linkage indicators.

1. INTRODUCTION AND MOTIVATION

Consider a dynamic multisector model, the solution of which is obtained from the following system of n first-order, linear, homogeneous difference equations

x(t) = Ax(t − 1) = A^t x(0).   (1)

It is assumed that the n × n coefficients matrix A is non-negative^2 and irreducible. Then, it follows from the Perron-Frobenius theorem that the dominant eigenvalue λ is real, positive, and a simple root of the characteristic polynomial. Its corresponding right eigenvector (or Perron vector) y is unique (up to a scalar) and positive. Suppose that the initial solution x(0) is chosen proportional to y, that is x(0) = ξy with the scalar ξ ≠ 0. The general solution is then given by x(t) = ξλ^t y. When the initial solution x(0) is chosen arbitrarily, the general solution is readily given when it is assumed that all eigenvalues are distinct. Let the dominant eigenvalue be denoted by λ, and the other eigenvalues by μ2, ..., μn. In this case x(t) = ξ1 λ^t y + ξ2 μ2^t y^2 + ... + ξn μn^t y^n, where y^i denotes the right eigenvector corresponding to μi. Since the n eigenvectors are linearly independent, the scalars ξi are determined by x(0) = ξ1 y + ξ2 y^2 + ... + ξn y^n. In the long run ξ1 λ^t y becomes the dominant part in the general solution x(t), if λ > |μi|. That is, lim_{t→∞} xj(t)/(λ^t yj) = ξ1 for all j = 1, ..., n, where xj(t) and yj denote the jth

^1 Econometrics Institute, University of Groningen, PO Box 800, 9700 AV Groningen, The Netherlands.
^2 For any vector x, the following expressions and notations are used. Positive, x ≫ 0, if xi > 0 for all i. Non-negative, x ≥ 0, if xi ≥ 0 for all i. Semipositive, x > 0, if x ≥ 0 and x ≠ 0. The same notations are used for matrices.

© Oxford University Press 1993


element of the vector x(t), respectively y. This property is known as the relative stability of the balanced growth solution ξ1 λ^t y (see Nikaido, 1968; Takayama, 1985). It should be noted that relative stability does not imply that xj(t) converges to ξ1 λ^t yj; see example 1 in Appendix A. Above it was assumed, for convenience, that all eigenvalues are distinct. It is essential, however, that λ > |μi| for any i = 2, ..., m, where m denotes the number of distinct eigenvalues. For any non-negative, irreducible matrix this condition is satisfied if and only if the matrix is primitive (or acyclic).^3 For a primitive matrix A it is well known that lim_{t→∞} (A/λ)^t = yq'/(q'y), where q' (≫ 0) denotes the left Perron vector of A (satisfying q'A = λq').^4 For primitive matrices relative stability of the balanced growth solution is easily shown to hold.

lim_{t→∞} xj(t)/(λ^t yj) = lim_{t→∞} [A^t x(0)]j/(λ^t yj) = lim_{t→∞} [(A/λ)^t x(0)]j/yj
   = [yq'x(0)]j/[(q'y)yj] = [q'x(0)]yj/[(q'y)yj] = q'x(0)/(q'y).
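As a quick numerical sketch of this limit (the 2 × 2 primitive matrix and the initial vector below are merely convenient choices, not prescribed by the text), the common constant q'x(0)/(q'y) can be checked with NumPy:

```python
# Hedged sketch: verify that x_j(t)/(lambda^t y_j) tends to the same constant
# q'x(0)/(q'y) for every component j. The matrix A is an invented primitive example.
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

w, V = np.linalg.eig(A)
i = np.argmax(w.real)
lam = w.real[i]                      # dominant eigenvalue lambda
y = np.abs(V[:, i].real)             # right Perron vector y

w2, U = np.linalg.eig(A.T)
q = np.abs(U[:, np.argmax(w2.real)].real)   # left Perron vector q

x = np.array([11.0, 1.0])            # arbitrary initial solution x(0)
xi1 = (q @ x) / (q @ y)              # predicted limit q'x(0)/(q'y)
t = 60
for _ in range(t):
    x = A @ x                        # x(t) = A^t x(0)
ratio = x / (lam ** t * y)           # componentwise x_j(t)/(lambda^t y_j)
```

Both components of `ratio` agree with `xi1`, even though x(t) itself diverges from the balanced growth path.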

(See example 2 in Appendix A for a numerical illustration.) If x(t) is interpreted as the actual output vector at time t, relative stability states that in each sector the ratio between the actual output and the balanced growth output converges to the same constant. Once again, this leaves the possibility that the actual output diverges from the balanced growth output. The actual output of any sector j, measured as a share of the total actual output, does converge however. That is

lim_{t→∞} xj(t)/[Σs xs(t)] = lim_{t→∞} [A^t x(0)]j/Σs [A^t x(0)]s = lim_{t→∞} [(A/λ)^t x(0)]j/Σs [(A/λ)^t x(0)]s
   = [yq'x(0)]j/Σs [yq'x(0)]s = yj/Σs ys.   (2)

Observe that this result is independent of the initial solution x(0). Further, using e for the column summation vector, i.e. e = (1, 1, ..., 1)', it follows from (2) that lim_{t→∞} x(t + 1)/[e'x(t)] = lim_{t→∞} Ax(t)/[e'x(t)] = Ay/(e'y) = λy/(e'y), and thus lim_{t→∞} e'x(t + 1)/[e'x(t)] = λ. The growth in the total actual output converges to λ. The same limiting property as in (2) holds for the share of the actual output value pj xj(t) in the total actual output value Σs ps xs(t). This share converges to pj yj/Σs ps ys.
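The share convergence in (2) and the growth-rate limit are easy to check numerically. In this sketch the primitive matrix and the two initial vectors are invented for illustration; the point is that both starting points give the same limiting shares yj/Σs ys, while the growth in total output tends to λ:

```python
# Hedged sketch of equation (2): shares x_j(t)/e'x(t) converge to y_j/e'y
# independently of x(0), and e'x(t+1)/e'x(t) converges to lambda.
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
w, V = np.linalg.eig(A)
lam = w.real.max()
y = np.abs(V[:, np.argmax(w.real)].real)
target = y / y.sum()                  # limiting output shares y/(e'y)

shares, growth = [], []
for x0 in (np.array([11.0, 1.0]), np.array([0.5, 9.0])):
    x = x0.copy()
    for _ in range(80):
        x_next = A @ x
        g = x_next.sum() / x.sum()    # e'x(t+1)/e'x(t)
        x = x_next / x_next.sum()     # rescale to avoid overflow
    shares.append(x)                  # x is already normalized to unit sum
    growth.append(g)
```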

A crucial assumption in deriving the results is that the matrix A is primitive. Seneta (1981) uses the following definition of a primitive matrix. A square non-negative matrix A is said to be primitive if there exists a positive integer k such that A^k ≫ 0. It immediately follows from this definition that a reducible

^3 Note that, for a non-negative, irreducible matrix, the dominant eigenvalue is a simple root of the characteristic polynomial. If the matrix is irreducible but not primitive, it is possible that there is another real eigenvalue μ2 = −λ, and that there are complex eigenvalues μi such that |μi| = λ.
^4 See Steenge (1986, 1987, 1990) for an application of (A/λ)^t within the context of static multisector models. Both for the closed model and the reformulated open model the dominant eigenvalue equals one. The elements of the so-called 'infinite order' input matrices A^t are shown to express the 'embodiedness' of good i in the production of one unit of good j.


matrix can never be a primitive matrix.^5 Thus, irreducibility is a necessary condition for a matrix to be primitive. Non-negative, irreducible matrices are divided into two categories: primitive (or acyclic) and imprimitive (or cyclic) matrices. It is easily seen that neither (A/λ)^k nor A^k/(e'A^k e) needs to converge as k → ∞ when A is imprimitive (see example 3 in Appendix A). Properties of both primitive and imprimitive matrices are well documented (see, e.g. Berman and Plemmons, 1979; Seneta, 1981). A sufficient condition for a non-negative, irreducible matrix to be primitive is that at least one of its diagonal elements is positive. For an input matrix, this condition is likely to hold. It simply means that some sector uses part of its own output as an input for production. Still, primitivity is considered to be rather restrictive, because the underlying assumption of irreducibility is less plausible in an economic context. Within an input-output model for instance, it means that for the production of any good, all goods are used, directly or indirectly. In particular when the number of goods (i.e. n) becomes large, irreducibility is questionable. Many empirical studies have analyzed the degree to which input matrices can be triangularized (see, e.g. Chenery and Watanabe, 1958; Simpson and Tsukui, 1965; Korte and Oberhofer, 1971; Santhanam and Patil, 1972; Song, 1977; Fukui, 1986). Reducibility is explicitly taken into account by Sraffa (1960) by distinguishing basic and non-basic commodities (see also Pasinetti, 1977). In example 3 (in Appendix A) it is seen that (A/λ)^k and A^k/(e'A^k e) do not need to converge when A is imprimitive. However, if the matrix A fails to be primitive because it lacks the underlying assumption of irreducibility, convergence may be obtained under certain, mild conditions (see example 4 in Appendix A).
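Seneta's definition can be turned into a direct test. The helper below is not from the paper; it checks A^k ≫ 0 for k up to Wielandt's classical bound (n − 1)² + 1, which is known to suffice for primitivity, and works on the zero pattern only so that no overflow can occur:

```python
# Hedged sketch: primitivity test based on Seneta's definition A^k >> 0.
# By Wielandt's bound, checking exponents up to (n-1)^2 + 1 = n^2 - 2n + 2 suffices.
import numpy as np

def is_primitive(A):
    """Return True iff the non-negative square matrix A is primitive."""
    B = (np.asarray(A, dtype=float) > 0).astype(int)   # zero pattern of A
    n = B.shape[0]
    P = B.copy()                                       # pattern of A^1
    for _ in range(n * n - 2 * n + 1):                 # up to pattern of A^(n^2-2n+2)
        if P.all():
            return True
        P = np.minimum(P @ B, 1)                       # pattern of the next power
    return bool(P.all())

# A primitive, a cyclic (imprimitive), and a reducible example:
primitive = [[0, 1], [1, 1]]
cyclic = [[0, 1], [1, 0]]
reducible = [[1, 0], [1, 1]]
```

As the text notes, the reducible example fails the test even though its diagonal is partly positive, since one off-diagonal block of A^k stays zero for every k.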
The present paper considers the behaviour of A^k/(e'A^k e) and (A/λ)^k as k → ∞, where A is reducible with primitive submatrices along its main diagonal. Another interpretation of the result in (2) is given in terms of interindustry linkages. In order to measure such linkages, basically two concepts prevail: direct linkages (Chenery and Watanabe, 1958) and direct plus indirect linkages (Rasmussen, 1956). The direct forward linkages are traditionally obtained as the row sums of the input matrix A, i.e. the elements of the column vector Ae. Its ith element expresses how much of sector i's production is additionally required as input when in each sector the output is raised by one unit. Linkage indicators are obtained by relating these linkages (Ae) to their average (e'Ae/n), which gives nAe/(e'Ae). A shortcoming of this method is that it considers the effect of an equal increment in each sector's output. This calls for the use of weights so that the direct forward linkage indicators are derived as nAc/(e'Ac), where c denotes a vector of column weights. Since important sectors may be identified as sectors

^5 The n × n matrix A is reducible if there exists a partition {I, J} of {1, ..., n} such that (1) I ∪ J = {1, ..., n} and I ∩ J = ∅ with I ≠ ∅ and J ≠ ∅, and (2) aij = 0 for i ∈ I and j ∈ J. Consequently, for i ∈ I and j ∈ J, a^(2)_ij = Σt ait atj = Σ_{t∈I} ait atj + Σ_{t∈J} ait atj = 0, since atj = 0 for t ∈ I and ait = 0 for t ∈ J. In the same way it follows that also a^(k)_ij = 0 for i ∈ I and j ∈ J.


with large linkage indicators, the indicators themselves may be used as weights to obtain improved indicators. We may therefore start with the traditional forward linkage indicators f1 = nAe/(e'Ae). An improved measurement is then obtained from f2 = nAf1/(e'Af1) = nA^2 e/(e'A^2 e). The general expression becomes fk = nA^k e/(e'A^k e), which converges to ny/(e'y), given that A is primitive. The direct plus indirect forward linkages are derived from the Leontief inverse (I − A)^{-1} as the row sums (I − A)^{-1} e, and the indicators as n(I − A)^{-1} e/[e'(I − A)^{-1} e]. Again, we may start with the traditional direct plus indirect forward linkage indicators f1 = n(I − A)^{-1} e/[e'(I − A)^{-1} e]. Improved indicators, using f1 as a weighting vector, are given by f2 = n(I − A)^{-1} f1/[e'(I − A)^{-1} f1] = n(I − A)^{-2} e/[e'(I − A)^{-2} e]. In general fk = n(I − A)^{-k} e/[e'(I − A)^{-k} e]. Note that A and (I − A)^{-1} have the same right Perron vector y. When A is irreducible and λ < 1, (I − A)^{-1} ≫ 0 and thus primitive. Therefore, fk also converges to ny/(e'y). Similarly, the expression for the direct backward linkage indicators is given by ne'A^k/(e'A^k e). The indicators for the direct plus indirect backward linkages yield ne'(I − A)^{-k}/[e'(I − A)^{-k} e]. Both converge to nq'/(q'e) as k → ∞. Notice that both the direct and the direct plus indirect measurement of linkages give the same result when the weighting procedure as described above is adopted. (See Dietzenbacher, 1992, for details of the 'eigenvector' method.)
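The stepwise weighting procedure for forward linkages can be sketched in a few lines. The 3 × 3 coefficient matrix below is invented for illustration; because it is primitive, the iteration f_{k+1} = nAf_k/(e'Af_k) settles on ny/(e'y):

```python
# Hedged sketch of the 'eigenvector' linkage iteration f_k = n A^k e / (e'A^k e).
import numpy as np

A = np.array([[0.10, 0.30, 0.10],     # invented primitive input matrix
              [0.20, 0.05, 0.40],
              [0.30, 0.20, 0.10]])
n = A.shape[0]
e = np.ones(n)

f = e.copy()                           # f_0 = e: unweighted starting point
for _ in range(200):
    f = n * (A @ f) / (e @ (A @ f))    # improved indicators, renormalized

w, V = np.linalg.eig(A)
y = np.abs(V[:, np.argmax(w.real)].real)   # right Perron vector of A
limit = n * y / y.sum()                    # n y / (e'y)
```

By construction the indicators average to one at every step (e'fk/n = 1), and the fixed point is independent of the starting weights.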

2. THE MAIN RESULTS

In this section we present our main results; the proofs are given in Appendix B. In order to avoid various inconveniences in the proofs, the simplest case of a reducible matrix is considered first. More general results are discussed in the next section. Assume that, after a suitable permutation of the rows and columns, the matrix A can be written as follows

A = [ A1  0
      B   A2 ],   (3)

where A1 and A2 are irreducible, square matrices, while B > 0. Denote the dominant eigenvalues of A1 and A2 by λ1 and λ2, respectively. It is well known that the Perron vectors of A contain elements which are zero (see, e.g. Gantmacher, 1974, p. 78). Let the Perron vectors be partitioned according to (3). That is, q' = (q'1, q'2) and y = (y'1, y'2)'. The following theorem gives a characterization of the zero elements.

THEOREM 1. Let A be reducible and partitioned as in (3), with A1 and A2 square, non-negative and irreducible, and B > 0. Then,

(i) if λ2 < λ1 = λ: q2 = 0 and q1, y1, y2 ≫ 0,
(ii) if λ1 < λ2 = λ: y1 = 0 and y2, q1, q2 ≫ 0,
(iii) if λ1 = λ2 = λ: q2, y1 = 0 and y2, q1 ≫ 0,

while the Perron vectors are all unique up to a scalar multiple.


Proof. See Appendix B.

Note that in case (i) q1 and y1 are the Perron vectors of A1, in case (ii) q2 and y2 are the Perron vectors of A2, and in case (iii) q1 is the left Perron vector of A1 and y2 is the right Perron vector of A2. In a Sraffian context the products of the sectors in group 2 are termed basic commodities, those of group 1 are called non-basic commodities. Basic commodities are required (directly or indirectly) for the production of all commodities (basic as well as non-basic). The production of basic commodities, however, does not require any non-basic commodity. The dominant eigenvalue (λ) may be interpreted in terms of the uniform rate of surplus within the quantity model or in terms of the maximum rate of profit within the price model. Both rates are, mathematically, obtained as (1/λ) − 1. The right Perron vector corresponds to the standard commodity and the left Perron vector to the price vector in the case of a zero surplus wage. From an economic point of view it seems plausible to assume that the dominant eigenvalue corresponds to the submatrix relating to the basic commodities, i.e. λ = λ2 > λ1. In this case assertion (ii) above applies, stating that the standard commodity is composed only of basic commodities. All prices are positive and the prices of the non-basic commodities depend on the prices of the basic commodities, as follows from q'1 = q'2 B(λI − A1)^{-1} with λq'2 = q'2 A2. The case λ = λ1 > λ2, where the dominant eigenvalue is related to the non-basic commodities, is considered to be highly unusual. From assertion (i) above it follows that the prices of the basic commodities become zero. The interpretation is that the prices of the non-basic commodities become infinite once they are expressed in terms of the price of a basic commodity. (For a detailed discussion of these matters, see Pasinetti, 1977, chapter 5.) The next theorem considers the convergence of A^k/(e'A^k e).

THEOREM 2.
Let A be reducible and partitioned as in (3), with A1 and A2 square, non-negative and primitive, and B > 0. Let q' and y denote the left and the right Perron vector of A. Then, lim_{k→∞} A^k/(e'A^k e) = yq'/[(e'y)(q'e)].

Proof. See Appendix B.

Next, the behaviour of (A/λ)^k is examined. Consider the term s1u'1 in equation (B1).^6 s1 corresponds to the right Perron vector of A, u'1 to the left Perron vector, where these are chosen such that u'1s1 = 1. For arbitrary Perron vectors y and q' it follows that s1u'1 = yq'/(q'y),^7 which shows that (A/λ)^k → yq'/(q'y). Given the conditions as stated in Theorem 2, equation (B1) applies if and only if λ1 ≠ λ2. In case λ1 = λ2 equation (B10) is to be used. Denote the left and the right Perron vectors of the separate submatrices Ai (i = 1, 2) by v'i ≫ 0 and wi ≫ 0, respectively.

^6 Equation numbers (B1), (B2), etc., refer to Appendix B.
^7 Take q' = αu'1 and y = βs1. Then q'y = αβu'1s1 = αβ, and yq' = αβs1u'1. Thus s1u'1 = yq'/(αβ) = yq'/(q'y).
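Theorem 2 is easy to probe numerically. The blocks below are invented for illustration (λ1 = 2 for A1, λ2 = 3 for A2, B > 0), so both diagonal blocks are primitive and λ1 ≠ λ2; the normalized power A^k/(e'A^k e) then settles on yq'/[(e'y)(q'e)]:

```python
# Hedged sketch of Theorem 2 for a reducible A = [[A1, 0], [B, A2]].
import numpy as np

A1 = np.array([[1.0, 1.0], [1.0, 1.0]])   # primitive, dominant eigenvalue 2
A2 = np.array([[2.0, 1.0], [1.0, 2.0]])   # primitive, dominant eigenvalue 3
B = np.ones((2, 2))                        # B > 0
A = np.block([[A1, np.zeros((2, 2))], [B, A2]])

M = A.copy()
M /= M.sum()
for _ in range(80):
    M = A @ M
    M /= M.sum()                           # M = A^k / (e'A^k e)

w, V = np.linalg.eig(A)
y = np.abs(V[:, np.argmax(w.real)].real)   # right Perron vector of A
w2, U = np.linalg.eig(A.T)
q = np.abs(U[:, np.argmax(w2.real)].real)  # left Perron vector of A
limit = np.outer(y, q) / (y.sum() * q.sum())
```

Consistent with Theorem 1(ii), the computed y has zeros in its first block while q is positive everywhere, and the limit matrix inherits a zero upper-right block.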


Note that the left Perron vector of A is given by q' = (v'1, 0') and the right Perron vector by y = (0', w'2)'. Further, write

A^k = [ A1^k   0
        B(k)   A2^k ].   (4)

It now follows from equations (B8)-(B10) that (Ai/λ)^k → wiv'i/(v'iwi) for i = 1, 2, as might have been expected. It also follows from these equations, however, that B(k) ≫ 0 becomes infinite as k → ∞. In summary, we have obtained the following corollary.

COROLLARY 1. Under the assumptions of Theorem 2,

(i) if λ1 ≠ λ2: lim_{k→∞} (A/λ)^k = yq'/(q'y),
(ii) if λ1 = λ2: lim_{k→∞} (Ai/λ)^k = wiv'i/(v'iwi), with v'i (respectively wi) the left (respectively right) Perron vector of Ai, for i = 1, 2.

The limiting properties of stochastic matrices are obtained straightforwardly from this corollary. To this end, partition the summation vector in accordance with (3), i.e. e' = (e'1, e'2). If A is row-stochastic, A1e1 = e1 and A2e2 < Be1 + A2e2 = e2. The Subinvariance Theorem implies that λ2 < 1 = λ1, and Corollary 1 yields

lim_{k→∞} A^k = [ e1v'1   0
                  e2v'1   0 ],   with v'1e1 = 1.

On the other hand, if A is column-stochastic, e'1A1 < e'1A1 + e'2B = e'1 and e'2A2 = e'2. The Subinvariance Theorem now implies that λ1 < 1 = λ2, and Corollary 1 yields

lim_{k→∞} A^k = [ 0      0
                  w2e'1  w2e'2 ],   with e'2w2 = 1.
Of course, when B > 0, A cannot be doubly stochastic. The matrix A is called completely reducible if B = 0. Within the context of multisector models, such matrices reflect the existence of subeconomies which act entirely independently from each other. As the proofs are straightforward, only the results are presented.

COROLLARY 2. Let A be reducible and partitioned as in (3), with A1 and A2 square, non-negative and irreducible, and B = 0. Then, up to a scalar,

(i) if λ2 < λ1 = λ: y = (w'1, 0')', q' = (v'1, 0'),
(ii) if λ1 < λ2 = λ: y = (0', w'2)', q' = (0', v'2),
(iii) if λ1 = λ2 = λ: y = (αw'1, w'2)', q' = (βv'1, v'2), with α and β scalars.

Note that in case (iii), the geometric multiplicity of λ equals two. When A is completely reducible the expression in (4) holds with B(k) = 0. The limiting behaviour of (A/λ)^k is then given by Corollary 1, using the Perron vectors of Corollary 2.

3. GENERALIZATIONS

In the previous section we have examined the behaviour of A^k/(e'A^k e) for the simplest case of a reducible matrix and for a completely reducible matrix. Within the context of the model in (1), it follows from equation (2) that the solution for the output shares is given by x(k) = A^k x(0)/[e'A^k x(0)]. The forward linkage indicators were, by means of a stepwise procedure, obtained as fk = nA^k e/(e'A^k e). As a first step we have taken the traditional direct linkages f1 = nAe/(e'Ae), implying that the initial solution yields f0 = e. Instead one might wish to start with an arbitrary initial weighting vector f0. This leads to fk = nA^k f0/(e'A^k f0). The indicators were defined such that on average they are equal to one. That is, e'fk/n = 1 for all k. A further generalization is obtained by requiring that the weighted average of the indicators equals one. Using an arbitrary positive vector r, this yields fk = nA^k f0/(r'A^k f0) with r'fk/n = 1 for all k. The same holds for the dynamic model in (1) when the shares in the total output value are considered. Given a constant price vector p, x(k) = A^k x(0)/[p'A^k x(0)] with p'x(k) = 1. Let r' > 0 denote the weights for the rows and c > 0 the weights for the columns. In accordance with (3) they are partitioned as r' = (r'1, r'2) and c = (c'1, c'2)'.

ASSUMPTION 1. Let r > 0 and c > 0 satisfy

(i) c1 > 0, if λ2 < λ1 = λ,

(ii) r2 > 0, if λ1 < λ2 = λ,
(iii) c1 > 0 and r2 > 0, if λ1 = λ2 = λ.

Since we are interested in the vectors A^k c/(r'A^k c) and r'A^k/(r'A^k c), we consider the behaviour of A^k/(r'A^k c). The following assertion follows straightforwardly from the equations (B2) and (B10).

COROLLARY 3. If Assumption 1 and the assumptions in Theorem 2 hold, then lim_{k→∞} A^k/(r'A^k c) = yq'/[(r'y)(q'c)].

As an immediate consequence, A^k c/(r'A^k c) → y/(r'y) and r'A^k/(r'A^k c) → q'/(q'c) as k → ∞. We have seen that the results in Theorem 1 can be given an interpretation within a Sraffian context. To this end the commodities are divided into basics and non-basics. The simplest case of a reducible matrix is (3). The basic commodities are in group 2, the non-basics in group 1. Since A1 and A2 are irreducible and B is positive, each basic commodity is required (directly or indirectly) for the production of any commodity. The irreducibility of A1 also implies that each non-basic commodity is required for the production of any non-basic commodity. This implicit assumption is somewhat unrealistic though. Therefore the general case is considered next.

ASSUMPTION 2. Let A be non-negative and reducible, and partitioned as follows

A = [ A1       0       ...             ...       0
      0        ...                               .
      ...            A_{g-1}   0                 .
      B_{g,1}  ...   B_{g,g-1}   A_g             .
      ...                                ...     0
      B_{m,1}  ...   ...       B_{m,m-1}   A_m ],   (5)

with Ai square and primitive for all i = 1, ..., m, and for all i = g, ..., m there exists a j = 1, ..., i − 1 such that Bij > 0.

The commodities in group m may be interpreted as basic commodities and all the other commodities are non-basics. If it is assumed that Bmi > 0 for each i = 1, ..., m − 1, it easily follows that each basic commodity is required within any production process. This assumption is only sufficient, however. A necessary and sufficient condition is that within each column of submatrices, there is at least one positive B matrix. Equivalently, for each j = 1, ..., g − 1: Bij > 0 for some i = g, ..., m, and for each j = g, ..., m − 1: Bij > 0 for some i = j + 1, ..., m. The matrix A is a 'Sraffa matrix' if, in addition to the condition above, λm > λi for all i = 1, ..., m − 1, where λi denotes the dominant eigenvalue of Ai.^8 Krause (1981, p. 177) employs the term 'Sraffa matrix', "because this type of structure seems to be considered by Sraffa as the only case of reducibility which is economically meaningful." The limiting properties of 'Sraffa matrices' (with primitive submatrices) are covered by the following corollary.

COROLLARY 4. Let A satisfy Assumption 2 and let the geometric multiplicity of the dominant eigenvalue be equal to one. Then lim_{k→∞} A^k/(e'A^k e) =

yq'/[(e'y)(q'e)].

Proof. See Appendix B.

When the geometric multiplicity (n1) of the dominant eigenvalue (λ) differs from one, the left and the right Perron vector are no longer unique (up to a scalar). In that case, n1 denotes the dimension of the eigenspace corresponding to λ. Finding a basis (i.e. n1 linearly independent vectors) for the right and the left eigenspace is, in general, far from simple.^9 Let the n1 right (respectively left) eigenvectors corresponding to λ be denoted by y^1, ..., y^{n1} (respectively q^{1'}, ..., q^{n1'}). Notice that the right (as well as the left) eigenvectors span an n1-dimensional subspace of R^n. The right and the left eigenvectors can therefore be appropriately chosen. For example, such that they satisfy the following assumption.

ASSUMPTION 3. q^{i'}y^j = 0 for i ≠ j and q^{i'}y^i > 0, for i, j = 1, ..., n1.

The next corollary asserts the limiting properties of both (A/λ)^k and A^k/(e'A^k e) when the algebraic and the geometric multiplicity are equal to each other.

COROLLARY 5. Let A satisfy Assumption 2 and let the eigenvectors be chosen so as to satisfy Assumption 3. If the algebraic multiplicity and the geometric multiplicity are both equal to n1, then lim_{k→∞} (A/λ)^k = Σ_{i=1}^{n1} y^i q^{i'}/(q^{i'}y^i) and lim_{k→∞} A^k/(e'A^k e) = [Σ_{i=1}^{n1} y^i q^{i'}/(q^{i'}y^i)]/[Σ_{i=1}^{n1} (e'y^i)(q^{i'}e)/(q^{i'}y^i)].

Proof. See Appendix B.
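As an illustration of the first limit in Corollary 5, take two primitive blocks sharing λ = 2 in a completely reducible arrangement (an invented example; strictly a completely reducible case, so read it as a check of the limit formula rather than of the corollary's exact hypotheses). Here n1 = 2, the algebraic and geometric multiplicities coincide, and the eigenvectors are chosen as in Assumption 3:

```python
# Hedged sketch of Corollary 5's first limit for A = diag(A1, A2), lambda = 2.
import numpy as np

A1 = np.array([[1.0, 1.0], [1.0, 1.0]])   # primitive, lambda1 = 2
A2 = np.array([[1.5, 0.5], [0.5, 1.5]])   # primitive, lambda2 = 2
A = np.block([[A1, np.zeros((2, 2))],
              [np.zeros((2, 2)), A2]])
lam = 2.0

M = np.linalg.matrix_power(A / lam, 100)  # (A/lambda)^k for large k

# Eigenvectors of lambda chosen so that q^i' y^j = 0 for i != j (Assumption 3).
y1 = np.array([1.0, 1.0, 0.0, 0.0]); q1 = y1
y2 = np.array([0.0, 0.0, 1.0, 1.0]); q2 = y2
limit = np.outer(y1, q1) / (q1 @ y1) + np.outer(y2, q2) / (q2 @ y2)
```

Each subeconomy converges to its own rank-one projector, and the overall limit is the block-diagonal sum of the two, exactly as the formula Σi y^i q^{i'}/(q^{i'}y^i) predicts.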

^8 See Krause (1981). As a matter of fact, the submatrices Ai are only required to be irreducible.
^9 Cooper (1973) shows that it is possible to express the positive Perron vectors of A in terms of the Perron vectors of the underlying submatrices Ai.


4. SUMMARY AND CONCLUSIONS

In this note some results have been derived on the behaviour of (functions of)

A^k, where A stands for a reducible, non-negative matrix with primitive submatrices along its main diagonal. The results are relevant for dynamic multisectoral analysis, because they provide an alternative for the often restrictive assumption of primitivity and the underlying assumption of irreducibility. The assumption of irreducibility can become problematic when the number of commodities becomes large. The results can also be useful in models where (ir)reducibility plays a crucial role, such as Sraffian-type models. Our results are also relevant for the study of certain types of interindustry linkages.

ACKNOWLEDGEMENTS

I wish to thank Lambert Schoonbeek for the valuable discussions and two referees for their helpful comments and suggestions.

REFERENCES

BERMAN, A. and PLEMMONS, R. J. (1979). Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York.
CHENERY, H. B. and WATANABE, T. (1958). 'International Comparisons of the Structure of Production', Econometrica, 26, 487-521.
COOPER, C. D. H. (1973). 'On the Maximum Eigenvalue of a Reducible Non-Negative Real Matrix', Mathematische Zeitschrift, 131, 213-17.
DIETZENBACHER, E. (1992). 'The Measurement of Interindustry Linkages: Key Sectors in the Netherlands', Economic Modelling, 9, 419-37.
FUKUI, Y. (1986). 'A More Powerful Method for Triangularizing Input-Output Matrices and the Similarity of Production Structures', Econometrica, 54, 1425-33.
GANTMACHER, F. R. (1974). The Theory of Matrices, Vol. II. Chelsea, New York.
KORTE, B. and OBERHOFER, W. (1971). 'Triangularizing Input-Output Matrices and the Structure of Production', European Economic Review, 2, 493-522.
KRAUSE, U. (1981). 'Heterogeneous Labour and the Fundamental Marxian Theorem', Review of Economic Studies, 48, 173-78.
NIKAIDO, H. (1968). Convex Structures and Economic Theory. Academic Press, New York.
PASINETTI, L. L. (1977). Lectures on the Theory of Production. Macmillan Press, London.
RASMUSSEN, P. N. (1956). Studies in Intersectoral Relations. North-Holland, Amsterdam.
SANTHANAM, K. V. and PATIL, R. H. (1972). 'A Study of the Production Structure of the Indian Economy: An International Comparison', Econometrica, 40, 159-76.
SENETA, E. (1981). Non-Negative Matrices and Markov Chains, 2nd edn. Springer-Verlag, New York.
SIMPSON, D. and TSUKUI, J. (1965). 'The Fundamental Structure of Input-Output Tables, An International Comparison', Review of Economics and Statistics, 47, 434-46.
SONG, B. (1977). 'The Production Structure of the Korean Economy: International and Historical Comparisons', Econometrica, 45, 147-62.
SRAFFA, P. (1960). Production of Commodities by Means of Commodities. Cambridge University Press, Cambridge.
STEENGE, A. E. (1986). 'Saaty's Consistency Analysis: An Application to Problems in Static and Dynamic Input-Output Models', Socio-Economic Planning Sciences, 20, 173-80.
STEENGE, A. E. (1987). 'Consistency and Composite Numeraires in Joint Production Input-Output Analysis; An Application of Ideas of T. L. Saaty', Mathematical Modelling, 9, 233-41.
STEENGE, A. E. (1990). 'On the Complete Instability of Empirically Implemented Dynamic Leontief Models', Economic Systems Research, 2, 3-16.
TAKAYAMA, A. (1985). Mathematical Economics, 2nd edn. Cambridge University Press, Cambridge.

APPENDIX A

This appendix contains all the examples referred to in the text.

Example 1

Consider the following primitive matrix

A = [ 3  1
      2  4 ].

Then λ = 5, μ2 = 2, y = (1, 2)', y^2 = (1, −1)'. Let x(0) = (11, 1)', then ξ1 = 4 and ξ2 = 7. The solution is relatively stable as lim_{t→∞} x1(t)/(λ^t y1) = lim_{t→∞} (4 × 5^t + 7 × 2^t)/5^t = 4 = ξ1 and lim_{t→∞} x2(t)/(λ^t y2) = lim_{t→∞} (8 × 5^t − 7 × 2^t)/(2 × 5^t) = 4 = ξ1. The distance, in Euclidean space, between x(t) and ξ1 λ^t y goes to infinity as t goes to infinity, however. Using ||x|| = (Σi xi²)^{1/2}, it turns out that ||x(t) − ξ1 λ^t y|| = 7√2 × 2^t.

Example 2

For the matrix A in example 1 we have q' = (1, 1). Together with y = (1, 2)' and x(0) = (11, 1)' this yields q'x(0)/(q'y) = 12/3 = 4, which is of course the same as ξ1 in example 1.

Example 3

Consider the following cyclic matrix

A = [ 0  1
      4  0 ].

Then λ = 2 and μ2 = −2, y = (1, 2)' and q' = (2, 1). For k = 0, 1, 2, ...

(A/λ)^{2k} = [ 1  0          (A/λ)^{2k+1} = [ 0  1/2
               0  1 ],                        2  0   ],

A^{2k}/(e'A^{2k}e) = [ 1/2  0          A^{2k+1}/(e'A^{2k+1}e) = [ 0    1/5
                       0    1/2 ],                               4/5  0   ].

Example 4

Consider the following matrix

A = [ a  0
      b  1 ],   with a, b > 0.


The eigenvalues are a and 1. To the eigenvalue a correspond a right eigenvector (a − 1, b)' and a left eigenvector (1, 0). To the eigenvalue 1 correspond a right eigenvector (0, 1)' and a left eigenvector (b, 1 − a). It is easily seen that for a ≠ 1

A^k = [ a^k                  0
        b(1 − a^k)/(1 − a)   1 ].

Distinguish the following three cases.

(i) a < 1. Then λ = 1 and the Perron vectors are y = (0, 1)' and q' = (b, 1 − a).

lim_{k→∞} (A/λ)^k = [ 0           0
                      b/(1 − a)   1 ] = yq'/(q'y),

lim_{k→∞} A^k/(e'A^k e) = 1/(1 + b − a) [ 0   0
                                          b   1 − a ] = yq'/[(e'y)(q'e)].

(ii) a = 1. Then λ = 1 is a double root. The Perron vectors are still unique (up to a scalar), however. Namely y = (0, 1)' and q' = (1, 0), while

A^k = [ 1   0
        kb  1 ].

Clearly lim_{k→∞} (A/λ)^k = lim_{k→∞} A^k does not converge. e'A^k e = kb + 2, so that

lim_{k→∞} A^k/(e'A^k e) = [ 0  0
                            1  0 ] = yq'/[(e'y)(q'e)].

(iii) a > 1. Then λ = a and the Perron vectors are y = (a − 1, b)' and q' = (1, 0). Hence

lim_{k→∞} (A/λ)^k = [ 1           0
                      b/(a − 1)   0 ] = yq'/(q'y),

lim_{k→∞} A^k/(e'A^k e) = 1/(a + b − 1) [ a − 1   0
                                          b       0 ] = yq'/[(e'y)(q'e)].

APPENDIX B

Proof of Theorem 1

From the Perron-Frobenius Theorem it is known that y > 0 and q > 0. As the matrices A1 and A2 are irreducible, λ > 0.

(i) If λ = λ1, A1y1 = λy1 implies that either y1 ≫ 0 or y1 = 0. Suppose y1 = 0, then By1 + A2y2 = A2y2 = λy2 yields y2 = 0, as λ is not a root of A2. This, however, contradicts y > 0, hence y1 is the right Perron vector of A1. It is unique up to a scalar and positive. Next, By1 + A2y2 = λy2 or y2 = (λI − A2)^{-1}By1, where I denotes the identity matrix of appropriate size. By assumption, λ > λ2. Hence, (λI − A2)^{-1} ≫ 0 and, together with By1 > 0, this yields y2 ≫ 0, which is also unique up to a scalar. With respect to the left Perron vector, q'2A2 = λq'2 implies q2 = 0. Then q'1A1 + q'2B = q'1A1 = λq'1 implies q1 ≫ 0 and unique up to a scalar multiple.

(ii) Analogous to the previous case.

(iii) λ = λ1 = λ2. A1y1 = λy1 implies either y1 ≫ 0 or y1 = 0, again. Suppose y1 ≫ 0, then λy2 = A2y2 + By1 > A2y2. It follows that y2 ≠ 0. Then y > 0 implies that y2 > 0. The Subinvariance Theorem (see, e.g. Seneta, 1981, p. 23) states that, if λy2 > A2y2 with λ > 0 and y2 > 0, the dominant eigenvalue of A2 is smaller than λ, which is a contradiction. Therefore, y1 = 0 and A2y2 = λy2 yields that y2 ≫ 0 is the Perron vector of A2, which is unique up to a multiplicative constant. Analogously, it is shown that q2 = 0 and q1 ≫ 0 and unique up to a scalar. ∎


Proof of Theorem 2

Consider the Jordan canonical form of A. There is a non-singular matrix S such that S^{-1}AS = J, where, in general,

J = [ J1           0
          J2
      0       ...
               Jh ],

with Ji an ni × ni Jordan block, i.e.

Ji = [ μi  1           0
           μi  1
               ...  1
       0            μi ].

Notice that the values /~i do not need to be distinct. The n u m b e r of Jordan blocks corresponding to the same eigenvalue /~ denotes the geometric multiplicity of this eigenvalue. The sum of the orders, of all the Jordan blocks corresponding to the same eigenvalue, gives its algebraic multiplicity. Further, of course, ~=~ n~ = n. Now distinguish the same three cases as in the T h e o r e m 1. Case (i), ~2 < ~,~ = ~.. Because of the primitivity of A~ and A 2 , ,~, = ~l and I/~[ < ~, for any i = 2 , . . . , h. Note that n~ = 1, so that the first Jordan block is a scalar, ,l~ = A. From S-JAS= J follows AS = KI. Let the columns of S be written as s~, s2~,... , s2,2,. •. , s , . . . . . sl, i. . . . . sm . . . . . sh,,. Then, AS = g i yields As~ = ;tst. As sj:/:0, this implies that s~ is the right Perron vector of A, hence s~ >>0. In general, A s , =/tls~, As~: = s~ + ~isi2 Ascii = s~.,~_~ + / ~ s ~ i. Clearly, S ~AkS = jk or A*S = KI*, which yields Aks~ = AkS~ and . . . . .

k k [k~ k-, Aksii=(o)l~S~j+~l]l~, s,.j-i+...+(jkl)l.~k-i+'s,=~(jkl)l~i-i+tsi,,,=, ,or;--,

.....

o,

-

.....

A ~=A~SS -~. Now, write S - ~ = U and denote the rows of U with u'~, u~ . . . . . u~,~, . . . . u~,~. . . . . u~,,~. Then S - ~ A S = U A U ~ = J , or U A = J U which yields u'~A = ~.u~ with u~ 4= 0. This implies that ul is the left Perron vector of A, hence u'~ > 0. Notice that the Perron vectors are chosen such that u')s) = 1. The product of two matrices, say F G , may be written as ~i f~g~', where f~ denotes the ith column of F and g" the ith row h ni of G. Thus A ~ = (A~S)U = Aks~u~ + Z~=2 Zj=, A~s~u[j, which yields Ak=A~s,u'+~(j

/=2/=1

k ' I=1

--

~ J+'

l}#~

,

SilUij

(B1)

and e,Ake = ),k(e's,)(u',e) +

k _ i / / ~ '+'(e's.)(u;,e)

(B2)

The summations consist of polynomials in/~i with maximum degree k, while [/~i[< ~. for all

i = l, . . . , h. W h e n k > j - l, (j k_ l) is a polynomial in k of degree min[j - l; k - j + l] = t t t j - 1-- ~. Therefore, A k / ( e ' A k e ) converges to slul/(e s0(ule).
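The case (i) limit can be illustrated numerically. The sketch below (hypothetical 1 x 1 blocks A1 = (2), A2 = (1), B = (1); not an example from the paper) checks that A^k/(e'A^ke) approaches yq'/(e'y)(q'e):

```python
def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matpow(X, k):
    """Naive k-th matrix power (k kept small enough to avoid overflow)."""
    R = [[float(i == j) for j in range(len(X))] for i in range(len(X))]
    for _ in range(k):
        R = matmul(R, X)
    return R

# Hypothetical case-(i) matrix: A = [[A1, 0], [B, A2]] with lam = 2 > lam2 = 1.
A = [[2.0, 0.0], [1.0, 1.0]]
y, q = [1.0, 1.0], [1.0, 0.0]   # right and left Perron vectors of A

# Predicted limit yq'/(e'y)(q'e) and the normalised power A^k/(e'A^k e).
limit = [[y[i] * q[j] / (sum(y) * sum(q)) for j in range(2)] for i in range(2)]
Ak = matpow(A, 40)
total = sum(sum(row) for row in Ak)          # e'A^k e
approx = [[Ak[i][j] / total for j in range(2)] for i in range(2)]
```

Here A^k = [[2^k, 0], [2^k - 1, 1]], so the normalised powers tend to [[1/2, 0], [1/2, 0]] geometrically fast, the rate being governed by λ2/λ = 1/2.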


Note that the Perron vectors s1 and u'1 were chosen such that u'1s1 = 1. In general, q' = αu'1 and y = βs1, which yields q'y = αβ and, therefore, s1u'1 = yq'/(q'y) and s1u'1/(e's1)(u'1e) = yq'/(e'y)(q'e).

Case (ii), λ1 < λ2 = λ, is proved in the same way.

Case (iii), λ1 = λ2 = λ. From Theorem 1 it is known that the Perron vectors are unique up to a scalar multiple, so that the geometric multiplicity of λ is equal to one. The first Jordan block then becomes

    J1 = [ λ  1 ]
         [ 0  λ ]

From S^{-1}AS = J = UAU^{-1} follows AS = SJ and UA = JU. Denote the columns of S by sij and the rows of U by u'ij, where j = 1, ..., ni and i = 1, ..., h. Note that n1 = 2. Consider the first two columns of the matrix AS (= SJ): As11 = λs11 and

    As12 = s11 + λs12.        (B3)

Now partition the vectors s11 and s12 according to equation (3), using the following notations:

    s11 = [ s11^(1) ]    and    s12 = [ s12^(1) ]
          [ s11^(2) ]              [ s12^(2) ]

Then As11 = λs11 and (B3) become

    A1 s11^(1) = λ s11^(1)        (B4)
    B s11^(1) + A2 s11^(2) = λ s11^(2)        (B5)
    A1 s12^(1) = s11^(1) + λ s12^(1)        (B6)
    B s12^(1) + A2 s12^(2) = s11^(2) + λ s12^(2)        (B7)

Denote the left and the right Perron vector of Ai (i = 1, 2) by v'i >> 0, respectively wi >> 0. Moreover, let these vectors be chosen such that v'iwi = 1. Notice that the left Perron vector of A equals (a scalar multiple of) (v'1, 0') and the right Perron vector of A is (a scalar multiple of) (0', w'2)'. Premultiplying (B6) with the left Perron vector v'1 of A1 gives v'1A1s12^(1) = v'1s11^(1) + λv'1s12^(1), with v'1A1 = λv'1. This yields v'1s11^(1) = 0, which implies s11^(1) = 0, as v1 >> 0. Next, (B5) with s11^(1) = 0 gives A2s11^(2) = λs11^(2), while s11 ≠ 0 of course. Thus, s11^(2) is the right Perron vector of A2; take s11^(2) = w2 >> 0. Premultiplying (B7) with the left Perron vector v'2 of A2 gives v'2Bs12^(1) + v'2A2s12^(2) = v'2s11^(2) + λv'2s12^(2), with v'2A2 = λv'2. Thus, v'2Bs12^(1) = v'2s11^(2) = v'2w2 = 1. As v'2B > 0, it is clear that s12^(1) ≠ 0. Then, (B6) with s11^(1) = 0 yields A1s12^(1) = λs12^(1), which implies that s12^(1) is the right Perron vector of A1; take s12^(1) = w1 >> 0. We have thus obtained

    s11 = [ 0  ]    and    s12 = [ w1      ]
          [ w2 ]              [ s12^(2) ]

Notice that s11 is the right Perron vector of A. In the same way it may be derived from UA = JU that the first two rows of U, in their partitioned form, read as follows:

    u'11 = (u11^(1)', v'2)    and    u'12 = (v'1, 0').        (B9)

Thus, u'12 is the left Perron vector of A. As U = S^{-1}, the following restrictions are obtained from US = I: v'2w2 = v'1w1 = 1 and u11^(1)'w1 + v'2s12^(2) = 0. Analogously to case (i), it follows that

    A^k = A^ks11u'11 + A^ks12u'12 + Σ_{i=2}^{h} Σ_{j=1}^{ni} A^ksiju'ij
        = λ^ks11u'11 + kλ^{k-1}s11u'12 + λ^ks12u'12 + Σ_{i=2}^{h} Σ_{j=1}^{ni} Σ_{l=1}^{j} C(k, j-l) μi^{k-j+l} silu'ij        (B10)

since A^ks11 = λ^ks11 and A^ks12 = kλ^{k-1}s11 + λ^ks12. As k becomes large, the term kλ^{k-1}s11u'12 dominates both the λ^k terms and the summation, in which |μi| < λ. Moreover, s11 and u12 are positive, so that (e's11) > 0 and (u'12e) > 0. Hence, A^k/(e'A^ke) converges to s11u'12/(e's11)(u'12e). Again, for left and right Perron vectors q', respectively y, which are scalar multiples of u'12, respectively s11, A^k/(e'A^ke) converges to yq'/(e'y)(q'e). ■
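Case (iii) behaves differently from case (i): (A/λ)^k itself diverges, but normalising by e'A^ke still yields convergence. A minimal sketch with a hypothetical matrix having λ1 = λ2 = 2 (not from the paper; the convergence is only of order 1/k here, hence the many renormalised iterations):

```python
def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Hypothetical case-(iii) matrix: A1 = (2), A2 = (2), B = (1), so A has a
# 2x2 Jordan block for lam = 2 (geometric multiplicity one).
A = [[2.0, 0.0], [1.0, 2.0]]
y, q = [0.0, 1.0], [1.0, 0.0]     # right and left Perron vectors of A
limit = [[y[i] * q[j] / (sum(y) * sum(q)) for j in range(2)] for i in range(2)]

# Iterate M_k = A^k/(e'A^k e), renormalising each step to avoid overflow.
M = A
for _ in range(100000):
    M = matmul(M, A)
    s = sum(sum(row) for row in M)
    M = [[x / s for x in row] for row in M]

# In contrast, (A/lam)^k diverges: its (2,1) entry equals k/2.
H = [[x / 2.0 for x in row] for row in A]
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(100):
    P = matmul(P, H)
```

Since A^k = [[2^k, 0], [k 2^{k-1}, 2^k]], the normalised (2,1) entry equals k/(k+4), confirming both the O(1/k) convergence to yq'/(e'y)(q'e) = [[0, 0], [1, 0]] and the divergence of (A/λ)^k.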

Proof of Corollary 4. Let n1 be the algebraic multiplicity of the dominant eigenvalue λ. Since the geometric multiplicity equals one, the first Jordan block, of order n1 x n1, is given by

    J1 = [ λ  1          ]
         [    λ  .       ]
         [         .  1  ]
         [            λ  ]

Any other Jordan block Ji contains μi ≠ λ along its main diagonal. Consider AS = SJ and denote the first n1 columns of S by s11, ..., s1n1. Then we have As11 = λs11, As12 = s11 + λs12, ..., As1n1 = s1,n1-1 + λs1n1. Since the geometric multiplicity is one, the Perron vector is unique up to a scalar. Thus, s11 is the right Perron vector. Next, UA = JU, where the first n1 rows of U are denoted by u'11, ..., u'1n1, gives the following equations:

    u'11 A = λu'11 + u'12
    ...
    u'1,n1-1 A = λu'1,n1-1 + u'1n1
    u'1n1 A = λu'1n1

This implies that u'1n1 is the left Perron vector. Equation (B10) now becomes

    A^k = Σ_{j=1}^{n1} A^ks1ju'1j + Σ_{i=2}^{h} Σ_{j=1}^{ni} A^ksiju'ij

Since |μi| < λ, only the first part of this expression is relevant for the limiting property. The binomial coefficients C(k, j-l) = (k over j-l) are polynomials in k of degree min[j - l; k - j + l] = j - l as k becomes large. The maximum degree for the polynomials in the first part is obtained for (and only for) j = n1 and l = 1. Thus A^k/(e'A^ke) converges to s11u'1n1/(e's11)(u'1n1e). ■
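Corollary 4 can be illustrated on a hypothetical matrix with a single Jordan chain of length n1 = 3 for λ = 2 (geometric multiplicity one; not an example from the paper). The predicted limit s11u'1n1/(e's11)(u'1n1e) has a single nonzero entry:

```python
def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# lam = 2 with algebraic multiplicity 3 and geometric multiplicity 1:
A = [[2.0, 0.0, 0.0],
     [1.0, 2.0, 0.0],
     [0.0, 1.0, 2.0]]
# Jordan chain: s11 = (0,0,1)' is the right Perron vector and u'13 = (1,0,0)
# the left one, so the predicted limit of A^k/(e'A^k e) is s11 u'13 itself.
s11, u13 = [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]
limit = [[s11[i] * u13[j] / (sum(s11) * sum(u13)) for j in range(3)]
         for i in range(3)]

# Renormalised iteration of M_k = A^k/(e'A^k e); convergence is O(1/k).
M = A
for _ in range(20000):
    M = matmul(M, A)
    s = sum(sum(row) for row in M)
    M = [[x / s for x in row] for row in M]
```

The dominant term of A^k is C(k,2) 2^{k-2} in the (3,1) position, exactly the j = n1, l = 1 term singled out in the proof.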

Proof of Corollary 5. From the equality of the algebraic and the geometric multiplicity (= n1), it follows that the Jordan form J consists of (amongst others) n1 Jordan blocks, each of which is given by the scalar λ. That is, Ji = λ for i = 1, ..., n1. Let the first n1 columns of S be denoted by s1, ..., sn1 and the first n1 rows of U by u'1, ..., u'n1. Then AS = SJ implies Asi = λsi for i = 1, ..., n1. Thus, si is a right eigenvector of A. Similarly, UA = JU yields u'iA = λu'i (i = 1, ..., n1), so that u'i is a left eigenvector of A. From U = S^{-1}, or US = I, it follows that u'isi = 1 for i = 1, ..., n1 and u'isj = 0 for i ≠ j. Take y^i and q^i' proportional to si, respectively u'i. Then siu'i = y^iq^i'/(q^i'y^i). Equation (B1) changes into

    A^k = Σ_{i=1}^{n1} λ^k siu'i + Σ_{i=n1+1}^{h} Σ_{j=1}^{ni} A^ksiju'ij

It immediately follows that lim_{k→∞} (A/λ)^k = Σ_{i=1}^{n1} siu'i = Σ_{i=1}^{n1} y^iq^i'/(q^i'y^i) and lim_{k→∞} A^k/(e'A^ke) = [Σ_{i=1}^{n1} y^iq^i'/(q^i'y^i)]/[Σ_{i=1}^{n1} (e'y^i)(q^i'e)/(q^i'y^i)]. ■
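Corollary 5 can be checked on a hypothetical matrix whose dominant eigenvalue λ = 2 has algebraic and geometric multiplicity 2 (not an example from the paper); here (A/λ)^k itself converges to Σ siu'i:

```python
def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matpow(X, k):
    """Naive k-th matrix power."""
    R = [[float(i == j) for j in range(len(X))] for i in range(len(X))]
    for _ in range(k):
        R = matmul(R, X)
    return R

# lam = 2 with algebraic multiplicity = geometric multiplicity = 2:
A = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [1.0, 1.0, 1.0]]
# Right eigenvectors y1 = (1,0,1)', y2 = (0,1,1)'; left ones q1' = (1,0,0),
# q2' = (0,1,0), with qi'yi = 1.  The predicted limit of (A/2)^k is then
# y1*q1' + y2*q2' = [[1,0,0],[0,1,0],[1,1,0]].
H = [[x / 2.0 for x in row] for row in A]
P = matpow(H, 60)
expected = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]
```

In contrast to Corollary 4, convergence here is geometric (the error is of order 2^{-k}), because no Jordan chain for λ contributes polynomial factors in k.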