Journal of Mathematical Economics 1 (1974) 199-205. © North-Holland Publishing Company

THE FROBENIUS THEOREM, ITS SOLOW-SAMUELSON EXTENSION AND THE KUHN-TUCKER THEOREM

M. MORISHIMA and T. FUJIMOTO*
London School of Economics, London, England

Received August 1973, revised version received April 1974

In economics, the Frobenius theorem and its extension are often used in the theory of linear models and the theory of balanced growth. In these contexts optimization does not appear explicitly. Our proof by the Kuhn-Tucker theorem may lead to some interesting economic reinterpretations of the Frobenius root and eigenvectors in these models. As for economic applications, see, for example, Morishima (1964); the extended bibliography of Debreu and Herstein (1953) is also useful.

1. Introduction

The Frobenius theorem concerning non-negative square matrices is the key theorem in the analysis of linear economic models. It asserts a number of propositions, among which the following is the most basic: any non-negative, square and indecomposable matrix $A$ has a positive characteristic root $\lambda$ with which a positive eigenvector $X$ is associated. This proposition, proved by Frobenius (1908/9) in an elementary way, was later proved by Wielandt (1950) in a simpler way by applying Brouwer's fixed-point theorem.¹ [Wielandt's proof is familiar among economists through Debreu and Herstein (1953).] Then Karlin (1959) and Nikaido (1969) proved the theorem in elementary ways that avoided the fixed-point theorem. A recent proof by Arrow and Hahn (1972) is the same as Karlin's. The proof given by Murata (1972) is somewhat similar to the original one by Frobenius. Later, a non-linear extension of the theorem was discussed by Solow and Samuelson (1953) and then by Morishima (1964). In doing so they slightly generalised Wielandt's method; in fact, in their articles Brouwer's fixed-point theorem was again used to establish the existence of a positive eigenvalue and a positive eigenvector.

This note provides two alternative proofs of the theorem in the non-linear case. They are related to Karlin's and Nikaido's proofs. However, one of them, discussed in section 3, uses the Kuhn-Tucker (1950) theorem explicitly, while the other, in section 4, is elementary and seems useful in the classroom for students of economics, as it is simple and enables one to dispense with the Kuhn-Tucker theorem as well as the fixed-point theorem.

*The authors acknowledge the comments given by the referees.
¹Before Wielandt, Rutmann (1938, 1940) had extended the Frobenius theorem to the case of linear operators in Banach spaces by using Schauder's fixed-point theorem.

2. Assumptions

We use the following notation for vector comparisons. For two vectors $X^1$ and $X^2$: (a) $X^1 \leqq X^2$ means $X_i^1 \leqq X_i^2$ for all $i$; (b) $X^1 \leq X^2$ means $X^1 \leqq X^2$ and $X^1 \neq X^2$; and (c) $X^1 < X^2$ means $X_i^1 < X_i^2$ for all $i$.

Let $H(X)'$ be² $(H_1(X), H_2(X), \ldots, H_n(X))$, a vector function from $R^n$ (the $n$-dimensional Euclidean space) into itself. In addition to continuity and differentiability of $H(X)$, we assume:

(A.1) - Homogeneity. $H(\alpha X) = \alpha H(X)$ for any number $\alpha$.
(A.2) - Non-negativeness. $H(X) \geqq 0$ for all $X \geqq 0$.
(A.3) - Monotonicity. $H(X^1) \leqq H(X^2)$ for all $X^1$ and $X^2$ such that $X^1 \leqq X^2$.
(A.4) - Indecomposability. For any non-negative vector $Y'$ having some zero elements, at least one of its zero elements is converted into a positive member by the transformation $Y'(\partial H/\partial X)$.

It is easy to see that if $A$ is a constant square matrix that is non-negative and indecomposable, then $H(X) \equiv AX$ satisfies these four assumptions.

²Throughout this paper, an accent applied to a vector denotes the transposition of that vector.
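For readers who wish to experiment, the following minimal sketch (not part of the original paper) checks assumptions (A.1)-(A.4) numerically for the linear specialisation $H(X) = AX$. The matrix $A$, the test points and the use of NumPy are illustrative assumptions; for the linear case $\partial H/\partial X = A$, so (A.4) can be checked directly over all proper zero-patterns when $n$ is small.

```python
import itertools
import numpy as np

# A minimal sketch (not from the paper): the four assumptions of section 2
# specialised to the linear map H(X) = AX with A non-negative and indecomposable.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [3.0, 0.0, 0.0]])   # hypothetical example matrix

def H(X):
    return A @ X

rng = np.random.default_rng(0)
X = rng.random(3)
alpha = 1.7

# (A.1) Homogeneity: H(aX) = aH(X).
assert np.allclose(H(alpha * X), alpha * H(X))

# (A.2) Non-negativeness: H(X) >= 0 whenever X >= 0.
assert np.all(H(X) >= 0)

# (A.3) Monotonicity: X1 <= X2 (componentwise) implies H(X1) <= H(X2).
X2 = X + rng.random(3)
assert np.all(H(X) <= H(X2))

# (A.4) Indecomposability: for every non-negative Y with a non-empty set of
# zero components, Y'(dH/dX) = Y'A is positive in at least one of those zero
# positions.  With dH/dX = A constant, all proper zero-patterns can be checked.
n = A.shape[0]
for zeros in itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(1, n)):
    Y = np.ones(n)
    Y[list(zeros)] = 0.0
    assert np.any((Y @ A)[list(zeros)] > 0), f"decomposable at zero set {zeros}"

print("assumptions (A.1)-(A.4) hold for this A")
```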

3. A proof using the Kuhn-Tucker theorem

In this section we apply the Kuhn-Tucker theorem to establish the following generalised Frobenius theorem. (A.1)-(A.4) are all assumed.

Theorem.
(i) There are a positive number $\lambda^*$ and a positive vector $X^*$ fulfilling $H(X^*) = \lambda^* X^*$.
(ii) $X^*$ is unique up to a factor of proportionality.
(iii) If $\lambda \neq \lambda^*$, then there is no $X \geq 0$ such that $H(X) = \lambda X$.
(iv) If $|\lambda| > \lambda^*$, then there is no $X \neq 0$ such that $H(X) = \lambda X$.

Proof.

(i) Let us first consider the problem of minimising $\lambda$ subject to

$H(X) \leqq \lambda X$,   (1)
$e'X \leqq 1$,   (2)
$e'X \geqq 1$,   (3)
$X \geqq 0$,   (4)

where $e'$ is a row vector whose elements are all unity. We write $S = \{X \mid X \geqq 0,\ e'X = 1\}$. It is obvious that for a given $X > 0$ in $S$, the minimum $\lambda$ that satisfies (1) is given as

$\lambda(X) \equiv \max_i H_i(X)/X_i$.   (5)

Take any $X^0 > 0$ in $S$ and put $\lambda^0 = \lambda(X^0)$. Define $C^0 = \{X \mid X \in S,\ H(X) \leqq \lambda^0 X\}$. $X \in C^0$ implies $X > 0$, because otherwise we would have a contradiction.³ Since $C^0$ is bounded and closed and (5) is continuous on $C^0$, it takes on a smallest value $\lambda^*$ at a point $X^*$ in $C^0$. Therefore, $X^* > 0$. It is then seen that the $\lambda^*$ thus determined [i.e., the minimum of $\lambda$ in $C^0$ subject to (1)] gives a solution to our minimising problem [i.e., the minimum of $\lambda$ subject to (1)-(4)], because it is evident that in $S - C^0$ there is no $X$ such that $H(X) \leqq \lambda^* X$ $(\leqq \lambda^0 X)$. Moreover, from (5), $|\partial\lambda(X)/\partial X| < \infty$ if $X > 0$;⁴ consequently there is no singular point such as an outward cusp in $C^0$. Hence we can apply the Kuhn-Tucker theorem to our minimising problem.

³Let $Y$ be defined as a vector such that $Y_i = 0$ if $X_i > 0$ and $Y_i > 0$ if $X_i = 0$. Then from $H(X) \leqq \lambda^0 X$ we obtain $Y'(\partial H/\partial X)X \leqq \lambda^0 Y'X = 0$, as $H$ is homogeneous of degree one in $X$. On the other hand, the extreme left-hand side of this expression is positive, because assumption (A.4) implies that at least one $(Y'(\partial H/\partial X))_i$ must be positive for some $i$ with $X_i > 0$; a contradiction.
⁴This holds almost everywhere. At points where $\lambda(X)$ is not differentiable, it has both right-hand and left-hand derivatives and they never take on $\pm\infty$.
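Before turning to the Lagrangian, a small numerical illustration may help. By (5), $\lambda^*$ is the smallest value of $\max_i H_i(X)/X_i$ over $S$. The sketch below is not part of the paper's argument: it uses a hypothetical non-linear $H$ satisfying (A.1)-(A.3) and the normalised iteration $X \leftarrow H(X)/e'H(X)$, whose convergence is not guaranteed by the assumptions alone; the bounds $\min_i H_i(X)/X_i \leqq \lambda^* \leqq \max_i H_i(X)/X_i$ (valid for any $X > 0$ in $S$ by homogeneity and monotonicity) are printed as a check.

```python
import numpy as np

# A minimal sketch (not part of the paper's argument): approximating
# lambda* = min_{X in S} max_i H_i(X)/X_i for a hypothetical non-linear H
# that is homogeneous of degree one, non-negative and monotone ((A.1)-(A.3)).
def H(X):
    x1, x2, x3 = X
    return np.array([2.0 * np.sqrt(x1 * x2),   # H1
                     x1 + x3,                   # H2
                     x2])                       # H3

# Normalised fixed-point iteration X <- H(X)/e'H(X).  Its convergence is NOT
# guaranteed by (A.1)-(A.4) alone, so the Collatz-Wielandt-type bounds
#   min_i H_i(X)/X_i  <=  lambda*  <=  max_i H_i(X)/X_i     (for X > 0 in S)
# are printed as a check on the approximation.
X = np.full(3, 1.0 / 3.0)                       # start at the barycentre of S
for _ in range(200):
    X = H(X) / H(X).sum()

ratios = H(X) / X
print("lower bound           :", ratios.min())
print("upper bound           :", ratios.max())
print("approximate X* (e'X=1):", X)
```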

Y’(IX-H(X))-~(l-e’X)-v(e’X-l),

(6)

which is minimized with respect to 2 and X and maximized with respect to Y, ~1and v. At the minimum point, 1 *, X*, the following conditions are fulfilled: 1-y’xz -JY’+

0,

(7)

Y’(ZZ/8X)+pe’-ve’

-nx+H(X)

5 0,

h 0,

(8) (9)

e’X5

1,

(10)

e’X2

1,

(11)

3Let Y be detined as a vector such that Y, = 0 if X, > 0 and Yr > 0 if X, = 0. Then from H(X) S IoX, we obtain Y’ (%I/BX) X 5 1” Y’X = 0, as H is homogeneous of degree one in X. On the other hand, the extreme left-hand side of this expression is positive, because assurnption (A.4) implies that at least one (Y’@H/aX), must be positive for some X, > 0, a contradiction. 4This holds almost everywhere. At points where A(X) is not differentiable, it has both righthand and left-hand derivatives and they never take on *CO.

202

M. Morishima and T. Fujimoto, The Frobenius theorem

together with the additional conditions : Y’ L 0, p L 0, v r_ 0,

(12)

L(l-

(13)

YX) = 0,

(-JY+

Y’(aH/aX)+fie’-ve’)X

Y(-nX+H(X))

= 0,

= 0,

(14) (15)

p(l-e’X)

= 0,

(16)

v(l-e’X)

= 0.

(17)

We have already seen that $X^* > 0$. Assumptions (A.2)-(A.4) imply that $H(X) > 0$ if $X > 0$. Therefore the minimum $\lambda^*$ is positive by (5), and the corresponding $Y$ is non-negative and non-zero by (12) and (13). It is also seen that the homogeneity of degree one of $H(X)$, together with (10), (11), (14) and (15), implies $\mu = \nu$, and that (8) holds with equality as $X^* > 0$. Next, we show that $Y > 0$. Suppose the contrary; that is, some elements of $Y$ are zero. Then eq. (8) reduces to $(Y'(\partial H/\partial X))_i = \nu - \mu$ for those $i$ whose $Y_i$ are zero. On the other hand, assumption (A.4) implies that $(Y'(\partial H/\partial X))_i$ is positive for at least one $i$ with $Y_i = 0$. Therefore $\nu - \mu > 0$, a contradiction. Hence $Y > 0$, from which we obtain $\lambda^* X^* = H(X^*)$ by taking (9) and (15) into account.⁵

The statements (ii)-(iv) are proved in the following way. As $\lambda^*$ is the solution to the above minimisation problem, statement (iii) trivially holds for $\lambda < \lambda^*$. Suppose now that for a $\lambda$ such that $|\lambda| \geqq \lambda^*$ there is an $\bar X \neq 0$ which satisfies $H(\bar X) = \lambda \bar X$. Let $R = \{i \mid \bar X_i \neq 0\}$ and $\alpha = \min_{i \in R} X_i^*/|\bar X_i|$. Then $Z \equiv X^* - \alpha|\bar X| \geqq 0$, where $|\bar X|$ represents the vector $(|\bar X_1|, \ldots, |\bar X_n|)$. Evidently, $\alpha > 0$. By assumptions (A.2) and (A.3), we have $H(|\bar X|) \geqq |H(\bar X)| = |\lambda \bar X| = |\lambda|\,|\bar X|$. Therefore,

$H(Z + \alpha|\bar X|) = H(X^*) = \lambda^* X^* = \lambda^*(Z + \alpha|\bar X|) \leqq \lambda^* Z + \alpha|\lambda|\,|\bar X| \leqq \lambda^* Z + H(\alpha|\bar X|)$.   (18)

If $Z \geq 0$, then $Z_i = 0$, by the definition of $\alpha$, for at least one $i$. From (18) we have

$H(Z + \alpha|\bar X|) - H(\alpha|\bar X|) \leqq \lambda^* Z$.   (19)

Define $T = \{i \mid Z_i = 0\}$. From (19) we have $(\partial H/\partial X)_{ij} = 0$ for $i \in T$, $j \notin T$. Then by choosing $Y$ such that $Y_j = 0$ for $j \notin T$ and $Y_j > 0$ for $j \in T$, we have a contradiction to assumption (A.4). Hence $Z = 0$, which implies $|\lambda| = \lambda^*$ because of (18); and $\bar X$ must equal either $(1/\alpha)X^*$ or $-(1/\alpha)X^*$ because of assumption (A.4) and (18). This establishes statements (ii), (iii) and (iv) of the theorem.

⁵Because of the homogeneity we have $\lambda^* X^* = (\partial H(X^*)/\partial X)X^*$. Also, because $\mu = \nu$ and $X^* > 0$, we have $Y'(\partial H(X^*)/\partial X) = \lambda^* Y'$ from (8) and (14). Thus $X^*$ and $Y'$ are the column and the row eigenvectors of $\partial H(X^*)/\partial X$ associated with $\lambda^*$.

Remark 1. In the above we can dispense with the differentiability of $H(X)$ [which enters through assumption (A.4)]. Since $H(X)$ is continuous and monotonic with respect to each variable, it has right-hand and left-hand derivatives, $\partial H^+/\partial X$ and $\partial H^-/\partial X$, respectively, everywhere. (In fact, it is differentiable almost everywhere.) We can then replace (A.4) by similar assumptions concerning $\partial H^+/\partial X$ and $\partial H^-/\partial X$, respectively, and apply the Kuhn-Tucker theorem, regarding these derivatives as if they were derived from different constraints.
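Footnote 5 identifies $X^*$ and $Y'$ as the column and row eigenvectors of $\partial H(X^*)/\partial X$ associated with $\lambda^*$. The sketch below is not from the paper: it checks this numerically for the hypothetical example $H$ used earlier, using a forward-difference Jacobian; the step size, the iteration count and the example itself are assumptions made for illustration.

```python
import numpy as np

# A minimal sketch (not from the paper) illustrating footnote 5: at the
# solution, X* and Y' are the column and row eigenvectors of dH(X*)/dX
# associated with lambda*.  H and the iteration are the hypothetical example
# used in the earlier sketch.
def H(X):
    x1, x2, x3 = X
    return np.array([2.0 * np.sqrt(x1 * x2), x1 + x3, x2])

X = np.full(3, 1.0 / 3.0)
for _ in range(200):                   # approximate X* as in the earlier sketch
    X = H(X) / H(X).sum()
lam = (H(X) / X).max()                 # approximate lambda*

def jacobian(F, X, h=1e-7):
    """Forward-difference approximation to dF/dX at X (illustrative only)."""
    n = len(X)
    J = np.zeros((n, n))
    for j in range(n):
        E = np.zeros(n)
        E[j] = h
        J[:, j] = (F(X + E) - F(X)) / h
    return J

J = jacobian(H, X)
# Euler's theorem for degree-one homogeneity: (dH/dX) X* = H(X*) = lambda* X*.
print("J X* - lambda* X* :", J @ X - lam * X)
# Left (row) eigenvector of J for its dominant root, normalised to be positive.
w, V = np.linalg.eig(J.T)
Y = np.real(V[:, np.argmax(np.real(w))])
Y = Y / Y.sum()
print("Y'J - lambda* Y'  :", Y @ J - lam * Y)
```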

4. An elementary proof

In this section we are concerned with a slightly more general case in which $H(X)$ is continuous but not necessarily differentiable. An elementary proof will be given under the following new assumption of indecomposability, which plays the same role as assumption (A.4).

(A.4)' - Indecomposability. For any non-empty subset of indices $R = \{i_1, i_2, \ldots, i_m\} \subset \{1, 2, \ldots, n\}$, the relations $X_i^1 = X_i^2$ for $i \in R$ and $X_h^1 < X_h^2$ for $h \notin R$ imply that there exists at least one $i \in R$ such that $H_i(X^1) \neq H_i(X^2)$.

The other three assumptions, (A.1)-(A.3), are kept throughout. Define the sets

$C(\lambda) = \{X \mid X \in S,\ H(X) \leqq \lambda X\}$,
$D(\lambda) = \{X \mid X \in S,\ H(X) < \lambda X\}$,
$E(\lambda) = \{X \mid X \in S,\ H(X) \leq \lambda X\}$,
$C^+(\lambda) = \{X \mid X \in S,\ X > 0,\ H(X) \leqq \lambda X\}$.
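The following minimal sketch (not from the paper) simply encodes the membership tests behind $C(\lambda)$, $D(\lambda)$ and $E(\lambda)$, using the ordering conventions of section 2 and the linear example from the earlier sketches; the tolerances are arbitrary illustrative choices.

```python
import numpy as np

# A minimal sketch (not from the paper) of the membership tests behind the
# sets C(lambda), D(lambda) and E(lambda).  H is the linear example used in
# the earlier sketches, S is the unit simplex, and the tolerances are
# arbitrary illustrative choices.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [3.0, 0.0, 0.0]])

def H(X):
    return A @ X

TOL = 1e-12

def in_S(X):
    return np.all(X >= -TOL) and abs(X.sum() - 1.0) < 1e-9

def in_C(X, lam):      # H(X) <= lam X componentwise
    return in_S(X) and np.all(H(X) <= lam * X + TOL)

def in_D(X, lam):      # strict inequality in every component
    return in_S(X) and np.all(H(X) < lam * X - TOL)

def in_E(X, lam):      # componentwise <=, but not equality throughout
    return in_C(X, lam) and not np.allclose(H(X), lam * X)

# Example: the barycentre of S with lambda = max_i H_i(X)/X_i, which puts X
# on the boundary of C(lambda): in C and in E, but not in D.
X = np.full(3, 1.0 / 3.0)
lam = float(np.max(H(X) / X))
print(in_C(X, lam), in_D(X, lam), in_E(X, lam))   # True False True
```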

We then have the following lemmas.

Lemma 1. $C(\lambda) = C^+(\lambda)$ for any $\lambda > 0$.

Proof. Obviously $C(\lambda) \supseteq C^+(\lambda)$. Suppose $C(\lambda) \neq C^+(\lambda)$ for some $\lambda$. Then $C(\lambda)$ has an $X^*$ at least one of whose elements is zero. Put $R = \{i \mid X_i^* = 0\}$ and decrease each $X_h^*$, $h \notin R$, by a sufficiently small amount, so that $R$ remains unchanged. Then, because of assumptions (A.3) and (A.4)', at least one $H_i$, $i \in R$, becomes negative after the decrease. This contradicts (A.2); hence $C(\lambda) = C^+(\lambda)$.

Lemma 2. If $D(\lambda)$ is empty for some $\lambda > 0$, then $E(\lambda)$ is empty as well for the same $\lambda$.


Proof. Suppose $E(\lambda)$ is not empty for some $\lambda > 0$. Then there must be an $X^0$ which satisfies $H_i(X^0) \leqq \lambda X_i^0$ for the given $\lambda$, with equality for at least one $i$ (if there is no such equality, $X^0$ itself belongs to $D(\lambda)$ and we are done). As $C(\lambda) = C^+(\lambda)$ and $C(\lambda) \supseteq E(\lambda)$, we have $C^+(\lambda) \supseteq E(\lambda)$; hence $X^0 > 0$. Next define $R = \{i \mid H_i(X^0) = \lambda X_i^0\}$ and diminish each $X_h^0$, $h \notin R$, by a sufficiently small amount so that $X_h^0 > 0$ and $H_h(X^0) < \lambda X_h^0$ for all $h \notin R$ after (as well as before) the decrease. By assumptions (A.3) and (A.4)', there must be at least one $i \in R$ whose $H_i$ diminishes. Thus the number of strict inequalities in $H(X^0) \leqq \lambda X^0$ can be increased. Repeating this procedure, we finally find that there is a strictly positive $X$ at which $H(X) < \lambda X$. This implies that $D(\lambda)$ is not empty.

We can now prove the generalised Frobenius theorem.

Proof. Let us first note the following properties of $C(\lambda)$:

(a) $C(\lambda)$ is not empty for sufficiently large $\lambda$, because any $X > 0$ in $S$ satisfies $H(X) \leqq \lambda X$ if $\lambda$ is taken as $\lambda \geqq \max_i H_i(X)/X_i$.

(b) If $\lambda^1 < \lambda^2$, then $C(\lambda^1) \subseteq C(\lambda^2)$.

(c) $C(0)$ is empty. If it were not, any $X \in C(0)$ would be positive by virtue of Lemma 1. On the other hand, $H(X) = 0$ as $\lambda = 0$. Put $R = \{1\}$ and decrease each $X_h$, $h \notin R$, by a sufficiently small amount so that $X$ remains positive. By assumption (A.4)', $H_1(X)$ has to become negative; a contradiction to (A.2).

Now, because of the above properties (a)-(c) of $C(\lambda)$, there must be a $\lambda^*$ such that $C(\lambda)$ is empty for $\lambda < \lambda^*$ and not empty for $\lambda > \lambda^*$. By the continuity of $H(X)$, $C(\lambda^*)$ is not empty.⁶ Hence $\lambda^* > 0$ by (c). As $C(\lambda)$ is empty for $\lambda < \lambda^*$, $D(\lambda^*)$ is empty (otherwise $C(\lambda)$ would not be empty for some $\lambda$ slightly smaller than $\lambda^*$), and therefore $E(\lambda^*)$ is empty by Lemma 2. Hence any $X$ in $C(\lambda^*)$ satisfies $H(X) = \lambda^* X$, and $X > 0$ by Lemma 1. Thus (i) of the theorem is proved. An argument similar to the one in the last part of section 3 establishes the other propositions (ii)-(iv) of the theorem.

⁶Take any decreasing sequence $\{\lambda_\nu\}$ that converges to $\lambda^*$. Let $\{X^\nu\}$ be a corresponding sequence of vectors $X^\nu \in C(\lambda_\nu) \subset S$. By the Bolzano-Weierstrass theorem, there is a subsequence of $\{X^\nu\}$ which converges to an $X^*$ in $S$. Then by the continuity of $H(X)$, we have $H(X^*) \leqq \lambda^* X^*$. Hence $X^* \in C(\lambda^*)$.
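Properties (a)-(c) also suggest a practical way of locating $\lambda^*$: bisect on $\lambda$ according to whether $C(\lambda)$ is empty. The sketch below is not from the paper; it does this for the linear specialisation $H(X) = AX$, where non-emptiness of $C(\lambda)$ is a linear-programming feasibility problem. SciPy's linprog, the example matrix and the iteration count are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# A minimal sketch (not from the paper): by properties (a)-(c), C(lambda) is
# empty below lambda* and non-empty at and above it, so lambda* can be located
# by bisection.  For the linear specialisation H(X) = AX, "C(lambda) is
# non-empty" is the LP feasibility problem
#     (A - lambda I) X <= 0,   e'X = 1,   X >= 0.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [3.0, 0.0, 0.0]])
n = A.shape[0]

def C_nonempty(lam):
    res = linprog(np.zeros(n),                      # feasibility only: minimise 0
                  A_ub=A - lam * np.eye(n), b_ub=np.zeros(n),
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, None)] * n)
    return res.status == 0                          # status 2 would mean infeasible

X0 = np.full(n, 1.0 / n)
lo, hi = 0.0, float(np.max(A @ X0 / X0))   # (c): C(0) empty;  (a): C(hi) non-empty
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if C_nonempty(mid) else (mid, hi)

print("lambda* by bisection :", hi)
print("Perron root of A     :", max(abs(np.linalg.eigvals(A))))  # cross-check
```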

Remark 2. The proof in section 3 is very similar to the proof in section 4. The only difference lies in the fact that the former uses the Kuhn-Tucker theorem to show that $H(X) \leqq \lambda X$ holds with equality at the minimum $\lambda$, while the latter has the advantage of allowing a clear geometrical interpretation of the theorem in low (two- or three-) dimensional cases.

Remark 3. It is clear from the proof that if $H^1(X) \leqq H^2(X)$ for any $X \geqq 0$, then $\lambda_1^*$, the Frobenius eigenvalue of $H^1$, is not greater than $\lambda_2^*$, that of $H^2$.

Remark 4. In the case of $H(X)$ not necessarily being indecomposable, the generalised Frobenius theorem can be proved in the following way, as Arrow and Hahn (1972) did for the linear case. First consider $H(X) + \varepsilon UX$, where $\varepsilon$ is a positive number and $U$ is an $n \times n$ matrix with all elements being unity. Corresponding to any decreasing sequence $\{\varepsilon_\nu\}$ which converges to zero, we have a decreasing sequence $\{\lambda_\nu^*\}$ of the Frobenius eigenvalues and a sequence $\{X^{\nu*}\}$ of the Frobenius eigenvectors. Obviously, $\lambda_\nu^* > 0$ and $X^{\nu*} > 0$ for all $\nu$, because the modified system is indecomposable. Then the same argument as the one in footnote 6 above establishes the existence of $\lambda^*$ and $X^*$ such that $H(X^*) = \lambda^* X^*$. However, note that only $\lambda^* \geqq 0$ and $X^* \geq 0$ can be asserted if $H(X)$ is decomposable. See Morishima (1964, pp. 199-202).

References

Arrow, K.J. and F.H. Hahn, 1972, General competitive analysis (Holden-Day, San Francisco, and Oliver and Boyd, Edinburgh).
Debreu, G. and I.N. Herstein, 1953, Nonnegative square matrices, Econometrica 21, 597-607.
Frobenius, G., 1908/9, Über Matrizen aus positiven Elementen, Sitzungsberichte der königlich preussischen Akademie der Wissenschaften.
Karlin, S., 1959, Mathematical methods and theory in games, programming and economics, vol. 1 (Pergamon Press, New York).
Kuhn, H. and A. Tucker, 1950, Nonlinear programming, in: J. Neyman, ed., Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (Berkeley).
Morishima, M., 1964, Equilibrium, stability and growth (Clarendon Press, Oxford).
Murata, Y., 1972, An alternative proof of the Frobenius theorem, Journal of Economic Theory 5, 285-291.
Nikaido, H., 1969, Convex structures and economic theory (Academic Press, New York).
Rutmann, M.A., 1938, Sur une classe spéciale d'opérateurs linéaires totalement continus, Comptes Rendus (Doklady) de l'Académie des Sciences de l'URSS 58, no. 9.
Rutmann, M.A., 1940, Sur les opérateurs totalement continus linéaires laissant invariant un certain cône, Mat. Sb. 8, no. 50.
Solow, R.M. and P.A. Samuelson, 1953, Balanced growth under constant returns to scale, Econometrica 21, 412-424.
Wielandt, H., 1950, Unzerlegbare, nicht-negative Matrizen, Mathematische Zeitschrift 52, 642-648.