JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS 135, 268-283 (1988)

The Principle and Models of Dynamic Programming, II

CHUNG-LIE WANG

Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan S4S 0A2, Canada

Submitted by E. Stanley Lee

Received March 9, 1987

1. INTRODUCTION

In the development of optimization techniques, one of the most useful methods in treating allocation processes and optimal control processes is a technique known as dynamic programming (abbreviated DP). In the last thirty years, DP has developed many useful structures (e.g., see Aris [1], Bellman [4, 5], Bellman and Dreyfus [7], Dreyfus and Law [15], Iwamoto [18-21], Lee [24], Nemhauser [28]) and has broadened its scope in conjunction with several well-developed nonlinear mathematical programmings (abbreviated NMPs), namely convex programming, quadratic programming, and geometric programming (e.g., see Beveridge and Schechter [9], Boot [10], Duffin et al. [16], Roberts and Varberg [32], Rockafellar [33]). There is another NMP, called fractional programming (abbreviated FP) (e.g., see Charnes and Cooper [11], Craven [13], Dinkelbach [14], Jagannathan [23], Schaible and Ibaraki [35]), which has also developed rapidly in the past quarter century. From one single source, the Bibliography in FP provided by S. Schaible [34], we can find 551 references in the period 1962-1981.

It should be noticed that FP methods have been developed for use with convex or concave functions (e.g., see Craven [13], Schaible and Ibaraki [35]), just as convex, quadratic, and geometric programming methods were developed for such functions (e.g., see [9, 10, 16, 32, 33, 52]). As a matter of fact, even for treating general NMP problems involving arbitrary objective functions of multiple variables, convexity (or concavity) is often examined so as to impose Lagrangian conditions or Kuhn-Tucker conditions for the purpose of establishing theoretical arguments or gaining computational advantage (e.g., see [9, 13]).


However, the DP method requires no such direct restrictions. As DP is conceptually simple, so is FP. A connection between them should extend their mutual scope. It appears that no such explicit connection has appeared in the literature so far. Consequently, it is not only natural to apply the DP method to FP problems, but it is also significant to reveal their intrinsic nature by so doing. This is the motivation of this paper. To this end, we shall lay out the background concerning DP and FP in Section 2. In subsequent sections, we shall reestablish some known problems, such as the Rayleigh quotient, the Dinkelbach example, the Beckenbach inequality, and a generalized arithmetic and geometric (abbreviated AG) inequality, to reveal their linkage and conclude with some remarks.

2. DP AND FP

We have demonstrated by examples in Wang [52] that DP can be used to treat optimization problems with nonsequential, nonrecursive schemes. Indeed, optimization problems with sequential and recursive schemes may be more suitably solved by the DP approach (e.g., see Denardo [26]). However, the foundation of DP, the Principle of Optimality, by no means suggests so (Bellman [4, p. 83]).

Principle of Optimality. An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

The mathematical transliteration of this principle yields all the functional equations as models for problems in many aspects. As suggested in Bellman and Lee [8, p. 1], the basic form of the functional equation of DP is

f(u) = opt_v [W(u, v, f(T(u, v)))],    (1)

where u and v represent the state and decision vectors, respectively, T represents the transformation of the process, and f(u) represents the optimal return function with initial state u. (Here opt denotes max or min.) In view of the above, the form (1) may be modified as

f(u) = opt_v [W(u, v, g(T(u, v)))],    (2)

where g need not be identical with f.
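Although the paper argues entirely in closed form, it may help some readers to see how the functional equation (1) is realized computationally once the state space is finite. The following sketch is ours, not the paper's: it assumes a hypothetical one-dimensional allocation process with W(u, v, z) = g(v) + z, T(u, v) = u - v, and an invented stage return g.

```python
from functools import lru_cache

def g(v):
    # invented concave one-stage return; any stage return could be substituted
    return v ** 0.5

@lru_cache(maxsize=None)
def f(u):
    """Optimal return f(u) = max_v [g(v) + f(u - v)], an instance of (1) with opt = max."""
    if u == 0:
        return 0.0
    # integer decisions 1 <= v <= u keep the state space finite
    return max(g(v) + f(u - v) for v in range(1, u + 1))

if __name__ == "__main__":
    print(f(10))   # optimal return from the initial state u = 10
```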


In order to apply the DP approach to FP problems, we first display some notations and symbols and cite a theorem from Dinkelbach [14].

R = the field of real numbers,
R^n = {(x_1, ..., x_n) | x_j ∈ R, j = 1, ..., n},
S = a compact (and connected) subset of R^n,
p, q = continuous real functions on S with q(x) > 0 for all x ∈ S,
F(x) = p(x)/q(x),
D(x, r) = D(x) = p(x) - r q(x),   x ∈ S, r ∈ R.

THEOREM 1. r_0 = F(x_0) = opt_x F(x) if and only if D(x_0, r_0) = opt_x D(x, r_0) = 0, where opt denotes max or min.

Remark 1. Since D(x) and F(x) can be transformed into each other by basic algebraic operations and transposition, Theorem 1 is almost self-evident. However, a simple proof of it can be found in Dinkelbach [14, p. 494] (or Jagannathan [23]). It appears that D(x) and F(x) may be more justifiably called difference and quotient programming, respectively (owing to their duality), rather than the current terminology: parametric programming and FP (e.g., see Schaible and Ibaraki [35, p. 325]). Moreover, the connectedness of S may be useful for computational procedures, but it plays no particular role in proving Theorem 1.

Remark 2. If p and q are twice differentiable functions, then Theorem 1 is readily established by the Taylor formula of φ (= F or D) (e.g., see Wylie [54]),

φ(x) = φ(x_0) + ⟨∇φ(x_0), x - x_0⟩ + (1/2)⟨∇^2 φ(x_0 + θ(x - x_0))(x - x_0), x - x_0⟩,   0 < θ < 1,    (3)

where ⟨ , ⟩ denotes the inner product on R^n. In view of (3), in association with some results concerning functionals on a compact subset of a Banach space of integrable functions established in Wang [49], the continuous counterpart of the discrete version of Theorem 1 can be easily formulated and established. We omit its obvious detail.

For the purpose of facilitating the solution of NMP problems (in particular, DP and FP problems), we may in some cases introduce one or more transition constraints (e.g., see Wang [45, 52]).

DEFINITION. A transition constraint of a mathematical programming problem is an additional constraint, consistent with the original constraint (or constraints), designed to facilitate solving the problem.
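Theorem 1 also underlies the parametric scheme used by Dinkelbach [14] to solve FP problems iteratively: maximize D(x, r) = p(x) - r q(x) for a trial value r, reset r to p(x*)/q(x*), and stop when D vanishes. The sketch below is ours and only illustrative; a coarse grid search over a compact interval stands in for the inner maximization, and p, q are invented.

```python
def p(x):
    return -x * x + 4 * x + 1     # invented numerator

def q(x):
    return x + 1                  # invented denominator, positive on S

S = [i / 1000 for i in range(2001)]          # grid standing in for S = [0, 2]

def parametric_fp(r=0.0, tol=1e-9, max_iter=50):
    """Iterate r <- p(x*)/q(x*), where x* maximizes D(x, r) = p(x) - r q(x) over S."""
    for _ in range(max_iter):
        x_star = max(S, key=lambda x: p(x) - r * q(x))
        if abs(p(x_star) - r * q(x_star)) < tol:   # D(x*, r) = 0 <=> r = max F (Theorem 1)
            break
        r = p(x_star) / q(x_star)
    return x_star, r

print(parametric_fp())   # approaches the maximizer x = 1 and the value max F = 2
```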


3. RAYLEIGH QUOTIENT

Many FP problems of discrete and continuous cases, without referring to the terminology, have been studied and recorded in the literature before and after the sixties. No attempt has been made to list all the references concerned. In general, however, such results can be found, for example, in Beckenbach [2], Beckenbach and Bellman [3], Iwamoto and Wang [22], Mitrinović [25], Pólya [30], and Wang [37-39, 45, 48-53]. Speaking of FP problems, a very important one in analysis and algebra, called the Rayleigh quotient (abbreviated RQ), is a typical nonlinear FP problem (more precisely, a quadratic FP problem) first appearing in the literature late in the last century. A complete study concerning the RQ can be found in many books; for example, see Beckenbach and Bellman [3, pp. 71-73], Bellman [6, pp. 110-115], Courant and Hilbert [12, pp. 31-47], and Noble [29, pp. 407-416]. In the same references, some extensions of the RQ as variational characterizations are also recorded. As a historical footnote, our sole purpose here is to reissue a special two-dimensional RQ by which we reveal a linkage of FP and DP. By so doing, we state and prove the RQ inequalities as

λ_- ≤ ρ ≤ λ_+,    (4)

where

ρ = ρ(x, y) = (ax^2 + 2bxy + cy^2)/(x^2 + y^2)

and

λ_± = (a + c ± [(a - c)^2 + 4b^2]^{1/2})/2,    (5)

in which λ_+ and λ_- are the two eigenvalues of the symmetric matrix

[ a  b ]
[ b  c ].

In order to establish (4), we apply Theorem 1 to transform the problem opt ρ into the problem

opt D = opt_{x, y} D(x, y, r)

subject to a transition constraint

x^2 + y^2 = t^2,


where

D ≡ D(x, y, r) = ax^2 + 2bxy + cy^2 - r(x^2 + y^2).

Obviously, opt D(x, 0, r) = (a - r) t^2. We note that once y is chosen, the remaining problem is that of choosing x subject to

x^2 = t^2 - y^2

so as to optimize

D = μ + (a - r) t^2,

where

μ = μ(y) = (c - a) y^2 + 2b (t^2 - y^2)^{1/2} y.

It follows that

opt D = opt_y μ + (a - r) t^2.    (6)

There are a couple of straightforward ways (with details omitted) to obtain

opt μ = μ_-  if opt = min,   and   opt μ = μ_+  if opt = max,    (7)

where

μ_± = ((c - a) ± [(c - a)^2 + 4b^2]^{1/2}) t^2 / 2.

Setting opt D = 0, we obtain r = λ_- or λ_+. The optimum is attained at

x^2 = t^2/2 ± (c - a) t^2 / (2[(c - a)^2 + 4b^2]^{1/2}),   y^2 = t^2/2 ∓ (c - a) t^2 / (2[(c - a)^2 + 4b^2]^{1/2}),    (8)

respectively. Finally, (4) follows from (5)-(8).
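A quick numerical check of (4)-(8), which is ours and not part of the original argument (NumPy is assumed only for an independent eigenvalue computation); the sample values a, b, c, t are arbitrary:

```python
import numpy as np

a, b, c, t = 2.0, 1.0, -1.0, 1.0                     # arbitrary sample data with b > 0
disc = ((a - c) ** 2 + 4 * b ** 2) ** 0.5
lam_minus, lam_plus = (a + c - disc) / 2, (a + c + disc) / 2

# (5) agrees with the eigenvalues of the matrix [[a, b], [b, c]]
print(np.allclose(np.linalg.eigvalsh([[a, b], [b, c]]), [lam_minus, lam_plus]))

def rho(x, y):
    return (a * x * x + 2 * b * x * y + c * y * y) / (x * x + y * y)

# the squares from (8); for b > 0 the minimizer takes x and y of opposite signs,
# the maximizer takes them of the same sign
shift = (c - a) * t ** 2 / (2 * disc)
x_min, y_min = (t ** 2 / 2 + shift) ** 0.5, -((t ** 2 / 2 - shift) ** 0.5)
x_max, y_max = (t ** 2 / 2 - shift) ** 0.5, (t ** 2 / 2 + shift) ** 0.5
print(np.isclose(rho(x_min, y_min), lam_minus), np.isclose(rho(x_max, y_max), lam_plus))
```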


4. DINKELBACH EXAMPLE

In [14], W. Dinkelbach proved Theorem 1 (as stated above) and provided a method for solving special nonlinear FP problems with concave p(x) and convex q(x). In the Appendix, he applied his concave-convex program to the example

max p(x, y)/q(x, y)    (9)

subject to

x + 3y ≤ 5,   x, y ≥ 0,    (10)

where

p(x, y) = -3x^2 - 2y^2 + 4x + 8y - 8

and

q(x, y) = x^2 + y^2 - 6y + 8.

In fact, he only approximately solved the problem (9)-(10) with four iterations and claimed that, by a use of the Ritter method [31], the result of the problem is

x = (16 - 4r)/(29 + 10r),   y = (43 + 18r)/(29 + 10r),   D(r) = (-6r^2 - 39r + 22)/(29 + 10r),    (11)

where

D(r) = max [p(x, y) - r q(x, y)],

without giving a detailed evaluation by the Ritter method. Dinkelbach also pointed out that if (10) is replaced by

x + 3y = 5,    (12)

the solution of the problem (9)-(12) can be found by using classical Lagrange multipliers. Incidentally, the problems (9)-(10) and (9)-(12) have the same result (11). However, they are clearly different problems.


Here, we alternatively verify the Dinkelbach claim for the problem (9)-(10) by means of the DP approach of Bellman [3]. So, using Theorem 1, we have its dual problem in the DP setting

D(r) = max_{x, y} D(x, y, r)    (13)

subject to

x + 3y = a,    (14)

x, y ≥ 0   and   0 ≤ a ≤ 5,    (15)

where

D(x, y, r) = -3x^2 - 2y^2 + 4x + 8y - 8 - r(x^2 + y^2 - 6y + 8).    (16)

Apparently, we have

D_1(a, r) = max_{x = a} D(x, 0, r) = -3a^2 + 4a - 8 - r(a^2 + 8).    (17)

We note that once y is chosen, the remaining problem is that of choosing x subject to

x = a - 3y,   0 ≤ y ≤ a/3,    (18)

so as to maximize D(a - 3y, y, r); i.e.,

D(r) = max_{y, a} D(a - 3y, y, r),    (19)

where

D(a - 3y, y, r) = -2y^2 + 8y - r(y^2 - 6y) + D_1(a - 3y, r).    (20)

Using (17) and (20), a straightforward manipulation yields

D(a - 3y, y, r) = -(29 + 10r)[y - (9a - 2 + 3(a + 1)r)/(29 + 10r)]^2 + E(a, r),    (21)

where

(29 + 10r) E(a, r) = -(r + 2)(r + 3)[a - (9r^2 + 41r + 40)/((r + 2)(r + 3))]^2 + K(r)    (22)

with

K(r) = (9r^2 + 41r + 40)^2/((r + 2)(r + 3)) - (71r^2 + 324r + 228).

From (18)-(21), it follows that

max_{y, a} D(a - 3y, y, r) = max_a E(a, r).    (23)

The maximum is attained at

x = (2a + 6 - (9 - a)r)/(29 + 10r)   and   y = (9a - 2 + 3(a + 1)r)/(29 + 10r).    (24)

Furthermore, from (13), (15), (19), (22), and (23), it follows that

D(r) = max_a E(a, r) = E(5, r) = (-6r^2 - 39r + 22)/(29 + 10r).    (25)

Setting a = 5 in (24), together with (25), we obtain the exact solution (11) of the problem (9)-(10), as Dinkelbach [14] claimed.

Remark 3. In the above process, we only implicitly require 29 + 10r and (r + 2)(r + 3) to be positive. In fact r > 0. Consequently, we solve not only the problem (9)-(10) but also the problem (9)-(12) at the same time by means of the DP setting.
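The claim (11) and the value (25) are easily confirmed numerically. The check below is ours; a brute-force grid over the feasible set stands in for the exact inner maximization:

```python
import math

def p(x, y):
    return -3 * x * x - 2 * y * y + 4 * x + 8 * y - 8

def q(x, y):
    return x * x + y * y - 6 * y + 8

# r* is the positive root of -6r^2 - 39r + 22 = 0, so that D(r*) = 0 in (25)
r_star = (-39 + math.sqrt(39 ** 2 + 4 * 6 * 22)) / (2 * 6)

# (x, y) from (11) evaluated at r = r*
x_star = (16 - 4 * r_star) / (29 + 10 * r_star)
y_star = (43 + 18 * r_star) / (29 + 10 * r_star)
print(x_star + 3 * y_star)                             # equals 5: the point is feasible
print(p(x_star, y_star) / q(x_star, y_star), r_star)   # F(x*, y*) coincides with r*

# D(r*) = max{p - r* q : x + 3y <= 5, x, y >= 0} should be (numerically) zero
grid = [5 * k / 500 for k in range(501)]
best = max(p(x, y) - r_star * q(x, y)
           for x in grid for y in grid if x + 3 * y <= 5 + 1e-12)
print(abs(best) < 1e-3)
```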

5. BECKENBACH INEQUALITY

One of the most interesting generalizations of the discrete Hölder inequality was proved by E. F. Beckenbach as Theorem 1 given on page 24 of [2] by a method of multidimensional analysis. Needless to say, this generalization is also a typical nonlinear FP problem and is now known as the Beckenbach inequality (see Mitrinović [25] and Wang [37, 38, 53]). Its continuous version can be found in Iwamoto and Wang [22] and Wang [51, 53]. We now cite two theorems concerning the Beckenbach inequality from Wang [51, p. 570] for our purpose as follows.

THEOREM 2. Let positive reals α, β, A, B, k_j, m + 1 ≤ j ≤ n, be given and let p and q satisfy p^{-1} + q^{-1} = 1. Then for p > 1 the Beckenbach inequality

F(x) ≥ F(d),    (26)

where

F(x) = (αA + β Σ_{j=m+1}^n x_j^p)^{1/p} / (αB + β Σ_{j=m+1}^n k_j x_j)

and

d_j = (A k_j / B)^{q/p},   m + 1 ≤ j ≤ n,    (27)

holds for all nonnegative x_{m+1}, ..., x_n. The sign of inequality in (26) is reversed for 0 < p < 1. In either case, the sign of equality holds if and only if

x_j = d_j,   m + 1 ≤ j ≤ n.    (28)

THEOREM 3. Let positive reals α, β, A, B and a positive function k defined on the positive part of the real line be given and let p and q satisfy p^{-1} + q^{-1} = 1. Then for p > 1, the inequality

G(f) ≥ G(h),    (29)

where

G(f) = (αA + β ∫_0^T f^p dt)^{1/p} / (αB + β ∫_0^T k f dt)

and

h(t) = [A k(t)/B]^{q/p},   0 ≤ t ≤ T,    (30)

holds for any continuous function f on 0 ≤ t ≤ T < ∞. The sign of inequality in (29) is reversed for 0 < p < 1. In either case, the sign of equality holds if and only if f is a constant function.
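A brief numerical illustration of Theorem 2 (ours, with randomly generated positive data and p = 2): F evaluated at random nonnegative points should never fall below F(d).

```python
import random

random.seed(0)
p_exp = 2.0
q_exp = p_exp / (p_exp - 1)                            # conjugate exponent, 1/p + 1/q = 1

alpha, beta, A, B = 1.3, 0.7, 2.0, 1.1                 # arbitrary positive constants
k = [random.uniform(0.5, 2.0) for _ in range(5)]       # k_{m+1}, ..., k_n

def F(x):
    num = (alpha * A + beta * sum(xi ** p_exp for xi in x)) ** (1 / p_exp)
    den = alpha * B + beta * sum(kj * xj for kj, xj in zip(k, x))
    return num / den

d = [(A * kj / B) ** (q_exp / p_exp) for kj in k]      # the extremal point (27)

trials = [[random.uniform(0.0, 3.0) for _ in k] for _ in range(10000)]
print(all(F(x) >= F(d) - 1e-12 for x in trials))       # Theorem 2 with p = 2: expect True
```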

Theorem 2 has been proved by the method of applying the convexity of a real multivariable function [37] and by the DP approach [38], while Theorem 3 has been proved by the properties of nonlinear positive functionals [51] and by the continuous DP approach [22]. Both theorems have been extended and established correspondingly by using the usual (discrete and continuous) Hölder inequalities in [53]. In this section, we establish Theorems 2 and 3 alternatively in a unified manner through the FP and DP approaches.

In doing so, we minimize F(x) and G(f), respectively, for p > 1. Using Theorem 1 and its continuous counterpart mentioned in Section 2, we have their dual problems in the DP setting, respectively,

min D = min D(x)    (31)


subject to

Σ_{j=m+1}^n x_j^p = a,    (32)

and

min D = min D(f)    (33)

subject to

∫_0^T f^p dt = a,    (34)

where

D(x) = (αA + β Σ_{j=m+1}^n x_j^p)^{1/p} - r(αB + β Σ_{j=m+1}^n k_j x_j)   in (31),

D(f) = (αA + β ∫_0^T f^p dt)^{1/p} - r(αB + β ∫_0^T k f dt)   in (33).

A straightforward manipulation (see Wang [38, 52] and Iwamoto and Wang [22] for details) yields

D(r) = min D = min_a D(a, r),    (35)

where

D = D(a, r) = (αA + βa)^{1/p} - r(αB + βQ a^{1/p})    (36)

with

Q = (Σ_{j=m+1}^n k_j^q)^{1/q}   for (31)-(32),
Q = (∫_0^T k^q dt)^{1/q}   for (33)-(34).    (37)

We rewrite (36) as

D = -rαB + (αA + βa)^{1/p} - β(rQ)^q [a/(rQ)^q]^{1/p}    (38)

and apply the concavity of the function z^{1/p} (p > 1) to the last two terms of the right-hand side of (38):

D ≥ -rαB + [1 - β(rQ)^q][αA/(1 - β(rQ)^q)]^{1/p} = -rαB + (αA)^{1/p}[1 - β(rQ)^q]^{1/q} = D(r).    (39)
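A numerical spot check of the reduction (36)-(39) (ours), with arbitrary sample values chosen so that β(rQ)^q < 1:

```python
alpha, beta, A, B, Q = 1.0, 0.5, 2.0, 1.0, 1.2    # sample constants (ours)
p = 2.0
q = p / (p - 1)
r = 0.5                                           # beta * (r * Q)**q = 0.18 < 1

def D_of_a(a):
    # D(a, r) as in (36)
    return (alpha * A + beta * a) ** (1 / p) - r * (alpha * B + beta * Q * a ** (1 / p))

# closed-form lower bound D(r) from (39) and the minimizer a_bar from (40)
D_r = -r * alpha * B + (alpha * A) ** (1 / p) * (1 - beta * (r * Q) ** q) ** (1 / q)
a_bar = alpha * A * (r * Q) ** q / (1 - beta * (r * Q) ** q)

grid = [i / 100 for i in range(1, 2001)]               # a in (0, 20]
print(min(D_of_a(a) for a in grid) >= D_r - 1e-9)      # (39): D(a, r) >= D(r)
print(abs(D_of_a(a_bar) - D_r) < 1e-9)                 # equality at a = a_bar
```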


The sign of equality holds if and only if

αA + βa = a/(rQ)^q   or   a = αA(rQ)^q / (1 - β(rQ)^q) = ā.    (40)

From (35), (39), and (40) it follows that the minimum D(r) is attained at a = ā. Now setting D(r) = 0, a simple manipulation of (39) yields the optimal value r = F(d) (respectively, r = G(h)), with ā = (AQ/B)^q, in which Q is given in (37); this establishes (26) and (29). (Note that the corresponding value given in [22] contains a typing error.) For 0 < p < 1, the reversed inequalities are established in the same manner, with the maximality of F(x) and G(f), max D, and the convexity of the function z^{1/p} (0 < p < 1) in place of minimality, min D, and concavity, respectively.

6. A GENERALIZED AG INEQUALITY

The well-known Rado inequality

n(A_n - G_n) ≥ (n - 1)(A_{n-1} - G_{n-1})    (41)

and Popoviciu inequality

(A_n / G_n)^n ≥ (A_{n-1} / G_{n-1})^{n-1}    (42)

(where A_k = Σ_{j=1}^k a_j / k, G_k = Π_{j=1}^k a_j^{1/k}, k = n - 1, n) have been given many generalizations (e.g., see Mitrinović [25, pp. 90-106], Wang [44, 49, 50]). It is interesting and useful to notice that the Rado-Popoviciu inequalities (41)-(42) form a natural duality which may be referred to as a difference-quotient duality. Incidentally, in view of Theorem 1, the inequalities (41)-(42) and their generalizations can be treated as FP problems in a unified manner.
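For concreteness, a quick numerical check of (41)-(42) with randomly generated positive reals (ours):

```python
import random
from math import prod

random.seed(1)
a = [random.uniform(0.1, 10.0) for _ in range(8)]      # arbitrary positive reals
n = len(a)

def A(k):                                              # arithmetic mean of a_1, ..., a_k
    return sum(a[:k]) / k

def G(k):                                              # geometric mean of a_1, ..., a_k
    return prod(a[:k]) ** (1 / k)

print(n * (A(n) - G(n)) >= (n - 1) * (A(n - 1) - G(n - 1)) - 1e-12)    # Rado (41)
print((A(n) / G(n)) ** n >= (A(n - 1) / G(n - 1)) ** (n - 1) - 1e-12)  # Popoviciu (42)
```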


In order to clarify our above observation, as well as to demonstrate a linkage of the DP and FP approaches, we formulate and prove the following (as an example):

THEOREM 4. Let positive reals c_i, p_j, 1 ≤ i ≤ m, 1 ≤ j ≤ n, m < n, be given and let P_k = Σ_{j=1}^k p_j, k = m, n. Then the inequality

F(x) ≥ (A/B)^{P_m/P_n},    (43)

where

F(x) = (P_m A + Σ_{j=m+1}^n p_j x_j) / (P_n B^{P_m/P_n} Π_{j=m+1}^n x_j^{p_j/P_n})    (44)

in which

A = Σ_{j=1}^m p_j c_j / P_m,   B = Π_{j=1}^m c_j^{p_j/P_m},

holds for all positive x_{m+1}, ..., x_n. The sign of equality holds if and only if x_{m+1} = ... = x_n = A.

Instead of minimizing F(x) in (44), we consider its dual problem (from Theorem 1) in the DP setting

min D = min D(x)    (45)

subject to

Σ_{j=m+1}^n p_j x_j = a,

where

D(x) = P_m A + Σ_{j=m+1}^n p_j x_j - r P_n B^{P_m/P_n} Π_{j=m+1}^n x_j^{p_j/P_n}.

A straightforward manipulation (e.g., see Wang [38, 52]) yields

min D = min_a D(a),

where

D(a) = P_m A + a - r P_n B^{P_m/P_n} [a/(P_n - P_m)]^{(P_n - P_m)/P_n}    (46)

with x_{m+1} = ... = x_n = a/(P_n - P_m).


We rewrite (46) as

D(a) = P_m A - P_n { r B^{P_m/P_n} [a/(P_n - P_m)]^{1 - (P_m/P_n)} - (1 - P_m/P_n)[a/(P_n - P_m)] }    (47)

and apply a simple inverse AG inequality to [...] in (47):

D(a) ≥ P_m A - P_m r^{P_n/P_m} B = D(r).    (48)

The sign of equality holds in (48) if and only if

[a/(P_n - P_m)]^{P_m/P_n} = r B^{P_m/P_n}

or

a = (P_n - P_m) r^{P_n/P_m} B = ā.    (49)

From (45), (48), and (49), it follows that the minimum D(r) is attained at a = ā. Now setting D(r) = 0, we have

r = (A/B)^{P_m/P_n}

and thus establish the inequality (43). Hence the conclusion of the theorem is clear.

We conclude this section by stating a more generalized theorem as follows (with a proof similar to that of Theorem 2 omitted).

THEOREM 5. Let positive reals a_i, b_i, c_j, 1 ≤ i ≤ n, 1 ≤ j ≤ m, 0 ≤ m ≤ n, and reals s, t be given. Then for 0 < s < t or s < 0 < t, the inequality

H(x) ≥ H(d),    (50)

where

H(x) = (Σ_{j=1}^m a_j c_j^t + Σ_{j=m+1}^n a_j x_j^t)^{1/t} / (Σ_{j=1}^m b_j c_j^s + Σ_{j=m+1}^n b_j x_j^s)^{1/s}

and d = (d_{m+1}, ..., d_n), holds for all positive x_{m+1}, ..., x_n. The sign of inequality in (50) is reversed for s < t < 0. In either case, the sign of equality holds if and only if x_j = d_j, m + 1 ≤ j ≤ n.
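Before passing to Remark 4, a small numerical illustration of Theorem 4 (ours, with arbitrary positive data): F from (44) evaluated at random positive points should never fall below the bound (A/B)^{P_m/P_n} of (43), with equality at x_{m+1} = ... = x_n = A.

```python
import random
from math import prod

random.seed(2)
m, n = 3, 6
p = [random.uniform(0.5, 2.0) for _ in range(n)]       # p_1, ..., p_n
c = [random.uniform(0.5, 3.0) for _ in range(m)]       # c_1, ..., c_m

Pm, Pn = sum(p[:m]), sum(p)
A = sum(pj * cj for pj, cj in zip(p[:m], c)) / Pm      # weighted arithmetic mean
B = prod(cj ** (pj / Pm) for pj, cj in zip(p[:m], c))  # weighted geometric mean

def F(x):                                              # x = (x_{m+1}, ..., x_n), as in (44)
    num = Pm * A + sum(pj * xj for pj, xj in zip(p[m:], x))
    den = Pn * B ** (Pm / Pn) * prod(xj ** (pj / Pn) for pj, xj in zip(p[m:], x))
    return num / den

bound = (A / B) ** (Pm / Pn)
trials = [[random.uniform(0.1, 5.0) for _ in range(n - m)] for _ in range(10000)]
print(all(F(x) >= bound - 1e-12 for x in trials))      # inequality (43): expect True
print(abs(F([A] * (n - m)) - bound) < 1e-12)           # equality at x_j = A
```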

Remark 4. Theorem 5 is not only a generalization of Theorems 2 and 4 but also includes several classical inequalities (such as the AG inequality, the Popoviciu inequality, as well as the monotonicity (with respect to t) of the weighted mean of order t, (Σ_{j=1}^n p_j a_j^t / Σ_{j=1}^n p_j)^{1/t}) as special cases.

7. CONCLUDING REMARKS

We have employed four known problems to demonstrate a close link between DP and FP. Consequently, we have further revealed the elegance and versatility of the DP approach. In the above arguments we have not used the differentiability of any function, but only simple classical inequalities (or the convexity or concavity of functions), to establish the optimality in the process. Needless to say, this approach is of great importance for treating problems with nondifferentiable functions. Finally, comparing Theorem 2 and Theorem 3 (and their proofs), the continuous counterparts of Theorems 4 and 5 can be readily formulated and established. However, these will not be exploited here. A further development of solving FP problems in conjunction with the DP approach by suitable inequalities appears to be very promising (see Wang [44-46, 52, 53]).

ACKNOWLEDGMENT

The author was supported in part by the NSERC of Canada under Grant A4091.

REFERENCES

1. R. ARIS, "Discrete Dynamic Programming," Ginn (Blaisdell), Waltham, MA, 1964.
2. E. F. BECKENBACH, On the Hölder inequality, J. Math. Anal. Appl. 15 (1966), 21-29.
3. E. F. BECKENBACH AND R. BELLMAN, "Inequalities," 2nd rev. ed., Springer-Verlag, Berlin, 1965.
4. R. BELLMAN, "Dynamic Programming," Princeton Univ. Press, Princeton, NJ, 1957.
5. R. BELLMAN, "Adaptive Control Processes," Princeton Univ. Press, Princeton, NJ, 1961.
6. R. BELLMAN, "Introduction to Matrix Analysis," McGraw-Hill, New York, 1960.
7. R. BELLMAN AND S. E. DREYFUS, "Applied Dynamic Programming," Princeton Univ. Press, Princeton, NJ, 1962.
8. R. BELLMAN AND E. S. LEE, Functional equations in dynamic programming, Aequationes Math. 17 (1978), 1-18.
9. G. S. G. BEVERIDGE AND R. S. SCHECHTER, "Optimization: Theory and Practice," McGraw-Hill, New York, 1970.
10. J. C. G. BOOT, "Quadratic Programming," Vol. 2, "Studies in Mathematical and Managerial Economics," North-Holland, Amsterdam, 1964.
11. A. CHARNES AND W. W. COOPER, Programming with linear fractional functionals, Naval Res. Logist. Quart. 9 (1962), 181-186.
12. R. COURANT AND D. HILBERT, "Methods of Mathematical Physics," Interscience, New York, 1953.
13. B. D. CRAVEN, "Mathematical Programming and Control Theory," Chapman & Hall, London, 1978.
14. W. DINKELBACH, On nonlinear fractional programming, Management Sci. 13 (1967), 492-498.
15. S. E. DREYFUS AND A. M. LAW, "The Art and Theory of Dynamic Programming," Academic Press, New York, 1977.
16. R. J. DUFFIN, E. L. PETERSON, AND C. ZENER, "Geometric Programming-Theory and Application," Wiley, New York, 1967.
17. G. H. HARDY, J. E. LITTLEWOOD, AND G. PÓLYA, "Inequalities," 2nd ed., Cambridge Univ. Press, Cambridge, 1952.
18. S. IWAMOTO, Inverse theorem in dynamic programming, I, J. Math. Anal. Appl. 58 (1977), 113-134.
19. S. IWAMOTO, Inverse theorem in dynamic programming, II, J. Math. Anal. Appl. 58 (1977), 247-279.
20. S. IWAMOTO, Inverse theorem in dynamic programming, III, J. Math. Anal. Appl. 58 (1977), 439-448.
21. S. IWAMOTO, Dynamic programming approach to inequalities, J. Math. Anal. Appl. 58 (1977), 687-704.
22. S. IWAMOTO AND C.-L. WANG, Continuous dynamic programming approach to inequalities, II, J. Math. Anal. Appl. 118 (1986), 279-286.
23. R. JAGANNATHAN, On some properties of programming problems in parametric form pertaining to fractional programming, Management Sci. 12 (1966), 609-615.
24. E. S. LEE, "Quasilinearization and Invariant Imbedding," Academic Press, New York, 1968.
25. D. S. MITRINOVIĆ, "Analytic Inequalities," Springer-Verlag, Berlin, 1970.
26. J. J. MODER AND S. E. ELMAGHRABY (Eds.), "Handbook of Operations Research: Foundations and Fundamentals," Vol. 1, Van Nostrand-Reinhold, New York, 1978.
27. J. J. MODER AND S. E. ELMAGHRABY (Eds.), "Handbook of Operations Research: Models and Applications," Vol. 2, Van Nostrand-Reinhold, New York, 1978.
28. G. L. NEMHAUSER, "Introduction to Dynamic Programming," Wiley, New York, 1967.
29. B. NOBLE, "Applied Linear Algebra," Prentice-Hall, Englewood Cliffs, NJ, 1969.
30. G. PÓLYA, Two more inequalities between physical and geometrical quantities, J. Indian Math. Soc. (N.S.) 24 (1960), 413-419.
31. K. RITTER, Ein Verfahren zur Lösung parameterabhängiger, nichtlinearer Maximum-Probleme, Unternehmensforschung 6 (1962), 149-166.
32. A. W. ROBERTS AND D. A. VARBERG, "Convex Functions," Academic Press, New York, 1973.
33. R. T. ROCKAFELLAR, "Convex Analysis," Princeton Univ. Press, Princeton, NJ, 1970.
34. S. SCHAIBLE, Bibliography in fractional programming, Z. Oper. Res. 26 (1982), 211-241.
35. S. SCHAIBLE AND T. IBARAKI, Fractional programming, European J. Oper. Res. 12 (1983), 325-338.
36. M. M. VAINBERG, "Variational Methods for the Study of Nonlinear Operators" (translated by A. Feinstein), Holden-Day, San Francisco, 1964.
37. CHUNG-LIE WANG, Convexity and inequalities, J. Math. Anal. Appl. 72 (1979), 355-361.
38. CHUNG-LIE WANG, Functional equation approach to inequalities, J. Math. Anal. Appl. 71 (1979), 423-430.
39. CHUNG-LIE WANG, Functional equation approach to inequalities, II, J. Math. Anal. Appl. 78 (1980), 522-530.
40. CHUNG-LIE WANG, Functional equation approach to inequalities, III, J. Math. Anal. Appl. 80 (1981), 31-35.
41. CHUNG-LIE WANG, Functional equation approach to inequalities, IV, J. Math. Anal. Appl. 86 (1982), 96-98.
42. CHUNG-LIE WANG, Functional equation approach to inequalities, VI, J. Math. Anal. Appl. 104 (1984), 95-102.
43. CHUNG-LIE WANG, A generalization of the HGA inequalities, Soochow J. Math. 6 (1980), 149-152.
44. CHUNG-LIE WANG, Inequalities and mathematical programming, in "General Inequalities 3" (Proceedings, Third International Conference on General Inequalities, Oberwolfach) (E. F. Beckenbach, Ed.), pp. 149-164, Birkhäuser, Basel/Stuttgart, 1983.
45. CHUNG-LIE WANG, Inequalities and mathematical programming, II, in "General Inequalities 4" (Proceedings, Fourth International Conference on General Inequalities, Oberwolfach) (W. Walter, Ed.), pp. 381-393, Birkhäuser, Basel/Stuttgart, 1984.
46. CHUNG-LIE WANG, A survey on basic inequalities, Notes Canad. Math. Soc. 12 (1980), 8-12.
47. CHUNG-LIE WANG, On development of inverses of the Cauchy and Hölder inequalities, SIAM Rev. 21 (1979), 550-557.
48. CHUNG-LIE WANG, An extension of a Bellman inequality, Utilitas Math. 8 (1975), 251-256.
49. CHUNG-LIE WANG, Inequalities of the Rado-Popoviciu type and their determinantal analysis, Chinese J. Math. 7 (1979), 103-112.
50. CHUNG-LIE WANG, Inequalities of the Rado-Popoviciu type for functions and their applications, J. Math. Anal. Appl. 100 (1984), 430-446.
51. CHUNG-LIE WANG, Characteristics of nonlinear positive functionals and their applications, J. Math. Anal. Appl. 95 (1983), 564-574.
52. CHUNG-LIE WANG, The principle and models of dynamic programming, J. Math. Anal. Appl. 118 (1986), 287-308.
53. CHUNG-LIE WANG, Beckenbach inequality and its variants, J. Math. Anal. Appl. 130 (1988), 252-256.
54. C. R. WYLIE, "Advanced Engineering Mathematics," 4th ed., McGraw-Hill, New York, 1975.