JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS 118, 287-308 (1986)

The Principle and Models of Dynamic Programming

CHUNG-LIE WANG*

Department of Mathematics and Statistics, University of Regina,
Regina, Saskatchewan S4S 0A2, Canada

Submitted by E. Stanley Lee
1. INTRODUCTION
The successful application of Operations Research to military problems during World War II (e.g., see Moder and Elmaghraby [38, p. 1]) has led over the past forty years to the development of a variety of sophisticated mathematical programming techniques to treat management and industrial problems (e.g., see Moder and Elmaghraby [38, 39], Ralston and Reilley [41]). In particular, while no universal approach exists to nonlinear mathematical programming problems involving arbitrary objective functions of multiple variables, convex, quadratic, and geometric programming methods are now well developed for use with convex (or concave) functions (e.g., see Beveridge and Schechter [14], Duffin et al. [19], Luenberger [35], Rockafellar [43]). These developments have been accelerated by the evolution of computer technology and advances in numerical analysis and approximation. One of the most useful nonlinear mathematical programming methods appears to be dynamic programming (abbreviated DP). In the last thirty years, DP has developed many useful structures and has broadened its scope to include finite and infinite models and discrete and continuous models, as well as deterministic and stochastic models (e.g., see Bellman [8], and Bellman and Lee [13]). In fact, the DP concept has been used to solve a variety of optimization problems analytically and numerically: For the general problems see Aris [2], Bellman [6-8], Bellman and Dreyfus [11], and Beveridge and Schechter [14]; for the particular problems, see Angel and Bellman [1], Bellman [9, 10], Bellman and Kalaba [12], Bertsekas [15], Dano [16], Dreyfus [17], Dreyfus and Law [18],
* The author was supported in part by NSERC of Canada Grant A4091 and the President's NSERC Funds of the University of Regina.
Jacobson and Mayne [31], Kaufmann and Cruon [32], Larson [33], Lee [34], Mitten [37], Nemhauser [40], Tou [44], and White [58]. The DP approach has long been regarded as most suitable to optimization problems which are sequential and recursive in scheme (e.g., see Denardo [38], Wald [45]). A major objective of this paper is to demonstrate, by examples, that DP can also be used to treat optimization problems with nonsequential and/or nonrecursive schemes.

DP is conceptually simple: its foundation is the principle of optimality, which is spelled out by Bellman [8, p. 83] (see also Aris [2], Dreyfus [17], Denardo [38], Nemhauser [40]) as follows.

Principle of Optimality. An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

As observed by Bellman [8, p. 83], a proof of the principle of optimality by contradiction is immediate. (See also Aris [2, pp. 27, 133], Dreyfus [17, pp. 8, 14], Nemhauser [40, p. 33].) The key phrase of the principle is "state resulting from the first decision." In other words, the first decision is, for instance, to choose a constraint (or constraints) (e.g., see Wang [48]) or a transformation, to adopt Lagrange multipliers (e.g., see Dreyfus and Law [18], Nemhauser [40]), to modify the objective function, or whatever else; the state resulting from the first decision must vary to ensure that the remaining decisions constitute an optimal policy. (For certain particular situations, see, e.g., Aris [2], Beveridge and Schechter [14], White [58].) The principle of optimality yields all kinds of workable functional equations as models for problems arising in many fields. As stated in Bellman and Lee [13, p. 1], the basic form of the functional equation of DP is
f(p) = opt_q [H(p, q, f(T(p, q)))],
where p and q represent the state and decision vectors, respectively, T represents the transformation of the process, and f(p) represents the optimal return function with initial state p. (Here opt denotes max or min.) Furthermore, DP problems, just as other mathematical programming problems, may or may not possess constraints. However, in some cases we may introduce one or more transition constraints in order to facilitate solving the problems (e.g., see Wang [51, 55]).

DEFINITION. A transition constraint of a mathematical programming problem is an additional constraint consistent with the original constraint (or constraints), designed to facilitate solving the problem.
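As a computational illustration (a minimal sketch, not drawn from the cited sources; the integer budget and the three quadratic stage costs below are invented purely for illustration), the basic functional equation can be evaluated by backward induction whenever the states and decisions range over small finite sets:

    # Toy instance of f(p) = opt_q [g(q) + f(T(p, q))] with T(p, q) = p - q and opt = min.
    # Three stages; the stage costs g are illustrative only.
    def solve(budget, stage_costs):
        f = [0] * (budget + 1)                  # optimal return with no stages left
        for g in reversed(stage_costs):         # backward induction over the stages
            f = [min(g(q) + f[p - q] for q in range(p + 1))
                 for p in range(budget + 1)]
        return f[budget]

    print(solve(6, [lambda q: q * q, lambda q: 2 * q * q, lambda q: 3 * q * q]))

Each pass replaces f by the optimal return of one additional stage, which is exactly the recursive use of the principle of optimality.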
In this paper, we consider only problems of finite and infinite discrete models. In subsequent sections, we reestablish most of the known problems cited from [2, 3, 14, 40] using only two simple inequalities: the arithmetic and geometric mean (abbreviated AG) inequality

A ≥ G  (1)

and the Jensen inequality

f(A) ≤ [a f(x) + b f(y)]/(a + b)  (2)

with equalities holding if and only if x = y; where (a + b)A = ax + by, G^{a+b} = x^a y^b, f is a convex function, and a, b, x, y > 0. (Note: for the "only if" equality case of (2), f is also required to be strictly convex [42].) By so doing, we shall be pursuing a second objective of the paper: to study the use of simple inequalities to streamline the DP approach in solving optimization problems. We adopt the notations and symbols from the cited sources with little or no modification for the purpose of comparing our results with theirs directly.
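For example, with a = b = 1, x = 1, y = 4, (1) reads A = 5/2 ≥ G = 2, while (2) with the convex function f(x) = x^2 reads f(5/2) = 25/4 ≤ [f(1) + f(4)]/2 = 17/2.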
2. SEVERAL SIMPLE EXAMPLES

In this section, we choose several simple examples to demonstrate not only the versatility and novelty of DP but also the much broader scope of the DP approach in handling optimization problems. We show also how to introduce a suitable transition constraint to an optimization problem.

2.1. Consider the problem

φ_2(a) = min (4x_1^2 + 5x_2^3)  (3)

subject to

2x_1 + 3x_2 = a,  x_1, x_2 ≥ 0, a > 0.  (4)

Routinely, with φ_1(a) = a^2, we have

φ_2(a) = min_{x_2} [(a - 3x_2)^2 + 5x_2^3].  (5)

Setting x_2 = u - 3/5, a manipulation yields

(a - 3x_2)^2 + 5x_2^3 = f(u) + g(a),  (6)

where

f(u) = 5u^3 - (6a + 27/5)u  (7)

and

g(a) = (a + 9/5)^2 - 27/25.  (8)

Substituting (6) into (5), we have

φ_2(a) = min_u f(u) + g(a).  (9)

Rewriting (7), an application of the AG inequality (1) yields

f(u) + (2/25)(9 + 10a)^{3/2} = 5u^3 + 2(1/25)(9 + 10a)^{3/2} - (6a + 27/5)u
                             ≥ 3[5u^3 (1/25)^2 (9 + 10a)^3]^{1/3} - (6a + 27/5)u = 0

and

min_u f(u) = -(2/25)(9 + 10a)^{3/2}.  (10)

The minimum is attained at 5u^3 = (1/25)(9 + 10a)^{3/2}, or

u = (1/5)(9 + 10a)^{1/2}.

From (9), (10), follows

φ_2(a) = g(a) - (2/25)(9 + 10a)^{3/2}.

The minimum φ_2(a) is attained at

x_2 = (1/5)[(9 + 10a)^{1/2} - 3],  x_1 = (a - 3x_2)/2.
2.2. Consider the problem

φ* = min_{x_1, x_2} [ax_1 + b(x_1x_2)^{-1} + cx_2 + d],  a, b, c, d > 0.  (11)

We introduce a transition constraint for the problem (11) in two ways.

2.2.1. Consider the problem (11) subject to

x_1x_2 = t,  t > 0.  (12)

Combining

φ_2(t) = min_{x_2} [φ_1(t, x_2) + cx_2 + d]

with

φ_1(t, x_2) = min_{x_1x_2 = t} [ax_1 + b(x_1x_2)^{-1}] = atx_2^{-1} + bt^{-1},

the minimum

φ_2(t) = 2(act)^{1/2} + bt^{-1} + d

is attained at atx_2^{-1} = cx_2 or, with (12),

x_2 = (at/c)^{1/2},  x_1 = (ct/a)^{1/2}.  (13)

Furthermore, the minimum

min_t φ_2(t) = 3(abc)^{1/3} + d  (14)

is attained at (act)^{1/2} = bt^{-1} or

t = (b^2/ac)^{1/3}.  (15)

In turn, by (13) and (15), the minimum φ* is attained at

x_1 = (bc/a^2)^{1/3},  x_2 = (ab/c^2)^{1/3}.  (16)
2.2.2. Consider the problem (11) subject to

ax_1 + cx_2 = s,  s > 0.  (17)

Combining

φ_2(s) = min_{x_2} [φ_1(s, x_2) + cx_2 + d]

with

φ_1(s, x_2) = min_{ax_1 + cx_2 = s} [ax_1 + b(x_1x_2)^{-1}] = s - cx_2 + ab[(s - cx_2)x_2]^{-1},

the minimum

φ_2(s) = s + 4abcs^{-2} + d

is attained at s - cx_2 = cx_2 or, with (17),

x_1 = s/2a,  x_2 = s/2c.  (18)

Furthermore, the minimum φ* given in (14) is attained at

s/2 = abc(s/2)^{-2}  or  s/2 = (abc)^{1/3}.  (19)

In turn, by (18) and (19), the minimum φ* is attained again at x_1 and x_2 given in (16).
3. A GENERALIZED AG INEQUALITY

Consider the problem

φ_n(a) = min Σ a_j x_j^{b_j}  (20)

subject to

Π x_j^{c_j} = a,  a > 0, a_j, b_j, c_j > 0, x_j ≥ 0, 1 ≤ j ≤ n.  (21)

Here and in what follows Σ and Π are used to designate Σ_{j=1}^n and Π_{j=1}^n whenever confusion is unlikely to occur. We introduce also

s_k = Σ_{j=1}^k c_j/b_j,  t_k = Π_{j=1}^k (a_j b_j/c_j)^{c_j/b_j},  1 ≤ k ≤ n.

Since

φ_1(a) = a_1 a^{b_1/c_1} = s_1(t_1 a)^{1/s_1},  (22)

φ_2(a) = min_{x_2} [s_1(t_1 a x_2^{-c_2})^{1/s_1} + a_2 x_2^{b_2}].  (23)

The minimum

φ_2(a) = s_2(t_2 a)^{1/s_2}

is attained at

(t_1 a x_2^{-c_2})^{1/s_1} = (a_2 b_2/c_2) x_2^{b_2},

which is equivalent to

(a_1 b_1/c_1) x_1^{b_1} = (a_2 b_2/c_2) x_2^{b_2} = (t_2 a)^{1/s_2}.

In general we can readily derive φ_{k+1} from φ_k for any k ≥ 1 by exactly the same procedure as above ((22) and (23)) to derive φ_2 from φ_1. So we obtain inductively that the minimum

φ_n(a) = s_n(t_n a)^{1/s_n}  (24)

is attained at

(a_1 b_1/c_1) x_1^{b_1} = ... = (a_n b_n/c_n) x_n^{b_n} = (t_n a)^{1/s_n}.  (25)

It is now clear that a generalized AG inequality

Σ a_j x_j^{b_j} ≥ s_n(t_n Π x_j^{c_j})^{1/s_n}  (26)

follows from (20), (21), and (24). Equality holds in (26) if and only if the x_j satisfy (25).

Remark. Compare the AG inequality (26) with the results given in Beckenbach and Bellman [4, p. 6], Iwamoto [27, 28], and Wang [47, 48].
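For example, taking n = 2, a_1 = a_2 = 1, b_1 = b_2 = 2, and c_1 = c_2 = 1 gives s_2 = 1 and t_2 = 2, so that (26) reduces to the familiar two-term AG inequality x_1^2 + x_2^2 ≥ 2x_1x_2.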
4. INEQUALITY OF WEIGHTED MEANS

The monotonicity of weighted means M(r) provides a very useful inequality: for any real numbers s < t,

M(s) ≤ M(t),  (27)

where

M(r) = (Σ b_j x_j^r / Σ b_j)^{1/r},  b_j, x_j > 0.

A proof of (27) can be found in [4, 23, 35]. Here we generalize the inequality (27) through the DP approach (see also Wang [50]). In doing so, we consider the problem

opt Σ a_j x_j^t  (28)

subject to

Σ b_j x_j^s = a,  a > 0, a_j, b_j > 0, x_j > 0.  (29)

There are two cases of the problem (28)-(29) to be considered: opt = min for 0 < s < t or s < 0 < t, and opt = max for s < t < 0. Consider first the problem

φ_n(a) = min Σ a_j x_j^t  (30)

subject to (29). In this case, the convexity of the function x^{t/s} for 0 < s < t or s < 0 < t is used together with the Jensen inequality (2). Set

μ_j = (a_j^s / b_j^t)^{1/(s-t)},  λ_k = Σ_{j=1}^k μ_j,  1 ≤ k ≤ n.

Since

φ_1(a) = min_{b_1 x_1^s = a} a_1 x_1^t = λ_1(a/λ_1)^{t/s},

φ_2(a) = min_{x_2} [φ_1(a - b_2 x_2^s) + a_2 x_2^t]

       = min_{x_2} [μ_1((a - b_2 x_2^s)/μ_1)^{t/s} + μ_2(b_2 x_2^s/μ_2)^{t/s}]

       ≥ min_{x_2} λ_2 [(a - b_2 x_2^s + b_2 x_2^s)/λ_2]^{t/s} = λ_2(a/λ_2)^{t/s}.

The minimum

φ_2(a) = λ_2(a/λ_2)^{t/s}

is attained at

(a - b_2 x_2^s)/μ_1 = b_2 x_2^s/μ_2.

Using the same argument as mentioned in the previous section, we obtain inductively that the minimum

φ_n(a) = λ_n(a/λ_n)^{t/s}  (31)

is attained at

b_1 x_1^s/μ_1 = ... = b_n x_n^s/μ_n = a/λ_n.  (32)

For the case s < t < 0, the above result can be duplicated by using the concavity of the function x^{t/s} and opt = max.

It is now clear that the inequality (27) follows from (30), (29), and (31) with a_j = b_j (1 ≤ j ≤ n). Equality holds in (27) if and only if x_1 = ... = x_n (by (32)).

Remark. Equations (30), (29), and (31) produce an inequality which generalizes not only the inequality (27) of weighted means but also the Hölder inequality (e.g., see Wang [56, 57] or below).
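For example, with n = 2, b_1 = b_2 = 1, x_1 = 1, x_2 = 4, s = 1, and t = 2, (27) asserts M(1) = 5/2 ≤ M(2) = (17/2)^{1/2}, which holds since 25/4 < 17/2.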
5. PROBLEMS CONCERNING CROSS-CURRENT EXTRACTION

Regarding the n-stage cross-current extraction process, we refer to Beveridge and Schechter [14, pp. 222, 254, 664, 682] for details. For our purpose, we consider only the problem

max Σ y_i  (33)

where

y_i = p_{i+1} - p_i - Bq_i,  1 ≤ i ≤ n,

and

p_{i+1} = p_i(1 + m'q_i),  1 ≤ i ≤ n.  (34)

Here B and m' are constants while p_i and q_i are variables as indicated in [14, p. 254].

Using the principle of optimality, we obtain the recurrence relation for k > 1

M_k(p_{k+1}) = max Σ_{i=1}^k y_i = max_{q_k} [M_{k-1}(p_k) + y_k].

In particular,

M_1(p_2) = max_{q_1} [p_2 - p_2(1 + m'q_1)^{-1} - Bq_1]

         = max_{q_1} [p_2 - p_2(1 + m'q_1)^{-1} - (B/m')(1 + m'q_1) + B/m'].

The maximum

M_1(p_2) = p_2 - 2(B/m')^{1/2} p_2^{1/2} + B/m'

is attained at p_2(1 + m'q_1)^{-1} = (B/m')(1 + m'q_1), or

q_1 = (1/m')[(m'p_2/B)^{1/2} - 1]

with p_1^2 = (B/m')p_2.
For k = 2, we have

M_2(p_3) = max_{q_2} [M_1(p_3(1 + m'q_2)^{-1}) + y_2].

The maximum

M_2(p_3) = p_3 - 3(B/m')^{2/3} p_3^{1/3} + 2B/m'

is attained at

(B/m')^{1/2} p_2^{1/2} = (B/m')(1 + m'q_2)

or

q_2 = (1/m')[(m'p_3/B)^{1/3} - 1]

with p_2^3 = (B/m')p_3^2.

Using the same argument as above, we obtain inductively that the maximum

M_n(p_{n+1}) = p_{n+1} - (n + 1)(B/m')^{n/(n+1)} p_{n+1}^{1/(n+1)} + nB/m'

is attained at

q_i = (1/m')[(m'p_{n+1}/B)^{1/(n+1)} - 1],  1 ≤ i ≤ n.
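For example, for a single stage (n = 1) with B/m' = 1 and p_2 = 4, the above gives M_1(p_2) = 4 - 2·2 + 1 = 1, attained at q_1 = (1/m')[(4)^{1/2} - 1] = 1/m'.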
6. PROBLEMS CONCERNING CHEMICAL REACTION

Problems of chemical reaction in a sequence of stirred tanks have been intensively studied by Aris [2]. For our purpose, we cite only the following problem from Aris [2, p. 20]:

min Σ θ_j  (35)

subject to

θ_j ≥ 0,  c_1 = γ,  (36)

where

c_{j+1}[1 + θ_j(k_1 + k_2)] = c_j + k_2 θ_j,  1 ≤ j ≤ n.  (37)

Here k_1, k_2, γ are parameters (or constants) while c_j and θ_j are variables as indicated in [2, p. 20].

Using the principle of optimality, we obtain the recurrence relation for k > 1

f_k(c_{k+1}) = min_{c_k} [f_{k-1}(c_k) + θ_k].  (38)

Rewriting (37), 1 ≤ j ≤ n, where (k_1 + k_2)c_e = k_2, we have

θ_j = (1/(k_1 + k_2))[(c_e - c_j)/(c_e - c_{j+1}) - 1].  (39)

From (38) and (39), follows

f_2(c_3) = min_{c_2} [f_1(c_2) + θ_2]

         = (1/(k_1 + k_2)) min_{c_2} [(c_e - γ)/(c_e - c_2) + (c_e - c_2)/(c_e - c_3) - 2].

The minimum

f_2(c_3) = (2/(k_1 + k_2))[((c_e - γ)/(c_e - c_3))^{1/2} - 1]

is attained at

(c_e - γ)/(c_e - c_2) = (c_e - c_2)/(c_e - c_3)

or θ_1 = θ_2 = (1/2) f_2(c_3).

Finally, we conclude by the same argument as used above that the minimum

f_n(c_{n+1}) = (n/(k_1 + k_2))[((c_e - γ)/(c_e - c_{n+1}))^{1/n} - 1]

is attained at

θ_1 = ... = θ_n = (1/(k_1 + k_2))[((c_e - γ)/(c_e - c_{n+1}))^{1/n} - 1].

Remark. Concerning various modifications of the problem, consult Aris [2] for complete details. It should also be noted that there are misprints on page 26 of [2].
7. OPTIMAL ALLOCATION ON SAMPLING

Optimal allocation of sample sizes in multivariate stratified random sampling has been carefully studied in Arthanari and Dodge [3]. For our purpose, we cite only the following problem from [3, p. 30]:

φ_n(a) = min Σ C_j/x_j  (40)

subject to

Σ a_j x_j = a,  a > 0, a_j, x_j > 0, 1 ≤ j ≤ n.  (41)

Using the principle of optimality, we obtain the recurrence relation for k > 1

φ_k(a) = min_{x_k} [φ_{k-1}(a - a_k x_k) + C_k/x_k].  (42)

In particular,

φ_1(a) = min_{a_1 x_1 = a} C_1/x_1 = w_1/a,

where

w_k = [w_{k-1}^{1/2} + (C_k a_k)^{1/2}]^2,  k = 1, 2, ...;  w_0 = 0.  (43)
From (42) and (43), follows

φ_2(a) = min_{x_2} [w_1/(a - a_2 x_2) + C_2/x_2]

       = min_{x_2} [w_1/(a - a_2 x_2) + C_2 a_2/(a_2 x_2)]

       ≥ [w_1^{1/2} + (C_2 a_2)^{1/2}]^2/(a - a_2 x_2 + a_2 x_2) = w_2/a.

The minimum

φ_2(a) = w_2/a

is attained at

w_1^{1/2}/(a - a_2 x_2) = (C_2 a_2)^{1/2}/(a_2 x_2)

or

a_1 x_1/(C_1 a_1)^{1/2} = a_2 x_2/(C_2 a_2)^{1/2} = a/w_2^{1/2}.
Finally, we conclude by the same argument as used above that the minimum

φ_n(a) = w_n/a

is attained at

x_j = a(C_j a_j)^{1/2} / (a_j Σ (C_i a_i)^{1/2}),  1 ≤ j ≤ n.
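For example, with n = 2, C_1 = C_2 = 1, a_1 = a_2 = 1, and a = 2, (43) gives w_1 = 1 and w_2 = 4, so that φ_2(2) = 2, attained at x_1 = x_2 = 1.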
8. INEQUALITIES AND DP

Inequalities such as the AG inequality, the Hölder inequality, and the Minkowski inequality, etc., can be established by the functional equation approach of DP (e.g., see Beckenbach and Bellman [4, p. 6], Iwamoto [24-30], and Wang [47-53]). In the above, we demonstrated that DP problems can be inductively established by the simple inequalities (1) or (2). In fact, as studied in Wang [54, 55], inequalities can be used to establish DP problems directly.

8.1. For the problem (20)-(21), we apply the AG inequality (see Wang [54, p. 157] for complete details).
8.2. For the problem (28)-(29), we apply a generalized Hölder inequality (e.g., see Wang [57, p. 554]):

Σ a_j x_j^t ≥ λ_n(Σ b_j x_j^s / λ_n)^{t/s}  (44)

for 0 < s < t or s < 0 < t, where λ_n is given above. The sign of the inequality in (44) is reversed for s < t < 0. In both cases equality holds if and only if the x_j satisfy (32). Using (44) we establish the problem (28)-(29) exactly as indicated by (31) and (32).
8.3. For the problem (33)-(34), by using (34) recursively we have

p_{n+1} = p_1 Π (1 + m'q_i).  (45)

Regrouping the summation in (33) using (45), and applying the AG inequality, we obtain

Σ y_i ≤ p_{n+1} - (n + 1)(B/m')^{n/(n+1)} p_{n+1}^{1/(n+1)} + nB/m' = M_n(p_{n+1}).

The maximum max Σ y_i = M_n(p_{n+1}) is attained at

(B/m')(1 + m'q_1) = ... = (B/m')(1 + m'q_n) = p_{n+1} Π (1 + m'q_i)^{-1}

or

q_1 = ... = q_n = (1/m')[(m'p_{n+1}/B)^{1/(n+1)} - 1]

with

p_1^{n+1} = (B/m')^n p_{n+1},  p_1 = p_{n+1} Π (1 + m'q_i)^{-1}.
8.4. For the problem (35)-(36), we apply the AG inequality to the summation

Σ θ_j = (1/(k_1 + k_2)) Σ [(c_e - c_j)/(c_e - c_{j+1}) - 1].

The minimum min Σ θ_j = f_n(c_{n+1}) is attained at θ_1 = ... = θ_n.
8.5. For the problem (40)-(41), rewriting the summation in (40) as follows,

Σ C_j/x_j = Σ C_j a_j/(a_j x_j),  (46)

and applying the arithmetic and harmonic inequality to the right-hand side of (46), we have

Σ C_j/x_j ≥ (Σ (C_j a_j)^{1/2})^2 / Σ a_j x_j = w_n/a,

where w_n is given above. The minimum min Σ C_j/x_j = w_n/a is attained at

a_1 x_1/(C_1 a_1)^{1/2} = ... = a_n x_n/(C_n a_n)^{1/2} = a/w_n^{1/2}.
9. BELLMAN-NEMHAUSER MODEL PROBLEMS
Although we have directly established in the previous section the DP problems given in Sections 3-7 by using pertinent inequalities, there are
certain DP problems which cannot be solved by using inequalities alone. In this regard, we consider a Bellman-Nemhauser model problem with an infinite number of decisions as follows:

min Σ_{n=1}^∞ [C_1 d_n^p + C_2(x_n - d_n)^p],  C_1, C_2 > 0,  p > 1,  (47)

subject to

0 ≤ d_n ≤ x_n,  x_{n+1} = b(x_n - d_n),  0 < b < 1,  n = 1, 2, ....  (48)

Using the principle of optimality, we obtain the recurrence relation for k > 1

f_k(x_k) = min_{d_k} [C_1 d_k^p + C_2(x_k - d_k)^p + f_{k-1}(b(x_k - d_k))].  (49)
In particular,

f_1(x_1) = min_{d_1} [C_1 d_1^p + C_2(x_1 - d_1)^p] = k_1 x_1^p,  (50)

where k_1 = C_1 C_2/C^{p-1} with

C = C_1^{1/(p-1)} + C_2^{1/(p-1)}.

The minimum f_1(x_1) = k_1 x_1^p is attained at

d_1/C_2^{1/(p-1)} = (x_1 - d_1)/C_1^{1/(p-1)}

or

d_1 = C_2^{1/(p-1)} x_1/C.  (51)
From (49) and (50), follows

f_2(x_2) = min_{d_2} [C_1 d_2^p + C_2(x_2 - d_2)^p + f_1(b(x_2 - d_2))]

         = min_{d_2} [C_1 d_2^p + (C_2 + k_1 b^p)(x_2 - d_2)^p]

         ≥ k_2 x_2^p,  (52)

where

k_2 = C_1(C_2 + k_1 b^p)/[C_1^{1/(p-1)} + (C_2 + k_1 b^p)^{1/(p-1)}]^{p-1}.

Replacing C_2 by C_2 + k_1 b^p in (50) and (51) and noting (52), it is easy to see that the minimum f_2(x_2) = k_2 x_2^p is attained at

d_2 = (C_2 + k_1 b^p)^{1/(p-1)} x_2 / [C_1^{1/(p-1)} + (C_2 + k_1 b^p)^{1/(p-1)}].

In general, using the same argument as given above, we conclude inductively that the minimum

f_n(x_n) = k_n x_n^p,  (53)

where

k_n = C_1(C_2 + k_{n-1} b^p)/[C_1^{1/(p-1)} + (C_2 + k_{n-1} b^p)^{1/(p-1)}]^{p-1},  (54)

n = 1, 2, ..., with k_0 = 0, is attained at

d_n = (C_2 + k_{n-1} b^p)^{1/(p-1)} x_n / [C_1^{1/(p-1)} + (C_2 + k_{n-1} b^p)^{1/(p-1)}].
Finally, passing to the limits, we obtain from (53) and (54)

f(x) = kx^p,

where k can be found from

k = C_1(C_2 + kb^p)/[C_1^{1/(p-1)} + (C_2 + kb^p)^{1/(p-1)}]^{p-1}

by the Newton-Raphson method (e.g., see Beveridge and Schechter [14, p. 56]).
Remark. For p = 2, our result coincides with that of Nemhauser [40, p. 214] and includes that of Gue and Thomas [22, p. 176] as a special case. Various numerical results for the case p = 2 can also be found in [22, 40]. For 0 < p < 1, we can readily treat the problem (47)-(48) as a maximum problem and obtain the same result.
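As a numerical illustration (a minimal sketch, not part of the original development; the values of C_1, C_2, b, and p below are illustrative, and a plain fixed-point iteration of (54) is used here in place of the Newton-Raphson method mentioned above), the limiting coefficient k may be approximated as follows:

    # Iterate k_n = C1(C2 + k_{n-1} b^p) / [C1^{1/(p-1)} + (C2 + k_{n-1} b^p)^{1/(p-1)}]^{p-1}
    # from k_0 = 0; the iterates converge to the coefficient k in f(x) = k x^p.
    C1, C2, b, p = 1.0, 1.0, 0.5, 2.0           # illustrative data
    k = 0.0
    for _ in range(200):
        A = C2 + k * b ** p
        k = C1 * A / (C1 ** (1 / (p - 1)) + A ** (1 / (p - 1))) ** (p - 1)
    print(k)

Since each iterate is exactly k_n of (54) started from k_0 = 0, the printed value approximates the root of the limiting equation for k.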
10. CONCLUDING REMARKS
In the results given above we have demonstrated that the simple inequalities (1) and (2) can be used to establish a wide range of DP problems in a unified and concise manner. For our inequality approach, we have not required the functions adopted in the problems to be differentiable. On the other hand, those DP problems solved in [2, 3, 14, 22, 40] by using the direct calculus method or the Lagrange multiplier method require the assumption of the differentiability of functions. The problem (47)-(48) for 0 < p < 1 is indeed a special case of the famous model of the DP problem introduced by Bellman [8, pp. 11, 44]:
f(x) = max_{0 ≤ y ≤ x} [g(y) + h(x - y) + f(ay + b(x - y))].  (55)
The problem (55) has been intensively studied by Bellman [8] in many respects. Very recently, Iwamoto [28] has given two inverse versions of the Bellman allocation process (55). In this connection Bellman and Lee [13] have shown more than one way to approach functional equations (such as (55)) in DP. In Section 9, we briefly mentioned an approximation technique. For many DP problems, approximation techniques produce many useful numerical solutions (e.g., see Bellman [7, 8], Dreyfus and Law [18], Lee [34], Nemhauser [40]). Finally, it might be worthwhile to point out that by employing the concept of a transition constraint, a further development of the idea of solving DP problems by pertinent inequalities appears to be promising (e.g., see Wang [51, 54, 55]).
ACKNOWLEDGMENT

The author wishes to thank Professor E. Stanley Lee for his suggestion of shortening the original manuscript by deleting some of the examples.
REFERENCES

1. E. ANGEL AND R. BELLMAN, "Dynamic Programming and Partial Differential Equations," Academic Press, New York, 1972.
2. R. ARIS, "Discrete Dynamic Programming," Ginn (Blaisdell), Waltham, Mass., 1964.
3. T. S. ARTHANARI AND Y. DODGE, "Mathematical Programming in Statistics," Wiley, New York, 1981.
4. E. F. BECKENBACH AND R. BELLMAN, "Inequalities," 2nd rev. ed., Springer-Verlag, Berlin, 1965.
5. R. BELLMAN, Some aspects of the theory of dynamic programming, Scripta Math. 21 (1955), 273-277.
6. R. BELLMAN, Some problems in the theory of dynamic programming, Econometrica 22 (1954), 37-48.
7. R. BELLMAN, "Some Vistas of Modern Mathematics: Dynamic Programming, Invariant Imbedding, and the Mathematical Biosciences," Univ. of Kentucky Press, Lexington, 1968.
8. R. BELLMAN, "Dynamic Programming," Princeton Univ. Press, Princeton, N.J., 1957.
9. R. BELLMAN, "Adaptive Control Processes," Princeton Univ. Press, Princeton, N.J., 1961.
10. R. BELLMAN, Dynamic programming and inverse optimal problems in mathematical economics, J. Math. Anal. Appl. 29 (1970), 424-428.
11. R. BELLMAN AND S. E. DREYFUS, "Applied Dynamic Programming," Princeton Univ. Press, Princeton, N.J., 1962.
12. R. BELLMAN AND R. KALABA, An inverse problem in dynamic programming and automatic control, J. Math. Anal. Appl. 7 (1963), 322-325.
13. R. BELLMAN AND E. S. LEE, Functional equations in dynamic programming, Aequationes Math. 17 (1978), 1-18.
14. G. S. BEVERIDGE AND R. S. SCHECHTER, "Optimization: Theory and Practice," McGraw-Hill, New York, 1970.
15. D. P. BERTSEKAS, "Dynamic Programming and Stochastic Control," Academic Press, New York, 1976.
16. S. DANO, "Nonlinear and Dynamic Programming: An Introduction," Springer-Verlag, New York, 1975.
17. S. E. DREYFUS, "Dynamic Programming and the Calculus of Variations," Academic Press, New York, 1965.
18. S. E. DREYFUS AND A. M. LAW, "The Art and Theory of Dynamic Programming," Academic Press, New York, 1977.
19. R. J. DUFFIN, E. L. PETERSON, AND C. ZENER, "Geometric Programming--Theory and Application," Wiley, New York, 1967.
20. A. V. FIACCO AND G. P. MCCORMICK, "Nonlinear Programming: Sequential Unconstrained Minimization Techniques," Wiley, New York, 1968.
21. W. GELLERT et al. (Eds.), "The VNR Concise Encyclopedia of Mathematics," Van Nostrand, New York, 1977.
22. R. L. GUE AND M. E. THOMAS, "Mathematical Methods in Operations Research," Macmillan Co., New York, 1968.
23. G. H. HARDY, J. E. LITTLEWOOD, AND G. POLYA, "Inequalities," 2nd ed., Cambridge Univ. Press, Cambridge, 1952.
24. S. IWAMOTO, Inverse theorem in dynamic programming, I, J. Math. Anal. Appl. 58 (1977), 113-134.
25. S. IWAMOTO, Inverse theorem in dynamic programming, II, J. Math. Anal. Appl. 58 (1977), 247-279.
26. S. IWAMOTO, Inverse theorem in dynamic programming, III, J. Math. Anal. Appl. 58 (1977), 439-448.
27. S. IWAMOTO, Dynamic programming approach to inequalities, J. Math. Anal. Appl. 58 (1977), 687-704.
28. S. IWAMOTO, Inverse functional equations for Bellman's allocation process, Bull. Inform. Cybernet. 20, No. 3-4 (1983), 57-68.
29. S. IWAMOTO, A new inversion of continuous-time optimal control processes, J. Math. Anal. Appl. 82 (1981), 49-65.
30. S. IWAMOTO AND C.-L. WANG, Continuous dynamic programming approach to inequalities, J. Math. Anal. Appl. 96 (1983), 119-129.
31. D. H. JACOBSON AND D. Q. MAYNE, "Differential Dynamic Programming," Amer. Elsevier, New York, 1970.
32. A. KAUFMANN AND R. CRUON, "Dynamic Programming: Sequential Scientific Management," Academic Press, New York, 1967.
33. R. E. LARSON, "State Increment Dynamic Programming," Amer. Elsevier, New York, 1968.
34. E. S. LEE, "Quasilinearization and Invariant Imbedding," Academic Press, New York, 1968.
35. D. G. LUENBERGER, "Introduction to Linear and Nonlinear Programming," Addison-Wesley, Reading, Mass., 1973.
36. D. S. MITRINOVIC, "Analytic Inequalities," Springer-Verlag, Berlin, 1970.
37. L. G. MITTEN, Composition principles for synthesis of optimal multistage processes, Oper. Res. 12 (1964), 610-619.
38. J. J. MODER AND S. E. ELMAGHRABY (Eds.), "Handbook of Operations Research: Foundations and Fundamentals," Vol. 1, Van Nostrand-Reinhold, New York, 1978.
39. J. J. MODER AND S. E. ELMAGHRABY (Eds.), "Handbook of Operations Research: Models and Applications," Vol. 2, Van Nostrand-Reinhold, New York, 1978.
40. G. L. NEMHAUSER, "Introduction to Dynamic Programming," Wiley, New York, 1967.
41. A. RALSTON AND E. D. REILLEY, JR. (Eds.), "Encyclopedia of Computer Science and Engineering," 2nd ed., Van Nostrand, New York, 1983.
42. A. W. ROBERTS AND D. E. VARBERG, "Convex Functions," Academic Press, New York, 1973.
43. R. T. ROCKAFELLAR, "Convex Analysis," Princeton Univ. Press, Princeton, N.J., 1970.
44. J. T. TOU, "Optimum Design of Digital Control Systems," Academic Press, New York, 1963.
45. A. WALD, "Statistical Decision Functions," Wiley, New York, 1950.
46. C.-L. WANG, Convexity and inequalities, J. Math. Anal. Appl. 72 (1979), 355-361.
47. C.-L. WANG, Functional equation approach to inequalities, J. Math. Anal. Appl. 71 (1979), 423-430.
48. C.-L. WANG, Functional equation approach to inequalities, II, J. Math. Anal. Appl. 78 (1980), 522-530.
49. C.-L. WANG, Functional equation approach to inequalities, III, J. Math. Anal. Appl. 80 (1981), 31-35.
50. C.-L. WANG, Functional equation approach to inequalities, IV, J. Math. Anal. Appl. 86 (1982), 96-98.
51. C.-L. WANG, Functional equation approach to inequalities, V, submitted for publication.
52. C.-L. WANG, Functional equation approach to inequalities, VI, J. Math. Anal. Appl. 104 (1984), 95-102.
53. C.-L. WANG, A generalization of the HGA inequalities, Soochow J. Math. 6 (1980), 149-152.
54. C.-L. WANG, Inequalities and mathematical programming, in "General Inequalities 3" (Proceedings, Third International Conference on General Inequalities, Oberwolfach) (E. F. Beckenbach, Ed.), pp. 149-164, Birkhäuser, Basel/Stuttgart, 1983.
55. C.-L. WANG, Inequalities and mathematical programming, II, Gen. Inequalities (1984), 381-393.
56. C.-L. WANG, A survey on basic inequalities, Notes Canad. Math. Soc. 12 (1980), 8-12.
57. C.-L. WANG, On development of inverses of the Cauchy and Hölder inequalities, SIAM Rev. 21 (1979), 550-557.
58. D. J. WHITE, "Dynamic Programming," Oliver & Boyd/Holden-Day, London/San Francisco, 1969.