Int. J. Engng Sci. Vol. 4, pp. 583-604. Pergamon Press 1966. Printed in Great Britain
SOME SUFFICIENT CONDITIONS FOR CONTINUOUS LINEAR-PROGRAMMING PROBLEMS†

A. LARSEN and E. POLAK

University of California, Berkeley, U.S.A.

(Communicated by L. A. ZADEH)
Abstract. This paper treats continuous linear programming problems with time delays in the constraints, which are expressed in the form of difference-differential inequalities. Lagrange multiplier functions are used to adjoin the constraints to the cost functional and to compute, by an explicit algorithm, a candidate solution. It is proved that if this solution is feasible, then it is also optimal. A similar method is developed for dealing with dual problems. When both the primal and the dual problems have solutions, the relationship between their solutions is exhibited and it is shown how, in some cases, solutions to the dual problem can be constructed from the solutions to the primal problem and vice versa. Finally, the differences between continuous linear programming problems and problems of classical calculus of variations and modern optimal control are discussed. The main advantage of the method proposed in this paper for computing trial solutions is that it is quite straightforward and simple, and that it does not depend on the simultaneous existence of solutions to both the primal and dual problems.
1. INTRODUCTION

The class of optimization problems known as continuous linear programming problems, and sometimes referred to as bottleneck problems, has received considerable attention in the literature [1-3]. Because of their formal similarity to ordinary linear programming problems, it appeared that duality considerations might be helpful in searching for solutions. Thus, what to the authors' knowledge was the first sufficient condition for the optimality of a feasible solution to a continuous linear programming problem was obtained by Bellman [1]. He formulated a dual problem and showed that if the dual problem has a solution then the extreme values of the primal and dual functionals are the same. He subsequently used this approach in many other papers, such as [3], to obtain optimal solutions for various problems in economics, engineering, business, etc. An unfortunate characteristic of this duality method, however, is that it takes a great deal of intuition to apply it successfully. More recently, Tyndall, in his Ph.D. dissertation [2], developed a very useful duality theorem which guarantees that for any problem satisfying readily checked assumptions, not only does an optimal solution exist, but the corresponding dual problem also has an optimal solution. However, he does not provide an algorithm for computing an optimal solution. Tyndall [2] showed that the existence of a solution to the primal problem does not guarantee the existence of a solution to the dual problem and vice versa, a major difference from ordinary linear programming problems. Thus, it should be stressed that the range of the duality methods discussed above is restricted to problems whose duals also have solutions.

We begin this paper by reformulating the general primal and dual continuous linear programming problems with time delays in the constraints into a form that is somewhat
† The research herein was supported by the National Science Foundation under Grant GP-2413 and by the National Aeronautics and Space Administration under Grant NsG-354 (S-2).
more convenient for us. Thus, treating the primal and dual problems separately, we construct functions which are optimal solutions, provided they are feasible and provided some additional simple conditions are satisfied. When the functions we obtain are not optimal, we have no specific method for finding optimal solutions. Thus, the main advantage of the method proposed is that it is constructive and does not depend on the simultaneous existence of solutions to the primal and dual problems. We also give sufficient conditions for the existence of a solution for the dual problem when the primal has a solution and vice versa. When both the primal and dual problems are solvable, we exhibit the related solutions. It will be seen that for the set of problems tractable by both methods, the hypotheses used in this paper are sometimes less restrictive than Tyndall’s [2]. We conclude the paper by discussing the relationship between the continuous linear programming problems considered and those treated in optimal control and in the classical theory of the calculus of variations. An example is worked out in the appendix.
2. NOTATION
For the most part we shall use standard mathematical notation. Matrices are denoted by capital letters such as $B$, $C$, $D$. Small letters such as $a$, $b$, $\delta$ denote vectors. No distinction is made between row and column vectors; the meaning will always be clear from the context. Let $x$ be an $n$-component vector, and let $x_i$ denote the $i$th component of $x$. We write $x \ge 0 \Leftrightarrow x_i \ge 0$ for all $i$, and $x > 0 \Leftrightarrow x_i > 0$ for all $i$. If $x$ and $y$ are $n$-vectors then $xy$ will denote the scalar product. We define the function

$$1(x_i) = \begin{cases} 1 & \text{if } x_i > 0, \\ 0 & \text{if } x_i \le 0. \end{cases}$$

If $A$ is an $n \times n$ matrix then $A^i$ will denote the $i$th row of $A$, and $A_j$ will denote the $j$th column of $A$.
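As a concrete rendering of this notation (an illustrative aside, not part of the original paper), the componentwise indicator $1(\cdot)$ and the row/column conventions translate directly into NumPy:

```python
import numpy as np

def indicator(x):
    # componentwise 1(x_i): 1 where x_i > 0, 0 where x_i <= 0
    return (np.asarray(x) > 0).astype(float)

x = np.array([0.5, -1.0, 0.0])
print(indicator(x))               # [1. 0. 0.]

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
row_i = A[0, :]                   # A^i: the ith row of A (here i = 0)
col_j = A[:, 1]                   # A_j: the jth column of A (here j = 1)
xy = float(np.dot(np.array([1.0, 2.0]), np.array([3.0, -1.0])))  # scalar product
print(xy)                         # 1.0
```

The indicator applied componentwise to a vector is exactly what the diagonal matrices $\Lambda(t)$, $\Delta(t)$, $M(t)$, $N(t)$ of the later sections are built from.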
3. STATEMENT OF THE PROBLEM
We begin by stating the primal and dual problems to be considered in a form with inequality constraints, partly so as to be able to prove the relation between the primal and dual problems, and partly to exhibit the similarities and differences between these problems and the problems treated by Bellman [1] and by Tyndall [2]. We shall then restate these problems in a form, more convenient for us, with differential equation constraints and slack variables.

Let $a(\cdot)$ and $b(\cdot)$ be bounded measurable functions mapping the real line $R$ into a real $n$-dimensional vector space $R^n$. Let $B$ be a constant, $n \times n$, nonsingular matrix such that at least one of the following two statements holds: (i) all the elements of $B$ are nonnegative, (ii) all the elements of $B^{-1}$ are nonnegative. Let $C$ and $D$ be constant $n \times n$ matrices. Finally, let $X$ denote the set of all measurable vector-valued functions $x(\cdot) = (x_1(\cdot), \ldots, x_n(\cdot))$ mapping $R$ into $R^n$, with bounded components $x_i$, $i = 1, 2, \ldots, n$, which satisfy the following three conditions: for $t \in [0, T]$,

$$x(t) \ge 0, \tag{1}$$

$$Bx(t) \le b(t) + \int_0^t Cx(s)\,ds + \int_0^t Dx(s-1)\,ds, \tag{2}$$

$$x(t) = 0 \quad \text{for } t < 0. \tag{3}$$

Similarly, let $Y$ denote the set of all measurable vector-valued functions $y(\cdot) = (y_1(\cdot), \ldots, y_n(\cdot))$ mapping $R$ into $R^n$, with bounded components $y_i$, $i = 1, 2, \ldots, n$, which satisfy a dual set of three conditions: for $t \in [0, T]$,

$$y(t) \ge 0, \tag{4}$$

$$y(t)B \ge a(t) + \int_t^T y(s)C\,ds + \int_t^T y(s+1)D\,ds, \tag{5}$$

$$y(t) = 0 \quad \text{for } t > T. \tag{6}$$

We shall consider the following continuous linear programming problems:
Primal Problem

Find an $x^0 \in X$ such that

$$\int_0^T a(t)x^0(t)\,dt = \max_{x \in X} \int_0^T a(t)x(t)\,dt. \tag{7}$$

Dual Problem

Find a $y^0 \in Y$ such that

$$\int_0^T y^0(t)b(t)\,dt = \min_{y \in Y} \int_0^T y(t)b(t)\,dt. \tag{8}$$
The relation between the primal and dual problems stated above is directly analogous to the relation between ordinary primal and dual linear programming problems [4] and is summed up by Theorem 1.

THEOREM 1. If $x^0 \in X$ and $y^0 \in Y$ satisfy the condition

$$\int_0^T a(t)x^0(t)\,dt = \int_0^T y^0(t)b(t)\,dt, \tag{9}$$

then $x^0$ and $y^0$ are solutions to the primal and dual problems, respectively.

Proof. The following proof is an extension of the proof given by Bellman [1] for the case $D = 0$. Let $x \in X$ and $y \in Y$; then, by making use of (1), (2), (4) and (5), we obtain the following pair of inequalities:

$$\int_0^T a(t)x(t)\,dt \le \int_0^T y(t)Bx(t)\,dt - \int_0^T \left[\int_t^T y(s)C\,ds\right]x(t)\,dt - \int_0^T \left[\int_t^T y(s+1)D\,ds\right]x(t)\,dt, \tag{10}$$

$$\int_0^T y(t)b(t)\,dt \ge \int_0^T y(t)Bx(t)\,dt - \int_0^T y(t)\left[\int_0^t Cx(s)\,ds\right]dt - \int_0^T y(t)\left[\int_0^t Dx(s-1)\,ds\right]dt. \tag{11}$$
We shall now show that the right hand sides of (10) and (11) are equal. First, a simple change in the order of integration of the second term in the right hand side of (10) yields (after the dummy variables in the right hand side have been renamed):
$$-\int_0^T \left[\int_t^T y(s)C\,ds\right]x(t)\,dt = -\int_0^T y(t)\left[\int_0^t Cx(s)\,ds\right]dt. \tag{12}$$
Thus, the second terms of the right hand sides of (10) and (11) are equal. We now proceed to examine the third terms. Since $y(t) = 0$ for $t > T$, we obtain for the third term of the right hand side of (10):
$$-\int_0^T \left[\int_t^T y(s+1)D\,ds\right]x(t)\,dt = -\int_0^T \left[\int_t^{T-1} y(s+1)D\,ds\right]x(t)\,dt. \tag{13}$$

Since $x(t) = 0$ for $t < 0$, we now obtain (after renaming the dummy variables)

$$-\int_0^T \left[\int_t^{T-1} y(s+1)D\,ds\right]x(t)\,dt = -\int_0^T \left[\int_t^T y(s)\,ds\right]D\,x(t-1)\,dt. \tag{14}$$

Now, interchanging the order of integration, we obtain [see (11)]

$$-\int_0^T y(t)\left[\int_0^t Dx(s-1)\,ds\right]dt = -\int_0^T \left[\int_t^T y(s)\,ds\right]D\,x(t-1)\,dt. \tag{15}$$
Thus the right hand sides of (10) and (11) are equal, and hence for all $x \in X$ and $y \in Y$

$$\int_0^T a(t)x(t)\,dt \le \int_0^T y(t)b(t)\,dt. \tag{16}$$
Therefore, if there exists a pair of functions $x^0 \in X$, $y^0 \in Y$ satisfying (9), then this pair must be solutions to the primal and dual problems, respectively. This concludes the proof of the theorem.
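Inequality (16) can be checked numerically on a discretized instance. The following sketch uses assumed scalar data that do not appear in the paper ($n = 1$, $B = 1$, $C = D = 0$, $a(t) = b(t) = 1$, $T = 1$); with these data, $x$ is primal feasible iff $0 \le x(t) \le 1$ and $y$ is dual feasible iff $y(t) \ge 1$:

```python
import numpy as np

# Assumed illustrative instance: n = 1, B = 1, C = D = 0,
# a(t) = b(t) = 1 on [0, T], T = 1.
T, n_grid = 1.0, 1000
t = np.linspace(0.0, T, n_grid + 1)
dt = t[1] - t[0]

def primal_cost(x):
    # left Riemann sum for the integral of a(t) x(t) dt, with a = 1
    return float(np.sum(x[:-1]) * dt)

def dual_cost(y):
    # left Riemann sum for the integral of y(t) b(t) dt, with b = 1
    return float(np.sum(y[:-1]) * dt)

x_feas = 0.5 + 0.5 * np.sin(np.pi * t) ** 2   # stays in [0, 1]: primal feasible
y_feas = 1.0 + t ** 2                         # stays >= 1: dual feasible
assert primal_cost(x_feas) <= dual_cost(y_feas)   # weak duality, inequality (16)

# The pair x = 1, y = 1 is feasible on both sides with equal cost, i.e. it
# satisfies condition (9); by Theorem 1 both are then optimal.
print(primal_cost(np.ones_like(t)), dual_cost(np.ones_like(t)))
```

The assertion is exactly the statement of (16); the final printed pair of equal costs is an instance of condition (9).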
4. REFORMULATION OF THE PROBLEM
As is usually the case with standard linear programming problems, we find it more convenient to deal with equality rather than inequality constraints. Consequently, we shall now convert the integral inequalities (2) and (5) into differential equations by introducing slack variables.
DEFINITION. Let $K_p$ be the set of all pairs of functions $(z, u)$ which map $R$ into $R^n$ such that

(i) $u(\cdot)$ is measurable and $z(\cdot)$ is absolutely continuous;

(ii) $u(t) \ge 0$, $B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)] \ge 0$ for $t \in [0, T]$; (17)

(iii) $z(0) = 0$; (18)

(iv) $\dot z(t) = B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)]$ for $t \in [0, T]$, $= 0$ otherwise. (19)

The quantities $b$, $B$, $C$, $D$ in (17) and (19) are the same as in (2); equation (19) is obtained from (2) by letting

$$z(t) = \int_0^t x(s)\,ds$$

and using condition (3). Because of (19), the second part of condition (17) is equivalent to $\dot z(t) \ge 0$ for $t \in [0, T]$, which corresponds to (1). The pairs $(z, u)$ which are elements of $K_p$ will be called feasible solutions to the primal problem. In terms of the above definitions, we can restate the primal problem as follows:

Primal Problem I

Let the functional $J_p$ be defined by

$$J_p(z, u) = \int_0^T a(t)B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)]\,dt. \tag{20}$$

Find a pair $(z^0, u^0) \in K_p$ such that

$$J_p(z^0, u^0) = \max_{(z,u) \in K_p} J_p(z, u). \tag{21}$$

The function $a(\cdot)$ in (20) is the same as in (7). It is readily seen that (20) defines the same functional as (7) when the constraints (i) through (iv) are taken into account. Any pair $(z, u) \in K_p$ which maximizes $J_p$ will be referred to as an optimal solution to the primal problem.

We now reformulate the dual problem in a similar way.

DEFINITION. Let $K_d$ be the set of all pairs of functions $(w, v)$ which map $R$ into $R^n$ such that

(i) $v(\cdot)$ is measurable and $w(\cdot)$ is absolutely continuous;

(ii) $v(t) \ge 0$, $[v(t) + a(t) - w(t)C - w(t+1)D]B^{-1} \ge 0$ for $t \in [0, T]$; (22)

(iii) $w(T) = 0$; (23)

(iv) $\dot w(t) = [v(t) + a(t) - w(t)C - w(t+1)D]B^{-1}$ for $t \in [0, T]$, $= 0$ otherwise. (24)
Again, the quantities $a$, $B$, $C$, $D$ are the same as in (5), and (24) is obtained from (5) by letting

$$w(t) = -\int_t^T y(s)\,ds$$

and using condition (6). Because of (24), the second half of (22) implies that $\dot w(t) \ge 0$ for $t \in [0, T]$, which is equivalent to (4). We can now restate the dual problem.

Dual Problem I

Let $J_d$ be the functional defined by

$$J_d(w, v) = \int_0^T [v(t) + a(t) - w(t)C - w(t+1)D]B^{-1}b(t)\,dt. \tag{25}$$

Find a pair of functions $(w^0, v^0) \in K_d$ such that

$$J_d(w^0, v^0) = \min_{(w,v) \in K_d} J_d(w, v). \tag{26}$$

Again, the function $b(\cdot)$ in (25) is the same as in (8) and it is clear that equation (25) defines the same functional as equation (8) when the constraints are taken into account. Any pair $(w, v) \in K_d$ which minimizes $J_d$ will be referred to as an optimal solution to the dual problem.

5. PRIMAL APPROACH
We shall now consider the Primal Problem I and, by means of multiplier functions, obtain for it a sufficient condition for the existence of an optimal solution pair $(z, u)$. Then, assuming that an optimal solution exists for the Primal Problem I, we shall make use of Theorem 1 to obtain a sufficient condition for the existence of a solution to the Dual Problem I. In the dual approach we shall reverse the order of this procedure.

Multiplier functions
We begin by discussing differential equations whose solutions, if they exist, will subsequently be used as multiplier functions. Consider the following two differential equations:

$$\dot\lambda(t) = -\{[\lambda(t) - a(t)]B^{-1}\Lambda(t)C + [\lambda(t+1) - a(t+1)]B^{-1}\Lambda(t+1)D\}, \tag{27}$$

with the final condition $\lambda(T) = 0$; and

$$\dot\delta(t) = -\{[\delta(t) - a(t)]\Delta(t)B^{-1}C + [\delta(t+1) - a(t+1)]\Delta(t+1)B^{-1}D\}, \tag{28}$$

also with the final condition $\delta(T) = 0$.

In (27) and (28), $\lambda(t), \delta(t) \in R^n$; $a$, $B$, $C$, $D$ are the quantities appearing in the statement of the primal problem, and the diagonal $n \times n$ matrices $\Lambda(t)$ and $\Delta(t)$ are defined by

$$\Lambda(t) = \mathrm{diag}\big(1\big(\big[(a(t) - \lambda(t))B^{-1}\big]_i\big)\big) \text{ for } t \in [0, T], \qquad = 0 \text{ otherwise}, \tag{29}$$

$$\Delta(t) = \mathrm{diag}\big(1(a_i(t) - \delta_i(t))\big) \text{ for } t \in [0, T], \qquad = 0 \text{ otherwise}. \tag{30}$$
THEOREM 2. If the function $a(\cdot)$ is piecewise continuous on the interval $[0, T]$, then the differential equations (27) and (28) have unique absolutely continuous solutions, defined over the interval $[0, T]$.
Proof. We first consider the differential equation (27) over the interval $[T-1, T]$. Over this interval the second term of the right hand side is zero and (27) has the form

$$\dot\lambda(t) = -[\lambda(t) - a(t)]B^{-1}\Lambda(t)C, \qquad \lambda(T) = 0. \tag{31}$$

The right hand side is piecewise continuous in $t$ and, for $t \in [T-1, T]$, satisfies a Lipschitz condition with respect to $\lambda$; hence (31) has a unique, absolutely continuous solution over $[T-1, T]$. This solution may now be extended to cover the entire interval $[0, T]$. We proceed similarly for equation (28).

Feasible solutions determined by multiplier functions

Whenever the Lagrange multipliers $\lambda$ and $\delta$ exist, they may be used to define a feasible solution to the Primal Problem I, provided certain conditions are satisfied.

THEOREM 3. If the differential equation (27) has a solution $\lambda$, then the differential equation

$$\dot z = B^{-1}\Lambda(t)[b(t) + Cz(t) + Dz(t-1)], \qquad z(0) = 0, \tag{32}$$

which it defines, has a unique solution which will be denoted by $z_\lambda$. Similarly, if the differential equation (28) has a solution $\delta$, then the differential equation

$$\dot z = \Delta(t)B^{-1}[b(t) + Cz(t) + Dz(t-1)], \qquad z(0) = 0, \tag{33}$$

which it defines, has a unique solution which will be denoted by $z_\delta$. The matrices $\Lambda$ and $\Delta$ are defined by (29) and (30) respectively.

Proof. The right hand sides of (32) and (33) are linear in $z$ and, since all the elements of $\Lambda$ and $\Delta$ are bounded Baire functions of measurable functions, they are measurable in $t$ [4], [5]. Hence (32) and (33) each have a unique solution (see [6] p. 97, prob. 1). This completes the proof.

DEFINITION. If (27) has a solution, then we define the $n \times n$ diagonal matrix $\Lambda_1$ by

$$\Lambda_1 + \Lambda = I, \tag{34}$$

where $\Lambda$ is defined by (29) and $I$ is the identity matrix. Similarly, if (28) has a solution, then we define the $n \times n$ diagonal matrix $\Delta_1$ by

$$\Delta_1 + \Delta = I, \tag{35}$$

where $\Delta$ is defined by (30).
DEFINITION. Let $z_\lambda$ be the solution of (32). We define the function $u_\lambda$ by the relation

$$u_\lambda(t) = \Lambda_1(t)[b(t) + Cz_\lambda(t) + Dz_\lambda(t-1)] \text{ for } t \in [0, T], \qquad = 0 \text{ otherwise}. \tag{36}$$

Similarly, let $z_\delta$ be the solution of (33). We define the function $u_\delta$ by the relation

$$u_\delta(t) = B\Delta_1(t)B^{-1}[b(t) + Cz_\delta(t) + Dz_\delta(t-1)] \text{ for } t \in [0, T], \qquad = 0 \text{ otherwise}. \tag{37}$$

It is readily seen that if $\dot z_\lambda(t) \ge 0$ and $u_\lambda(t) \ge 0$ for $t \in [0, T]$, then the pair $(z_\lambda, u_\lambda)$ is a feasible solution to the Primal Problem I. Similarly, if $\dot z_\delta(t) \ge 0$ and $u_\delta(t) \ge 0$ for $t \in [0, T]$, then $(z_\delta, u_\delta)$ is a feasible solution pair to the Primal Problem I.
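The construction of the candidate pair $(z_\lambda, u_\lambda)$ can be carried out numerically. The sketch below does so for an assumed scalar instance not taken from the paper ($n = 1$, $D = 0$, $B = 1$, $a(t) = b(t) = 1$, $C = 0.5$, $T = 1$): equation (27) is integrated backward from $\lambda(T) = 0$, then (32) forward from $z(0) = 0$, and $u_\lambda$ is formed via (36); finally the feasibility conditions $\dot z_\lambda \ge 0$, $u_\lambda \ge 0$ are checked, which by Duality Theorem Ia certify optimality.

```python
import numpy as np

# Assumed illustrative instance: n = 1, D = 0, B = 1,
# a(t) = b(t) = 1, C = 0.5, T = 1.
T, h = 1.0, 1e-3
N = int(T / h)
a = lambda s: 1.0
b = lambda s: 1.0
C = 0.5

# backward Euler sweep for lambda, eq. (27) with D = 0, B = 1:
#   dlambda/dt = (a - lambda) * 1(a - lambda) * C,  lambda(T) = 0
lam = np.zeros(N + 1)
for k in range(N, 0, -1):
    s = k * h
    ind = 1.0 if a(s) - lam[k] > 0 else 0.0          # scalar Lambda(t)
    lam[k - 1] = lam[k] - h * (a(s) - lam[k]) * ind * C

# forward Euler sweep for z_lambda, eq. (32), and u_lambda, eq. (36)
z = np.zeros(N + 1)
zdot = np.zeros(N + 1)
u = np.zeros(N + 1)
for k in range(N):
    s = k * h
    ind = 1.0 if a(s) - lam[k] > 0 else 0.0
    zdot[k] = ind * (b(s) + C * z[k])                # (32) with B = 1
    u[k] = (1.0 - ind) * (b(s) + C * z[k])           # (36): Lambda_1 = 1 - Lambda
    z[k + 1] = z[k] + h * zdot[k]

# feasibility check: if it passes, (z_lambda, u_lambda) is optimal
assert zdot.min() >= 0 and u.min() >= 0
print(z[-1])   # primal cost J_p = z(T), close to 2 (e^{1/2} - 1)
```

For this instance $a(t) - \lambda(t) > 0$ throughout, so $\Lambda \equiv 1$, $u_\lambda \equiv 0$, and the candidate is feasible, hence optimal.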
Formal solution of the Primal Problem

We shall now examine formally the particular case when the matrix $D$ in (19) is the zero matrix, $n = 1$, i.e. all quantities are scalars, and $B > 0$. We shall consider the problem rigorously and in full generality in the next section. In this particular case we are required to maximize

$$J_p(z, u) = \int_0^T a(t)B^{-1}[-u(t) + b(t) + Cz(t)]\,dt \tag{38}$$

subject to the constraints

(i) that $u(\cdot)$ be a measurable function and $z(\cdot)$ be an absolutely continuous function;

(ii) $u(t) \ge 0$, $B^{-1}[-u(t) + b(t) + Cz(t)] \ge 0$ for $t \in [0, T]$; (39)

(iii) $z(0) = 0$; (40)

(iv) $\dot z(t) = B^{-1}[-u(t) + b(t) + Cz(t)]$ for $t \in [0, T]$, $= 0$ otherwise. (41)

We now introduce a multiplier function $\lambda(\cdot)$ and proceed to impose on it conditions which should lead us to a solution. Adjoining (40) and (41) to $J_p$ by means of $\lambda$ we get a new functional $\tilde J_p$ defined by

$$\tilde J_p(z, u) = \int_0^T \{a(t)B^{-1}[-u(t) + b(t) + Cz(t)] + \lambda(t)[\dot z(t) - B^{-1}(-u(t) + b(t) + Cz(t))]\}\,dt. \tag{42}$$

After choosing $\lambda(\cdot)$ we shall maximize the integrand in (42) at each instant $t \in [0, T]$, with respect to $(z, u)$, subject to the constraints (i) and (ii) above. First, introducing the condition $\lambda(T) = 0$ and integrating

$$\int_0^T \lambda(t)\dot z(t)\,dt$$
by parts we convert the integrand in (42) to the form:

$$a(t)B^{-1}[-u(t) + b(t) + Cz(t)] - \dot\lambda(t)z(t) - \lambda(t)B^{-1}[-u(t) + b(t) + Cz(t)]. \tag{43}$$

The terms in the above expression can be grouped together in the following two ways:

$$-[a(t) - \lambda(t)]B^{-1}u(t) + [a(t) - \lambda(t)]B^{-1}b(t) + [a(t) - \lambda(t)]B^{-1}Cz(t) - \dot\lambda(t)z(t), \tag{44}$$

or

$$[a(t) - \lambda(t)]B^{-1}[-u(t) + b(t) + Cz(t)] - \dot\lambda(t)z(t). \tag{45}$$

It may now be seen from (44) that if we set $\dot\lambda(t) = [a(t) - \lambda(t)]B^{-1}C$ for all those $t \in [0, T]$ for which $[a(t) - \lambda(t)]B^{-1} \ge 0$, then for these $t$ (44) is maximized by setting $u(t) = 0$ and letting $z(t)$ be an arbitrary function satisfying (i) and (ii). Similarly, since $B$ is positive it follows from (39) that $[-u(t) + b(t) + Cz(t)] \ge 0$ must be satisfied, and if we set $\dot\lambda(t) = 0$ for all those $t \in [0, T]$ for which $[a(t) - \lambda(t)] < 0$, the expression (45) is maximized by setting $[-u(t) + b(t) + Cz(t)] = 0$. Summarizing these conclusions we obtain formally, provided (ii) is satisfied, a pair of functions which maximize the integrand of (42) at each $t \in [0, T]$ as solutions of the following set of equations defined on the interval $[0, T]$:

$$\dot\lambda(t) = [a(t) - \lambda(t)]B^{-1}\Lambda(t)C, \qquad \lambda(T) = 0, \tag{46}$$

$$B\dot z_\lambda(t) = \Lambda(t)[b(t) + Cz_\lambda(t)], \qquad z_\lambda(0) = 0, \tag{47}$$

$$u_\lambda(t) = \Lambda_1(t)[b(t) + Cz_\lambda(t)]. \tag{48}$$

Thus, if (ii) is satisfied, the pair of functions $(z_\lambda, u_\lambda)$ belongs to the set $K_p$ and hence it also maximizes $J_p$ over $K_p$. The more general forms (27), (32), and (36), and similarly (28), (33), and (37), can be obtained by pursuing essentially the same line of reasoning.

We shall now show rigorously that, provided the multiplier functions $\lambda$ and $\delta$ defined by (27) and (28) exist, the solution pairs $(z_\lambda, u_\lambda)$ and $(z_\delta, u_\delta)$ they determine are optimal if they satisfy the constraints (17), (18) and (19), for the matrix cases $B \ge 0$ and $B^{-1} \ge 0$ respectively.

DUALITY THEOREM Ia. If the multiplier function $\lambda$ determined by (27) exists, if the solution pair $(z_\lambda, u_\lambda)$ which it determines by means of (32) and (36) is feasible, and if all the elements of the $n \times n$ matrix $B$ are nonnegative, then the pair $(z_\lambda, u_\lambda)$ is an optimal solution to the Primal Problem I.
Proof. We are required to maximize the functional

$$J_p(z, u) = \int_0^T a(t)B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)]\,dt$$
subject to $(z, u) \in K_p$. We repeat here the conditions (17), (18), and (19) for functions in $K_p$:

(ii) $u(t) \ge 0$ and $B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)] \ge 0$ for $t \in [0, T]$;

(iii) $z(0) = 0$;

(iv) $\dot z(t) = B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)]$ for $t \in [0, T]$, $= 0$ otherwise.

We now use the multiplier function $\lambda$ given by (27) to adjoin conditions (iii) and (iv) to $J_p$. We obtain a new functional $\tilde J_p$ defined by

$$\tilde J_p(z, u) = \int_0^T \{a(t)B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)] + \lambda(t)(\dot z(t) - B^{-1}[-u(t) + b(t) + Cz(t) + Dz(t-1)])\}\,dt. \tag{49}$$
As in the formal approach, we now convert (49) into a form more suitable for interpretation. First,

$$\int_0^T \lambda(t)\dot z(t)\,dt = -\int_0^T \dot\lambda(t)z(t)\,dt = \int_0^T \{[\lambda(t) - a(t)]B^{-1}\Lambda(t)C + [\lambda(t+1) - a(t+1)]B^{-1}\Lambda(t+1)D\}z(t)\,dt. \tag{50}$$

Since $\Lambda(t) = 0$ for $t \notin [0, T]$, equation (50) becomes

$$\int_0^T \lambda(t)\dot z(t)\,dt = \int_0^T [\lambda(t) - a(t)]B^{-1}\Lambda(t)[Cz(t) + Dz(t-1)]\,dt. \tag{51}$$

Substituting the right hand side of (51) for $\int_0^T \lambda(t)\dot z(t)\,dt$ in (49) and collecting terms we get

$$\tilde J_p(z, u) = \int_0^T [a(t) - \lambda(t)]B^{-1}[-u(t) + b(t) + \Lambda_1(t)(Cz(t) + Dz(t-1))]\,dt, \tag{52}$$
where the matrix $\Lambda_1(t)$ was defined in (34). We shall now maximize $\tilde J_p$ over a set $L_p$ of functions $(z, u)$ satisfying the conditions (i) and (ii) for the set $K_p$, but not necessarily the conditions (iii) and (iv); thus $L_p \supset K_p$. We shall then show that a pair $(z, u)$ which maximizes $\tilde J_p$ over $L_p$ is also in $K_p$. Consequently, this pair maximizes $J_p$ over $K_p$, since $J_p = \tilde J_p$ over $K_p$. Since all the elements of $B$ are nonnegative, it is clear that the second part of condition (ii) implies the condition

$$-u(t) + b(t) + Cz(t) + Dz(t-1) \ge 0 \quad \text{for } t \in [0, T]. \tag{53}$$
Rewriting (52) so as to exhibit the effect of this condition, we obtain

$$\tilde J_p(z, u) = \int_0^T [a(t) - \lambda(t)]B^{-1}\{\Lambda(t)[-u(t) + b(t)] + \Lambda_1(t)[-u(t) + b(t) + Cz(t) + Dz(t-1)]\}\,dt. \tag{54}$$

Now, from (29) and (34), the vector $[a(t) - \lambda(t)]B^{-1}\Lambda(t) \ge 0$ and the vector $[a(t) - \lambda(t)]B^{-1}\Lambda_1(t) \le 0$ for $t \in [0, T]$. Hence, since $u(t) \ge 0$ and because of (53), the integrand, and consequently also $\tilde J_p$, is maximized by any pair of functions $(z, u) \in L_p$ which satisfy the condition

$$u(t) = \Lambda_1(t)(b(t) + Cz(t) + Dz(t-1)) \quad \text{for } t \in [0, T]. \tag{55}$$
Now the pair $(z_\lambda, u_\lambda) \in L_p$ by hypothesis and satisfies (55); hence it maximizes $\tilde J_p$ over $L_p$. However, the pair $(z_\lambda, u_\lambda) \in K_p$, and $J_p(z, u) = \tilde J_p(z, u)$ for all $(z, u) \in K_p$; thus $(z_\lambda, u_\lambda)$ also maximizes $J_p$ over $K_p$. This completes the proof.

Remark. It should be pointed out that $(z_\lambda, u_\lambda)$ is not necessarily the unique feasible solution pair maximizing the functional $J_p$. Thus, suppose that $a(t) - \lambda(t) = 0$ for $t \in I$, a nonzero subinterval of $[0, T]$. Then any solution pair $(z', u') \in K_p$ such that $z'(t) = z_\lambda(t)$, $u'(t) = u_\lambda(t)$ for $t \notin I$ will also maximize the integrand of (54) and hence will also be an optimal solution pair.

DUALITY THEOREM IIa. If the Lagrange multiplier $\delta$ determined by (28) exists, if the solution pair $(z_\delta, u_\delta)$ which it determines by means of (33) and (37) is feasible, and if all the elements of the $n \times n$ matrix $B^{-1}$ are nonnegative, then the pair $(z_\delta, u_\delta)$ is an optimal solution to the Primal Problem I.

Proof. The proof of this theorem is carried out in a manner similar to the one used for the proof of the preceding theorem and will therefore be omitted.

Whenever the multiplier functions $\lambda$ or $\delta$ or both exist, they may also define a solution to the Dual Problem I. The manner in which this may happen is summed up in the following two theorems.

DUALITY THEOREM Ib. Suppose the multiplier function $\lambda$ defined by (27) exists and that it defines an optimal solution pair $(z_\lambda, u_\lambda)$ by means of (32) and (36) for Primal Problem I. Let the functions $(w_\lambda, v_\lambda)$ be determined by $\lambda$ as solutions of the equations

$$\dot w_\lambda(t) = [a(t) - w_\lambda(t)C - w_\lambda(t+1)D]\Lambda(t)B^{-1}, \qquad w_\lambda(T) = 0, \tag{56}$$

$$v_\lambda(t) = -[a(t) - w_\lambda(t)C - w_\lambda(t+1)D]\Lambda_1(t). \tag{57}$$

If the pair $(w_\lambda, v_\lambda)$ satisfies the condition

$$\dot w_\lambda(t) \ge 0, \quad v_\lambda(t) \ge 0 \quad \text{for all } t \in [0, T], \tag{58}$$

and the $n \times n$ matrix $B^{-1}$ commutes with the matrix $\Lambda(t)$ in multiplication, then the pair $(w_\lambda, v_\lambda)$ is an optimal solution to Dual Problem I, i.e. $(w_\lambda, v_\lambda) \in K_d$ and

$$J_d(w_\lambda, v_\lambda) = \min_{(w,v) \in K_d} J_d(w, v).$$
Proof. The right hand side of (56) is linear in $w_\lambda$ and measurable in $t$, and hence (56) has a unique solution, which in turn completely defines $v_\lambda$ by means of (57). Hence, if (58) is satisfied, the pair $(w_\lambda, v_\lambda) \in K_d$. We now make use of Theorem 1 to prove that this pair is optimal for the dual problem by showing that it yields the same cost as the optimal solution pair $(z_\lambda, u_\lambda)$ for the primal problem. Since by assumption the matrices $B^{-1}$ and $\Lambda(t)$ commute, it follows from (32) and (56) that

$$\dot w_\lambda(t)\Lambda(t) = \dot w_\lambda(t), \qquad \Lambda(t)\dot z_\lambda(t) = \dot z_\lambda(t). \tag{59}$$

Hence, making
use of (20), (32) and (59) we get

$$J_p(z_\lambda, u_\lambda) = \int_0^T a(t)\dot z_\lambda(t)\,dt = \int_0^T a(t)\Lambda(t)\dot z_\lambda(t)\,dt. \tag{60}$$

Now, substituting from (56) and making use of (59) we get

$$J_p(z_\lambda, u_\lambda) = \int_0^T [\dot w_\lambda(t)B\dot z_\lambda(t) + w_\lambda(t)C\dot z_\lambda(t) + w_\lambda(t+1)D\dot z_\lambda(t)]\,dt. \tag{61}$$
Similarly, making use of (25), (32), (56), and (59) we get

$$J_d(w_\lambda, v_\lambda) = \int_0^T [\dot w_\lambda(t)B\dot z_\lambda(t) - \dot w_\lambda(t)Cz_\lambda(t) - \dot w_\lambda(t)Dz_\lambda(t-1)]\,dt. \tag{62}$$
An integration by parts of the second and third terms in the right hand side of (62) shows that

$$\int_0^T \dot w_\lambda(t)Cz_\lambda(t)\,dt = -\int_0^T w_\lambda(t)C\dot z_\lambda(t)\,dt, \qquad \int_0^T \dot w_\lambda(t)Dz_\lambda(t-1)\,dt = -\int_0^T w_\lambda(t+1)D\dot z_\lambda(t)\,dt.$$

Hence $J_d(w_\lambda, v_\lambda) = J_p(z_\lambda, u_\lambda)$, which shows that the pair $(w_\lambda, v_\lambda)$ is indeed optimal for the dual problem if (58) is satisfied.
DUALITY THEOREM IIb. Suppose that the multiplier function $\delta$ defined by (28) exists and that it defines an optimal solution pair $(z_\delta, u_\delta)$ by means of (33) and (37) for Primal Problem I. Let the functions $(w_\delta, v_\delta)$ be determined by $\delta$ as solutions of the equations

$$\dot w_\delta(t) = [a(t) - w_\delta(t)C - w_\delta(t+1)D]B^{-1}\Delta(t), \qquad w_\delta(T) = 0, \tag{63}$$

$$v_\delta(t) = -[a(t) - w_\delta(t)C - w_\delta(t+1)D]B^{-1}\Delta_1(t)B. \tag{64}$$

If the pair $(w_\delta, v_\delta)$ satisfies the condition

$$\dot w_\delta(t) \ge 0, \quad v_\delta(t) \ge 0 \quad \text{for all } t \in [0, T],$$

and the matrices $B^{-1}$ and $\Delta(t)$ commute in multiplication, then the pair $(w_\delta, v_\delta)$ is an optimal solution to Dual Problem I.

The proof of this theorem is completely analogous to that given above and will therefore be omitted.
6. DUAL APPROACH
Duality Theorems Ib and IIb give constructive sufficient conditions for the existence of an optimal solution to the dual problem when a particular optimal solution to the primal problem exists. Such a solution to the primal problem may not exist (e.g. see [2]), and it is therefore preferable to construct a sufficient condition for the existence of an optimal solution to the dual problem in a manner not predicated upon the behavior of the primal problem. We begin with a formal treatment of a particular case of the dual problem.

Formal solution of the dual problem

We shall now examine Dual Problem I formally, following the same lines of reasoning that were used for Primal Problem I. We restrict ourselves again to the simpler case $D = 0$, $B > 0$, $n = 1$, i.e. we assume that all quantities are scalars. Thus we wish to minimize

$$J_d(w, v) = \int_0^T [v(t) + a(t) - w(t)C]B^{-1}b(t)\,dt \tag{65}$$
subject to the constraints that

(i) $v(\cdot)$ be measurable and $w(\cdot)$ be absolutely continuous;

(ii) $v(t) \ge 0$, $[v(t) + a(t) - w(t)C]B^{-1} \ge 0$; (66)

(iii) $w(T) = 0$; (67)

(iv) $\dot w(t) = [v(t) + a(t) - w(t)C]B^{-1}$ for $t \in [0, T]$, $= 0$ otherwise. (68)

We now introduce a multiplier function $\mu(\cdot)$ and proceed to impose on it conditions which should make the problem readily solvable. Adjoining (67) and (68) to $J_d$ by means of the multiplier $\mu$ we get the new functional

$$\tilde J_d(w, v) = \int_0^T \{[v(t) + a(t) - w(t)C]B^{-1}b(t) + (\dot w(t) - [v(t) + a(t) - w(t)C]B^{-1})\mu(t)\}\,dt. \tag{69}$$

By imposing the condition $\mu(0) = 0$, integrating

$$\int_0^T \dot w(t)\mu(t)\,dt$$

by parts, and rearranging terms, we can put the integrand of (69) into either one of the following two forms:

$$v(t)B^{-1}[b(t) - \mu(t)] + a(t)B^{-1}[b(t) - \mu(t)] - w(t)CB^{-1}[b(t) - \mu(t)] - w(t)\dot\mu(t), \tag{70}$$

or

$$[v(t) + a(t) - w(t)C]B^{-1}[b(t) - \mu(t)] - w(t)\dot\mu(t). \tag{71}$$
Now, if we follow the approach used in the primal case and let $\dot\mu(t) = -CB^{-1}[b(t) - \mu(t)]$ for all $t$ such that $B^{-1}[b(t) - \mu(t)] \ge 0$, then (70) is minimized by letting $v = 0$. However, if we examine (71) we see that when $B^{-1}[b(t) - \mu(t)] < 0$ the integrand cannot be minimized in this manner (since $[v(t) + a(t) - w(t)C]B^{-1} \ge 0$). Hence we would have to require $B^{-1}[b(t) - \mu(t)] \ge 0$ for all $t \in [0, T]$. Note, however, that this would also make the slack variable $v = 0$ for all $t \in [0, T]$. It appears that a less restrictive assumption can be made as follows. Suppose that $w_\mu(\cdot)$ and $v_\mu(\cdot)$ are the optimal solution pair which we are going to obtain. Now let $\dot\mu(t) = -CB^{-1}[b(t) - \mu(t)]$ for all those $t \in [0, T]$ for which $[a(t) - w_\mu(t)C] \ge 0$, and $\dot\mu(t) = 0$ for all those $t \in [0, T]$ for which $[a(t) - w_\mu(t)C] < 0$. Imposing the additional requirement that $B^{-1}[b(t) - \mu(t)] \ge 0$, $t \in [0, T]$, the expression (70) is minimized by setting $v(t) = 0$ whenever $[a(t) - w_\mu(t)C] \ge 0$ and, when $[a(t) - w_\mu(t)C] < 0$, by setting $v(t) = -[a(t) - w_\mu(t)C]$. These conclusions lead to the following set of equations:

$$\dot w_\mu(t) = [a(t) - w_\mu(t)C]M(t)B^{-1}, \qquad w_\mu(T) = 0, \tag{72}$$

$$v_\mu(t) = -[a(t) - w_\mu(t)C]M_1(t), \tag{73}$$

$$\dot\mu(t) = -CM(t)B^{-1}[b(t) - \mu(t)], \qquad \mu(0) = 0, \tag{74}$$

where $M(t) = 1(a(t) - w_\mu(t)C)$ for $t \in [0, T]$ and $M(t) = 0$ otherwise, and $M_1(t)$ is defined by $M(t) + M_1(t) = 1$. Having completed the formal examination, we proceed to show rigorously under what conditions the above reasoning does indeed lead to a solution of the matrix dual problem.
Feasible solutions and multiplier functions

It is clear from the preceding that optimal solution pairs can, possibly, be obtained as solutions of the following set of equations (depending on whether $B \ge 0$ or $B^{-1} \ge 0$):

$$\dot w_\mu(t) = [a(t) - w_\mu(t)C - w_\mu(t+1)D]M(t)B^{-1}, \qquad w_\mu(T) = 0; \tag{75}$$

$$\dot w_\nu(t) = [a(t) - w_\nu(t)C - w_\nu(t+1)D]B^{-1}N(t), \qquad w_\nu(T) = 0. \tag{76}$$

The quantities $a$, $B$, $C$, $D$ appearing in (75) and (76) are those appearing in the statement of the dual problem; the functions $w_\mu$ and $w_\nu$ map $R$ into $R^n$, and the $n \times n$ diagonal matrices $M(t)$ and $N(t)$ are defined by

$$M(t) = \mathrm{diag}\big(1(a_i(t) - w_\mu(t)C_i - w_\mu(t+1)D_i)\big) \text{ for } t \in [0, T], \qquad = 0 \text{ otherwise}; \tag{77}$$

$$N(t) = \mathrm{diag}\big(1\big(\big[(a(t) - w_\nu(t)C - w_\nu(t+1)D)B^{-1}\big]_i\big)\big) \text{ for } t \in [0, T], \qquad = 0 \text{ otherwise}. \tag{78}$$
THEOREM 4. If the function $a(\cdot)$ which maps $R$ into $R^n$ is piecewise continuous on the interval $[0, T]$, then the differential equations (75) and (76) have unique, absolutely continuous solutions.

This theorem may be proved in the same manner as Theorem 2; hence a detailed proof is omitted.

Whenever the functions $w_\mu$ and $w_\nu$ exist, they can be used to define corresponding functions $v_\mu$, $v_\nu$ by means of the following equations:

$$v_\mu(t) = -[a(t) - w_\mu(t)C - w_\mu(t+1)D]M_1(t), \tag{79}$$

$$v_\nu(t) = -[a(t) - w_\nu(t)C - w_\nu(t+1)D]B^{-1}N_1(t)B, \tag{80}$$

where $M_1$, $N_1$ are $n \times n$ matrices defined by $M + M_1 = I$, $N + N_1 = I$. Note that if $\dot w_\mu \ge 0$ and $v_\mu \ge 0$ for $t \in [0, T]$, then $(w_\mu, v_\mu)$ form a feasible solution pair. Similarly, if $\dot w_\nu \ge 0$ and $v_\nu \ge 0$, then $(w_\nu, v_\nu)$ is a feasible solution pair.

The functions $w_\mu$ and $w_\nu$ can also be used to define multiplier functions $\mu$ and $\nu$ by means of the following equations:

$$\dot\mu(t) = CM(t)B^{-1}[\mu(t) - b(t)] + DM(t-1)B^{-1}[\mu(t-1) - b(t-1)], \qquad \mu(0) = 0; \tag{81}$$

$$\dot\nu(t) = CB^{-1}N(t)[\nu(t) - b(t)] + DB^{-1}N(t-1)[\nu(t-1) - b(t-1)], \qquad \nu(0) = 0. \tag{82}$$

THEOREM 5. If the differential equation (75) has a unique solution $w_\mu$, then the differential equation (81) which it defines has a unique solution $\mu$. Similarly, if the differential equation (76) has a unique solution $w_\nu$, then the differential equation (82) which it defines has a unique solution $\nu$.

This theorem may be proved in the same manner as Theorem 3 and hence the proof will be omitted. We now give a theorem showing under what circumstances the formal treatment of the scalar case leads to a solution of the vector form of Dual Problem I.
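The dual construction of (75), (79) and (81) can likewise be sketched numerically. The data below are assumed for illustration and are not from the paper ($n = 1$, $D = 0$, $B = 1$, $a(t) = b(t) = 1$, $C = 0.5$, $T = 1$): $w_\mu$ is integrated backward from $w_\mu(T) = 0$, $\mu$ forward from $\mu(0) = 0$, and feasibility of $(w_\mu, v_\mu)$ together with the condition $B^{-1}[b(t) - \mu(t)] \ge 0$ is checked.

```python
import numpy as np

# Assumed illustrative instance: n = 1, D = 0, B = 1,
# a(t) = b(t) = 1, C = 0.5, T = 1.
T, h = 1.0, 1e-3
N = int(T / h)
C, a_const, b_const = 0.5, 1.0, 1.0

# backward Euler sweep for w_mu, eq. (75):  dw/dt = (a - wC) 1(a - wC)
w = np.zeros(N + 1)
wdot = np.zeros(N + 1)
for k in range(N, 0, -1):
    M = 1.0 if a_const - w[k] * C > 0 else 0.0       # scalar M(t)
    wdot[k] = (a_const - w[k] * C) * M
    w[k - 1] = w[k] - h * wdot[k]

# forward Euler sweep for mu, eq. (81):  dmu/dt = C M (mu - b)
mu = np.zeros(N + 1)
for k in range(N):
    M = 1.0 if a_const - w[k] * C > 0 else 0.0
    mu[k + 1] = mu[k] + h * C * M * (mu[k] - b_const)

# v_mu from (79): zero on the set where a - wC > 0, else -(a - wC)
Mvec = (a_const - C * w > 0).astype(float)
v = -(a_const - C * w) * (1.0 - Mvec)

assert wdot.min() >= 0 and v.min() >= 0      # (w_mu, v_mu) feasible
assert (b_const - mu).min() >= 0             # B^{-1}[b - mu] >= 0 with B = 1

# dual cost: here v = 0, so J_d = integral (a - wC) dt = integral dw = -w(0),
# which matches the primal cost of the same instance, as Theorem 1 requires
print(-w[0])
```

For this instance $a - w_\mu C > 0$ throughout, so $M \equiv 1$, $v_\mu \equiv 0$, and the dual cost telescopes to $-w_\mu(0) \approx 2(e^{1/2} - 1)$.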
DUALITY THEOREM IIIa. If the solution pair $(w_\mu, v_\mu)$ determined by (75) and (79) exists and is feasible, if all the elements of the $n \times n$ matrix $B$ are nonnegative, and if the multiplier function $\mu$ determined by (81) satisfies the condition

$$B^{-1}[-\mu(t) + b(t)] \ge 0 \quad \text{for } t \in [0, T], \tag{83}$$

then the solution pair $(w_\mu, v_\mu)$ is optimal for the Dual Problem I.

Proof. We are required to minimize the functional

$$J_d(w, v) = \int_0^T [v(t) + a(t) - w(t)C - w(t+1)D]B^{-1}b(t)\,dt$$

subject to $(w, v) \in K_d$. We repeat here the last three conditions for functions in $K_d$:
(ii) $v(t) \ge 0$, $[v(t) + a(t) - w(t)C - w(t+1)D]B^{-1} \ge 0$ for $t \in [0, T]$;

(iii) $w(T) = 0$;

(iv) $\dot w(t) = [v(t) + a(t) - w(t)C - w(t+1)D]B^{-1}$ for $t \in [0, T]$, $= 0$ otherwise.

We now use the multiplier function $\mu$ determined by (81) to adjoin the conditions (iii) and (iv) to $J_d$, thus forming the new functional

$$\tilde J_d(w, v) = \int_0^T \{[v(t) + a(t) - w(t)C - w(t+1)D]B^{-1}b(t) + (\dot w(t) - [v(t) + a(t) - w(t)C - w(t+1)D]B^{-1})\mu(t)\}\,dt. \tag{84}$$
Integrating

$$\int_0^T \dot w(t)\mu(t)\,dt$$

by parts, substituting for $\dot\mu(t)$ from (81), and making use of the assumption that $M(t) = 0$ for $t$ not in $[0, T]$, we finally arrive at the following form for the functional $\tilde J_d$:

$$\tilde J_d(w, v) = \int_0^T \{[v(t) + a(t)]M(t)B^{-1}[b(t) - \mu(t)] + [v(t) + a(t) - w(t)C - w(t+1)D]M_1(t)B^{-1}[b(t) - \mu(t)]\}\,dt. \tag{85}$$
We now minimize $\tilde J_d$ over a set $L_d$ of pairs of functions $(w, v)$ which satisfy conditions (i) and (ii) for functions in $K_d$, but not necessarily the conditions (iii) and (iv); thus $L_d \supset K_d$. Since all the elements of $B$ are nonnegative by assumption, it follows from (ii) that

$$v(t) + a(t) - w(t)C - w(t+1)D \ge 0 \quad \text{for } t \in [0, T], \tag{86}$$

and hence, since by assumption $B^{-1}[b(t) - \mu(t)] \ge 0$ for $t \in [0, T]$, the integrand, and consequently $\tilde J_d$, are minimized by any pair of functions $(w, v) \in L_d$ satisfying

$$v(t) = -[a(t) - w(t)C - w(t+1)D]M_1(t) \quad \text{for } t \in [0, T].$$
We recognize that the pair $(w_\mu, v_\mu) \in L_d$ by assumption, and that it satisfies this condition; hence it minimizes $\tilde J_d$ over $L_d$. However, $(w_\mu, v_\mu) \in K_d$ and, since $\tilde J_d = J_d$ for all $(w, v)$ in $K_d$, it follows that $(w_\mu, v_\mu)$ minimizes $J_d$ over $K_d$ also, which completes the proof.

We now state without proof the analogous theorem for the case $B^{-1} \ge 0$.

DUALITY THEOREM IVa. If the solution pair $(w_\nu, v_\nu)$ determined by (76) and (80) exists and is feasible, if all the elements of the matrix $B^{-1}$ are nonnegative, and if the multiplier function $\nu$ determined by (82) satisfies the condition

$$b(t) - \nu(t) \ge 0 \quad \text{for all } t \in [0, T], \tag{87}$$

then the solution pair $(w_\nu, v_\nu)$ is optimal for Dual Problem I.
The following two theorems can be proved in the same manner as Duality Theorems Ib and IIb; hence their proofs will be omitted. Their purpose is to establish optimal solutions to the primal problem when optimal solutions to the dual problem are given by the above two theorems.
DUALITY THEOREM IIIb. Suppose that the multiplier function μ defined by (81) exists and that the related solution of the dual problem (w_μ, v_μ) given by (75) and (79) is optimal. Let the functions (z_μ, u_μ) be defined as solutions of the following equations:

    ż_μ(t) = B⁻¹M(t)[b(t) + Cz_μ(t) + Dz_μ(t−1)];        z_μ(0) = 0        (88)
    u_μ(t) = M_c(t)[b(t) + Cz_μ(t) + Dz_μ(t−1)]        (89)

If the pair (z_μ, u_μ) satisfies ż_μ(t) ≥ 0, u_μ(t) ≥ 0 for t ∈ [0, T] and the matrices B⁻¹ and M(t) commute in multiplication, then (z_μ, u_μ) is an optimal solution pair of the primal problem.
DUALITY THEOREM IVb. Suppose that the multiplier function ν defined by (82) exists and that the related solution to the dual problem (w_ν, v_ν) given by (76) and (80) is optimal. Let the functions (z_ν, u_ν) be defined as the solutions of the following equations:

    ż_ν(t) = N(t)B⁻¹[b(t) + Cz_ν(t) + Dz_ν(t−1)];        z_ν(0) = 0        (90)
    u_ν(t) = BN_c(t)B⁻¹[b(t) + Cz_ν(t) + Dz_ν(t−1)]        (91)

If the pair (z_ν, u_ν) satisfies ż_ν(t) ≥ 0, u_ν(t) ≥ 0 for t ∈ [0, T] and the matrices B⁻¹ and N(t) commute in multiplication, then (z_ν, u_ν) is an optimal solution pair for the primal problem.
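The difference-differential equations in the two theorems above can be integrated numerically by stepping forward in time, with the stored trajectory supplying the delayed term z(t−1). The following sketch (not from the paper; the matrices N, B, C, D and the function b below are illustrative placeholders supplied by the caller) treats an equation of the form ż(t) = N(t)B⁻¹[b(t) + Cz(t) + Dz(t−1)], z(0) = 0, with z(t) taken as 0 for t < 0:

```python
import numpy as np

# Sketch (not the authors' algorithm verbatim): forward-Euler integration of
# a delay differential equation of the form appearing in Duality Theorem IVb,
#     z'(t) = N(t) B^{-1} [b(t) + C z(t) + D z(t - 1)],   z(0) = 0,
# with z(t) = 0 for t < 0.  N, Binv, C, D, b are supplied by the caller.
def integrate_delay_ode(N, Binv, C, D, b, T=2.0, h=1e-3):
    steps = int(round(T / h))
    delay_steps = int(round(1.0 / h))      # the delay is one time unit
    n = C.shape[0]
    z = np.zeros((steps + 1, n))           # z[k] approximates z(k * h)
    for k in range(steps):
        # delayed term z(t - 1), taken as 0 before the delay has elapsed
        z_del = z[k - delay_steps] if k >= delay_steps else np.zeros(n)
        rhs = N(k * h) @ Binv @ (b(k * h) + C @ z[k] + D @ z_del)
        z[k + 1] = z[k] + h * rhs
    return z

# Illustrative scalar instance (placeholder data, not from the paper):
# N = I, B = I, C = 0, D = 1, b = 1 gives z'(t) = 1 + z(t - 1), whose exact
# solution is z(t) = t on [0, 1] and z(t) = t + (t - 1)**2 / 2 on [1, 2].
z = integrate_delay_ode(N=lambda t: np.eye(1), Binv=np.eye(1),
                        C=np.zeros((1, 1)), D=np.eye(1),
                        b=lambda t: np.ones(1))
print(z[-1])  # close to 2.5 (= 2 + 1/2)
```

On [0, 1] no delayed information is needed, so the equation can be advanced segment by segment, which is the usual method of steps for delay equations.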
7. A SPECIAL CASE

For the case where D is the zero matrix and B the identity matrix, we can readily compare the results obtained here with the results of [2]. For this case, the results of [2] guarantee the existence of solutions to the primal and dual problems if
(1) The elements of C and the components of b(t) are nonnegative for all t ∈ [0, T];
(2) The components of a(t) and b(t) are continuous over [0, T].
We shall show the following:

THEOREM 9. Let B = I, D = 0, and let all the components of a(·) be piecewise continuous functions of time. If all the components of C and b(t) are non-negative for all t in the interval [0, T], then all the hypotheses of Duality Theorems IIIa and IIIb are satisfied and the primal and dual problems have solutions.

Proof. Since all the components of a(·) are piecewise continuous, it follows from Theorem 4 that (75) has a solution. The fact that the components of C and of b(·) are nonnegative ensures that the solution pairs (z_μ, u_μ) and (w_μ, v_μ) are feasible and that (83) holds. (Note z_μ = z_ν when B = I.)
(i) The above case is a special case of the problem considered by Tyndall [2]. For this special case, however, we have not only relaxed the conditions for the existence of solutions obtained in [2], but we have actually obtained the solutions. It should, however, again be emphasized that the results of [2] apply to a larger class of problems. We must stress again that even for the case B = I, the existence of an optimal solution to the primal problem and the existence of a feasible solution to the dual problem do not guarantee the existence of an optimal solution to the dual problem, and vice versa. This fact is contrary to the analogous situation in ordinary linear programming, where the mere existence of feasible solutions to the primal and dual problems guarantees the existence of optimal solutions to both problems. This matter, together with an appropriate illustrative example, is further discussed by Tyndall in [2].

8. CONCLUDING REMARKS
Computational aspects

There are no unusual computational difficulties involved in calculating the solutions to the differential equations encountered previously. The procedure one would follow, for example, in the application of Duality Theorem Ia would be first to integrate equation (27) backward in time to obtain λ(t). From this, (29) is used to obtain the matrix Λ(t) corresponding to λ(t), and the solution to (32) is then obtained by integrating forward in time. If the solutions to (32) and (36) are feasible, then they are optimal. A similar procedure is used in applying the other duality theorems. Finally, it should be pointed out that if the solutions obtained are not feasible, and hence not optimal, then optimal solutions have to be sought by other means which, mostly, still remain to be discovered.

Pontryagin's Maximum Principle

The reformulated problem is seen to be essentially a variational one. It cannot, however, even for the case without delay, be directly treated by Pontryagin's maximum principle [7, 8], owing to the nature of the constraints. In addition, the maximum principle is concerned only with necessary conditions for optimal solutions, while we are concerned here with obtaining sufficient conditions.

Calculus of Variations

The reformulated problem can, however, for the non-delay case, be partially treated by the classical calculus of variations, keeping in mind that the conditions one might obtain are only necessary. Constraints such as (17) are handled by the method of Valentine [9] as discussed in [10]. However, to be handled by the classical theory these constraints must satisfy two conditions, which we repeat here from [10] in terms of the notation of (17). We define a 2n-component vector-valued function of t, z(t), and u(t) (we will assume D = 0 and B = I, the identity matrix):

    r(t, z(t), u(t)) = (u₁(t), ..., u_n(t), −u₁(t) + b₁(t) + C₁z(t), ..., −u_n(t) + b_n(t) + C_nz(t))

where C_i denotes the ith row of C. Then the constraints appear as

    r(t, z(t), u(t)) ≥ 0
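The backward-forward procedure described under Computational aspects above can be sketched in code. Equations (27), (29), and (32) are not reproduced in this section, so the right-hand sides below are hypothetical stand-ins supplied by the caller, not the paper's actual equations:

```python
import numpy as np

# Sketch of the two-pass procedure: integrate the multiplier equation
# backward from t = T, then the candidate primal solution forward from
# t = 0.  adjoint_rhs and state_rhs are placeholder callables.
def backward_then_forward(adjoint_rhs, state_rhs, lam_T, T=1.0, h=1e-3):
    steps = int(round(T / h))
    n = len(lam_T)
    # Pass 1: backward Euler sweep from t = T (the analogue of obtaining
    # lambda(t) from equation (27)).
    lam = np.zeros((steps + 1, n))
    lam[steps] = lam_T
    for k in range(steps, 0, -1):
        lam[k - 1] = lam[k] - h * adjoint_rhs(k * h, lam[k])
    # Pass 2: forward Euler sweep from t = 0 (the analogue of solving
    # equation (32)), using the stored multipliers.
    z = np.zeros((steps + 1, n))
    for k in range(steps):
        z[k + 1] = z[k] + h * state_rhs(k * h, z[k], lam[k])
    return lam, z

# Toy instance: lambda'(t) = -lambda(t) with lambda(1) = 1 gives
# lambda(t) = e^(1 - t); z'(t) = lambda(t) with z(0) = 0 gives z(1) = e - 1.
lam, z = backward_then_forward(adjoint_rhs=lambda t, l: -l,
                               state_rhs=lambda t, z, l: l,
                               lam_T=np.ones(1))
print(lam[0], z[-1])  # close to e and e - 1
```

If the candidate solution produced by the forward pass is feasible, the duality theorems certify it as optimal; otherwise, as noted above, other means must be sought.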
To be handled by the classical theory these constraints must satisfy the following conditions:
(i) At most n components of r can vanish for any t ∈ [0, T] on any solution (z, u) to the primal problem to which the necessary conditions are applicable;
(ii) On any solution (z, u) to the primal problem to which the necessary conditions are applicable, the matrix (∂r_i/∂u_j), where i ranges over those indices for which r_i = 0, has maximum rank.

It is not difficult to construct examples where the first of these two conditions is violated and yet the sufficiency conditions of Duality Theorem Ia are satisfied. To illustrate this, consider the problem (n = 1), in the primal problem notation,

    b(t) = 0,    a(t) = 1,    C = 1,    D = 0,    B = 1,    0 ≤ t ≤ 2

with [0, 2] the interval of interest. The solution to this problem from Duality Theorem Ia is readily obtained as

    u(t) = 0,        0 ≤ t ≤ 2
    z(t) = 0,        0 ≤ t < 1
         = e^(t−1) − 1,        1 ≤ t ≤ 2

For this problem r is a two-component vector, and on the interval [0, 1] both components vanish, violating condition (i).

In summary, we have approached the continuous linear programming problem from a point of view rather different from the one found in previous treatments. Furthermore, we have extended the problem to include systems with time-delay constraints. Our method is constructive: whenever optimal solutions can be shown to exist they can also be computed without any difficulty. Finally, for the method presented to be applicable it is not necessary that both the primal and dual problems have solutions simultaneously.

In Remembrance. The authors would like to express their indebtedness to the late Professor EDMUND EISENBERG for many stimulating and fruitful discussions concerning the field of mathematical programming.
REFERENCES
[1] R. A. BELLMAN, Dynamic Programming. Princeton University Press (1957).
[2] W. F. TYNDALL, "A duality theorem for a class of continuous linear programming problems", Tech. Rept. No. 1, Dept. of Mathematics, Brown University (1963).
[3] R. A. BELLMAN and R. S. LEHMAN, Studies in Bottleneck Problems in Production Processes, Parts I and II, p. 492. Rand Corp. (1954).
[4] G. HADLEY, Linear Programming. Addison-Wesley (1962).
[5] P. R. HALMOS, Measure Theory. D. Van Nostrand (1950).
[6] E. A. CODDINGTON and N. LEVINSON, Theory of Ordinary Differential Equations. McGraw-Hill (1955).
[7] L. S. PONTRYAGIN, The Theory of Optimal Processes. Interscience (1962).
[8] A. LARSEN, "Optimal control and the calculus of variations", Electronics Research Lab., Univ. of California, Technical Report No. 462, July (1962).
[9] F. A. VALENTINE, "The problem of Lagrange with differential inequalities as added side conditions", Dissertation, Dept. of Mathematics, University of Chicago, Chicago, Illinois (1937).
[10] L. D. BERKOVITZ, J. Math. Analysis Applic. 3 (1961).
APPENDIX

An example

Examples of continuous linear programming problems occurring in practice are numerous [1-3]. Presented here is a variation of a problem treated in [2]. Consider a steel mill whose rate of steel production at time t is given by ż₂(t). Let ż₁(t) denote the rate of stockpiling of produced steel. The rate of production of steel is limited by the initial capacity of the mill, b₂ (= 1), plus the time integral of the rate ż₂(t) − ż₁(t) at which produced steel is allocated to increase the capacity of the plant. Since the rate of stockpiling of steel can never be greater than the rate of production, we have the constraint that ż₁(t) − ż₂(t) can never be positive. Over the time interval [0, 1] it is desired to maximize the net output of steel, the time integral of ż₁(t). Stating the above as equations we have

    ż₁(t) − ż₂(t) ≤ 0,        z₁(0) = 0
    ż₂(t) ≤ 1 + z₂(t) − z₁(t),        z₂(0) = 0

Under the assumptions that steel will never be destroyed and that steel once stockpiled cannot be used to increase the capacity of the plant we have

    ż₁(t) ≥ 0,        ż₂(t) ≥ 0        for t ∈ [0, 1]

In the vector notation of Primal Problem I we then have

    Bż(t) ≤ b(t) + Cz(t),        z(0) = 0

where

    B = | 1  −1 |        C = |  0  0 |
        | 0   1 |,           | −1  1 |,

    b(t) = (0, 1),        a(t) = (1, 0)

To solve this problem we shall make use of Duality Theorem IIa.
The inverse of the matrix B is

    B⁻¹ = | 1  1 |
          | 0  1 |

and satisfies the hypothesis of the theorem that all the elements of B⁻¹ be non-negative. It is readily verified that

    ω₁(t) = 1 − t,        ω₂(t) = t,        t ∈ [0, 1]

satisfy equation (28) for this case. The solution to equation (33) is verified to be

    z₁(t) = t
    z₂(t) = t

and u(t) = 0. Also, the solutions are feasible and consequently (z, u) is the solution to the problem. The physical interpretation is that, at least over the interval [0, 1], it is to the greatest advantage to stockpile directly all of the produced steel, and to use none of it to increase the capacity of the plant.
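The claimed optimum can be checked numerically. The sketch below (using the matrices B and C written out from the two scalar constraints of the example) verifies that z₁(t) = z₂(t) = t satisfies Bż(t) ≤ b(t) + Cz(t) on [0, 1] and yields a net output of 1:

```python
import numpy as np

# Numerical check of the appendix example.  The claimed optimal trajectory
# is z1(t) = z2(t) = t with u(t) = 0.
B = np.array([[1.0, -1.0],     # row 1:  z1' - z2' <= 0
              [0.0,  1.0]])    # row 2:  z2'       <= 1 + z2 - z1
C = np.array([[0.0,  0.0],
              [-1.0, 1.0]])
b = np.array([0.0, 1.0])
a = np.array([1.0, 0.0])

zdot = np.array([1.0, 1.0])            # derivative of z(t) = (t, t)
for t in np.linspace(0.0, 1.0, 11):
    z = np.array([t, t])
    # constraint B z'(t) <= b(t) + C z(t) holds (with equality) on [0, 1]
    assert np.all(B @ zdot <= b + C @ z + 1e-12)

# Net steel output: the integral of a . z'(t) over [0, 1]
print(a @ zdot * 1.0)  # -> 1.0
```

Both constraint rows are active along the whole trajectory: all produced steel is stockpiled and the plant capacity stays at its initial value, exactly as in the physical interpretation above.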
(Received 20 September 1965)