Mathematical and Computer Modelling 31 (2000) 89-98
www.elsevier.nl/locate/mcm

Newton-Type Methods for Stochastic Programming

X. CHEN
Department of Mathematics and Computer Science
Shimane University, Matsue 690-8504, Japan
[email protected]

Abstract-Stochastic programming is concerned with practical procedures for decision making under uncertainty, by modelling uncertainties and risks associated with decisions in a form suitable for optimization. The field is developing rapidly with contributions from many disciplines, such as operations research, probability and statistics, and economics. A stochastic linear program with recourse can equivalently be formulated as a convex programming problem. The problem is often large scale, as the objective function involves an expectation, either over a discrete set of scenarios or as a multi-dimensional integral. Moreover, the objective function is possibly nondifferentiable. This paper provides a brief overview of recent developments on smooth approximation techniques and Newton-type methods for solving two-stage stochastic linear programs with recourse, and on parallel implementation of these methods. A simple numerical example is used to signal the potential of smoothing approaches. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords-Stochastic programming, Smooth approximation techniques, Newton-type methods, Two-stage stochastic linear programs with recourse.

1. INTRODUCTION

Mathematical programming problems are used to model many important decision problems in engineering, management, and economics. For many practical decision problems, the problem data cannot be known with certainty for a variety of reasons, for example, measurement errors, incomplete information about the future, or unobserved events. Stochastic programming looks for optimal decisions that take this uncertainty into account.

The major class of stochastic programs consists of stochastic programs with recourse, which are in wide use in financial planning. Stochastic programs with recourse assign to uncertain parameters a probability distribution (based on statistical evidence or not), design "recourse", that is, the ability to take corrective action after random events have taken place, and minimize the cost of the master decision plus the expected cost of the recourse decision. The value of the objective function in stochastic programs with recourse is very expensive to compute, because it involves a number of inner optimization problems. The challenge of solving such problems has led to many interesting computational and theoretical developments. In this paper, we focus on smooth approximation techniques and Newton-type methods for solving two-stage stochastic linear programs with fixed recourse.



A version of two-stage stochastic linear programs with recourse [1] is

    minimize   c^T x + E_ω[Q(x, ω)],
    subject to Ax = b,  x ≥ 0,                                        (1.1)

where E denotes expectation and Q is calculated by finding an optimal recourse y ∈ ℜ^{n₂}, namely

    Q(x, ω) = min { q(ω)^T y | W(ω)y = h(ω) − T(ω)x, y ≥ 0 }.

In the first stage (a master problem), for a given decision vector x, the cost coefficient vector c ∈ ℜ^n, the constraint matrix A ∈ ℜ^{m×n}, and the vector b ∈ ℜ^m are assumed to be deterministic. In the second stage (a recourse problem), the associated cost coefficient vector q(·) ∈ ℜ^{n₂}, the recourse matrix W(·) ∈ ℜ^{m₂×n₂}, the demand vector h(·) ∈ ℜ^{m₂}, and the technology matrix T(·) ∈ ℜ^{m₂×n} are allowed to depend on the random vector ω ∈ Ω ⊂ ℜ^l, and therefore, to have random entries themselves. E_ω[Q(x, ω)] presents the expected value of the minimum extra costs based on the first-stage decision x and random events ω.

Ignoring E_ω[Q(x, ω)], (1.1) is a linear programming problem. Fixing q ≡ q(ω) and W ≡ W(ω) in the second stage, problem (1.1) reduces to the stochastic linear programs with fixed recourse. Fixed recourse problems were first formulated by Beale [2] and Dantzig [3] in 1955. A particularly interesting instance of fixed recourse problems is complete fixed recourse, where the fixed recourse matrix W satisfies

    { z | z = Wy, y ≥ 0 } = ℜ^{m₂}.

This implies that, whatever the first-stage decision x and the event ω turn out to be, the second-stage program

    Q(x, ω) = min { q^T y | Wy = h(ω) − T(ω)x, y ≥ 0 }                (1.2)

will always be feasible. If (1.2) is feasible for all ω ∈ Ω and all x ≥ 0 satisfying Ax = b, then it is called a problem with relatively complete recourse. A special case of complete fixed recourse is simple recourse, where, with the identity matrix I of order m₂, W = (I, −I).

For convenience, we let ω be a discrete random vector and denote the expected recourse function by

    φ(x) ≡ E_ω[Q(x, ω)] = Σ_{i=1}^N Q(x, ω_i)p_i,                     (1.3)

where p_i ≥ 0 and Σ_{i=1}^N p_i = 1. Our discussion can be readily extended to a continuous random variable; see the example in Section 5.

Let us denote the objective function of problem (1.1)-(1.3) by

    f(x) = c^T x + φ(x).

By the convexity of Q(·, ω), the objective function f is convex, and hence, the two-stage stochastic programming problem (1.1)-(1.3) is a convex nonlinear programming problem. Difficulties in solving this convex nonlinear programming problem are that

• f is not necessarily differentiable, which prevents the use of algorithms with high rates of convergence;
• f is very expensive to evaluate, as f(x) = c^T x + φ(x) and φ involves N optimization problems (N can easily exceed 10,000); a sketch below makes this cost concrete.

These difficulties provide a strong motivation for studying

(1) smoothing approaches, which pave the way for using algorithms with high rates of convergence to solve the stochastic program,
(2) Newton-type methods, to minimize the number of iterations and function evaluations required by the optimization algorithm,
(3) parallel processing techniques, to find solutions of the optimization problems in the second stage.
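To make the expense of evaluating f concrete, the following Python sketch assembles f(x) = c^T x + φ(x) from one second-stage linear program per scenario. It is only a minimal sketch, not code from the paper: the data (A, b, c, W, q, and the per-scenario p_i, h(ω_i), T(ω_i)) are invented for illustration, and scipy.optimize.linprog is used as a generic LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical fixed-recourse data (not from the paper): n = 2, m2 = 2, n2 = 4.
c = np.array([1.0, 2.0])                    # first-stage costs
q = np.array([1.5, 1.0, 2.0, 0.5])          # second-stage costs (fixed recourse)
W = np.array([[1.0, -1.0, 0.0, 0.0],        # complete recourse: {W y | y >= 0}
              [0.0, 0.0, 1.0, -1.0]])       # spans all right-hand sides
scenarios = [                                # (probability p_i, h(w_i), T(w_i))
    (0.3, np.array([1.0, 2.0]), np.array([[1.0, 0.0], [0.0, 1.0]])),
    (0.5, np.array([2.0, 1.0]), np.array([[0.5, 0.5], [1.0, 0.0]])),
    (0.2, np.array([0.5, 0.5]), np.array([[1.0, 1.0], [0.0, 0.5]])),
]

def Q(x, h, T):
    """Second-stage value Q(x, w) = min { q^T y | W y = h - T x, y >= 0 }."""
    res = linprog(q, A_eq=W, b_eq=h - T @ x, bounds=(0, None), method="highs")
    assert res.status == 0, "second-stage LP infeasible or unbounded"
    return res.fun

def f(x):
    """f(x) = c^T x + sum_i p_i Q(x, w_i): one LP solve per scenario."""
    return c @ x + sum(p * Q(x, h, T) for p, h, T in scenarios)

print(f(np.array([1.0, 1.0])))
```

Each evaluation of f costs N linear programs; with N in the tens of thousands, this is exactly the expense that motivates the smoothing, Newton-type, and parallel techniques described next.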

In the last decade, a lot of effort has been spent on the three approaches. In Section 2, we describe two smoothing techniques for finding an approximate solution of problem (1.1)-(1.3); the objective function in the smoothed problem converges monotonically to the objective function in the original problem. In Section 3, we discuss generalized Newton and quasi-Newton methods for solving the smoothed two-stage stochastic linear programs with fixed recourse. In Section 4, we consider parallel processing issues. In Section 5, we illustrate the smoothing approaches on a concrete example, the news vendor problem.

2. SMOOTH APPROXIMATION FUNCTIONS

The nonsmoothness of problem (1.1)-(1.3) occurs in the recourse function Q. In this section, we describe two smooth approximations to Q, which are based on quadratic representations. Unlike the linear model, the quadratic recourse functions are differentiable with respect to the first-stage decision variables. Moreover, for any given x and ω, the values of the quadratic recourse functions converge to Q(x, ω) as some parameters go to zero.

2.1. Quadratic Smooth Approach

The first approach is based on the dual problem of (1.2),

    Q(x, ω) = max { (h(ω) − T(ω)x)^T z | W^T z ≤ q }.

Let ε > 0. A smooth approximation to Q was defined in [4],

    Q_ε(x, ω) = max { −(ε/2)z^T z + (h(ω) − T(ω)x)^T z | W^T z ≤ q }.

Using the smooth approximation Q_ε, we can define a smooth approximation to the objective function f by

    F_ε(x) = c^T x + Σ_{i=1}^N Q_ε(x, ω_i)p_i.

An approximate solution to the optimal solution x* of problem (1.1)-(1.3) can then be found by solving the optimization problem

    minimize   F_ε(x),
    subject to Ax = b,  x ≥ 0.                                        (2.1)

Problem (2.1) is a special extended linear-quadratic problem (ELQP). Rockafellar and Wets [5,6] introduced the ELQP model for stochastic programs. Recently, several algorithms have been developed for solving ELQP problems [7-11]. Let X = { x | Ax = b, x ≥ 0 },

    z_ε(x, ω) = argmax { −(ε/2)z^T z + (h(ω) − T(ω)x)^T z | W^T z ≤ q },

and Z = { z_ε(x, ω) | x ∈ X, ω ∈ Ω }. We assume that there exists a β > 0 such that max_{z∈Z} ‖z‖ ≤ β. Here, Z is the set of optimal dual solutions of (1.2) on x ∈ X, ω ∈ Ω, and β is the maximum value of the 2-norm of the elements of Z. If the feasible region { z | W^T z ≤ q } is bounded, then such a β exists.

The differentiability analysis and the generalized Hessian of problem (2.1) were given in [7]. The error bound of the smooth approximation function to the original problem was established in [4]. These results show that the objective function in (2.1) is a good smooth approximation of the objective function in (1.1), which implies that (2.1) is a good smooth approach to solving (1.1)-(1.3).

THEOREM 2.1.
(1) (See [7].) The function F_ε has a locally Lipschitz gradient

    ∇F_ε(x) = c − Σ_{i=1}^N T(ω_i)^T z_ε(x, ω_i)p_i.

Furthermore, F_ε is twice differentiable almost everywhere.
(2) (See [4].) Suppose that (1.1)-(1.3) is a problem with relatively complete recourse. Then, for any x ∈ X, there exists an ε(x) > 0 such that for any ε ∈ (0, ε(x)],

    0 ≤ f(x) − F_ε(x) ≤ (ε/2)β².

Let x* be a solution of (1.1)-(1.3) and x̄* be a solution of (2.1). Then, for any 0 ≤ ε ≤ ε̄ = min{ε(x*), ε(x̄*)},

    max { f(x̄*) − f(x*), F_ε(x*) − F_ε(x̄*) } ≤ (ε/2)β².

Further, assume that f or F_ε is strongly convex on X with modulus μ > 0. Then, combining the bound above with strong convexity at the minimizer gives

    ‖x̄* − x*‖² ≤ (β²/μ)ε.
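The quantities in Theorem 2.1 are directly computable. The sketch below evaluates Q_ε(x, ω_i), the dual maximizers z_ε(x, ω_i), and hence F_ε(x) and ∇F_ε(x), for invented scenario data in the same (p_i, h_i, T_i) format as the earlier sketch. It assumes the cvxpy modelling library as a generic QP solver; none of the identifiers come from the paper.

```python
import numpy as np
import cvxpy as cp

def Q_eps_and_dual(x, h, T, W, q, eps):
    """Q_eps(x, w) = max { -(eps/2) z^T z + (h - T x)^T z : W^T z <= q },
    returned together with its (unique) maximizer z_eps(x, w)."""
    z = cp.Variable(W.shape[0])
    rhs = h - T @ x
    prob = cp.Problem(cp.Maximize(rhs @ z - (eps / 2) * cp.sum_squares(z)),
                      [W.T @ z <= q])
    prob.solve()
    return prob.value, z.value

def F_eps_and_grad(x, scenarios, W, q, c, eps):
    """F_eps(x) = c^T x + sum_i p_i Q_eps(x, w_i) and, by Theorem 2.1,
    grad F_eps(x) = c - sum_i p_i T(w_i)^T z_eps(x, w_i)."""
    val, grad = c @ x, c.copy()
    for p, h, T in scenarios:
        qv, zv = Q_eps_and_dual(x, h, T, W, q, eps)
        val += p * qv
        grad -= p * (T.T @ zv)
    return val, grad
```

Because the quadratic term makes each dual maximizer unique, z_ε(x, ω_i) is single-valued and the gradient formula of Theorem 2.1 applies directly; as ε ↓ 0, the values return to the linear recourse Q.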

2.2. Square Smooth Approach

Now, we consider the second smooth approach, which was recently introduced by Birge, Pollock and Qi [12]. Assume that the associated cost coefficient vector q is positive, i.e., q_i > 0 for i = 1, …, n₂. Replacing the objective function q^T y in the second stage by (q^T y)², we obtain the square of the recourse function

    Q²(x, ω) = min { (q^T y)² | Wy = h(ω) − T(ω)x, y ≥ 0 }.           (2.2)

By the positiveness of q, representation (2.2) means that the minimization problems in (1.2) and (2.2) have the same optimal solution. A smoothing approach to Q uses the positive square root ψ_k of

    ψ_k²(x, ω) = min { (q^T y)² + k‖Wy − h(ω) + T(ω)x‖² + ε_k | y ≥ 0 }.

Here, ‖·‖ denotes the 2-norm, and k and ε_k are two positive parameters. The parameter k reflects the relative importance of satisfying the constraints Wy = h(ω) − T(ω)x compared to minimizing (q^T y)². The parameter ε_k is added to ensure that ψ_k² ≥ ε_k > 0. This is useful for establishing differentiability properties of ψ_k. We can replace k by an m₂ × m₂ diagonal matrix K such that each diagonal element of K "weights" a component of Wy − h(ω) + T(ω)x. Let

    G_k(x) = c^T x + Σ_{i=1}^N ψ_k(x, ω_i)p_i.
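A minimal sketch of this construction, again assuming cvxpy as the QP solver; the parameter values k and ε_k, like all the data, are our own illustrative assumptions, not values from [12].

```python
import numpy as np
import cvxpy as cp

def psi_k(x, h, T, W, q, k=1e3, eps_k=1e-8):
    """Square smoothing: psi_k(x, w) = sqrt( min_{y >= 0} (q^T y)^2
    + k * ||W y - h + T x||^2 + eps_k ). The penalty term k replaces the
    equality constraint W y = h - T x of the original recourse problem."""
    y = cp.Variable(W.shape[1], nonneg=True)
    prob = cp.Problem(cp.Minimize(
        cp.square(q @ y) + k * cp.sum_squares(W @ y - h + T @ x) + eps_k))
    prob.solve()
    return np.sqrt(prob.value)
```

For large k, the penalty forces near-feasibility of y, and ψ_k approaches Q from below in the sense made precise by Theorem 2.2 below.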

Birge, Pollock and Qi [12] showed that G_k is continuously differentiable and converges to the objective function f of (1.1)-(1.3) as k → ∞. We summarize their results by the following theorem.

THEOREM 2.2. (See [12].) The function G_k is differentiable and ∇G_k is Lipschitz continuous in ℜ^n. Suppose that (1.2) is feasible for h(ω) − T(ω)x and that α is the maximum value of the 2-norm of optimal dual solutions of (1.2). Then, ψ_k²(x, ω) − ε_k converges monotonically to Q²(x, ω) from below, and for k large enough,

    0 ≤ Q(x, ω) − (ψ_k²(x, ω) − ε_k)^{1/2} ≤ (α²/(2k)) Q(x, ω).

3. NEWTON-TYPE METHODS

Newton's method (also known as a sequential quadratic programming method) is essentially a quadratic approximation procedure [13]. The generalized Newton method (using the generalized Hessian) and quasi-Newton methods for solving LC¹ optimization problems have been studied by many researchers [8,14-17]. Global convergence and superlinear convergence of Newton-type methods for LC¹ optimization problems have been established. A function is called LC¹ if it has a locally Lipschitz gradient. An optimization problem is called LC¹ if all involved functions are LC¹.

Both functions F_ε and G_k are LC¹. Replacing the objective function in (1.1) by F_ε or G_k, we obtain an LC¹ optimization problem. The smooth approximation technique paves the way to apply the globally and superlinearly convergent generalized Newton method and quasi-Newton methods to solve the stochastic linear programming problem (1.1).

The Karush-Kuhn-Tucker (KKT) system for (2.1) is

    ∇F_ε(x) − A^T λ − u = 0,
    b − Ax = 0,
    x, u ≥ 0,  u^T x = 0,

where x, u ∈ ℜ^n and λ ∈ ℜ^m. Let r = 2n + m and v = (x, λ, u)^T. Then, the KKT system for (2.1) is equivalent to the system of nonsmooth equations H(v) = 0, where H: ℜ^r → ℜ^r is defined as

    H(v) := ( ∇F_ε(x) − A^T λ − u,  b − Ax,  min(x, u) ),

where the "min" operator denotes the componentwise minimum of two vectors; a short computational transcription of this residual follows.

A general version of Newton-type methods with a line search for problem (2.1) is stated as follows.

ALGORITHM 3.1. Given constants η, ρ, σ ∈ (0, 1), α₀ > 0 and an integer γ > 0, choose x₀ ∈ X, λ₀ ∈ ℜ^m, u₀ ≥ 0, and an n × n symmetric positive definite matrix B₀. Let v₀ = (x₀, λ₀, u₀)^T and let δ = ‖H(v₀)‖.

(1) Solve the quadratic program

    minimize   ∇F_ε(x_k)^T d + (1/2)d^T(B_k + α_k I)d,
    subject to A(x_k + d) = b,  x_k + d ≥ 0.                          (3.1)

Let d_k be the solution of (3.1), and let λ_{k+1} and u_{k+1} be the Lagrange multipliers at d_k corresponding to A(x_k + d) = b and x_k + d ≥ 0, respectively.

(2) Let t_k be the minimum integer t ≥ 0 such that

    F_ε(x_k + ρ^t d_k) − F_ε(x_k) ≤ σρ^t ∇F_ε(x_k)^T d_k.

Let x_{k+1} = x_k + ρ^{t_k} d_k.

(3) Let v_{k+1} = (x_{k+1}, λ_{k+1}, u_{k+1})^T. If ‖H(v_{k+1})‖/δ ≤ η, let δ = ‖H(v_{k+1})‖ and α_{k+1} = ηα_k; otherwise, let α_{k+1} = α_k.

(4) Take a generalized Hessian B_{k+1}, or update B_k by a quasi-Newton formula to get B_{k+1}.

(5) If x_{k+1} satisfies a prescribed stopping criterion, terminate. Otherwise, return to Step 1 with k replaced by k + 1.

Notice that (2.1) is a convex programming problem. The most successful quasi-Newton method for convex problems is the BFGS method [13]. The BFGS update formula is given by

    B_{k+1} = B_k − (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + (y_k y_k^T)/(y_k^T s_k),

where y_k = ∇F_ε(x_{k+1}) − ∇F_ε(x_k) and s_k = x_{k+1} − x_k. The method for calculating an element of the generalized Hessian for problem (2.1) was given in [7,9]. By the convexity of F_ε, all elements of the generalized Hessian are symmetric positive semidefinite.
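The two ingredients of Algorithm 3.1 that are independent of the QP subproblem solver, the line search of Step (2) and the BFGS update of Step (4), can be written down directly. A sketch (the function names and the curvature safeguard are our own additions, not from the paper):

```python
import numpy as np

def armijo_step(F, grad_F, x, d, rho=0.5, sigma=1e-4):
    """Step (2): returns rho^t for the smallest integer t >= 0 with
    F(x + rho^t d) - F(x) <= sigma * rho^t * grad_F(x)^T d.
    Assumes d is a descent direction (grad_F(x)^T d < 0)."""
    fx, slope = F(x), grad_F(x) @ d
    step = 1.0
    while F(x + step * d) - fx > sigma * step * slope:
        step *= rho
    return step

def bfgs_update(B, s, y):
    """Step (4): B_{k+1} = B - B s s^T B / (s^T B s) + y y^T / (y^T s),
    with s = x_{k+1} - x_k and y = grad F(x_{k+1}) - grad F(x_k).
    Skips the update if the curvature y^T s is not safely positive."""
    if y @ s <= 1e-12 * np.linalg.norm(y) * np.linalg.norm(s):
        return B                      # safeguard: keep the previous B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```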

4. PARALLEL COMPUTATION

Calculating the objective value F_ε(x) involves solutions of N quadratic programming problems. N is either the number of scenarios, or the number of points used in a numerical integration rule, and can easily exceed 10,000. This provides a strong motivation for using parallel computation [18-20]. For fixed recourse problems, these quadratic programming problems have the same feasible region. Hence, these problems can be solved efficiently on a parallel computer using data parallel constructs expressed by Fortran 90 style matrix operations.

Based on the dual problem of (1.2) and SOR iterative methods for linear complementarity problems, Chen and Womersley [18] proposed a parallel algorithm for solving the following multiple quadratic programs:

    maximize   −(1/2)z^T M z + (h(ω_i) − T(ω_i)x)^T z,
    subject to W^T z ≤ q,      i = 1, …, N,                           (4.1)

where M is an m₂ × m₂ symmetric positive definite matrix. The multiple quadratic program in problem (2.1) is a special case of (4.1) with M = εI, where I is the identity matrix of order m₂.

For a given x, let H ∈ ℜ^{m₂×N} be the matrix whose jth column is h(ω_j) − T(ω_j)x, and let D = diag(W^T M⁻¹ W), U^k, V^k ∈ ℜ^{n₂×N}, Z^k ∈ ℜ^{m₂×N}, and B = qe^T ∈ ℜ^{n₂×N}, where e ∈ ℜ^N with all elements equal to 1. The jth columns of U^k, V^k, Z^k correspond to the jth problem of the multiple quadratic programs. Thus, the value of N has a major effect on the degree of parallelism available.

ALGORITHM 4.1. (See [18].) Give initial values Z⁰ and U⁰. Let k = 0, V⁰ = B − W^T Z⁰, and δ⁰ = ‖min(V⁰, U⁰)‖. Give a relaxation factor λ ∈ (0, 2), a step size s > 0, and a stopping tolerance err > 0.

    while k ≤ k_max and δ^k ≥ err
        Û^{k+1} = max(0, U^k − sD⁻¹V^k),
        U^{k+1} = λÛ^{k+1} + (1 − λ)U^k,
        Z^{k+1} = M⁻¹(H − WU^{k+1}),
        V^{k+1} = B − W^T Z^{k+1},
        δ^{k+1} = ‖min(V^{k+1}, U^{k+1})‖,
        k ← k + 1,
    end.

Algorithm 4.1 has been tested on an example in stochastic optimization on a 16-processor CM5 parallel computer and compared with other parallel algorithms; numerical results were reported in [18].
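Because all N quadratic programs share the feasible region { z | W^T z ≤ q }, the projected SOR iteration acts on whole matrices at once, and the Fortran 90 matrix operations mentioned above map directly onto NumPy. A sketch in the paper's notation (k_max and the parameter defaults are our assumptions; the diagonal of W^T M⁻¹ W is assumed positive):

```python
import numpy as np

def multi_qp_sor(W, q, H, M, lam=1.0, s=1.0, err=1e-8, k_max=5000):
    """Algorithm 4.1 (sketch): solve the N dual QPs
    max -(1/2) z^T M z + H[:, j]^T z  s.t.  W^T z <= q, columnwise in parallel.
    U holds the N multiplier vectors, Z the N dual solutions."""
    n2, N = W.shape[1], H.shape[1]
    Minv = np.linalg.inv(M)
    D_inv = 1.0 / np.diag(W.T @ Minv @ W)       # D = diag(W^T M^-1 W)
    B = np.outer(q, np.ones(N))                  # B = q e^T
    U = np.zeros((n2, N))
    Z = Minv @ H                                 # stationarity for U = 0
    V = B - W.T @ Z
    for _ in range(k_max):
        if np.linalg.norm(np.minimum(V, U)) < err:   # complementarity residual
            break
        U_hat = np.maximum(0.0, U - s * D_inv[:, None] * V)
        U = lam * U_hat + (1.0 - lam) * U            # SOR relaxation
        Z = Minv @ (H - W @ U)                       # Z^{k+1} = M^-1 (H - W U)
        V = B - W.T @ Z
    return Z, U
```

Each update touches all N columns with a handful of dense matrix products, which is exactly the data-parallel pattern exploited on the CM5.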

5. THE NEWS VENDOR PROBLEM

In this section, we illustrate the smooth approximation approaches on a version of a basic problem in stochastic optimization, the news vendor problem. In this situation, every morning a news vendor goes to the publisher and buys x newspapers at a price of c per paper. This number is bounded above by some limit u. Then, the vendor sells as many newspapers as possible at the selling price s. Any unsold newspaper can be returned to the publisher at a return price r, with r < s. Demand for newspapers varies over days, and is described by a random variable ω. We assume that the news vendor cannot buy more newspapers and cannot sell a previous edition during the day.

To help the news vendor decide on how many newspapers to buy every morning, we define v as the effective sales and y as the number of newspapers returned to the publisher at the end of the day. We may then formulate the problem as a stochastic program with recourse

    min  cx + E_ω[Q(x, ω)],
    s.t. 0 ≤ x ≤ u,                                                   (5.1)

where

    Q(x, ω) = min { −sv − ry | v ≤ ω, v + y ≤ x, v ≥ 0, y ≥ 0 }.

This model describes the news vendor's profit. Here, −φ(x) is the expected profit on sales and returns, while −Q(x, ω) is the profit on sales and returns if the demand is at level ω. The optimal solution of the recourse problem is given as

    v* = min(ω, x),                                                   (5.2)
    y* = max(x − ω, 0) = x − v*.                                      (5.3)

Using the relation between v* and y*, we can rewrite the recourse problem in (5.1) as

    Q(x, ω) = min { −sx + (s − r)y | y ≥ x − ω, y ≥ 0 }.              (5.4)

Noticing s > r, we further simplify the recourse problem as

    Q(x, ω) = −sx + (s − r) max(x − ω, 0).                            (5.5)

Since for any fixed ω, Q(·, ω) is convex, E_ω[Q(x, ω)] is convex, and hence, problem (5.1) is a convex programming problem.

If ω is a continuous random variable, then

    φ(x) = E_ω[Q(x, ω)] = −sx + (s − r) ∫₀^x F(ω) dω,

where F(ω) represents the cumulative probability distribution of ω. Here, φ is differentiable, but is not twice differentiable. If ω is a discrete random vector, then

    φ(x) = −sx + (s − r) Σ_{i=1}^N max(x − ω_i, 0)p_i.

Now φ is piecewise linear, and possibly nondifferentiable at x = ω_i, i = 1, …, N; the sketch below makes these kinks visible numerically.
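For instance, with invented numbers s = 10, r = 4, and three equally likely demand levels, the one-sided slopes of φ differ at each demand point (a quick check, not an experiment from the paper):

```python
import numpy as np

s, r = 10.0, 4.0
demands = np.array([30.0, 50.0, 80.0])          # omega_i, each with p_i = 1/3
probs = np.full(3, 1.0 / 3.0)

def phi(x):
    """phi(x) = -s x + (s - r) * sum_i p_i * max(x - omega_i, 0)."""
    return -s * x + (s - r) * np.sum(probs * np.maximum(x - demands, 0.0))

# One-sided difference quotients around the kink at x = 50:
eps = 1e-6
left = (phi(50.0) - phi(50.0 - eps)) / eps       # about -10 + 6 * (1/3) = -8
right = (phi(50.0 + eps) - phi(50.0)) / eps      # about -10 + 6 * (2/3) = -6
print(left, right)                               # unequal: phi has a kink at 50
```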


The nonsmoothness of problem (5.1) arises from the piecewise linearity of Q(x, ω). Now, we consider the quadratic smooth approximation. By the duality of problem (5.4), we have

    Q(x, ω) = −sx + max { (x − ω)z | z ∈ S },    S = [0, s − r].

Adding a quadratic term, we obtain

    Q_ε(x, ω) = −sx + max { (x − ω)z − (ε/2)z² | z ∈ S }.

The optimal solution is

    z_ε(x, ω) = Π_S((x − ω)/ε) = { (x − ω)/ε,  if 0 ≤ (x − ω)/ε ≤ s − r,
                                   0,           if (x − ω)/ε < 0,
                                   s − r,       if (x − ω)/ε > s − r },

where Π_S(u) denotes the projection of u onto the set S. Thus, the smooth approximation recourse function is

    Q_ε(x, ω) = −sx + { (x − ω)²/(2ε),                  if 0 ≤ (x − ω)/ε ≤ s − r,
                        0,                               if (x − ω)/ε < 0,
                        (s − r)(x − ω) − (ε/2)(s − r)²,  if (x − ω)/ε > s − r.

For any fixed ω, Q_ε(·, ω) is convex and continuously differentiable. The derivative is

    Q_ε′(x, ω) = −s + { (x − ω)/ε,  if 0 ≤ (x − ω)/ε ≤ s − r,
                        0,           if (x − ω)/ε < 0,
                        s − r,       if (x − ω)/ε > s − r.

An element of the generalized second derivative is

    Q_ε″(x, ω) = { 1/ε,  if 0 ≤ (x − ω)/ε ≤ s − r,
                   0,    otherwise.
It is easy to verify that

    |Q_ε(x, ω)| ≤ sx + (s − r)|x − ω|,    |Q_ε′(x, ω)| ≤ 2s − r,    |Q_ε″(x, ω)| ≤ 1/ε.

Hence, the smooth approximation recourse problem is stable as the parameter ε ↓ 0. If ω is a continuous random variable, then

    φ_ε(x) = −sx + (s − r) ∫₀^{x−ε(s−r)} F(ω) dω + (1/ε) ∫_{x−ε(s−r)}^{x} (x − ω)F(ω) dω.

φ_ε is continuously twice differentiable. The derivative is

    φ_ε′(x) = −s + (1/ε) ∫_{x−ε(s−r)}^{x} F(ω) dω,

and the second derivative is

    φ_ε″(x) = (1/ε)(F(x) − F(x − ε(s − r))) ≥ 0.

Furthermore, we have the error bound

    |φ(x) − φ_ε(x)| ≤ (ε/2)(s − r)²,    for any x ∈ ℜ.

By a similar argument, we can show that if ω is a discrete random variable, φ_ε is continuously differentiable. The news vendor model is a simple recourse problem [21]. For such problems in multiple dimensions, we can achieve the same smoothing results by the continuity of ∇_x Q_ε(x, ω) = Π_S((x − ω)/ε).
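The closed forms above are easy to check numerically. The following sketch (with the same invented s, r, and demand distribution as before) evaluates φ_ε for a discrete ω and verifies the bound |φ(x) − φ_ε(x)| ≤ (ε/2)(s − r)² on a grid:

```python
import numpy as np

s, r, eps = 10.0, 4.0, 0.05
demands = np.array([30.0, 50.0, 80.0])
probs = np.full(3, 1.0 / 3.0)

def smoothed_term(x, w):
    """Smoothed replacement for (s - r) * max(x - w, 0), from the
    closed form of Q_eps (the -s x part is added separately)."""
    t = (x - w) / eps
    if t < 0.0:
        return 0.0
    if t <= s - r:
        return (x - w) ** 2 / (2.0 * eps)
    return (s - r) * (x - w) - (eps / 2.0) * (s - r) ** 2

def phi(x):       # exact expected recourse
    return -s * x + (s - r) * np.sum(probs * np.maximum(x - demands, 0.0))

def phi_eps(x):   # smoothed expected recourse
    return -s * x + np.sum(probs * np.array([smoothed_term(x, w) for w in demands]))

grid = np.linspace(0.0, 100.0, 2001)
gap = max(abs(phi(x) - phi_eps(x)) for x in grid)
print(gap, (eps / 2.0) * (s - r) ** 2)   # observed gap <= 0.9 = (eps/2)(s-r)^2
```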

6. FINAL REMARKS

This paper provides a brief overview of recent results on smooth approximation techniques and Newton-type methods for solving two-stage stochastic linear programs with fixed recourse. Tseng [22] numerically compared the two smooth approaches with Newton methods described in this paper for the stochastic programming problem (1.1)-(1.3); the numerical results in [22] showed the good performance of the two smooth approaches with Newton methods. These techniques can be extended to multistage stochastic programming problems [20,23,24] and portfolio optimization problems [25]. For instance, we can employ smooth approximation in some stages and obtain a smooth approximation function to the original problem. Another interesting issue is to study algorithms combining Newton-type methods with other methods, for example, the stochastic decomposition method [1,26,27]. Birge, Chen and Qi [26] proposed a stochastic Newton method and proved that this method is superlinearly convergent with probability one.

REFERENCES

1. P. Kall and S.W. Wallace, Stochastic Programming, John Wiley & Sons, (1994).
2. E.M.L. Beale, On minimizing a convex function subject to linear inequalities, J. Roy. Statist. Soc. B 17, 173-184 (1955).
3. G.B. Dantzig, Linear programming under uncertainty, Management Sci. 1, 197-206 (1955).
4. X. Chen, A parallel BFGS-SQP method for stochastic linear programs, In Computational Techniques and Applications, (Edited by R.L. May and A.K. Easton), pp. 67-74, World Scientific, (1995).
5. R.T. Rockafellar and R.J.-B. Wets, A Lagrangian finite-generation technique for solving linear-quadratic problems in stochastic programming, Math. Prog. Study 28, 63-93 (1986).
6. R.T. Rockafellar and R.J.-B. Wets, Linear-quadratic problems with stochastic penalties: The finite generation algorithm, In Stochastic Optimization, Lecture Notes in Control and Information Sciences 81, (Edited by V.I. Arkin, A. Shiraev and R.J.-B. Wets), pp. 545-560, Springer-Verlag, Berlin, (1987).
7. X. Chen, L. Qi and R.S. Womersley, Newton's method for quadratic stochastic programs with recourse, J. Comp. Appl. Math. 60, 29-46 (1995).
8. X. Chen and R.S. Womersley, A parallel inexact Newton method for stochastic programs with recourse, Annals Oper. Res. 64, 113-141 (1996).
9. L. Qi and R.S. Womersley, An SQP algorithm for extended linear-quadratic problems in stochastic programming, Annals Oper. Res. 66, 251-285 (1995).
10. J. Sun, K.-E. Wee and J.-S. Zhu, An interior point method for solving a class of linear-quadratic stochastic programming problems, In Recent Advances in Nonsmooth Optimization, (Edited by D.-Z. Du, L. Qi and R.S. Womersley), pp. 392-404, World Scientific, (1995).
11. C. Zhu and R.T. Rockafellar, Primal-dual projected gradient algorithms for extended linear-quadratic programming, SIAM J. Optim. 3, 751-783 (1993).
12. J.R. Birge, S.M. Pollock and L. Qi, A quadratic recourse function for the two-stage stochastic program, In Progress in Optimization II: Contributions from Australasia, (Edited by X.Q. Yang, A.I. Mees, M.E. Fisher and L.S. Jennings), Kluwer Academic, Norwell, MA, (to appear).
13. R. Fletcher, Practical Methods of Optimization, 2nd Edition, John Wiley & Sons, (1987).
14. X. Chen, Convergence of the BFGS method for LC¹ convex constrained optimization, SIAM J. Control Optim. 34, 2051-2063 (1996).
15. F. Facchinei, Minimization of SC¹ functions and the Maratos effect, Oper. Res. Lett. 17, 131-137 (1995).
16. J.S. Pang and L. Qi, A globally convergent Newton method for convex SC¹ minimization problems, J. Optim. Theory Appl. 85, 633-648 (1995).
17. L. Qi, Superlinearly convergent approximate Newton methods for LC¹ optimization problems, Math. Programming 64, 277-294 (1994).
18. X. Chen and R.S. Womersley, Random test problems and parallel methods for quadratic programs and quadratic stochastic programs, Optim. Methods Software (to appear).
19. E.R. Jessup, D. Yang and S.A. Zenios, Parallel factorization of structured matrices arising in stochastic programming, SIAM J. Optim. 4, 833-846 (1994).
20. A. Ruszczyński, Parallel decomposition of multistage stochastic programming problems, Math. Programming 58, 201-228 (1993).
21. J.R. Birge and F.V. Louveaux, Introduction to Stochastic Programming, Lecture Notes, Department of Industrial and Operations Engineering, The University of Michigan, (1995).
22. C.-H. Tseng, Two-stage stochastic programming with recourse, Master of Mathematics Thesis, University of New South Wales, (1996).
23. J.R. Birge, Decomposition and partitioning methods for multistage stochastic linear programs, Oper. Res. 33, 989-1007 (1985).
24. J.M. Mulvey, Financial planning via multi-stage stochastic optimization, Statistics and Operations Research Technical Report SOR 94-09, Princeton University, (1994).
25. R.S. Womersley and K. Lau, Portfolio optimization problems, In Computational Techniques and Applications, (Edited by R.L. May and A.K. Easton), pp. 795-802, World Scientific, (1995).
26. J.R. Birge, X. Chen and L. Qi, A stochastic Newton method for stochastic quadratic programs with recourse, Applied Mathematics Report, School of Mathematics, UNSW, Sydney, (1994).
27. J.L. Higle and S. Sen, Stochastic decomposition: An algorithm for two-stage linear programs with recourse, Math. Oper. Res. 16, 650-669 (1991).
28. J.R. Birge and R.J.-B. Wets, Designing approximation schemes for stochastic optimization problems, in particular, for stochastic programs with recourse, Math. Prog. Study 27, 54-102 (1986).
29. J.M. Mulvey and H. Vladimirou, Stochastic network programming for financial planning problems, Management Sci. 38, 1642-1664 (1992).
30. L. Nazareth and R.J.-B. Wets, Nonlinear programming techniques applied to stochastic programs with recourse, In Numerical Techniques in Stochastic Programming, (Edited by Y. Ermoliev and R.J.-B. Wets), pp. 95-119, Springer-Verlag, Berlin, (1988).
31. S.M. Robinson and R.J.-B. Wets, Stability in two-stage stochastic programming, SIAM J. Control Optim. 25, 1409-1416 (1987).