Pergamon
International Transactions in Operational Research, Vol. 1, No. 2, pp. 241-250, 1994. Copyright © 1994 IFORS. Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0969-6016/94 $6.00 + 0.00
0969-6016(93)E0004-X
Infinite-horizon Scheduling Algorithms for Optimal Search for Hidden Objects

EUGENE LEVNER
The Hebrew University of Jerusalem, Israel

This paper considers discrete search problems in which a decision-maker has to find objects lost or hidden in a given set of locations so as to minimize the expected losses incurred. Given a chance to look for a hidden object in the same location infinitely many times, this type of problem, in contrast to standard scheduling problems, has an infinite sequence as its solution. Thus we are concerned to find an algorithm that yields an optimal solution, rather than the optimal sequence itself. Using combinatorial techniques, fast optimal algorithms for solving the problems are obtained, and optimality conditions are presented for search criteria under which the local-search algorithms yield the global optimum.
Key words: search, optimization, scheduling, sequencing, combinatorial analysis, sequential search policy
INTRODUCTION AND SUMMARY

Suppose a fault occurs in a complex technological system containing several modules. Until it is eliminated, the fault causes damage to the whole system, the scale of damage being dependent on the location of the failure in the system and the time needed to detect the failure. Assuming that the modules are inspected sequentially, the problem is to find the failed module so as to minimize the expected losses incurred by the failure. Given a chance that inspecting a faulty module will yield no information as to whether or not the module is faulty, this type of problem has an infinite sequence as its solution. Thus we are concerned to find an algorithm that yields an optimal solution, rather than the optimal sequence itself. Such situations are often encountered in practice. Especially notable are those arising in fault diagnostics, searching for lost or hidden objects, and planning risky R&D projects. Recent reviews and texts that provide a deeper background and further references on search problems include Ahlswede and Wegener (1987), Stone (1989) and Benkoski et al. (1991). In this paper, we shall be interested in solving this problem type by locally optimal strategies (also called 'most-inviting strategies', or 'index policies'). Such a strategy, at each step, determines a current 'search effectiveness' E_i for each location i, and recommends searching in the location of the highest current effectiveness. Though attractive because of their computational efficiency, locally optimal strategies do not guarantee finding an optimal solution for every search problem. Thus, the question arises: for which search criteria do the local strategies provide the global optimum? There is a substantial literature devoted to this question. Gluss (1959), Blackwell (1962) and Staroverov (1963) were apparently the first to show that a locally optimal procedure minimizes the expected total searching cost.
Kadane (1968) and Stone (1989) extended this result to a situation in which the times and probabilities of detecting the fault may change during the search. Hall (1977), Kimeldorf and Smith (1979) and Assaf and Zamir (1987) found similar results when the number of targets or the overlook probabilities are random variables. Chew (1967) and Kadane (1968) have considered a problem in which the probability of detection is to be maximized subject to a cost constraint, and showed that a locally optimal strategy is optimal when the cost is measured by the number of searches. Sweat (1970) has maximized the expected total discounted reward, and obtained an efficient optimization rule similar to that for the min-cost problem. Kadane and Simon (1977) have proposed a unified approach to the min-cost and max-reward (finite-horizon) search problems. Other
Correspondence: School of Business Administration, The Hebrew University of Jerusalem, Mount Scopus, 91905 Jerusalem, Israel.
finite-horizon search scenarios and efficient algorithms maximizing the discounted total return have been obtained by Zuckerman (1989) and Granot and Zuckerman (1991). This paper considers the so-called 'min-loss search problems', which generalize the Gluss-Blackwell-Staroverov, Sweat and Kadane-Simon models. Using combinatorial techniques, fast optimal algorithms for solving the problems are obtained. The following optimality conditions for search criteria are derived: in the min-loss problem, a local-search algorithm yields the global optimum iff the loss function of the total searching time is linear or exponential; the loss function depending on the total cumulative damage, given in the multiplicative form, has the above property iff it is linear or logarithmic.
MIN-LOSS SEARCH: LOSSES DEPENDING ON SEARCHING TIME

Suppose that a complex system is composed of N independent, stochastically failing modules. Consider a situation when the damage incurred by the failure (for example, survival considerations) is more important than the cost of the search. We assume that only one module can fail within the system's lifetime. The failure causes damage to the whole system and/or its environment; the scale of damage is assumed to be dependent on the location of the failed module and accumulated from the moment the failure occurred till the moment it is detected and eliminated. When the failure occurs, a series of sequential inspections must be performed in order to identify the failed module. The goal of the search is to minimize the expected losses incurred by the failure of the system before the latter is detected. It is assumed that an inspection of each failed module, i, is imperfect, i.e. there is a chance, a_i, of overlooking the failed element. This leads to a chance of examining each module more than once; in other words, a search sequence may be, generally speaking, infinite. When all a_i = 0 we obtain, as a special case, the so-called 'perfect inspections', which are the subject of research in scheduling theory (see, for example, Kadane and Simon, 1977; Kelly, 1982; Lawler et al., 1989). When the system fails, each module i, i = 1, ..., N, is characterized by the following parameters: prior probability p_i of being faulty, 0 < p_i < 1; probability a_i of overlooking the failed element, 0 ≤ a_i < 1; expected time t_i to inspect module i; 'loss rate' per unit time c_i (in the case of linear losses); maximally possible loss level C_i (for exponential losses). Given the above input data, the optimal search strategy will depend on the form of the loss functions and the search 'scenario'. In this section, we shall study two types of expected loss functions, linearly or exponentially increasing in time.
We shall consider the search scenario specified by the following conditions: (i) the modules are inspected sequentially; (ii) for any specified search strategy and any location of failures, the outcomes of searches are assumed to be independent; (iii) the search process ends when the failed module is found. Each sequential inspection strategy specifies an infinite sequence s = (s[1], s[2], ..., s[n], ...), which states that the module inspected at the nth step of s is s[n]. For any given s, we shall use the following notation: P_[n,s]: probability that the failure is detected at the nth step; t_[m,s]: time spent inspecting module s[m] in s; c_[n,s]: 'loss rate' assigned to module s[n] in s; C_[n,s]: maximal possible loss level assigned to s[n] in s. Then the time spent to detect the failure at the nth step of s, T_[n,s], will be expressed as follows:
T_[n,s] = Σ_{m=1}^{n} t_[m,s]; the linear losses incurred before the failure is detected and eliminated at the nth step of s will be c_[n,s] T_[n,s]; the exponentially increasing losses (with a given discounting factor d, d > 0) will be C_[n,s] [1 - exp(-d T_[n,s])]. In accordance with conditions (i) and (iii) of the above search scenario, the expected linear losses are defined as follows:
F_1(s) = Σ_{n=1}^{∞} P_[n,s] c_[n,s] T_[n,s].
Similarly, the expected exponentially increasing losses in our scenario are:
F_2(s) = Σ_{n=1}^{∞} P_[n,s] C_[n,s] [1 - exp(-d T_[n,s])].    (1)
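For intuition, both series can be evaluated numerically by truncating them after finitely many steps; the detection probabilities decay geometrically, so the tails are negligible. A minimal sketch in Python (the module data are illustrative, not from the paper):

```python
import math

def expected_losses(seq_fn, n_steps, p, a, c, C, t, d):
    """Truncate the series for F1(s) and F2(s) after n_steps inspections.

    When the nth step is the jth look at module i, the probability that
    the failure is detected exactly there is p_i * a_i**(j-1) * (1 - a_i).
    """
    looks = {}        # number of completed inspections per module
    elapsed = 0.0     # T_[n,s], total searching time after n steps
    f1 = f2 = 0.0
    for n in range(1, n_steps + 1):
        i = seq_fn(n)
        j = looks[i] = looks.get(i, 0) + 1
        elapsed += t[i]
        q_ij = p[i] * a[i] ** (j - 1) * (1 - a[i])
        f1 += q_ij * c[i] * elapsed                        # linear losses
        f2 += q_ij * C[i] * (1 - math.exp(-d * elapsed))   # exponential losses
    return f1, f2

# illustrative two-module instance, cyclic strategy s0 = (1, 2, 1, 2, ...)
p = {1: 0.7, 2: 0.3}; a = {1: 0.1, 2: 0.2}
c = {1: 1.0, 2: 2.0}; C = {1: 10.0, 2: 20.0}; t = {1: 1.0, 2: 2.0}
f1, f2 = expected_losses(lambda n: 1 + (n - 1) % 2, 200, p, a, c, C, t, d=0.05)
```

Two hundred steps are far more than enough here: after each module has been inspected once, the residual detection probability at every later step carries an extra factor a_i.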
Let us define parameters Q_ij as follows:

Q_ij = p_i a_i^{j-1} (1 - a_i),    i = 1, 2, ..., N;  j = 1, 2, ....    (2)
Notice that the parameters Q_ij do not depend on the sequence s. An optimal search strategy is determined by the following:
Theorem 1. Consider the exponential min-loss problem. Let the values of the ratio R_ij = [C_i Q_ij exp(-d t_i)]/[1 - exp(-d t_i)] for all values i and j be arranged in non-increasing order of magnitude, and let the sequence s* be such that: if the ratio R_ij is the nth largest value in the ordering, then the nth step in s* is the jth search of module i. Then the sequence s* is optimal. The proof is given in the Appendix. A similar statement is valid for the linear min-loss search problem, with the objective function F_1(s):
Theorem 2. Consider the linear min-loss problem. Let the values of the ratio c_i Q_ij / t_i for all values i and j be arranged in non-increasing order of magnitude, and let the sequence s* be such that: if the ratio c_i Q_ij / t_i is the nth largest value in the ordering, then the nth step in s* is the jth search of module i. Then the sequence s* is optimal. The proof is omitted as being similar to that of Theorem 1.
Remarks. 1. If, in the conditions of Theorems 1 and 2, it happens that at some step R_ij = R_km for some i, j, k, m, then at this step either of the two variants, the jth search of module i or the mth search of module k, is permissible. 2. If we set all a_i = 0, Theorem 1 degenerates into the well-known O(n log n)-time algorithm for solving the finite-horizon scheduling problem studied by Tanaev (1965) and Rothkopf (1966). 3. In the case when all c_i = 1, Theorem 2 degenerates into the Gluss-Blackwell-Staroverov 'most-inviting' rule.
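Theorems 1 and 2 suggest a simple computational scheme: since R_ij decreases in j for each module, the non-increasing arrangement can be generated lazily with a priority queue holding one pending search per module. A sketch in Python, using the linear criterion of Theorem 2 on illustrative data:

```python
import heapq

def index_sequence(ratio, modules, n_steps):
    """First n_steps of the index-rule sequence of Theorems 1-2.

    ratio(i, j) must be non-increasing in j for each module i (true here,
    since Q_ij = p_i a_i^(j-1) (1 - a_i) decays geometrically), so a heap
    with one candidate per module always exposes the largest remaining
    ratio; the popped entry is the jth search of module i.
    """
    heap = [(-ratio(i, 1), i, 1) for i in modules]
    heapq.heapify(heap)
    seq = []
    for _ in range(n_steps):
        _, i, j = heapq.heappop(heap)
        seq.append(i)
        heapq.heappush(heap, (-ratio(i, j + 1), i, j + 1))
    return seq

# illustrative data for the linear criterion c_i Q_ij / t_i of Theorem 2
p = {1: 0.5, 2: 0.3, 3: 0.2}
a = {1: 0.1, 2: 0.2, 3: 0.3}
c = {1: 1.0, 2: 2.0, 3: 3.0}
t = {1: 1.0, 2: 1.0, 3: 2.0}

def ratio(i, j):
    return c[i] * p[i] * a[i] ** (j - 1) * (1 - a[i]) / t[i]

seq = index_sequence(ratio, [1, 2, 3], 12)

# cross-check against explicitly sorting all ratios for small j
ranked = sorted(((ratio(i, j), i) for i in (1, 2, 3) for j in range(1, 13)),
                reverse=True)
expected = [i for _, i in ranked][:12]
```

The lazy heap avoids materializing the infinite family of ratios: each of the first n steps costs O(log N), giving the O(n log n) behaviour noted in Remark 2.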
MAX-REWARD SEARCH WITH MULTIPLICATIVE ARGUMENT

In this section, we shall treat a multiplicative search criterion, more general than that investigated by Sweat (1970). Consider again a complex system composed of N independent, stochastically failing modules. This time, we concern ourselves with the maximization of the reward offered for prevention of cumulative losses incurred by a module failure.
For each module i, i = 1, ..., N, the parameters p_i, t_i and a_i are known, as in the above min-loss problem. In addition, the following parameters are assumed to be given: D_i: a reward offered (at time 0) for the detection and correction of the failed module i; d: a discounting (non-negative) factor showing the decrease of the reward over time, during the process of search for the failure; b_i: a factor of relative decrease of the probability of finding the failure in module i in the process of search, introduced by Sweat (1970); in other words, the probability of finding the failure in module i at some step of the search contains the factor Π_{i=1}^{N} b_i^{u(i)}, where u(i) is the number of times module i has been inspected before this step, 0 < b_i < 1. Given the above data for all i, i = 1, ..., N, the problem is to determine an optimal sequential inspection strategy maximizing the expected reward for finding the failed module. Consider a strategy s = (s[1], s[2], ..., s[n], ...). Let us use the following notation in addition to that of the previous section: D_[n,s]: the reward offered (at time 0) for detection of failure of the module which is inspected at the nth step of s; P_[n,s]: the probability of detecting the failure at the nth step of s, if there were no discounting of the probability of detection.
Then the time needed to detect the failure at the nth step of s is, as before, T_[n,s] = Σ_{m=1}^{n} t_[m,s]; the reward discounted before the failure is detected at the nth step of s will be D_[n,s] exp(-d T_[n,s]); the discounted probability of detecting the failure at the nth step of s will be P_[n,s] Π_{i=1}^{N} b_i^{u(i,n,s)}, where u(i,n,s) denotes the number of times module i was inspected before the nth step of s. Now, the expected discounted reward R(s) obtained when using the search strategy s can be written as follows:

R(s) = Σ_{n=1}^{∞} P_[n,s] Π_{i=1}^{N} b_i^{u(i,n,s)} D_[n,s] exp(-d T_[n,s]).
This is a generalization of Sweat's objective function, turning into the latter when the damage D_[n,s] exp(-d T_[n,s]) is equal to 1 for all modules i = 1, ..., N. Using the same arguments as in Theorem 1, the following generalization of Sweat's Rule (1970) can be proved: Consider the max-reward problem with objective R(s), and suppose an optimal solution exists. Let the values of the ratio R_ij = [Q_ij b_i D_i exp(-d t_i)]/[1 - b_i exp(-d t_i)] for all values i and j be arranged in non-increasing order of magnitude, and let the sequence s* be such that: if the ratio R_ij is the nth largest value in the ordering, then the nth step in s* is the jth search of module i. Then the sequential strategy s* is optimal. We do not discuss here conditions for the existence of the optimum; in fact, they differ only in minor details from those of Sweat (1970). Concluding this section, let us consider an infinite-horizon generalization of the Granot-Zuckerman (1991) multi-failure search problem. The search scenario assumes here that the searcher stops whenever, after a sequence of successful steps ('a faulty module is discovered'), he comes across an unsuccessful step ('a fault is not discovered'). This problem has the following objective:
G(s) = Σ_{n=1}^{∞} D_[n,s] exp(-d Σ_{m=1}^{n} t_[m,s]) Π_{m=1}^{n-1} (1 - P_[m,s]).
Again using interchange arguments, a highest-efficiency rule can be proved, according to which the values of the ratio R_ij = [1 - (1 - Q_ij) exp(-d t_i)]/D_i, for all values i and j, are to be arranged in
non-increasing order of magnitude, and the sequence s* minimizing G(s) is such that if the ratio R_ij is the nth largest value in the ordering, then the nth step in s* is the jth search of module i. The latter rule degenerates into the Granot-Zuckerman O(n log n)-time algorithm if we set a_i = 0 for all i. Economic interpretations of the ratio R_ij in the latter special case have been considered by Granot and Zuckerman (1991). A similar 'ratio rule' may be obtained for an infinite-horizon extension of the Kadane-Simon (1977) multi-failure search scenario.
OPTIMALITY CONDITIONS FOR MIN-LOSS SEARCH PROBLEMS

Consider now a more general min-loss problem in which not only the probabilities of detection P_{s[k]} but also the times t_i, i = 1, ..., N, are allowed to change from step to step in the process of the search, depending on the 'history' s (which indicates which locations have already been searched up to the current step, say, k, and in which order): t_i = t_i(k,s). Let s = (s[1], s[2], ...) denote a trajectory of the search, s[k] being the location's number chosen at the kth step, s[k] ∈ {1, 2, ..., N}. Let f(·) be an arbitrary loss function of the total searching time T_{s[n]} = Σ_{m=1}^{n} t_{s[m]}, and F(s) be the corresponding expected losses:

F(s) = Σ_{n=1}^{∞} c_{s[n]} f_{s[n]}(T_{s[n]}) P_{s[n]}(n,s),    (3)

where P_{s[n]}(n,s) denotes the probability that the failure is detected at the nth step of s, in location s[n]. We are concerned to find for which search criteria of this type local strategies provide the global optimum. Suppose that at each search step, k, there exists a real-valued function E_i = E_i(k) depending only on the index i and the current parameters p_i, a_i, c_i and t_i of location i (the function E_i will be called an 'index function', or 'search efficiency'). A local-search policy in which, at each step, one inspects a location of currently greatest search efficiency E_i will be referred to as an 'index policy generated by E_i'. We shall say that an objective function F(s) in the min-loss search problem 'permits an index policy' if there exists an index function E_i = E_i(k) such that, for all values of N, c_i, t_i(k,s) and P_{s[k]}(k,s), i = 1, ..., N; k = 1, 2, ..., the index policy generated by the function E_i is optimal. Consider now a current step, say, k, of the search, and let m be the location with maximal effectiveness E_i at this step, l and w denoting any other locations. Let s be an optimal sequence, and H_k(s) be a subsequence of s presenting a 'history' of the search before step k in s. Consider a sequence r adjacent to s, that is, differing only in the order of searching at two locations, m and l:
s = ([H_k(s)], m, l, v, ...);   r = ([H_k(s)], l, m, v, ...).    (4)

Denote the sum Σ_{i∈H_k(s)} t_i(k,s) by t. Consider the function D_ml(t) = F(s) - F(r). By (3) and (4),

D_ml(t) = F(s) - F(r) = c_m f_m(t + t_m(k,s)) P_m(k,s) + c_l f_l(t + t_m(k,s) + t_l(k+1,s)) P_l(k+1,s) - [c_l f_l(t + t_l(k,r)) P_l(k,r) + c_m f_m(t + t_l(k,r) + t_m(k+1,r)) P_m(k+1,r)].    (5)
Observe that if an objective F(s) permits an index policy, then for any fixed m and l the function D_ml(t) has the same sign (namely, D_ml(t) ≤ 0) for any 'pre-history' t. Indeed, assume that, contrary to this assertion, D_ml(t) > 0 and, at the same time, for some sequence q = ([H'_k(q)], m, l, ...) (where H_k(s) ≠ H'_k(q)) and t' = Σ_{i∈H'_k(q)} t_i(k,q), it holds that D_ml(t') < 0.
Then, by the definition of the index policy, from D_ml(t) > 0 it follows that E_m > E_l, while from D_ml(t') < 0 it follows that E_m < E_l. This is a contradiction, which establishes the assertion. Notice that this assertion is valid for any instance of the min-loss problem, and hence it must hold for any positive t. Now we can state the key result.
Theorem 3. Let f_i(t) be strictly increasing and sufficiently smooth (the third derivative exists) functions on R+. The expected loss function F(s) permits an index policy iff: (i) f_i(t) = a_i t + b_i, or (ii) f_i(t) = a_i exp(at) + b_i, i = 1, 2, ..., N.
The theorem is an infinite-horizon extension of the theorem by Kladov and Livshitz (1969), which was obtained for the corresponding deterministic scheduling problems. Although our proof differs only in minor technical details from that of Kladov and Livshitz, we present it in the Appendix for the sake of a complete presentation of the whole picture. The statement of Theorem 3 agrees with the results of Kladov and Livshitz (1969), Rothkopf and Smith (1984) and Rothblum and Rothkopf (1991), who have studied classes of objective functions permitting (stationary or dynamic) index policies in finite-horizon scheduling problems with invariable data (see also Gittins, 1989). Consider now the search problem having the loss objective function with multiplicative arguments:
M(s) = Σ_{n=1}^{∞} D_{s[n]} f_{s[n]}( Π_{m=1}^{n} P_{s[m]}(s[m], s) ).    (6)
Using the same arguments as those of Theorem 3, the following optimality conditions can easily be obtained: Let f_i(t) in (6) be increasing and sufficiently smooth (the third derivative exists) functions on R+. The objective function M(s) permits an index policy iff: (i) f_i(t) = a_i t + b_i, or (ii) f_i(t) = a_i log(at) + b_i, i = 1, 2, ..., N.
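The 'only if' direction of Theorem 3 can be illustrated on a tiny deterministic instance: for a loss function that is neither linear nor exponential, such as f(x) = x^2, the preferred order of two jobs depends on the elapsed history t, so no index policy can be optimal. A sketch in Python, with illustrative job parameters:

```python
# Deterministic two-job check: with f(x) = x**2, the sign of
# D_ml(t) = F(s) - F(r) flips as the pre-history t grows, so no index
# function of the current job parameters alone can rank the jobs.
def expected_loss(order, t0, jobs, f=lambda x: x * x):
    """Sum of c_i * f(completion time) when the jobs start after history t0."""
    total, clock = 0.0, t0
    for c, dur in (jobs[k] for k in order):
        clock += dur
        total += c * f(clock)
    return total

jobs = [(1.0, 1.0), (3.0, 4.0)]   # (loss rate c_i, duration t_i)

d0 = expected_loss([0, 1], 0.0, jobs) - expected_loss([1, 0], 0.0, jobs)
d2 = expected_loss([0, 1], 2.0, jobs) - expected_loss([1, 0], 2.0, jobs)
```

Here d0 and d2 have opposite signs: with no history the second job should go first, but after two time units the order reverses, exactly the sign change that the lemma in the Appendix rules out for linear and exponential losses.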
A REAL-WORLD APPLICATION: FAULT DIAGNOSTICS OF ELECTRONIC EQUIPMENT*

The search models described above have been successfully applied to actual industrial problems of optimal fault diagnostics for special power electronics equipment, the so-called uninterruptible power systems (UPS), produced by an Israeli company, Gamatronic Electronic Industries Ltd (Jerusalem). The UPS is a device which serves to provide the user with reliable and continuous power whenever the voltage in the electricity supply network rises too high or drops too low. In fact, the UPS can be considered as a small autonomous power station which supplies power independently of various disturbances in the mains, including spikes, transients and blackouts. The UPS contains a battery and hence can supply the user (for example, a communication network, a broadcasting station, an electronic memory system in a bank, an individual computer, etc.) with many hours of power even in the case of a complete line failure. The main units of the UPS are known to be stochastically deteriorating modules whose quality degradation may cause the unit's failure. If the failure of a UPS occurs, the identity of the defective unit responsible for the accident is unknown. Then a series of inspections of risky units is initiated in order to identify and repair/replace the faulty unit. If the identification/replacement procedure turns out to be too long, both the customer and the producer experience some financial losses. The producing company is interested in finding a diagnostic procedure which minimizes the losses incurred.

*The case study was performed together with Mr Udi Friedlander (Gamatronic Electronic Industries Ltd, Jerusalem).
We illustrate the application with an actual industrial example. Parameters for the UPS units mainly responsible for failure were obtained by continued observation over a long period of time (over 10,000 h) and time-averaging of statistical and expert data. The UPS costs are known for all UPS types and for the main electronics customers. In the example considered, the UPS cost is $3000. Other parameters for the basic units of the UPS are presented in Table 1. The information has been partly censored, for presentation purposes and to retain the confidentiality of certain technological and commercial data.

Table 1. Parameters for basic UPS units

i   Unit               Failure probability p_i   Overlooking probability a_i   Inspection duration t_i (min)
1.  Input interface    0.13                      0.020                         2.0
2.  Output interface   0.11                      0.020                         2.5
3.  Control unit       0.01                      0.0025                        5.0
4.  Inverter           0.25                      0.0025                        5.0
5.  Battery            0.40                      0.0050                        4.0
6.  Power unit         0.10                      0.0025                        5.0
According to statistics and expert estimates, the economic losses of the producer turned out to increase exponentially [which agrees with the above formula (1)]. For a specific UPS and a specific customer, if MTTR (the mean time to repair the faulty module) is larger than half an hour, then the producing company loses up to 10% of its chance to sell an additional UPS to that customer in the future, and this chance falls exponentially as MTTR grows (see Fig. 1).

[Fig. 1. Producer's losses estimated statistically as a function of MTTR (mean time to repair the failed module). The curve passes through loss levels of $1000, $1700, $2150 and $2450 as MTTR grows.]
As can be seen from the graph in Fig. 1, the losses L_i incurred if module i failed grow exponentially; the exponent factor d for the curve in Fig. 1 is found to be 0.00351 (min^-1); T_i = MTTR, and the maximally possible loss level, C_i, is the same for all modules: C_i = $3000, i = 1, ..., 6:

L_i = C_i[1 - exp(-d T_i)] = 3000(1 - exp(-0.00351 T_i)).

Table 2 presents computations performed in correspondence with the instructions of Theorem 1. The strategy minimizing the losses, due to Theorem 1, will be as follows: s* = (5,1,4,2,(6,3,1,2,5,4), ...), the sub-sequence (6,3,1,2,5,4) being repeated periodically, infinitely many times, after the initial, aperiodic phase of the solution, which is (5,1,4,2). As for the criterion's optimal value (for the infinite-horizon argument), it is easily calculated using the formula for the sum of a geometric progression. The optimum value is F_2(s*) = $92.1. We compared the obtained minimal losses with those incurred by some other, manually chosen, strategies. In this numerical example, as well as in many other real-data instances, we observed a significant improvement over the values of solutions generated manually. For example, let us
Table 2. Current ratio R_ij = [C_i p_i a_i^{j-1}(1 - a_i) exp(-d t_i)]/[1 - exp(-d t_i)]

                          j
Item i                1         2       3       4
1.  Input interface   54,222    1085    21.7    0.43
2.  Output interface  36,672    733     14.7    0.29
3.  Control unit      1689      4.2     0.01    0.25 × 10^-4
4.  Inverter          42,231    106     0.26    0.6 × 10^-3
5.  Battery           84,399    422     2.11    0.01
6.  Power unit        16,892    42      0.11    0.26 × 10^-3
compare the minimal losses with those incurred by a manually chosen strategy s_0 under which modules 1 through 6 are searched cyclically in the order 1, 2, 3, 4, 5, 6 until the failure is found: s_0 = ((1,2,3,4,5,6), ...). We have F_2(s_0) = $146.0. This numerical example illustrates that in real-life problems the optimal search strategy can be very advantageous. The company Gamatronic Electronic Industries Ltd decided to use the above OR model for wider applications, aiming to develop other search scenarios and to embed the suggested approach into a user-friendly, PC-based diagnostics system.
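The case-study figures can be reproduced from Table 1 and the loss model above. A Python sketch, truncating the infinite series after 60 inspections, which is ample given the small overlook probabilities:

```python
import heapq
import math

# Table 1 data (0-indexed: position 0 <-> unit 1, ..., position 5 <-> unit 6)
p = [0.13, 0.11, 0.01, 0.25, 0.40, 0.10]
a = [0.020, 0.020, 0.0025, 0.0025, 0.0050, 0.0025]
t = [2.0, 2.5, 5.0, 5.0, 4.0, 5.0]
C, d, N = 3000.0, 0.00351, 6

def ratio(i, j):
    """R_ij of Theorem 1, with Q_ij = p_i a_i^(j-1) (1 - a_i)."""
    q = p[i] * a[i] ** (j - 1) * (1 - a[i])
    return C * q * math.exp(-d * t[i]) / (1 - math.exp(-d * t[i]))

def f2(seq):
    """Expected exponential losses F2(s), truncated after len(seq) steps."""
    looks, elapsed, total = [0] * N, 0.0, 0.0
    for i in seq:
        looks[i] += 1
        elapsed += t[i]
        q = p[i] * a[i] ** (looks[i] - 1) * (1 - a[i])
        total += q * C * (1 - math.exp(-d * elapsed))
    return total

# generate the first 60 steps of s* lazily (R_ij decreases in j)
heap = [(-ratio(i, 1), i, 1) for i in range(N)]
heapq.heapify(heap)
s_star = []
for _ in range(60):
    _, i, j = heapq.heappop(heap)
    s_star.append(i)
    heapq.heappush(heap, (-ratio(i, j + 1), i, j + 1))

s0 = [n % N for n in range(60)]   # the manual cyclic strategy (1, 2, ..., 6)
```

The first ten steps come out as (5, 1, 4, 2, 6, 3, 1, 2, 5, 4), matching the prefix of s* above, and the truncated losses reproduce the reported ordering F_2(s*) < F_2(s_0) (roughly $92 versus $146).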
SUMMARY AND CONCLUSIONS
This paper introduces a new class of search problems, the so-called 'min-loss search problems', and demonstrates that they are optimally solvable by local algorithms. Using a combinatorial approach, optimality conditions are derived for search criteria under which the local-search algorithms yield the global optimum. We believe it is possible to apply the above approach further, to a wider class of discrete search models. In particular, it may be useful for investigating more complicated search scenarios (involving, e.g., the non-sequential search for multiple failures; inspections subject to precedence constraints; and search among moving, changeable and forged targets).

Acknowledgements - The author wishes to thank Prof. Shmuel Zamir (Hebrew University of Jerusalem) for stimulating discussions, and Mr Yossi Gorren (Gamatronic Electronic Industries Ltd, Jerusalem) for help in collecting data and preparing the illustrative materials. The very detailed and helpful comments of the Editor, Professor Rolfe Tomlinson, and two anonymous referees are gratefully acknowledged.
REFERENCES
Ahlswede, R. & Wegener, I. (1987). Search Problems. New York: Wiley.
Assaf, D. & Zamir, S. (1987). Continuous and discrete search for one of many objects. Operations Research Letters, Vol. 6, pp. 205-209.
Benkoski, S.J., Monticino, M.G. & Weisinger, J.R. (1991). A survey of the search literature. Naval Research Logistics, Vol. 38, pp. 469-494.
Blackwell, D. (1962). Notes on dynamic programming. Unpublished notes, University of California, Department of Statistics.
Chew, M.C. (1967). A sequential search procedure. Annals of Mathematical Statistics, Vol. 38, pp. 494-502.
Gittins, J.C. (1989). Multi-Armed Bandit Allocation Indices. New York: Wiley.
Gluss, B. (1959). An optimum policy of detecting a fault in a complex system. Operations Research, Vol. 7, pp. 468-477.
Granot, D. & Zuckerman, D. (1991). Optimal sequencing and resource allocation in R&D projects. Management Science, Vol. 37, pp. 140-156.
Hall, G.J. (1977). Strongly optimal policies in sequential search with random overlook probabilities. The Annals of Statistics, Vol. 5, pp. 124-135.
Kadane, J.B. (1968). Discrete search and the Neyman-Pearson lemma. Journal of Mathematical Analysis and Applications, Vol. 22, pp. 156-171.
Kadane, J.B. & Simon, H.A. (1977). Optimal strategies for a class of constrained sequential problems. The Annals of Statistics, Vol. 5, pp. 237-255.
Kelly, F.P. (1982). A remark on search and sequencing problems. Mathematics of Operations Research, Vol. 7, pp. 154-157.
Kimeldorf, G. & Smith, F.H. (1979). Binomial searching for a random number of multinomially hidden objects. Management Science, Vol. 25, pp. 1115-1126.
Kladov, G.K. & Livshitz, E.M. (1969). On a scheduling problem minimizing the sum of penalties. Kibernetika, Vol. 6, pp. 99-100 (in Russian).
Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G. & Shmoys, D.B. (1989). Sequencing and scheduling: algorithms and complexity. Technical Report, Eindhoven University of Technology, Eindhoven.
Rothblum, U.G. & Rothkopf, M.H. (1991). Dynamic recomputation can't extend the optimality range of priority indices. RUTCOR Research Report, Rutgers Center for Operations Research, Rutgers University, New Brunswick.
Rothkopf, M.H. (1966). Scheduling independent tasks on parallel processors. Management Science, Vol. 12, pp. 437-447.
Rothkopf, M.H. & Smith, S.A. (1984). There are no undiscovered priority index sequencing rules for minimizing total delay costs. Operations Research, Vol. 32, pp. 451-456.
Staroverov, O.V. (1963). On a searching problem. Theory of Probability and Its Applications, Vol. 8, pp. 184-187.
Stone, L.D. (1989). Theory of Optimal Search, 2nd edn. Arlington, VA: Operations Research Society of America.
Sweat, C.W. (1970). Sequential search with discounted income, the discount a function of the cell searched. The Annals of Mathematical Statistics, Vol. 41, pp. 1446-1455.
Tanaev, V.S. (1965). Some optimized functions in one-stage production. Doklady Akademii Nauk Belorusskoi SSR, Vol. 9, pp. 11-14 (in Russian).
Zuckerman, D. (1989). Optimal inspection policy for a multi-unit machine. Journal of Applied Probability, Vol. 26, pp. 543-551.
APPENDIX
Proof of Theorem 1. First, we have to show that the min-loss problem is well-defined. In fact, the minimum in this problem cannot be greater than the expected cost of the strategy s_0 under which modules 1 through N are searched cyclically until the failure is found. Let us show that the latter cost is finite. Denote T = Σ_{i=1}^{N} t_i. Then we have:

F_2(s_0) = p_1(1 - a_1)C_1 exp(-d t_1) Σ_{k=0}^{∞} a_1^k exp(-d k T) + p_2(1 - a_2)C_2 exp(-d t_1 - d t_2) Σ_{k=0}^{∞} a_2^k exp(-d k T) + ... + p_N(1 - a_N)C_N exp(-d T) Σ_{k=0}^{∞} a_N^k exp(-d k T) ≤ N max_{1≤i≤N} [p_i(1 - a_i)C_i Σ_{k=0}^{∞} a_i^k exp(-d k T)].

The latter expression is evidently less than N max_{1≤i≤N} [p_i(1 - a_i)C_i/(1 - a_i exp(-d T))] < ∞.

Now let us show that the sequence s* is feasible. Notice that, due to condition (ii) of the search scenario, in the above notation Q_ij = P_[n,s*]. It can be seen from (2) that the ratio R_ij decreases as j increases, for any module i. Indeed, for any module i, the parameter Q_ij decreases as j increases, while the parameters d, C_i and t_i remain the same. Hence, for any module i, the procedure s*, by its construction, first assigns the jth inspection, and only after that does it assign the next, (j + 1)th, inspection of this module i, j = 1, 2, 3, ....

Assume that for some optimal sequence r*, r* ≠ s*, we have that, for some step n,

[C_[n,r*] P_[n,r*] exp(-d t_[n,r*])]/[1 - exp(-d t_[n,r*])] < [C_[n+1,r*] P_[n+1,r*] exp(-d t_[n+1,r*])]/[1 - exp(-d t_[n+1,r*])].    (A1)

Assume, without loss of generality, that at the nth step of r* module i is inspected, while at the (n + 1)th step module j is inspected [notice that i ≠ j, due to relation (A1)]. 'Interchange' i and j, that is, consider a new sequence, q, which assigns module j to be inspected at the nth step and module i at the (n + 1)th step, and which agrees with the sequence r* at all other steps. From (1) and (A1), we immediately obtain F_2(q) - F_2(r*) < 0, which contradicts the optimality of r*, and this completes the proof.
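The finiteness bound in the first part of the proof is easy to check numerically: truncate the series for F_2(s_0) and compare it with N max_i [p_i(1 - a_i)C_i/(1 - a_i exp(-d T))]. A Python sketch on a hypothetical three-module instance (the numbers are illustrative, not from the paper):

```python
import math

# hypothetical three-module instance (not from the paper)
p = [0.5, 0.3, 0.2]
a = [0.1, 0.2, 0.3]
C = [10.0, 20.0, 30.0]
t = [1.0, 2.0, 1.5]
d = 0.05
T = sum(t)   # duration of one full cycle of the strategy s0

def f2_cyclic(n_cycles=200):
    """Truncated F2(s0) for the cyclic strategy (1, 2, ..., N, 1, 2, ...)."""
    total, elapsed = 0.0, 0.0
    for k in range(n_cycles):          # k completed cycles so far
        for i in range(len(p)):
            elapsed += t[i]
            # detection at the (k+1)th look at module i
            total += (p[i] * a[i] ** k * (1 - a[i]) * C[i]
                      * (1 - math.exp(-d * elapsed)))
    return total

# closed-form upper bound from the proof
bound = len(p) * max(p[i] * (1 - a[i]) * C[i] / (1 - a[i] * math.exp(-d * T))
                     for i in range(len(p)))
```

The truncated value settles well below the bound, confirming that the min-loss problem is well-defined for this instance.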
Proof of Theorem 3. In order to prove Theorem 3, first consider the following lemma.

Lemma. Let g_1(x), g_2(x) denote some functions of a finite-dimensional vector argument x, and let w(t) be a function of a scalar parameter t, t ∈ R+ (R+ is the positive semi-axis). Let the equation g_1(x) - w(t)g_2(x) = 0 have a solution x_0 for any t ∈ R+, and g_2(x_0) ≠ 0. Then the sign of the function g_1(x) - w(t)g_2(x) does not depend on t iff w(t) is a constant.

For the case of a two-dimensional argument x, the lemma was proved by Kladov and Livshitz (1969). The same arguments
are valid for the m-dimensional x, m > 2. Assume that w(t) ≠ const: w(t_1) = c_1, w(t_2) = c_2, c_1 < c_2. Let x* be the solution of the equation g(x) = (c_1 + c_2)/2. Denote g_1(x) - w(t)g_2(x) by G(x,t). Then we have: G(x*,t_1) = g(x*) - c_1 = (c_1 + c_2)/2 - c_1 = (c_2 - c_1)/2 > 0. Similarly, G(x*,t_2) = g(x*) - c_2 = (c_1 - c_2)/2 < 0. Hence, the sign of the function g_1(x) - w(t)g_2(x) depends on t if w(t) ≠ const. The lemma is proved.

Now we are ready to prove Theorem 3. The 'if' part of the theorem is validated by Theorems 1 and 2. Let us prove the 'only if' part. Consider a current step, say, k, of the search, and let m be the location with maximal effectiveness E_i at this step, l denoting any other location: s = ([H_k(s)], m, l, ...); r = ([H_k(s)], l, m, ...). Let us use the Taylor series expansion of the function D_ml(t) presented in (5):

D_ml(t) = c_m P_m(k,s) f'_m(t) t_m(k,s) + c_l P_l(k+1,s) f'_l(t)(t_m(k,s) + t_l(k+1,s)) - c_l P_l(k,r) f'_l(t) t_l(k,r) - c_m P_m(k+1,r) f'_m(t)(t_l(k,r) + t_m(k+1,r)) + o(t_m(k,s) + t_l(k+1,s) + t_m(k+1,r) + t_l(k,r))^2 = {[c_m P_m(k,s) t_m(k,s) - c_m P_m(k+1,r)(t_l(k,r) + t_m(k+1,r))] f'_m(t) - [c_l P_l(k,r) t_l(k,r) - c_l P_l(k+1,s)(t_m(k,s) + t_l(k+1,s))] f'_l(t)} + o(t_m(k,s) + t_l(k+1,s) + t_m(k+1,r) + t_l(k,r))^2.

For sufficiently small t_m(k,s), t_l(k+1,s), t_m(k+1,r) and t_l(k,r), the sign of D_ml(t) is determined by the expression in braces. Since f'_m(t) > 0, we can divide the latter expression by f'_m(t) > 0, and take f'_l(t)/f'_m(t) as w(t). For any fixed value of w(t) there exist arbitrarily small, positive t_m(k,s), t_l(k+1,s), t_m(k+1,r) and t_l(k,r) turning this expression into zero. Then, taking into account the lemma, we obtain that the sign of D_ml(t) does not depend on t iff f'_l(t)/f'_m(t) = C = const. Thus, f_l(t) = C f_m(t) + L (where C and L are constants). Continuing the Taylor series expansion, and using the lemma once more, we obtain that f''_m(t)/f'_m(t) = A = const. Hence, if C ≠ 1, then in the case A = 0 we have f''_m(t) = 0. Then f_m(t) = a_m t + b_m and f_l(t) = C f_m(t) + L = a_l t + b_l, for any l. If C ≠ 1 and A ≠ 0, we have f''_m(t) = A f'_m(t). Solving the latter differential equation, we obtain f_m(t) = a_m exp(At) + b_m and f_l(t) = a_l exp(At) + b_l, for any l. If C = 1, we have f_l(t) = f_m(t) + L. This is a degenerate case which does not yield any new productive classes of objective functions permitting an index policy. The theorem is proved.